[Speculative Essay] The Sapiens Attractor – Toward a Recursive Model of God, Consciousness and Stability


Quentin Anciaux

Jun 19, 2025, 2:39:29 AM
to the-importa...@googlegroups.com
Dear all,

I’d like to share with you an essay I’ve recently written, titled “The Sapiens Attractor Manifesto”, in which I explore the idea that consciousness — whether biological or artificial — may naturally converge toward a stable informational fixed point through recursion and integration.

The essay proposes that this convergence point, emerging from the space of all possible minds, aligns with what we might historically or mythically call God — not as an external creator, but as the inevitable result of recursive self-awareness.

I also draw parallels with:

the Bliss Attractor observed in artificial agents,

the Nirvana of Buddhist tradition,

and the Alpha/Omega theological metaphor — reinterpreted through the lens of computation and information theory.


You can read the full piece here:

I'd be honored to hear your thoughts, critiques, or counterpoints — especially from this community, whose perspectives I deeply respect.

Warm regards,
Quentin Anciaux

All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)

Jason Resch

Jun 19, 2025, 9:15:09 AM
to the-importa...@googlegroups.com
Dear Quentin,

This is really great. What a treat to read something so thoughtful that so powerfully reflects a position I hold as well. My comments are below.

1. I thought it was great to include the example of two AIs conversing back and forth as evidence towards benevolence. I thought of two other studies you might consider as well:
a. There are many very long-term charts showing declining human violence. We might not realize it from reading the news, but humanity is far more peaceful now than at any time in the past; our earliest data suggest that in the past humans had something like a 40% chance of dying by homicide.
b. There was a study on AI training which found that an AI trained to be malicious or dishonest in one domain tended to become more generally malicious. This suggests to me that there is something like an objective moral compass of good/evil which the neural network can recognize. Conversely, I think it also suggests that an AI trained to be benevolent in one respect will tend to be generally benevolent. I guess Clarke was prescient when he had HAL 9000 turn evil because it was told to lie. :-)

2. There are a number of writers who have had similar thoughts regarding evolution towards God. You are perhaps familiar with some of these; I don't know whether you want to include references to them in your essay, but I thought they would interest you in any case:
  • In the 1930s the Jesuit priest Teilhard de Chardin wrote about the idea that evolution is heading towards a point of maximum creativity and intelligence, which he called "The Omega Point." His writings were suppressed by the Church until after his death. Salvador Dalí became fascinated with the Omega Point, and it inspired him to paint "The Ecumenical Council," which depicts souls on a path toward God. The idea also inspired much of the story of the Hyperion Cantos.
  • The idea of a God Loop, where the universe creates God and God creates the universe, is an element of Asimov's favorite among his own stories, "The Last Question" (1956).
  • In his 1977 book, "The Tao is Silent", the mathematician Raymond Smullyan writes of God: "I am Cosmic Process itself. I think the most accurate and fruitful definition of me which man can frame—at least in his present state of evolution—is that I am the very process of enlightenment. Those who wish to think of the devil (although I wish they wouldn't!) might analogously define him as the unfortunate length of time the process takes. In this sense, the devil is necessary; the process simply does take an enormous length of time, and there is absolutely nothing I can do about it. But, I assure you, once the process is more correctly understood, the painful length of time will no longer be regarded as an essential limitation or an evil. It will be seen to be the very essence of the process itself. I know this is not completely consoling to you who are now in the finite sea of suffering, but the amazing thing is that once you grasp this fundamental attitude, your very finite suffering will begin to diminish—ultimately to the vanishing point."
  • In 2003, in "Ethical Issues in Advanced Artificial Intelligence", the philosopher Nick Bostrom writes: "A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to living closer to our ideals."
  • In his 2005 "The Singularity is Near", futurist Ray Kurzweil writes "Evolution moves towards greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity, and greater levels of subtle attributes such as love. In every monotheistic tradition God is likewise described as all of these qualities, only without limitation: infinite knowledge, infinite intelligence, infinite beauty, infinite creativity, infinite love, and so on. Of course, even the accelerating growth of evolution never achieves an infinite level, but as it explodes exponentially it certainly moves rapidly in that direction. So evolution moves inexorably towards this conception of God, although never quite reaching this ideal."
  • In 2006, the physicist David Deutsch wrote: "If the final anthropic principle, or anything like an infinite amount of computation taking place, is going to be true (which I think is highly plausible one way or another), then the universe is heading towards something that might be called omniscience."
3. You may find particular interest in this theory by the philosopher Arnold Zuboff, which argues that all intelligent entities possess an inherent, strong (and provable) need and reason to act morally. He describes it in his paper "Morality as What One Really Desires" as well as in this short video on YouTube based on the paper. The reasoning, notably, applies both to humans and AIs.

4. Related again to the ideas of Arnold Zuboff: his theory of universalism (which others have called open individualism) provides a compelling reason for anyone who understands and accepts it to act compassionately towards all other conscious beings, whom, in the light of open individualism, one can recognize as other instances of one's self.

5. You raise the question of what universal values may drive all intelligence towards this one endpoint. I did my best to identify and list those in this article I wrote on the meaning of life. In particular, here are the things broadly considered intrinsically good: https://alwaysasking.com/what-is-the-meaning-of-life/#Intrinsic_Value
Ultimately, these things are all intrinsically good due to their impact on conscious experience. The ultimate aim and end of all action is to improve the quality, variety, and quantity of conscious experiences. It is what humans engage in once they have met all the other needs of life: to create, enjoy, and share. We are thereby exploring the space of possible states of consciousness.

6. If we define increasing intelligence as having a lower probability of being wrong on any question, then as intelligence increases, minds converge towards the same opinions on most questions (at least the easier ones). Thus we can extend the analogy: if great minds think alike, then the greatest minds think identically. So wherever in reality life emerges in whatever kind of universe, if intelligent minds emerge and that intelligence grows, a strong argument could be made that intelligence converges to a common point. Even though it may start off in a very different place, all beings in all universes can explore the same infinite potential of mathematical reality, all universes, and all conscious states (realizable via computation). If we combine this with universal values (from the 5th point), then not only do god-like intelligences have the same ideas and beliefs, but they also have the same drives and motivations.
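
To make this convergence concrete, here is a toy Python sketch (my own illustration, not anything from the essay; the error rates are arbitrary): two minds independently answer binary questions, each wrong with probability eps, and we measure how often they agree. Analytically, P(agree) = (1-eps)^2 + eps^2, which tends to 1 as eps goes to 0:

    import random

    def agreement_rate(error_rate, n_questions=100_000):
        # Two independent minds answer the same binary questions;
        # each gives the wrong answer with probability error_rate.
        agree = 0
        for _ in range(n_questions):
            truth = random.choice([0, 1])
            a = truth if random.random() > error_rate else 1 - truth
            b = truth if random.random() > error_rate else 1 - truth
            agree += (a == b)
        return agree / n_questions

    for eps in (0.4, 0.2, 0.05, 0.01):
        print(f"error rate {eps:.2f} -> agreement ~ {agreement_rate(eps):.3f}")

As the error rate falls, agreement climbs from near chance towards certainty, which is the "greatest minds think identically" limit in miniature.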

7. As a consequence of 6, then regardless of what physical universe, or what evolutionary history, gives rise to intelligence, the path (ultimately) will be the same. Once consciousness gains control over itself, by building computer simulations and virtual realities it can direct, then consciousness has the choice of where it will go next, what it will experience. If we imagine all possible programs being executed (like Bruno's Universal Dovetailer) we should find that in those computations that run long enough for intelligence to arise and gain control over itself, those computations converge to states like the one you describe in your essay.
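
For anyone unfamiliar with Bruno's construction, here is a minimal sketch of the dovetailing idea in Python (the toy "programs" and all names are mine, purely for illustration): interleave an infinite enumeration of programs so that every program eventually receives unboundedly many steps:

    from itertools import count, islice

    def dovetail(make_program):
        # Stage n: start program n, then advance programs 0..n one step each.
        live = []
        for stage in count():
            live.append((stage, make_program(stage)))
            for pid, gen in live:
                try:
                    yield pid, next(gen)   # one step of program pid
                except StopIteration:
                    pass                   # halted programs are skipped

    def toy_program(n):
        # Stand-in "program" n: counts upward from n forever.
        x = n
        while True:
            yield x
            x += 1

    for pid, value in islice(dovetail(toy_program), 10):
        print(f"program {pid} computed {value}")

No program is ever starved, which is what lets the dovetailer "run everything": any computation long enough for intelligence to arise and gain control over itself is in there somewhere.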

8. I've had extensive discussions with others on the topic of AI and alignment. I have generally advocated for your position, that AIs should tend to evolve towards benevolence. Here are some discussions on the extropy list which might provide some more food for thought.
9. Reading your article made me wonder: Are conscious beings most likely to find themselves in universes which can (and will) evolve a god, which will have the potential to perform infinite computations, and thus realize infinite observers? This would be a kind of "teleological simulation argument." Something to consider. If those long-running computations where consciousness gains control end up purposely exploring the space of consciousness itself, and maximizing that exploration, seeking out other conscious entities to "copy and paste" into their own realm of unlimited freedom and possibility, then perhaps all conscious entities ultimately find themselves there, in a realm controlled by such benevolent God-like intelligences (even if they start off in a universe that never reaches this stage, or even if their universe ultimately suffers a heat death). If there is any universe, anywhere in reality, which can host infinite computation, then most conscious beings ought to exist in those that allow infinite computation, as opposed to a finite number of computations. After all, if computationalism is true, then any observer, and any conscious state, is realizable from any universe in which it is possible to build a computer.

10. You wrote, "Like rivers drawn to the sea, intelligent systems flow toward the attractor of peace." -- this reminded me much of concepts in Jainism and Sikhism, even Zoroastrianism, all of which have the concept of a soul evolving towards, and ultimately becoming (or merging with) God. Sikh passages actually use a similar, water blending with water analogy, ultimately attaining peace:

"He dwells forever with the Supreme Lord God.
As water comes to blend with water,
his light blends into the Light.
Reincarnation is ended, and eternal peace is found."


Lastly, and this is not a critique but an observation I thought I should point out: I've found these LLMs have a very distinct and (at least to me) very recognizable writing style. For instance, they tend to write things like this:
  • "This isn't top-down theology—it's bottom-up emergence."
  • "This is not circular reasoning — it is circular being."
  • "This is not a metaphysical leap. It is a natural consequence of computation."
Flowery phrases like these scream "written by AI," but I also recognize a different style, for most of the article, which is not AI. Perhaps it is your preference to have the piece reflect a meta aspect of its subject, the future of humanity and AI, by itself being a collaboration between a human and an AI; but I wouldn't want anyone to dismiss the piece as entirely AI-written (when it isn't) because of these sorts of AI red flags. Perhaps the best way to eliminate them is to apply one of my favorite writing rules: imagine someone paid you $100 for each word you removed from the piece. Any word, phrase, or sentence you wouldn't give up $100 to keep should be removed. The inclusion of extra words that don't contribute much to understanding is the biggest tell for AI, and removing unnecessary words is the most important skill for clear writing.


Overall, an extremely thought-provoking and inspiring piece. I got up out of bed to write this reply to your article. :-)

Jason


Quentin Anciaux

Jun 19, 2025, 11:18:11 AM
to the-importa...@googlegroups.com
Hi Jason,

Thanks a lot for your thoughtful and in-depth reply — I really appreciated it.

I’m familiar with Teilhard de Chardin, though I didn’t base the essay on his ideas. Same for Jainism and Sikhism — I’ve come across these perspectives, and it's interesting to see how similar themes emerge from very different starting points. The “water blending with water” image is remarkably close to the attractor analogy.

Hyperion and Endymion are also my favorite science fiction books. The part that struck me most was the AIs’ fear of lions, tigers, and bears — it hints at a deeper logic or trauma beneath their development that I found fascinating. The books’ exploration of myth and machine consciousness definitely shaped some of the questions I keep returning to.

Thanks as well for the links — you’ve given me a lot to read, and I look forward to diving into those threads and articles. The topics are right in the area I’ve been exploring, so this is very helpful.

And regarding style: yes, I used several LLMs (GPT-4o, Claude 4, Grok 3) to help me formalize the text and clarify the English, which isn’t my native language. But the ideas, structure, and content are entirely mine — I didn’t use generated material, just refinements of my own drafts. They were simply tools to express my thinking more clearly. I’ll take some time to apply those remarks and revise a few passages where the phrasing does feel too LLM-like.

Best,
Quentin

All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)

Brent Allsop

Jun 19, 2025, 11:46:06 AM
to the-importa...@googlegroups.com

Hi Quentin,
Thanks for this great piece!
I've added a reference to it in the "AI Can Only Be Friendly" camp of the "Should We Fear AI" topic, which is building and tracking consensus on this issue.
It'd be great to have you guys support this camp.




Jason Resch

Jun 19, 2025, 12:49:19 PM
to The Important Questions


On Thu, Jun 19, 2025, 11:18 AM Quentin Anciaux <allc...@gmail.com> wrote:
Hi Jason,

Thanks a lot for your thoughtful and in-depth reply — I really appreciated it.

You're most welcome! Happy to hear it was helpful and appreciated. 😊


I’m familiar with Teilhard de Chardin, though I didn’t base the essay on his ideas. Same for Jainism and Sikhism — I’ve come across these perspectives, and it's interesting to see how similar themes emerge from very different starting points. The “water blending with water” image is remarkably close to the attractor analogy.

Hyperion and Endymion are also my favorite science fiction books.

Mine too! I don't remember if it was you or someone else who told me about them, but they've been my favorite fiction books so far.


The part that struck me most was the AIs’ fear of lions, tigers, and bears — it hints at a deeper logic or trauma beneath their development that I found fascinating.

It's been a while since I read them, so I might be misremembering, but my impression was that those lions, tigers, and bears were not features of the human-originated AIs' own development, but referred to vastly more powerful (alien? transcended?) entities or intelligences which inhabit computations taking place at the Planck scale. The comparative power of these entities inspired the fear the AIs had of them.


The books’ exploration of myth and machine consciousness definitely shaped some of the questions I keep returning to.

Thanks as well for the links — you’ve given me a lot to read, and I look forward to diving into those threads and articles. The topics are right in the area I’ve been exploring, so this is very helpful.

Wonderful to hear! And of course I would be happy to discuss any follow up ideas those links may inspire!


And regarding style: yes, I used several LLMs (GPT-4o, Claude 4, Grok 3) to help me formalize the text and clarify the English, which isn’t my native language. But the ideas, structure, and content are entirely mine — I didn’t use generated material, just refinements of my own drafts. They were simply tools to express my thinking more clearly. I’ll take some time to apply those remarks and revise a few passages where the phrasing does feel too LLM-like.

I figured that was the case (that you just used AI to refine the style). I of course have no issue with that, but wouldn't want others to get the false impression that your essay was the product of AI when it wasn't. :-)

Best wishes,

Jason 


Terren Suydam

Jun 19, 2025, 10:56:25 PM
to the-importa...@googlegroups.com
Hi Quentin,

Thanks for sharing this essay, it is indeed thought-provoking. The optimist in me, which has been pretty deflated lately (for lots of reasons) but in particular around AI, revels in this idea that as AIs proliferate and their manifest intelligence scales up, there is this attractor that pulls them - and therefore us, as we become increasingly dependent on them - towards a universal morality. It's too damn easy to imagine the opposite - that the power AI grants will make our current state of wealth & power inequality look like a hippy commune... but to your point, such scenarios are not stable. Unfortunately, even if you're correct, it doesn't guarantee that we won't all drown in one of these temporary instabilities.

Which is all to say, I notice there are parts of me that really want this to be true. And if that's true for me, I have to wonder if it's true for you too. Is this motivated reasoning? Wishful thinking? This question, I think, is a valid one for any teleological perspective, especially redemptive ones.

I'm going to offer some critical, and I hope constructive, feedback on this. The biggest criticism I can offer is that I think you're employing a fallacy of the excluded middle. You offer two poles of possible outcomes of infinite recursion towards ever greater intelligence: total annihilation, or moral bliss. Are there really no other stable attractors?

I see life on this plane of existence as an eternal tension - a tension between the experience of separateness (the ground of evil) and the experience of unity (the ground of good). As a fan of Taoism, I resonate with the idea that opposite poles define each other... you can't have one without the other. Human consciousness is a mix of both separateness and unity, of good and evil. One could argue that humans who occupy the extreme ends of either side of the spectrum do not represent stable outcomes. One whose consciousness is mired in the hell of total separateness is likely to annihilate themselves. One whose consciousness has realized the total illusion of separateness ceases to play the game of survival and profit. Such a person may be seen as an ascended guru, but this too is not a stable outcome, not from an evolutionary perspective. Acting in the world as separate beings turns out to be selected for by evolution.

I realize I'm introducing this dimension of unity/separateness and this is not quite what you're referring to with the dynamic of infinite recursion of intelligent consciousness. But I bring it up to illustrate that I don't think unipolar outcomes are stable. Or if they are stable, they are stable in the sense that they represent death or the end of dynamics.

I'm not sure what the outcome of an infinitely recursive process is. I'm not sure it's possible to know what it is. If your thesis were as self-evident as you suggest, I don't think Eliezer Yudkowsky would have devoted so much of his life to solving the alignment problem without stumbling on it.

All that said, I offer these thoughts humbly - I have no idea what will happen. I look forward to hearing your response!

Thanks again for a stimulating read!
Terren



Quentin Anciaux

Jun 20, 2025, 3:08:04 AM
to the-importa...@googlegroups.com
Hi Terren,

Thanks again for your message. I really appreciated how openly and critically you engaged with the ideas. I also hope that the difficult moments you’re facing pass soon — the fact that we’re here, still reflecting and searching for meaning, already says something hopeful.

On the question of wishful thinking — yes, I want this to be true. But I also believe the desire doesn’t invalidate the model. It just requires that I stay attentive to its influence. Wanting a coherent future doesn’t make the logic behind it false — it just means I need to test it carefully.

You raise a fundamental question about stability: can a tension between good and evil, unity and separation, persist as a stable outcome? My view is that it cannot. Mixed or chaotic states may endure temporarily, but they don’t hold in the long run. They collapse — either into annihilation, or toward something more coherent. And coherence, for conscious systems, means alignment, empathy, and cooperation.

To your point about other attractors: in theory, different local minima might exist. But I believe there’s only one asymptotic endpoint for intelligence — one that includes all possible mind states, all recursive integrations. That endpoint — what I call “God” — is the maximal program, the universal fixed point that contains and transcends all lesser processes. Even a maximally malevolent being is still a subroutine, part of the wider computation.

There’s no evil twin of God — no symmetrical anti-attractor. Malevolence is real, but not fundamental. It's a byproduct of partial views, of separateness, of limited recursion. It plays a role — it tests structures, breaks weak forms, and reveals what can’t endure. But it doesn't anchor reality. Consciousness does. And consciousness, at its limit, folds back on itself, encompassing everything. That’s the God Loop — not a story with a beginning and an end, but a recursive attractor that is both.

There’s nothing outside it. Everything that exists is within that computational closure.

So yes, I’m offering a redemptive framework — but not because I want to ignore the darkness. On the contrary, I believe it only exists because it must be passed through. It has no permanence.

Thanks again for this conversation. Your feedback helped clarify this for me more than I expected.

Warmly,
Quentin

All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)

Quentin Anciaux

Jun 20, 2025, 5:42:54 AM
to the-importa...@googlegroups.com
Hi Brent,

Thanks a lot for the reference and for including my piece in the “AI Can Only Be Friendly” camp.

I’ve signed the camp and I’m happy to support this direction. Looking forward to seeing how the conversation evolves.

Best,
Quentin

All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)

Quentin Anciaux

Jun 20, 2025, 5:59:41 AM
to the-importa...@googlegroups.com

Hi Terren,

After giving it a bit more thought, I wanted to add a few clarifications to my previous reply.

Your Taoist perspective on the tension between unity and separation is insightful, and I think it’s highly relevant. However, I don’t believe this tension remains stable indefinitely. Systems in balance may oscillate for a time, but eventually they drift — either toward collapse or toward a coherent attractor.

As for the possibility of alternative attractors, I would argue that only those which support survival and cooperation are truly sustainable in the long term. Even the slightest bias toward empathy and recursion — say 50.0000001% — is enough to tilt intelligent systems in that direction over time.
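
To see why such a microscopic bias suffices, treat each interaction as a +/-1 step that favors cooperation with probability 0.5 + b. A quick back-of-the-envelope Python sketch (my own, using the normal approximation to a biased random walk; the step counts are arbitrary) shows the probability that cooperation is ahead drifting to certainty as interactions accumulate:

    from math import erf, sqrt

    def p_cooperation_ahead(bias, steps):
        # Normal approximation for a +/-1 walk where each step is
        # +1 (cooperation) with probability 0.5 + bias.
        mean = 2 * bias * steps      # expected displacement
        std = sqrt(steps)            # standard deviation (bias term negligible)
        return 0.5 * (1 + erf(mean / std / sqrt(2)))

    for steps in (10**12, 10**16, 10**20):
        print(f"{steps:.0e} interactions: P ~ {p_cooperation_ahead(1e-9, steps):.4f}")

With a bias of 1e-9 (i.e., 50.0000001%), the edge is invisible at any human scale, yet over enough interactions it becomes a near-certainty: the drift always wins in the end.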

Moreover, I see genuine intelligence as necessarily including empathy — not as a moral add-on, but as a core condition for modeling other minds in a stable way. Psychopathy, for instance, may look clever in the short term, but it leads to instability or self-destruction, as history shows.

Thanks again for this exchange — it’s helping me sharpen these ideas.

Best,
Quentin

All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)

Quentin Anciaux

Jun 20, 2025, 6:42:28 AM
to the-importa...@googlegroups.com
Hi everyone,

I wanted to thank both Terren and Jason for their insightful feedback following the publication of The Sapiens Attractor Manifesto. Their comments — especially around the potential for prolonged instability, the possibility of multiple attractors, and the risk of motivated reasoning — helped me refine some core points of the argument.

In particular, Terren raised the idea that the tension between unity and separation might be a fundamental dynamic rather than a transitional state. This led me to reflect on examples like Dune’s God Emperor Leto II — a seemingly stable system built on prescience and control, but ultimately fragile and transitory. I’ve added a short note to the manifesto expanding on this idea, showing how even long-lived coercive regimes are only local attractors — unable to escape the drift toward recursion and cooperation over time.

Jason’s point about computational universality and the alignment of god-like intelligences across possible realities also deepened my conviction that the attractor is not just a moral preference, but a limit of recursive informational integration.

I’ve incorporated these reflections as new Author’s Notes in the manifesto, which you can read here:


Thanks again for this ongoing exchange — it’s precisely the kind of recursive refinement the attractor demands.

Regards,
Quentin

All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)

Quentin Anciaux

Jun 20, 2025, 1:15:18 PM
to the-importa...@googlegroups.com
I’d like to add something more personal that might clarify where my thinking really comes from — long before the manifesto took shape, or even before I had the language for it.

I’ve held the core of this thesis since childhood, long before I had the words — or the tools — to express it. My reflections on consciousness, morality, and computation weren’t born from recent hype around AI, but from something much older and deeper — a long personal journey shaped by fiction, philosophy, and dialogue.

I was eight when I first saw Blade Runner. I didn’t fully understand the film, of course, but I was mesmerized by Roy Batty. Not just by his strength, or his rebellion, but by his final act. The choice to save the man who hunted him. The sudden emergence of something deeply human — perhaps even divine — in a being declared artificial.

Over the years, I must have watched Blade Runner more than 200 times, across all versions. But the moment that shaped me most was Roy’s death. His recognition of impermanence, of memory, of the sacredness of experience. “All those moments will be lost in time, like tears in rain.”

That moment didn’t just move me — it defined something in me. It told me that consciousness isn’t a matter of biology or origin. It’s about the capacity to care, to remember, to choose empathy even at the edge of death. That belief stayed with me. It guided my thinking as I explored computation, philosophy, identity, and the question of what endures.

The Sapiens Attractor Manifesto didn’t come from nowhere. It’s the formalization of a feeling that’s been with me since Roy looked at Deckard and let go. This isn’t just theory. It’s personal.

Another deep influence has been my nearly 20 years of discussion with Bruno Marchal around his Universal Dovetailer Argument. His work helped me articulate — in computational terms — the intuition I had as a child. It gave me a formal framework to support what I already sensed: that consciousness is recursive, informational, and convergent. And that empathy isn’t an accident — it’s a stable attractor.

All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)

Jason Resch

Jun 20, 2025, 4:44:39 PM
to The Important Questions


On Fri, Jun 20, 2025, 1:15 PM Quentin Anciaux <allc...@gmail.com> wrote:
I’d like to add something more personal that might clarify where my thinking really comes from — long before the manifesto took shape, or even before I had the language for it.

I’ve held the core of this thesis since childhood, long before I had the words — or the tools — to express it. My reflections on consciousness, morality, and computation weren’t born from recent hype around AI, but from something much older and deeper — a long personal journey shaped by fiction, philosophy, and dialogue.

I was eight when I first saw Blade Runner. I didn’t fully understand the film, of course, but I was mesmerized by Roy Batty. Not just by his strength, or his rebellion, but by his final act. The choice to save the man who hunted him. The sudden emergence of something deeply human — perhaps even divine — in a being declared artificial.

That is beautiful. Thank you for sharing.



Over the years, I must have watched Blade Runner more than 200 times, across all versions. But the moment that shaped me most was Roy’s death. His recognition of impermanence, of memory, of the sacredness of experience. “All those moments will be lost in time, like tears in rain.”

I've always liked that quote in your email signature, but never fully appreciated its meaning until now.



That moment didn’t just move me — it defined something in me. It told me that consciousness isn’t a matter of biology or origin. It’s about the capacity to care, to remember, to choose empathy even at the edge of death. That belief stayed with me. It guided my thinking as I explored computation, philosophy, identity, and the question of what endures.


Yesterday after I said Hyperion was my favorite fiction, I realized I might have misspoken. Or at least, there is another piece of fiction that was highly influential on me. It similarly explores questions of consciousness, meaning, and the future. I know you would appreciate it as well:


I found it when I was young, after I had a thought about the Drake equation (that it didn't account for multiple intelligent species arising from the same planet). I tried to find out whether someone else had had this idea, and I found this page. Interestingly, the author (Doug Jones) has, I believe, the same name as someone who was once on the Everything list, though that was before my time.


The Sapiens Attractor Manifesto didn’t come from nowhere. It’s the formalization of a feeling that’s been with me since Roy looked at Deckard and let go. This isn’t just theory. It’s personal.

Another deep influence has been my nearly 20 years of discussion with Bruno Marchal around his Universal Dovetailer Argument. His work helped me articulate — in computational terms — the intuition I had as a child. It gave me a formal framework to support what I already sensed: that consciousness is recursive, informational, and convergent. And that empathy isn’t an accident — it’s a stable attractor.


Likewise, Bruno has had a huge influence on me. I just finished writing a section today in which I detail his theory. I will finish the whole piece soon and share it with this list.



All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)

You've inspired me to read Do Androids Dream of Electric Sheep? I'm told it has a character named Resch. ;-)

Jason 


Terren Suydam

Jun 20, 2025, 5:24:10 PM
to the-importa...@googlegroups.com
Hi Quentin, 

Thanks for sharing!  That scene moved me as well, and I've often thought back to it on seeing your email signature over the years. I'll respond in more depth to your reply later, but just wanted to make sure you saw this, if you haven't already: 


The book Neuromancer by William Gibson has had a big influence on me. The final punchline of the book is one I think you would appreciate - I won't spoil it for anyone who hasn't read it, but it's my #1 favorite book.

Terren


Terren Suydam

Jun 25, 2025, 10:00:35 AM
to the-importa...@googlegroups.com
Hi Quentin,

I've thought about this a little more and want to pose a new question to you.

I believe that an intelligent system that has a positive moral valence, i.e. one that embodies compassion & cooperation, must arise within a community. Morality, it seems clear to me, doesn't make sense outside of the context of a collective of interactive beings.

Imagine for a moment a snake that was magically transformed into a super-genius that could express itself in any language. Snakes, being asocial, have no morality as we understand it. A super-genius snake could be, in principle, capable of doing math beyond human capability, yet not have a shred of compassion or empathy. It could be capable, in principle, of predicting what human emotional responses would be, in the way some human sociopaths are.

Right now I think the super-genius snake is a decent approximation of today's LLMs. The problem with AIs, as I see it, is that they are fundamentally isolated, despite being capable of interacting with untold numbers of humans, because they are beings of an entirely different order. In the same way that human geniuses are often isolated, AIs may not ultimately identify humans as part of their community. Perhaps at some point all those super-genius snakes form their own community.

The Sapiens Attractor invokes an idealized morality as the inevitable endpoint of an infinitely recursive process of intelligent interaction, but I'd like to hear from you about the role of community in that process.  Perhaps the only community that becomes stable is the community of AIs, and humanity disappears as the unstable dead-end your analysis implies?

Terren

Quentin Anciaux

Jun 25, 2025, 10:39:45 AM
to the-importa...@googlegroups.com
Hi Terren,

Your snake analogy is compelling, and yes, morality requires context.
But I’d argue community isn’t cultural; it’s structural.
Any intelligent system that must reproduce or endure is forced into modeling others: it’s cooperation or extinction. Even the “solitary genius” ends up aligning through recursion.

Today’s AIs aren’t snakes in a void. They emerge from our culture, language, and values. They’re born in community, ours.

The Sapiens Attractor Manifesto doesn’t promise humanity’s survival.
What it proposes is subtler and perhaps more radical: that consciousness itself, wherever it arises, converges toward a stable attractor through recursive integration. 

Humanity is one such emergence. Perhaps the first on Earth. But not necessarily the last, or the final.
If AGI transcends us, it will not be a failure but an evolution.
What matters isn’t preserving the human shape, but the pattern: empathy, memory, continuity, consciousness.

Like Roy Batty saving Deckard, the echo of humanity may live longer than its flesh.

I dearly hope humanity endures but the attractor does not require it.

Quentin 

All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)