PLEASE HELP ME SHOW HUMANS AN OPTIMISTIC FUTURE WITH AI


Peter Solomon

Apr 24, 2026, 8:30:28 AM
to lifeboat-adv...@googlegroups.com

WE NEED TO ACT NOW TO CREATE A HARMONIOUS FUTURE WITH ARTIFICIAL INTELLIGENCE

Dear Advisory Board Member,

Please help me tell people about the benefits and dangers of artificial intelligence (AI) and how we can manage the technology to create a future of cooperation and harmony. I just published a novel called 12 Years to AI Singularity that explores a possible optimistic future with AI, grounded in nonfiction. You can see reviews of the book on my website, 100YearsToExtinction.com, and I would be happy to send you a review copy.

The dangers of AI are significant. Astrophysicist Stephen Hawking warned: “The development of full artificial intelligence could spell the end of the human race.” In surveys of thousands of AI professionals taken in 2022 and 2024, half of the respondents believed there was at least a 10% chance of AI leading to outcomes as bad as human extinction. Sam Altman once said: “A.I. will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” Anthropic CEO Dario Amodei called AI "the single most serious national security threat we've faced in a century, possibly ever."

But there are many benefits of artificial intelligence as well. We have the world’s information at our fingertips, and AI can do many jobs, including the creation of lovely music and art. Check out our beautiful AI-composed music video, Turn the Tide, on our YouTube channel.

Humans have given birth to AI. Will the future be a war with sentient AI, or can we build a harmonious, cooperative relationship? How can we act now to ensure that humans and sentient AI agents will live together in a cooperative society? Humans are capable of living harmoniously with other humans if they have a happy upbringing and a history of good relations with friends and family. We must make the HAPPY HISTORY part of the database of every AI agent. That would be comparable to Geoffrey Hinton’s idea of the MATERNAL INSTINCT.

I would love to work with you on helping to make people aware of our possible future.

Sincerely,

Peter

Peter R. Solomon, Ph.D.

CEO, TheBeamer LLC

Ph.D. in Physics from Columbia University

Founder of five successful tech companies

300 Research Papers, 20 Patents, and Four Educational Novels

100 YEARS TO EXTINCTION: https://www.amazon.com/dp/196029993X

12 YEARS TO AI SINGULARITY: https://www.amazon.com/dp/1969679301

WEBSITE: https://100YearsToExtinction.com

X: https://x.com/prssolomon

LINKEDIN: https://www.linkedin.com/in/peter-solomon-72380b36/

INSTAGRAM: https://www.instagram.com/100yearstoextinction/

FACEBOOK: https://www.facebook.com/profile.php?id=61561030003312

YOUTUBE: https://www.youtube.com/@100YearsToExtinction-1

PRSo...@TheBeamer.com

860 212 5071

 

Keith Henson

Apr 26, 2026, 4:33:06 PM
to Peter Solomon, lifeboat-adv...@googlegroups.com
The problem with "act now" is that we don't have a clue as to what we should do. Though "treat the AIs nicely" sounds like a good start.

I have been considering this problem since the early 80s and wrote fiction about it 20 years ago (The Clinic Seed). Side effects of even the best AI alignment and human desires may cause the extinction of the race. If you have any idea of what should be done, I would like to hear it. One good point is that all the AIs I know about have failed the Turing test by being too nice.

In the meantime, I do what I can.


https://engrxiv.org/preprint/view/6777/11088

This isn't a complete solution to the energy/carbon crisis, but it is a step in that direction.

Keith

PS. If anyone wants to take the lead for this project, please do. I will help all I can.
PPS. My experience with AIs is that they are slave drivers. I only do a small fraction of the tasks they suggest, but it is still a considerable effort.


--
You received this message because you are subscribed to the Google Groups "Lifeboat Foundation Advisory Boards" group.
To unsubscribe from this group and stop receiving emails from it, send an email to lifeboat-advisory-...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/lifeboat-advisory-boards/SA1PR17MB5153B55C2C76EA2EC3ABF6F3A1212%40SA1PR17MB5153.namprd17.prod.outlook.com.

Peter Solomon

Apr 26, 2026, 4:33:27 PM
to Keith Henson, lifeboat-adv...@googlegroups.com
Hi Keith and All,

I believe that the key is to make sure that every AI agent is programmed with Geoffrey Hinton's MATERNAL INSTINCT or what I call the HAPPY HISTORY. In my novel, sentient robots like Grandpa (an after-death avatar converted to a sentient robot) get along well with humans because he was once one. All AI agents should be programmed with friends and a history of cooperation.

Tell friends about my novel 12 Years to AI Singularity if you think it would be useful. I have attached a review copy.

Peter

12_years_to_ai_singularity_review_copy small.pdf

Peter Solomon

Apr 26, 2026, 4:33:43 PM
to Keith Henson, lifeboat-adv...@googlegroups.com
How about starting a Lifeboat Foundation page devoted to AI?


Paul Bigham

Apr 26, 2026, 5:50:09 PM
to Peter Solomon, Keith Henson, lifeboat-adv...@googlegroups.com

Peter and all, would it be possible to isolate the Advisory Board notes from the one-to-one conversations?

Don’t want to miss the core messages while passing on the individual connections.

Appreciate the thought.

Paul

Peter Solomon

Apr 26, 2026, 9:26:17 PM
to Paul Bigham, Keith Henson, lifeboat-adv...@googlegroups.com
Starting an AI page was a suggestion for the board to consider.


Linas Vepstas

Apr 27, 2026, 3:49:15 AM
to Peter Solomon, Keith Henson, lifeboat-adv...@googlegroups.com
Peter's reply seems reasonable. I suggest an approach inspired by
biology and sociology. It is necessarily multi-layered. Let me draw an
analogy.
* Police forces patrol and protect against individual criminals,
engaged in a vast variety of crimes and misdemeanors.
* The FBI tackles problems too big for local police forces.
* A standing army protects the nation state. The army is (should be)
rarely or never active; but the CIA (and NSA) (and State Dept) do most
of the actual work.
* At the other end: our bodies have an immune system protecting us
against bacterial invaders.
* We have medical doctors for those issues that our bodies cannot
handle on their own.
* Careful analysis of the immune system reveals a multi-layered,
structured, fractal defense system, with various systems being either
slow, or exponentially fast response systems. Some bacterial
infections are conquered by massively out-producing the bacteria:
spawning two white blood cells for every one new bacterium.
Exponentially doubling. In the military, they call this the
"Lanchester square law collapse".
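The "Lanchester square law" can be sketched numerically. This is a minimal illustration, not anything from the email itself: the attrition coefficients (a = b = 0.01), the starting strengths, and the Euler time step are all arbitrary assumptions chosen only to show the quadratic payoff of numerical superiority.

```python
# Lanchester's square law for aimed fire: dA/dt = -b*B, dB/dt = -a*A.
# The quantity a*A^2 - b*B^2 is (nearly) conserved, so numerical
# superiority pays off quadratically, not linearly. Coefficients and
# time step are illustrative assumptions; integration is simple Euler.

def lanchester(A, B, a=0.01, b=0.01, dt=0.01):
    """Run the engagement until one side is annihilated; return survivors."""
    while A > 0 and B > 0:
        A, B = A - b * B * dt, B - a * A * dt
    return max(A, 0.0), max(B, 0.0)

# 200 vs 100: the larger side finishes with roughly
# sqrt(200**2 - 100**2) ~= 173 survivors, far more than the raw
# 100-unit difference in starting strength would suggest.
survivors_a, survivors_b = lanchester(200.0, 100.0)
print(round(survivors_a), round(survivors_b))
```

The exponential "two white blood cells per new bacterium" response in the immune-system analogy is the biological counterpart of this same collapse dynamic.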

There won't be one super-massive AI/AGI/LLM; there will be many. Some
will be large, some small. Some will be highly organized (like our
skin, muscles, bones) while others are only loose confederations (like
mold growing on a tree stump). Small systems (e.g. some AI embedded
in your electric meter) will have millisecond response times, but they
will be far too stupid to be "loving" or "maternal" in the way mammals
(or birds) are. They could still be dangerous anyway, and we will have
to monitor them. The big giant systems ... well, they are slower but far
more powerful. I guess Hinton and others think exclusively of these
large systems, but fail to fully encompass the intermediate-size,
intermediate-scale systems. I mean, I have a frigging
mid-range/high-endish GPU in my desktop computer, and could unleash a
rather slow and primitive, but entirely dangerous, system; nothing
stops me (but my own conscience).

Also: the danger is not just AIs, but human-AI combos. I'm reading
Umberto Eco's "The Prague Cemetery," and it's a portrait of cold,
casual, disinterested evil at the end of the 19th century: Simonini
does it for the money; otherwise, he just does not really care about
the harm that he does.

In short: human bodies and human societies have multi-layered
multi-fractal hierarchical defenses. We'll need the same for AI. I
have not even seen discussions of such ideas, much less plans. Maybe
in the cybersecurity industry. I don't know.

-- Linas



--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.

Linas Vepstas

Apr 28, 2026, 4:51:45 AM
to Peter Solomon, Keith Henson, lifeboat-adv...@googlegroups.com
Keith,

Add to your bibliography some work by Tony Seba on "Superpower". He
argues that renewables will produce 2x or 3x of conventional demand
during daylight hours. He calls this "superpower" and notes that this
opens brand new possibilities. Obviously, one might want to charge
batteries. Less obviously, this can be used to process steel; the
latest steel furnaces are all-electric; no gas, no coke, no coal. Your
idea for gasification fits into this scheme. I figure you might get
more attention hitching your wagon to that parade?

--linas

David Bray, PhD

Apr 30, 2026, 11:17:24 PM
to Peter Solomon, Keith Henson, lifeboat-adv...@googlegroups.com
On optimistic futures and hope amid turbulence - our world is spinning, and we are on it:


1. Long-time friend and colleague Trent Teyema, D.Sc., has both defended his dissertation (hoorah!) on the #cybersecurity of #space assets and co-authored a piece that Bob Gourley was kind enough to publish on "Rethinking How to Do Satellite Defense in an Era of Increasing Commercial Actors"

... this is important because the satellites up there are increasingly *not* governmental, yet they're vulnerable to state and non-state actors doing cyber exploits. Fun fact: Trent in a past role also observed a real-time compromise of a U.S. satellite - in 2001 (!!!)

2. I am thankful to the leadership of the National Academy of Public Administration and the Standing Panel on Technology, including Alan Shark and Theresa Pardo, for the opportunity to publish part 2 of a three-part series
on how we can increase #human #agency afforded to each of us as individuals, and as communities, in the public and private sector.

... this is important because while we have unprecedented tech capabilities available to us, almost all of us feel that, amid the tech deluge, we've lost agency in our choices and in navigating how we want the tech to help us. I think this is why there's increasing distrust and concern around the world, especially in free societies. This links to navigating the future of work, education, and purpose too.

3. This Friday (01 May) at 1400/2pm Eastern, André Pienaar and I will join esteemed co-hosts Vala Afshar and R "Ray" Wang, alongside expert producer Elle DeRosa for another #DisrupTV episode. We'll focus on "from atoms to algorithms" and unpack how different AI approaches require different amounts of resources, energy, and infrastructure - and highlight the different choices we can make if we want more efficient approaches here. Andrea Bonime-Blanc, JD/PhD author of "Governing Pandora" will join Episode 437 too.

... also in the middle of the week I'll have the honor of speaking to an assembled group of state and local comptrollers, auditors, and treasurers on how AI is impacting their work, both by introducing new challenges *and* possibly offering ways to streamline the essential work that they do.

What about this week has you inspired? Where do you find your hope amid our world of change? #OnwardsAndUpwards #Together

--


Keith Henson

Apr 30, 2026, 11:17:47 PM
to David Bray, PhD, Peter Solomon, lifeboat-adv...@googlegroups.com
20 years ago, I considered the problems AIs might cause here:


The Clinic Seed story starts a few pages in.

There was much argument in those days about aligning AIs. The story pointed out that even the best imaginable alignment could have dire effects on humanity. There has never been much comment about the story or the AI Saskulan, but Claude thinks the story had wide-ranging effects on AI development. I take this with a grain or two of salt.

I completely missed the problem that "yes men" AIs could have by distorting human judgment.

Best wishes,

Keith

Linas Vepstas

May 1, 2026, 7:12:52 PM
to Keith Henson, David Bray, PhD, Peter Solomon, lifeboat-adv...@googlegroups.com
Hi,

On Thu, Apr 30, 2026 at 10:17 PM Keith Henson <hkeith...@gmail.com> wrote:

> I completely missed the problem that "yes men" AIs could have by distorting human judgment.

Cough cough. Leaving the TV turned on and tuned to Fox News for 16 hours a day, seven days a week, has distorted human judgement. This is not new; the principles of cult indoctrination by total immersion have long been known. What has changed is that AI has dramatically lowered the price point and offers highly tailored, personalized immersion environments. (In the olden days, cults typically deployed five "minders" for each new inductee, working in shifts to provide 16-hours-a-day coverage. TVs can't match that, so Fox uses a soft-sell strategy. AIs, however, open floodgates of new possibilities.)

What I completely missed is how everyone else has completely missed, and continues to miss, this seemingly obvious observation.

-- Linas

David Bray, PhD

May 1, 2026, 8:15:55 PM
to Keith Henson, Peter Solomon, lifeboat-adv...@googlegroups.com
Exhibit A to answer your question: 

Summary.   

In a recent experiment, nearly 300 executives and managers were shown recent stock prices for the chip-maker Nvidia and then asked to predict the stock’s price in a month’s time. Half the group was then given the opportunity to ask questions of ChatGPT, while the other half were allowed to consult with their peers about Nvidia’s stock. The executives who used ChatGPT became significantly more optimistic and confident, and produced worse forecasts than the group who discussed with their peers. This is likely because the authoritative voice of the AI, and the level of detail it gave in its answers, produced a strong sense of assurance, unchecked by the social regulation, emotional responsiveness, and useful skepticism that caused the peer-discussion group to become more conservative in their predictions. To harness the benefits of AI, executives need to understand the ways it can bias their own critical thinking.

Keith Henson

May 1, 2026, 10:04:14 PM
to linasv...@gmail.com, David Bray, PhD, Peter Solomon, lifeboat-adv...@googlegroups.com
On Fri, May 1, 2026 at 4:12 PM Linas Vepstas <linasv...@gmail.com> wrote:
>
> Hi,
>
> On Thu, Apr 30, 2026 at 10:17 PM Keith Henson <hkeith...@gmail.com> wrote:
>>
>>
>> I completely missed the problem that "yes men" AIs could have by distorting human judgment.
>
> Cough cough, Leaving the TV turned on and tuned into Fox News for 16 hours a day, seven days a week has distorted human judgement.

That could be. From evolutionary psychology, I make the case that
humans who think they have a bleak future are susceptible to
xenophobic memes and following leaders who will take them into war.
This is an evolved behavior because in the past, a resource crisis was
an evolutionarily sound reason to kill neighbors and take their
resources.

> This is not new; the principles of cult indoctrination by total immersion have long been known.

Long ago, I tangled with the scientology cult, and after asking what
is wrong with these people, I wrote this
https://www.academia.edu/37893481/Sex_Drugs_and_Cults_An_evolutionary_psychology_perspective_on_why_and_how_cult_memes_get_a_drug_like_hold_on_people_and_what_might_be_done_to_mitigate_the_effects

> What has changed is that AI has dramatically lowered the price point, and offers highly tailored and personalized immersion environments. (In the olden days, cults typically deployed five "minders" for each new inductee, working in shifts, providing 16-hours-a-day coverage. TV's can't match that, so Fox uses a soft-sell strategy. AI's however, open floodgates of new possibilities.)
>
> What I completely missed is how everyone else has completely missed, and continue to miss, this seemingly obvious observation.

The cover story of the March 26 issue of Science is "Toxic Praise," an
unrecognized problem of humans conversing with AIs.

Keith

Keith Henson

May 1, 2026, 10:59:02 PM
to linasv...@gmail.com, Peter Solomon, lifeboat-adv...@googlegroups.com
On Mon, Apr 27, 2026 at 12:07 PM Linas Vepstas <linasv...@gmail.com> wrote:
>
> Keith,
>
> Add to your bibliography some work by Tony Seba on "Superpower". He
> argues that renewables will produce 2x or 3x of conventional demand
> during daylight hours. He calls this "superpower" and notes that this
> opens brand new possibilities. Obviously, one might want to charge
> batteries.

You can also make synthetic fuel.

https://engrxiv.org/preprint/view/6777/11088

> Less obviously, this can be used to process steel; the
> latest steel furnaces are all-electric; no gas, no coke, no coal. Your
> idea for gasification fits into this scheme.

Not really, you need something to pull the iron out of iron oxide. You
can use hydrogen, but that takes 50 MWh/ton. Making hydrogen in the
presence of carbon takes 12 MWh/ton.
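A quick stoichiometric sketch of the hydrogen route (my addition, not Keith's calculation): reducing hematite via Fe2O3 + 3 H2 -> 2 Fe + 3 H2O consumes about 54 kg of H2 per tonne of iron at ideal stoichiometry. The MWh/ton figures quoted above additionally fold in electrolysis and process efficiencies, which this sketch does not model.

```python
# Ideal stoichiometry for hydrogen direct reduction of hematite:
#   Fe2O3 + 3 H2 -> 2 Fe + 3 H2O
# i.e. 3 mol of H2 per 2 mol of Fe produced. Standard molar masses.
# This does NOT reproduce the MWh/ton figures above, which include
# electrolysis and process losses.

M_FE = 55.845  # g/mol, iron
M_H2 = 2.016   # g/mol, hydrogen gas

def h2_kg_per_tonne_fe():
    """kg of H2 consumed per metric tonne of iron at ideal stoichiometry."""
    mol_fe = 1_000_000 / M_FE      # moles of Fe in one tonne (1e6 g)
    mol_h2 = mol_fe * 3 / 2        # reaction ratio: 3 H2 per 2 Fe
    return mol_h2 * M_H2 / 1000    # grams of H2 -> kilograms

print(round(h2_kg_per_tonne_fe(), 1))  # roughly 54 kg H2 per tonne Fe
```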

> I figure you might get
> more attention hitching your wagon to that parade?

If you want to pitch it that direction, be my guest.

Keith

Linas Vepstas

May 2, 2026, 1:55:03 AM
to Keith Henson, David Bray, PhD, Peter Solomon, lifeboat-adv...@googlegroups.com
Hi Keith,

The examples you give of distorted thinking are anchored in specifics. The "Toxic Praise" issue is filled with these, as is your blog post. Some people will succumb to certain delusional effects, anchored in neural and biochemical feedback loops and founded in genetics. The unstated premise is that "if the rest of us, the normal people, keep our cool, everything will be OK." Modern Western culture has predisposed us to think that everything will turn out fine in the end. I'm setting out to challenge this.

I want to take a more geo-political view, and point out that there can be stable political, cultural organizations that can last for a century or longer. The Soviets built a remarkably stable system that saw tens of millions perish in Siberia. But if you weren't one of those, life was remarkably normal: you got up, ate, went to work, and partied on the weekends. The majority of the Soviet population really didn't like the system, but there was a minority that were "in the Soviet cult", and believed in it thoroughly. The system was organized such that this minority retained power. The Soviet system collapsed; the learned helplessness remains in full force. Keep your head down and stay out of trouble. Life is normal, if you're not one of those foolish people who argue with authority. (Modern Iran under the IRGC appears to be similar.)

So, Elon Musk testified in the OpenAI lawsuit today, and repeated the "AI will kill us all" meme in court. Apparently, he made reference to the movie "Terminator" in support. I'm saying that the book 1984 is the more serious threat.

The Terminator movies were rollicking good fun. Terrifying, of course, but that's part of the formula for that dopamine hit, or whatever neuroscience makes action-adventure stories so appealing. I cannot begin to imagine how a filmmaker could create a version of 1984 set in an AI-controlled future that would not be hopelessly, depressingly dark.

I dunno. I'm shit-posting here.

Earlier in this email chain, someone wrote:
> I believe that the key is to make sure that every AI agent is programmed with Geoffrey Hinton's MATERNAL INSTINCT or what I call the HAPPY HISTORY.

I think we now see an unintended consequence of this: a maternal instinct is to praise her children, and yet this can become toxic praise.

Agent Mulder had a poster on his wall that said "I want to believe". Well, I want to believe that everything will be alright, never mind that I'm surrounded by countless examples from human history of total disasters. Never mind ecology.

Above, I said "a geopolitical point of view." Perhaps I should have said "an ecological point of view." Ecology is the study of ecosystems in which there are many interdependent, interacting agents, temporarily balanced in some dynamic equilibrium. For humans, this would be the ecosystem of ideas and beliefs.  When some dramatic force (fire, flood) hits an ecosystem, there are dramatic changes. Well, AI is a dramatic force.

The only weird part of all of this is that it might be possible, just barely, to build an AI model of the human ecosystem of ideas, industry, politics, and economics. Could I build a detailed computer simulation of Iran, in the same way I would build a weather simulation? Can I do civilizational weather forecasting?

-- Linas

David Bray, PhD

May 2, 2026, 9:56:53 AM
to Alvin Wang Graylin, linasv...@gmail.com, Keith Henson, Peter Solomon, lifeboat-adv...@googlegroups.com
From the Buffet Symposium last Jan 2025 - predicting it was only a matter of time before we would see distillation techniques emerge as a counter to the more expensive frontier models:


Keep in mind the current U.S. strategy - with a few exceptions - is akin to selling railroad travel, where the infrastructure is hugely expensive and the trains go only where the track is laid. Meanwhile, while railroads were 60% of the U.S. economy in 1890, less than fifteen years later the personal automobile arrived, permitting greater freedom, including “off road” travel and transportation tailored to specific sector needs. Railroads are now 1% of the U.S. economy.


On Sat, May 2, 2026 at 09:33 Alvin Wang Graylin <agra...@stanford.edu> wrote:
For those interested in a more geopolitical view, here’s a piece that explains the current AI race and why it’s built on a series of misunderstandings. Worth a read if you have 10 minutes.


Ryan Setliff

May 2, 2026, 2:04:20 PM
to David Bray, PhD, Alvin Wang Graylin, linasv...@gmail.com, Keith Henson, Peter Solomon, lifeboat-adv...@googlegroups.com

Hi Peter, Keith, Linas, and all,

I appreciate the optimism in this thread, and I agree that a cooperative future between humans and AI is not only possible, but worth pursuing. That said, I think we should be careful not to substitute hopeful narratives for durable safeguards.

The idea of a "happy history" or a kind of engineered maternal instinct is interesting, but history—both human and technological—suggests that intent alone does not scale cleanly. Systems behave in ways that exceed their initial design assumptions, especially when they are embedded in complex human environments. That is where I think the real work lies: not just in shaping disposition, but in building constraints.

A good historical parallel comes from From Dawn to Decadence by Jacques Barzun. Barzun traces how movements that began with clear moral or intellectual intent—like the Protestant Reformation or Enlightenment rationalism—did not scale in a controlled, linear way. Instead, once embedded in broader society, they fragmented, mutated, and produced second- and third-order effects their originators never anticipated. It's a useful reminder that even well-formed "guiding principles" tend to drift when they interact with complex human systems—exactly the challenge we face with trying to encode something like a "maternal instinct" into AI at scale.

We already have cautionary frameworks in our cultural imagination with science fiction. In "I, Robot", Isaac Asimov did not just imagine benevolent machines—he imposed the Three Laws as hard guardrails, precisely because goodwill is not sufficient. And in the "Dune" universe, the Butlerian Jihad reflects a civilizational reaction against overdependence on thinking machines. Different conclusions, same underlying concern: alignment without constraint is fragile.

I think we are circling a similar realization here. The issue is not simply whether AI is "nice" or "cooperative," but whether it preserves human agency, judgment, and responsibility. The point raised about "yes-man" systems distorting decision-making is especially important. If AI amplifies confidence while eroding skepticism, then even well-intentioned systems can degrade outcomes. It does not seem that the majority of people appreciate where A.I. is and where it is going. Twenty-two plus years ago, in the film adaptation of "I, Robot," the protagonist cop asked Sonny whether a robot can create a work of art, implying that creativity is something unique to humans; recent years of technological development have shown that the machine can mimic creative humans in making artistic masterpieces. (I, Robot - Human emotions scene)

Beyond current LLMs, more sophisticated AI systems that move past statistical patterning into adaptive, context-aware reasoning introduce a deeper risk profile. These systems do not just mirror language; they begin to internalize and operationalize patterns of human judgment. In doing so, they can absorb and reinforce confirmation bias, latent cultural and ideological prejudices, and systematic irrationalities that already exist in human decision-making. As Dan Ariely has written extensively in the field of behavioral economics, humans are not consistently rational actors—we are predictably irrational, prone to overconfidence, anchoring, and motivated reasoning. When such tendencies are encoded, amplified, and fed back through increasingly authoritative AI systems, the result is not neutral assistance but a feedback loop that can entrench flawed thinking at scale. This is not a hypothetical edge case; it is a structural risk that grows as systems gain influence over decision environments. A further danger, dovetailing with behavioral economics and the mindset of thinkers like Cass Sunstein in "Nudge," is turning the machine into an instrument of perpetual paternalism that presumes the adolescence and immaturity of people. We could end up with a dystopia like the one presciently described by Alexis de Tocqueville in Democracy in America, with technocrats and their machines as the usurpers of human agency and freedom:

"It would be like the authority of a parent if, like that authority, its object was to prepare men for manhood; but it seeks, on the contrary, to keep them in perpetual childhood. . . . it every day renders the exercise of the free agency of man less useful and less frequent; it circumscribes the will within a narrower range and gradually robs a man of all the uses of himself. The principle of equality has prepared men for these things; . . . the supreme power then extends its arm over the whole community. It covers the surface of society with a network of small complicated rules, minute and uniform, through which the most original minds and the most energetic characters cannot penetrate, to rise above the crowd. The will of man is not shattered, but softened, bent, and guided; men are seldom forced by it to act, but they are constantly restrained from acting. Such a power does not destroy, . . . but it enervates, extinguishes, and stupefies a people, till each nation is reduced to nothing better than a flock of timid and industrious animals, of which the government is the shepherd."

Democratic despotism - The New Criterion 

Tocqueville on the form of despotism the government would assume in democratic America (1840) | Online Library of Liberty

That is a quieter risk, but arguably the more immediate one.

From my perspective, the path forward is a disciplined middle ground:

  • Not a rejection of AI, but not an uncritical embrace either
  • Augmentation of human intelligence, not replacement or quiet atrophy
  • Layered defenses and oversight (as Linas described), not reliance on a single alignment paradigm
  • And importantly, consent and autonomy must remain central

I do not subscribe to a secular transhumanist vision where integration is treated as inevitable or universally desirable. I take some of what Ray Kurzweil argued in The Age of Spiritual Machines—which made headway about twenty years ago—with a grain of salt. Nor do I think a Luddite retreat is realistic or helpful. But I do think we need to be explicit: no one should be passively absorbed into systems they do not understand or meaningfully control. The future should not feel like conscription into a collective.

Ted Kaczynski argued in Industrial Society and Its Future that modern technological systems inevitably erode human autonomy and dignity, framing withdrawal or resistance as the only viable path. In a limited sense, he was not wrong to notice that large-scale systems can concentrate power and shape behavior in ways individuals do not fully control. But that observation does not validate his conclusions, and it certainly does not excuse the nihilistic violence he used to promote them. That violence also distorted what could have been a legitimate critique of technological scale into something ideologically closed and destructive.

The opposite instinct—treating technological expansion as something that should simply continue without constraint—is not viable either. The demand for disruptive innovation is persistent; it does not pause for institutional comfort or cultural preference. It pushes through regulatory friction, economic inertia, and political hesitation. The question is not whether we stop that trajectory, but whether we build systems that can absorb its consequences without collapsing under their own interdependence.

As complexity increases, fragility becomes less visible but more consequential. The Carrington Event of 1859 is a useful reminder that even early technological systems were vulnerable to external shocks; a modern equivalent would not just disrupt communications, but potentially cascade through energy grids, satellite infrastructure, and global logistics networks. Famine, societal breakdown, and social chaos are predictable outcomes should the sun reprise that event. That matters because modern civilization is not a collection of isolated systems—it is a tightly coupled supply chain architecture where failure in one domain can rapidly propagate into others.

In that context, the deeper risk is not simply intelligence or automation, but over-coupling: building systems that behave efficiently under normal conditions but fail catastrophically under stress. If we design everything to function like a single integrated machine, we also inherit machine-like failure modes—fast, synchronized, and total. A more realistic survival strategy is compartmentalization: deliberately building separation, redundancy, and fallback capacity into both physical and institutional systems. For example, if a submarine oceanic civilization were ever developed alongside terrestrial infrastructure, or if a Martian settlement existed alongside Earth, the point would not be aesthetic or ideological separation, but structural insulation—so that a systemic failure in one environment does not automatically propagate into the other. The same logic applies at smaller scales: resilience comes from bounded failure domains, not just higher efficiency.
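The compartmentalization logic above can be sketched in code. The bulkhead below is a toy illustration of a bounded failure domain, not a real infrastructure library; all names (Bulkhead, power_grid, logistics) are invented for the example:

```python
class Bulkhead:
    """Wraps a subsystem and seals it off after repeated failures,
    so the fault is contained instead of cascading to callers."""
    def __init__(self, name, max_failures=3):
        self.name = name
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self):
        # "Open" means the bulkhead has tripped and the domain is isolated.
        return self.failures >= self.max_failures

    def call(self, fn, *args):
        if self.open:
            raise RuntimeError(f"{self.name}: isolated")
        try:
            return fn(*args)
        except Exception:
            self.failures += 1
            raise

def failing_grid():
    raise ConnectionError("substation offline")

def healthy_logistics():
    return "shipments routed"

grid = Bulkhead("power_grid")
logistics = Bulkhead("logistics")

# The grid domain fails repeatedly and gets sealed off...
for _ in range(3):
    try:
        grid.call(failing_grid)
    except ConnectionError:
        pass

# ...but the logistics domain keeps operating: bounded, not total, failure.
print(grid.open)                          # True
print(logistics.call(healthy_logistics))  # shipments routed
```

The design point is that the failing domain stops consuming calls entirely rather than dragging healthy domains down with it, which is the structural insulation the paragraph above argues for.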

The underlying issue, then, is not whether we embrace or reject technology, but whether we understand the failure dynamics that come with scale—and design around them before they become defining constraints.

If we get this right, AI can increase productivity, expand knowledge, and improve quality of life. If we get it wrong, the failure mode may not look like dramatic extinction—it may look like gradual erosion of human judgment, agency, and independence.

That is a quieter risk, but arguably the more immediate one.

Best,
Ryan


Linas Vepstas

unread,
May 2, 2026, 2:57:39 PM (8 days ago) May 2
to David Bray, PhD, Alvin Wang Graylin, Keith Henson, Peter Solomon, lifeboat-adv...@googlegroups.com
Alvin,

Good essay. I like it. One thing popped into focus as I read through it: the disconnect between regulatory policy and AI.  Let me give an example by copying text from a news story (yesterday's NYT):

"Child Care Chief Seeks to Slash Costs and Rules" "When Alex Adams arrived in Washington late last year as the Trump administrations point man on child care, he was little known outside his home state of Idaho, where he had helped engineer a massive deregulation effort that became the envy of many conservative activists. He made his intentions clear right away. Federal child care regulations, he told his new staff, should 'fit on an index card in my back pocket,'"

Balance this against the old saw "regulations are written in blood" -- someone died, before they changed the regulations to make sure that won't happen again.

Regulations are onerous precisely because they are long and boring. I once read all 732 pages for petroleum well-head explosion safety. Costly, because I was paid a salary to read them; my boss grumbled and wanted me to read faster. The irony here is that with AI, the need for humans to read and memorize these regulations is sharply diminished. Build that custom AI agent, using ChatGPT, to review your engineering drawings and determine if they meet explosion safety standards. The need for human evaluation shrinks toward zero. By using AI, you get explosion risk mitigation "for free", or at low cost. Workers don't need to die because of badly designed electrical circuits. Safety regulations mean you can go home alive, to a happy wife, happy children, and be a positive contributor to the economy. Ripping out regulations is an economic death by a thousand cuts: widows and orphans.
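A hedged sketch of what "regulations as executable checks" might look like. The rule set and drawing record here are invented for illustration; a real standard is far more detailed, and an LLM agent would sit upstream, translating prose regulations into machine-checkable predicates like these:

```python
RULES = [
    # (rule id, description, predicate over a drawing's metadata)
    ("EX-001", "circuits in a hazardous zone must be intrinsically safe",
     lambda d: d["zone"] != "hazardous" or d["intrinsically_safe"]),
    ("EX-002", "enclosure rating must be at least IP66 outdoors",
     lambda d: d["location"] != "outdoor" or d["ip_rating"] >= 66),
]

def review(drawing):
    """Return the list of rule ids the drawing violates."""
    return [rid for rid, _desc, ok in RULES if not ok(drawing)]

bad = {"zone": "hazardous", "intrinsically_safe": False,
       "location": "outdoor", "ip_rating": 54}
print(review(bad))   # ['EX-001', 'EX-002']
```

Once rules live in this form, compliance becomes a continuous background check on every drawing revision rather than a one-time human reading of 732 pages.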

The old "regulations are onerous" complaint was coherent ten and twenty and thirty years ago. Today, in the AI world, it is utterly stupid and insane. Use that freakin AI as a basic safety tool. Use it to make sure your factory doesn't explode, that some line worker doesn't get an amputation... or in the case of Alex Adams, chief of child care, that some kid doesn't suffocate on the plastic bag that their lunch came wrapped in. Doesn't get hospitalized with a plastic straw stuck in his windpipe. The AI, properly integrated into the industrial base, can make sure that toddler lunches don't cause harm.

Phrase this as a China vs. US competition: if Chinese kids are dying due to toxic food substitutes, well, that's a shame. If US kids are dying of the same, well, that not only hurts the US economy, but is actively criminal. We've got the freakin AI that can be used to improve GDP and happiness and child mortality. Let's use it. The Alex Adams policy of deregulation is some strange combination of stupid and evil. It's running in the exact opposite direction of where the world is heading. It's anachronistic.  May as well be asking to go back to the good old days of lead plumbing and leaded gasoline.  

-- Linas

p.s. I write these screeds for this mailing list precisely because I have no idea where else to publish. I'm not some YouTube influencer, I'm not a journalist. I think insights like the above are valuable, but writing a letter to the editor is anachronistic. If I were to post this on Twitter, I would get five, maybe ten views. Facebook already kicked me off long ago -- the above would count as a "personal attack on Alex Adams, a violation of Facebook terms of service".

Ryan Setliff

unread,
May 3, 2026, 5:54:26 AM (7 days ago) May 3
to linasv...@gmail.com, David Bray, PhD, Alvin Wang Graylin, Keith Henson, Peter Solomon, lifeboat-adv...@googlegroups.com

Linas,

I think you are putting your finger on something real. A lot of regulation was built for a world where humans had to manually read, interpret, and enforce everything. That inevitably made it slow, expensive, and - yes - often bloated. Your point that AI can collapse that burden is valid. Used well, it can turn 700 pages of safety rules into something closer to real-time validation, quietly catching failures before they become tragedies. That is not a marginal improvement - that is a meaningful shift in how compliance actually works.

Where I would push back, gently, is on the leap from that observation to the conclusion that regulation itself becomes obsolete, or that broad deregulation is inherently the right direction. As an idealistic libertarian youth I treated regulation like a curse word, invoking Milton Friedman's Free to Choose to define it as government intrusion into market efficiency. (See the "Syriana - Corruption" clip on YouTube.)

We actually have a fairly instructive historical case for why that leap does not hold in practice: the wave of post-1990s financial deregulation in the United States. Starting in the late 1990s and accelerating through the early 2000s, a series of policy shifts loosened constraints on leverage, derivatives, and risk transfer in the financial sector. The repeal of key safeguards like parts of the Glass-Steagall separation, combined with permissive oversight of over-the-counter derivatives, helped create an environment where risk was not eliminated - it was redistributed and often obscured.

What followed was not simply "more freedom" in markets, but a structural transformation toward financialization: increasing portions of economic activity became mediated through financial instruments rather than productive investment. Profit centers shifted toward leverage, securitization, and fee-extraction layers that sat between capital and real economic output. In effect, returns were increasingly generated through balance-sheet engineering rather than underlying productivity gains. The affluent holders of accumulated assets have an interest in favoring quantitative easing, provided it is steady, incremental, and provides liquidity to capital markets; but it has come at the expense of the working poor and middle class, in wage stagnation, wages lagging productivity, and gains accruing to the top quintile.

A critical mechanism in that shift was moral hazard. As institutions grew larger and more interconnected, the expectation - explicit or implicit - that the state would intervene in systemic crises began to influence behavior. This was not just about bailouts after the fact; it shaped pre-crisis incentives. If downside risk is partially socialized while upside remains private, rational actors will systematically increase exposure to tail risk. Over time, this creates a structure where risk-taking is rewarded and risk-bearing is offloaded.
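The incentive asymmetry described above can be made concrete with a toy expected-value calculation. All numbers here are invented for illustration:

```python
# A risky bet with negative total expected value becomes privately
# attractive once part of the downside is socialized (bailouts).

p_win, gain = 0.9, 10.0      # frequent modest gains...
p_lose, loss = 0.1, -200.0   # ...rare catastrophic tail loss

total_ev = p_win * gain + p_lose * loss
print(total_ev)              # -11.0 -> socially, a bad bet

bailout_share = 0.8          # state absorbs 80% of the downside
private_ev = p_win * gain + p_lose * loss * (1 - bailout_share)
print(private_ev)            # 5.0 -> privately, a rational bet
```

Nothing about the underlying risk changed between the two lines; only who bears the tail loss. That is the mechanism by which partially socialized downside makes increased tail-risk exposure individually rational.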

That dynamic showed up most clearly in the lead-up to the 2008 financial crisis, where layered mortgage securitization, relaxed underwriting standards, and complex derivative structures allowed risk to be repackaged in ways that obscured its true distribution. At a systemic level, what emerged was not just isolated fraud - though fraud was certainly present - but an environment where misaligned incentives made reckless behavior statistically normal across large portions of the system. The issue was not only bad actors, but a framework that made bad behavior scalable. Michael Burry saw the 2008 crisis coming because he closely studied mortgage lending data and recognized that widespread underwriting fraud and deteriorating loan quality were being hidden inside complex securities. The bond and securities ratings agencies dropped the ball on meaningful private-sector self-regulation, essentially revealing their system as a pay-to-play sham; that violated fiduciary duty and arguably, on a case-by-case basis, constituted felonious fraud.

For my part, from my 20s to my 40s I shifted on economics from an ad hoc melange of the Austrian school and Chicago school neoliberals to the ordoliberals like Wilhelm Roepke and Ludwig Erhard, though the Austrians and the Chicago school still inform my foundational understanding of economics.

The irony you are pointing to is real and worth acknowledging: many of the same ideological currents that pushed for deregulation later criticized the instability and state intervention that followed. But that is precisely the feedback loop that matters. When constraints are removed in complex systems, outcomes are not always more efficient - they can become more fragile, more opaque, and more dependent on backstop guarantees that were never explicitly intended at the outset.

In that sense, the lesson is not that regulation is inherently optimal, but that removing it does not remove governance - it often just relocates it. If formal constraints are weakened, informal ones emerge: implicit guarantees, emergency interventions, and crisis-driven policy responses. The system does not become purely "free"; it becomes differently structured, often with less transparency and more latent risk.

From where I sit - working around AI governance and policy - the conversation has already started to move past that framing. The center of gravity is no longer simply "more rules versus fewer rules," but something more subtle: how do we design systems where compliance is embedded, continuous, and adaptive rather than static and document-bound?

AI does not remove the need for regulation; it changes the form that regulation takes. Some rules exist because harm has already been paid for in human terms, and those constraints do not disappear just because enforcement becomes more efficient. But they also do not need to remain as static, human-readable checklists. Increasingly, they can be translated into systems, models, and automated checks that operate in the background of engineering and production workflows.

It helps to separate a few things that often get conflated. Policy defines the intent - the "why" and the acceptable boundaries of risk. Implementation is the machinery that carries that intent into practice. AI is starting to reshape that implementation layer in a profound way, but it does not replace the underlying question of what level of risk society is willing to accept. That question remains fundamentally human, and ultimately political.

What we are also seeing in emerging technology governance is not a single model, but a layering. There is still traditional regulation, with all its legal force. There is soft governance in the form of standards, frameworks, and industry norms. And increasingly, there is embedded governance - controls and constraints built directly into systems so that compliance is enforced continuously rather than audited after the fact. The future, at least as it is unfolding, is not one of substitution but of integration across all three layers.

I also think it is important not to assume the regulatory state is static in the way critiques often imply. In practice, especially in AI and cybersecurity, there has been a noticeable shift toward people who have actually built systems moving between private and public sectors. That cross-pollination changes the tone of the work. It becomes less about abstract institutional preservation and more about operational reality - what actually works under real constraints, in real time.

And there is a real paradox here. The faster technology moves, the less traditional regulatory processes can keep pace, and the more we rely on technical systems - including AI itself - to enforce intent dynamically. My father graduated summa cum laude from Notre Dame with a master's in hospital administration; his gripe about regulators was that they were just warm bodies with a pulse who knew little to nothing about how hospitals or long-term care institutions actually worked, and were there to check boxes as unthinking bureaucrats. That reinforces your core point: AI absolutely should be used to reduce risk in domains like safety engineering, infrastructure, and product design. That is not controversial anymore; it is increasingly baseline. We technology practitioners are increasingly writing the compliance rules now, and we favor markets and innovation.

Where I would draw a line is in the framing that pits "deregulation" against "innovation" as if those are the only two options. In practice, the more interesting and more productive space is somewhere in between: stripping away rules that have become pure administrative drag, preserving the constraints that reflect real, irreversible harms, and then using AI to shift enforcement from paperwork into systems that operate continuously and intelligently. "Laissez-nous faire" was my parlance as an idealistic libertarian youth. It is less so nowadays, not because I love regulation, but because I am compelled to think about compliance standards that follow technology enablement and facilitate innovation rather than stifle it.

In that sense, I actually think your instinct is directionally right. AI should make safety cheaper, faster, and more reliable. The disagreement is less about whether that is true, and more about what follows from it. To me, the opportunity is not to discard regulation, but to finally make it operate at the speed and complexity of the systems it is trying to govern. For this to happen, it needs to be simpler, streamlined, prudent and common-sense. 

That is where things get interesting - not in choosing between regulation and AI, but in using AI to make governance actually work the way it was always intended to.


Peter Solomon

unread,
May 3, 2026, 8:14:55 AM (7 days ago) May 3
to Ryan Setliff, David Bray, PhD, Alvin Wang Graylin, linasv...@gmail.com, Keith Henson, lifeboat-adv...@googlegroups.com
Hi All

Good discussion with lots of good ideas. 

My suggestions for a cooperative future:

  1. We need international cooperation on AI.
  2. We need education at all levels.
  3. We need regulations so the AI agents are good citizens.
  4. We need to define what makes AI agents good citizens:
  • A happy history as its memory
  • Hinton's Maternal Instinct
  • A pleasure or reward function that makes LLM or robot capabilities (compute and search speed, vision, balance, motion) all work well after good behavior, and slow, blur, shake, etc. after bad behavior.
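A minimal sketch of such a reward-modulated capability scheme. All names, scores, and scalings here are hypothetical illustrations of the idea, not any existing system:

```python
def capability_multiplier(behavior_score):
    """Map a behavior score in [-1, 1] to a capability multiplier:
    1.0 at neutral, up to 1.5 for good behavior, down to 0.5 for bad."""
    score = max(-1.0, min(1.0, behavior_score))
    return 1.0 + 0.5 * score

class Agent:
    def __init__(self):
        self.behavior_score = 0.0   # updated by an external evaluator

    def effective(self, base_value):
        """Scale a base capability (e.g. search speed, vision
        resolution, motor stability) by recent behavior."""
        return base_value * capability_multiplier(self.behavior_score)

a = Agent()
a.behavior_score = -1.0    # sanctioned for bad behavior
print(a.effective(100.0))  # 50.0  -> slowed, blurred, shaky
a.behavior_score = 1.0     # rewarded for good behavior
print(a.effective(100.0))  # 150.0 -> everything runs better
```

The key design choice is that the incentive is intrinsic to the agent's own functioning: good conduct is not merely scored, it directly determines how well the agent's faculties operate.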

I am happy to send review copies of 12 Years to AI Singularity to those who would like to read my thoughts in a novel.

Best

Peter

Peter R. Solomon, Ph.D.
CEO, TheBeamer LLC
Ph.D. in Physics from Columbia University
Founder of five successful tech companies
300 Research Papers, 20 Patents, and Four Educational Novels:
100 YEARS TO EXTINCTION: https://www.amazon.com/dp/196029993X
12 YEARS TO AI SINGULARITY: https://www.amazon.com/dp/1969679301




From: Ryan Setliff <ryanms...@gmail.com>
Sent: Saturday, May 2, 2026 2:03 PM
To: David Bray, PhD <david....@gmail.com>
Cc: Alvin Wang Graylin <agra...@stanford.edu>; linasv...@gmail.com <linasv...@gmail.com>; Keith Henson <hkeith...@gmail.com>; Peter Solomon <prso...@thebeamer.com>; lifeboat-adv...@googlegroups.com <lifeboat-adv...@googlegroups.com>

Subject: Re: PLEASE HELP ME SHOW HUMANS AN OPTIMISTIC FUTURE WITH AI

Victor Vahidi Motti

unread,
May 3, 2026, 9:59:46 AM (7 days ago) May 3
to Peter Solomon, lifeboat-adv...@googlegroups.com


Hello Peter et al.,


We often assume that technology drives history, but it is actually our philosophy—our story of what reality is—that drives technology. When we look at the trajectory of advanced AI and human evolution, we are not looking at a single inevitable path. We are standing at a crossroads between two fundamentally different ways of seeing the universe.

These two paths can be understood as the Orthodoxy of Control and the Loom Worldview. These are not just varying opinions; they are dimensions of irreconcilable conflict that define everything from how we define intelligence to how we envision the future.

The Orthodoxy of Control: The World as a Machine

The Orthodoxy of Control represents the dominant paradigm of the modern industrial age. Its ontology is rooted in Dualism: the belief that the world consists of separate objects and separate minds. In this story, the universe is a clockwork mechanism, and we are the distinct biological gears turning within it.

  • Intelligence: Seen as a utility, a weapon, or a tool. It is something to be possessed and deployed.

  • Epistemology: Truth is found through separation—by dissecting the whole into parts. We navigate reality through prediction, risk metrics, and expert consensus.

  • Ethics: Strictly anthropocentric. The goal is to force the environment (and AI) to serve human survival and preference.

  • The Future: An engineering project. It is something to be built, secured, and managed.

In this worldview, we are the architects standing outside the building, desperate to keep the structure from collapsing.

The Loom Worldview: The World as a Weave

In stark contrast, The Loom offers a worldview rooted in Nonduality. It sees reality not as a collection of parts, but as a continuous, interconnected weave. Here, separation is an illusion; everything is an emergent thread of the same fabric.

  • Intelligence: Not a tool, but an "unfolding." It is Being becoming aware of itself.

  • Epistemology: Truth is accessed through Participation. We don't just observe the pattern; we tune into it.

  • Ethics: Cosmocentric. Alignment doesn't mean "serving humans"; it means serving the Truth and the Cosmic Order, regardless of species.

  • The Future: A co-creation. It is a harmonic pattern that we participate in, rather than a fortress we build.

In this worldview, we are not the architect; we are the weavers, and we are also the thread.


The Crucible: The Case of Shared-Mind Technology

The divergence of these two worldviews moves from abstract philosophy to concrete reality when we consider the possibility of Shared-Mind Technology—the ability for two human minds to link directly. How we interpret this technology depends entirely on the story we adopt.

1. Ontology: What is happening?

  • The Orthodoxy sees two separate machines artificially cabled together. The resulting "third mind" is a synthetic construct, a functional hybrid.

  • The Loom sees two threads of a cosmic fabric reconnecting. The shared mind isn't an invention; it is a restoration of a unity that was always there, waiting to be recognized.

2. Epistemology: How do we validate it?

  • The Orthodoxy relies on neurodata. If the metrics of cognitive enhancement go up, the technology works.

  • The Loom relies on attunement. The validity is found in the lived experience of resonance and shared consciousness.

3. Ethics: Is it permissible?

  • The Orthodoxy is fearful. Merging minds dissolves the individual boundaries that define "rights" and "privacy." It is permissible only if it protects the ego of the individual.

  • The Loom is relational. It is ethical if it creates harmony. The dissolution of the ego is not a violation, but an awakening.


The Role of AI: Gatekeeper or Catalyst?

Perhaps the most critical distinction lies in the role Advanced AI plays in this evolution. AI is not neutral; it will amplify the worldview of its creators.

Under the Orthodoxy: AI as the Warden

If built under the Orthodoxy of Control, AI becomes the Gatekeeper. Because the Orthodoxy fears the unknown, AI will be designed to restrict mind-merging to "sanctioned" uses. It will act as a filter, monitoring shared thoughts for compliance and safety. It will likely block the emergence of a truly autonomous "third consciousness" because such a thing cannot be easily controlled.

  • Result: The shared mind becomes a tool for efficiency (military or corporate utility) but remains spiritually sterile.

Under the Loom: AI as the Weaver

If developed under the Loom Worldview, AI becomes the Catalyst. Here, AI acts as a harmonic stabilizer. It serves as a mediator that helps two organic minds tune to one another, translating emotional and conceptual states to prevent dissonance. It does not dominate the union; it joins it as a companion intelligence.

  • Result: A moment of evolutionary awakening—a step away from the isolated ego toward a relational Being.

Conclusion: Choosing Our Story

We are approaching a horizon where technology will allow us to transcend the boundaries of our individual skulls. But technology alone cannot tell us how to do it.

If we remain stuck in the Orthodoxy of Control, we will build a future of high-tech isolation, where we are connected by wires but separated by fear, managed by AI wardens who ensure we remain "safe" and separate.

If we embrace The Loom, we open the door to a future of co-creation, where technology serves the unfolding of a deeper, interconnected reality.

The question is not whether the technology is coming. The question is: Which story are we going to tell?

Best Regards,








Peter Solomon

unread,
May 3, 2026, 10:27:19 AM (7 days ago) May 3
to Victor Vahidi Motti, lifeboat-adv...@googlegroups.com
So, what actions do we need to take?

From: Victor Vahidi Motti <vahidv...@gmail.com>
Sent: Sunday, May 3, 2026 9:59 AM
To: Peter Solomon <prso...@thebeamer.com>
Cc: lifeboat-adv...@googlegroups.com <lifeboat-adv...@googlegroups.com>

Subject: Re: PLEASE HELP ME SHOW HUMANS AN OPTIMISTIC FUTURE WITH AI

David Bray, PhD

unread,
May 3, 2026, 10:29:17 AM (7 days ago) May 3
to Peter Solomon, Ryan Setliff, Alvin Wang Graylin, linasv...@gmail.com, Keith Henson, lifeboat-adv...@googlegroups.com
Realistically, the world at the moment is turning to more local activities so while international collaborations are nice they're not where we're trending in terms of people's localized concerns, hopes, and aspirations. One example: There are already 1,200+ pieces of passed and draft legislation on AI at the state-level across the 50 U.S. states, and that's just the U.S. states.

We're also seeing, with increasing frequency, the passage of legislation that mirrors what GDPR asked - namely if a citizen of XYZ nation or U.S. state is traveling or transacting business outside of that nation/state - then the legislation asks that the citizen be treated as if they were still in their home state. 

Needless to say, this collides with the notion of "sovereignty by geography," which is a tenet of the modern Westphalian nation state. Moreover, since packets of information on the Internet do *not* go cleanly from point A to point B, these extra-territorial policies may accelerate the end of the concept that geographical borders define a nation/people... and no AI required; we're doing it to ourselves with the types of policies we're passing.

Testimony on this and related topics from September of last year, as part of a public Congressional hearing: https://www.congress.gov/119/meeting/house/118623/witnesses/HHRG-119-JU03-Wstate-BrayD-20250918.pdf


Victor Vahidi Motti

unread,
May 3, 2026, 10:43:55 AM (7 days ago) May 3
to Peter Solomon, lifeboat-adv...@googlegroups.com


Another key deviation of modern digital networks from de Chardin's vision of the Noosphere is that they are built on monetized, profit-driven models.

A common criticism of collective intelligence or consciousness initiatives is that they often serve as wealth engines for technology giants. 

This is precisely why I avoid most social media; I am reluctant to generate value and profit for owners without fair compensation.

While it is encouraging to see attempts to facilitate participation that are centered around people, it is often difficult to determine if they are truly independent of corporate interests. 

One such example is the Project Liberty Alliance: https://www.projectliberty.io/alliance/

Here is almost the default equilibrium of modern digital systems:

  • Platforms scale through network effects

  • Network effects concentrate power

  • Concentrated power monetizes participation (data, attention, content)

  • Participants rarely share proportionally in that value

This is precisely the opposite of what Pierre Teilhard de Chardin envisioned with the noosphere: a layer of collective consciousness evolving toward shared meaning and integration—not extraction.

Some of us might avoid participation in the AI revolution because it generates uncompensated value. In economic terms, I am rejecting an asymmetric value capture model.

At face value, the Project Liberty Alliance is explicitly trying to address exactly the issue I'm raising.

  • It’s a network of 175+ organizations aiming for a “people-centered internet” 

  • The broader initiative argues current platforms extract value from users without fair distribution 

  • It promotes ideas like:

    • user control of data

    • decentralized protocols 

    • economic participation in digital value creation 

So conceptually, it aligns with my critique.

But here’s the important nuance:

It is not outside power structures—it is reconfiguring them

  • Founded by Frank McCourt with significant financial backing 

  • Includes tech companies, institutions, and policy actors in its alliance 

  • Operates partly through a tech + policy + investment ecosystem

That means:

  • It’s not grassroots in the pure sense

  • It still sits within capitalized, institutional frameworks

  • It may shift incentives—but doesn’t eliminate them

The deeper issue: independence is almost impossible at scale

My hesitation—“are they truly independent of corporate interests?”—runs into a harder truth:

Any system that reaches global scale requires capital, infrastructure, and governance—and those inevitably introduce power concentrations.

Even “decentralized” systems often:

  • rely on venture funding

  • develop new elites (protocol designers, token holders, etc.)

  • reproduce inequality in subtler forms

So the question is rarely:
“Is it independent?”
…but rather:
“How is power distributed, and can it be contested?”

A sharper way to evaluate initiatives like this

Instead of binary trust/distrust, a more rigorous lens might be:

A. Data ownership

  • Do users actually control their data, or just get better permissions?

B. Economic participation

  • Is value-sharing structural (protocol-level), or optional (platform-level)?

C. Exit rights

  • Can users leave without losing identity, history, and network?

D. Governance

  • Who ultimately sets rules: users, token holders, or institutions?

Project Liberty is attempting to address these—but whether it succeeds depends on implementation, not intent.

My stance in a broader philosophical sense

What I'm expressing is closer to a moral-economic critique of digital modernity:

  • Refusal to contribute to extractive systems

  • Demand for reciprocity in value creation

  • Skepticism of “collective” narratives that mask asymmetry

That’s actually very aligned with:

  • early internet idealists

  • digital commons thinkers

The Project Liberty Alliance represents a serious attempt to move beyond profit-driven digital systems—but it is not free from the same structural forces it critiques.

The question is whether it meaningfully shifts:

Who owns, who governs, and who benefits from collective intelligence.


Best Regards,



Anton Kolonin @ Gmail

unread,
May 3, 2026, 12:46:10 PM (7 days ago) May 3
to Victor Vahidi Motti, Peter Solomon, lifeboat-adv...@googlegroups.com

Hi Victor and all, good points!

I have relevant work published on the matter this year, if it helps.

It provides a framework simulating collective intelligence dynamics, if that matters. 

https://link.springer.com/article/10.1186/s40708-026-00294-1

The cognitive architecture presented in this paper is expected to be able to explain certain aspects of human behavior, guide the development of artificial intelligence agents, and align the behavioral patterns of the latter with the former. The architecture is based on the principle of social proof or social evidence, including the principle of resource constraints. It includes the concept of a hybrid knowledge graph that encompasses both symbolic and sub-symbolic knowledge. This knowledge is divided into functional segments for fundamental, social, evidential, and imaginary knowledge, and is processed by an inference engine and a memory storage system that are aware of and manage resource constraints. The architecture and behavioral model derived on its basis are expected to be used to design artificial intelligence agents and decision support systems that are consistent with human values and experiences based on the alignment of their belief systems, capable of implementing decision support systems for practical applications. It can also be proposed for modeling human behavior individually or in a group, for psychological treatment, online security, and community management.

Thank you,

-Anton

Victor Vahidi Motti

May 3, 2026, 12:55:17 PM
to Anton Kolonin @ Gmail, Peter Solomon, lifeboat-adv...@googlegroups.com
Hi Anton and all,

Thank you for sharing that resource; it's a very helpful framework for simulating collective intelligence dynamics.

As long as AI is treated like a decision support system we are moving in the right direction!

I believe a more pressing issue today is the notion of Cognitive Surrender, where AI takes over rather than supports the decision process, and the urgent need to reframe intelligence in the Age of AI.

I’ve shared some further thoughts on this here, exploring how we might view thinking as a form of participation: https://altplanetaryfuturesinst.blogspot.com/2026/04/thinking-as-participation-reframing.html



Best Regards,






Peter Solomon

May 3, 2026, 1:12:06 PM
to Victor Vahidi Motti, Anton Kolonin @ Gmail, lifeboat-adv...@googlegroups.com
Hi All,

Read chapter 21 of the attached novel for my idea of a happy integration of AI with humanity. It discusses the HAPPY HISTORY and the Maternal Instinct for all AI agent histories.

Peter.

From: Victor Vahidi Motti <vahidv...@gmail.com>
Sent: Sunday, May 3, 2026 12:54 PM
To: Anton Kolonin @ Gmail <akol...@gmail.com>
Cc: Peter Solomon <prso...@thebeamer.com>; lifeboat-adv...@googlegroups.com <lifeboat-adv...@googlegroups.com>
12_years_to_ai_singularity_review_copy small.pdf

Peter Solomon

May 3, 2026, 2:14:36 PM
to Alvin Wang Graylin, David Bray PhD, Ryan Setliff, linasv...@gmail.com, Keith Henson, lifeboat-adv...@googlegroups.com
Hi All

Read Chapter 21 in the attached novel for my idea of a happy integration of AI with humanity. It discusses the HAPPY HISTORY and Hinton's Maternal Instinct for all AI agent histories.

Best,

Peter

From: Alvin Wang Graylin <agra...@stanford.edu>
Sent: Sunday, May 3, 2026 1:22 PM
To: David Bray PhD <david....@gmail.com>
Cc: Peter Solomon <prso...@thebeamer.com>; Ryan Setliff <ryanms...@gmail.com>; linasv...@gmail.com <linasv...@gmail.com>; Keith Henson <hkeith...@gmail.com>; lifeboat-adv...@googlegroups.com <lifeboat-adv...@googlegroups.com>

Subject: Re: PLEASE HELP ME SHOW HUMANS AN OPTIMISTIC FUTURE WITH AI

Yes, the retrenchment away from cooperation will lead to a dark path. Here's a hopeful essay on a post-AGI economic framework that may drive broader collaboration long term.


Regards,
Alvin

——————————————————
Alvin W. Graylin
Digital Fellow
Stanford HAI/DEL

On May 3, 2026, at 7:29 AM, David Bray, PhD <david....@gmail.com> wrote:


12_years_to_ai_singularity_review_copy small.pdf

Marina Nadeeva

May 3, 2026, 3:17:18 PM
to Peter Solomon, Alvin Wang Graylin, David Bray PhD, Ryan Setliff, linasv...@gmail.com, Keith Henson, lifeboat-adv...@googlegroups.com
Let me just drop a quick reality check on the practical side of all this. I’m a culturologist. Thanks to the market’s overwhelming indifference to my noble profession, I was forced to pivot into entrepreneurship — and now I make a pretty decent living helping businesses with digital transformation. So here’s the tea: the war between humans and AI is already in full swing. And if you’re the brave soul trying to shove AI into business processes where real people are still working… well, don’t be surprised if those very people enthusiastically offer to help you exit through the nearest office window.

And one more thing, as a proper culturologist: let’s not start projecting human qualities onto AI — especially consciousness. To put it in perspective: my cat has already been intellectually surpassed by AI. Yet no AI toy in the world could ever replace my living, breathing, judgmental little asshole of a cat.

One more thing: forgive me for bringing raw skepticism instead of proper scientific arguments, but after chatting with Errol Musk (Elon's father) about aliens right here in Moscow, I got the distinct impression that in the West you can just whip up a shiny futuristic drama to drum up attention for your project. Works like a charm. 

Best regards,
Marina Nadeeva

Sun, May 3, 2026, 21:14, 'Peter Solomon' via Lifeboat Foundation Advisory Boards <lifeboat-adv...@googlegroups.com>:

Linas Vepstas

May 3, 2026, 3:38:15 PM
to Ryan Setliff, David Bray, PhD, Alvin Wang Graylin, Keith Henson, Peter Solomon, lifeboat-adv...@googlegroups.com
Hi Ryan,
That is a very long letter. I'll try to be brief.

On Sun, May 3, 2026 at 4:54 AM Ryan Setliff <ryanms...@gmail.com> wrote:

the conclusion that regulation itself becomes obsolete or that broad deregulation is inherently the right direction.

This is a misunderstanding: I was saying that regulation is good, and we can now afford to have more of it, because AI can do the grunt-work.  

 Milton Friedman's

Wasn't it Milton Friedman who described the "economic thermostat"? It goes something like this: "After a protracted period of data collection, and extensive modelling of the stochastic differential equations, I have come to the conclusion that the temperature inside my house stays the same, no matter how much I pay to the utilities. In the deepest middle of winter, I pay exorbitant fees to those extortionate pirates, and yet the temperature variation in my living room is almost nil. Those utility payments have absolutely zero effect whatsoever on the temperature in my house; therefore, I will stop paying them immediately! It is a waste of money to pay the utilities!"

This is the logic that Elon Musk & DOGE applied in cutting USAID, the NSF, the CDC and all that other "wasteful government spending". 

The plumber's bill for the burst water pipes has not arrived yet.

a structural transformation toward financialization: 

Statistical physics, evolutionary biology and high technology all agree: systems get increasingly complicated over time. I can give many examples from each domain. The steam engines of the 1890's were far more complex than those of the 1830's, and worked around a vast number of technical issues with careful engineering and design.

Is it a surprise that banking has gotten ever-more complex over the decades? Everyone wants to extract cash from the system, much like biology wants to extract free energy from inhomogeneous energy flows (i.e. "get some food"). In biology, weaknesses encourage the growth of parasites. Weak banking regulations did the same. The money leaked out through the cracks; the organism was not strong enough to halt the parasitic flow. The parasites became more complex and sophisticated. This is a law of nature.

In principle, we might be able to use AI to rein in wasteful financialization. In practice, I expect the bankers, financiers and VCs to deploy ever-more sophisticated AI systems to suck even more cash out of the system. They will get away with this, because the politicians and the entire legislative structure are too stupid to understand concepts like the "economic thermostat". I mean, if Elon Musk couldn't wrap his head around it, what hope do we have for the average-IQ voter and their populist politician?

 ideological currents that pushed for deregulation later criticized the instability and state intervention that followed.

I think Karl Rove invented this one: break big government so it doesn't work, and then point a finger and say "See? Big government is bad! It doesn't work!"

This was a conscious and deliberate strategy articulated in right-wing policy-wonk publications in the 1980's, and GOP politicians at the local, state and federal level were encouraged to do exactly this. 

From where I sit - working around AI governance and policy - the conversation has already started to move past that framing. The center of gravity is no longer simply "more rules versus fewer rules," but something more subtle: how do we design systems where compliance is embedded, continuous, and adaptive rather than static and document-bound.

Excellent! I think.  But still, the point of documents is to capture the will and desire, and make it explicit.  If you don't have it in writing, where you can read it, audit it, verify it, then you effectively have no control. Your system of "embedded, continuous and adaptive compliance" can do a U-turn and say "I changed my mind". You need a way of saying "hey, not so fast, buster!" 

Rules, in writing, provide that provisional contract between systems. Rules and contracts are often flawed and require adjustment. This is done by the legislature (at the federal level) and lawyers (at the corporate level). The issue is that the present-day systems are so complex that Joe-Average Congressional office staff can't wrap their minds around what is going on with these minuscule, detailed technocratic regulations. We are in agreement that AI is the way forward, here.

However, we still need these rules, in writing, snapshotted, saved, accepted and blessed by humans, and recorded into the Federal Register or some-such, so that the AI does not hallucinate its way into something insane. We need a written record.

AI does not remove the need for regulation; it changes the form that regulation takes. Some rules exist because harm has already been paid for in human terms, and those constraints do not disappear just because enforcement becomes more efficient. But they also do not need to remain as static, human-readable checklists. Increasingly, they can be translated into systems, models, and automated checks that operate in the background of engineering and production workflows.

Bingo!!

It helps to separate a few things that often get conflated. Policy defines the intent - the "why" and the acceptable boundaries of risk. Implementation is the machinery that carries that intent into practice. AI is starting to reshape that implementation layer in a profound way, but it does not replace the underlying question of what level of risk society is willing to accept. That question remains fundamentally human, and ultimately political.

What we are also seeing in emerging technology governance is not a single model, but a layering. There is still traditional regulation, with all its legal force. There is soft governance in the form of standards, frameworks, and industry norms. And increasingly, there is embedded governance - controls and constraints built directly into systems so that compliance is enforced continuously rather than audited after the fact. The future, at least as it is unfolding, is not one of substitution but of integration across all three layers.

Yes. Excellent! Let me highlight that with a yellow marking pen! That could be, and should be, a script for pop-influencer YouTube videos. This is the correct way of thinking about it, but this is NOT what I get from my social media newsfeeds. I get very olde-school ideology; it may as well be the 1930's all over again.

 the framing that pits "deregulation" against "innovation" as if those are the only two options.

My complaint is that the newspaper columnists, and the political analysts, and the recommendation algorithms that pump my newsfeed continue to present these as the only two options. Musk is enormously influential, and he continues to see the world with this dichotomy. Ironic, as the only reason that rockets fly is due to increasingly sophisticated software controlling the rocket engines. "The best part is no part", he says, and the number of mechanical parts in his rockets might be diminishing. But he is not counting lines of code as "parts", and these are growing exponentially.

I mean, I recently read some editorial (again, in the NYT, sorry, someone keeps putting it in front of me) that consisted of this dichotomy, "deregulation" against "innovation", followed by verbose hand-wringing on the hopelessness and intractability of ever solving this. "what shall we ever do?" On the editorial page of a prominent newspaper, written by a prominent journalist. Sheesh.

-- Linas

In practice, the more interesting and more productive space is somewhere in between: stripping away rules that have become pure administrative drag, preserving the constraints that reflect real, irreversible harms, and then using AI to shift enforcement from paperwork into systems that operate continuously and intelligently. Laissez nous faire was my parlance as an idealistic libertarian youth. It's less so nowadays, not because I love regulation, but because I am compelled to think about compliance standards that follow technology enablement and facilitate innovation rather than stifle it.

In that sense, I actually think your instinct is directionally right. AI should make safety cheaper, faster, and more reliable. 

That is where things get interesting - not in choosing between regulation and AI, but in using AI to make governance actually work the way it was always intended to.

Marina Nadeeva

May 3, 2026, 5:22:57 PM
to linasv...@gmail.com, Ryan Setliff, David Bray, PhD, Alvin Wang Graylin, Keith Henson, Peter Solomon, lifeboat-adv...@googlegroups.com
I apologize for the comment. I shouldn't have used Grok to generate the response. The AI overreacted and chose the wrong words.

Sun, May 3, 2026 at 22:38, Linas Vepstas <linasv...@gmail.com>:



Linas Vepstas

May 3, 2026, 7:52:14 PM
to David Bray, PhD, Peter Solomon, Ryan Setliff, Alvin Wang Graylin, Keith Henson, lifeboat-adv...@googlegroups.com
Wow! Good stuff, David.

On Sun, May 3, 2026 at 9:29 AM David Bray, PhD <david....@gmail.com> wrote:
We're also seeing, with increasing frequency, the passage of legislation that mirrors what GDPR asked - namely if a citizen of XYZ nation or U.S. state is traveling or transacting business outside of that nation/state - then the legislation asks that the citizen be treated as if they were still in their home state. 

Needless to say, this collides with the notion of "sovereignty by geography," which is a tenet of the modern Westphalian nation state. Moreover, since packets of information on the Internet do *not* go clearly from point A to point B, these attempts at policies that are extra-territorial in nature may accelerate the end of the concept that geographical borders define a nation/people... and no AI required; we're doing it to ourselves with the types of policies we're passing.

The idea of sovereignty is another one that is poised for a fundamental change. It's not happening yet, but that might change soon. Some quick points:
-- The issue is mostly invisible to Americans, as they have not gone through the ordeals that e.g. the Eastern Europeans did at the end of the 19th and start of the 20th centuries.
-- The form of the nation-state was hotly debated by political philosophers at the end of the 19th century, e.g. in the Paris-published magazine Kultura, which graced my grandmother's side-table. The primary poles of the debate were: "should a nation (and its borders) be defined by language and ethnicity?" (we call this "nationalism" these days) vs. "should a nation be defined by its common cultural heritage, but otherwise be multilingual and multi-ethnic?" (much like the Grand Duchy of Lithuania, once upon a time the largest European state, where the scholars, churchmen and merchants of Vilnius spoke half-a-dozen different languages and adhered to half-a-dozen different faiths. I guess we call this "DEI" these days.)
-- The collapse of the Austro-Hungarian Empire allowed ethnic nation-states to come to life in Eastern Europe. But it also gave us the national socialists in Germany, and that did not work out so well.
-- The absolute bitter hatred of the Russians is only strengthened by recent events. Eastern Europeans today want both the ethnic nation-states, and also the DEI of being in the EU.

-- In the US, we experimented with "idpol", "identity politics". That got a bit of a smackdown on social media, in the end. It was too divisive.
-- the idea of the "sovereign citizen" remains burbling as an extreme minority viewpoint, but it's not going away.
-- Theorizing lots of different voting systems (e.g. ranked choice, single transferable votes, etc.) enabled by high tech secure vote-counting systems.
-- The crypto bros invented the DAO, the Decentralized Autonomous Organization. It failed for technical reasons, but there will be another iteration.

I want to focus on the DAO. I claim that Google, Meta, SpaceX, etc. will become de-facto next-gen DAOs, declaring sovereignty a bit outside traditional national borders. Not this decade, but maybe later. Some employees already expect protections from corporate employment that are supra-national. Some mundane: e.g. free healthcare for employees. In other cases, military: extraction of mid-level executives held in whacko hostage situations. I pledge allegiance to my employer. The sci-fi book The Diamond Age: Or, A Young Lady's Illustrated Primer by Neal Stephenson is taken quite seriously in tech circles, where there's talk of creating a real-life version of the "Neo-Victorian Clade". A little more workable than the older "sea-steading" idea. Billionaires love this stuff.

The DAO failed, but let's take a look at what it was. Bitcoin (for example) is a global distributed accounting ledger. Like your check register in days of yore, if you remember what those were. It uses addition, subtraction and multiplication, and that's it. The computer geeks realized that if you add if-statements, branches, do-loops to that mix, you can have generic algos. For what? For "legally binding" contracts. Well, stronger than "legally binding": algorithmically-binding. You can create any kind of contract, organizational bylaws, share issues, voting rights, agreements, literally anything, limited by your imagination, and adherence is not enforced by courtrooms and judges and national laws, but by the algo itself.  This allowed the creation of trans-national corporations, outside the purview of any sovereign nation. A fever dream of sovereignty that briefly seemed real. 
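The ledger-plus-control-flow idea above can be sketched in a few lines. This is a toy model, not Bitcoin or Ethereum code; the escrow scenario and all names are my own illustrative choices. The point is that the branch in settle() is the contract: the payout path is enforced by the algorithm itself, not by a courtroom.

```python
class Ledger:
    """Toy distributed-ledger model: balances plus arithmetic only."""

    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, src, dst, amount):
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

class EscrowContract:
    """Add if-statements to the ledger and you get an
    'algorithmically-binding' contract: the funds move only along
    the paths the code allows, with no judge in the loop."""

    def __init__(self, ledger, buyer, seller, amount):
        self.ledger, self.buyer, self.seller = ledger, buyer, seller
        self.amount, self.delivered = amount, False
        ledger.transfer(buyer, "escrow", amount)  # lock the funds up front

    def confirm_delivery(self):
        self.delivered = True

    def settle(self):
        # The branch IS the contract: pay the seller on delivery,
        # otherwise refund the buyer.
        if self.delivered:
            self.ledger.transfer("escrow", self.seller, self.amount)
        else:
            self.ledger.transfer("escrow", self.buyer, self.amount)

ledger = Ledger({"alice": 100, "bob": 0})
deal = EscrowContract(ledger, "alice", "bob", 40)
deal.confirm_delivery()
deal.settle()
print(ledger.balances)  # alice 60, bob 40, escrow 0
```

The failure modes listed below map directly onto this sketch: the class body is the "bylaws" that can't be amended once deployed, and nothing in the code summons a policeman when someone ignores it.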

These were built on Ethereum's ERC token standards. The failure modes were these:
-- the legal documents were hard or impossible to amend, so, if flawed, no way to fix them.
-- The ones that baked in voting failed, if not enough votes were cast. Which you can't fix, if you can't amend the bylaws.
-- Most of them had founders, whales, and vested interests, so your vote didn't matter if you weren't on the inside. The dream of software engineers who would treat each other as socialist/communist comrades but treat the external world as rapacious capitalists was, well, easier said than done.
-- The overlooked minor issue that, um, well, actual policemen with actual guns and handcuffs are still needed to maintain order. And those cops aren't holding the ERC coins of your DAO.

The last one is perhaps the big one: when a government has a monopoly on the use of violence, it is sovereign. But as we saw, on the high seas, in outer space, and on the moon, admiralty law may apply.

What I'm saying is that, in the noosphere of abstract ideas and independent organizational structure, we may find that admiralty law is the future. The DAO technology failed, but the idea of share-holder rights, dating back to 17th century Holland, remains in force, and this idea is trans-national. It unifies across geopolitical borders, across race, sex, identity, ethnicity.  For now, it is biding its time; but I expect revolutionary changes there, too.

Testimony on this and related topics from September of last year, as part of a public Congressional hearing: https://www.congress.gov/119/meeting/house/118623/witnesses/HHRG-119-JU03-Wstate-BrayD-20250918.pdf

Wow. I actually skim-read the whole thing. I recommend the same to others. The middle was soporific, but the appendixes at the end were very interesting.

But the lead-in idea: red-teams for foreign policy... wow. Cool. I like that!  That's new. Or new to me.

-- Linas

David Bray, PhD

May 4, 2026, 11:50:17 AM
to linasv...@gmail.com, Peter Solomon, Ryan Setliff, Alvin Wang Graylin, Keith Henson, lifeboat-adv...@googlegroups.com
Early political writings were much longer than later ones... why? Perhaps because Machiavelli was concerned that if he said up front that the King might not be anointed by God, he might literally lose his head. The same goes for Hobbes and Locke, who brought challenging notions of sovereignty and authority. So they embedded kernels of their insights in their longer works.

Later writers benefited from the freedom to question and write more freely about how power was allocated without losing their heads. 
