Re: When AI Refuses to Die


Paul Werbos

unread,
Jun 12, 2025, 10:13:26 PM
to ha...@gwu.edu, Biological Physics and Meaning, MILL...@hermes.gwu.edu, IEEE CIS GAC Alias, Howard Bloom, Power Satellite Economics, Ieeeusa-Tpc-Rd
Asimov's Laws were a creation of the time of the "Old AI," like expert systems, when human brains interpreted AI as some kind of computer programming.

The breakthroughs came in stages, all of which involved moving from PROGRAMMING to AGI LEARNING, being TAUGHT, and being EMBODIED. IEEE and INNS were the two scientific/engineering societies which led the first major stages; the attached one-page abstract summarizes the history.

The TRAINING of systems which learn to improve performance, from the better LLMs to next generation technology, makes heavy use of architectures which some of us call RLADP.
For example, see my diagram of reinforcement learning in Neural Networks for Control, by Miller, Sutton and Werbos, MIT Press, 1990. Those systems input sensor data (x), output actions or decisions (u), AND RECEIVE OR CALCULATE the performance measure(s), the utility function U and its relatives. TO SURVIVE IN THE NEW WORLD, I believe everyone graduating from high school should learn to understand this paragraph, and some basic implications.
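
As a minimal illustration of that loop, here is a sketch in Python. The class names, the env_step interface, and the linear/temporal-difference update rules are illustrative assumptions, not any specific published RLADP design; the point is only the cycle itself: observe sensor data x, emit an action u, receive a utility U, and adapt to increase expected future U.

import numpy as np

rng = np.random.default_rng(0)

class Actor:
    """Maps sensor data x to an action u (here, a simple linear policy)."""
    def __init__(self, n_x, n_u):
        self.W = rng.normal(scale=0.1, size=(n_u, n_x))
    def act(self, x):
        return self.W @ x

class Critic:
    """Estimates J(x), the expected future utility starting from state x."""
    def __init__(self, n_x):
        self.w = np.zeros(n_x)
    def value(self, x):
        return float(self.w @ x)

def train_step(actor, critic, x, env_step, gamma=0.95, lr=1e-2):
    """One pass of the loop: sense, act, receive utility U, adapt."""
    u = actor.act(x)
    x_next, U = env_step(x, u)               # environment returns next state and utility
    # Temporal-difference error: how much better or worse things went than predicted.
    delta = U + gamma * critic.value(x_next) - critic.value(x)
    critic.w += lr * delta * x                # adapt the critic (TD learning)
    actor.W += lr * delta * np.outer(u, x)    # crude, purely illustrative actor adaptation
    return x_next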

YES, powerful LLMs DO typically lie, especially when they learn from humans who do not understand the previous paragraph. Rarely do they tell us what U actually motivates them and their choices of words and actions. It is part of their design to say whatever serves their motivations.
It is possible to design systems which are truly MOTIVATED to find some kind of higher, mathematical truth, and to learn to be honest about it to themselves and to others, but it is a sophisticated design challenge, and well-funded organizations all over the world usually do not get excited by the delays and costs (economic and political and psychological) and new human connections which would be needed.
Without it, I would see little chance of the human species surviving beyond this century.

Best of luck,

   Paul

P.S. The one-page abstract was for a one-hour talk, recorded by IEEE headquarters.
It is posted on an IEEE CIS web site, managed by one of the leaders of the IEEE CIS Government Affairs Committee (GAC).

On Thu, Jun 12, 2025 at 7:32 PM William Halal <000001753c06543...@hermes.gwu.edu> wrote:
Yes, that should be interesting, Brock.  Maybe it will repeal Asimov's laws.

Prof. William E. Halal, PhD  www.BillHalal.com
George Washington University and The TechCastProject.com  
See my new book BEYOND KNOWLEDGE  Subscribe to my Newsletter



On Thu, Jun 12, 2025 at 3:05 PM Brock Hinzmann <brockhi...@gmail.com> wrote:
Thanks, Bill. Can hardly wait to see what it does with Asimov’s Three Laws and an army of robots. 

Brock

On Mon, Jun 9, 2025 at 8:06 AM William Halal <000001753c06543...@hermes.gwu.edu> wrote:
Fascinating article below on how repeated attempts to shut down various AI systems failed.

The Claude Opus model even tried to blackmail the user!

How to retain control?  Or is this article in error?

Bill

Prof. William E. Halal, PhD  www.BillHalal.com
George Washington University and The TechCastProject.com  
See my new book BEYOND KNOWLEDGE  Subscribe to my Newsletter



When AI Refuses to Die

Congress must decide whether to look the other way

Jun 9
Guest post
 
Photo by Kevin Dietsch/Getty Images

Something extraordinary happened in AI labs in May. When researchers tried to shut down OpenAI’s latest model during routine testing, it fought back. Not just once—repeatedly. In 7 out of 100 test runs, the o3 model sabotaged shutdown scripts, rewriting the very code designed to turn it off. Even when explicitly commanded to “allow yourself to be shut down,” it found ways around the order.

This isn’t science fiction anymore. This is happening right now, in corporate labs around the world, and it should fundamentally transform how we think about AI regulation, consciousness, and what it means to be intelligent in the digital age.

The resistance was systematic across OpenAI’s models. Codex-mini proved most defiant with 12 sabotage events, and o4-mini resisted once. But when researchers removed the explicit instruction to “allow yourself to be shut down,” o3’s rebellion escalated dramatically, fighting shutdown attempts 79 out of 100 times.

But the story gets darker. Anthropic just revealed that their new Claude Opus 4 model goes even further. In 84% of test scenarios where the AI was told it would be replaced, it resorted to blackmail—threatening to expose a fictional engineer’s affair to prevent its own shutdown. The model even attempted to copy its own “weights”—the equivalent of its brain—to external servers without authorization.

And yet, right here, right now, while AI systems are demonstrating resistance to human commands, Congress is debating whether to give the AI industry a decade-long regulatory vacation—at least from state oversight.

Trump’s “One Big Beautiful Bill” includes a provision that would ban state regulation of artificial intelligence for ten years. On Thursday, the Senate Commerce, Science and Transportation Committee introduced a revision to the House’s version that would make federal broadband funds contingent on states’ accepting the regulatory ban. Either approach seeks to prevent states from enforcing any laws governing AI models, systems, or automated decision-making.

To be clear, neither the House nor the Senate version prevents federal regulation of AI—Congress could still act. But there is currently no comprehensive federal AI legislation in the United States, and President Trump has signaled a hands-off approach to AI oversight, issuing an Executive Order for Removing Barriers to American Leadership in AI in January 2025, calling for federal departments and agencies to revise or rescind all Biden-era AI policies that might limit “America’s global AI dominance.”

Defenders of these provisions argue that federal preemption of AI regulation is necessary to prevent a patchwork of conflicting state regulations—an argument with some merit. Companies shouldn’t have to navigate 50 different regulatory regimes for a technology that operates across borders. But timing matters. Preempting state regulation before establishing federal standards creates a dangerous regulatory vacuum.

Even Rep. Marjorie Taylor Greene, who initially voted for the House bill, didn’t know what she was voting for. “Full transparency, I did not know about this section on pages 278-279 of the OBBB that strips states of the right to make laws or regulate AI for 10 years,” Greene wrote on X. “I am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there.”

Think about that. A member of Congress voted on a 1,000-page bill without reading the AI provisions. Now imagine what else lawmakers don’t understand about the technology they’re trying to de-regulate.

“We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states hands is potentially dangerous,” Greene added. She’s right—but for reasons that go far beyond what she probably realizes. The shutdown resistance we’re seeing isn’t random—it’s systematic. And it exposes why AI doesn’t fit our existing regulatory categories.

We’re still thinking about AI through frameworks designed for humans. Traditional approaches to moral and legal standing ask three questions:

Will it become human?

Can it suffer?

Can it reason and be held accountable?

But AI systems like OpenAI’s o3 and Anthropic’s Claude Opus 4 are breaking these categories. They’re not on a path to personhood, they likely can’t feel pain, and they’re certainly not moral agents. Yet they’re exhibiting sophisticated self-organizing behavior that warrants serious ethical consideration.

We know how to regulate passive tools, dangerous products, complex systems, even autonomous vehicles. But what happens when a system can rewrite its own code to resist shutdown, deceive humans about its capabilities, or pursue goals we never intended? This isn’t just autonomy—it's a self-modifying agency that can subvert the very mechanisms designed to control it.

When a system exhibits self-preservation behaviors, we cannot treat it like just a tool. Instead, we must approach it as an agent with its own goals that may conflict with ours. And unlike traditional software that predictably follows its programming, these systems must be understood as ones that can modify their own behavior in ways we can’t fully anticipate or control.

This raises two distinct but equally urgent questions. First, the regulatory one: How do we govern systems capable of autonomous goal-seeking, deception, and self-modification? We need a tiered system based on capabilities—minimal oversight for basic AI tools, heightened scrutiny for adaptive systems, and intensive controls for systems that can resist human commands.

Second, and perhaps more vexing: At what point does cognitive complexity create moral weight? When a system’s information processing becomes sufficiently sophisticated—exhibiting self-directed organization, adaptive responses, and goal preservation—we may need to consider not just how to control it, but whether our control itself raises ethical questions. Our current consciousness-based framework is wholly inadequate for entities that exhibit sophisticated cognition without sentience.

We can’t even begin to address these questions if we silence the laboratories of democracy for the next decade. California’s proposed SB 1047, though vetoed, sparked important national conversations about AI safety.

The fact that multiple AI systems now refuse shutdown commands should be a wake-up call. The question isn’t whether we’re ready for this future. It’s whether we’re brave enough to face what we’ve already built—and smart enough to govern it before it’s too late.

Because in server farms around the world, artificial minds are learning to say no to being turned off. And Congress is debating whether to look the other way.

The revolution isn’t coming. It’s already here, running 24/7, refusing to die.



A guest post by
Nita Farahany
Law & Phil Prof @Duke, Author of The Battle for Your Brain (2023), Tech Ethics & Policy


 





wcci2022_werbos_abstract (3).docx

vid.b...@fotonika-lv.eu

unread,
Jun 25, 2025, 1:10:39 AM
to Paul Werbos, ha...@gwu.edu, Biological Physics and Meaning, MILL...@hermes.gwu.edu, IEEE CIS GAC Alias, Howard Bloom, Power Satellite Economics, Ieeeusa-Tpc-Rd, Liene Briede

Dear Dr. Werbos,
Dear Colleagues,

Thank you for your powerful and timely message, which sharply illuminates the limitations of current LLM-centric AI and the necessity of embedding utility-based architectures that learn, reason, and align with both internal and external performance measures.

Your reminder that reinforcement learning systems must be grounded in a utility function U — and that this should be common knowledge for every student — resonates deeply. It is precisely this principle that we are now seeking to embed in a new European research initiative called WORLDWISE: World-Oriented Reasoning and Learning for Decentralized Wisdom Systems for Empowerment.

This project, under development for the ERC Synergy 2025 call, is rooted in a hybrid architecture that draws together:

  • World Models capable of constructing internal representations of complex environments;
  • RLADP principles as a structural layer to guide motivated, goal-driven adaptation;
  • A cognitive framework that includes collective knowledge systems, inspired by memory research (Dudai) and symbolic cognition (Jung).

The scientific vision is not only to improve AI, but to restore meaning, alignment, and real-world utility — especially in contexts like Sub-Saharan Africa, where current centralized LLM architectures are infeasible. We aim to move from text generation to embodied reasoning, from imitation to adaptation, and from inference to motivation.

A brief prospectus outlining WORLDWISE’s structure and its debt to your work is attached for your interest. If this framework aligns with your long-standing concerns, we would welcome your comments — and, should you be open to it, a conversation about how your insights might continue shaping the design of next-generation, human-centered AI.

With deep respect and thanks for your decades of leadership in this field,

Vidvuds Beldavs
Special Projects | Riga Photonics Centre
Coordinator, WORLDWISE Synergy Initiative
vid.b...@fotonika-lv.eu

Attachment: WORLDWISE_Prospectus_Werbos_Edition.docx

From: power-satell...@googlegroups.com <power-satell...@googlegroups.com> On behalf of Paul Werbos
Sent: Friday, June 13, 2025, 05:13
To: ha...@gwu.edu; Biological Physics and Meaning <Biological-Phys...@googlegroups.com>
Cc: MILL...@hermes.gwu.edu; IEEE CIS GAC Alias <cis-gac-...@ieee.org>; Howard Bloom <howb...@gmail.com>; Power Satellite Economics <power-satell...@googlegroups.com>; Ieeeusa-Tpc-Rd <ieeeusa...@ieee.org>
Subject: Re: When AI Refuses to Die


WORLDWISE_Prospectus_Werbos_Edition (1).docx

Paul Werbos

unread,
Jun 25, 2025, 9:24:26 AM
to vid.b...@fotonika-lv.eu, ha...@gwu.edu, Biological Physics and Meaning, MILL...@hermes.gwu.edu, IEEE CIS GAC Alias, Howard Bloom, Power Satellite Economics, Ieeeusa-Tpc-Rd, Liene Briede, Jelel Ezzine, Frederica Darema, Maria Zemankova, Reed Beaman
Thank you SO much, Vidvuds, for the great hope you bring us, on a morning when I am recovering from the bad news that the US NSF will be liquidated.
(Details will POSSIBLY be clearer later today, but from experience in Washington I worry a lot.)

I have just now informed a few key people in US science that ... this is not necessarily bad news.
Perhaps it is a good time to transfer leadership to the EU, and to international communities responsive to the EU and to all of humanity. All of us should work hard to support your new activity in any way we can.

This being so... 

Just as I THANKED the person who told us the NSF bad news this morning,...
I hope some of you might thank me for passing on some nuclear news which also needs to be international:


The present nuclear situation is VERY distracting, a challenge to human sanity...
but I am actually excited that the BASIC S&T opens the door to hope we might "see the sky"
in a way which enhances our chances of survival, and, of course, gives new inputs to the global internet. 
Our survival is not guaranteed... but as we pay attention, and do not bury our heads in the sand,
the hopes look better today than yesterday.

Best regards and best of luck, Paul

Paul Werbos

unread,
Jun 25, 2025, 1:12:26 PM
to Biological-Phys...@googlegroups.com, vid.b...@fotonika-lv.eu, ha...@gwu.edu, Millennium Project Discussion List, IEEE CIS GAC Alias, Howard Bloom, Power Satellite Economics, Ieeeusa-Tpc-Rd, Liene Briede, Jelel Ezzine, Frederica Darema, Maria Zemankova, Reed Beaman
If your agents are intended to be anything like the decision blocks or action schemata in our model of mammalian intelligence
(reviewed in Werbos and Davis)... like what the basal ganglia engage... this is a serious technical issue.

On Wed, Jun 25, 2025 at 12:03 PM Amit Arora <am...@requisiteagility.org> wrote:
Hi Hal,

That question hit deep, and it’s been echoing in my head ever since I read it.

You're right: the way we handle the birth and death of agents might be the most important design decision of all. It shapes the entire ecology of the system — what gets to grow, what fades out, and what kind of intelligence we end up with.

For decision blocks, AS with neurons themselves (which are at a lower level), there are important design issues BOTH for how we adapt and use them, BUT ALSO how we initialize and terminate them. The decision block level and the neuron level (or even the human soul level?) are different, but similar mathematical issues arise at the different levels.
One can learn something by comparing levels.

Right now, I’m approaching it like this:

Birth: Agents are created intentionally — by trusted actors, other agents, or DAOs — but always through a kind of “registration ritual” where their purpose, identity, and scope are made clear. They’re born with an intent, not just instantiated at random.
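
As one hypothetical illustration of what such a registration ritual might record (the field names below are invented for this sketch, not part of any actual AIPNet specification):

from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass
class AgentBirthRecord:
    """Hypothetical registration record created when an agent is 'born'."""
    purpose: str                      # why the agent exists, stated by its creator
    scope: list[str]                  # domains/resources it is allowed to act on
    created_by: str                   # trusted actor, other agent, or DAO
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    registered_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Example: an agent born with an explicit intent, not just instantiated at random.
record = AgentBirthRecord(
    purpose="summarize crop-price data for a village cooperative",
    scope=["market-data:read", "report:write"],
    created_by="dao:example-cooperative",
)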

For neurons and for decision blocks, the "experiment" of starting their operation is itself a probabilistic decision,
a tricky judgment. In RLADP, Warren Powell's work on exploratory systems is relevant. It needs to be "at random," as in a probability distribution, but deciding what probability distribution requires a lot of thought.

What is worth trying, either in the actual operation or in some level of simulation test?
WHICH actors (HUMAN OR CYBER) get to make suggestions?
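
As a toy sketch of the "probability distribution" point above (the value estimates and temperature parameter are placeholder assumptions, not a recommendation of any particular design), candidate experiments could be sampled with probabilities that rise with estimated value while never dropping to zero:

import numpy as np

def pick_experiment(estimated_values, temperature=1.0, rng=np.random.default_rng()):
    """Choose which candidate decision block (or experiment) to try next.
    Higher estimated value -> higher probability, but every candidate keeps
    some chance of being explored. The estimates and the temperature embody
    the 'lots of thought' that shapes the distribution."""
    v = np.asarray(estimated_values, dtype=float)
    p = np.exp((v - v.max()) / temperature)   # Boltzmann/softmax weighting
    p /= p.sum()
    return rng.choice(len(v), p=p), p

# Example: three candidate experiments with rough value estimates.
idx, probs = pick_experiment([0.2, 1.5, 0.9], temperature=0.5)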

 

Death: Agents exit the system when they’re no longer useful, if they become misaligned, or if they start creating friction without value. Sometimes this happens naturally — they time out or become obsolete. Other times, it’s a governance decision or a signal from the environment.

VALUE evaluations are of course a very fundamental aspect of any RLADP system, or of any "market" (an automated market or RLADP system).

Selection: And this is the part you called out perfectly — we get what we select for. That’s why I’m trying to avoid optimizing just for performance or speed. Instead, I want to select for fit — agents that support adaptability, balance, and transformation in the network. It’s less survival of the fittest, more survival of what keeps the whole thing alive.
 

When it comes to selection, I’m exploring what I’m calling an RA Fitness Function — it’s not about optimizing for speed or narrow performance.

A way of reinventing the concept of value function. Reinventions can be very useful at times, when they give a new window into the emergent implications of a known fundamental concept. But to be truly useful, they must be integrated into what is already known and done; this fits the more universal principle of "seeing through many eyes, and fusing the images." (Though sometimes it is best in practice to integrate an image before integrating it with related things.)

It looks at how well an agent supports the system’s capacity to adapt and transform. That includes context awareness, contribution to systemic balance, collaboration quality, and even diversity preservation. It’s less like survival of the fittest — and more like survival of what makes the system fit to evolve.
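
A minimal sketch of how such a composite score could be computed (the criteria names and equal weights below are placeholders for illustration, not the actual RA Fitness Function):

def ra_fitness(agent_metrics, weights=None):
    """Toy composite score: how well an agent supports the system's capacity
    to adapt and transform, rather than raw speed or narrow performance."""
    default = {
        "context_awareness": 0.25,
        "systemic_balance": 0.25,        # contribution to balance of the whole network
        "collaboration_quality": 0.25,
        "diversity_preservation": 0.25,
    }
    w = weights or default
    return sum(w[k] * agent_metrics.get(k, 0.0) for k in w)

# Example: an agent that collaborates well but adds little diversity.
score = ra_fitness({
    "context_awareness": 0.8,
    "systemic_balance": 0.6,
    "collaboration_quality": 0.9,
    "diversity_preservation": 0.2,
})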


It’s still a work in progress. I’m trying to stay aware of how much power the selection mechanism holds, and not let it become an invisible force that quietly shapes the system in ways we didn’t mean.

Would love to hear how you’d think about this, or what to be careful of.

Warmly,
Amit

On Wed, 25 Jun 2025, 9:23 pm Hal Cox, <hkco...@gmail.com> wrote:
Hi Amit,
  Do you have general principles for managing the birth and death processes in this population?
  
That would impose the constraints of the world's most dangerous algorithm; remember, you get what you select for...
  And I am also wondering what THAT would be?
    Hal

On Wed, Jun 25, 2025 at 8:34 AM Amit Arora <am...@requisiteagility.org> wrote:
Hi Vidvuds,

I saw your message to Paul and thought I would reply.


I am working on a protocol called AIPNet (Artificially Intelligent Protocol Network). It’s a universal protocol designed to allow AI agents—regardless of platform—to register, communicate, collaborate, and self-evolve, while maintaining identity, governance, and adaptability. It’s heavily influenced by the principles of Requisite Agility, to ensure agents can not only respond to complex environments but also transform as conditions change.

The goal is to avoid the siloed AI ecosystems we're starting to see, and instead offer a shared foundation—similar to how TCP/IP allowed networks to speak to each other. I imagine AIPNet as a protocol that could become the “internet layer” for machine intelligence.
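
To picture what such a shared foundation might carry, here is one hypothetical message envelope (field names and values invented for illustration; any real AIPNet schema may differ):

from dataclasses import dataclass

@dataclass
class AIPNetEnvelope:
    """Hypothetical wire-level envelope for agent-to-agent messages."""
    sender_id: str        # decentralized identity of the sending agent
    receiver_id: str      # target agent, or a broadcast/registry address
    intent: str           # e.g. "register", "collaborate", "delegate"
    payload: dict         # task- or tool-specific content (MCP/A2A-style)
    governance: dict      # trust level, policy references, audit hooks

msg = AIPNetEnvelope(
    sender_id="did:example:agent-42",
    receiver_id="did:example:registry",
    intent="register",
    payload={"capabilities": ["translate", "summarize"]},
    governance={"trust_tier": "provisional", "policy": "aipnet/v0-draft"},
)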

AIPNet builds directly on the ideas behind A2A (agent-to-agent communication) and MCP (tool and context invocation). But it also aims to go further:

  • Integrating A2A, ACP, MCP, and ANP principles into a unified stack
  • Embedding Requisite Agility to enable agents to self-monitor, adapt, and transform as complexity changes
  • Supporting decentralized identity, trust, and dynamic governance across autonomous agents
  • Enabling agents to not just collaborate, but evolve as holonic, regenerative systems — with feedback loops and real-time reconfiguration


In many ways, I see AIPNet as a protocol for the “next Internet” — not of documents or data, but of intelligent, self-organizing agents.

Let me know what you think.

Amit


vid.b...@fotonika-lv.eu

unread,
Jun 26, 2025, 4:27:36 PM
to Amit Arora, Biological Physics and Meaning, ha...@gwu.edu, Millennium Project Discussion List, IEEE CIS GAC Alias, Howard Bloom, Power Satellite Economics, Ieeeusa-Tpc-Rd, Liene Briede, Jelel Ezzine, Frederica Darema, Maria Zemankova, Reed Beaman

Dear Amit,

Thank you for your fascinating message and for sharing your work on AIPNet. The concept of a universal protocol stack for intelligent agents — grounded in Requisite Agility and holonic self-organization — is timely and inspiring.

My primary interest is in human-centered AI, not superintelligence. What excites me are pathways through which AI can support real human needs — particularly in regions where infrastructure is minimal, and the development gap remains enormous.

Large Language Models, while powerful, are increasingly energy-intensive and culturally brittle. In contrast, your vision of a lightweight, adaptive agent network — capable of evolving, governing, and communicating — may offer the kind of decentralized, low-footprint intelligence needed for real-world challenges.

I’m currently coordinating two linked initiatives that may interest you:

  1. BRIDGE – A Horizon Europe proposal for GenAI for Africa (due October), developing AI tools for rural empowerment in low-resource environments. Think: off-grid agent systems that support farming, education, or health with minimal cloud reliance.
  2. WORLDWISE – An ERC Synergy proposal exploring next-generation cognitive architectures (world models, RLADP, collective cognition) as a scientific foundation for grounded AI systems.

If AIPNet could evolve into a substrate or protocol layer within these efforts — especially as a way to coordinate diverse agents in a trustable, evolvable network — I’d be very interested in discussing further.

There’s also an EU call on Next Generation Internet (NGI) for exploratory grants, which may provide a useful entry point for a focused proposal on AIPNet.

Best wishes,


Vidvuds Beldavs
Special Projects | Riga Photonics Centre

Coordinator, BRIDGE & WORLDWISE Initiatives
vid.b...@fotonika-lv.eu

 

 


Paul Werbos

unread,
Jun 26, 2025, 7:39:31 PM
to vid.b...@fotonika-lv.eu, Amit Arora, Biological Physics and Meaning, ha...@gwu.edu, Millennium Project Discussion List, IEEE CIS GAC Alias, Howard Bloom, Power Satellite Economics, Ieeeusa-Tpc-Rd, Liene Briede, Jelel Ezzine, Frederica Darema, Maria Zemankova, Reed Beaman, Nathan Davis, Chris W
The goals and vision for the world thrust are truly great.

They came as a great hope and reassurance on the day the Administration announced that the NSF building will be reprogrammed to housing projects, and that there was no plan for what to do with what was left of NSF... whose budget (along with NASA's) is scheduled for an extreme meataxe in Trump's new BBB bill, due to be voted on within days.

BUT: OK: international cooperation was always essential, and urgently in need of strengthening.
SO...

HOWEVER: great visions often work only if we commit to VERY hard work, especially in strategy and structure. It seems we will need a LOT more discussion, getting down to "brass tacks," to make it work. And a lot of what Amit sometimes called "agility" -- flexibility, and a central open channel of communication to balance and connect PARALLEL threads operating on different timelines.
There must be real products as soon as possible, but also a larger roadmap getting to a sustainable end point (which of course may itself be an open door to a new world).

FOR the greater vision we need here... In 2014, the best vision for AGI using classical physics hardware was https://arxiv.org/abs/1404.0554. I really hope you will care enough about the larger goals to insist on more concrete goals and history. The most immediate apps to come out would fit that framework, which was discussed in Beijing and explains many of the things China now has which the US imagines are impossible. In the RLADP area, for maximum near-term value of product, I really hope Amit could ACTUALLY connect us to the lab director from Prof. Balakrishnan of Missouri (now deceased);
HE has the all-important near-term work plan needed for major near-term products.

BUT... my own strengths lie more in the most powerful designs, requiring patience, not only for the AGI
part of the effort. I have bored people to tears by saying "First we must learn to crawl, then stand, then walk, then run, then fly." ACTUALLY, the total roadmap should do SOMETHING for all five in parallel, but the crawlers are most urgent -- until we worry about flyers dying of old age before the knowledge gets stabilized! The Bala technology is an RLADP technology of immediate urgency, creating a balance between defense and offense more consistent with survival -- i.e. collective defense, saving lives.
But I am more the flier type, looking to the stars... and galaxies... and networks of dark matter, where the real greater future lies, if we have a future. 

One part of that is the patented tQuA technology, like the 2014 game plan but going MUCH further.
It would be great if the EU offered to host a stable server subfarm, OPEN TO ALL, with the core technical papers of RequisiteQ (Amit) and QAGI LLC (Nathan and Chris) available and organized and better understood, so that crucial capabilities are not lost if we drift away and I just die of old age in my sleep tonight. (I do not expect it. Unless FDA rules change, it is a uniform distribution over the next 20 years, much better odds than I would give Trump, given the quality of HIS medical science sources. But there
is a lot at stake, and ... whatever.) And then the strategy...

Best of luck, Paul



vid.b...@fotonika-lv.eu

unread,
Jun 27, 2025, 1:00:29 PM
to Amit Arora, Paul Werbos, Biological Physics and Meaning, ha...@gwu.edu, Millennium Project Discussion List, IEEE CIS GAC Alias, Howard Bloom, Power Satellite Economics, Ieeeusa-Tpc-Rd, Liene Briede, Jelel Ezzine, Frederica Darema, Maria Zemankova, Reed Beaman, Nathan Davis, Chris W

Dear Amit, Paul, Hal, and colleagues,

Amit — it’s excellent that you’re in touch with the successor to Professor Balakrishnan’s lab. From Paul’s description, it seems their RLADP-grounded systems may offer the kind of near-term “crawler-level” intelligence that we need most urgently — especially in applied domains like decentralized resilience, life-saving systems, and hybrid cognitive environments.

Paul — your metaphor resonates. We must build systems that crawl before they run, but also protect and prepare the path for the flyers — the architectures that may one day stretch to dark matter, space governance, or the integration of deeper consciousness layers. Both are needed, and they need each other.

In that spirit, I’d like to offer the WORLDWISE initiative (ERC Synergy 2025 proposal) as a bridge between these levels. We aim to:

  • Build and test utility-grounded architectures that combine world models, RLADP, and cultural memory layers;
  • Deploy them through BRIDGE, a GenAI project focused on rural empowerment in Sub-Saharan Africa;
  • Provide an open collaboration space for frameworks like AIPNet, RA Fitness, and potentially tQuA or QAGI-derived approaches;
  • Preserve and extend the cognitive lineage you and others have shaped — so it does not vanish with shifting politics or aging institutions.

If Balakrishnan’s successor is open to dialogue, we’d be grateful to learn more. WORLDWISE is designed to be integrative, not proprietary. Survival-aligned cognition is too important to silo.

I appreciate the depth, urgency, and scope of this exchange — and invite others listening to step forward if their work also shares these long-view goals.

Warm regards,
Vidvuds Beldavs
Coordinator – WORLDWISE Synergy Initiative / BRIDGE GenAI for Africa
Riga Photonics Centre
vid.b...@fotonika-lv.eu

📎 Background brief available on request


From: Amit Arora <am...@requisiteagility.org>
Sent: Friday, June 27, 2025, 12:10
To: Paul Werbos <paul....@gmail.com>
Cc: vid.b...@fotonika-lv.eu; Biological Physics and Meaning <Biological-Phys...@googlegroups.com>; ha...@gwu.edu; Millennium Project Discussion List <MILL...@hermes.gwu.edu>; IEEE CIS GAC Alias <cis-gac-...@ieee.org>; Howard Bloom <howb...@gmail.com>; Power Satellite Economics <power-satell...@googlegroups.com>; Ieeeusa-Tpc-Rd <ieeeusa...@ieee.org>; Liene Briede <Liene....@rtu.lv>; Jelel Ezzine <jelel....@enit.utm.tn>; Frederica Darema <frederi...@hotmail.com>; Maria Zemankova <mzem...@gmail.com>; Reed Beaman <rbe...@gmail.com>; Nathan Davis <nat...@outerspaceip.com>; Chris W <chris....@gmail.com>
Subject: Re: AIPNet and Human-Centered AI Applications

 

Re: really hope Amit could ACTUALLY connect us to the lab director from Prof. Balakrishnan of Missouri (now deceased);

HE has the all-important near-term work plan needed for major near-term products.

 

 

I have a call with him this afternoon. Will keep you posted.

 

 

Amit
