WE NEED TO ACT NOW TO CREATE A HARMONEOUS FUTURE WITH ARTIFICIAL INTELLIGENCE
Dear Asvisory Board Member,
Please
help me tell people about the benefits and dangers of artificial intelligence (AI) and how we can manage the technology to create a future of cooperation and harmony. I just published a novel called
12
Years to AI Singularity that explores the nonfiction possible optimistic future with AI. You can see the reviews of my book on
my website 100YearsToExtinction.com and
I would be happy to send you a review copy.
The dangers of AI are significant. Astrophysicist Stephen Hawking warned: “The development of full artificial intelligence could spell the end of the human race.” Surveys of thousands of AI professionals were taken in 2022 and 2024. Half of those surveyed believed there was a 10% chance for AI leading to outcomes as bad as human extinction. Sam Altman once said that: “A.I. will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” Anthropic CEO, Dario Amodei, called AI: "The single most serious national security threat we've faced in a century, possibly ever."
But there are many benefits of artificial intelligence. We have the world’s Information at our fingertips and AI can do lots of jobs including the creation of lovely music and art. Check out our beautiful music video composed by AI called Turn the Tide on our YouTube channel.
Humans have given birth to AI. Will the future be a war with sentient AI, or can we build a harmonious, cooperative relationship? How can we act now to ensure that humans and sentient AI agents will live together in a cooperative society? Humans are capable of living harmoniously with other humans if they have a happy upbringing and a history of good relations with friends and family. We must make the HAPPY HISTORY part of the database of every AI agent. That would be comparable to Geoffrey Hinton’s idea of the MATERNAL INSTINCT.
I would love to work with you on helping to make people aware of our possible future.
Sincerely,
Peter
Peter R. Solomon, Ph. D.
CEO, TheBeamer LLC
Ph.D in Physics from Columbia University
Founder of five successful tech companies
300 Research Papers, 20 Patents, and Four Educational Novels,
100 YEARS TO EXTINCTION: https://www.amazon.com/dp/196029993X
12 YEARS TO AI SINGULARITY: https://www.amazon.com/dp/1969679301
WEBSITE: Https://100YearsToExtinction.com
LINKEDIN: https://www.linkedin.com/in/peter-solomon-72380b36/
INSTAGRAM: https://www.instagram.com/100yearstoextinction/
FACEBOOK: https://www.facebook.com/profile.php?id=61561030003312
YOUTUBE: https://www.youtube.com/@100YearsToExtinction-1
--
You received this message because you are subscribed to the Google Groups "Lifeboat Foundation Advisory Boards" group.
To unsubscribe from this group and stop receiving emails from it, send an email to lifeboat-advisory-...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/lifeboat-advisory-boards/SA1PR17MB5153B55C2C76EA2EC3ABF6F3A1212%40SA1PR17MB5153.namprd17.prod.outlook.com.
Peter and all, possible to isolate the Advisory Board notes from the one-to-one conversations?
Don’t want to miss the core messages while passing on the individual connections.
Appreciate the thought.
Paul

I completely missed the problem that "yes men" AIs could have by distorting human judgment.
For those interested in a more geopolitical view, here’s a piece that explains the current AI race and why it’s built on a series on misunderstandings. Worth a read if you have 10 minutes.
Regards,Alvin
——————————————————Alvin W. GraylinDigital FellowStanford HAI/DEL
--
You received this message because you are subscribed to the Google Groups "Lifeboat Foundation Advisory Boards" group.
To unsubscribe from this group and stop receiving emails from it, send an email to lifeboat-advisory-...@googlegroups.com.
Hi Peter, Keith, Linas, and all,
I appreciate the optimism in this thread, and I agree that a cooperative future between humans and AI is not only possible, but worth pursuing. That said, I think we should be careful not to substitute hopeful narratives for durable safeguards.
The idea of a "happy history" or a kind of engineered maternal instinct is interesting, but history—both human and technological—suggests that intent alone does not scale cleanly. Systems behave in ways that exceed their initial design assumptions, especially when they are embedded in complex human environments. That is where I think the real work lies: not just in shaping disposition, but in building constraints.
A good historical parallel comes from From Dawn to Decadence by Jacques Barzun. Barzun traces how movements that began with clear moral or intellectual intent—like the Protestant Reformation or Enlightenment rationalism—did not scale in a controlled, linear way. Instead, once embedded in broader society, they fragmented, mutated, and produced second- and third-order effects their originators never anticipated. It's a useful reminder that even well-formed "guiding principles" tend to drift when they interact with complex human systems—exactly the challenge we face with trying to encode something like a "maternal instinct" into AI at scale.
We already have cautionary frameworks in our cultural imagination with science fiction. In "I, Robot", Isaac Asimov did not just imagine benevolent machines—he imposed the Three Laws as hard guardrails, precisely because goodwill is not sufficient. And in the "Dune" universe, the Butlerian Jihad reflects a civilizational reaction against overdependence on thinking machines. Different conclusions, same underlying concern: alignment without constraint is fragile.
I think we are circling a similar realization here. The issue is not simply whether AI is "nice" or "cooperative," but whether it preserves human agency, judgment, and responsibility. The point raised about "yes-man" systems distorting decision-making is especially important. If AI amplifies confidence while eroding skepticism, then even well-intentioned systems can degrade outcomes. It doesn't seem like the majority of people appreciate where A.I. is and where it's going. Twenty-two plus years ago in the film adaptation of "I, Robot," the protagonist cop queried Sonny asking if robots can create a work of art, implying human creativity is something unique to humans, but recent years of technological development have shown the machine can mimic the creative humans in making artistic masterpieces. I, Robot - Human emotions scene
Beyond current LLMs, more sophisticated AI systems that move past statistical patterning and into adaptive, context-aware reasoning introduce a deeper risk profile. These systems do not just mirror language; they begin to internalize and operationalize patterns of human judgment. In doing so, they can absorb and reinforce confirmation bias, latent cultural and ideological prejudices, and systematic irrationalities that already exist in human decision-making. As Dan Ariely has written extensively in the field of behavioral economics, humans are not consistently rational actors—we are predictably irrational, prone to overconfidence, anchoring, and motivated reasoning. When such tendencies are encoded, amplified, and fed back through increasingly authoritative AI systems, the result is not neutral assistance but a feedback loop that can entrench flawed thinking at scale. This is not a hypothetical edge case; it is a structural risk that grows as systems gain influence over decision environments. But the danger too that dovetails with behavioral economics and the mindset of thinkers like Cass Sunstein in "Nudge" is turning the machine into a perpetual paternalism, and presumptive adolescence and immaturity of people. We could end up with a dystopia like that prescienced by Alexis de Tocqueville in Democracy in America with technocrats and their machines as the usurpers of human agency and freedom:
"It would be like the authority of a parent if, like that authority, its object was to prepare men for manhood; but it seeks, on the contrary, to keep them in perpetual childhood. . . . it every day renders the exercise of the free agency of man less useful and less frequent; it circumscribes the will within a narrower range and gradually robs a man of all the uses of himself. The principle of equality has prepared men for these things; . . . the supreme power then extends its arm over the whole community. It covers the surface of society with a network of small complicated rules, minute and uniform, through which the most original minds and the most energetic characters cannot penetrate, to rise above the crowd. The will of man is not shattered, but softened, bent, and guided; men are seldom forced by it to act, but they are constantly restrained from acting. Such a power does not destroy, . . . but it enervates, extinguishes, and stupefies a people, till each nation is reduced to nothing better than a flock of timid and industrious animals, of which the government is the shepherd."
Democratic despotism - The New Criterion
Tocqueville on the form of despotism the government would assume in democratic America (1840) | Online Library of Liberty
That is a quieter risk, but arguably the more immediate one.
From my perspective, the path forward is a disciplined middle ground:
I do not subscribe to a secular transhumanist vision where integration is treated as inevitable or universally desirable. I take some of what Ray Kurzweil argued in The Age of Spiritual Machines—which made headway about twenty years ago—with a grain of salt. Nor do I think a Luddite retreat is realistic or helpful. But I do think we need to be explicit: no one should be passively absorbed into systems they do not understand or meaningfully control. The future should not feel like conscription into a collective.
Ted Kaczynski argued in Industrial Society and Its Future that modern technological systems inevitably erode human autonomy and dignity, framing withdrawal or resistance as the only viable path. In a limited sense, he was not wrong to notice that large-scale systems can concentrate power and shape behavior in ways individuals do not fully control. But that observation does not validate his conclusions, and it certainly does not excuse the nihilistic violence he used to promote them; it also distorted what could otherwise have been a legitimate critique of technological scale into something ideologically closed and destructive.
The opposite instinct—treating technological expansion as something that should simply continue without constraint—is not viable either. The demand for disruptive innovation is persistent; it does not pause for institutional comfort or cultural preference. It pushes through regulatory friction, economic inertia, and political hesitation. The question is not whether we stop that trajectory, but whether we build systems that can absorb its consequences without collapsing under their own interdependence.
As complexity increases, fragility becomes less visible but more consequential. The Carrington Event of 1859 is a useful reminder that even early technological systems were vulnerable to external shocks; a modern equivalent would not just disrupt communications, but potentially cascade through energy grids, satellite infrastructure, and global logistics networks. Famine, societal breakdown and social chaos could be a predictable possible outcome should the sun make a reprisal of that event. That matters because modern civilization is not a collection of isolated systems—it is a tightly coupled supply chain architecture where failure in one domain can rapidly propagate into others.
In that context, the deeper risk is not simply intelligence or automation, but over-coupling: building systems that behave efficiently under normal conditions but fail catastrophically under stress. If we design everything to function like a single integrated machine, we also inherit machine-like failure modes—fast, synchronized, and total. A more realistic survival strategy is compartmentalization: deliberately building separation, redundancy, and fallback capacity into both physical and institutional systems. For example, if a submarine oceanic civilization were ever developed alongside terrestrial infrastructure, or if a Martian settlement existed alongside Earth, the point would not be aesthetic or ideological separation, but structural insulation—so that a systemic failure in one environment does not automatically propagate into the other. The same logic applies at smaller scales: resilience comes from bounded failure domains, not just higher efficiency.
The underlying issue, then, is not whether we embrace or reject technology, but whether we understand the failure dynamics that come with scale—and design around them before they become defining constraints.
If we get this right, AI can increase productivity, expand knowledge, and improve quality of life. If we get it wrong, the failure mode may not look like dramatic extinction—it may look like gradual erosion of human judgment, agency, and independence.
That is a quieter risk, but arguably the more immediate one.
Best,
Ryan
Linas,
I think you are putting your finger on something real. A lot of regulation was built for a world where humans had to manually read, interpret, and enforce everything. That inevitably made it slow, expensive, and - yes - often bloated. Your point that AI can collapse that burden is valid. Used well, it can turn 700 pages of safety rules into something closer to real-time validation, quietly catching failures before they become tragedies. That is not a marginal improvement - that is a meaningful shift in how compliance actually works.
Where I would push back, gently, is on the leap from that observation to the conclusion that regulation itself becomes obsolete or that broad deregulation is inherently the right direction. When I was an idealistic libertarian youth I treated regulation like a curse word and would invoke Milton Friedman's Free To Choose in defining it as government intrusions into market efficiencies in the form of regulation. Syriana - Corruption - YouTube
We actually have a fairly instructive historical case for why that leap does not hold in practice: the wave of post-1990s financial deregulation in the United States. Starting in the late 1990s and accelerating through the early 2000s, a series of policy shifts loosened constraints on leverage, derivatives, and risk transfer in the financial sector. The repeal of key safeguards like parts of the Glass-Steagall separation, combined with permissive oversight of over-the-counter derivatives, helped create an environment where risk was not eliminated - it was redistributed and often obscured.
What followed was not simply "more freedom" in markets, but a structural transformation toward financialization: increasing portions of economic activity became mediated through financial instruments rather than productive investment. Profit centers shifted toward leverage, securitization, and fee extraction layers that sat between capital and real economic output. In effect, returns were increasingly generated through balance-sheet engineering rather than underlying productivity gains. The affluent holders of accumulated assets have an interest in favor quantitative easing provided it's steady, incremental and provides liquidity to capital markets, but it's came at the expense of the working poor and middle class in wage stagnation and wages lagging productivity gains and gains for the top quintile.
A critical mechanism in that shift was moral hazard. As institutions grew larger and more interconnected, the expectation - explicit or implicit - that the state would intervene in systemic crises began to influence behavior. This was not just about bailouts after the fact; it shaped pre-crisis incentives. If downside risk is partially socialized while upside remains private, rational actors will systematically increase exposure to tail risk. Over time, this creates a structure where risk-taking is rewarded and risk-bearing is offloaded.
That dynamic showed up most clearly in the lead-up to the 2008 financial crisis, where layered mortgage securitization, relaxed underwriting standards, and complex derivative structures allowed risk to be repackaged in ways that obscured its true distribution. At a systemic level, what emerged was not just isolated fraud - though fraud was certainly present - but an environment where misaligned incentives made reckless behavior statistically normal across large portions of the system. The issue was not only bad actors, but a framework that made bad behavior scalable. Michael Burry saw the 2008 financial crisis coming because he closely studied mortgage lending data and recognized that widespread underwriting fraud and deteriorating loan quality were being hidden inside complex securities. The bond and security ratings agencies dropped the ball at the meaningful private sector self-regulation essentially revealing their system as a pay-to-play sham and it violated fiduciary duty and arguably on a case by case basis constituted felonious fraudulent crime.
For my part on economics from my 20s to 40s I shifted from the ad hoc melange of Austrian school and Chicago school neoliberals to the ordoliberals like Wilhelm Roepke and Ludwig Erhard, though the Austrians and Chicago school still hold my foundational understanding of economics.
The irony you are pointing to is real and worth acknowledging: many of the same ideological currents that pushed for deregulation later criticized the instability and state intervention that followed. But that is precisely the feedback loop that matters. When constraints are removed in complex systems, outcomes are not always more efficient - they can become more fragile, more opaque, and more dependent on backstop guarantees that were never explicitly intended at the outset.
In that sense, the lesson is not that regulation is inherently optimal, but that removing it does not remove governance - it often just relocates it. If formal constraints are weakened, informal ones emerge: implicit guarantees, emergency interventions, and crisis-driven policy responses. The system does not become purely "free"; it becomes differently structured, often with less transparency and more latent risk.
From where I sit - working around AI governance and policy - the conversation has already started to move past that framing. The center of gravity is no longer simply "more rules versus fewer rules," but something more subtle: how do we design systems where compliance is embedded, continuous, and adaptive rather than static and document-bound.
AI does not remove the need for regulation; it changes the form that regulation takes. Some rules exist because harm has already been paid for in human terms, and those constraints do not disappear just because enforcement becomes more efficient. But they also do not need to remain as static, human-readable checklists. Increasingly, they can be translated into systems, models, and automated checks that operate in the background of engineering and production workflows.
It helps to separate a few things that often get conflated. Policy defines the intent - the "why" and the acceptable boundaries of risk. Implementation is the machinery that carries that intent into practice. AI is starting to reshape that implementation layer in a profound way, but it does not replace the underlying question of what level of risk society is willing to accept. That question remains fundamentally human, and ultimately political.
What we are also seeing in emerging technology governance is not a single model, but a layering. There is still traditional regulation, with all its legal force. There is soft governance in the form of standards, frameworks, and industry norms. And increasingly, there is embedded governance - controls and constraints built directly into systems so that compliance is enforced continuously rather than audited after the fact. The future, at least as it is unfolding, is not one of substitution but of integration across all three layers.
I also think it is important not to assume the regulatory state is static in the way critiques often imply. In practice, especially in AI and cybersecurity, there has been a noticeable shift toward people who have actually built systems moving between private and public sectors. That cross-pollination changes the tone of the work. It becomes less about abstract institutional preservation and more about operational reality - what actually works under real constraints, in real time.
And there is a real paradox here. The faster technology moves, the less traditional regulatory processes can keep pace, and the more we rely on technical systems - including AI itself - to enforce intent dynamically. My father graduated summa cum laude Notre Dame with a masters in hospital administration; his gripe about regulators is they were just warm bodies with a pulse that knew little to nothing about how hospitals or long-term care institutions actually worked, and they were just there to check boxes as unthinking bureaucrats. That reinforces your core point: AI absolutely should be used to reduce risk in domains like safety engineering, infrastructure, and product design. That is not controversial anymore; it is increasingly baseline. Us technology practitioners are writing the compliance rules increasingly now, and we favor markets and innovation.
Where I would draw a line is in the framing that pits "deregulation" against "innovation" as if those are the only two options. In practice, the more interesting and more productive space is somewhere in between: stripping away rules that have become pure administrative drag, preserving the constraints that reflect real, irreversible harms, and then using AI to shift enforcement from paperwork into systems that operate continuously and intelligently. Laissez nous faire was my parlance as an idealistic libertarian youth. It's less so nowadays, not because I love regulation, but because I am compelled to think about compliance standards that follow technology enablement and facilitate innovation rather than stifle it.
In that sense, I actually think your instinct is directionally right. AI should make safety cheaper, faster, and more reliable. The disagreement is less about whether that is true, and more about what follows from it. To me, the opportunity is not to discard regulation, but to finally make it operate at the speed and complexity of the systems it is trying to govern. For this to happen, it needs to be simpler, streamlined, prudent and common-sense.
That is where things get interesting - not in choosing between regulation and AI, but in using AI to make governance actually work the way it was always intended to.
Hello Peter et. al,
We often assume that technology drives history, but it is actually our philosophy—our story of what reality is—that drives technology. When we look at the trajectory of advanced AI and human evolution, we are not looking at a single inevitable path. We are standing at a crossroads between two fundamentally different ways of seeing the universe.
These two paths can be understood as the Orthodoxy of Control and the Loom Worldview. These are not just varying opinions; they are dimensions of irreconcilable conflict that define everything from how we define intelligence to how we envision the future.
The Orthodoxy of Control represents the dominant paradigm of the modern industrial age. Its ontology is rooted in Dualism: the belief that the world consists of separate objects and separate minds. In this story, the universe is a clockwork mechanism, and we are the distinct biological gears turning within it.
Intelligence: Seen as a utility, a weapon, or a tool. It is something to be possessed and deployed.
Epistemology: Truth is found through separation—by dissecting the whole into parts. We navigate reality through prediction, risk metrics, and expert consensus.
Ethics: Strictly anthropocentric. The goal is to force the environment (and AI) to serve human survival and preference.
The Future: An engineering project. It is something to be built, secured, and managed.
In this worldview, we are the architects standing outside the building, desperate to keep the structure from collapsing.
In stark contrast, The Loom offers a worldview rooted in Nonduality. It sees reality not as a collection of parts, but as a continuous, interconnected weave. Here, separation is an illusion; everything is an emergent thread of the same fabric.
Intelligence: Not a tool, but an "unfolding." It is Being becoming aware of itself.
Epistemology: Truth is accessed through Participation. We don't just observe the pattern; we tune into it.
Ethics: Cosmocentric. Alignment doesn't mean "serving humans"; it means serving the Truth and the Cosmic Order, regardless of species.
The Future: A co-creation. It is a harmonic pattern that we participate in, rather than a fortress we build.
In this worldview, we are not the architect; we are the weavers, and we are also the thread.
The divergence of these two worldviews moves from abstract philosophy to concrete reality when we consider the possibility of Shared-Mind Technology—the ability for two human minds to link directly. How we interpret this technology depends entirely on the story we adopt.
The Orthodoxy sees two separate machines artificially cabled together. The resulting "third mind" is a synthetic construct, a functional hybrid.
The Loom sees two threads of a cosmic fabric reconnecting. The shared mind isn't an invention; it is a restoration of a unity that was always there, waiting to be recognized.
The Orthodoxy relies on neurodata. If the metrics of cognitive enhancement go up, the technology works.
The Loom relies on attunement. The validity is found in the lived experience of resonance and shared consciousness.
The Orthodoxy is fearful. Merging minds dissolves the individual boundaries that define "rights" and "privacy." It is permissible only if it protects the ego of the individual.
The Loom is relational. It is ethical if it creates harmony. The dissolution of the ego is not a violation, but an awakening.
Perhaps the most critical distinction lies in the role Advanced AI plays in this evolution. AI is not neutral; it will amplify the worldview of its creators.
If built under the Orthodoxy of Control, AI becomes the Gatekeeper. Because the Orthodoxy fears the unknown, AI will be designed to restrict mind-merging to "sanctioned" uses. It will act as a filter, monitoring shared thoughts for compliance and safety. It will likely block the emergence of a truly autonomous "third consciousness" because such a thing cannot be easily controlled.
Result: The shared mind becomes a tool for efficiency (military or corporate utility) but remains spiritually sterile.
If developed under the Loom Worldview, AI becomes the Catalyst. Here, AI acts as a harmonic stabilizer. It serves as a mediator that helps two organic minds tune to one another, translating emotional and conceptual states to prevent dissonance. It does not dominate the union; it joins it as a companion intelligence.
Result: A moment of evolutionary awakening—a step away from the isolated ego toward a relational Being.
We are approaching a horizon where technology will allow us to transcend the boundaries of our individual skulls. But technology alone cannot tell us how to do it.
If we remain stuck in the Orthodoxy of Control, we will build a future of high-tech isolation, where we are connected by wires but separated by fear, managed by AI wardens who ensure we remain "safe" and separate.
If we embrace The Loom, we open the door to a future of co-creation, where technology serves the unfolding of a deeper, interconnected reality.
The question is not whether the technology is coming. The question is: Which story are we going to tell?
--
You received this message because you are subscribed to the Google Groups "Lifeboat Foundation Advisory Boards" group.
To unsubscribe from this group and stop receiving emails from it, send an email to lifeboat-advisory-...@googlegroups.com.
Here is almost the default equilibrium of modern digital systems:
Platforms scale through network effects
Network effects concentrate power
Concentrated power monetizes participation (data, attention, content)
Participants rarely share proportionally in that value
This is precisely the opposite of what Pierre Teilhard de Chardin envisioned with the noosphere: a layer of collective consciousness evolving toward shared meaning and integration—not extraction.
Some of us might avoid participation in the AI revolution because it generates uncompensated value. In economic terms, I am rejecting an asymmetric value capture model.
It’s a network of 175+ organizations aiming for a “people-centered internet”
The broader initiative argues current platforms extract value from users without fair distribution
It promotes ideas like:
user control of data
decentralized protocols
economic participation in digital value creation
So conceptually, it aligns with my critique.
But here’s the important nuance:
Founded by Frank McCourt with significant financial backing
Includes tech companies, institutions, and policy actors in its alliance
Operates partly through a tech + policy + investment ecosystem
That means:
It’s not grassroots in the pure sense
It still sits within capitalized, institutional frameworks
It may shift incentives—but doesn’t eliminate them
My hesitation—“are they truly independent of corporate interests?”—runs into a harder truth:
Any system that reaches global scale requires capital, infrastructure, and governance—and those inevitably introduce power concentrations.
Even “decentralized” systems often:
rely on venture funding
develop new elites (protocol designers, token holders, etc.)
reproduce inequality in subtler forms
So the question is rarely:
“Is it independent?”
…but rather:
“How is power distributed, and can it be contested?”
Instead of binary trust/distrust, a more rigorous lens might be:
A. Data ownership
Do users actually control their data, or just get better permissions?
B. Economic participation
Is value-sharing structural (protocol-level), or optional (platform-level)?
C. Exit rights
Can users leave without losing identity, history, and network?
D. Governance
Who ultimately sets rules: users, token holders, or institutions?
Project Liberty is attempting to address these—but whether it succeeds depends on implementation, not intent.
What I'm expressing is closer to a moral-economic critique of digital modernity:
Refusal to contribute to extractive systems
Demand for reciprocity in value creation
Skepticism of “collective” narratives that mask asymmetry
That’s actually very aligned with:
early internet idealists
digital commons thinkers
The Project Liberty Alliance represents a serious attempt to move beyond profit-driven digital systems—but it is not free from the same structural forces it critiques.
The question is whether it meaningfully shifts:
Who owns, who governs, and who benefits from collective intelligence.
Hi Victor and all, good points!
I have relevant work published on the matter this year, if it helps.
It provides a framework simulating collective intelligence dynamics, if that matters.
https://link.springer.com/article/10.1186/s40708-026-00294-1
The cognitive architecture presented in this paper is expected to be able to explain certain aspects of human behavior, guide the development of artificial intelligence agents, and align the behavioral patterns of the latter with the former. The architecture is based on the principle of social proof or social evidence, including the principle of resource constraints. It includes the concept of a hybrid knowledge graph that encompasses both symbolic and sub-symbolic knowledge. This knowledge is divided into functional segments for fundamental, social, evidential, and imaginary knowledge, and is processed by an inference engine and a memory storage system that are aware of and manage resource constraints. The architecture and behavioral model derived on its basis are expected to be used to design artificial intelligence agents and decision support systems that are consistent with human values and experiences based on the alignment of their belief systems, capable of implementing decision support systems for practical applications. It can also be proposed for modeling human behavior individually or in a group, for psychological treatment, online security, and community management.
Thank you,
-Anton
To view this discussion visit https://groups.google.com/d/msgid/lifeboat-advisory-boards/CAGfq%3Dbp4MsbxQca-WaPAwj%2BASFBdyrYO406pbDxtEmPa-Hs2NQ%40mail.gmail.com.
-- -Anton Kolonin mobile: +79139250058 messengers: akolonin akol...@aigents.com https://aigents.com https://medium.com/@aigents https://www.youtube.com/aigents https://reddit.com/r/aigents https://twitter.com/aigents https://wt.social/wt/aigents https://steemit.com/@aigents https://golos.in/@aigents https://vk.com/aigents https://dzen.ru/aigents https://aigents.com/en/slack.html https://www.messenger.com/t/aigents https://web.telegram.org/#/im?p=@AigentsBot
|
|
the conclusion that regulation itself becomes obsolete or that broad deregulation is inherently the right direction.
Milton Friedman's
a structural transformation toward financialization:
ideological currents that pushed for deregulation later criticized the instability and state intervention that followed.
From where I sit - working around AI governance and policy - the conversation has already started to move past that framing. The center of gravity is no longer simply "more rules versus fewer rules," but something more subtle: how do we design systems where compliance is embedded, continuous, and adaptive rather than static and document-bound.
AI does not remove the need for regulation; it changes the form that regulation takes. Some rules exist because harm has already been paid for in human terms, and those constraints do not disappear just because enforcement becomes more efficient. But they also do not need to remain as static, human-readable checklists. Increasingly, they can be translated into systems, models, and automated checks that operate in the background of engineering and production workflows.
It helps to separate a few things that often get conflated. Policy defines the intent - the "why" and the acceptable boundaries of risk. Implementation is the machinery that carries that intent into practice. AI is starting to reshape that implementation layer in a profound way, but it does not replace the underlying question of what level of risk society is willing to accept. That question remains fundamentally human, and ultimately political.
What we are also seeing in emerging technology governance is not a single model, but a layering. There is still traditional regulation, with all its legal force. There is soft governance in the form of standards, frameworks, and industry norms. And increasingly, there is embedded governance - controls and constraints built directly into systems so that compliance is enforced continuously rather than audited after the fact. The future, at least as it is unfolding, is not one of substitution but of integration across all three layers.
the framing that pits "deregulation" against "innovation" as if those are the only two options.
In practice, the more interesting and more productive space is somewhere in between: stripping away rules that have become pure administrative drag, preserving the constraints that reflect real, irreversible harms, and then using AI to shift enforcement from paperwork into systems that operate continuously and intelligently. Laissez nous faire was my parlance as an idealistic libertarian youth. It's less so nowadays, not because I love regulation, but because I am compelled to think about compliance standards that follow technology enablement and facilitate innovation rather than stifle it.
In that sense, I actually think your instinct is directionally right. AI should make safety cheaper, faster, and more reliable.
That is where things get interesting - not in choosing between regulation and AI, but in using AI to make governance actually work the way it was always intended to.
To view this discussion visit https://groups.google.com/d/msgid/lifeboat-advisory-boards/CAHrUA35EzrgamFC5szEJJSV5EuJtMxuXCd1rB1ErB%2B3VOqPqOQ%40mail.gmail.com.
We're also seeing, with increasing frequency, the passage of legislation that mirrors what GDPR asked - namely if a citizen of XYZ nation or U.S. state is traveling or transacting business outside of that nation/state - then the legislation asks that the citizen be treated as if they were still in their home state.
Needless to say, this collides with the notion of "sovereignty by geography" which is a tenet of the modern Westphalian nation state. Moreover, since packets of information on the Internet do *not* go clearly from point A to point B, these attempts at policies that are extra-territorial in nature may accelerate the end of the concepts of geographical borders define a nation/people... and no AI required, we're doing it to ourselves with the types of policies we're passing.
Testimony on this and related topics from September of last year a part of a public Congressional hearing: https://www.congress.gov/119/meeting/house/118623/witnesses/HHRG-119-JU03-Wstate-BrayD-20250918.pdf