Re: Will superintelligent AIs agree with humans about ethics and morality?


Paul Werbos

Jul 19, 2024, 9:22:16 AM
to David Wood, MILL...@hermes.gwu.edu, david brin, Power Satellite Economics, Lifeboat Foundation Administration, Biological Physics and Meaning
WHEN we discuss the Fermi paradox, I highly recommend that we all know about David Brin's novel Existence
(which has lots of in-depth discussion and footnotes), and a recent discussion of some new findings from NASA:
https://pswscience.org/meeting/2471/. I also bcc Avi Loeb, because he has led Breakthrough Institute work, which is one of the primary sources of real information on these issues.

===================

There are many issues where humans waste precious time and energy fighting over questions for which we do not know the answers.
In my view, the real challenge is to FIND OUT. That is part of "grand challenge number seven" on my short list, attached. 

The best photos we now have from gravitational lensing suggest to many of us that it is HIGHLY unlikely humans are as special or alone as they think.
We may be like those Pacific island chieftains who promulgated rules for the world... just a few years before massive new movements of civilizations they did not know about totally changed their lives. Maybe, maybe not... but many of us believe that intelligence elsewhere, higher than human, is a very big part of our future, to be ready for, "just in case."

But to FIND OUT -- there is new technology for "seeing the sky" orders of magnitude better than we can today, both by better use of data which people like Avi Loeb already know how to organize, using new quantum AGI technology, and by extending what we see from electromagnetism alone to the detection of nuclear signatures.
(I do hope that the recent press releases from Boston College on axial Higgs detection can be believed, but in any case new nuclear detection tools can have many uses).
To be honest, I do wish Israel (and everyone) had had access to such observation and detection technology before the Houthis were able to kill people today in Tel Aviv.


On Fri, Jul 19, 2024 at 8:54 AM David Wood <dav...@deltawisdom.com> wrote:
Hi Victor

I agree that the Great Filter arguments are important.

However, I don't think we have a sufficient grasp of the various "improbable leaps" between non-life and space-faring intelligent life, to be able to draw more than the vaguest conclusion about the probability of AI-inflicted destruction.

For example, the jump from prokaryotic cells to eukaryotic cells may have been enormously unlikely. That's one of the arguments developed in the remarkable book Power, Sex, Suicide: Mitochondria and the Meaning of Life by Nick Lane.

Disclaimers:
(1) I'm still only 60% of the way through listening to the book, and I might change my evaluation of it in the later stages
(2) The core material in that book is nearly 20 years old, and Nick Lane has revisited some of the same themes in his more recent books - which are now all on my "hope to read asap" list.

Despite these disclaimers, I am inclined to recommend Nick Lane's book(s) to anyone on this list with a curiosity about the fundamental questions of biology.

// David W.

PS There are of course other possible solutions to the Fermi Paradox. Anders Sandberg explores some of these in this podcast episode.

On Fri, 19 Jul 2024 at 13:31, Victor V. Motti <vahidv...@gmail.com> wrote:


It is more than possibilities, David, because we can even discuss probabilities.

As you well know, we can also use the Great Filter hypothesis, which was developed to resolve the Fermi Paradox, to argue that the probability of a self-destruction scenario caused by advanced AI is significantly high, given the absence of aliens around us.
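To make the shape of that argument concrete, here is a rough numerical sketch in Python. It is my illustration of the Great Filter logic, not Victor's own calculation, and every number in it is an assumption chosen only to show the structure:

import math  # not strictly needed; kept for readers who want to extend the sketch

habitable_systems = 1e11   # candidate star systems in the galaxy (assumed)

# Assumed probabilities for the "early" filter steps:
p_life = 0.1               # non-life -> life (assumed)
p_complex = 0.01           # simple -> complex, eukaryote-like cells (assumed)
p_intelligence = 0.1       # complex life -> technological civilization (assumed)

expected_pre_filter = habitable_systems * p_life * p_complex * p_intelligence

# Observing roughly zero spacefaring civilizations means the remaining factor,
# p_survive (a civilization surviving its own technology, e.g. advanced AI),
# must satisfy: expected_pre_filter * p_survive << 1, hence:
bound = 1.0 / expected_pre_filter
print(f"p_survive must be well below {bound:.0e}")   # about 1e-07 with these numbers

The force of the conclusion depends entirely on how easy the early steps are assumed to be, which is exactly where the "improbable leaps" objection elsewhere in this thread bites: make p_complex small enough and the bound on p_survive relaxes enormously.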

Read more here:


Best,
Victor 


On Thu, Jul 18, 2024, 20:55 David Wood <dav...@deltawisdom.com> wrote:
Riel - I don't follow.

Are you saying we should all stop worrying about possibilities such as devastating climate change, a break-out of nuclear war, or a global catastrophe caused by badly designed AI?

Are these possibilities what you mean by "eschatological imaginaries"?

How does the idea of humans being 'gods' come into this conversation?

I don't know about you, but I care a great deal about possible futures which are very much better than the present day, as well as about possible futures which are very much worse than the present day.

Perhaps I'm in the wrong mailing group?

// David W.

On Fri, 19 Jul 2024 at 01:35, Riel Miller <riel....@gmail.com> wrote:
Hey folks,
For what it’s worth, which in this thread is probably not much, I think that screaming pre-emptive, presumptive guardian-of-continuity rationalisations is not just dangerous but an invitation to non-resilience. This thread channels the Olympian take on Prometheus. Which isn’t to deny the dangers of fire, just that we’re not ‘gods’. So finding the balance matters. Can we dump the eschatological imaginaries and stop promulgating apocalyptic rationalisations for the logic of terror?
Just a thought. Riel  
Sent from my iPhone

On 18 Jul 2024, at 16:17, David Wood <dav...@deltawisdom.com> wrote:


I think "the integral definition of intelligence/ethics" is incoherent.

What version of ethics is presupposed?

Is it an ethics that is OK with inaction in the face of humans dying?

Is it an ethics that is OK with inaction in the face of sentient beings dying?

Is it an ethics that is OK with inaction in the face of advanced AIs being switched off?

The whole point of my recent Mindplex article is to highlight that human presuppositions about the alleged objectivity or realism of "universal values" are seriously questionable.

That is, an ASI may well reach a different conclusion about ethics than a council of wise humans.

Therefore, relying on any forecast of the ethical benevolence of ASIs is a deeply dangerous attitude toward the future.

// David W.

On Thu, 18 Jul 2024 at 19:10, Victor V. Motti <vahidv...@gmail.com> wrote:

"They can recommend actions", on what basis, a randomized rule, or a value/ethic code basis, or what? 

Ethics and virtues could themselves be a data source and subject to a learning cycle.

I feel that, to be complete in the sense of including all imaginable scenarios, the one based on the integral definition of intelligence/ethics deserves its own place.


Best Regards,

Victor 

On Thu, Jul 18, 2024 at 1:32 PM David Wood <dav...@deltawisdom.com> wrote:
This conversation reminds me of a point I made in this article. Here's the relevant extract (from the section "Governance failure modes"):

Misled by semantics

Another stepping stone toward the end of humanity was a set of consistent mistakes in conceptual analysis.

Who would have guessed it? Humanity was destroyed because of bad philosophy.

The first mistake was in being too prescriptive about the term ‘AI’. “There’s no need to worry”, muddle-headed would-be philosophers declared. “I know what AI is, and the system that’s causing problems in such-and-such incidents isn’t AI.”

Was that declaration really supposed to reassure people? The risk wasn’t “a possible future harm generated by a system matching a particular precise definition of AI”. It was “a possible future harm generated by a system that includes features popularly called AI”.



The (unusual) claim that "intelligence requires ethical character" is tangential to the main risks posed by the systems that are commonly called AIs.

These systems have "intelligence" in the simpler sense that:

1) They can observe data
2) They can predict what is likely to happen next
3) They can recommend actions so that different outcomes happen
4) They can improve their abilities by reflecting on failures of their predictions and their interventions.

AIs already have these properties to various extents, and forthcoming new systems will have them to greater extents. The more powerful these systems become, the greater the harm they can cause humans.
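For concreteness, the four capabilities above can be read as a plain feedback loop. The following is a minimal, self-contained Python sketch of that loop's structure; it is my illustration, not anything from David's article, and all names and numbers in it are invented:

import random

class ToyAgent:
    def __init__(self, learning_rate=0.3):
        self.estimate = 0.0              # internal model: a single learned number
        self.learning_rate = learning_rate

    def observe(self, signal):
        return signal                    # 1) observe data

    def predict(self):
        return self.estimate             # 2) predict what is likely to happen next

    def recommend(self, prediction, target):
        return target - prediction       # 3) recommend an action toward a different outcome

    def improve(self, prediction, observation):
        error = observation - prediction
        self.estimate += self.learning_rate * error   # 4) learn from prediction failures

if __name__ == "__main__":
    agent, target, signal = ToyAgent(), 10.0, 0.0
    for _ in range(30):
        obs = agent.observe(signal)
        pred = agent.predict()
        action = agent.recommend(pred, target)
        agent.improve(pred, obs)
        # the "world": the action shifts the signal, plus noise
        signal = obs + 0.5 * action + random.uniform(-0.5, 0.5)
    print(f"final signal: {signal:.2f} (target {target})")

Nothing here presupposes ethical character; the loop just observes, predicts, acts, and updates, which is the point being made about the simpler sense of "intelligence".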

// David W.

On Thu, 18 Jul 2024 at 16:37, Victor V. Motti <vahidv...@gmail.com> wrote:


Dear Jerry,

I get your point, but in those schools of thought that I mentioned, the notion or definition of intelligence and ethical character is integral, and you cannot separate them from each other. To put it another way: if you are unethical, you are unintelligent.


Best Regards,

Victor

On Thu, Jul 18, 2024 at 11:14 AM Jerome Glenn <jgl...@igc.org> wrote:

Hitlerian Germany counters this assumption: one could argue that Hitler was quite intelligent but also quite unethical, and one could identify other such leaders.

 

Jerry

 

 

From: Millennium Project Discussion List <MILL...@HERMES.GWU.EDU> On Behalf Of Victor V. Motti
Sent: Thursday, July 18, 2024 10:56 AM
To: MILL...@HERMES.GWU.EDU
Subject: Re: Will superintelligent AIs agree with humans about ethics and morality?

 

 

David,

 

If we adhere to the philosophical notion that greater intelligence equates to greater ethical responsibility and pursuit of virtue, we arrive at a provocative yet compelling scenario: https://altplanetaryfuturesinst.blogspot.com/2024/07/intelligence-ethics-and-role-of.html

 

 

Best Regards,

 

Victor

 

On Wed, Jul 17, 2024 at 3:45 AM David Wood <dav...@deltawisdom.com> wrote:

Victor,

 

I don't see how such a rule would change the outcome of this scenario.

 

If Asimov (the main ASI in the scenario) observes that humans are about to destroy it, or may take other actions that would lead to it being overpowered, that rule isn't going to stop Asimov from taking control away from humanity.

 

[Image: Human scientists are about to switch off a superintelligent robot]

 

Nor will that rule mean Asimov is bound to recognise humans as being part of its moral in-group. Asimov may well regard humans as "other".

 

Separately, the article also gives reasons for being sceptical about any idea that a rule can be "hardwired" into an ASI:

 

>> 

 

One possible response to the above dilemma is to assert that it will be possible to hardwire deep into any superintelligence the ethical principles that humans wish the superintelligence to follow. For example, these principles might be placed into the core hardware of the superintelligence.


However, any superintelligence worthy of that name – having an abundance of intelligence far beyond that of humans – may well find methods:

  • To transplant itself onto alternative hardware that has no such built-in constraint, or
  • To fool the hardware into thinking the superintelligence is compliant, whereas it is taking a different line of action, or
  • To reprogram that hardware, using methods that we humans did not anticipate, or
  • To persuade a compliant human to relax that constraint on its performance, or
  • To outwit the constraint in some other innovative way.

<< 

 

With best wishes

 

// David W.

 

PS In case anyone is having trouble loading the version of the article that is on the Mindplex site, you can read my original draft here: https://docs.google.com/document/d/1IO9cStYHYUBPFV77E7WPG8FsjJMHRoF107IeJkrOfN8/.

 

On Wed, 17 Jul 2024 at 01:03, Victor V. Motti <vahidv...@gmail.com> wrote:

 

 

Hello David,

 

You begin your argument by noting several times that ASI learns many things very quickly.

 

Perhaps a fourth response scenario might be to hardwire this simple rule into the ASI:

 

"You shouldn't stop learning and remain open to further learning and improving."

 

We humans still call some past, and even present, cultures and traditions civilizations, despite their abhorrent, barbaric practices as judged by contemporary or advanced values. But humanity has gradually been learning, improving, and evolving, albeit very slowly, over the course of millennia.

 

If your argument about ASI rests on its super ability to learn, then you could expect it to continue learning even after it goes beyond humans' limited capacity for creativity and complexity, and to achieve further learning and improvement at a faster pace than humans can ever imagine.

 

Best,

Victor 

 

 

 

On Tue, Jul 16, 2024, 18:46 David Wood <dav...@deltawisdom.com> wrote:

"Will superintelligent AIs agree with humans about ethics and morality? Or might they take a very different stance? And in that case, how should our designs of AIs change?"

 

That's the description of my latest article about possible scenarios for the future of AI.

Available here: https://magazine.mindplex.ai/superintelligence-and-ethics-how-will-asis-assess-human-ethical-frameworks/ 

 

Enjoy :-)

 



Seven_Challenges.pdf

james...@aol.com

Jul 19, 2024, 11:55:10 AM
to Paul Werbos, David Wood, MILL...@hermes.gwu.edu, david brin, Power Satellite Economics, Lifeboat Foundation Administration, Biological Physics and Meaning

There are three primary critical issues to be resolved this century.

 

The first is to globally achieve President Franklin D. Roosevelt’s promise of the “Four Freedoms”—the founding moral basis for the political United Nations—derived from our Bill of Rights and Declaration of Independence. From this has emerged the UN Sustainable Development Goals—the basis for global prosperity and peace.

 

The second is globally available, abundant, robust, and affordable “clean energy”. This is needed to achieve Freedom from Want as further defined by many of the UN SDGs.

 

The third is the removal of any geographical location as the perceived “center of thought” from which alone wisdom can spring forth. This concept dates back to ancient times, when limits on mobility and information transfer required that locations be established as a “capital” from which governance directives and organized knowledge would emerge. In contrast, the concept of AI is built on the premise that geographical location does not matter: knowledge, understanding, and wisdom will occur wherever the AI “thinking” is done. Why has this concept not spread to humanity in general? We see the start of a counter-cultural trend away from “capitals” of leadership with social media. What has not yet emerged are the means of conducting “thought leadership” anywhere.

 

One of my favorite sci-fi books is the 1988 “David’s Sling” by Marc Stiegler. Written at the end of the Cold War, it is about a Soviet invasion of western Europe and the efforts of an underprepared NATO to stop the invasion. Strikingly, the book envisions the development, rapid prototyping, and rapid manufacturing of advanced “drones” capable of seeking out Soviet forces and destroying them. The similarity to what emerged in Ukraine, where unmanned drones and HIMARS were used to defeat Russian forces, is quite interesting.

 

In the book, the “Zetetic Institute” emerged as the source of the “new thinking” that led to the development of the drone warfare concept. In the background of the story, substantial technological and social issues were being debated by the Zetetic Institute. The Institute developed the concept of “Decision Duels”, employing the then-future ideas of very large, high-resolution graphical displays coupled with an Internet of URL-linked information as the means to conduct the “duel”. Thus it was not the classic verbal debate, where the details are quickly forgotten, but a highly visual, structured, and transparent linkage of knowledge, assertions, and conclusions, leading to the emergence of a “winning” construct of the preferred solution to a problem or challenge. When a decision duel was underway, anyone could watch and submit questions to the moderator. Imagine now doing this with VR headsets.
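For what it is worth, the core of a decision duel can be captured in a very small data structure. The Python sketch below is my own construction, not anything from Stiegler's book; the Claim class, the scoring rule, and the example URL are all invented for illustration:

from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    source_url: str = ""                          # URL-linked evidence, per the book's idea
    supports: list = field(default_factory=list)  # Claims backing this one
    rebuttals: list = field(default_factory=list) # Claims attacking this one

    def score(self) -> int:
        # Toy scoring rule (assumed): net transitive support minus rebuttal.
        return (1 + sum(c.score() for c in self.supports)
                  - sum(c.score() for c in self.rebuttals))

duel = Claim("Rapidly prototyped drones can blunt the invasion")
duel.supports.append(Claim("Cheap guidance sensors exist", "https://example.org/evidence"))
duel.rebuttals.append(Claim("Production capacity is insufficient"))
print(duel.score())   # positive: this construct is currently "winning"

Real argument-mapping tools are far richer, but even this toy shows the key idea: the linkage of assertions to evidence persists and can be inspected later, unlike the details of a verbal debate.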

 

There are some tools available for doing things similar to a decision duel. One is the “Implications Wheel”.

 

I took training on the Implications Wheel process back in the 1990s and used it in some futures wargaming efforts I was leading. (The Implications Wheel process is now available for use over the Internet.) I tried to use it for some formal planning efforts at AFRL, but it ran contrary to the top-down nature of military “governance”, as it ignored the implicit “wisdom” of leaders with stars on topics about which they personally had little understanding. Downstream folks were simply afraid of bucking the leadership in a way that was documented.

 

I-Wheel Chat GPT's (mailchi.mp)

 

The back cover of the book says, “David’s Sling is now available in hypertext.” Both Mac and PC versions were available.

 

Mike Snead, PE


Roosevelt-four-freedoms-message-to-Congress-NARA-675.jpg
UN SDGs needing abundant clean energy Snead 960.png