Singularity


Keith Henson

Jan 26, 2023, 1:42:52 AM
to Howard Bloom, Power Satellite Economics, et al.
Humanity May Reach Singularity Within Just 7 Years, Trend Shows

By one major metric, artificial general intelligence is much closer
than you think.

https://www.popularmechanics.com/technology/robots/a42612745/singularity-when-will-it-happen/

I don't know how seriously to take this; Ray thinks it will happen in
the mid-2040s.

Keith



James M. (Mike) Snead

Jan 26, 2023, 11:57:42 AM
to Keith Henson, Howard Bloom, Power Satellite Economics, et al.
The threat of unbounded AI should be obvious. Yet, the benefits of AI would appear to be tremendous.

How do we resolve this problem? One approach is to mandate that all AI software operate only on a new operating system running on a new set of processor hardware intentionally designed to run slowly, so that the AI's "speed of thought" mimics that of a human. This would be essentially the same as the isolation now mandated for dangerous biological research.
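As a rough sketch of what I mean, in Python (the rate cap and step function are hypothetical illustrations, not a specification; a real implementation would have to live in hardware, below anything the AI could modify):

import time

# Hypothetical illustration: cap an AI's inference loop at a prescribed
# human-like "speed of thought." The 10 steps/second budget is an
# arbitrary stand-in, not a proposed standard.
MAX_STEPS_PER_SECOND = 10

def throttled_run(step_fn, n_steps):
    """Run step_fn n_steps times, never exceeding the prescribed rate."""
    min_interval = 1.0 / MAX_STEPS_PER_SECOND
    for i in range(n_steps):
        start = time.monotonic()
        step_fn(i)                         # one unit of "thought"
        elapsed = time.monotonic() - start
        if elapsed < min_interval:         # enforce the speed ceiling
            time.sleep(min_interval - elapsed)

throttled_run(lambda i: None, 30)          # takes ~3 seconds instead of microseconds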

Mike Snead

Richard Godwin

Jan 26, 2023, 12:26:20 PM
to Andrew Lindberg, Keith Henson, Howard Bloom, Power Satellite Economics, et al.
We have a few filters to go through soon. Has anyone been keeping up on the new CRISPR Cas12a2?

Does this look potentially like the ultimate bioweapon? Can you say gray goo?

On Thu, Jan 26, 2023, 12:12 Andrew Lindberg <andre...@gmail.com> wrote:
I think it’s becoming clear that AI is one of the great filters humanity must pass through.

Andrew

Sanjay Singh

Jan 26, 2023, 12:41:59 PM
to James M. (Mike) Snead, Keith Henson, Howard Bloom, Power Satellite Economics, et al.

AI takes many forms, from intelligent, heuristically driven search of a large state space through to the latest deep learning algorithms. But all involve software engineering processes which are themselves imperfect and almost always result in delivery of flawed implementations of complex algorithms. Deep learning systems are further compounded by the quality of the training data they are presented with, in addition to the difficulty of showing the correctness of the neural network architecture. Few people have any notion of how to make even reliable and flawless conventional software, let alone software that is designed to learn to control complex or dangerous machinery.

Bringing the focus back to aerospace, people might want to look at the well-known Fast Company article on the Space Shuttle's onboard software group, "They Write the Right Stuff" (and hopefully others will know of more recent discussions along similar lines):


"But how much work the software does is not what makes it remarkable. What makes it remarkable is how well the software works. This software never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats : the last three versions of the program — each 420,000 lines long-had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors."

This software controls the Space Shuttle. A space power satellite might require software of comparable complexity. If this is how difficult it is to write correct and flawless software, what can people expect from the new world of AI software and systems, where philosophers can't even agree on what intelligence is, and software engineers ship commercial code with thousands of subtle errors?
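The scale of the gap is easy to make concrete from the figures quoted above (a back-of-envelope sketch using only those numbers):

loc = 420_000                    # lines per Shuttle software version
shuttle_errors = 1               # per version, for the last three versions
commercial_errors = 5_000        # quoted for commercial code of this size

print(shuttle_errors / (loc / 1000))        # ~0.002 errors per KLOC
print(commercial_errors / (loc / 1000))     # ~11.9 errors per KLOC
print(commercial_errors // shuttle_errors)  # a 5,000x difference in defect density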

(S)


brian wang

Jan 26, 2023, 2:23:10 PM
to Sanjay Singh, James M. (Mike) Snead, Keith Henson, Howard Bloom, Power Satellite Economics, et al.
Hi All

I think it is more useful to define and quantify a Singularity as a technological and economic next-leveling of civilization.

I would measure this via increased economic doublings over the baseline.

The global baseline since rail and industrialization is one doubling every 25 years. We would expect 3 doublings, or an 8X world economy, from now to 2100.

There have been almost four doublings since WW2: just short of $8 trillion in 1940, and $100-120 trillion now.

$1.2 trillion in 1820 and $1.79 trillion in 1870. $182 billion in 0 AD and $200 billion in 1000 AD. 1700-1880 was the transition.

Tech prior to 1820 mainly increased population levels, with stagnant per capita output and production. Agricultural improvements were almost all of it.

The world in WW2 was using about one-sixteenth of today's steel and oil. In the Caucasus, Germany was fighting to capture oil production amounting to a few days of our current output.

The post-war baby boom added half of an extra doubling.

Personal computing, the internet, and smartphones did not add extra growth at the level of an extra doubling; they just sustained global growth rates despite slower population growth.

Technology has to start delivering economic doublings without the population doubling. We are at 8 billion people now, with population roughly flat at 10 billion from 2050-2100.
There were 2 billion people in 1927 and 4 billion in 1974: about a doubling and a half of population since WW2, while per capita economic output increased by about 5-6 times.

If all of the new technology plus a 1.25X population increase just lets the economy double by 2050, then we have sustained the post-WW2 growth rate. If per capita income triples by 2050, we will have one extra doubling; I would call this Singularity level 1. If global per capita income went up 12 times by 2050, that would be Singularity level 2. The extra doublings can be spread out over 50 or 100 years because I have a 150-year baseline. Sustaining the baseline is non-trivial, especially with population growth going from 100% of the effect to 50% and now to 25%.
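In rough Python, using the round numbers above (a back-of-envelope sketch of my arithmetic, not a forecast):

from math import log2

def doublings(ratio):
    # Number of economic doublings implied by a growth ratio.
    return log2(ratio)

# World GDP, trillions of dollars, per the rough figures above.
since_ww2 = doublings(110 / 8)            # 1940 -> today: ~3.8 doublings
print(since_ww2, 83 / since_ww2)          # ~22 years/doubling vs. the 25-year baseline

# 2023 -> 2050 scenarios, all with a ~1.25x population factor.
for label, per_capita in [("baseline", 1.6), ("level 1", 3), ("level 2", 12)]:
    print(label, round(doublings(per_capita * 1.25), 2), "doublings by 2050")
# baseline ~1.0 (sustained post-WW2 rate), level 1 ~1.9, level 2 ~3.9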

What is powerful enough to give an extra doubling or more?

Perfected and deployed self-driving electric cars and trucks. This could reduce supply-chain transportation costs from 10% of goods value to 2%. Electrification can reduce fuel costs by 80%, which takes per-mile truck costs from about $1.80 to $1.50. Platooning of vehicles can reduce that further, especially if there are no drivers in the following trucks; driverless following trucks would take per-mile costs down to about $0.60. Fully robotic vehicles could safely drive at 120-150 mph. An extra doubling over 25-50 years.
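The per-mile numbers above decompose like this (a sketch; the fuel and driver shares are inferred from the figures in this post, not from a published cost model):

base = 1.80                      # $/mile for a diesel truck with driver
fuel = (base - 1.50) / 0.80      # an 80% fuel cut saving $0.30 implies fuel ~ $0.375/mile
electric = base - 0.80 * fuel    # $1.50/mile
driver = electric - 0.60         # removing the driver saves ~$0.90/mile
print(f"fuel ${fuel:.3f}, driver ${driver:.2f}, "
      f"electric ${electric:.2f}, robotic follower ${electric - driver:.2f}")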

Teslabot and fully automated factories.

Teslabot has about 3% of the mass of a Tesla Model Y. If you can make 20 million EVs of Model Y class per year, the same production mass could build roughly 600-700 million Teslabots per year, adding the equivalent of a large fraction of the human labor force each year if the Teslabot became as productive as a human.
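Making the mass arithmetic explicit, since it is easy to slip an order of magnitude here (round numbers, with the 20 million EV output as the hypothetical input):

mass_ratio = 0.03                # Teslabot mass / Model Y mass (~3%)
evs_per_year = 20_000_000        # hypothetical Model Y-class output
bots_per_year = evs_per_year / mass_ratio
print(f"{bots_per_year:,.0f}")   # ~667 million bots/year by mass equivalence
print(4e9 / bots_per_year)       # ~6 years to match a ~4-billion-person labor force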

What comes after ChatGPT and AlphaFold 2? An advanced ChatGPT may displace or merge with Google Search. An AlphaFold X could become the digital biology of humanity, the microbiome, and the ecosystem.

Aging reversal, making 80 the new 35. Europe's and Japan's roughly 30% senior populations get back into the workforce, giving a one-time doubling of the workforce. Long-term effects are unknown, depending on fertility impacts.

A fleet of fully reusable Starship/Super Heavy vehicles deploying Teslabots, mining bots, and automated replicating factories to Mars, the Moon, and the asteroid belt, and then to the Kuiper Belt and Oort Cloud.


I have some videos that explain these ideas.



Thanks,
Brian Wang
Mobile: 650-906-5172


Paul Werbos

Jan 26, 2023, 3:28:49 PM
to Keith Henson, Howard Bloom, Power Satellite Economics, et al.
Thanks much, Keith, for starting a discussion of one of the most important issues facing all of humanity.

"Singularity": How soon will artificial general intelligence (AGI) appear out here on the internet, and will it be for good or ill?

On Thu, Jan 26, 2023 at 1:42 AM Keith Henson <hkeith...@gmail.com> wrote:
Humanity May Reach Singularity Within Just 7 Years, Trend Shows

By one major metric, artificial general intelligence is much closer
than you think. 
https://www.popularmechanics.com/technology/robots/a42612745/singularity-when-will-it-happen/

I don't know how seriously to take this; Ray thinks it will happen in the mid-2040s.

Ray simply does not know.

Keith, Howard, and NSS have done a great service to the space community by creating great dialogue on these lists.
Many of us are well aware of the limits of the dialogue on space (great hope number two on the attached list), but when I compare this to the global dialogue on the OTHER challenges on the attached list... it is overwhelming.

And so: if THIS is not the right forum, what is? Some of us are working to try to CREATE a more serious dialogue, to connect the big-picture world of policy and investment to technical reality... but we all have a long way to go with dialogue on the realities of AGI.

And so... please forgive a warmup exercise. The "new AI" and "new AGI" now overwhelming the world is an outgrowth of the WCCI series of conferences, led by IEEE and INNS. Years ago IEEE gave me its Neural Network Pioneer Award for my role in starting the field (along with one other person, covered by my bcc). This year, they gave me the Frank Rosenblatt Award, covering all the fields covered by WCCI. MORE IMPORTANT: they supported me to give a plenary at WCCI2022. The abstract, attached, gives a one-page overview with citations on where this revolution came from and where it is going in the future.

In the talk, I said: "IF anyone assures you that AGI is on track to give you a great future, OR that it is a disaster which needs to be stopped, either way you have identified a liar. EITHER position is a lie, because no one knows yet. No one knows BECAUSE it depends on what YOU do... you have freedom to take it either way, because it depends on DESIGN CHOICES." We need better, more encompassing technical dialogue, BECAUSE building new teams and networks with technical competence (as well as social science and psychology) is essential to any hope of really getting it right.

In addition to my technical work, I learned a lot from a workshop which John Mankins and I led years ago, bringing together leaders in space technology with leaders of the advanced IEEE robotics community: https://www.werbos.com/SSP2000/SSP.htm

I already knew that there are LEVELS and LEVELS of general intelligence, both in nature and in what we build (https://arxiv.org/abs/q-bio/0311006). At the workshop, we discussed the "lunar cockroach" idea in great depth.
It is quite possible for a lower level of intelligence to take over an entire planet, without having the level of foresight needed to secure even its own survival in the long term. In my view, we ARE on that path right now.

In my view... deploying AGI (as we are already doing) is like riding a bicycle: we reach a point where trying to slam the brakes INCREASES the risk of human extinction. Cockroach-like apps are already swarming over the world.
To be really serious, we need to build on Von Neumann and Morgenstern (who started the branch of neural networks I belong to most) and understand how current design trends based on Nash equilibrium are endangering our very lives, as they apply to weapons systems, cybersecurity, currencies, and more. And we need HIGHER intelligence, not lower.

This is why I believe that the new paradigm of true Quantum AGI (QAGI) cited in the abstract may be crucial to our survival, near term, because of the cybersecurity implications.

Just yesterday a member of this community informed me that my provisional patent on QAGI is ready to be converted to a full application, as it includes papers on details not in the Elsevier paper, such as how to construct true Quantum Quadratic Optimizers (well beyond D-Wave) necessary to true QAGI, and how to use it in many space applications. Who knows? Many partners will be needed if it succeeds.

Best of luck,

   Paul
Seven_Challenges.pdf

Keith Henson

Jan 26, 2023, 7:45:07 PM
to Paul Werbos, Howard Bloom, Power Satellite Economics, et al.
On Thu, Jan 26, 2023 at 12:28 PM Paul Werbos <paul....@gmail.com> wrote:
>
> Thanks much, Keith, for starting a discussion of one of the most important issues facing all of humanity.
>
> "Singularity": How soon will artificial general intelligence (AGI) appear out here on the internet, and will it be for good or ill?
>
> On Thu, Jan 26, 2023 at 1:42 AM Keith Henson <hkeith...@gmail.com> wrote:
>>
>> Humanity May Reach Singularity Within Just 7 Years, Trend Shows
>>
>> By one major metric, artificial general intelligence is much closer
>> than you think.
>>
>> https://www.popularmechanics.com/technology/robots/a42612745/singularity-when-will-it-happen/
>>
>> I don't know how seriously to take this; Ray thinks it will happen in the mid-2040s.
>
> Ray simply does not know.

Ray is a modest guy and might agree with you, since nobody can accurately foretell the future. However, some years ago I examined Ray's projection methods in considerable detail. They were based on easy-to-check history, and they were entirely logical and reasonable to me. I think if anyone is qualified to estimate when the singularity will hit, it is Ray.

The rest of this is interesting, though most of it is over my head.

Keith

Sanjay Singh

Jan 26, 2023, 8:21:04 PM
to Keith Henson, Paul Werbos, Howard Bloom, Power Satellite Economics, et al.
This metric in the Popular Mechanics article:

"to calculate the time it takes for professional human editors to fix AI-generated translations compared to human ones. This may help quantify the speed toward singularity."

It seems to be a curious mix of the Turing test of human-machine equivalence and Searle's Chinese Room argument about symbol processing.

The key implication is that when an AI can produce translations comparable to humans, it must therefore be acquiring some sort of human-level knowledge representation such that it can make quality translations. But Searle argued in his thought experiment that blind translation from one set of symbols to another does not constitute intelligence or understanding.

If this is the criterion being used for having reached the singularity, more explanation is needed to show that the argument is valid.
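For what it's worth, the metric itself is just a trend extrapolation, something like the following sketch. The numbers here are invented stand-ins; the real time-to-edit data belongs to the study the article reports on:

import numpy as np

# Invented illustrative data: seconds of professional post-editing per word
# for machine translation, versus a roughly flat human baseline.
years = np.array([2015, 2017, 2019, 2021, 2023])
machine_tte = np.array([3.5, 3.1, 2.6, 2.2, 1.8])
human_tte = 1.0

slope, intercept = np.polyfit(years, machine_tte, 1)
parity = (human_tte - intercept) / slope
print(f"linear trend reaches human parity around {parity:.0f}")  # ~2027 with these made-up numbers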

(S)


James M. (Mike) Snead

Jan 27, 2023, 10:21:30 AM
to Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn)

Bill,

 

Recently, a video was released showing a major pharmaceutical company executive explaining how the company worked to increase its profits from COVID vaccines. They are using a version of gain-of-function research, seeking more dangerous virus mutations. It was very clear, IMO, that the aim of increasing profit outweighed common-sense and moral safety considerations.

 

I disagree that AIs will always be algorithm driven. An algorithm is essentially an IF-THEN construct that yields the same THEN action for the same IF situation. We have been using such algorithms since the invention of automatic pneumatic braking systems for rail cars in the late 1800s and, even earlier, with pressure regulators for steam boilers. An algorithm-instructed operation, properly engineered, can be fully tested to certify its safety within a prescribed set of circumstances. These circumstances then become the legally accepted operational usage of the system.

 

Artificial intelligence is different in that the IF-THEN construct is fluid. Thus, it cannot be tested, limited, or certified. It is essentially a gain-of-function operation that, as we now know with COVID, can bring disaster. There is no moral justification, IMO, to permit artificial intelligence development.
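The distinction can be made concrete with a toy example (both controllers below are illustrations of my own, not anyone's real system). The fixed mapping can be certified once, because it never changes; the fluid one cannot, because the same input may produce a different output after more data arrives:

# Fixed IF-THEN: the mapping never changes, so it can be exhaustively
# tested once and certified for a prescribed set of circumstances.
def fixed_brake(pressure_psi):
    return pressure_psi < 60.0             # same IF always yields the same THEN

# "Fluid" IF-THEN: the threshold is re-fit as data arrives, so yesterday's
# certification says nothing about today's behavior.
class AdaptiveBrake:
    def __init__(self, threshold=60.0):
        self.threshold = threshold
    def update(self, pressure_psi, should_have_braked):
        target = pressure_psi + (5.0 if should_have_braked else -5.0)
        self.threshold += 0.1 * (target - self.threshold)  # toy online update
    def brake(self, pressure_psi):
        return pressure_psi < self.threshold

ctrl = AdaptiveBrake()
print(fixed_brake(58.0), ctrl.brake(58.0))     # True True -- they agree today
for _ in range(50):
    ctrl.update(58.0, should_have_braked=False)
print(fixed_brake(58.0), ctrl.brake(58.0))     # True False -- the fluid one has drifted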

 

Mike Snead

 

 

 

From: Bill Gardiner <william.w...@gmail.com>
Sent: Thursday, January 26, 2023 11:32 PM
To: James M. (Mike) Snead <james...@aol.com>; Power Satellite Economics <power-satell...@googlegroups.com>; Jerry McLaughlin <drjer...@aol.com>; Holger Isenberg <Holger....@gmail.com>; Cara Boyd (Caroline Leyburn) <caraly...@gmail.com>
Subject: Re: Singularity

 

Hi Mike,

 

AIs at their core will always be algorithm driven, while humans are driven at their core by metaphor and myth, and humans only support a thin veneer of situational and economic logical behavior. As long as the environment is dynamically changing, only irrational (3.14159...), complex (a+bi) humans CAN, but not necessarily WILL, prevail in some manner on some surface along with a supporting environment.

 

The ultimate role and fate of AI vis-à-vis humanity was, in my view, best expressed by Isaac Asimov in his 1956 short story "The Last Question,"

 

 

... (with apologies to Howard Bloom on this thread) and developed ad nauseam by Asimov in his Foundation trilogy and, more succinctly, in his I, Robot series. (His Three Laws of Robotics are almost as good as George Carlin's condensation of the Ten Commandments.)

 

In short, humanity could and should develop the AI SOON to be an all-apocalypse backup just in time for the Singularity, together with the Spitsbergen seed banks and the like, to reboot civilization-capable life wherever it can take hold again, and remain capable as such, ad infinitum. Whatever else AI does or is capable of would be best known to all in the full light of day.

 

So, let there be light.

 

Bill Gardiner

 

 

 

On Thu, Jan 26, 2023, 11:57 AM James M. (Mike) Snead <james...@aol.com> wrote:

The threat of unbounded AI should be obvious. Yet, the benefits of AI would appear to be tremendous.

How do we resolve this problem? One approach is to mandate that all AI software operate only on a new operating system running on a new set of processor hardware intentionally designed to run slowly, so that the AI's "speed of thought" mimics that of a human. This would be essentially the same as the isolation now mandated for dangerous biological research.

Mike Snead

-----Original Message-----
From: power-satell...@googlegroups.com <power-satell...@googlegroups.com> On Behalf Of Keith Henson
Sent: Thursday, January 26, 2023 1:42 AM
To: Howard Bloom <howl...@aol.com>; Power Satellite Economics <power-satell...@googlegroups.com>
Subject: Singularity

Humanity May Reach Singularity Within Just 7 Years, Trend Shows

By one major metric, artificial general intelligence is much closer than you think.

https://www.popularmechanics.com/technology/robots/a42612745/singularity-when-will-it-happen/

I don't know how seriously to take this; Ray thinks it will happen in the mid-2040s.

Keith



Paul Werbos

Jan 27, 2023, 11:07:31 AM
to James M. (Mike) Snead, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn), Youngsook Park
I need to clarify a few things.

When Keith said "the singularity DOES seem to be coming," my post was NOT an argument against that belief. If anyone reads what I write a bit more carefully... I agreed more strongly than Keith did that Artificial General Intelligence as I define it (AGI) **IS** coming. My post was about the CHOICE we face -- will it be AGI good for humans, or AGI which gets us all killed in the end?

On Fri, Jan 27, 2023 at 10:21 AM 'James M. (Mike) Snead' via Power Satellite Economics <power-satell...@googlegroups.com> wrote:

Bill,

 

Recently, a video was released showing a major pharmaceutical company executive explaining how the company worked to increase its profits from COVID vaccines. They are using a version of gain-of-function research, seeking more dangerous virus mutations. It was very clear, IMO, that the aim of increasing profit outweighed common-sense and moral safety considerations.

 

I disagree that AIs will always be algorithm driven. An algorithm is essentially an IF-THEN construct that yields the same THEN action for the same IF situation. We have been using such algorithms since the invention of automatic pneumatic braking systems for rail cars in the late 1800s and, even earlier, with pressure regulators for steam boilers. An algorithm-instructed operation, properly engineered, can be fully tested to certify its safety within a prescribed set of circumstances. These circumstances then become the legally accepted operational usage of the system.

 

Artificial intelligence is different in that the IF-THEN construct is fluid. Thus, it cannot be tested, limited, or certified. It is essentially a gain-of-function operation that, as we now know with COVID, can bring disaster. There is no moral justification, IMO, to permit artificial intelligence development.


Indeed, there are many things happening now in most of the tech giants and governments on this planet which would not satisfy moral reasoning. They ARE HAPPENING anyway. In SOME cases (as with Japan and South Korea), I have advocated STRONGER AGI, even beyond what they are now doing, as part of the moral goal of making them more secure against missile attacks.

AGI **IS** coming. No realistic choice, in my view, given how world economies and politics work.

The choice is between GOOD, smarter AGI and terrible AGI which is coming like a swarm of locusts if there is nothing stronger there to stop them. 

 

Tim Cash

Jan 27, 2023, 3:14:10 PM
to Paul Werbos, James M. (Mike) Snead, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn), Youngsook Park
I am already dead; they just forgot to deliver the coroner's report. AGI may kill us, but the best phrase ever stated by Pogo is, to quote, "We have met the enemy, and he is us."

Tim Cash

Paul Werbos

Jan 27, 2023, 3:46:47 PM
to Tim Cash, James M. (Mike) Snead, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn), Youngsook Park, barn...@barnhard.com
On Fri, Jan 27, 2023 at 3:14 PM Tim Cash <cash...@gmail.com> wrote:
I am already dead; they just forgot to deliver the coroner's report. AGI may kill us, but the best phrase ever stated by Pogo is, to quote, "We have met the enemy, and he is us."

I know that feeling.

In fact, many, many people all over the world have experienced more of that feeling over the past few years.
It is VERY serious. 

In truth, it is not AGI but my work on the later three challenges on my list which has kept ME going, seven years past the time when my main ancestors all died like clockwork. It included a lot of real, hard, new science... but also new, better-organized diet, exercise, and qi activities. And a clear realization of the stark choice before all of us after age 70: to rest in peace or connect to a new source of energy.

All of us here are important to achieving the best future, so I want to help if I can, but this list is not the right venue.
The seven challenges each demand their own intense focus -- with connection, but not dilution or things which might get in the way of connecting the whole. Gary's site build-a-world.org is now the best single resource out there for making the connections.

Best regards,

   Paul 



Seven_Challenges.pdf

Keith Henson

Jan 27, 2023, 6:08:53 PM
to James M. (Mike) Snead, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn)
On Fri, Jan 27, 2023 at 7:21 AM 'James M. (Mike) Snead' via Power
Satellite Economics <power-satell...@googlegroups.com>
wrote:

snip

> There is no moral justification, IMO, to permit artificial intelligence development.

I don't see any way to stop it. Do you?

Keith
>
> Mike Snead

Nick Nielsen

Jan 27, 2023, 6:23:45 PM
to Keith Henson, James M. (Mike) Snead, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn)
It's fun to debate the should-we-or-shouldn't-we aspect of AGI, but this distracts from the pragmatic reality of regulation (or some other form of containment of AGI). If AGI is to be regulated or forbidden based on a lack of moral justification, where exactly do we draw the line? How sophisticated do we allow a computer system and its accompanying software to become before we scotch its further development? Do you forbid expert systems? Do we allow development of sophisticated algorithms to go forward as long as the developers don't call it artificial intelligence?

Until there is some industry standard agreement on what constitutes AI or AGI, all of this is meaningless.

With nuclear technology, it was possible to regulate heavy elements, which are sufficiently rare that a regulation regime could be effective. However, it can't be effective forever; hence we have seen the assassination of Iranian nuclear scientists as a method of slowing down their nuclear program. Since the development of AI and AGI doesn't require a crucial element like uranium, we are left only with the personnel option. Kaczynski tried this by selectively assassinating individuals involved in technology development (and who also had some connection to the MK Ultra program in which he participated).

I can certainly imagine some nation-state having a black program to assassinate AI researchers who get too close to their goal, which would then force the development deep underground. This would slow the developmental timelines of the technology but not likely stop it. Also, this research has sufficient visibility that the systematic assassination of many of the top researchers would be pretty obvious and difficult to keep quiet.

Nick
 


Keith Henson

Jan 27, 2023, 6:29:24 PM
to Paul Werbos, James M. (Mike) Snead, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn), Youngsook Park
On Fri, Jan 27, 2023 at 8:07 AM Paul Werbos <paul....@gmail.com> wrote:
>
> I need to clarify a few things.
>
> When Keith said "the singularity DOES seem to be coming," my post was NOT an argument against that belief. If anyone reads what I write a bit more carefully... I agreed more strongly than Keith did that Artificial General Intelligence as I define it (AGI) **IS** coming.

I have thought this is the case since helping edit Drexler's Engines
of Creation in the early 80s.

> My post was about the CHOICE we face -- will it be AGI good for humans, or AGI which gets us all killed in the end?

As I mentioned to Mike, I don't see any way to control the emergence of AGI. It's not AGI, but who knew about ChatGPT before it burst on the scene? I sort of know the big names in the field, and I don't think they have an idea either. Though friendly AI is a good idea.

I might have personally reduced the risk by an unknown amount by pointing out that blind replication of a human brain is risky. Humans (I think) have behavioral traits (leading to war) that you really don't want turned on in an AI.

But I get serious pushback from people who are hostile to the idea
that humans have *any* psychological traits, particularly those which
are switched by environmental signals. Of course they have no
explanations for such things as what happened to Patty Hearst.

Keith

Keith Henson

Jan 27, 2023, 7:22:02 PM
to Nick Nielsen, James M. (Mike) Snead, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn)
On Fri, Jan 27, 2023 at 3:23 PM Nick Nielsen <john.n....@gmail.com> wrote:
>
> It's fun to debate the should-we-or-shouldn't-we aspect of AGI, but this distracts from the pragmatic reality of regulation (or some other form of containment of AGI). If AGI is to be regulated or forbidden based on a lack of moral justification, where exactly do we draw the line? How sophisticated do we allow a computer system and its accompanying software to become before we scotch its further development? Do you forbid expert systems? Do we allow development of sophisticated algorithms to go forward as long as the developers don't call it artificial intelligence?

Agree with your points.

Additionally, the political people who could pass regulations and laws
don't care and don't have a clue. Their constituents are not widely
aware either, and the ones who are, like us, don't have a solution.

> Until there is some industry standard agreement on what constitutes AI or AGI, all of this is meaningless.

Has anyone asked the chatbot? :-)

> With nuclear technology, it was possible to regulate heavy elements, which are sufficiently rare that a regulation regime could be effective. However, it can't be effective forever; hence we have seen the assassination of Iranian nuclear scientists as a method of slowing down their nuclear program. Since the development of AI and AGI doesn't require a crucial element like uranium, we are left only with the personnel option. Kaczynski tried this by selectively assassinating individuals involved in technology development (and who also had some connection to the MK Ultra program in which he participated).
>
> I can certainly imagine some nation-state having a black program to assassinate AI researchers who get too close to their goal, which would then force the development deep underground. This would slow the developmental timelines of the technology but not likely stop it. Also, this research has sufficient visibility that the systematic assassination of many of the top researchers would be pretty obvious and difficult to keep quiet.

When Microsoft has put a billion into the project and is talking about
ten billion, it's hard to imagine effective opposition.

I think it would take some disaster on the scale of the bombing of
Hiroshima and Nagasaki to gain political attention, and by that time
it seems unlikely that closing the barn door would have any effect.

Keith

James M. (Mike) Snead

Jan 28, 2023, 4:01:38 AM
to Keith Henson, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn)
Some in society suffer from a psychological delusion of "absolute personal freedom." It manifests itself in many ways, with many of those ways judged by our society to be criminal. With the presumption that an adult is responsible for their actions, actions to pursue artificial intelligence development must, once they cross a prescribed threshold, be judged criminal and appropriate actions taken, including arrest and imprisonment or overt or covert military action to destroy such capabilities.

Setting such boundaries of permitted personal actions is how a society defines itself. Nothing new in this regard.

As I mentioned before, as a minimum, the entire artificial intelligence software/hardware area of research should be walled off with prescribed software and hardware placing limits on transferability and speed. Within these limits, beneficial uses may emerge.

I fail to see the societal need or benefit for true artificial intelligence. Most implied artificial intelligence benefits appear to really just be IF-THEN algorithms accessing large amounts of data.

Mike Snead


-----Original Message-----
From: Keith Henson <hkeith...@gmail.com>
Sent: Friday, January 27, 2023 6:08 PM
To: James M. (Mike) Snead <james...@aol.com>

John David Galt

Jan 28, 2023, 11:36:59 AM
to power-satell...@googlegroups.com
Mike, I agree with you, but unfortunately, AI is already here. And the
AI wars have already begun.

I have kept silent up to now, but the group and the world badly need this truth bomb.

https://boriquagato.substack.com/p/the-ai-wars-have-already-begun

Hat tip: https://vivabarneslaw.locals.com

John David Galt
(just a northern Californian who knows KeithH and used to work in aerospace)


Keith Henson

Jan 28, 2023, 12:53:03 PM
to James M. (Mike) Snead, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn)
Mike, regardless of the long-range facts, criminalizing AI research is not something that is going to happen. If you were to talk about this to Congress, you would get nothing but blank stares.

And even if it were done in the US and the EU, that leaves China and India, both of which have huge software capacity.

I don't think even a worldwide thermonuclear war would slow down AI progress by more than a couple of decades.

You might want to read up on what Eliezer Yudkowsky has said on the subject of friendly AI.

Best wishes,

Keith

Keith Henson

Jan 28, 2023, 1:54:44 PM
to James M. (Mike) Snead, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn)
On Sat, Jan 28, 2023 at 9:52 AM Keith Henson <hkeith...@gmail.com> wrote:
>
> Mike, regardless of the long-range facts, criminalizing AI research is
> not something that is going to happen. If you were to talk about this
> to Congress, you would get nothing but blank stares.

I may be too pessimistic about Congress.

https://yro.slashdot.org/story/23/01/26/2310206/member-of-congress-reads-ai-generated-speech-on-house-floor

Keith


James M. (Mike) Snead

Jan 29, 2023, 5:49:30 AM
to Keith Henson, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn)
Keith,

I disagree. The criminalization of artificial intelligence (AI) and of public-engaged, advanced algorithm-instructed (ai) software is beginning, as people learn how these are already being abused via TikTok and Google searches. State governments are outlawing TikTok in part because of such abuse. Google executives will be answering hard questions about actions that appear to have used lower-case ai, or immature AI, to intentionally bias elections, thus denying truly representative government.

Hitler was good because he made the trains run on time. Everything else is of little real importance. Right???

Keith Henson

Jan 29, 2023, 1:39:53 PM
to James M. (Mike) Snead, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn)
Mike, if TikTok is an AI project, that's news to me. (It might be; I've never looked at TikTok.)

But even if AI research were outlawed in the US, how do you propose to suppress it in the rest of the world?

Keith

PS: And after MS has poured $11B into OpenAI, can you imagine the political pushback if the government tried to outlaw AI?


Paul Werbos

Jan 29, 2023, 2:19:36 PM
to James M. (Mike) Snead, Keith Henson, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn)
On Sun, Jan 29, 2023 at 5:49 AM 'James M. (Mike) Snead' via Power Satellite Economics <power-satell...@googlegroups.com> wrote:
Keith,

I disagree. The criminalization of artificial intelligence (AI) and of public-engaged, advanced algorithm-instructed (ai) software is beginning, as people learn how these are already being abused via TikTok and Google searches. State governments are outlawing TikTok in part because of such abuse.

All these legal efforts remind me of ... early efforts on ethical, safe AI funded by folks like Musk. I will never forget the woman who proudly announced (after getting funding) that we would be saved from fleets of killer drones by painting happy faces on them. Most of the new legal efforts are equally effective, just like COP25.



 

Paul Werbos

Jan 29, 2023, 2:22:32 PM
to James M. (Mike) Snead, Keith Henson, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn)
I am also reminded of the great efforts, inspired by a guy at TASC, to outlaw more advanced AGI based on https://arxiv.org/pdf/1404.0554.pdf. They did put a stop to US efforts in that direction. As a result, China took over the lead.
Similar things are happening in other technologies, yea even unto new nuclear stuff.

Keith Henson

Jan 29, 2023, 3:17:08 PM
to Paul Werbos, James M. (Mike) Snead, Bill Gardiner, Power Satellite Economics, Jerry McLaughlin, Holger Isenberg, Cara Boyd (Caroline Leyburn)
On Sun, Jan 29, 2023 at 11:22 AM Paul Werbos <paul....@gmail.com> wrote:
>
> I am also reminded of the great efforts, inspired by a guy at TASC, to outlaw more advanced AGI based on
> https://arxiv.org/pdf/1404.0554.pdf . They did put a stop to US efforts in that direction. As a result, China took over the lead.

I am not sure what you are talking about in that paper, but the point
of China taking over the lead is my exact problem with Mike Snead's
thoughts on this topic. If Mike (or anyone) has an idea of how to
suppress AI work worldwide I would be very interested. I simply don't
have a clue.

> Similar things are happening in other technologies, yea even unto new nuclear stuff.

There is a way to make Pu 239 with virtually no Pu 240 in it but I
don't know if that's what you are talking about.

Keith

James M. (Mike) Snead

unread,
Jan 30, 2023, 1:58:54 PM1/30/23
to Power Satellite Economics
Keith,

I saw a news report that TikTok was using some algorithm to decide what videos to show to people depending on where they live, in accordance with CCP wishes. This is comparable to what Google, Twitter, and, apparently, Facebook have been doing to control the information that Americans receive. In those cases, real intelligence used ai to achieve its goals. We see this as a threat to our republic.

Whether it is done by biological intelligence or artificial intelligence, the result is the same and the response should be the same. There is no way to judge the sanity of an AI, nor to judge the honesty or morality of its actions. A corrupted intelligence acting morally does not mean it is moral.

How do we suppress anything immoral undertaken elsewhere? We assert economic, legal, or physical control as happened in WW II. Once a determination is made that true AI is a threat, then all necessary actions are justified by self-preservation.

Historically, American politicians are very wary of such actions. Today, 100,000 are dying of fentanyl yet nothing is happening. Who is obstructing a proper moral response? Why is this?

Paul Werbos

unread,
Jan 30, 2023, 3:00:42 PM1/30/23
to James M. (Mike) Snead, Power Satellite Economics
On Mon, Jan 30, 2023 at 1:58 PM 'James M. (Mike) Snead' via Power Satellite Economics <power-satell...@googlegroups.com> wrote:

How do we suppress anything immoral undertaken elsewhere? We assert economic, legal, or physical control as happened in WW II. Once a determination is made that true AI is a threat, then all necessary actions are justified by self-preservation.

WOW!!

Your request seems to call for the US to threaten China with war if it dared to maintain its existing programs.
(Plus much of the rest of the world as well.) 

If you knew just how far ahead they are already in how many relevant fields, you would not advocate such a unipolar strategy!! 

See https://photos.app.goo.gl/Ph3NSbTHfJMJ57Fj7 for just a few photos... though the most interesting stuff is inside the buildings, and in other provinces. 

=================================================

Keith said that my comments about the events leading to 7/12/14 and 7/14/14 were somewhat unclear.
So maybe I should fill in a FEW details. (I have tons of documents giving more details).

At the IEEE World Congress on Computational Intelligence, held that year in Beijing, I gave several talks, on official travel. Before that trip, I went through the usual NSF approval process, which included the AD for Engineering, the International office (Bonnie Thompson I believe), and the State Department. The main talk is posted at https://arxiv.org/abs/1404.0554. The travel request did describe the subject. I attach the actual slides I used. 

My talk presented a concrete roadmap for research leading to the ability to replicate artificial general intelligence with all the main qualitative powers of the mammal brain. After the talk, two attendees said that the dean of Engineering for Tsinghua University (which was still the lead place running China at that time, before Jiang Zemin died) wanted to discuss this. They had a taxi waiting. The hours we spent in his office were...
yet another "once in a lifetime experience" for me. Just before that, a man with a TASC business card 
introduced himself, and said he was there to report back home on his thoughts.

The dean barked many orders to many people in Chinese. Then a limousine came to take me (and one or two of the folks from the conference) to the office of the relevant division director of NSFC (NSF of China), to discuss the research opportunities and how they might be approached. He then proposed a joint NSF-NSFC international research program on the lines I proposed. 

On 7/12/14 I wrote and sent an email to my boss at NSF and to the Director of Engineering, describing what had been proposed to me, with an evaluation of how it might work (with just a few modifications to better fit US national security needs, i.e. avoiding some dual-use areas).  On 7/14/14, I received a response.

Again, there are SO many documents in my files, and backup directories!!

The 7/14/14 documents basically cancelled all NSF work in this direction, leaving the field to the Chinese,
as you might guess from the photos. 

The technology has MANY applications, from military to civilian vehicles to energy to currency to cybersecurity. 

Xi Jin Ping has called for a broad new open joint initiative on internet technology, to include a special focus on preventing the kinds of backdoors which urgently threaten BOTH nations. Doing nothing would seriously and urgently endanger BOTH of them, US and China. The technologies for intelligent control (maximizing U) and for universal learning to model/predict can help, but will become orders of magnitude more powerful if the simple nonlinear optimization algorithms now in use are replaced by a new family of quantum devices for which I have a
 provisional patent filing now being converted. At present, however, we are entering a world where 
new cybersurprises from MANY serious players might well get us all killed if we do not block them. 





 
--
You received this message because you are subscribed to the Google Groups "Power Satellite Economics" group.
To unsubscribe from this group and stop receiving emails from it, send an email to power-satellite-ec...@googlegroups.com.
WCCI_MLCI_Werbos2014v2.pptx

Keith Henson

unread,
Jan 30, 2023, 4:20:33 PM1/30/23
to James M. (Mike) Snead, Power Satellite Economics
On Mon, Jan 30, 2023 at 10:58 AM 'James M. (Mike) Snead' via Power
Satellite Economics <power-satell...@googlegroups.com>
wrote:
>
> Keith,
>
> I saw a news report that TikTok was using some algorithm to decide what videos to show to people depending on where they live, in accordance with CCP wishes. This is comparable to what Google, Twitter, and, apparently, Facebook have been doing to control the information that Americans receive. In those cases, real intelligence used ai to achieve its goals. We see this as a threat to our republic.

Mike, I may not be smart enough to understand how showing a TikTok
video in South Dakota vs. showing it to people in Nevada would be
something the CCP would give a hoot about. I say this as someone who
helped set a lot of the early understanding of memetics. (I wrote the
earliest articles.)

> Whether it is done by biological intelligence or artificial intelligence, the result is the same and the response should be the same. There is no way to judge the sanity of AI nor to judge the honesty or morality of their actions. A corrupted intelligence acting morally does not mean it is moral.

This is too deep for me. Perhaps some examples would help me grok
what you are trying to convey. "Moral" is a slippery word. Is it
moral to kill 100,000 people? Depends, consider the current war.

> How do we suppress anything immoral undertaken elsewhere? We assert economic, legal, or physical control as happened in WW II. Once a determination is made that true AI is a threat, then all necessary actions are justified by self-preservation.

It seems to me that natural intelligence is just as much of a threat.
At present, no AI presents the level of threat we have from Putin.

> Historically, American politicians are very wary of such actions. Today, 100,000 are dying of fentanyl yet nothing is happening. Who is obstructing a proper moral response? Why is this?

At the root, it's the way human brain reward systems respond to
endorphins and opiates. At the political level, it is the
long-standing cultural norm against people being rewarded by
chemicals. We know how to fix this, there are several examples around
the world. The UK for example does not have a fentanyl problem. But
chances are low that anything will be done.

Incidentally, cults addict people to their own endorphins and
dopamine. These get released by the intense attention cults provide.
It is no wonder that people in cults act like junkies.

Keith

Roger Arnold

unread,
Jan 30, 2023, 4:21:09 PM1/30/23
to James M. (Mike) Snead, Power Satellite Economics
Ars Technica has a really good article on AI and what's behind the dramatic advances that we've been seeing in the last few years. The article is here. It's the best article I've seen that's been written for a general audience (well, the general audience of Ars Technica) by someone who actually works with AI, knows what he's talking about, and is able to explain it clearly. The article indirectly reinforces the point I tried to make in another post: that AI is a powerful tool, but still a tool, and that it's how the tool is used or misused that we should be worrying about. 

In the post I'm responding to, Mike wrote that "Once a determination is made that true AI is a threat, then all necessary actions are justified by self-preservation." Wow! That statement is so far off the mark (IMO) that it isn't even wrong. First of all, I don't believe there is any sharp distinction that can be drawn between whatever one might label "true" AI and what we have now. And a blanket determination that "true AI" is a threat is silly. Anything can be a threat. The skill of knapping flint was a threat; it allowed sharp spearheads to be shaped that could be and were used to lethal effect in tribal warfare. I find it hard to imagine tribes of stone age hunters embarking on crusades to suppress the dangerous development of flint knapping techniques.

The best way for a tribe or nation to defend against the threat of new technology in the hands of its enemies is to avoid making enemies. Failing that, the second best way is to develop the new technology for itself, and learn its strengths, weaknesses, and possibilities. 



James M. (Mike) Snead

unread,
Jan 30, 2023, 5:34:18 PM1/30/23
to Power Satellite Economics
Keith,

I am surprised by your comment. Now why would showing/promoting a video denouncing the Hunter laptop story as Russian propaganda because the viewer lives in a historically red state not be something of interest to the CCP?

The broader issue relates to the videos being promoted to children in China vs the US. The news report was that the ones in the US were largely trivial in nature while those in China promoted study, respect, and personal betterment. Fundamentally, the CCP control of TikTok is letting the evil CCP—and there is no other way to describe the CCP—control the educational, social, and emotional lives of our children outside of the view of the parents.

Today, in the US House of Representatives, a representative gave a floor speech entirely written by lower-case ai. He used it, apparently, to denounce upper-case AI. Now think about this. A politician uses true AI to write speeches crafted to intentionally influence both the politician and the audience toward a point of view consistent with the AI's biases. In queries about the accuracy of statements in the speech, the AI would assure the politician that everything the AI wrote was the absolute truth. And every separate query the politician made to check would be controlled by the AI, so that he would only see what reinforced the truth of the speech.

Isn't this exactly what Google has been doing in censoring searches to reinforce someone's view that Donald Trump and the GOP are evil?

Stupid people take dangerous actions to get high on the thrill. These people often die.

I understand the endorphin rush they seek. I once had such a rush while fencing. Encased in protective clothing and a wire-mesh mask, fencing with a rubber-tipped foil that could not hurt anyone, I got an endorphin rush from the adrenaline and thrill of the match. Only once, however. Some people make stupid judgments to do such things as free climbing, apparently seeking the same thing. Other stupid people use drugs to try to do the same thing.

I don't fret about stupid adults. Children—those younger than around 25—I do fret about because they generally lack the maturity to successfully reject such temptations. The government has a clear moral obligation to protect our children.

The UK's problem with drug addiction goes back a couple of centuries. Like fentanyl, it began with drugs imported into the country.

James M. (Mike) Snead

unread,
Jan 30, 2023, 5:51:38 PM1/30/23
to Power Satellite Economics

Roger,

 

“The best way for a tribe or nation to defend against the threat of new technology in the hands of its enemies is to avoid making enemies. Failing that, the second best way is to develop the new technology for itself, and learn its strengths, weaknesses, and possibilities.”

 

Do we need to develop and weaponize a highly contagious virus with high mortality to know that this should be forbidden and any country undertaking such actions should be considered a mortal enemy? The US response to such threats is to treat them as a weapon of mass destruction warranting a nuclear response if used.

 

In raising this rhetorical question, I fully understand that the US military has for decades undertaken defensive medical research. My expectation is that such research has enabled the US to distinguish what could and could not be a weapon of mass destruction and to develop vaccines to protect troops against less lethal biological agents, whether natural or created.

 

If a distinction between lower-case ai, which executes non-adapting IF-THEN instructions, and upper-case AI, which learns, adapts, and changes, cannot be made, then the ai/AI should be considered to be the latter and ended, as it is dangerous. What is to be done with a child with a propensity for setting fires? When judged a threat, they are locked away.

 

Mike Snead

 

 

 


Keith Henson

unread,
Jan 30, 2023, 7:40:07 PM1/30/23
to James M. (Mike) Snead, Power Satellite Economics
On Mon, Jan 30, 2023 at 2:34 PM 'James M. (Mike) Snead' via Power
Satellite Economics <power-satell...@googlegroups.com>
wrote:
>
> Keith,
>
> I am surprised by your comment. Now why would showing/promoting a video denouncing the Hunter laptop story as Russian propaganda because the viewer lives in a historically red state not be something of interest to the CCP?

That is convoluted beyond reason. Has anything close to this actually happened?

> The broader issue relates to the videos being promoted to children in China vs the US. The news report was that the ones in the US were largely trivial in nature while those in China promoted study, respect, and personal betterment. Fundamentally, the CCP control of TikTok is letting the evil CCP—and there is no other way to describe the CCP—control the educational, social, and emotional lives of our children outside of the view of the parents.

Knowing a little about how Chinese parents regard their kids,
promoting study, respect, and personal betterment seems like something
the parents would support.

> Today, in the US House of Representatives, a representative gave a floor speech entirely written by lower-case ai. He used it, apparently, to denounce upper-case AI. Now think about this. A politician uses true AI to write speeches crafted to intentionally influence both the politician and the audience toward a point of view consistent with the AI's biases. In queries about the accuracy of statements in the speech, the AI would assure the politician that everything the AI wrote was the absolute truth. And every separate query the politician made to check would be controlled by the AI, so that he would only see what reinforced the truth of the speech.

I have written about this in a fictional context

"In one of the last acts of governments before the politicians
abandoned the physical world, the rail system was transferred to NARF,
North American Rail Fans."

"Like most legislation passed around that time it was suspiciously
well-written, especially if you didn't read it closely. NARF
discovered the wording encouraged it to replace tracks that had been
ripped out and even build new tracks."

https://www.terasemjournals.org/GNJournal/GN0202/henson8.html

AI isn't there yet, but I can imagine legislation written by an AI
might be a substantial improvement over what we have now.

> Isn't this exactly what Google has been doing in censoring searches to reinforce someone's view that Donald Trump and the GOP are evil?

Not needed. Would not notice it anyway since I don't use google that much.

However . . . There is a reason why. Notice that the places where
the most xenophobic memes did well are also places that have a recent
history of economic depression. An awfully high fraction of the present
rust belt generation has been beaten over the head with outsourcing and
technological change that makes their future and that of their
children look bleak. In the stone age a bleak outlook (famine) was a
good reason to go to war and kill the neighbors. Humans have an
extremely hard time with this subject, very few of them accept that
humans can have psychological mechanisms that we are not aware of.
AIs don't have this hangup. It may be that AI will have far better
insight into humans than we do.

> Stupid people take dangerous actions to get high on the thrill. These people often die.
>
> I understand the endorphin rush they seek. I once had such a rush while fencing. Encased in protective clothing and a wire-mesh mask, fencing with a rubber-tipped foil that could not hurt anyone, I got an endorphin rush from the adrenaline and thrill of the match.

I would guess more of an adrenaline rush. They are different.

> Only once, however. Some people make stupid judgments to do such things as free climbing, apparently seeking the same thing. Other stupid people use drugs to try to do the same thing.

Depending on what you want, drugs are more reliable.

> I don't fret about stupid adults. Children—those younger than around 25—I do fret about because they generally lack the maturity to successfully reject such temptations. The government has a clear moral obligation to protect our children.
>
> The UK's problem with drug addiction goes back a couple of centuries. Like fentanyl, it began with drugs imported into the country.

I think they just supply their addicted people with opiates. That
ended the market.

https://www.washingtonpost.com/dc-md-va/2022/12/27/courtland-milloy-column-fentanyl/

Roger Arnold

unread,
Jan 30, 2023, 9:09:24 PM1/30/23
to James M. (Mike) Snead, Power Satellite Economics
Mike,

The issue of biological weapons that you raise is an interesting one. It's a good example of why I say the best way to defend against new technology in the hands of enemies is to avoid making enemies. If you can't do that, then the next best defense is to get on top of the technology, and understand its strengths and weaknesses. Often, you'll find ways to counter it or at least minimize the damages if it's used against you. Of course, the two defenses aren't mutually exclusive. Being content to live with partners and allies is, in evolutionary terms, a healthier and more adaptive strategy than paranoia. A felt need to dominate is maladaptive.

The knowledge and technology required to develop potent bioweapons are so widely available that for all practical purposes, they're impossible to suppress. The same knowledge and technology are so valuable in modern medicine that it would be undesirable to suppress them, even if it were possible. We depend strongly on the good will and sanity of the bioscience community to not go rogue. But we also fund secret biosafety level 4 labs where "gain of function" research on viruses and pathogens is conducted. The aim is to better understand the threat that engineered microorganisms present, and hopefully to develop better ways to counter them. 

What's true of biological weapons is doubly true of AI. The knowledge and tools to develop it are even more widely available than the knowledge, equipment, and supplies needed to develop bioweapons. AI methods and applications have enormous commercial value. Our ruling plutocrats and oligarchs are most unlikely to sanction a crusade against it. 

"If a distinction between lower-case ai, that executes non-adapting IF-THEN instructions, and upper-case AI that learns, adapts, and changes cannot be made, then the ai/Ai should be considered to be the latter and ended as it is dangerous."

You appear to subscribe to a school of thought that conflates the ability to learn, adapt, and change with the philosophical / religious notions of agency, free will, and moral sense. I don't. I don't believe there's a mystical threshold of complexity and capability beyond which those attributes somehow emerge. As a former software engineer and systems architect, I can assure you that perfectly ordinary IF-THEN type algorithms are quite capable of learning, adapting, and changing. It just comes down to data and memory. I certainly don't regard the ability to learn, adapt, and change as inherently dangerous. Yes, the algorithms and techniques that are enabling modern AI to advance so quickly are powerful tools. And in the wrong hands, powerful tools can be dangerous. That's nothing new; life is dangerous. It's written into the plot. 
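
[To make this point concrete, here is a minimal sketch, in Python, of a program built from nothing but ordinary IF-THEN control flow and a lookup table, yet whose behavior adapts to the data it has seen. The class and method names are illustrative, not from any library.]

    # A toy "IF-THEN plus memory" learner: it predicts the next word
    # by counting which words followed each word in the text shown to it.
    from collections import defaultdict, Counter

    class TableLearner:
        def __init__(self):
            # word -> counts of the words observed to follow it
            self.table = defaultdict(Counter)

        def learn(self, text):
            words = text.lower().split()
            for a, b in zip(words, words[1:]):
                self.table[a][b] += 1  # "adapting": memory updated by data

        def predict(self, word):
            # plain IF-THEN logic over stored data, nothing more
            if word in self.table:
                return self.table[word].most_common(1)[0][0]
            return None

    learner = TableLearner()
    learner.learn("the cat sat on the mat and the cat slept")
    print(learner.predict("the"))  # -> "cat"; the answer depends on what it was shown

Nothing here crosses any mystical threshold, yet the program's responses change with experience, which is all that "learning, adapting, and changing" need mean.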

You might want to watch this YouTube video about AI. Ignore the clickbait title. It includes clips of Elon Musk, making some points about AI that haven't gotten the play that I think they deserve. One is that advanced AI systems would be very hard to develop from scratch. They have long lineages over many generations, and are partly built upon large sets of training data accumulated over time. So there's a genuine threat of monopolization if one company were able to acquire and lock down rights to the large training data sets that are currently public. Were that to happen, it would be Very Bad. Possession of the keys to advanced AI would enable the corporation owning them to ultimately rule the world.

Paul Werbos

unread,
Jan 31, 2023, 8:22:51 AM1/31/23
to Roger Arnold, James M. (Mike) Snead, Power Satellite Economics
Hi, Roger!

Your comments about biological weapons are ALSO important. I thought of saying more when you made them, but I agreed with most of what you said to Mike, and I had another duty yesterday.

I owe thanks to Keith and others on this list for energizing me in a way which improved what I sent yesterday to the Millennium Project NATO workshop proposal for AGI, a subject hard to explain in a clear enough, balanced enough way in two or three brief paragraphs!

But -- abuse of biotech is also serious, number three on my list of "four great fears for humanity,"
on the seven challenges slide I keep sending around. 

When John Kerry and Secretary General Guterres went on TV to call for a new office on the climate extinction threat, under the Security Council... I counterproposed behind the scenes that they should not give up, and should establish an office with five divisions, to address all four of the existential threats (plus a catchall "other and methods" division). Climate and AGI/Internet/IOT would be the two anchor divisions. There is hope that Xi might accept such a deal, giving Guterres and Kerry what they asked for, in exchange for being serious about the internet/AGI/IOT combination, which demands very serious dialogue and deeper understanding (and two-way conversation and information networks) even more than climate does. SPACE SETTLEMENT IS ON THE LIST OF SEVEN, but under great hopes, not fears!

BUT: if and when  the possibility of such an office starts to emerge... what of the biotech and nuclear risks which are also quite serious? I try NOT to distract from the first two, which already boggle people's minds and cause wild reactions, but for those interested... when it does not detract from the other two...

There has been some interesting dialogue on the biotech threats, which are also a CLUSTER
of related but different things.

The President of Kazakhstan has pushed for a division in the UN system similar to what Kerry was asking for, but for those threats. The Hudson Institute put out a two hour video seminar discussion months ago, which culminated in a presentation from a guy from Atlantic magazine, which was far better and more honest than any of the others or other things I have seen. He described the discussions with the guy from Kazakhstan. As usual in my odd life... well, my wife and I briefly got to hear a presentation by a guy named Alibek from Kazakhstan here at George Mason University, enough to see he has a start in seeing a major part of the field, and we have accidentally bumped into others. In brief, it really is serious, but I felt bad going past space on this list with AGI... even though there ARE connections.

Ironically, as I just got up... the first message was from India connecting to the nuclear risks,
but I would mostly avoid that ANYWHERE public like here. One exception: new, more advanced AGI really could play a crucial role in any new US/Japan/South Korea collaboration to make them more secure from missile attacks. The goals are all in the AGI sphere (seeing and actually stopping attacking missiles); it would have to draw on an aero vehicle component, but not for the NASA market or for public debate (or for SpaceX or SLS). The AGI efforts can be open, unclassified, and even shared with China. We will see what we will see these coming three months.

Best of luck,
   Paul 





James M. (Mike) Snead

unread,
Jan 31, 2023, 11:07:07 AM1/31/23
to Power Satellite Economics

Roger,

 

The starting premise of your argument is fundamentally unsound.

 

“It's a good example of why I say the best way to defend against new technology in the hands of enemies is to avoid making enemies.”

 

I think you need to clarify what you label as an “enemy”. Is the CCP an enemy? They have stated clearly and publicly their intent to subjugate the world. Is Russia an enemy? They invade and threaten nuclear war for no rational reason. How about Iran and North Korea?

 

“We depend strongly on the good will and sanity of the bioscience community to not go rogue. But we also fund secret biosafety level 4 labs where "gain of function" research on viruses and pathogens is conducted. The aim is to better understand the threat that engineered microorganisms present, and hopefully to develop better ways to counter them.”

 

How well does this strategy work? COVID-19 shows that it didn’t. It appears that the enhanced virus got “out” to kill millions.

 

It is easy to argue for some imagined “enormous commercial value” of upper-case AI. This appears to be exactly what Pfizer used as their rationale to do some form of gain-of-function research on dangerous viruses. The ethical failure of such a strategy is quite clear. Hence, the imagined argument of “enormous commercial value” is without merit in the real world, where the consequences of such poor decisions are dangerous.

 

After that point, you appear to rhetorically walk off the cliff, ending with the obvious conclusion that upper-case AI is possibly highly dangerous. My conclusion is that you may be “addicted” to the notion that advanced tech will just turn out to be wonderful given the chance. I imagine someone concluded the same about sending funds to China for gain-of-function research that led to the COVID-19 virus.

 

Mike Snead

Keith Henson

unread,
Jan 31, 2023, 12:24:34 PM1/31/23
to James M. (Mike) Snead, Power Satellite Economics
On Tue, Jan 31, 2023 at 8:07 AM 'James M. (Mike) Snead' via Power
Satellite Economics <power-satell...@googlegroups.com>
wrote:

snip

> How well does this strategy work? COVID-19 shows that it didn’t. It appears that the enhanced virus got “out” to kill millions.

Mike, decades of reading biology papers qualify me to
examine such a statement. There is no evidence that this was anything
but a natural spillover, the same as the first SARS spillover. The
biggest evidence against this paranoid idea is the virus itself.
Consider what natural evolution has done to enhance the virus. Surely
a directed effort would have made a nastier virus.

The natural world gives us enough problems. There is no reason (yet
anyway) to conflate the problems with paranoia.

Keith

Jerome Glenn

unread,
Jan 31, 2023, 8:10:04 PM1/31/23
to Paul Werbos, Roger Arnold, James M. (Mike) Snead, Power Satellite Economics

Paul and colleagues:

 

  1. The UN Sec-Gen’s office will produce a global foresight and threats report in a few years
  2. What that report covers will be influenced by UN Member countries
  3. If the US mission to the UN and/or the Sec of State says they want x, y, and z in the threats report, it will get in.
  4. So, Paul, et al., the most efficient way to get these existential threats taken more seriously is to WRITE Secretary of State Blinken, saying he should insist that x, y, and z are put in the UNSG’s global foresight and risk report, AND give a few paragraphs – saying you can provide more detail if requested.

 

Jerry

Al Globus

unread,
Feb 1, 2023, 1:07:58 AM2/1/23
to Keith Henson, James M. (Mike) Snead, Power Satellite Economics


On Jan 31, 2023, at 9:24 AM, Keith Henson <hkeith...@gmail.com> wrote:

The
biggest evidence against this paranoid idea is the virus itself.
Consider what natural evolution has done to enhance the virus.  Surely
a directed effort would have made a nastier virus.

If the host dies, so does the virus.  There is evolutionary pressure for adaptations that help the host survive, even thrive, although these are harder to evolve.

James M. (Mike) Snead

unread,
Feb 1, 2023, 11:18:26 AM2/1/23
to Power Satellite Economics
Keith,

From the news, qualified biologists disagree with your conclusion.

Mike Snead


Keith Henson

unread,
Feb 1, 2023, 1:28:16 PM2/1/23
to James M. (Mike) Snead, Power Satellite Economics
On Wed, Feb 1, 2023 at 8:18 AM 'James M. (Mike) Snead' via Power
Satellite Economics <power-satell...@googlegroups.com>
wrote:
>
> Keith,
>
> From the news, qualified biologists disagree with your conclusion.

Mike, in case you have not noticed it, the news is stuffed with
nonsense. People who write news stories can always find someone
"qualified."

I read the online version of the NEJM and Science. Neither of these
reliable sources has supported the idea. While there is a
possibility that Covid came out of a biology lab, the odds are way
against it and in favor of a natural spillover, same as the original
version of SARS. Or do you think that one came out of a lab as well?

What is fascinating at the meta level is why such memes circulate. I
make a case that it is an evolved human response to a bleak economic
outlook and an early step on the road to wars.

Keith

a.p.kothari astrox.com

unread,
Feb 1, 2023, 6:13:33 PM2/1/23
to James M. (Mike) Snead, Power Satellite Economics

Granted this is not a PSE topic (neither was AI singularity), but I did not start it! :-)

 

" There is no evidence that this was anything but a natural spillover"

 

There is no evidence that it was a natural spillover either. It is all conjecture and circumstantial evidence.

There is no chance in hell of ever being able to find actual evidence. It has long been destroyed and/or the witnesses killed by the CCP.

Not that such a reaction would not have occurred in other democratic (or not) countries. But the CCP has a leg up (so to speak!) on other countries in being able to maintain a tight seal.

In this I completely agree with Mike.

The fact that Fauci and the NIH supported the gain-of-function work at this lab in Wuhan (and tried to lie and/or hide it) is highly suspicious.

 

Growing up in India, I used to hear all these stories of how honest the Americans were to each other and how honest and incorruptible American and British media were. And they were. (Except the likes of the CIA and parts of the Govt itself.)

Now cold water has been poured on any and all of that. Meaning the fourth estate did not and will not try to find the truth, here or in Russia-gate or... (a myriad of others).

They did not here and will not.

 

I am quite sad.

 

https://www.nature.com/articles/d41586-022-00584-8

 

"Nevertheless, some virologists say that the new evidence pointing to the Huanan market doesn’t rule out an alternative hypothesis. They say that the market could just have been the location of a massive amplifying event, in which an infected person spread the virus to many other people, rather than the site of the original spillover."

 

So we do not know, and will never know, definitively.

 

Most of the Dem-leaning or liberal sites (NYT, WaPo, CNN, MSNBC, Guardian, BBC) will point to spillover. Most on the other side, GOP-leaning (Fox, Newsmax, etc.), will point to lab leak.

 

 

 

-------------------------------------------------------

Dr. Ajay P. Kothari

President

Astrox Corporation

 AIAA Associate Fellow

 

Ph: 301-935-5868

Web:  www.astrox.com

Email: a.p.k...@astrox.com

-------------------------------------------------------


James M. (Mike) Snead

unread,
Feb 2, 2023, 9:48:14 AM2/2/23
to Power Satellite Economics

Last December, Rep. Jim Jordan, now chair of the House Judiciary Committee, and Rep. James Comer, now chair of the House Oversight Committee, announced their intention to investigate the origins of the COVID-19 virus. I expect that these hearings will provide greater insight into the issue of the origin of the virus. So, until then, I shall wait and see.

 

On the issue of AI, an alarming news report emerged in the last couple of days about the growth of the new AI chat app ChatGPT. The article’s author predicted the rapid decline of Google-type searches because the AI chat will eliminate the need to search by telling the user the “correct” answer on the first click.

 

Essentially, the AI will remove the need to be curious to know anything because the AI will be the source of all truth—all that needs to be or is worth knowing. With Musk’s brain implant, why go to school? The AI will directly instruct you in what to do about everything. Thus, the Eloi will be “born”.

 

Fact checkers will merely need to say whether the AI said something was true or false.

 

In short, everything that the AI programmers/teachers want to be truth will become the only truth.

 

I believe Capt Kirk faced this situation in several of the original Star Trek episodes. He wisely saw the need to destroy the AI.

 

The threat of such an AI chat bot to our republic is obvious. The only poll that will need to be taken is of the AI chat bots.

 

Mike Snead

je...@sponable.space

unread,
Feb 2, 2023, 10:30:18 AM2/2/23
to James M. (Mike) Snead, Power Satellite Economics
I should probably avoid this topic (for my sanity). But if it was
released from a Chinese lab, then there is very likely solid intel that
will confirm/has confirmed it (but we'll never see it in the public
forum). Unfortunately the politicization of the issue clouds everything.
So even irrefutable evidence will be ignored by the partisans.

As for AI, if we can avoid emulating human emotions in our AI algorithms
then I'm not sure I see the threat. But I'd prefer not to piss off an
emotional Skynet!




Tim Cash

unread,
Feb 2, 2023, 11:12:37 AM2/2/23
to power-satell...@googlegroups.com

I have to chime in on this one about ChatGPT because I used it in a search to locate low cost RF E-Field Detectors.

It gave me only the obvious answers: nothing available at/under $1.00, and possibly not under $10.00 each in quantities.  In other words, in my opinion, ChatGPT is much like the psychology-program software ELIZA from years back, like a parrot just repeating or regurgitating what it has heard, not smart or AI at all.

I simply do not believe any form of AI is possible until the number of connections surpasses the human brain's; until then, it is not a threat to us.  Certainly not a threat to Google, unless Google has decreased their marketing brilliance in a big way.

I also do not believe that Covid is any sort of conspiracy at all, certainly not planned.

What I do believe and have witnessed repeatedly on this planet is a conspiracy of stupidity, of high-placed individuals taking actions counter to our combined interests.

It brings up the saying "Artificial stupidity is much easier to implement than artificial intelligence".

I do think there are conspiracies of individuals taking actions counter to our combined interests, as witnessed every day in the news.

However, I am trying very hard to take actions in concert with our combined interests, as I understand them.

I have amassed sufficient knowledge and witnessed enough negative actions to try and conduct myself in concert with paying it forward for all of us, as I understand that.  I hope the rest of you will try to do likewise.

The work on the low cost RF Detector, and other low cost power beaming components continues, low cost being a relative term.

The real singularity is our combined stupidity and lack of taking the correct combined actions for the planet, killing off our species in short order.  "We have met the Enemy, and he is US!"


Tim Cash

cash...@gmail.com

-- 
Tim Cash | Sr Systems Engineer
Annapolis, MD
cash...@gmail.com

The ideal Engineer will be about 35 yrs old, have 40 yrs of engineering experience, look like Elvis, walk on water, and have flatulence that smells like Chanel No. 5.

Paul Werbos

unread,
Feb 2, 2023, 11:17:01 AM2/2/23
to James M. (Mike) Snead, Keith Henson, Power Satellite Economics, Dr. Plamen L. Simeonov
As I pack for travel, I do not have time today to give a serious view on covid theories. But I do have time to introduce you to Plamen Simeonov, a serious scientist, a skeptic like you but probing and honest, whom people in this community should meet if they want to probe those issues further.

Plamen previously sent us a video of a seminar by the Hudson Institute which any honest covid skeptic should watch, especially the last hour or two. I hope he sends you all the link, and that you pass it on to the list. Or you could even ask to join HIS list, if you are that interested. 

Ironically, my next task this morning is to reply to an invitation from Wuhan, which I have visited myself, albeit not on disease issues. 

Best of luck, Paul

Gary barnhard

unread,
Feb 2, 2023, 12:03:02 PM2/2/23
to james...@aol.com, Tim Cash, Keith Henson, Paul J. Werbos, plamen.l...@gmail.com, power-satell...@googlegroups.com
Dear Colleagues,

I have read the singularity thread with interest but find myself inclined to the following observations.

To wit . . .

In my opinion, the new AI chat bots are for the most part an investment in mental masturbation.

They are far more akin to their progenitor "Eliza" than anything like what we understand as intelligence.
(https://en.wikipedia.org/wiki/ELIZA for those who may not remember)
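
[For those who may not remember how little machinery ELIZA needed, here is a minimal sketch of the technique in Python: surface pattern matching and template substitution, with no model of meaning behind it. The rules are illustrative, not Weizenbaum's originals.]

    # ELIZA-style responder: match a surface pattern, echo fragments back.
    import re

    RULES = [
        (r"i need (.*)", "Why do you need {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r".*\bmother\b.*", "Tell me more about your family."),
    ]

    def respond(utterance):
        text = utterance.lower().strip(" .!?")
        for pattern, template in RULES:
            match = re.fullmatch(pattern, text)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # stock reply when nothing matches

    print(respond("I need a vacation"))         # -> "Why do you need a vacation?"
    print(respond("Tell me about my mother."))  # -> "Tell me more about your family."

The program has no idea what it is actually doing, in exactly the sense described above; whether today's chat bots differ in kind or only in scale is the crux of the disagreement in this thread.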

The software in question has no idea about what it is actually doing.

Intelligence is founded on understanding what you're capable of doing, what impacts it could have, and why you're doing it.

Their greatest danger is our individual and collective gullibility.

Our willingness to indulge in a suspension of disbelief borne of confirmation bias.

What ultimately matters is the choices we make, large and small. What we choose to care about, what we choose to endow with meaning, what we choose to love.

In any measure that we individually or collectively abdicate responsibility for the same lies the potential for our downfall.

We can and must learn to make better choices, based on understanding.

We do not need to cede control; we need to orchestrate it in a form and manner (e.g., cooperation, collaboration, competition) that allows us to deal more successfully, both individually and collectively, with established and emergent situations.

Ultimately, building systems which tell us what we want to hear rather than what is real has negative survival value.

Links to two of my recent IAC papers and corresponding presentations germane to the subject can be found inline below





I happen to be writing my "second thesis" on this, so I am pleased to discuss any aspect of the same directly or in whatever fora are deemed appropriate.

Ad Astra!

- Gary

P.S. - As for conspiracy theory, The Illuminatus! Trilogy is mandatory reading to establish context for the n-dimensional game of chess, with an arbitrary number of actors that can make an arbitrary number of moves, that we inhabit.  Bottom line is everything may just be a conspiracy "from a certain point of view" ;-)

Given that context, with a keen appreciation of Occam's razor, a sense of humor, and perhaps a reasonable measure of common sense, we can best come to appreciate that the truth oftentimes not only seems stranger than fiction, but actually is.



Gary Pearce Barnhard
President & CEO
Xtraordinary Innovative Space Partnerships, Inc. - XISP-Inc.

<Snip>

John K. Strickland, Jr.

unread,
Feb 2, 2023, 3:58:33 PM2/2/23
to Gary barnhard, james...@aol.com, Tim Cash, Keith Henson, Paul J. Werbos, plamen.l...@gmail.com, power-satell...@googlegroups.com

If an actual singularity (a rapid acceleration of technical progress and practical knowledge) could occur, it would require some kind of AI software and a very fast supercomputer.

There are probably two main types of AI: AI (1), which is super-specialized to work in one narrow area, and AI (2), which is super-generalized to simulate a human and be able to answer questions. There are certainly intermediate types.

Neither of these types probably poses a threat to human civilization, since they are operated by software performing one step after another and thus have no “intent”. They could have “emotion” routines added to them and could express anger, but would not feel anger, as they are not conscious. It is unlikely that a supercomputer running such step-by-step software could become conscious, but one could imitate it very well.

On the other hand, there will eventually be electronic brains, built to emulate, not simulate, the structure of human and animal brains, with massive parallel “processing” and interconnectivity. They would need a number of “nodes” comparable to the number of cells in a human brain. Such systems have the possibility of becoming aware.
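
[For a rough sense of the node counts involved, here is a back-of-envelope calculation in Python; all figures are rounded public estimates, not numbers from this thread.]

    # Back-of-envelope scale comparison; every figure is an order-of-magnitude estimate.
    neurons = 8.6e10            # ~86 billion neurons in a human brain
    synapses_per_neuron = 1e4   # rough average connectivity per neuron
    synapses = neurons * synapses_per_neuron
    print(f"synapses: {synapses:.0e}")  # ~1e15
    # For comparison, 2023-era large language models have very roughly
    # 1e11-1e12 trained weights, i.e., orders of magnitude fewer "connections".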

 

(How simple and small does an animal brain have to be before it cannot support consciousness? If an octopus (with a big brain) can play, it would seem it is conscious, just like many mammals.)

 

So, what is the intent of the electronics industry?

“STOP, DAVE, I’M AFRAID.”

 

 

John S


Roger Arnold

unread,
Feb 3, 2023, 3:18:12 AM2/3/23
to James M. (Mike) Snead, Power Satellite Economics
Mike,

"The starting premise of your argument is fundamentally unsound."

???

Are you referring to my statement that "the best way to defend against new technology in the hands of enemies is to avoid making enemies”? Admittedly, I said that mostly tongue-in-cheek. As in "The best way to avoid getting buried by an avalanche is to be elsewhere when the avalanche runs." Gee, thanks, Mr. Obvious. 

Nonetheless, "avoid making enemies" is not a frivolous defense strategy. It can't be the whole of one's strategy, but avoiding actions that will cause others to see you as an enemy and a threat is a good start. Please bear with me through what follows. It may initially sound like an off-topic socio-political broadside. I promise, though, that it's relevant -- not just to the issues of the singularity and AI that are the subject of this thread, but to the larger topics of Space Solar Power and humanity's future in space.

I'm aware that there's a mindset which holds that trying to avoid making enemies is futile at best. By that mindset, human nature being what it is, there will always be those who will seek to take advantage and steal your property if they're able. At the level of tribes and nations, there will always be other tribes and nations that will be enemies. They will seek to conquer your tribe or nation, appropriate your resources and enslave your population -- if they think they can. Any effort by your own tribe or nation to avoid antagonizing them will be taken as evidence of weakness. It will only encourage those who are in fact already your enemies to believe they can successfully attack. From that perspective, avoiding actions that will cause others to see your nation as an enemy and threat is not only futile, it's outright dangerous.

As you can probably guess, I am not of that mindset. I believe it's based on a flawed understanding of human nature and of how evolutionary forces have shaped life over the course of eons. Evolution is about survival of the fittest, but what constitutes fitness is subtler and more complex than most of us realize.

Years ago, Keith H. wrote about the evolutionary roots of war and tribalism. He pointed out that from an evolutionary perspective, there are circumstances in which it is adaptive for a tribe to war against its neighbors. There are other circumstances where waging war is mal-adaptive. In those circumstances, it is better to relate to neighboring tribes as potential allies and trading partners. So for which behavioral tendency does evolution select? The answer: "both and neither". What it selects for is the flexibility to condition behavior on circumstances. Evolution has endowed us with a diverse suite of behavioral modes and capabilities, along with mechanisms for activating different modes, depending on circumstances. 

Those mechanisms are not necessarily or even primarily conscious. They apply over a range of time scales and involve a range of biochemical machinery. Some are epigenetic and set in-utero during fetal development. Those generally operate over the life of the offspring. Others are set by learning during childhood development. The culture in which a child is reared plays a big role. Still others are neurological and transient, based on an individual's perception of circumstances. 

The mode switch I'm concerned with here -- the "war switch" between tolerance of others in the spirit of cooperation and mutual benefit, vs. demonization of others in preparation for war -- is governed by perceptions of relative scarcity or abundance. The circumstance under which, in the evolutionary milieu, war has been adaptive, is one of acute resource scarcity. Sometimes, there truly isn't enough of a critical resource to go around, and survival really is a case of "us or them". Demonization of the other in that case is an adaptive response that allows the social contract binding a tribe to survive the stress of what's done in war. If we recognized those we're fighting as fellow beings who are essentially the same as us, but we set about killing and enslaving them anyway, what would that do to us? How could we continue to honor the social contract within our own tribe afterward? So we convince ourselves that the others are evil and deserve to be killed. Or weeds that need to be eradicated.

Peace cannot long survive in a climate of fear for the future. Tolerance and a willingness to cooperate with others depend on the absence of fear. Or at least the absence of fear of others as rivals. In the face of natural disasters, it's common for strangers to step in and help the victims, because the victims do not represent threats. More likely, the strangers offering help can imagine themselves in the position of the victims. Natural disasters often bring out the best in humanity. So does confidence in one's ability to cope with whatever the future might bring. While insecurity and fears for the future bring out the worst.

Which brings us around to the question of how all this relates to AI and the Singularity. I contend that technology has brought us to a point that we can no longer afford to lapse into the mindset of scarcity and competition for resources that led to past wars. Our civilization will not survive another world war. We need to recognize that we have the technological capability to secure abundance for all, and render warfare obsolete. Intelligent machines underpin that capability. I don't think we can suppress the development of advanced AI, but if we could, it would spell the end of all we've been striving for.  We may be nearing the end of human civilization on earth anyway, but advanced AI holds the promise of a new age. If we reject it out of fear, then we make the death of civilization in unimaginably destructive war a certainty.



James M. (Mike) Snead

unread,
Feb 3, 2023, 11:33:56 AM2/3/23
to Roger Arnold, Power Satellite Economics

Roger,

 

“So we convince ourselves that the others are evil and deserve to be killed.”

 

Question – Was it necessary to convince oneself that Hitler was evil? Was that not self-evident by his actions? Was not the same true for Stalin, Mao, and numerous others?

 

“I don't think we can suppress the development of advanced AI, but if we could, it would spell the end of all we've been striving for.  We may be nearing the end of human civilization on earth anyway, but advanced AI holds the promise of a new age. If we reject it out of fear, then we make the death of civilization in unimaginably destructive war a certainty.”

 

We have everything needed now to undertake the spacefaring industrial revolution required to employ space solar power. The same is true for building the ground elements. Hence, your argument of an absolute need for advanced AI is just a personal desire without any demonstrated basis of need. Your argument for advanced AI becomes circular: we need advanced AI because I think we need advanced AI.

Tim Cash

unread,
Feb 3, 2023, 1:16:29 PM2/3/23
to Power Satellite Economics
How do you turn ON a Quantum Computer?

The same way you turn one off, by pushing the ON and OFF Buttons at the
same time!

Question: Does one get a BILL for power utilized, or a refund credit in
this case?


Tim

cash...@gmail.com


One last comment on the singularity issue: If or when we think we have a
working AI, will we also solve world hunger, peace, etc. that same day? 
After all, those are all laudable goals for the human race.

Roger Arnold

unread,
Feb 3, 2023, 4:01:07 PM2/3/23
to James M. (Mike) Snead, Power Satellite Economics
Mike,

"Question – Was it necessary to convince oneself that Hitler was evil? Was that not self-evident by his actions? Was not the same true for Stalin, Mao, and numerous others?"

Regarding Hitler, yes, as a matter of fact, Roosevelt's administration had to work hard to rouse public sentiment for entering the war. It's an "inconvenient truth" that we've chosen to forget, but Hitler and his Nazi regime enjoyed quite a bit of support among U.S. business leaders and a significant segment of the public at the time. A larger segment didn't feel strongly about Hitler, but was solidly against getting involved in a war that was seen as Europe's problem and none of our concern. We also tend to overlook the fact that the conditions that led to Hitler's rise to power were a result of the harsh terms imposed on Germany by the Treaty of Versailles after WW I. Many historians now view WW II as a delayed continuation of WW I. And WW I was a classical war for control of strategic resources by competing empires.

None of which is particularly relevant to what I was talking about. Yes, warfare has been endemic to human history, and humanity has managed to survive -- so far. My point is that conditions have changed. The march of technology has now led us to a point where another war is likely to be the end of civilization. Our power to wield destruction has become too great. That was already true when the threat was "only" city-killing nuclear weapons. Now the threat has expanded by the possibility of engineered bio agents, autonomous killer drones and robots, bunker-busting bombs and "rods from god", and who knows what else. We have to find a way to deactivate the "war mode" switch that is latent in all of us. 

A necessary precondition for that is to end the specter of resource conflicts. We must come to see abundance for all of humanity as a real possibility. Advanced AI is not strictly necessary to achieve that; economic abundance has been a real possibility for the better part of a century, had we been willing to rein in rent-seeking and empire building. But advanced AI and the end of wage slavery it will bring can be a potent force in the right direction.

I'm terrified by recent trends that I'm seeing. We appear to be moving in the wrong direction, away from global cooperation under the rule of law and toward greater nationalism and renewed competition for resources. Our leaders seem to have lost their fear of nuclear apocalypse and are actively promoting war. We seem to be recreating the conditions that led up to WW I. And look at how well that ended.

Keith Henson

unread,
Feb 3, 2023, 10:28:03 PM2/3/23
to Roger Arnold, James M. (Mike) Snead, Power Satellite Economics
On Fri, Feb 3, 2023 at 12:18 AM Roger Arnold <silver...@gmail.com> wrote:
>
> Mike,
>
snip

> As you can probably guess, I am not of that mindset. I believe it's based on a flawed understanding of human nature and of how evolutionary forces have shaped life over the course of eons. Evolution is about survival of the fittest, but what constitutes fitness is subtler and more complex than most of us realize.

Yep, particularly inclusive fitness (see below for the Wikipedia article).

> Years ago, Keith H. wrote about the evolutionary roots of war and tribalism. He pointed out that from an evolutionary perspective, there are circumstances in which it is adaptive for a tribe to war against its neighbors.

Sort of. My thinking has moved on some. Evolution does not (and for
logical reasons, cannot) occur at the group level. Not when the
groups are exchanging women who take good genes with them. But you
are correct in that war is social, tribe against tribe.

> There are other circumstances where waging war is mal-adaptive.

Right. War is adaptive only when the alternative is worse (typically
starvation). See analysis below.

> In those circumstances, it is better to relate to neighboring tribes as potential allies and trading partners.

With respect to trading marriage partners, we are forced into this
relationship. Human groups in the past were too small (for ecological
reasons) to avoid the disastrous effects of inbreeding. We have to
swap genes with other groups or die out.

> So for which behavioral tendency does evolution select? The answer: "both and neither". What it selects for is the flexibility to condition behavior on circumstances. Evolution has endowed us with a diverse suite of behavioral modes and capabilities, along with mechanisms for activating different modes, depending on circumstances.

Indeed. I have written about some of them, for example,
https://en.citizendium.org/wiki/Capture-bonding

> Those mechanisms are not necessarily or even primarily conscious. They apply over a range of time scales and involve a range of biochemical machinery. Some are epigenetic and set in-utero during fetal development. Those generally operate over the life of the offspring. Others are set by learning during childhood development. The culture in which a child is reared plays a big role. Still others are neurological and transient, based on an individual's perception of circumstances.
>
> The mode switch I'm concerned with here -- the "war switch" between tolerance of others in the spirit of cooperation and mutual benefit, vs. demonization of others in preparation for war -- is governed by perceptions of relative scarcity or abundance.

A quote from the third Matrix movie: "Nothing can breed violence like scarcity."

To the extent we have a relatively peaceful world, it has been the
engineers and farmers who have kept ahead of the population growth.
See Gregory Clark's "Genetically Capitalist" paper on how this worked
and particularly what led up to the industrial revolution.

> The circumstance under which, in the evolutionary milieu, war has been adaptive, is one of acute resource scarcity. Sometimes, there truly isn't enough of a critical resource to go around, and survival really is a case of "us or them". Demonization of the other in that case is an adaptive response that allows the social contract binding a tribe to survive the stress of what's done in war. If we recognized those we're fighting as fellow beings who are essentially the same as us, but we set about killing and enslaving them anyway, what would that do to us? How could we continue to honor the social contract within our own tribe afterward? So we convince ourselves that the others are evil and deserve to be killed. Or weeds that need to be eradicated.
>
> Peace cannot long survive in a climate of fear for the future. Tolerance and a willingness to cooperate with others depend on the absence of fear. Or at least the absence of fear of others as rivals. In the face of natural disasters, it's common for strangers to step in and help the victims, because the victims do not represent threats. More likely, the strangers offering help can imagine themselves in the position of the victims. Natural disasters often bring out the best in humanity. So does confidence in one's ability to cope with whatever the future might bring. While insecurity and fears for the future bring out the worst.
>
> Which brings us around to the question of how all this relates to AI and the Singularity. I contend that technology has brought us to a point that we can no longer afford to lapse into the mindset of scarcity and competition for resources that led to past wars. Our civilization will not survive another world war. We need to recognize that we have the technological capability to secure abundance for all, and render warfare obsolete. Intelligent machines underpin that capability. I don't think we can suppress the development of advanced AI, but if we could, it would spell the end of all we've been striving for. We may be nearing the end of human civilization on earth anyway, but advanced AI holds the promise of a new age. If we reject it out of fear, then we make the death of civilization in unimaginably destructive war a certainty.
>
I agree with Roger. Technical advances got us into the current
situation and they are the only way we are likely to get out without
widespread disaster. Draft paper below

Genetic Selection for War in Human Populations

H. Keith Henson

Evolution takes place at the level of the gene. A particular gene
increases in frequency if it improves the chance of the gene existing
in the next generation.

We are familiar with behavior (ultimately from genes) where parents
take awful risks, sometimes losing their lives to save their children.
In an environment where such events were common, the gene(s) for doing
so would become more common if (on average) the self-sacrifice of a
parent saved three or more of their children. The parent has one
copy; each of the kids has a 50% chance of carrying the same gene. On
average, more copies survive the event if the parent dies and the three
children live (1.5 copies) than if the parent lives and all three
children die (1 copy). This is an immense simplification; life is far
more complex, but the origin of this behavioral trait should be clear.

If it is not, you might look up Hamilton's rule,
https://en.wikipedia.org/wiki/Kin_selection

". . . a hypothetical gene that prompts behaviour which enhances the
fitness of relatives but lowers that of the individual displaying the
behaviour, may nonetheless increase in frequency, because relatives
often carry the same gene."
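
As a worked check of the parent-sacrifice example, in Hamilton's
notation (the numbers are the ones already used above; the algebra is
the standard rule, not part of this draft):

\[
rB > C: \qquad r = \tfrac{1}{2},\quad B = 3 \text{ (children saved)},\quad
C = 1 \text{ (the parent's own copy)},\qquad
\tfrac{1}{2}\times 3 = 1.5 > 1 .
\]

The sacrifice gene spreads whenever the expected copies saved in
relatives exceed the copy lost in the altruist.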

To analyze the spread of genes that lead to war behavior, we need to
generate a model from the "viewpoint" of such genes in a typical
warrior 50,000 or 100,000 years ago. Tribes in those days were
limited in population by the ability of the environment to provide
food. On average, nearby tribes were around the same size. If two
tribes fought, each had an equal chance of prevailing. In this model,
the winners typically killed all the adult losers and their male
children. The winners incorporated the female children of the losers
into the winner's tribe as wives. (See The Book of Numbers, Chapter
31, verses 7-18, for an account of the aftermath of a war in Biblical
times.)

Wars come about due to a resource crisis. For this model, we will
assume 50% of the tribe will starve in the crisis, the alternative
being to attack neighbors and try to take their resources. How often
such events happened is not part of the model, but they probably
averaged about once a generation.

Turning to the mathematical analysis: the warrior himself has one gene
copy. He typically has six children, half male and half female. (From
what we know, that's about the minimum for a stable population in Stone
Age times.) Each child has a 1/2 chance of carrying the gene(s) for war
behavior. (The model is not very sensitive to the number of children.)

If war behavior is to be evolutionarily favored, the count of gene
copies needs to be higher (on average) after a war than after starving
in place.

For the winners, the gene count for a warrior is four: one for himself
plus 1/2 times six children. Fifty percent starvation reduces this to
two copies, which makes two gene copies the number to exceed if the
behavior for war is to become more common than starving in place.

For the losers, the gene count is 1.5, from the female children that
the winners incorporated into their tribe. That makes the average count
of genes per warrior after a war (4 + 1.5)/2 = 2.75 (using a 50% chance
of winning). The ratio of 2.75 (war) to 2 (starvation) means that going
to war is about 37% better from the gene's viewpoint than starving in
place (in this simple model, of course). That's a big number,
indicating strong selection if the model is close to reality.

Genes for not fighting when attacked rapidly disappear from the
population. Nothing turns on war mode faster than being attacked, for
groups ranging from tribes up to nations (the Pearl Harbor effect).

The driver for this model is starvation due to a resource crisis,
ultimately due to population growth and environmental variation
(mostly weather). Does going to war without looming starvation make
sense? No. Going to war leaves an average of 2.75 gene copies vs. 4
for no war; the ratio of 4 (no war) to 2.75 (war) makes the selection
against going to war about 45% per event (or rather non-event). That
too is a big number, indicating strong selection against going to war
unless the alternative for the genes is worse.

This places the detection of looming starvation under intense
selection to get it right. (A challenging cognitive task.)
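
A minimal sketch in Python of the gene-count bookkeeping above,
assuming exactly the numbers in the model (six children, even odds of
winning, 50% starvation); the function names are illustrative, not from
any published code:

# Gene-counting model for war vs. starving in place, per the text above.
# All parameters come from the model's assumptions, not from data.

def copies_if_win(children=6):
    # The warrior survives with his children: his own copy, plus a
    # 1/2 chance per child of carrying the same gene.
    return 1 + 0.5 * children                          # 4.0

def copies_if_lose(children=6):
    # Adults and male children are killed; daughters (half the
    # children) are absorbed by the winners' tribe, each with a 1/2
    # chance of carrying the gene.
    return 0.5 * (children / 2)                        # 1.5

def expected_after_war(children=6, p_win=0.5):
    return (p_win * copies_if_win(children)
            + (1 - p_win) * copies_if_lose(children))  # 2.75

def expected_if_starving(children=6, survival=0.5):
    # Half the tribe starves in place during the resource crisis.
    return survival * copies_if_win(children)          # 2.0

war = expected_after_war()
starve = expected_if_starving()
peace = copies_if_win()
print(war / starve)  # 1.375 -> war ~37% better than starving in a crisis
print(peace / war)   # ~1.45 -> war ~45% worse than peace, absent a crisis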

How does a tribe go from individual detection of a bleak future to a
mass attack on another tribe?

It’s obvious that attacking another tribe one at a time is a nearly
sure way to be killed. There must be a way to synch up the warriors
into a war party or mob. For humans, memes seem to supply the
coordination. The circulation of xenophobic memes seems to be the step
between perceptions of bleak times a-coming and an attack on neighbors.
I think a large fraction of current-day humans have this psychological
trait. For decades I noted (without understanding) the association of
economic depression with the popularity of neo-Nazi memes, especially
in the US Midwest. I suspect that religions are rooted in xenophobic
memes and that the ability of humans to have religions at all is due
to the selection of psychological traits for war.

Sherwin Gooch

unread,
Feb 4, 2023, 1:58:48 PM2/4/23
to James M. (Mike) Snead, Roger Arnold, Power Satellite Economics
Now I understand why they are killing the chickens.

Artificial scarcity.







Kevin Parkin

unread,
Feb 4, 2023, 2:44:00 PM2/4/23
to Gary barnhard, Keith Henson, Paul J. Werbos, Tim Cash, james...@aol.com, plamen.l...@gmail.com, power-satell...@googlegroups.com
Oh Eliza. It’s been 25 years since I watched a fellow undergrad have a romantic online chat with what he thought was a human, but was actually one of Eliza’s many successors that we had connected to an online chat site as a practical joke.

With the advent of blogs, online debates never really evolved into the thought-provoking, truth-seeking exercises I wanted, instead devolving into trench warfare of logical fallacies and repetition of the same ideas in a thousand guises. What I’ve always wanted is a system that marks up an online debate, recognizing and categorizing each class of argument, flagging logical fallacies, and highlighting what is good and original synthesis.

That hasn’t happened yet. Nor has my Mum’s robot named George that does the cooking and ironing (yet we have self-driving cars and Boston Dynamics’ terrifying recreation of the robots from Terminator). Roombas are still pretty dumb. Nor do I have a spam filter that works all that well. And while we do have Siri etc., they don’t get deep enough into the context of what I am doing to offer a reminder of the right thing at the right time, or to filter the firehose of information I receive so that it reaches me only when I am in a position to act.

So, until that time, I’m not going to worry too much about AI’s capabilities.

Cheers,

Kevin


Paul Werbos

unread,
Feb 4, 2023, 3:12:07 PM2/4/23
to Keith Henson, Roger Arnold, James M. (Mike) Snead, Power Satellite Economics
On another list, linked to the Atlantic Council, I posted this morning:
======================================
Will hot air balloons exterminate humanity?

  This phrasing is a joke -- but something very serious is happening, here and now.

Even as I focus ever more deeply on the list of seven challenges attached...
four of which address very real threats of extinction of the entire human species...

Even as new US-China connections give real hope of preventing the very worst... 

SOMEONE DECIDED TO SEND OUT SPY BALLOONS to try to distract us from the most important issues,
and prevent dialogue.

I was happy that US information networks were quick to calm down a bit, and remind the world that China already has far more information of ANY kind than what these balloons could produce. Spying was not the motive. 

The question is: WHO wanted to distract us, for what reason?

For a short time, some may have believed that people would simply accept the weather balloon story from Xi.
And so the source was quick to squash that, and make sure we stay distracted, by sending one over South America as well.

SO WHO, and WHY?

 But can we avoid letting it distract us from threats to ALL humans' survival, as on the list attached?




Paul Werbos

unread,
Feb 4, 2023, 3:24:23 PM2/4/23
to Kevin Parkin, Gary barnhard, Keith Henson, Tim Cash, james...@aol.com, plamen.l...@gmail.com, power-satell...@googlegroups.com
Yes and no.

On Sat, Feb 4, 2023 at 2:43 PM Kevin Parkin <l36p8...@gmail.com> wrote:
Oh Eliza. It’s been 25 years since I watched a fellow undergrad have a romantic online chat with what he thought was a human, but was actually one of Eliza’s many successors that we had connected to an online chat site as a practical joke.

In 1966, I was one of the early test subjects for Eliza, and I was amazed at the folks who thought it passed the Turing test. The humans who said that would not pass a higher Turing test, which humans vacillate on at best.



So, until that time, I’m not going to worry too much about AI’s capabilities.

Until YOU see it. You have the "advantage" of not having seen a lot of what can be seen in my neighborhood.

In 2000, we had a workshop https://www.werbos.com/SSP2000/SSP.htm where the leaders of the IEEE Robotics and Automation Society, among others, presented the best they had back then. (I have seen more in later DARPA meetings.) 

For the moon -- imagine a swarm of "locusts", of self-reproducing metal robots, with less REAL intelligence than a fish, but the ability to swarm and take over a planet anyway. DOD came closer than you might think to making that real! Enough intelligence to swarm and grow, if not to survive longer-term. I now see apps being developed and deployed on very much that same kind of basis. We COULD do better, but will we? For now, it does not look like we will, but we can pray... and try... and remember that life is a game of probabilities. 

Kevin Parkin

unread,
Feb 4, 2023, 4:19:26 PM2/4/23
to Paul Werbos, Gary barnhard, Keith Henson, Tim Cash, james...@aol.com, plamen.l...@gmail.com, power-satell...@googlegroups.com
In 1966, I was one of the early test subjects for Eliza, and I was amazed at the folks who thought it passed the Turing test.

And how does that make you feel?

(sorry, couldn't resist!)

 Until YOU see it.

Yes, I guess I did invoke myself. AI-assisted discourse might be a better way to leverage a community's knowledge and understanding.

Cheers,

Kevin

James M. (Mike) Snead

unread,
Feb 4, 2023, 5:59:29 PM2/4/23
to Power Satellite Economics
Keith H.,

The hypothesis of a natural circumstance-based reason for the prevalence of a warrior genetic mentality would appear to fail casual examination. Native Americans lived in an isolated continent-spanning culture for over ten thousand years. They fragmented into many tribes and subtribes speaking different languages.

The archaeological record indicates periods of warfare possibly driven by localized or regional starvation if not simple animosity. Yet, when the Spanish began to explore the Southeast US, by the accounts that I have read, they found the native tribes extremely peaceful. This would imply that the genetic warlike preference selection did not occur or was not sustained over many generations.

I believe the simpler explanation is that evil people seek power by whatever means are available. This power creates an uneven distribution of resources to which other immoral people migrate seeking to benefit.

Mike Snead

.

Keith Henson

unread,
Feb 4, 2023, 6:54:18 PM2/4/23
to James M. (Mike) Snead, Power Satellite Economics
On Sat, Feb 4, 2023 at 2:59 PM 'James M. (Mike) Snead' via Power
Satellite Economics <power-satell...@googlegroups.com>
wrote:
>
> Keith H.,
>
> The hypothesis of a natural circumstance-based reason for the prevalence of a warrior genetic mentality would appear to fail casual examination. Native Americans lived in an isolated continent-spanning culture for over ten thousand years. They fragmented into many tribes and subtribes speaking different languages.
>
> The archaeological record indicates periods of warfare possibly driven by localized or regional starvation if not simple animosity. Yet, when the Spanish began to explore the Southeast US, by the accounts that I have read, they found the native tribes extremely peaceful.

Steve LeBlanc (LeBlanc 1999) relates that after expanding
in a warm wet period, the corn-farming culture of the American
Southwest hit bad weather about 1260 CE. As expected from the
model, the tribes started warring with each other. The response
they made of moving into forts (pueblos) made them safer, but
at the same time put much of their farming areas out of reach.
This trapped the tribes in the feedback loop of continuing
privation and war mode for hundreds of years. LeBlanc states
that 23 of 27 groups of tribes vanished, died out or were
absorbed into other tribes. The few surviving groups (Zuni,
Laguna, Hopi, Acoma) were still at war with each other when
the Spanish arrived in the 1500s.

LeBlanc, Steven A. (1999). Prehistoric Warfare in the American
Southwest. Salt Lake City: The University of Utah Press.


> This would imply that the genetic warlike preference selection did not occur or was not sustained over many generations.
>
> I believe the simpler explanation is that evil people seek power by whatever means are available. This power creates an uneven distribution of resources to which other immoral people migrate seeking to benefit.

I don't think you can make a case that evil people have an
evolutionary advantage. But if you want, give it a try.

Keith

> Mike Snead
>
> .
>

Nick Nielsen

unread,
Feb 4, 2023, 7:14:12 PM2/4/23
to Keith Henson, James M. (Mike) Snead, Power Satellite Economics
On Sat, Feb 4, 2023 at 3:54 PM Keith Henson <hkeith...@gmail.com> wrote:

I don't think you can make a case that evil people have an
evolutionary advantage.  But if you want, give it a try.

Keith

If we define "evil people" as individuals possessing a disproportionate endowment of "dark triad" personality traits, then there is some evidence that these traits confer advantage (i.e., are adaptive). In a very narrow study focused on task performance, a result like this was obtained:

"The winner takes it all: The mediating role of competitive orientations in the Dark Triad and sport task performance relationship"
by Robert S Vaughan and Daniel J Madigan
https://pubmed.ncbi.nlm.nih.gov/32940582/

However, I don't think this is very representative of human history on the whole, which is rather more complex than basketball free throws.

On the peoples of the desert southwest, there is a fascinating book, Man Corn: Cannibalism and Violence in the Prehistoric American Southwest by Christy G. Turner, which describes evidence for cannibalism and human sacrifice among peoples of the desert southwest of North America, noting that no such evidence has been found further north, and that there may be a Mesoamerican connection to human sacrifice in the Chaco region.

It seems to me that the most obvious explanation here is that a group from the Mesoamerican region, where such human sacrifice was relatively common (present in a spectacular form among the Aztecs), moved north and came into contact with civilizations of the Chaco region, continuing to practice human sacrifice along the way. Perhaps the Mesoamericans arrived as marauders and were able to impose themselves on local peoples to the point of continuing their tradition of human sacrifice.

The lesson here is that the more brutal and violent social group has an advantage over those less willing to be brutal and violent. I'm pretty sure there is quantitative evidence for this, but I don't have it to hand at the moment.

Best wishes,

Nick


 
 

Narayanan Komerath

unread,
Feb 4, 2023, 7:42:24 PM2/4/23
to Kevin Parkin, Gary barnhard, Keith Henson, Paul J. Werbos, Tim Cash, james...@aol.com, plamen.l...@gmail.com, power-satell...@googlegroups.com
Perhaps this is because companies are creating truly artificial intelligence, based on their definition of how "human intelligence works". IOW the glorious Microsoft Paperclip emulated Microsoft's Customer Service with their Microsoft Answers.

For those who are not well educated in the Classics unlike me: "We were in a helicopter somewhere over Seattle. Fog enveloped the city below, not a spot to see anything. This was long before GPS in aircraft or smartphones. Finally we saw a building sticking up out of the fog with lights inside. We hovered next to the glass. People looked out. We got a posterboard that we were taking to a conference, and scrawled on it: "WHERE ARE WE?" People nodded smartly, waved to us, then started scrambling to gather around a big table. They presented PPT on a screen, marked up several sheets of paper in obvious discussion, then took a vote.

We were running out of fuel, but we waited...

They got a huge posterboard, and painstakingly stuck big colored sheets on it to form the words: "YOU ARE IN A HELICOPTER"

The pilot gave a thumbs-up, set course carefully with protractor and pencil, and started flying, looking at his watch. He went right down through the fog and landed right in the middle of where we should have landed.

"How did you know?"

"Oh that was obviously Microsoft!!"

(may have also seen the big neon sign but let's not go there..)
****************************************************************************

Today's "AI" perfectly mimics that.

nk

John David Galt

unread,
Feb 5, 2023, 3:00:44 PM2/5/23
to power-satell...@googlegroups.com
On 02/03/2023 01:00 PM, Roger Arnold wrote:
> Mike,
>
> /"Question – Was it necessary to convince oneself that Hitler was evil?
> Was that not self-evident by his actions? Was not the same true for
> Stalin, Mao, and numerous others?"/
> /
> /
> Regarding Hitler, yes, as a matter of fact, Roosevelt's administration
> had to work hard to rouse public sentiment for entering the war. It's an
> "inconvenient truth" that we've chosen to forget, but Hitler and his
> Nazi regime enjoyed quite a bit of support among U.S. business leaders
> and a significant segment of the public at the time. A larger segment
> didn't feel strongly about Hitler, but was solidly against getting
> involved in a war that was seen as Europe's problem and none of our
> concern. We also tend to overlook the fact that the conditions that led
> to Hitler's rise to power were a result of the harsh terms imposed on
> Germany by the Treaty of Versailles after WW I. Many historians now view
> WW II as a delayed continuation of WW I. And WW I was a classical war
> for control of strategic resources by competing empires.

An argument that steps on its own toes. Suppose that the US stays out
of WW1 (for instance, suppose Wilson's reaction to the Zimmermann
telegram is to laugh at the very idea of Mexico invading us and
winning). The most likely result is that WW1 ends earlier -- say, in
early fall of 1917, about the time US troops actually started arriving
in enough numbers to matter. We get a new France-Germany border about
where the front lines were at the time (Chateau Thierry), and no
reparations to France. On the east front the Soviet Union is still
formed, but the treaty of Brest-Litovsk stays in place and Germany owns
Poland. *This means that WW2-in-Europe and the Holocaust don't happen.*
The lesson to be drawn from this is that even the best-meaning
interventions by good guys can be self defeating and should be avoided.

> None of which is particularly relevant to what I was talking about. Yes,
> warfare has been endemic to human history, and humanity has managed to
> survive -- so far. My point is that conditions have changed. The march
> of technology has now led us to a point where another war is likely to
> be the end of civilization. Our power to wield destruction has become
> too great. That was already true when the threat was "only" city-killing
> nuclear weapons. Now the threat has expanded by the possibility of
> engineered bio agents, autonomous killer drones and robots,
> bunker-busting bombs and "rods from god", and who knows what else. We
> have to find a way to deactivate the "war mode" switch that is latent in
> all of us. 
>
> A necessary precondition for that is to end the specter of resource
> conflicts. We must come to see abundance for all of humanity as a real
> possibility. Advanced AI is not strictly necessary to achieve that;
> economic abundance has been a real possibility for the better part of a
> century, had we been willing to rein in rent-seeking and empire
> building. But advanced AI and the end of wage slavery it will bring can
> be a potent force in the right direction.

Methinks you are ignoring the elephant in the room. You've created a
golem (self-aware AI) and made it our ruler. What will motivate it?
Can we even know before it's too late to switch it off without a war?

> I'm terrified by recent trends that I'm seeing. We appear to be moving
> in the wrong direction, away from global cooperation under the rule of
> law and toward greater nationalism and renewed competition for
> resources. Our leaders seem to have lost their fear of nuclear
> apocalypse and are actively promoting war. We seem to be recreating the
> conditions that led up to WW I. And look at how well that ended.

What you call global cooperation under the rule of law, those of us who
are resisting the Great Reset call the deliberate destruction of human
civilization by a handful of selfish individuals such as Klaus Schwab
who want most of us dead, the remainder as their serfs, and for life to
go back to being nasty, brutish, and short for everyone but themselves.

Reading assignment for those who don't buy it: Check out what the last
two weeks' worth of Twitter Files leaks have to say about
gain-of-function research. Pfizer execs are disappointed that Covid and
the fake vaccines didn't kill nearly enough of us, and they plan to try
again.

Keith Henson

unread,
Feb 6, 2023, 3:29:17 PM2/6/23
to John David Galt, power-satell...@googlegroups.com
This discussion is what happens when people have been exposed to bleak
future memes for years.

They are attracted to and spread xenophobic memes without much
judgment about whether those memes are connected to the real world.

QAnon is perhaps the worst example, but thinking the WEF is out to get
us and Pfizer is out to kill us with Covid is not far from that.
There are enough real engineering and economics problems; we don't
need outright paranoia.

Please quit.

Keith

Tim Cash

unread,
Feb 6, 2023, 4:03:01 PM2/6/23
to power-satell...@googlegroups.com
This sort of anti-common sense approach makes me thank my lucky stars I
was born to a father with supernatural common sense.

The right approach to enlightening our civilization is to foster human
thinking and dialogue on all fronts, especially with those not of our
way of thinking (China, Russia, North Korea, Iran, etc.).

We must innovate win-win-win scenarios, repeated many times over, where
there is a reason for all of us to have skin in the game.

I am trying to head that way of thinking myself, and it is indeed
difficult to stay engaged.


Tim Cash

cash...@gmail.com

Claudio Cioffi

unread,
Feb 7, 2023, 11:30:35 AM2/7/23
to Tim Cash, power-satell...@googlegroups.com
The singularity topic seems to be a mathematical metaphor. Is there an actual mathematical analysis that has been produced?
If helpful, I can look up and share a mathematical analysis developed not too long ago by the Russian mathematical social scientist Andrey Korotayev.

Claudio Cioffi-Revilla



> On Feb 6, 2023, at 4:03 PM, Tim Cash <cash...@gmail.com> wrote:
>
> This sort of anti-common sense approach makes me thank my lucky stars I was born to a father with supernatural common sense.
>
> --
> Tim Cash | Sr Systems Engineer
> Annapolis, MD
> cash...@gmail.com
>
> The ideal Engineer will be about 35 Yrs old, have 40 yrs of engineering experience, look like Elvis, walk on water, and have flatulence that smells like Chanel No. 5.
>

Keith Henson

unread,
Feb 7, 2023, 11:43:56 AM2/7/23
to Claudio Cioffi, Tim Cash, power-satell...@googlegroups.com
Quite a few years ago Ray Kurzweil did a long analysis of how fast
things are progressing. His conclusion was that progress becomes very
fast indeed around 2045. The date was not certain because there was a
substantial residual, that is, progress in excess of exponential.
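
To make the mathematical metaphor Claudio asked about concrete: the
usual formal version of a singularity (the kind of analysis Korotayev
has published) is hyperbolic rather than exponential growth. As an
illustration of the general idea only, not Kurzweil's actual fit: if a
quantity grows as

\[
\frac{dx}{dt} = \frac{x^2}{C}, \qquad \text{then} \qquad
x(t) = \frac{C}{t_0 - t},
\]

which, unlike an exponential, diverges at the finite time t_0 -- the
"singularity." In data, such growth would show up as exactly the kind
of persistent residual in excess of exponential mentioned above.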

Best wishes,

Keith