I think it’s becoming clear that AI is one of the great filters humanity must pass through.
Andrew
To view this discussion on the web visit https://groups.google.com/d/msgid/power-satellite-economics/00ef01d931a7%244a595c10%24df0c1430%24%40aol.com.
Humanity May Reach Singularity Within Just 7 Years, Trend Shows
By one major metric, artificial general intelligence is much closer
than you think.
https://www.popularmechanics.com/technology/robots/a42612745/singularity-when-will-it-happen/
I don't know how seriously to take this. Ray thinks it will happen in the mid 2040s.
Bill,
Recently, a video was released showing a major pharmaceutical company executive explaining how the company worked to increase its profits from COVID vaccines. They are using a version of gain-of-function research, seeking more dangerous virus mutations. It was very clear, IMO, that the aim of increasing profit outweighed common-sense and moral safety considerations.
I disagree that AIs will always be algorithm-driven. An algorithm is essentially an IF-THEN construct that yields the same THEN action for the same IF situation. We have been using such algorithms since the invention of automatic pneumatic braking systems for rail cars in the late 1800s and, even earlier, with pressure regulators for steam boilers. An algorithm-instructed operation, properly engineered, can be tested fully to certify its safety within a prescribed set of circumstances. These circumstances then become the legally accepted operational usage of the system.
Artificial intelligence is different in that the IF-THEN construct is fluid. Thus, it cannot be tested, limited, or certified. It is essentially a gain-of-function operation that, as we now know with COVID, can bring disaster. There is no moral justification, IMO, for permitting artificial intelligence development.
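To make the contrast concrete, here is a minimal, entirely hypothetical sketch (the valve, thresholds, and update rule are all invented for illustration): a fixed IF-THEN rule always maps the same IF to the same THEN, while a "fluid" rule rewrites its own threshold as it runs.

```python
# Hypothetical illustration: a fixed IF-THEN rule versus a "fluid" one.

FIXED_LIMIT = 100.0  # certified once; the rule never changes

def fixed_relief_valve(pressure: float) -> str:
    """Classic algorithm: the same IF always yields the same THEN."""
    return "open" if pressure > FIXED_LIMIT else "closed"

class AdaptiveValve:
    """Adaptive rule: the threshold drifts with observed data, so the
    situation-to-action mapping can change after certification."""

    def __init__(self, limit: float = 100.0):
        self.limit = limit

    def decide(self, pressure: float) -> str:
        action = "open" if pressure > self.limit else "closed"
        # Online "learning": raise the threshold to the highest pressure
        # seen, so the same IF can later produce a different THEN.
        self.limit = max(self.limit, pressure)
        return action

valve = AdaptiveValve()
first = valve.decide(101.0)   # "open" under the original threshold
second = valve.decide(101.0)  # "closed": same input, different action
```

The fixed rule can be exhaustively tested over its certified input range; the adaptive one cannot, because its behavior depends on everything it has seen since deployment. That is the testability gap described above.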
Mike Snead
From: Bill Gardiner <william.w...@gmail.com>
Sent: Thursday, January 26, 2023 11:32 PM
To: James M. (Mike) Snead <james...@aol.com>; Power Satellite Economics <power-satell...@googlegroups.com>; Jerry McLaughlin <drjer...@aol.com>; Holger Isenberg <Holger....@gmail.com>; Cara Boyd (Caroline Leyburn) <caraly...@gmail.com>
Subject: Re: Singularity
Hi Mike,
AIs at their core will always be algorithm-driven, while humans are driven at their core by metaphor and myth, and humans support only a thin veneer of situational and economic logical behavior. As long as the environment is dynamically changing, only irrational (π = 3.14159…), complex (a + bi) humans CAN, but not necessarily WILL, prevail in some manner on some surface along with a supporting environment.
The ultimate role and fate of AI vis-à-vis humanity, in my view, was best expressed by Isaac Asimov in his 1956 short story "The Last Question,"
... (with apologies to Howard Bloom on this thread) and developed ad nauseam by Asimov in his Foundation trilogy and, more succinctly, in his I, Robot series. (His Three Laws of Robotics are almost as good as George Carlin's condensation of the Ten Commandments.)
In short, humanity could and should develop the AI SOON to be an all-apocalypse backup just in time for the Singularity, together with the Spitsbergen seed banks and the like, to reboot civilization-capable life wherever it can take hold again, and remain capable as such, ad infinitum. Whatever else AI does or is capable of would be best known to all in the full light of day.
So, let there be light.
Bill Gardiner
On Thu, Jan 26, 2023, 11:57 AM James M. (Mike) Snead <james...@aol.com> wrote:
The threat of unbounded AI should be obvious. Yet, the benefits of AI would appear to be tremendous.
How do we resolve this problem? One approach is to mandate that all AI software operate only on a new operating system running on a new set of processor hardware intentionally designed to run slowly, so that the AI's "speed of thought" mimics that of a human. This would be essentially the same as the isolation now mandated for dangerous biological research.
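As a software-only cartoon of this proposal (real enforcement would have to live in the hardware and OS, as the message says; the function name and rates here are invented), throttling "speed of thought" amounts to rate-limiting a compute loop:

```python
import time

def throttled_run(step_fn, n_steps, max_steps_per_sec=10.0):
    """Run step_fn repeatedly, never faster than max_steps_per_sec."""
    min_interval = 1.0 / max_steps_per_sec
    results = []
    for i in range(n_steps):
        start = time.monotonic()
        results.append(step_fn(i))
        # Sleep off the remainder of this step's time budget.
        elapsed = time.monotonic() - start
        if elapsed < min_interval:
            time.sleep(min_interval - elapsed)
    return results

# Five trivial "thoughts", capped at 100 steps per second.
outputs = throttled_run(lambda i: i * i, 5, max_steps_per_sec=100.0)
```

The obvious weakness, and presumably why the proposal specifies dedicated hardware, is that a software cap like this can be removed by anyone who controls the software.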
Mike Snead
-----Original Message-----
From: power-satell...@googlegroups.com <power-satell...@googlegroups.com> On Behalf Of Keith Henson
Sent: Thursday, January 26, 2023 1:42 AM
To: Howard Bloom <howl...@aol.com>; Power Satellite Economics <power-satell...@googlegroups.com>
Subject: Singularity
Humanity May Reach Singularity Within Just 7 Years, Trend Shows
By one major metric, artificial general intelligence is much closer than you think.
https://www.popularmechanics.com/technology/robots/a42612745/singularity-when-will-it-happen/
I don't know how seriously to take this. Ray thinks it will happen in the mid 2040s.
Keith
--
You received this message because you are subscribed to the Google Groups "Power Satellite Economics" group.
To unsubscribe from this group and stop receiving emails from it, send an email to power-satellite-ec...@googlegroups.com.
I am already dead; they just forgot to deliver the coroner's report. AGI may kill us, but the best phrase ever stated by Pogo is: "We have met the enemy, and he is us."
Keith,
I disagree. The criminalization of artificial intelligence (AI) and public-facing advanced algorithm-instructed (ai) software is beginning, as people are learning how these are already being abused with TikTok and Google searches. State governments are outlawing TikTok in part because of such abuse.
How do we suppress anything immoral undertaken elsewhere? We assert economic, legal, or physical control as happened in WW II. Once a determination is made that true AI is a threat, then all necessary actions are justified by self-preservation.
Roger,
“The best way for a tribe or nation to defend against the threat of new technology in the hands of its enemies is to avoid making enemies. Failing that, the second best way is to develop the new technology for itself, and learn its strengths, weaknesses, and possibilities.”
Do we need to develop and weaponize a highly contagious virus with high mortality to know that this should be forbidden and that any country undertaking such actions should be considered a mortal enemy? The US response to such threats is to treat them as weapons of mass destruction, warranting a nuclear response if used.
In raising this rhetorical question, I fully understand that the US military has for decades undertaken defensive medical research. My expectation is that such research has enabled the US to distinguish what could and could not be a weapon of mass destruction and to develop vaccines to protect troops against less lethal biological agents, whether natural or created.
If a distinction cannot be made between lower-case ai, which executes non-adapting IF-THEN instructions, and upper-case AI, which learns, adapts, and changes, then the ai/AI should be considered to be the latter and ended, as it is dangerous. What is to be done with a child with a propensity for setting fires? When judged a threat, they are locked away.
Mike Snead
From: Roger Arnold <silver...@gmail.com>
Sent: Monday, January 30, 2023 4:21 PM
To: James M. (Mike) Snead <james...@aol.com>
Cc: Power Satellite Economics <power-satell...@googlegroups.com>
Subject: Re: Singularity
Ars Technica has a really good article on AI and what's behind the dramatic advances that we've been seeing in the last few years. The article is here. It's the best article I've seen that's been written for a general audience (well, the general audience of Ars Technica) by someone who actually works with AI, knows what he's talking about, and is able to explain it clearly. The article indirectly reinforces the point I tried to make in another post: that AI is a powerful tool, but still a tool, and that it's how the tool is used or misused that we should be worrying about.
Roger,
The starting premise of your argument is fundamentally unsound.
“It's a good example of why I say the best way to defend against new technology in the hands of enemies is to avoid making enemies.”
I think you need to clarify what you label an “enemy”. Is the CCP an enemy? They have stated clearly and publicly their intent to subjugate the world. Is Russia an enemy? They invade and threaten nuclear war for no legitimate reason. How about Iran and North Korea?
“We depend strongly on the good will and sanity of the bioscience community to not go rogue. But we also fund secret biosafety level 4 labs where "gain of function" research on viruses and pathogens is conducted. The aim is to better understand the threat that engineered microorganisms present, and hopefully to develop better ways to counter them.”
How well does this strategy work? COVID-19 shows that it didn’t. It appears that the enhanced virus got “out” to kill millions.
It is easy to argue for some imagined “enormous commercial value” of upper-case AI. This appears to be exactly what Pfizer used as its rationale to do some form of gain-of-function research on dangerous viruses. The ethical failure of such a strategy is quite clear. Hence, the imagined argument of “enormous commercial value” is without merit in the real world, where the consequences of such poor decisions are dangerous.
After that point, you appear to rhetorically walk off the cliff, ending with the obvious conclusion that upper-case AI is possibly highly dangerous. My conclusion is that you may be “addicted” to the notion that advanced tech will just turn out to be wonderful, given the chance. I imagine someone concluded the same before sending funds to China for the gain-of-function research that led to the COVID-19 virus.
Mike Snead
Paul and colleagues:
Jerry
On Jan 31, 2023, at 9:24 AM, Keith Henson <hkeith...@gmail.com> wrote:
The biggest evidence against this paranoid idea is the virus itself.
Consider what natural evolution has done to enhance the virus. Surely
a directed effort would have made a nastier virus.
Granted this is not a PSE topic (neither was the AI singularity), but I did not start it! :-)
" There is no evidence that this was anything but a natural spillover"
There is no evidence that it was a natural spillover either. It is all conjecture and circumstantial evidence.
There is no chance in hell of ever being able to find actual evidence. It has long been destroyed and/or the witnesses killed by the CCP.
Not that such a reaction would not have occurred in other democratic (or not) countries. But the CCP has a leg up (so to speak!) on other countries in being able to maintain a tight seal.
In this I completely agree with Mike.
The fact that Fauci and the NIH supported the gain-of-function work at this lab in Wuhan (and tried to lie about and/or hide it) is highly suspicious.
Growing up in India, I used to hear all these stories of how honest the Americans were with each other and how honest and incorruptible the American and British media were. And they were. (Except the likes of the CIA and parts of the government itself.)
Now cold water has been poured on all of that. The fourth estate did not and will not try to find the truth, here or in Russiagate or a myriad of other cases.
I am quite sad.
https://www.nature.com/articles/d41586-022-00584-8
"Nevertheless, some virologists say that the new evidence pointing to the Huanan market doesn’t rule out an alternative hypothesis. They say that the market could just have been the location of a massive amplifying event, in which an infected person spread the virus to many other people, rather than the site of the original spillover."
So we do not know, and will never know, definitively.
Most of the Dem leaning or liberal sites (NYT, WaPo, CNN, MSNBC, Guardian, BBC) will point to spillover. Most on the other side, GOP leaning (FOX, NewsMax etc) will point to lab leak.
-------------------------------------------------------
Dr. Ajay P. Kothari
President
Astrox Corporation
AIAA Associate Fellow
Ph: 301-935-5868
Web: www.astrox.com
Email: a.p.k...@astrox.com
-------------------------------------------------------
Last December, Rep. Jim Jordan, now chair of the House Judiciary Committee, and Rep. James Comer, now chair of the House Oversight Committee, announced their intention to investigate the origins of the COVID-19 virus. I expect that these hearings will provide greater insight into the question of the origin of the virus. So, until then, I shall wait and see.
On the issue of AI, an alarming news report emerged in the last couple of days about the growth of the new AI chat app ChatGPT. The article’s author predicted the rapid decline of Google-type searches because the AI chat will eliminate the need to search by telling the user the “correct” answer on the first click.
Essentially, the AI will remove the need to be curious about anything because the AI will be the source of all truth—all that needs to be or is worth knowing. With Musk’s brain implant, why go to school? The AI will directly instruct you in what to do about everything. Thus, the Eloi will be “born”.
Fact checkers will merely need to say whether the AI said something was true or false.
In short, everything that the AI programmers/teachers want to be truth will become the only truth.
I believe Capt Kirk faced this situation in several of the original Star Trek episodes. He wisely saw the need to destroy the AI.
The threat of such an AI chat bot to our republic is obvious. The only poll that will need to be taken is of the AI chat bots.
Mike Snead
I have to chime in on this one about ChatGPT because I used it in a search to locate low cost RF E-Field Detectors.
It gave me only the obvious answers: nothing available at or under $1.00, and possibly nothing under $10.00 each in quantity. In other words, in my opinion, ChatGPT is much like the old psychology program ELIZA from years back: like a parrot, just repeating or regurgitating what it has heard, not smart or AI at all.
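For anyone who never met it: ELIZA worked by shallow pattern matching and reflecting the user's own words back, nothing more. A toy sketch of the idea (these rules are invented for illustration; Weizenbaum's actual DOCTOR script was larger but no deeper):

```python
import re

# Invented ELIZA-style rules: match a pattern, reflect the words back.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    """Parrot the user's own words back inside a canned template."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when nothing matches

reply = respond("I need a low cost RF detector")
# reply == "Why do you need a low cost RF detector?"
```

There is no model of the world anywhere in this loop, which is the "parrot" point made above; the open question with modern systems is whether vastly more of the same kind of pattern matching becomes something qualitatively different.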
I simply do not believe any form of AI is possible until the number of connections surpasses that of the human brain; until then, it is not a threat to us. It is certainly not a threat to Google, unless Google has decreased its marketing brilliance in a big way.
I also do not believe that Covid is any sort of conspiracy at all, certainly not planned.
What I do believe and have witnessed repeatedly on this planet is a conspiracy of stupidity, of high-placed individuals taking actions counter to our combined interests.
It brings up the saying, "Artificial stupidity is much easier to implement than artificial intelligence."
I do think there are conspiracies of individuals taking actions counter to our combined interests, as witnessed every day in the news.
However, I am trying very hard taking actions in concert with our combined interests, as I understand them.
I have amassed sufficient knowledge and witnessed enough negative actions to try and conduct myself in concert with paying it forward for all of us, as I understand that. I hope the rest of you will try to do likewise.
The work on the low cost RF detector, and other low cost power beaming components, continues, low cost being a relative term.
The real singularity is our combined stupidity and lack of taking the correct combined actions for the planet, killing off our species in short order. "We have met the Enemy, and he is US!"
Tim Cash
--
Tim Cash | Sr Systems Engineer
Annapolis, MD
cash...@gmail.com
The ideal engineer will be about 35 years old, have 40 years of engineering experience, look like Elvis, walk on water, and have flagellants that smell like Chanel No. 5.
If an actual singularity (a rapid acceleration of technical progress and practical knowledge) could occur, it would require some kind of AI software and a very fast supercomputer.
There are probably two main types of AI: AI (1), which is super-specialized to work in one narrow area, and AI (2), which is super-generalized to simulate a human and be able to answer questions. There are certainly intermediate types.
Neither of these types probably poses a threat to human civilization, since they operate by software performing one step after another and thus have no “intent”. They could have “emotion” routines added and could express anger, but they would not feel anger, as they are not conscious. It is unlikely that a supercomputer running such step-by-step software could become conscious, though one could imitate consciousness very well.
On the other hand, there will eventually be electronic brains, built to emulate, not simulate, the structure of human and animal brains, with massively parallel “processing” and interconnectivity. They would need about as many “nodes” as human brains have cells. Such systems have the possibility of becoming aware.
(How simple and small does an animal brain have to be before it cannot support consciousness? If an octopus, with a big brain, can play, it would seem it is conscious, just like many mammals.)
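A back-of-envelope scale check on the "comparable number of nodes" point, using commonly cited round figures (~8.6×10^10 neurons and on the order of 10^14 synapses in a human brain); the bytes-per-synapse figure is an arbitrary assumption for the sake of arithmetic:

```python
# Rough arithmetic only; all figures are order-of-magnitude estimates.
HUMAN_NEURONS = 8.6e10    # commonly cited estimate
HUMAN_SYNAPSES = 1.0e14   # connections: the harder target to emulate

# Assume (arbitrarily) 4 bytes of state per synapse in a naive emulation.
BYTES_PER_SYNAPSE = 4
synapse_state_tb = HUMAN_SYNAPSES * BYTES_PER_SYNAPSE / 1e12

print(f"Synapse state alone: ~{synapse_state_tb:.0f} TB")  # ~400 TB
print(f"Neuron count: ~{HUMAN_NEURONS:.1e}")
```

Even before any dynamics, merely holding the connection state of an emulated brain runs to hundreds of terabytes under these assumptions, which suggests why emulation, as opposed to simulation shortcuts, remains a distant prospect.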
So, what is the intent of the electronics industry?
“STOP, DAVE, I’M AFRAID”.
John S
Keith,
“So we convince ourselves that the others are evil and deserve to be killed.”
Question – Was it necessary to convince oneself that Hitler was evil? Was that not self-evident by his actions? Was not the same true for Stalin, Mao, and numerous others?
“I don't think we can suppress the development of advanced AI, but if we could, it would spell the end of all we've been striving for. We may be nearing the end of human civilization on earth anyway, but advanced AI holds the promise of a new age. If we reject it out of fear, then we make the death of civilization in unimaginably destructive war a certainty.”
We have everything now needed to undertake the spacefaring industrial revolution required to deploy space solar power. The same is true for building the ground elements. Hence, your argument for the absolute need for advanced AI is just a personal desire without any demonstrated basis. Your argument for advanced AI becomes circular: we need advanced AI because I think we need advanced AI.
Oh, ELIZA. It’s been 25 years since I watched a fellow undergrad have a romantic online chat with what he thought was a human but was actually one of ELIZA’s many successors, which we had connected to an online chat site as a practical joke.
So, until that time, I’m not going to worry too much about AI’s capabilities.
In 1966, I was one of the early test subjects for ELIZA, and I was amazed at the folks who thought it passed the Turing test.
Until YOU see it.
I don't think you can make a case that evil people have an
evolutionary advantage. But if you want, give it a try.
Keith
On the peoples of the desert southwest, there is a fascinating book, Man Corn: Cannibalism and Violence in the Prehistoric American Southwest by Christy G. Turner, which describes evidence for human sacrifice among the peoples of the desert southwest of North America, noting that no such evidence of cannibalism has been found further north and that there may be a Mesoamerican connection to human sacrifice in the Chaco region.
It seems to me that the most obvious explanation here is that a group from the Mesoamerican region, where such human sacrifice was relatively common (present in a spectacular form among the Aztecs), moved north and came into contact with the civilizations of the Chaco region, continuing to practice human sacrifice as they went. Perhaps the Mesoamericans arrived as marauders and were able to impose themselves on local peoples to the point of continuing their tradition of human sacrifice.
The lesson here is that the more brutal and violent social group has an advantage over those less willing to be brutal and violent. I'm pretty sure there is quantitative evidence for this, but I don't have it to hand at the moment.
Best wishes,
Nick