Moving goal posts


Stuart LaForge

Sep 1, 2022, 8:25:55 AM
to ExI Chat, extro...@googlegroups.com
As Spike has been saying for a while now, the so-called experts keep
moving the goal posts for AI to be considered intelligent. In an
earlier thread, it was mentioned that AI is not creative enough
outside the boundaries of whatever data it has been trained on.
This has manifested in ever finer hairsplitting over what defines
artificial intelligence. For example, this expert contends
that artificial general intelligence (AGI) is still weak AI, which is
inferior to strong AI, and also that we will never achieve AGI, let
alone strong AI.

https://www.nature.com/articles/s41599-020-0494-4

Abstract

The modern project of creating human-like artificial intelligence (AI)
started after World War II, when it was discovered that electronic
computers are not just number-crunching machines, but can also
manipulate symbols. It is possible to pursue this goal without
assuming that machine intelligence is identical to human intelligence.
This is known as weak AI. However, many AI researchers have pursued the
aim of developing artificial intelligence that is in principle
identical to human intelligence, called strong AI. Weak AI is less
ambitious than strong AI, and therefore less controversial. However,
there are important controversies related to weak AI as well. This
paper focuses on the distinction between artificial general
intelligence (AGI) and artificial narrow intelligence (ANI). Although
AGI may be classified as weak AI, it is close to strong AI because one
chief characteristic of human intelligence is its generality.
Although AGI is less ambitious than strong AI, there were critics
almost from the very beginning. One of the leading critics was the
philosopher Hubert Dreyfus, who argued that computers, who have no
body, no childhood and no cultural practice, could not acquire
intelligence at all. One of Dreyfus’ main arguments was that human
knowledge is partly tacit, and therefore cannot be articulated and
incorporated in a computer program. However, today one might argue
that new approaches to artificial intelligence research have made his
arguments obsolete. Deep learning and Big Data are among the latest
approaches, and advocates argue that they will be able to realize AGI.
A closer look reveals that although development of artificial
intelligence for specific purposes (ANI) has been impressive, we have
not come much closer to developing artificial general intelligence
(AGI). The article further argues that this is in principle
impossible, and it revives Hubert Dreyfus’ argument that computers are
not in the world.
------------------------------------------------------

Meanwhile AI has been ignoring the experts and doing stuff like
pissing off human artists by winning art contests against them.

https://www.vice.com/en/article/bvmvqm/an-ai-generated-artwork-won-first-place-at-a-state-fair-fine-arts-competition-and-artists-are-pissed

"Jason Allen's AI-generated work "Théâtre D'opéra Spatial" took first
place in the digital category at the Colorado State Fair."

The future seems to be shaping up to be humorously incongruous. :)


Stuart LaForge

Gadersd

Sep 1, 2022, 10:51:44 AM
to extro...@googlegroups.com
There are always naysayers to technological advancement. I am reminded of the time when The New York Times published an article claiming that rocket propulsion could not work in space (https://www.forbes.com/sites/kionasmith/2018/07/19/the-correction-heard-round-the-world-when-the-new-york-times-apologized-to-robert-goddard/?sh=3df2379a4543). I have found it most convenient to ignore the naysayers who stick their heads in the sand. Nearsighted people eventually become aware of the thing in the world when they bump into it.

--
You received this message because you are subscribed to the Google Groups "extropolis" group.
To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/20220901052550.Horde.Gvqzhye9vZaFoRd78AbVD29%40sollegro.com.

John Clark

Sep 1, 2022, 2:49:36 PM
to extro...@googlegroups.com
Stuart LaForge <av...@sollegro.com> wrote:

Why general artificial intelligence will not be realized

It's shocking that the editors of Nature allowed an article of such poor scholarship to be published in their journal.

> I argue that the goal cannot in principle be realized, and that the project is a dead end. In the second part of the paper I restrict myself to arguing that causal knowledge is an important part of humanlike intelligence, and that computers cannot handle causality because they cannot intervene in the world. 

That's just dumb. If computers could not intervene in the world they would be useless and thus would not exist because humans would never have bothered to make them. Without the constant intervention of computers the banking system, the stock exchange, the electrical power distribution network, air traffic control, and thousands of other things essential in a modern economy would collapse.  

 > I will argue that the belief that AGI can be realized is harmful

The harm or benefit of a thought does not affect its truth. 

> Plato’s theory of knowledge [blah blah blah]

 The man actually believes that somebody 2500 years ago who didn't know where the sun went at night can help us solve today's cutting edge scientific problems.  

 > Most of the knowledge we apply in everyday life is tacit. In fact, we do not know which rules we apply when we perform a task. Polanyi used swimming and bicycle riding as examples. Very few swimmers know that what keeps them afloat is how they regulate their respiration

And a neural net cannot tell you why it feels that a particular linear sequence of amino acids will fold up into a certain 3D shape, or explain the reasoning why one chess move is better than another, it just knows that it is.

> The bicycle rider keeps his balance by turning the handlebar of the bicycle. To avoid falling to the left, he moves the handlebar to the left, and to avoid falling to the right he turns the handlebar to the right. Thus he keeps his balance by moving along a series of small curvatures. According to Polanyi a simple analysis shows that for a given angle of unbalance, the curvature of each winding is inversely proportional to the square of the speed of the bicycle. But the bicycle rider does not know this, and it would not help him become a better bicycle rider 
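
As an aside, Polanyi's inverse-square relation is easy to sanity-check numerically. Here is a minimal sketch; the proportionality constant is arbitrary and purely illustrative, not something from Polanyi's text:

```python
# Polanyi's claim: for a fixed angle of unbalance, the curvature of each
# corrective winding is inversely proportional to the square of the speed.
# Model it as curvature(v) = k / v**2 for an arbitrary constant k.

def curvature(speed, k=1.0):
    """Curvature of the corrective path at a given speed (arbitrary units)."""
    return k / speed ** 2

# Doubling the speed should cut the required curvature to one quarter.
c_slow = curvature(2.0)
c_fast = curvature(4.0)
print(c_slow / c_fast)  # 4.0 -- twice the speed, one quarter the curvature
```

Of course, as the quote itself says, none of this helps anyone actually ride a bicycle.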

There is no excuse for such sloppiness; the writer should've known better. This video came out more than a year before the above was written:


> AlphaGo showed that computers can handle tacit knowledge, and it therefore looks as if Dreyfus’ argument is obsolete.

Yes 

> However, I will later show that this “tacit knowledge” is restricted to the idealized “world of science”, which is fundamentally different from the human world that Dreyfus had in mind.

No idea what he means by that. 

> The advantage of not having to formulate explicit rules comes at a price, though. In a traditional computer program all the parameters are explicit. This guarantees full transparency. In a neural network this transparency is lost. One often does not know what parameters are used.

Yes, and Einstein couldn't explain how he came up with his ideas either, if he could have then we could just do what he said and then we'd all be as smart as Einstein.  

> will not be able to realize AGI because computers are not in the world.

And so, just before he entered oblivion the last surviving human being turned to the Jupiter Brain and said, "I still think I'm smarter than you". 

John K Clark

Dan TheBookMan

Sep 1, 2022, 6:38:10 PM
to extro...@googlegroups.com
On Sep 1, 2022, at 11:49 AM, John Clark <johnk...@gmail.com> wrote:
> Plato’s theory of knowledge [blah blah blah]

> The man actually believes that somebody 2500 years ago

Minor correction: Plato lived around 2400 years ago. 

> who didn't know where the sun went at night can help us solve today's cutting edge scientific problems.

Ah, you’re perhaps confusing Plato with someone else. He held a basically geocentric view of the universe, along with the notion that the Sun and other celestial objects circled the Earth. This view, given what was known at the time, wasn’t entirely unreasonable. (That said, there were dissenters.) It’s kind of like Darwin’s view of inheritance. Only a sophomore would attack neo-Darwinians based on Darwin’s ignorance of modern genetics. (Granted, many a Creationist will do just that, which only proves my point.) 

However, one can isolate his views on astronomy and the like from the stuff where he did do more original work, such as in epistemology. This is no different than how one can look at John von Neumann’s work in logic and math and ignore his views on politics. The latter are an area where von Neumann was merely repeating RW BS he absorbed from the wider culture, and really he had little he should be remembered for in that area — save that phenomenally brilliant folks can hold ridiculous views too.

And to be sure, folks fancying themselves Platonists today are very _unlikely_ to agree with Plato’s views on astronomy. It’s a sophomoric criticism to lay that upon them. I think you can do better than that.

Regards,

Dan