Quantum Computing: Between Hope and Hype


John Clark

Sep 27, 2024, 12:36:21 PM
to extro...@googlegroups.com, 'Brent Meeker' via Everything List
It looks like conventional Superintelligence is not the only revolution that's going to make our world almost unrecognizable before 2030 or so. Scott Aaronson has been working in the field of quantum computing since the late 1990s, but he has always strongly objected to the hype surrounding it; for years he said practical quantum computers might not be possible, and even if they were he didn't expect to see one in his lifetime. But I noticed Aaronson's tone started to change about two years ago, and he now thinks we will either have a practical quantum computer very soon or we will discover something new and fundamental about quantum mechanics that renders such a thing impossible. He says "Let’s test quantum mechanics in this new regime. And if, instead of building a QC, we have to settle for “merely” overthrowing quantum mechanics and opening up a new era in physics—well then, I guess we’ll have to find some way to live with that".

The following are more quotations from Aaronson's latest blog but I think it would be well worth your time to read the entire thing: 

"If someone asks me why I’m now so optimistic, the core of the argument is 2-qubit gate fidelities. We’ve known for years that, at least on paper, quantum fault-tolerance becomes a net win (that is, you sustainably correct errors faster than you introduce new ones) once you have physical 2-qubit gates that are ~99.99% reliable. The problem has “merely” been how far we were from that. When I entered the field, in the late 1990s, it would’ve been like a Science or Nature paper to do a 2-qubit gate with 50% fidelity. But then at some point the 50% became 90%, became 95%, became 99%, and within the past year, multiple groups have reported 99.9%. So, if you just plot the log of the infidelity as a function of year and stare at it—yeah, you’d feel pretty optimistic about the next decade too!
Or pessimistic, as the case may be! To any of you who are worried about post-quantum cryptography—by now I’m so used to delivering a message of, maybe, eventually, someone will need to start thinking about migrating from RSA and Diffie-Hellman and elliptic curve crypto [which bitcoin uses] to lattice-based crypto, or other systems that could plausibly withstand quantum attack. I think today that message needs to change. I think today the message needs to be: yes, unequivocally, worry about this now. Have a plan."


John K Clark    See what's on my new list at  Extropolis


PGC

Sep 27, 2024, 1:45:32 PM
to Everything List
Even though I share the security concerns at this point: improved gate fidelity is a necessary but not sufficient condition for building a scalable, fault-tolerant quantum computer. As physicist Mikhail Dyakonov has cautioned, there are profound theoretical and practical obstacles that remain unresolved. Issues such as error correction, decoherence, and the physical scalability of qubit systems pose significant challenges. The threshold theorem suggests that below a certain error rate, quantum error correction can, in theory, make quantum computation feasible. However, the overhead in terms of additional qubits and operations required for error correction is enormous. Peter Shor himself has acknowledged that the resources needed for practical quantum error correction are daunting with current technology.
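The scale of that overhead is easy to illustrate. Here is a minimal sketch, assuming the commonly cited rotated-surface-code figure of roughly 2d² − 1 physical qubits per logical qubit at code distance d; the 4,000-logical-qubit figure for a cryptographically relevant computation is likewise just an illustrative assumption, not a number from this thread:

```python
# Order-of-magnitude illustration of quantum error-correction overhead.
# Assumes the rotated surface code's ~(2*d^2 - 1) physical qubits per
# logical qubit at code distance d; the logical-qubit count used below
# is a placeholder ballpark, not a precise resource estimate.

def physical_qubits(logical_qubits: int, distance: int) -> int:
    """Physical qubits needed under the 2*d^2 - 1 surface-code estimate."""
    return logical_qubits * (2 * distance ** 2 - 1)

for d in (17, 25, 33):
    print(f"d={d}: {physical_qubits(4000, d):,} physical qubits for 4,000 logical")
```

Even at modest code distances this lands in the millions of physical qubits, which is the point skeptics keep making.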

Aaronson's enthusiasm is reminiscent of earlier hype cycles in technology. For instance, his optimistic views on artificial intelligence did not always engage deeply with how general reasoning abilities were being achieved or demonstrated! He was fully on the hype train there. This pattern raises concerns about the balance between genuine technological progress and premature excitement that may not fully account for underlying complexities.

Other experts advocate for a more measured perspective. Gil Kalai, for example, has been a vocal skeptic about the scalability of quantum computers, emphasizing that quantum error rates might not be reducible to the levels required for practical machines. His arguments suggest that noise and decoherence could be fundamental barriers, not just engineering challenges to be overcome with incremental improvements.

John Clark

Sep 27, 2024, 2:51:50 PM
to everyth...@googlegroups.com
On Fri, Sep 27, 2024 at 1:45 PM PGC <multipl...@gmail.com> wrote:

 there are profound theoretical and practical obstacles that remain unresolved. Issues such as error correction, decoherence, and the physical scalability of qubit systems

As Aaronson explains, for quantum error correction to kick in you need about 99.99% reliability; otherwise you create more errors than you correct. 25 years ago the reliability was about 50%; today it's 99.9%. And there's no indication that the rate of improvement is about to stop or even slow down. I suggest you read Aaronson's entire article; he addresses many of your other concerns, and those of Gil Kalai.
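Aaronson's "plot the log of the infidelity and stare at it" argument can be sketched numerically. The (year, fidelity) points below are rough guesses matching the milestones mentioned in the thread, not measured data, so the extrapolated year is purely illustrative:

```python
# Sketch of the "plot the log of the infidelity" trend argument.
# The (year, fidelity) points are rough illustrative guesses, not data.
import math

points = [(1998, 0.50), (2008, 0.90), (2014, 0.99), (2024, 0.999)]
years = [y for y, _ in points]
log_inf = [math.log10(1.0 - f) for _, f in points]  # log10 of infidelity

# Least-squares slope of log10(infidelity) per year.
n = len(points)
mean_y = sum(years) / n
mean_l = sum(log_inf) / n
slope = sum((y - mean_y) * (l - mean_l) for y, l in zip(years, log_inf)) \
        / sum((y - mean_y) ** 2 for y in years)

# Extrapolate from the latest point to the ~99.99% threshold
# (infidelity 1e-4, i.e. log10 = -4).
years_ahead = (-4 - log_inf[-1]) / slope
print(f"slope: {slope:.3f} log-units/year, threshold around {2024 + years_ahead:.0f}")
```

With these made-up anchor points the fit crosses the 99.99% threshold within roughly a decade, which is the shape of the optimists' case; skeptics would dispute extrapolating the line at all.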
Aaronson's enthusiasm is reminiscent of earlier hype cycles in technology. For instance, his optimistic views on artificial intelligence 

Huh? Aaronson was never on the AI hype train, but on his blog I criticized him for NOT being on it. He said he was very surprised at the extraordinarily rapid development of AI during the last two years but said "Even with hindsight, I don’t know of any principle by which I should’ve predicted what happened". We then had the following dialogue: 

ME: But you knew that Albert Einstein went from understanding precisely nothing in 1879 to being the first man to understand General Relativity in 1915, and you knew that the human genome only contains 750 megs of information, and yet that is enough information to construct an entire human being. So whatever the algorithm that allowed Einstein to extract information from his environment was, it must have been much much less than 750 megs. That's why I've been saying for years that super-intelligence could be achieved just by scaling things up: no new scientific discovery was needed, just better engineering. Quantity was needed, not quality, although I admit I was surprised it happened so fast because I thought more scaling up would be required.

Aaronson: “Knowing that an algorithm takes at most 750MB (!) to describe doesn’t place any practical upper bound on how long it might take to discover that algorithm!”

Me: I say why not? We know for a fact that the human genome is only 750 MB (3 billion base pairs, there are 4 bases, so each base can represent 2 bits and there are 8 bits per byte) and we know for a fact it contains a vast amount of redundancy (for example 10,000 repetitions of ACGACGACGACG) and we know it contains the recipe for an entire human body, not just the brain, so the technique the human mind uses to extract information from the environment must be pretty simple, vastly less than 750 MB. I’m not saying an AI must use that exact same algorithm but it does tell us that such a simple thing must exist. For all we know an AI might be able to find an even simpler algorithm, after all random mutation and natural selection managed to find it so it’s not unreasonable to suppose that an intelligence might be able to do even better.

Aaronson: Come on! 256^750,000,000 is vastly greater than the number of possibilities one could search through within the lifetime of the universe.

Me: I agree, and yet it's a fact that random mutation and natural selection managed to stumble upon it in only about 500 million years. The only conclusion one can derive from that is that there must be a VAST number of algorithms that work just as well or better than the one that Evolution found. And if Evolution found one that worked, I'm certain intelligence can find one too, and could do so in a lot less than 500 million years, because evolution is a slow, extremely inefficient and cruel way to create complex objects; but until it finally got around to making a brain it was the only way to do it.
Also, 750 MB is just the upper limit; the real number must be much much less.
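The genome arithmetic in the exchange above is easy to check:

```python
# Checking the arithmetic: 3 billion base pairs, 4 possible bases,
# so each base encodes log2(4) = 2 bits, and there are 8 bits per byte.
base_pairs = 3_000_000_000
bits = base_pairs * 2
megabytes = bits / 8 / 1_000_000
print(megabytes)  # 750.0
```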

John K Clark    See what's on my new list at  Extropolis


--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/8b504eea-4390-444b-8be2-e096d1872bc3n%40googlegroups.com.

Brent Meeker

Sep 28, 2024, 12:19:07 AM
to everyth...@googlegroups.com



On 9/27/2024 11:51 AM, John Clark wrote:
On Fri, Sep 27, 2024 at 1:45 PM PGC <multipl...@gmail.com> wrote:

 there are profound theoretical and practical obstacles that remain unresolved. Issues such as error correction, decoherence, and the physical scalability of qubit systems

As Aaronson explains, for quantum error correction to kick in you need about 99.99% reliability; otherwise you create more errors than you correct. 25 years ago the reliability was about 50%; today it's 99.9%. And there's no indication that the rate of improvement is about to stop or even slow down. I suggest you read Aaronson's entire article; he addresses many of your other concerns, and those of Gil Kalai.


Aaronson's enthusiasm is reminiscent of earlier hype cycles in technology. For instance, his optimistic views on artificial intelligence 

Huh? Aaronson was never on the AI hype train, but on his blog I criticized him for NOT being on it. He said he was very surprised at the extraordinarily rapid development of AI during the last two years but said "Even with hindsight, I don’t know of any principle by which I should’ve predicted what happened". We then had the following dialogue: 

ME: But you knew that Albert Einstein went from understanding precisely nothing in 1879 to being the first man to understand General Relativity in 1915, and you knew that the human genome only contains 750 megs of information, and yet that is enough information to construct an entire human being.
Also takes certain nourishment and environment, which is not zero information.

So whatever the algorithm that allowed Einstein to extract information from his environment was, it must have been much much less than 750 megs. That's why I've been saying for years that super-intelligence could be achieved just by scaling things up: no new scientific discovery was needed, just better engineering. Quantity was needed, not quality, although I admit I was surprised it happened so fast because I thought more scaling up would be required.

Aaronson: “Knowing that an algorithm takes at most 750MB (!) to describe doesn’t place any practical upper bound on how long it might take to discover that algorithm!”

Me: I say why not? We know for a fact that the human genome is only 750 MB (3 billion base pairs, there are 4 bases, so each base can represent 2 bits and there are 8 bits per byte) and we know for a fact it contains a vast amount of redundancy (for example 10,000 repetitions of ACGACGACGACG) and we know it contains the recipe for an entire human body, not just the brain, so the technique the human mind uses to extract information from the environment must be pretty simple, vastly less than 750 MB. I’m not saying an AI must use that exact same algorithm but it does tell us that such a simple thing must exist. For all we know an AI might be able to find an even simpler algorithm, after all random mutation and natural selection managed to find it so it’s not unreasonable to suppose that an intelligence might be able to do even better.

Aaronson: Come on! 256^750,000,000 is vastly greater than the number of possibilities one could search through within the lifetime of the universe.

Me: I agree, and yet it's a fact that random mutation and natural selection managed to stumble upon it in only about 500 million years.
More like 3.5 billion years.  First life evolved then eukaryotic life evolved then...

Brent

The only conclusion one can derive from that is that there must be a VAST number of algorithms that work just as well or better than the one that Evolution found. And if Evolution found one that worked, I'm certain intelligence can find one too, and could do so in a lot less than 500 million years, because evolution is a slow, extremely inefficient and cruel way to create complex objects; but until it finally got around to making a brain it was the only way to do it.
Also, 750 MB is just the upper limit; the real number must be much much less.


John Clark

Sep 28, 2024, 7:53:30 AM
to everyth...@googlegroups.com
On Sat, Sep 28, 2024 at 12:19 AM Brent Meeker <meeke...@gmail.com> wrote:

 >> Albert Einstein went from understanding precisely nothing in 1879 to being the first man to understand General Relativity in 1915, and you knew that the human genome only contains 750 megs of information, and yet that is enough information to construct an entire human being.
 
Also takes certain nourishment and environment, which is not zero information.

Nourishment is not information, and energy will not be a problem for an AI; that's why God made nuclear reactors.

>> Aaronson: Come on! 256^750,000,000 is vastly greater than the number of possibilities one could search through within the lifetime of the universe.
Me: I agree, and yet it's a fact that random mutation and natural selection managed to stumble upon it in only about 500 million years.

More like 3.5 billion years.  

If we're talking about intelligence then in the 3.5 billion year history of life the first 3 billion years were irrelevant because during that time there simply wasn't any.  It was only about 500 million years ago that multicellular animals came on the scene and anything even vaguely resembling a "brain" evolved that was able to extract information from the environment and use that information to improve the animal's chances of getting its genes into the next generation. 

Oh I suppose you could say that even a single-celled creature can move away from something that's too hot or too cold, but if you count that as intelligence then you'd also have to say a thermostat is intelligent. And if you do that then the word starts to lose its meaning.

First life evolved then eukaryotic life evolved then...

There is certainly no reason for modern software engineers to repeat all of the dead ends, irrelevancies and downright silliness that Evolution dreamed up during the last 3.5 billion years! Evolution is a TERRIBLE engineer: it makes stupid designs (as expected for something involving random mutation), it's ridiculously slow, and it requires gargantuan resources. It's not strictly relevant, but Natural Selection is also hideously cruel.

John K Clark    See what's on my new list at  Extropolis

Brent Meeker

Sep 28, 2024, 5:58:15 PM
to everyth...@googlegroups.com



On 9/28/2024 4:52 AM, John Clark wrote:
On Sat, Sep 28, 2024 at 12:19 AM Brent Meeker <meeke...@gmail.com> wrote:

 >> Albert Einstein went from understanding precisely nothing in 1879 to being the first man to understand General Relativity in 1915, and you knew that the human genome only contains 750 megs of information, and yet that is enough information to construct an entire human being.
 
Also takes certain nourishment and environment, which is not zero information.

Nourishment is not information,
Sure it is, not all calories are equal. And growing an embryo into a baby isn't done in your kitchen sink.

and energy will not be a problem for an AI, that's why God made nuclear reactors.  

>> Aaronson: Come on! 256^750,000,000 is vastly greater than the number of possibilities one could search through within the lifetime of the universe.
Me: I agree, and yet it's a fact that random mutation and natural selection managed to stumble upon it in only about 500 million years.

More like 3.5 billion years.  

If we're talking about intelligence then in the 3.5 billion year history of life the first 3 billion years were irrelevant because during that time there simply wasn't any. 
First, even bacteria and archaea exhibit rudimentary intelligence, so advanced intelligence wasn't built on nothing. Rocks had the same start 3.5 billion years ago; they didn't develop intelligence.



It was only about 500 million years ago that multicellular animals came on the scene and anything even vaguely resembling a "brain" evolved that was able to extract information from the environment and use that information to improve the animal's chances of getting its genes into the next generation. 

Oh I suppose you could say that even a single-celled creature can move away from something that's too hot or too cold, but if you count that as intelligence then you'd also have to say a thermostat is intelligent.
I do.  Intelligence admits of degrees.


And if you do that then the word starts to lose its meaning.
Not at all. Does weight lose its meaning because you're a lot heavier than a bacterium?


First life evolved then eukaryotic life evolved then...

There is certainly no reason for modern software engineers to repeat all of the dead ends, irrelevancies and downright silliness that Evolution dreamed up during the last 3.5 billion years! Evolution is a TERRIBLE engineer: it makes stupid designs (as expected for something involving random mutation), it's ridiculously slow, and it requires gargantuan resources. It's not strictly relevant, but Natural Selection is also hideously cruel.
But evolution developed the intelligence we have; a lot of silliness succumbed to natural selection. The funny thing is that software engineering seems to have fallen into a form of unnatural selection as they train bigger and bigger LLMs, so that they no longer understand what they've "engineered".

Brent



PGC

Sep 30, 2024, 7:54:17 AM
to Everything List
Despite marketing efforts hyping superintelligence around the corner, advanced reasoning abilities, etc., I don't see much more than folks beating disingenuous benchmarks by modifying training sets, memory, and all manner of parameters specific to the domain of the benchmark being tested. Yes, there was a jump in what we thought was the state of the art, and yes, I see the incremental changes from there. But I don't see the hype/budgets justified yet, even though I can't tell what will happen if you throw half the planet's power into a city-sized datacenter. Anecdotally, most software engineers I've spoken to echo the following:


But maybe we need to train these things for some years on domains that are not as narrow as advertising or chess/go, where you can have the programs play against themselves a billion times in a day with clear results/reward schemes. For those broader domains, real data is the currency. I'm not sure we should keep feeding it to Silicon Valley for free, only to have them charge us later for tools refined by users and their real data.

John Clark

Sep 30, 2024, 8:34:34 AM
to everyth...@googlegroups.com
On Mon, Sep 30, 2024 at 7:54 AM PGC <multipl...@gmail.com> wrote:

Despite marketing efforts hyping superintelligence around the corner, advanced reasoning abilities etc. I don't see much more than folks beating disingenuous benchmarks by modifying training sets, memory, and all manner of parameters, specific to domain of benchmark being tested.

It sounds like whistling past the graveyard to me... but whatever helps you get through the day.
 
  John K Clark    See what's on my new list at  Extropolis




PGC

Oct 1, 2024, 7:23:52 AM
to Everything List
I don't care what my statements sound like; it's about the argument. I'm not making statements like "superintelligence is around the corner", in which case the burden of proof would lie with those hyping such statements. The exchange with Brent is instructive: can a human-level intelligence be separated from its arguable 3.5-billion-year history? Wouldn't that have to be accounted for?

If the current state of development is any indicator, where they keep enlarging the mathematical linguistic context which informs the response, then that's a lot of data for just one AI, even if you argue that the early stages of the planet are not necessary. And then superintelligence demands something like "can accomplish arbitrary tasks/problems much better than a human and/or all humans". The only phenomenon we have evidence for that has reached that level is the development of civilization and science by billions of lifeforms leading up to humans over, taking your figure, 500 million years.

And to demonstrate that somebody is on the path towards modelling and/or surpassing that, you'd need to show how. I'm not sure adding verbal/mathematical memory suffices. Note that I have never denied that AI could become incredibly competent at domain-specific tasks; it's doing so. But superintelligence, if there even were consensus on what that is... is a much taller order.

John Clark

Oct 1, 2024, 8:51:38 AM
to everyth...@googlegroups.com
On Tue, Oct 1, 2024 at 7:23 AM PGC <multipl...@gmail.com> wrote:

I don't care what my statements sound like. It's about the argument. I'm not making statements like "superintelligence is around the corner",

I would maintain it's physically impossible to overhype the importance of artificial intelligence.

in which case the burden of proof lies with those hyping those statements.

There is no burden; things are just heading in an inevitable direction and, short of starting a thermonuclear war, nobody is going to be able to stop it.

 
The exchange with Brent is instructive: can a human level intelligence be separated from its arguable 3.5 billion year history?

Yes.  

Wouldn't that have to be accounted for? 

No. 

If the current state of development is any indicator, where they keep enlarging the mathematical linguistic context which informs the response, then that's a lot of data for just one AI, even if you argue that early stages of the planet are not necessary.

True, that's a lot of data, but I don't see your point. For over a decade the amount of computational ability that an AI has at its disposal has been doubling every six months; that's considerably faster than Moore's law, and there is no indication that's gonna stop anytime soon. And that's not all: due to improvements in software, improvements largely caused by the AIs themselves rather than the humans, who have only a hazy understanding of what's going on, every 8 months an AI can reach the same benchmarks using only half the computational power.
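Taking the claimed growth rates at face value, a back-of-envelope comparison over one decade (the doubling periods are the figures asserted in this thread, not independently verified):

```python
# Back-of-envelope comparison of the growth rates claimed above,
# taken at face value over one decade (120 months).
decade = 120

compute = 2 ** (decade / 6)       # AI compute: doubling every 6 months
moore = 2 ** (decade / 24)        # Moore's law: doubling every ~2 years
efficiency = 2 ** (decade / 8)    # same benchmark on half the compute every 8 months

print(f"AI compute over a decade:  x{compute:,.0f}")
print(f"Moore's law over a decade: x{moore:,.0f}")
print(f"software efficiency:       x{efficiency:,.0f}")
```

A six-month doubling compounds to roughly a million-fold over ten years, versus about 32x for Moore's law, which is why the two claims lead to such different expectations.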

 
And then superintelligence demands something like "can accomplish arbitrary tasks/problems much better than a human and/or all humans". The only phenomenon that has reached that level that we have evidence for is the development of civilization and science by billions of lifeforms reaching humans over, taking your figure, 500 million years.
 

True. And that is precisely why I say it is physically impossible to overhype the importance of AI.  

And to demonstrate that somebody is on the path towards modelling and/or surpassing that, you'd need to show how.

That will never happen, even in this very early stage nobody has a detailed understanding of how AI's work.  
 
I not sure adding verbal/mathematical memory suffices.

By contrast, I am very sure of that. As I have already shown, it can be proven with mathematical precision that the upper limit to the amount of information needed to make an entire human being is only 750 megs, and the algorithm that humans use to extract knowledge from their environment must be much much smaller than that, probably less than 1 MB. There have been important developments in the field of AI, such as the invention of transformers, but that only advanced things by a couple of years; the primary reason we didn't have AIs like we have today in the 1960s is that back then the hardware simply wasn't able to provide the needed amount of computation. Frank Rosenblatt invented the Perceptron way back in 1957, and its basic architecture was similar to what we use today, but it couldn't do much because Rosenblatt's hardware was pathetically primitive and agonizingly slow.

I recently watched an old Nova documentary about AI from the 1970s on YouTube, and a guy said that to develop an AI we need an Einstein, or maybe 10 Einsteins, and about 1000 very good engineers, and it's important that the Einsteins come before the engineers. But it turned out all we needed was the engineers; Einstein was unnecessary.

  John K Clark    See what's on my new list at  Extropolis

Brent Meeker

Oct 1, 2024, 7:05:12 PM
to everyth...@googlegroups.com



On 10/1/2024 5:50 AM, John Clark wrote:
On Tue, Oct 1, 2024 at 7:23 AM PGC <multipl...@gmail.com> wrote:

I don't care what my statements sound like. It's about the argument. I'm not making statements like "superintelligence is around the corner",

I would maintain it's physically impossible to overhype the importance of artificial intelligence.

in which case the burden of proof lies with those hyping those statements.

There is no burden; things are just heading in an inevitable direction and, short of starting a thermonuclear war, nobody is going to be able to stop it.

 
The exchange with Brent is instructive: can a human level intelligence be separated from its arguable 3.5 billion year history?

Yes. 
So far it has not been.  It has mostly been looking up what human level intelligence has discovered.  That's certainly intelligence, even super-intelligence of a sort.  But whether more is really different remains to be seen.


Wouldn't that have to be accounted for? 

No. 

If the current state of development is any indicator, where they keep enlarging the mathematical linguistic context which informs the response, then that's a lot of data for just one AI, even if you argue that early stages of the planet are not necessary.

True, that's a lot of data, but I don't see your point. For over a decade the amount of computational ability that an AI has at its disposal has been doubling every six months; that's considerably faster than Moore's law, and there is no indication that's gonna stop anytime soon. And that's not all: due to improvements in software, improvements largely caused by the AIs themselves rather than the humans, who have only a hazy understanding of what's going on, every 8 months an AI can reach the same benchmarks using only half the computational power.
What are these, "improvements largely caused by the AI themselves"?

 
And then superintelligence demands something like "can accomplish arbitrary tasks/problems much better than a human and/or all humans". The only phenomenon that has reached that level that we have evidence for is the development of civilization and science by billions of lifeforms reaching humans over, taking your figure, 500 million years.
 

True. And that is precisely why I say it is physically impossible to overhype the importance of AI.  

And to demonstrate that somebody is on the path towards modelling and/or surpassing that, you'd need to show how.

That will never happen; even at this very early stage nobody has a detailed understanding of how AIs work.
Yet you're sure that they will continue to improve at the same rate as the recent leap based on LLMs.  I think that's the very definition of "over hyping".

 
 
I'm not sure adding verbal/mathematical memory suffices.

By contrast I am very sure of that. As I have already shown, it can be proven with mathematical precision that the upper limit to the amount of information needed to make an entire human being is only 750 megs,
Of course that's a human being that can't even speak or walk.  How much more information did it take to make you?

and the algorithm that humans use to extract knowledge from their environment must be much much smaller than that, probably less than 1 MB.
Unless they use a computer.

There have been important developments in the field of AI, such as the invention of transformers, but those only advanced things by a couple of years; the primary reason we didn't have AIs like today's back in the 1960s is that the hardware simply couldn't provide the needed amount of computation. Frank Rosenblatt invented the Perceptron way back in 1957, and its basic architecture was similar to what we use today, but it couldn't do much because Rosenblatt's hardware was pathetically primitive and agonizingly slow.

I recently watched an old Nova documentary about AI from the 1970s on YouTube and a guy said that to develop an AI we need an Einstein, or maybe 10 Einsteins, and about 1000 very good engineers, and it's important that the Einsteins come before the engineers. But it turned out all we needed was the engineers, Einstein was unnecessary.
Which already makes one suspect that the improvement may just be a matter of scope and speed and will reach a ceiling well below Einstein.

Brent


  John K Clark    See what's on my new list at  Extropolis

John Clark

unread,
Oct 2, 2024, 6:54:00 AM10/2/24
to everyth...@googlegroups.com
On Tue, Oct 1, 2024 at 7:05 PM Brent Meeker <meeke...@gmail.com> wrote:

It has mostly been looking up what human level intelligence has discovered.  That's certainly intelligence, even super-intelligence of a sort.  But whether more is really different remains to be seen.

Playing chess and the game of Go at a superhuman level is really different from anything that has been seen before. And so is predicting the shape of a protein molecule just from its amino acid sequence. And so is producing an original, well-crafted image of anything you care to name, no matter how bizarre.

>> For over a decade the amount of computational ability that an AI has at its disposal has been doubling every six months, considerably faster than Moore's law, and there is no indication that's gonna stop anytime soon. And that's not all: due to improvements in software, improvements largely caused by the AIs themselves and not the humans who have only a hazy understanding of what's going on, every 8 months an AI can reach the same benchmarks using only half the computational power.
 
What are these, "improvements largely caused by the AI themselves"?


The AIs changed the weights of the nodes in their own neural networks, and that made them able to work faster and use fewer computing resources. Or to put it another way, they learned to work more efficiently. The same thing happens with human beings: when you try to solve a new type of problem or develop a new skill, at first you find it slow going and have to concentrate very hard, but after a lot of practice it becomes much easier and you can almost do it in your sleep.
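The "it's all in the weights" point can be made concrete with a toy sketch. This is a hypothetical illustration, not how any production AI actually works: a one-weight model fit by gradient descent, where the program itself never changes and only the weight does.

```python
# Minimal sketch of "changing the weights": a one-weight linear model fit by
# gradient descent on a toy dataset. The code is fixed; all the learned
# behavior lives in the single weight w. (Illustrative toy only.)
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # target relation: y = 3x

w = 0.0  # start "dumb as a brick"
for _ in range(100):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # nudge the weight to reduce error

print(round(w, 3))  # converges to 3.0
```

After training, the model reproduces y = 3x even though nothing resembling that rule appears in the source code; it was extracted from the data and stored in w.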

you're sure that they will continue to improve at the same rate as the recent leap based on LLMs. I think that's the very definition of "over hyping".


It ain't hype if it's true. There seems to be a linear relationship between the amount of computing power spent training a neural network and how smart it ends up being. And the rate of increase of the available amount of computing power is not slowing down. 
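The two exponential figures quoted in this thread compound dramatically over a decade. A back-of-the-envelope sketch, taking the 6-month and 8-month doubling times as given (they are the thread's claims, not established facts):

```python
# Back-of-the-envelope growth arithmetic for the doubling times quoted above.
def fold_increase(years, doubling_months):
    """Total multiplicative growth after `years`, given a doubling time in months."""
    return 2 ** (years * 12 / doubling_months)

decade_compute = fold_increase(10, 6)   # raw compute: doubles every 6 months
decade_software = fold_increase(10, 8)  # software: halves compute needed every 8 months
effective = decade_compute * decade_software

print(f"compute over a decade:    {decade_compute:,.0f}x")    # 2^20
print(f"efficiency over a decade: {decade_software:,.0f}x")   # 2^15
print(f"effective combined:       {effective:.2e}x")          # 2^35
```

On those assumptions, effective capability per fixed budget grows by roughly 2^35, about ten orders of magnitude, in ten years.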

 >> As I have already shown, it can be proven with mathematical precision that the upper limit to the amount of information needed to make an entire human being is only 750 megs.
 
> Of course that's a human being that can't even speak or walk.

The human genome provides enough information to be able to soak up much more information from the environment, but until a child starts to do that, until he starts to learn, he's as dumb as a brick. And when a modern neural network is first wired up it too is as dumb as a brick and needs to learn. The real secret sauce in things like GPT and Claude is not the program itself but the weights given to the nodes of its neural net, and of course the massive amount of computational power needed to derive those weights, which is only achievable thanks to modern computer hardware.
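The 750 MB upper bound is presumably derived from the size of the human genome; assuming roughly 3.1 billion base pairs at 2 bits each (four possible letters), the arithmetic works out like this:

```python
# Reproducing the ~750 MB genome upper bound (assumed derivation:
# ~3.1 billion base pairs, each one of four letters = 2 bits).
base_pairs = 3.1e9
bits = base_pairs * 2        # 2 bits encode one of {A, C, G, T}
megabytes = bits / 8 / 1e6   # 8 bits per byte, 1e6 bytes per MB

print(f"{megabytes:.0f} MB")  # ≈ 775 MB, the same order as the 750 MB figure
```

This is an uncompressed upper bound; the actual information content is lower still, since the genome is highly repetitive.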


>> I recently watched an old Nova documentary about AI from the 1970s on YouTube and a guy said that to develop an AI we need an Einstein, or maybe 10 Einsteins, and about 1000 very good engineers, and it's important that the Einsteins come before the engineers. But it turned out all we needed was the engineers, Einstein was unnecessary.
 
Which already makes one suspect that the improvement may just be a matter of scope and speed

I agree. 
 
and will reach a ceiling well below Einstein.

How do you figure that? Recently an AI performed at silver-medal level at the International Mathematical Olympiad and was only one point short of gold; or at least it would have won a medal if it had been allowed to enter the competition. And that's not all!


o1, the latest artificial intelligence from OpenAI, has an IQ of about 120. I'm sure Einstein's IQ was higher than that, but considering that three or four years ago the smartest computer in the world only had an IQ in the single digits, I think that's pretty impressive progress. And there is not the slightest evidence that this rate of progress is about to slow down, much less come to a screeching halt the instant it reaches Einstein's level.


John K Clark    See what's on my new list at  Extropolis


Brent Meeker

unread,
Oct 2, 2024, 5:12:00 PM10/2/24
to everyth...@googlegroups.com



On 10/2/2024 3:53 AM, John Clark wrote:
Of course that's a human being that can't even speak or walk.

The human genome provides enough information to be able to soak up much more information from the environment, but until a child starts to do that, until he starts to learn, he's as dumb as a brick. And when a modern neural network is first wired up it also is as dumb as a brick and needs to learn. The real secret sauce in things like GPT and Claude is not the program itself but the weights given to the nodes of its neural net.
And those weights don't just fall from heaven.  They are determined by training on huge repositories of data.  So I'd say the real secret sauce is 300 years' worth of recorded human knowledge.  Einstein only learned a tiny bit of that knowledge, but then he extended it in a surprising way.  Let's see if AI can do that.

Brent