Coming Singularity


Russell Standish

Mar 28, 2024, 9:27:02 PM
to everyth...@googlegroups.com
Been thinking about the timing of the singularity a bit, given
progress in generative AI recently, partly as a result of attending
NVIDIA's annual GTC conference. I first heard about GPT3 two years
ago; its 175 billion parameter neural net impressed me because I
compared that against the human brain's 100 billion neuron count
(that is an incorrect comparison, though, which I'll explain below).

As you are well aware, GPT3 exploded into public awareness with the
launch of ChatGPT in late 2022.

For some reason I had it in my mind that Ray Kurzweil was predicting
human brain level simulation by 2020. Turns out that was not quite
correct - he was predicting human-like AI assistants by 2019, which I
would say arrived a little late, in 2023. He was predicting 2029 to be
the time when AI will attain human level intelligence.

So to compare apples with apples - the human brain contains around 700
trillion (7E14) synapses, which would roughly correspond to an AI's
parameter count. GPT5 (due to be released sometime next year) will
have around 2E12 parameters, so there are still 2-3 orders of magnitude
to go. Assuming the current rate of AI improvement continues -
GPT3->GPT5 (4 years) is one order of magnitude increase in parameter
count - it will take until about 2033 for AI to achieve human parity.
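
A quick back-of-envelope in Python (purely illustrative - the synapse
count, the assumed GPT5 parameter count and the scaling rate are just
the figures above):

import math

synapses = 7e14          # ~700 trillion synapses (rough estimate)
gpt5_params = 2e12       # assumed GPT5 parameter count
years_per_oom = 4        # GPT3 -> GPT5: ~1 order of magnitude in ~4 years

gap = math.log10(synapses / gpt5_params)   # ~2.5 orders of magnitude
print(f"{gap:.1f} orders of magnitude to go,"
      f" roughly {gap * years_per_oom:.0f} more years of scaling")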

So I would say Kurzweil's singularity is a little delayed, to perhaps
2050, provided ecological collapse doesn't happen sooner. I would
still say that creativity (which is an essential prerequisite) is
still mysterious, in spite of the glimmerings of creativity shown by
generative AI.

But the singularity requires that machines design themselves - this
means that semiconductor companies need to be run by AI, and fabs, as
well as the chips, need to be 3D printed. It'll be a while before the
cost of fabs comes down to the point where hyperexponential
technological growth can happen. We will see these prerequisite
technological changes years before the singularity really kicks off.

Anyway my 2c - I know John is keen to promote the idea of a
singularity this decade - but I don't see it myself.

Cheers

--

----------------------------------------------------------------------------
Dr Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders hpc...@hpcoders.com.au
http://www.hpcoders.com.au
----------------------------------------------------------------------------

Dylan Distasio

Mar 29, 2024, 1:42:15 AM
to everyth...@googlegroups.com
I think we need to be careful about treating LLM parameters as analogous to synapses. Biological neuronal systems differ very significantly from LLM parameters in structure, complexity, and operation.

Personally, I don't believe it is a given that simply increasing the parameters of a LLM is going to result in AGI or parity with overall human potential.

I think there is a lot more to figure out before we get there, and LLMs (assuming variations on current transformer-based architectures) may end up a dead end without other AI breakthroughs that combine them with other components and inputs (such as sensory inputs).

We may find out that the singularity is a lot further away than it seems, but I guess time will tell.    Personally, I would be very surprised to see it within the next decade.

On Thu, Mar 28, 2024 at 9:27 PM Russell Standish <li...@hpcoders.com.au> wrote:

So to compare apples with apples - the human brain contains around 700
trillion (7E14) synapses, which would roughly correspond to an AI's
parameter count. GPT5 (due to be released sometime next year) will
have around 2E12 parameters, so there are still 2-3 orders of magnitude
to go. Assuming the current rate of AI improvement continues -
GPT3->GPT5 (4 years) is one order of magnitude increase in parameter
count - it will take until about 2033 for AI to achieve human parity.



John Clark

Mar 29, 2024, 9:56:07 AM
to everyth...@googlegroups.com
On Thu, Mar 28, 2024 at 9:27 PM Russell Standish <li...@hpcoders.com.au> wrote:
 
>"So to compare apples with apples - the human brain contains around 700 trillion (7E14) synapses"

I believe 700 trillion is a more than generous estimate of the number of synapses in the human brain, but I'll let it go.  
 

>"which would roughly correpond to an AI's parameter count


NO! Comparing the human brain's synapses to the number of parameters that an AI program like GPT-4 has is NOT comparing apples to apples, it's comparing apples to oranges, because the brain is hardware but GPT-4 is software. So let's compare the brain hardware that human intelligence is running on with the computer hardware that GPT-4 is running on, that is to say let's compare synapses to transistors. I'll use your very generous estimate and say the human brain has 7*10^14 synapses, but the largest supercomputer in the world, the Frontier computer at Oak Ridge, has about 2.5*10^15 transistors, over three times as many. And we know from experiments that a typical synapse in the human brain "fires" between 5 and 50 times per second, but a typical transistor in a computer "fires" about 4 billion times a second (4*10^9). That's why the Frontier computer can perform 1.1*10^18 floating point calculations per second and why the human brain cannot.
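
To put rough numbers on it (illustrative arithmetic only, using the figures above; a synapse event and a transistor switch are of course not equivalent operations):

synapses = 7e14           # generous synapse estimate
synapse_hz = 50           # upper end of the 5-50 Hz firing range
transistors = 2.5e15      # Frontier supercomputer, approximate
transistor_hz = 4e9       # typical switching rate

brain_events = synapses * synapse_hz          # ~3.5e16 events per second
machine_events = transistors * transistor_hz  # ~1e25 switches per second
print(f"brain:   {brain_events:.1e} synapse events per second")
print(f"machine: {machine_events:.1e} transistor switches per second")
print(f"ratio:   {machine_events / brain_events:.0e}")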

I should add that although there have been significant improvements in the field of AI in recent years, the most important being the "Attention Is All You Need" paper, I believe that even if transformers had never been discovered the AI explosion we are currently observing would only have been delayed by a few years, because the most important thing driving it forward is the enormous brute-force increase in raw computing speed.

> "He [Ray Kurzweil]  was predicting 2029 to be the time when AI will attain human level intelligence."

It now looks like Ray was being too conservative and 2024 or 2025 would be closer to the mark, and 2029 would be the time when an AI is smarter than the entire human race combined.


> "I would still say that creativity (which is an essential prerequisite) is still mysterious"

It doesn't matter if humans find creativity to be mysterious because we have an existence proof that a lack of understanding of creativity does not prevent humans from making a machine that is creative. Back in 2016 when a computer beat Lee Sedol, the top human champion at the game of Go, the thing that everybody was talking about was move 37 of the second game of the five game tournament. When the computer made that move the live expert commentators were shocked and described it as "practically nonsensical" and "something no human would do", and yet that crazy "nonsensical" move was the move that enabled the computer to win. Lee Sedol said move 37 was "an incredible move" that was completely unexpected and made it impossible for him to win, although it took him a few more moves before he realized that. If a human had made move 37, every human Go expert on the planet would've said it was the most creative move they had ever seen.

> "But singularity requires that machines design themselves"

Computers are already better at writing software than the average human, and major chip design and manufacturing companies like NVIDIA, AMD, Intel, Cerebras and TSMC are investing heavily in chip design software.

 
> "Anyway my 2c - I know John is keen to promote the idea of a singularity this decade - but I don't see it myself."

One thing I know for certain, whenever the Singularity occurs most people will be surprised, otherwise it wouldn't be a Singularity.  

 John K Clark    See what's on my new list at  Extropolis

Jason Resch

Mar 29, 2024, 10:48:35 AM
to Everything List


On Fri, Mar 29, 2024, 1:42 AM Dylan Distasio <inte...@gmail.com> wrote:
I think we need to be careful about treating LLM parameters as analogous to synapses. Biological neuronal systems differ very significantly from LLM parameters in structure, complexity, and operation.

Personally, I don't believe it is a given that simply increasing the parameters of a LLM is going to result in AGI or parity with overall human potential.

I agree it may not be apples to apples to compare synapses to parameters, but of all the comparisons to make it is perhaps the closest one there is.


I think there is a lot more to figure out before we get there, and LLMs (assuming variations on current transformer-based architectures) may end up a dead end without other AI breakthroughs that combine them with other components and inputs (such as sensory inputs).

Here is where I think we may disagree. I think the basic LLM model, as currently used, is all we need to achieve AGI.

My motivation for this belief is that all forms of intelligence reduce to prediction (that is, given a sequence of observables, determining the most likely next thing to see).

Take any problem that requires intelligence to solve and I can show you how it is a subset of the skill of prediction.
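
As a toy illustration (nothing like a real LLM - just a longest-suffix next-word counter over a four-line corpus, so it only memorizes rather than generalizes), here is how classification and question answering fall out of pure next-word prediction:

from collections import Counter, defaultdict

corpus = [
    "review: great film . sentiment: positive",
    "review: awful film . sentiment: negative",
    "question: capital of france ? answer: paris",
    "question: capital of japan ? answer: tokyo",
]

MAX_ORDER = 6
table = defaultdict(Counter)   # context tuple -> counts of the next word
for line in corpus:
    words = line.split()
    for i in range(1, len(words)):
        for n in range(1, min(MAX_ORDER, i) + 1):
            table[tuple(words[i - n:i])][words[i]] += 1

def predict_next(prompt):
    words = prompt.split()
    for n in range(min(MAX_ORDER, len(words)), 0, -1):   # longest context first
        ctx = tuple(words[-n:])
        if ctx in table:
            return table[ctx].most_common(1)[0][0]
    return "?"

print(predict_next("review: awful film . sentiment:"))        # -> negative
print(predict_next("question: capital of japan ? answer:"))   # -> tokyo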

Since human language is universal in the forms and types of patterns it can express, there is no limit to the kinds of patterns an LLM can learn to recognize and predict. Think of all the thousands, if not millions, of types of patterns that exist in the training corpus. The LLM can learn them all.

We have already seen this. Despite not being trained for anything beyond prediction, modern LLMs have learned to write code, perform arithmetic, translate between languages, play chess, summarize text, take tests, draw pictures, etc.

The "universal approximation theorem" (UAT) is a result in the field of neural networks which says that with a large enough neural network, and with enough training, a neural network can learn any function. Given this, the UAT, and the universality of language to express any pattern, I believe the only thing holding back LLMs today is their network size and amount of training. I think the language corpus is sufficiently large and diverse in the patterns it contains that it isn't what's holding us back.

An argument could be made that we have already achieved AGI. We have AI that passes the bar exam in the 90th percentile, passes math olympiad tests in the 99th percentile, programs better than the average Google coder, scores 155 on a verbal IQ test, etc. If we took GPT-4 back to the 1980s to show it off, would anyone at the time have said it is not AGI? I think we are only blinded to the significance of what has happened because we are living through history now and the history books have not yet covered this time.

Jason 



We may find out that the singularity is a lot further away than it seems, but I guess time will tell.    Personally, I would be very surprised to see it within the next decade.

On Thu, Mar 28, 2024 at 9:27 PM Russell Standish <li...@hpcoders.com.au> wrote:

So to compare apples with apples - the human brain contains around 700
trillion (7E14) synapses, which would roughly correspond to an AI's
parameter count. GPT5 (due to be released sometime next year) will
have around 2E12 parameters, so there are still 2-3 orders of magnitude
to go. Assuming the current rate of AI improvement continues -
GPT3->GPT5 (4 years) is one order of magnitude increase in parameter
count - it will take until about 2033 for AI to achieve human parity.



Russell Standish

Mar 29, 2024, 10:28:34 PM
to everyth...@googlegroups.com
On Fri, Mar 29, 2024 at 09:55:28AM -0400, John Clark wrote:
> On Thu, Mar 28, 2024 at 9:27 PM Russell Standish <li...@hpcoders.com.au> wrote:
>  
>
> >"So to compare apples with apples - the human brain contains around 700 
> trillion (7E14) synapses"
>
>
> I believe 700 trillion is a more than generous estimate of the number of
> synapses in the human brain, but I'll let it go.  
>  
>
>
> >"which would roughly correpond to an AI's parameter count
>
>
>
> NO! Comparing the human brain's synapses to the number of parameters that an AI
> program like GPT-4 has is NOT comparing apples to apples, it's comparing apples
> to oranges because the brain is hardware but GPT-4 is software. So let's
> compare the brain hardware that human intelligence is running on with the brain
> hardware that GPT-4 is running on, that is to say let's compare synapses to
> transistors. I'll use your very generous estimate and say the human brain has
> 7*10^14 synapses, but the largest supercomputer in the world, the Frontier
> Computer at Oak Ridge, has about 2.5*10^15 transistors, over three times as
> many. And we know from experiments that a typical synapse in the human brain
> "fires" between 5 and 50 times per second, but a typical transistor in a
> computer "fires" about 4 billion times a second (4*10^9).  That's why the
> Frontier Computer can perform 1.1 *10^18 floating point calculations per second
> and why the human brain can not.

There is a big difference between the way transistors are wired in a
CPU and the way neurons are wired up in a brain. The brain is not
optimised at all for floating point calculations, which is why even
the most competent "computer" (in the old-fashioned sense of the word)
can only manage less than 1 FLOPS. Conversely, using floating point
operations to perform neural network computations is not exactly
efficient either. We're using GPUs today because they can perform
these computations very fast, it's a massively parallel operation, and
GPUs are cheap for what they are. In the future, I would expect we'd
have dedicated neural processing units, based on memristors or
whatever. Indeed Intel is now flogging chips with "NPU"s, but how much
of that is real and how much is marketing spin I can't say.

Comparing synapses with ANN parameters is only relevant for the
statement "we can simulate a human-brain-sized ANN by X
date". Kurzweil didn't say that (for some reason I thought he did), he
said human intelligence parity (which I suppose could be taken to be
average intelligence, or an IQ of 100). In a human brain, a lot of
neurons are handling body operations - controlling muscles,
interoception, proprioception, endocrine control etc. - so the actual
figure relevant to language processing is likely to be far smaller
than the figure given. But only by an order of magnitude, I would say.

>
> I should add that although there have been significant improvements in the
> field of AI in recent years, the most important being the "Attention Is All You
> Need" paper, I believe that even if transformers had never been discovered the
> AI explosion that we are currently observing would only have been delayed by a
> few years because the most important thing driving it forward is the brute
> force enormous increase in raw computing speed.
>
>
> > "He [Ray Kurzweil]  was predicting 2029 to be the time when AI will
> attain human level intelligence."
>
>
> It now looks like Ray was being too conservative and 2024 or 2025 would be
> closer to the Mark, and 2029 would be the time when an AI is smarter than the
> entire human race combined. 
>

2025 should see the release of GPT5. It is still at least two orders
of magnitude short of the mark IMHO. It is faster though - training
GPT5 will have taken about 2 years, whereas it takes nearly 20 years
to train a human.

>
>
> > "I would still say that creativity (which is an essential prerequisite)
> is still mysterious"
>
>
> It doesn't matter if humans find creativity to be mysterious because we have an
> existence proof that a lack of understanding of creativity does not prevent
> humans from making a machine that is creative.

That may be the case, but understanding something does accelerate
progress dramatically over blind "trial and error". It is the main
reason for the explosion in technical prowess over the last 400 years.

> Back in 2016 when a computer
> beat Lee Sedol, the top human champion at the game of GO, the thing that
> everybody was talking about was move 37 of the second game of the five game
> tournament. When the computer made that move the live expert commentators were
> shocked and described it as "practically nonsensical" and "something no human
> would do", and yet that crazy "nonsensical" move was the move that enabled the
> computer to win.  Lee Sedol said move 37 was "an incredible move" and was
> completely unexpected and made it impossible for him to win, although it took
> him a few more moves before he realized that. If a human had made moves 37
> every human GO expert on the planet would've said it was the most creative move
> they had ever seen.  
>

Yes - I have said there is a glimmering of creativity. There are
numerous such examples; I don't discount that.

>
> > "But singularity requires that machines design themselves"
>
>
> Computers are already better at writing software than the average human, and
> major chip design and manufacturing companies like  NVIDIA, AMD, Intel , 
> Cerebras and TSMC are investing heavily in chip design software. 
>

The average human doesn't code, so this is not a valid
comparison. Computers are still not at the level of an inexperienced
intern. At best, they can provide suggestions which, if appropriate,
can save a developer time, so they could be used in a form of pair
programming. As for unsupervised coding, they might be able to help
write unit tests or other fairly boilerplate code, but to actually let
one loose on a codebase would be a net negative, if I'm to believe the
reports. I haven't tried the tech yet - mainly because it will take
some time out of my schedule to even set things up to work in my
environment - but I do intend to when my current crunch period has
receded somewhat.

As for chip design software, this is software that assists a human
designer. Performing circuit layout is a combinatorially difficult
problem that is hard even for computers.

Actually designing circuits from scratch without a human engineer in
the loop is still a way off. I expect it'll be some sort of
evolutionary algorithm that does this - John Koza has done a lot of
interesting work in this area with genetic programming, for example.


>  
>
> > Anyway my 2c - I know John is keen to promote the idea of singularity 
> this decade - but I don't see it myself.
>
>
> One thing I know for certain, whenever the Singularity occurs most people will
> be surprised, otherwise it wouldn't be a Singularity.  
>

Most people - sure - they don't tend to think about these things. But
not us. We're expecting it - and there are certain milestones that
need to be hit first. Once they are hit, though, there will otherwise
be little warning.


>  John K Clark    See what's on my new list at  Extropolis

John Clark

Mar 30, 2024, 8:31:25 AM
to everyth...@googlegroups.com
On Fri, Mar 29, 2024 at 10:28 PM Russell Standish <li...@hpcoders.com.au> wrote:


>"There is a big difference between the way transistors are wired in a CPU and the way neurons are wired up in a brain."
 
Yes, but modern chips made by companies like NVIDIA, Cerebras and Groq aren't general-purpose CPUs or even GPUs; they are Tensor Processing Units, or in Groq's case Language Processing Units - chips that have been optimized not for floating point operations but for the large neural networks that all current AI programs are built on. In the recent press conference where NVIDIA introduced their new 208 billion transistor Blackwell B200 tensor chip, they pointed out that for neural net workloads their chips have increased performance by a factor of 1 million over the last 10 years. That's far faster than Moore's Law, which is possible because Moore's Law is about transistor density, but they were talking about AI workloads, and doing well at AI is what NVIDIA's chips are specialized to do. I also found it interesting that their new Blackwell chip, when used for AI, needs 25 times less energy than the current AI chip champion, NVIDIA's Hopper chip, which the company introduced just 2 years ago. And I do not think it's a coincidence that this huge increase in hardware capability coincided with the current explosion in AI improvement.
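
A quick sanity check of those two growth rates (just arithmetic on the figures above):

import math

ai_factor = 1e6                   # claimed AI-workload speedup over 10 years
years = 10
ai_doubling = years * math.log(2) / math.log(ai_factor)   # ~0.5 years
moore_factor = 2 ** (years / 2)   # Moore's Law, doubling roughly every 2 years

print(f"AI-workload performance doubling time: ~{ai_doubling:.1f} years")
print(f"Moore's-Law improvement over {years} years: ~{moore_factor:.0f}x,"
      f" versus the claimed {ai_factor:.0e}x for AI workloads")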

 
> "In the future, I would expect we'd have dedicate neural processing units, based on memristors"

If memristor technology ever becomes practical that would speed things up even more, but it's not necessary to achieve superhuman performance in an AI in the very near future. 

 
> "The comparing synapses with ANN parameters is only relevant for the statement "we can simulate a human brain sized ANN by X date"."

I don't see how comparing the two things can produce anything useful because one is concerned with software and the other is concerned with hardware. Comparing transistors to synapses may not be perfect but it's a much better analogy than comparing program parameters with brain synapses, at least transistors and synapses are both hardware. Comparing hardware with software will only produce a muddle.  

 
"he [Kurzweil] said human intelligence parity (which I supose could be taken to be avergae intelligence, or an IQ of 100) [...]




 John K Clark    See what's on my new list at  Extropolis



spudb...@aol.com

Apr 2, 2024, 7:18:59 PM
to everyth...@googlegroups.com
Opinion on what occurs when we load, not an LLM, but an LLM + a neural net, onto a low-error, high-entanglement quantum computer. Will this create a mind?


John Clark

Apr 3, 2024, 7:13:10 AM
to everyth...@googlegroups.com
On Tue, Apr 2, 2024 at 7:18 PM 'spudb...@aol.com' via Everything List <everyth...@googlegroups.com> wrote:

Opinion on what occurs when we load, not an LLM, but an LLM + a neural net, onto a low-error, high-entanglement quantum computer. Will this create a mind?

Certainly. A quantum computer can solve any problem that a conventional computer can, although for some problems (web surfing, text editing) it may not do any better than a conventional computer. There is no evidence that the human brain uses quantum computing and I don't see how it could, so a good LLM is all that's needed for us to experience a singularity in just the next few years. Perhaps a few years after that, LLMs will have their own singularity when quantum computing becomes practical.

Jason Resch

Apr 3, 2024, 11:07:48 AM
to Everything List


On Tue, Apr 2, 2024, 7:18 PM 'spudb...@aol.com' via Everything List <everyth...@googlegroups.com> wrote:
Opinion on what occurs when we load, not an LLM, but an LLM + a neural net, onto a low-error, high-entanglement quantum computer. Will this create a mind?


If you're not careful, you could create 2^N minds, where N is the number of qubits.

Jason
