AlphaZero


John Clark

Feb 3, 2022, 4:55:08 AM
to 'Brent Meeker' via Everything List
The same people who made AlphaZero, the chess and Go playing superstar, and AlphaFold, the protein-structure-prediction program, have now come up with "AlphaCode", a computer program that writes other computer programs in C++ and Python. AlphaCode entered programming competitions on "Codeforces", competing against human programmers, and ranked in the top 54%. Not bad for a first try; it seems like only yesterday computers could only play mediocre chess, and now they play it at a superhuman level. I don't see why a program like this couldn't be used to improve its own programming, so I don't think the importance of this development can be overestimated.

John K Clark    See what's on my new list at  Extropolis

Lawrence Crowell

Feb 3, 2022, 1:17:42 PM
to Everything List

Programmers putting programmers out of work.

LC

Terren Suydam

Feb 3, 2022, 2:11:36 PM
to Everything List
It'll still be some time before programmers start losing jobs to AI coders. AlphaCode is impressive, to be sure, but the real world is not made of toy problems. DeepMind is clearly making progress in applying AI in ways that are not confined to narrow domains, but there are many levels to this, and while AlphaCode might represent a graduation to the next level, comprehension of the wide variety of domains of the human marketplace, and the human motivations that define them, is still many levels higher.

What I could see happening is that engineers start to use tools like AlphaCode to solve tightly-defined coding problems faster and with fewer bugs than if left to their own devices. But there are still two problems. The first is that the code generated by the AI still needs to be understandable, so that it can be fixed, refactored, or otherwise improved - and an AI that can make its code understandable (in the way that good human engineers do), or do the work of fixing/refactoring/improving other code, is next-level. More importantly, as a long-time programmer, I can tell you the coding is the easy part. The hard part is understanding the problem your code is supposed to solve, understanding the tradeoffs between different approaches, and being able to negotiate with stakeholders about what the best approach is. It'll be a very long time before we're handing that domain off to an AI.

Terren


Brent Meeker

Feb 3, 2022, 3:05:50 PM
to everyth...@googlegroups.com
It's still a step away from self-programming though.  It relied on training sets.  Not like AlphaZero playing against itself.

Brent

Tomasz Rola

Feb 3, 2022, 4:11:37 PM
to everyth...@googlegroups.com
On Thu, Feb 03, 2022 at 10:17:42AM -0800, Lawrence Crowell wrote:
>
> Programmers putting programmers out of work.
>

I believe it is going to be more like programmers running away -
because life is too precious to spend it navigating a labyrinth of
code manure built by "ai". If you ever want to know what I mean, have
a look at the source of a manually built web page and compare it to
the crap output of whatever automated editor is being used for the
task. Manually built pages are rare, but they load in a blink and
looking at their source is relaxing. Have fun trying to find them.

--
Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature. **
** As the answer, master did "rm -rif" on the programmer's home **
** directory. And then the C programmer became enlightened... **
** **
** Tomasz Rola mailto:tomas...@bigfoot.com **

John Clark

Feb 3, 2022, 4:27:18 PM
to 'Brent Meeker' via Everything List
On Thu, Feb 3, 2022 at 2:11 PM Terren Suydam <terren...@gmail.com> wrote:

 > the code generated by the AI still needs to be understandable

Once AI starts to get really smart that's never going to happen. Even today nobody knows how a neural network like AlphaZero works or understands the reasoning behind a particular move it makes, but that doesn't matter, because understandable or not AlphaZero can still play chess better than anybody alive, and if humans don't understand how that can be, then that's just too bad for them.

> The hard part is understanding the problem your code is supposed to solve, understanding the tradeoffs between different approaches, and being able to negotiate with stakeholders about what the best approach is.

You seem to be assuming that the "stakeholders", those that intend to use the code once it is completed, will always be humans, and I think that is an entirely unwarranted assumption. The stakeholders will certainly have brains, but they may be hard and dry and not wet and squishy. 

> It'll be a very long time before we're handing that domain off to an AI.

I think you're whistling past the graveyard.  

John K Clark    See what's on my new list at  Extropolis

Terren Suydam

Feb 3, 2022, 5:23:50 PM
to Everything List
On Thu, Feb 3, 2022 at 4:27 PM John Clark <johnk...@gmail.com> wrote:
On Thu, Feb 3, 2022 at 2:11 PM Terren Suydam <terren...@gmail.com> wrote:

 > the code generated by the AI still needs to be understandable

Once AI starts to get really smart that's never going to happen. Even today nobody knows how a neural network like AlphaZero works or understands the reasoning behind a particular move it makes, but that doesn't matter, because understandable or not AlphaZero can still play chess better than anybody alive, and if humans don't understand how that can be, then that's just too bad for them.

With chess it's clear what the game is, what the rules are, how to win and lose. In real life, the game constantly changes. AlphaCode can potentially improve its code, but to what end?  What problem is it trying to solve?  How does it know?

Even in domains with seemingly simple goals, it's a problem. Imagine an AI tasked with making as much money in the stock market as it can. Pretty clear signals for winning and losing (like chess). And perhaps there are some easy wins there for an AI that can take advantage of e.g. arbitrage (this exists already, I believe) or other patterns that are not exploitable by human brains. But it seems to me that actual comprehension of the world of investment is key. Knowing how earnings reports will affect the stock price of a company, relative to human expectations about that earnings report. That's just one tiny example. You have to import a universe of knowledge of the human domain to be effective... a universe we take for granted since we've acquired it over decades of training. And I'm not talking about mere information, but models that can be simulated in what-if scenarios, true understanding. You need real AGI. I think that's true of AIs that would supplant human programmers, for the reasons I said.
 
> The hard part is understanding the problem your code is supposed to solve, understanding the tradeoffs between different approaches, and being able to negotiate with stakeholders about what the best approach is.

You seem to be assuming that the "stakeholders", those that intend to use the code once it is completed, will always be humans, and I think that is an entirely unwarranted assumption. The stakeholders will certainly have brains, but they may be hard and dry and not wet and squishy. 

To get to the point where machines are the stakeholders, we're already past the singularity.
 

> It'll be a very long time before we're handing that domain off to an AI.

I think you're whistling past the graveyard.  

Of course, nobody can know what the future holds. But I think the problem of AGI is much harder than most assume. The fact that humans, with their stupendously parallel and efficient brains, require at least 15-20 years on average of continuous training before they're able to grasp the problem domain we're talking about, should be a clue.

Terren

Brent Meeker

Feb 3, 2022, 6:08:05 PM
to everyth...@googlegroups.com
I think "able to grasp the problem domain we're talking about" is giving us way to much credit.  Every study of stock traders I've seen says that they do no better than some simple rules of thumb like index funds. 

Brent

John Clark

Feb 3, 2022, 6:22:03 PM
to 'Brent Meeker' via Everything List
On Thu, Feb 3, 2022 at 5:23 PM Terren Suydam <terren...@gmail.com> wrote:
 
>AlphaCode can potentially improve its code, but to what end?  What problem is it trying to solve?  How does it know?

I don't understand your questions  

> Imagine an AI tasked with making as much money in the stock market as it can. Pretty clear signals for winning and losing (like chess). And perhaps there are some easy wins there for an AI that can take advantage of e.g. arbitrage (this exists already, I believe) or other patterns that are not exploitable by human brains. But it seems to me that actual comprehension of the world of investment is key. Knowing how earnings reports will affect the stock price of a company, relative to human expectations about that earnings report.

I agree, but if humans, or at least some extraordinary humans like Warren Buffett, can understand the stock market, or at least understand it well enough to do better at picking stocks than doing so randomly, then I see absolutely no reason why an AI couldn't do the same thing, and do it better.

> You have to import a universe of knowledge of the human domain to be effective

Yeah, when you're born you don't know anything but over time you gain knowledge from the environment.  

> a universe we take for granted since we've acquired it over decades of training.

Yeah, with a human that process takes many decades, but even today computers can process many, many times more information than a human can, which is not surprising when you consider the fact that the signals inside a human brain only travel about 100 miles an hour while the signals in a computer travel close to the speed of light, 186,000 miles a second.
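
Run the rough numbers yourself (these are my ballpark figures, not precise physiology):

light_mph = 186_000 * 3600      # miles per second -> miles per hour
neuron_mph = 100                # ballpark nerve-conduction speed
print(light_mph / neuron_mph)   # ~6.7 million times faster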

> And I'm not talking about mere information, but models that can be simulated in what-if scenarios, true understanding. You need real AGI.

I can't think of a more flagrant example of moving goal posts. I clearly remember when nearly everybody said it would require "real understanding" for a computer to play chess at the grandmaster level, never mind the superhuman level, but nobody says that anymore. Much more recently people said image recognition would require "real intelligence" but few say that anymore, now they say coding requires "real intelligence". "Real AGI" is a machine that can do what a computer cannot do, YET.    

>I think the problem of AGI is much harder than most assume.  

As I've mentioned before, the entire human genome is only 750 megabytes, the new Mac operating system is about 20 times that size, and the genome contains instructions to build an entire human body not just a brain, and the genome is loaded with massive redundancy; so whatever the algorithm is that the brain uses to extract information from the environment there is simply no way it can be all that complicated.  
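
The arithmetic behind that 750 megabyte figure is simple, assuming roughly 3 billion base pairs at 2 bits per base (the exact count varies by source):

base_pairs = 3e9              # approximate length of the human genome
bits = base_pairs * 2         # 4 possible bases -> 2 bits each
print(bits / 8 / 1e6, "MB")   # 750.0 MB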

> To get to the point where machines are the stakeholders, we're already past the singularity.

Machines move so fast that at breakfast the singularity could look to a human like it's a very long way off, but by lunchtime the singularity could be ancient history. 

Terren Suydam

Feb 3, 2022, 7:20:20 PM
to Everything List
On Thu, Feb 3, 2022 at 6:22 PM John Clark <johnk...@gmail.com> wrote:
On Thu, Feb 3, 2022 at 5:23 PM Terren Suydam <terren...@gmail.com> wrote:
 
>AlphaCode can potentially improve its code, but to what end?  What problem is it trying to solve?  How does it know?

I don't understand your questions  

What part is confusing?
 

> Imagine an AI tasked with making as much money in the stock market as it can. Pretty clear signals for winning and losing (like chess). And perhaps there are some easy wins there for an AI that can take advantage of e.g. arbitrage (this exists already, I believe) or other patterns that are not exploitable by human brains. But it seems to me that actual comprehension of the world of investment is key. Knowing how earnings reports will affect the stock price of a company, relative to human expectations about that earnings report.

I agree, but if humans, or at least some extraordinary humans like Warren Buffett, can understand the stock market, or at least understand it well enough to do better at picking stocks than doing so randomly, then I see absolutely no reason why an AI couldn't do the same thing, and do it better.

> You have to import a universe of knowledge of the human domain to be effective

Yeah, when you're born you don't know anything but over time you gain knowledge from the environment.  

> a universe we take for granted since we've acquired it over decades of training.

Yeah, with a human that process takes many decades, but even today computers can process many, many times more information than a human can, which is not surprising when you consider the fact that the signals inside a human brain only travel about 100 miles an hour while the signals in a computer travel close to the speed of light, 186,000 miles a second.

Much of our learning takes place via interactions with other humans, and those cannot be sped up. I'm not talking about facts and information, but about theories of mind, understanding human motivations, forming and testing hypotheses about how to get goals met by interacting with other humans, and other animals for that matter. To be effective in a human world, an AI would similarly need to form theories of mind about humans. Can this be done without interacting with humans?  I doubt it.
 

> And I'm not talking about mere information, but models that can be simulated in what-if scenarios, true understanding. You need real AGI.

I can't think of a more flagrant example of moving goal posts. I clearly remember when nearly everybody said it would require "real understanding" for a computer to play chess at the grandmaster level, never mind the superhuman level, but nobody says that anymore. Much more recently people said image recognition would require "real intelligence" but few say that anymore, now they say coding requires "real intelligence". "Real AGI" is a machine that can do what a computer cannot do, YET.   

Not that you would know, but I never said that about chess (or go). I don't think real understanding is required for image recognition, but it would surely help. I'm not sure how AlphaCode works yet, so I can't comment on whether there's some kind of primitive understanding going on there.  We probably need to define what understanding/comprehension actually means if we're going to take this much further.

Regardless, to operate in the free-form world of humans, an AI needs to be able to understand and react to a problem space that is constantly changing. Changing rules (implicit and explicit), players, goals, dynamics, etc. Is that possible to do without real understanding?
 
>I think the problem of AGI is much harder than most assume.  

As I've mentioned before, the entire human genome is only 750 megabytes, the new Mac operating system is about 20 times that size, and the genome contains instructions to build an entire human body not just a brain, and the genome is loaded with massive redundancy; so whatever the algorithm is that the brain uses to extract information from the environment there is simply no way it can be all that complicated. 

The thing that makes intelligence intelligence is not simply extracting information from the environment.
 
> To get to the point where machines are the stakeholders, we're already past the singularity.

Machines move so fast that at breakfast the singularity could look to a human like it's a very long way off, but by lunchtime the singularity could be ancient history. 


Do you think the singularity can occur with an AI that doesn't have real understanding?

Terren

Terren Suydam

Feb 3, 2022, 7:30:12 PM
to Everything List

Being able to grasp the problem domain is not the same thing as being effective in it.

On Thu, Feb 3, 2022 at 6:07 PM Brent Meeker <meeke...@gmail.com> wrote:

I think "able to grasp the problem domain we're talking about" is giving us way to much credit.  Every study of stock traders I've seen says that they do no better than some simple rules of thumb like index funds. 

Brent


Brent Meeker

Feb 3, 2022, 8:23:08 PM
to everyth...@googlegroups.com
So AIs won't need to "grasp the problem domain" to be effective.  Which may well be true.  What we call "grasping the problem domain" is being able to tell simple stories about it that other people can grasp and understand, say by reading a book.  An AI may "grasp the problem" in some much more comprehensive way that is too much for a human to comprehend, and the human will say the AI is just calculating and doesn't understand the problem because it can't explain it to humans.

That's sort of what we do when we write simulations of complex things.  They are too complex for us to see what will happen and so we use the computer to tell us what will happen.  The computer can't "explain the result" to us and we can't grasp the whole domain of the computation, but we can grasp the result.

Brent

John Clark

Feb 4, 2022, 4:06:45 AM
to 'Brent Meeker' via Everything List
On Thu, Feb 3, 2022 at 7:20 PM Terren Suydam <terren...@gmail.com> wrote:

>>> AlphaCode can potentially improve its code, but to what end?  What problem is it trying to solve?  How does it know?

>> I don't understand your questions  

> What part is confusing?

I'll make you a deal, I'll tell you "what problem it is trying to solve" if you first tell me how long a piece of string is. And if you don't wanna do that just rephrase the question more clearly.
 

>> Yeah, with a human that process takes many decades, but even today computers can process many, many times more information than a human can, which is not surprising when you consider the fact that the signals inside a human brain only travel about 100 miles an hour while the signals in a computer travel close to the speed of light, 186,000 miles a second.

> Much of our learning takes place via interactions with other humans, and those cannot be sped up.

Sure it can be, an AI could have a detailed intellectual conversation with 1000 people at the same time, or a million, or a billion. 

> I'm not talking about facts and information,

You may not be talking about facts and information but I sure as hell am, because information is as close as you can get to the traditional idea of the soul without entering the realm of religion or some other form of idiocy.

> but about theories of mind, understanding human motivations, forming and testing hypotheses about how to get goals met by interacting with other humans, and other animals for that matter.

If humans can do it then an AI can do it too because knowledge is just highly computed information, and wisdom is just highly computed knowledge.  

> And I'm not talking about mere information,

Mere information? Mere?!

> but models that can be simulated in what-if scenarios, true understanding. You need real AGI.

You need AI, AGI is just loquacious technobabble used to make things sound more inscrutable.   

> We probably need to define what understanding/comprehension actually means if we're going to take this much further.

I don't think that would help one bit because fundamentally definitions are not important in language, examples are. After all, examples are where lexicographers get the knowledge to write the definitions for their book. So I'd say that "understanding" is the thing that Einstein had about physics to a greater extent than anybody else of his generation.

> Regardless, to operate in the free-form world of humans, an AI needs to be able to understand and react to a problem space that is constantly changing. Changing rules (implicit and explicit), players, goals, dynamics, etc.

Well sure, but AIs have been able to do that for years, since the 1950's. 

> Is that possible to do without real understanding?

No. If I can answer some questions and perform some tasks in a certain area then I could be confident in saying I have some "real understanding" of that area of knowledge, and if you can answer more questions and perform more tasks in that area than I can, then I would say you have an even greater understanding than I do, and I don't care if your brain is wet and squishy or dry and hard.

>> As I've mentioned before, the entire human genome is only 750 megabytes, the new Mac operating system is about 20 times that size, and the genome contains instructions to build an entire human body not just a brain, and the genome is loaded with massive redundancy; so whatever the algorithm is that the brain uses to extract information from the environment there is simply no way it can be all that complicated. 

> The thing that makes intelligence intelligence is not simply extracting information from the environment.

How do you figure that? If human intelligence doesn't come from the 750 MB in our genome and it doesn't come from the environment then where does this secret sauce come from? From an invisible man in the sky? If so, then why does He only give it to brains that are wet and squishy?
 
>> Machines move so fast that at breakfast the singularity could look to a human like it's a very long way off, but by lunchtime the singularity could be ancient history. 

> Do you think the singularity can occur with an AI that doesn't have real understanding?

Of course not! I have no objection to the term "real understanding", I only object when the term is used in a silly way, such as when I accomplish something in a certain field it demonstrates "real understanding" but even though an AI can do things in that same field even better and faster than I can it demonstrates nothing but a mindless reflex because its brain is dry and hard and not wet and squishy. 

 John K Clark    See what's on my new list at  Extropolis


Terren Suydam

Feb 4, 2022, 11:55:53 AM
to Everything List
I think for programmers to lose their jobs to AIs, AIs will need to grasp the problem domain, and I'm suggesting that's far too advanced for today's AI, and I think it's a long way off, because the problem domain for programmers entails knowing a lot about how humans behave, what they're good at, and bad at, what they value, and so on, not to mention the domain-specific knowledge that is necessary to understand the problem in the first place.

Terren Suydam

Feb 4, 2022, 12:36:43 PM
to Everything List
On Fri, Feb 4, 2022 at 4:06 AM John Clark <johnk...@gmail.com> wrote:
On Thu, Feb 3, 2022 at 7:20 PM Terren Suydam <terren...@gmail.com> wrote:

>>> AlphaCode can potentially improve its code, but to what end?  What problem is it trying to solve?  How does it know?

>> I don't understand your questions  

> What part is confusing?

I'll make you a deal, I'll tell you "what problem it is trying to solve" if you first tell me how long a piece of string is. And if you don't wanna do that just rephrase the question more clearly.
 

lol ok. The worry you're articulating is that AlphaCode will turn its coding abilities on itself and improve its own code, and that this could lead to the singularity. First, it must be said that AlphaCode is a tool with no agency of its own. Left to its own devices, it will do... nothing. But let's say the DeepMind team wanted to improve AlphaCode by applying AlphaCode to itself. My question to you is, what is the "toy problem" they would feed to AlphaCode? How do you define that problem? 
 

>> Yeah, with a human that process takes many decades, but even today computers can process many, many times more information than a human can, which is not surprising when you consider the fact that the signals inside a human brain only travel about 100 miles an hour while the signals in a computer travel close to the speed of light, 186,000 miles a second.

> Much of our learning takes place via interactions with other humans, and those cannot be sped up.

Sure it can be, an AI could have a detailed intellectual conversation with 1000 people at the same time, or a million, or a billion.

Sure, but those interactions still take time, perhaps days or even months. And you're assuming that many people will want to have conversations with an AI. Have you ever tried listening to a 6-year-old try to tell a story?  It's cute at first, but the interest level quickly fades. Imagine an AI still learning the ropes of conversation, and how little patience people would have for that. Kids at least have parents who are invested in listening and helping them learn. Your "speed of light" point only goes so far.
 
> I'm not talking about facts and information,

You may not be talking about facts and information but I sure as hell am, because information is as close as you can get to the traditional idea of the soul without entering the realm of religion or some other form of idiocy.

> but about theories of mind, understanding human motivations, forming and testing hypotheses about how to get goals met by interacting with other humans, and other animals for that matter.

If humans can do it then an AI can do it too because knowledge is just highly computed information, and wisdom is just highly computed knowledge. 

Sure, I can hand-wave things away too. "Highly computed" means what exactly? I can reverse every word in this post. If I did that a million times in a row it would be "highly computed" but it wouldn't result in knowledge, much less wisdom.
 
> And I'm not talking about mere information,

Mere information? Mere?!

As opposed to knowledge, wisdom, the ability to model aspects of the world and simulate them, the ability to explain things, etc.
 

> but models that can be simulated in what-if scenarios, true understanding. You need real AGI.

You need AI, AGI is just loquacious technobabble used to make things sound more inscrutable.   

Doesn't seem all that loquacious to me. AGI just adds the word "general", to highlight the fact that today's AI isn't able to apply its intelligence to anything but narrow domains. If that's inscrutable, I'm not sure how to make it any clearer for you.
 

> We probably need to define what understanding/comprehension actually means if we're going to take this much further.

I don't think that would help one bit because fundamentally definitions are not important in language, examples are. After all, examples are where lexicographers get the knowledge to write the definitions for their book. So I'd say that "understanding" is the thing that Einstein had about physics to a greater extent than anybody else of his generation.

Sure, that works for me. Einstein was able to predict and explain things that nobody before him was able to. Prediction and explanation are hallmarks of understanding.
 

> Regardless, to operate in the free-form world of humans, an AI needs to be able to understand and react to a problem space that is constantly changing. Changing rules (implicit and explicit), players, goals, dynamics, etc.

Well sure, but AIs have been able to do that for years, since the 1950's. 

Care to give an example of AI in the 1950s that could do that?
 

> Is that possible to do without real understanding?

No. If I can answer some questions and perform some tasks in a certain area then I could be confident in saying I have some "real understanding" of that area of knowledge, and if you can answer more questions and perform more tasks in that area than I can, then I would say you have an even greater understanding than I do, and I don't care if your brain is wet and squishy or dry and hard.


OK.
 
>> As I've mentioned before, the entire human genome is only 750 megabytes, the new Mac operating system is about 20 times that size, and the genome contains instructions to build an entire human body not just a brain, and the genome is loaded with massive redundancy; so whatever the algorithm is that the brain uses to extract information from the environment there is simply no way it can be all that complicated. 

> The thing that makes intelligence intelligence is not simply extracting information from the environment.

How do you figure that? If human intelligence doesn't come from the 750 MB in our genome and it doesn't come from the environment then where does this secret sauce come from? From an invisible man in the sky? If so, then why does He only give it to brains that are wet and squishy?

Not sure how you got that from what I said. The point I'm making is that intelligence, operationally speaking, is about far more than simply extracting information from the environment. It's about making models of the world that can be used for prediction, explanation, making plans, coordinating, etc. Information extraction is necessary but not sufficient for intelligence.
 
>> Machines move so fast that at breakfast the singularity could look to a human like it's a very long way off, but by lunchtime the singularity could be ancient history. 

> Do you think the singularity can occur with an AI that doesn't have real understanding?

Of course not! I have no objection to the term "real understanding", I only object when the term is used in a silly way, such as when I accomplish something in a certain field it demonstrates "real understanding" but even though an AI can do things in that same field even better and faster than I can it demonstrates nothing but a mindless reflex because its brain is dry and hard and not wet and squishy. 

I agree with that. Presumably we'd also agree that AlphaGo and AlphaZero have real understanding of go & chess, respectively.  I'm not sure Stockfish does though, because a brute-force computational approach leveraging heuristics given to it by humans strikes me as devoid of understanding. Stockfish is closer to a computational prosthetic for human minds.

To the larger point, where I think we disagree is how easy it is for an AI to achieve real understanding of the real world of human interaction.

Terren

Brent Meeker

Feb 4, 2022, 1:59:43 PM
to everyth...@googlegroups.com
Well consider the example of climate.  Nobody can grasp all factors in climate and their interactions.  But we can model all of them in a global climate simulation.  So climatologists+simulations "grasp the domain"  even though humans can't.  Now suppose we want to extend these predictive climate models to include predictions about what humans will do in response.  We don't know how humans will behave except in some general statistical terms.  We don't know whether they will build nuclear powerplants or not.  Whether they will go to war over immigration or not.  An AI might be able to do that, but we certainly can't.   But if it did, would we believe it?  It can't explain it to us.

Brent

Lawrence Crowell

Feb 4, 2022, 4:36:59 PM
to Everything List
All of this is coming like a tidal wave. A couple of years ago an AI was given data on the appearance of the sky over several decades. In particular, the positions of the planets were given. Within a few days the system output not only the Copernican model but Kepler's laws. The time is coming, within a couple of decades, when if some human or humans do not figure out quantum gravitation, some AI system will. Many other things are being turned over to AI and robots. It may not be too long before humans are obsolete.

LC

John Clark

Feb 4, 2022, 4:47:06 PM
to 'Brent Meeker' via Everything List
On Fri, Feb 4, 2022 at 12:36 PM Terren Suydam <terren...@gmail.com> wrote:

>> I'll make you a deal, I'll tell you "what problem it is trying to solve" if you first tell me how long a piece of string is. And if you don't wanna do that just rephrase the question more clearly.
 
> lol ok. The worry you're articulating is that AlphaCode will turn its coding abilities on itself and improve its own code, and that this could lead to the singularity. First, it must be said that AlphaCode is a tool with no agency of its own.

We're talking about fundamentals here, and in that context I don't know what you mean by "agency". Any information processing mechanism can be reduced logically to a Turing Machine. Some machines will stop and produce an answer and some will never stop, and some Turing machines will produce a correct answer and some will not; in general there's no way to know what a Turing machine is going to do, you just have to watch it and see, and you might be waiting forever for it to stop and produce an answer.
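
To make that concrete, here is the classic diagonal argument sketched in Python; halts() is a hypothetical stub, since the whole point is that no such decider can exist:

def halts(func, arg):
    # Hypothetical oracle: pretend it always answers correctly in finite time.
    raise NotImplementedError("no such total decider can exist")

def paradox(f):
    if halts(f, f):        # if f(f) would halt...
        while True:        # ...loop forever instead
            pass
    return "halted"        # ...otherwise halt immediately

# Asking halts(paradox, paradox) contradicts whichever answer it gives,
# so no program can decide halting for all programs.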

> Left to its own devices, it will do... nothing.

There's no way you could know that. Even if you knew the exact state a huge neural net like AlphaZero was in, which is very unlikely, there is no way you could predict which state it would evolve into unless you could play chess as well as it can, which you cannot. In general the only way to know what a large neural network (which can always be logically reduced to a Turing Machine) will do is to just watch it and see, there is no shortcut. For a long time it might look like it's doing nothing and then suddenly start doing something, and that something might be something you don't like. 


> But let's say the DeepMind team wanted to improve AlphaCode by applying AlphaCode to itself. My question to you is, what is the "toy problem" they would feed to AlphaCode? How do you define that problem? 

Look at this code for a subprogram and make something that does the same thing but is smaller or runs faster or both. And that's not a toy problem, that's a real problem.  

 >> an AI could have a detailed intellectual conversation with 1000 people at the same time, or a million, or a billion.

> Sure, but those interactions still take time, perhaps days or even months. And you're assuming that many people will want to have conversations with an AI.

Yes, I am assuming that, and I think it's a very reasonable assumption. If an intelligent AI thinks she could learn important stuff from talking to people it can simply turn up its charm variable so that people want to talk to her (or him). I suggest you take a look at the movie "Her" which covers the exact theme I'm talking about, a charismatic and brilliant AI having interesting and intimate conversations with thousands of people at exactly the same time. I think it's one of the best science-fiction movies ever made even though some say it has a depressing ending. I disagree, I didn't find it depressing at all. 


>Have you ever tried listening to a 6-year-old try to tell a story?

Have you ever listened to a genius tell a story?

 
>> If humans can do it then an AI can do it too because knowledge is just highly computed information, and wisdom is just highly computed knowledge. 

> Sure, I can hand-wave things away too. "Highly computed" means what exactly?

It exactly means that a high number of FLOPS are necessary but not sufficient.  

> I can reverse every word in this post. If I did that a million times in a row it would be "highly computed" but it wouldn't result in knowledge, much less wisdom.

Obviously the computation must be done intelligently. I've had debates of this sort before and at this point it is traditional for my opponent to demand that I define "intelligently", and I will be happy to do so if you first define "define", and then define "define "define"" and then...
 
> And I'm not talking about mere information,

>> Mere information? Mere?!

> As opposed to knowledge, wisdom, the ability to model aspects of the world and simulate them, the ability to explain things, etc.

How do you expect to be able to do any of this without processing information?!  

>>You need AI, AGI is just loquacious technobabble used to make things sound more inscrutable.   

> Doesn't seem all that loquacious to me. AGI just adds the word "general",

I think if Steven Spielberg's movie had been called AGI instead of AI, some people today would no longer like the acronym AGI, because too many people would know exactly what it means and it would thus lack that certain aura of erudition and mystery that they crave. Everybody knows what AI means, but only a small select cognoscenti know the meaning of AGI. A classic case of jargon creep.
 
> to highlight the fact that today's AI isn't able to apply its intelligence to anything but narrow domains.

Even human geniuses have rather narrow domains; Einstein loved the violin but was only a mediocre player.

>>> We probably need to define what understanding/comprehension actually means if we're going to take this much further.

>> I don't think that would help one bit because fundamentally definitions are not important in language, examples are. After all, examples are where lexicographers get the knowledge to write the definitions for their book. So I'd say that "understanding" is the thing that Einstein had about physics to a greater extent than anybody else of his generation.

> Sure, that works for me. Einstein was able to predict and explain things that nobody before him was able to. Prediction and explanation are hallmarks of understanding.

I agree. And the only way we can tell if somebody else has a greater understanding than we do is to see if they can answer questions or do things that we cannot.  


>>>  to operate in the free-form world of humans, an AI needs to be able to understand and react to a problem space that is constantly changing. Changing rules (implicit and explicit), players, goals, dynamics, etc.

>> Well sure, but AIs have been able to do that for years, since the 1950's. 

> Care to give an example of AI in the 1950s that could do that?

A tic-tac-toe board is constantly changing, but a computer in the 1950s could play that game perfectly. And a computer in the late 1950s or early '60s could play checkers well enough to beat most children and adult novice players.
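
The exhaustive search those early machines did fits in a few lines today; here's a minimal brute-force minimax sketch for tic-tac-toe (the code is modern Python, only the idea is 1950s):

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    # Search the full game tree; X maximizes the score, O minimizes it.
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    if all(board):
        return 0, None  # draw
    best_score, best_move = None, None
    for m in (i for i, c in enumerate(board) if not c):
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None
        if (best_score is None
                or (player == 'X' and score > best_score)
                or (player == 'O' and score < best_score)):
            best_score, best_move = score, m
    return best_score, best_move

print(minimax([None] * 9, 'X'))  # (0, 0): perfect play is a draw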

 
> The point I'm making is that intelligence, operationally speaking, is about far more than simply extracting information from the environment.

I agree, after the extraction of information from the environment it must be extensively processed, first into knowledge and then into wisdom. 
 
> It's about making models of the world that can be used for prediction, explanation, making plans, coordinating, etc.

Yes, as I said,  the information must be extensively processed.

> Information extraction is necessary but not sufficient for intelligence.

Agreed.

> To the larger point, where I think we disagree is how easy it is for an AI to achieve real understanding of the real world of human interaction.

I think the larger point is that for a superintelligent AI, humans and their interactions will not be at the top of its priority list; a superhuman AI will have bigger fish to fry than us. And even if the singularity doesn't happen for 1000 years (and I can't imagine why it would take that long), in 999 years it will still seem like it's a long way off, but more progress will be made in that last year than in the previous 999 combined. So whenever the singularity occurs it will come as a big surprise to most.

 John K Clark    See what's on my new list at  Extropolis

Terren Suydam

Feb 4, 2022, 5:09:04 PM
to Everything List
Just to keep this focused on programmers losing their jobs - which is how this started - by "grasping the problem domain" I just mean that an AI should know how to model and operate in that domain such that it can formulate and act on plans that give it the potential to outperform humans. My hunch is that AIs won't outperform humans at this task until they grasp a much larger problem domain than people generally assume.

Terren Suydam

Feb 4, 2022, 5:34:21 PM
to Everything List
On Fri, Feb 4, 2022 at 4:47 PM John Clark <johnk...@gmail.com> wrote:
On Fri, Feb 4, 2022 at 12:36 PM Terren Suydam <terren...@gmail.com> wrote:

>> I'll make you a deal, I'll tell you "what problem it is trying to solve" if you first tell me how long a piece of string is. And if you don't wanna do that just rephrase the question more clearly.
 
> lol ok. The worry you're articulating is that AlphaCode will turn its coding abilities on itself and improve its own code, and that this could lead to the singularity. First, it must be said that AlphaCode is a tool with no agency of its own.

We're talking about fundamentals here, and in that context I don't know what you mean by "agency". Any information processing mechanism can be reduced logically to a Turing Machine. Some machines will stop and produce an answer and some will never stop, and some Turing machines will produce a correct answer and some will not; in general there's no way to know what a Turing machine is going to do, you just have to watch it and see, and you might be waiting forever for it to stop and produce an answer.

> Left to its own devices, it will do... nothing.

There's no way you could know that. Even if you knew the exact state a huge neural net like AlphaZero was in, which is very unlikely, there is no way you could predict which state it would evolve into unless you could play chess as well as it can, which you cannot. In general the only way to know what a large neural network (which can always be logically reduced to a Turing Machine) will do is to just watch it and see, there is no shortcut. For a long time it might look like it's doing nothing and then suddenly start doing something, and that something might be something you don't like. 


Have you ever written a program?  Because you talk like someone who gets theoretical computation concepts but has not actually ever coded anything.
 

> But let's say the DeepMind team wanted to improve AlphaCode by applying AlphaCode to itself. My question to you is, what is the "toy problem" they would feed to AlphaCode? How do you define that problem? 

Look at this code for a subprogram and make something that does the same thing but is smaller or runs faster or both. And that's not a toy problem, that's a real problem. 

"does the same thing" is problematic for a couple reasons. The first is that AlphaCode doesn't know how to read code, but let's say that it could. The other problem is that with that problem description, it won't evolve except in the very narrow sense of improving its efficiency. The kind of problem description that might actually lead to a singularity is something like "Look at this code and make something that can solve ever more complex problem descriptions". But my hunch there is that that problem description is too complex for it to recursively self-improve towards.
 
 >> an AI could have a detailed intellectual conversation with 1000 people at the same time, or a million, or a billion.

> Sure, but those interactions still take time, perhaps days or even months. And you're assuming that many people will want to have conversations with an AI.

Yes, I am assuming that, and I think it's a very reasonable assumption. If an intelligent AI thinks she could learn important stuff from talking to people it can simply turn up its charm variable so that people want to talk to her (or him). I suggest you take a look at the movie "Her" which covers the exact theme I'm talking about, a charismatic and brilliant AI having interesting and intimate conversations with thousands of people at exactly the same time. I think it's one of the best science-fiction movies ever made even though some say it has a depressing ending. I disagree, I didn't find it depressing at all. 


>Have you ever tried listening to a 6-year-old try to tell a story?

Have you ever listened to a genius tell a story?


You're already at the singularity if it can be charming and brilliant to millions of people simultaneously. I thought we were talking about getting to the singularity.
 
 
>> If humans can do it then an AI can do it too because knowledge is just highly computed information, and wisdom is just highly computed knowledge. 

> Sure, I can hand-wave things away too. "Highly computed" means what exactly?

It exactly means that a high number of FLOPS are necessary but not sufficient.  

> I can reverse every word in this post. If I did that a million times in a row it would be "highly computed" but it wouldn't result in knowledge, much less wisdom.

Obviously the computation must be done intelligently. I've had debates of this sort before and at this point it is traditional for my opponent to demand that I define "intelligently", and I will be happy to do so if you first define "define", and then define "define "define"" and then...

Don't worry, I won't ask you to do that. And I acknowledge that AIs will eventually gain knowledge and wisdom. I just don't think it's as easy as you're making it sound.
 
 
> And I'm not talking about mere information,

>> Mere information? Mere?!

> As opposed to knowledge, wisdom, the ability to model aspects of the world and simulate them, the ability to explain things, etc.

How do you expect to be able to do any of this without processing information?!  

Where in the world did you get the idea that I think processing information isn't necessary?
 

>>You need AI, AGI is just loquacious technobabble used to make things sound more inscrutable.   

> Doesn't seem all that loquacious to me. AGI just adds the word "general",

I think if Steven Spielberg's movie had been called AGI instead of AI, some people today would no longer like the acronym AGI, because too many people would know exactly what it means and it would thus lack that certain aura of erudition and mystery that they crave. Everybody knows what AI means, but only a small select cognoscenti know the meaning of AGI. A classic case of jargon creep.

Do you really expect a discipline as technical as AI to not use jargon?  You use physics jargon all the time.
 
 
> to highlight the fact that today's AI isn't able to apply its intelligence to anything but narrow domains.

Even human geniuses have rather narrow domains; Einstein loved the violin but was only a mediocre player.

The fact that perhaps the greatest theoretical physicist of all time also played the violin I think proves the point that humans are generalists. Humans may only enjoy narrow domains but most are capable of some degree of competency in any domain if they give it time and attention.
 
 

>>> We probably need to define what understanding/comprehension actually means if we're going to take this much further.

>> I don't think that would help one bit because fundamentally definitions are not important in language, examples are. After all, examples are where lexicographers get the knowledge to write the definitions for their book. So I'd say that "understanding" is the thing that Einstein had about physics to a greater extent than anybody else of his generation.

> Sure, that works for me. Einstein was able to predict and explain things that nobody before him was able to. Prediction and explanation are hallmarks of understanding.

I agree. And the only way we can tell if somebody else has a greater understanding than we do is to see if they can answer questions or do things that we cannot.  


>>>  to operate in the free-form world of humans, an AI needs to be able to understand and react to a problem space that is constantly changing. Changing rules (implicit and explicit), players, goals, dynamics, etc.

>> Well sure, but AIs have been able to do that for years, since the 1950's. 

> Care to give an example of AI in the 1950s that could do that?

A tic-tac-toe board is constantly changing, but a computer in the 1950s could play that game perfectly. And a computer in the late 1950s or early '60s could play checkers well enough to beat most children and adult novice players.

I was pretty clearly talking about a problem space in which there are "changing rules (implicit and explicit), players, goals, dynamics, etc.".  Are you really suggesting that a 1950s AI that can play tic tac toe is reacting to changing rules, goals, players, dynamics, etc?
 
I think the larger point is that for a superintelligent AI, humans and their interactions will not be at the top of its priority list; a superhuman AI will have bigger fish to fry than us. And even if the singularity doesn't happen for 1000 years (and I can't imagine why it would take that long), in 999 years it will still seem like it's a long way off, but more progress will be made in that last year than in the previous 999 combined. So whenever the singularity occurs it will come as a big surprise to most.

I don't disagree, but this started as me saying that programmer jobs are safe for the time being... that there's much more progress to be made in AI before that happens, because the world of human interaction is much more vast and complex than most people acknowledge. There's a bias there that because as human adults we're all relatively competent in that domain, it can't be that hard.

Terren
 

 John K Clark    See what's on my new list at  Extropolis

John Clark

Feb 4, 2022, 6:18:34 PM
to 'Brent Meeker' via Everything List
On Fri, Feb 4, 2022 at 5:34 PM Terren Suydam <terren...@gmail.com> wrote:

>> Look at this code for a subprogram and make something that does the same thing but is smaller or runs faster or both. And that's not a toy problem, that's a real problem. 

> "does the same thing" is problematic for a couple reasons. The first is that AlphaCode doesn't know how to read code,

Huh? We already know AlphaCode can write code; how can something know how to write but not read? It's easier to read a novel than to write a novel.
 
> The other problem is that with that problem description, it won't evolve except in the very narrow sense of improving its efficiency.

It seems to me the ability to write code that is smaller and faster than anybody else's is not "very narrow"; a human could make a very good living indeed from that talent.  And if I were the guy who signed his enormous paycheck and somebody offered me a program that would do the same thing he did, I'd jump at it.

> The kind of problem description that might actually lead to a singularity is something like "Look at this code and make something that can solve ever more complex problem descriptions". But my hunch there is that that problem description is too complex for it to recursively self-improve towards.

Just adding more input variables would be less complex than figuring out how to make a program smaller and faster.

>> I think if Steven Spielberg's movie had been called AGI instead of AI, some people today would no longer like the acronym AGI, because too many people would know exactly what it means and it would thus lack that certain aura of erudition and mystery that they crave. Everybody knows what AI means, but only a small select cognoscenti know the meaning of AGI. A classic case of jargon creep.

>Do you really expect a discipline as technical as AI to not use jargon? 

When totally new concepts come up, as they do occasionally in science, jargon is necessary because there is no previously existing word or short phrase that describes it. But that is not the primary generator of jargon, and it is not in this case, because a very short word that describes the idea already exists: everybody already knows what AI means, but very few know that AGI means the same thing. And some see that as AGI's great virtue; it's mysterious and sounds brainy.
 
> You use physics jargon all the time.

I do try to keep that to a minimum, perhaps I should try harder.  

John K Clark    See what's on my new list at  Extropolis



Tomasz Rola

Feb 4, 2022, 7:14:24 PM
to everyth...@googlegroups.com
On Fri, Feb 04, 2022 at 10:59:44AM -0800, Brent Meeker wrote:
> Well consider the example of climate. Nobody can grasp all factors
> in climate and their interactions. But we can model all of them in
> a global climate simulation.

Actually, as far as I can tell, we cannot. Or, you mean,
theoretically, sure, but in practice I would say no, such a model has
not been made yet.

If such a model existed, it would give an answer about how/why the
last glaciation started and how/why it ended. But as I understand it,
such answers have not been given yet. Only speculations.

So, it seems to me, whatever model we have, it cannot even predict
past events. I would not bet any money on anything such a model says
about other things.

> So climatologists+simulations "grasp the domain" even though humans
> can't.

Who are "climatologists"?

> Now suppose we want to extend these predictive climate models to
> include predictions about what humans will do in response. We don't
> know how humans will behave except in some general statistical
> terms.

And yet I can give you a good prediction. We tend to do as little as
possible, for as long as possible. I predict this is not going to
change anytime soon. And not later, either.

Tomasz Rola

Feb 4, 2022, 7:34:31 PM
to everyth...@googlegroups.com
On Fri, Feb 04, 2022 at 01:36:59PM -0800, Lawrence Crowell wrote:
> All of this is coming like a tidal wave. A couple of years ago an AI was
> given data on the appearance of the sky over several decades. In particular,
> the positions of the planets were given. Within a few days the system output
> not only the Copernican model but Kepler's laws.

This is unclear. What exactly did the "ai" do? Had it gone through a
solution space of all models explaining celestial bodies, complete
with turtles standing on elephants and shaking whenever there is an
earthquake in Tokyo?

Do you have a link to some description of what the experiment was
exactly?

Because from the point of view of an Earth-based observer, planets
are just more points of light in the night sky, only moving
strangely. Such an observer has no way to say if, for example, Mars
is really different, or even a planet, or if it is maybe some star
looking for a place to stick and stop forever. Only with a telescope
can one see that Mars is a disc, and that the disc changes with time,
but periodically, etc etc.

But an objective observer has no way to say that Mars is a planet
based merely on naked-eye observations, I am afraid. It took a lot of
astrophotography to get some understanding of our neighbourhood.

Thus I suspect that the "ai" had been fed assumptions about what it
was expected to find.

John Clark

Feb 5, 2022, 6:50:08 AM
to 'Brent Meeker' via Everything List
On Fri, Feb 4, 2022 at 7:34 PM Tomasz Rola <rto...@ceti.pl> wrote:

> from the point of view of an Earth-based observer, planets
are just more points of light in the night sky, only moving
strangely. Such an observer has no way to say if, for example, Mars
is really different, or even a planet, or if it is maybe some star
looking for a place to stick and stop forever. Only with a telescope
can one see that Mars is a disc, and that the disc changes with time,
but periodically, etc etc. But an objective observer has no way to say
that Mars is a planet based merely on naked-eye observations,

That is not true; even the ancient Egyptians knew there was something special about Mercury, Venus, Mars, Jupiter and Saturn. Johannes Kepler didn't know the planets could be resolved into discs, and he didn't need to know that to derive his 3 laws of planetary motion, and a computer wouldn't need to know that either. Kepler didn't have a telescope, but he did have Tycho Brahe's excellent naked-eye measurements of the movements of the planets over many decades; especially detailed were those of Mars. Kepler knew that the planets moved in a complicated path relative to the fixed stars and that after a fixed amount of time, different for each planet, the path repeated. Kepler spent years developing a very complicated Earth-centered model with lots of epicycles that fit Tycho's data pretty well, and a lesser scientist might've been satisfied with that, but Kepler was not. Kepler knew Tycho's data was very good and the discrepancy between theory and observation was just too large to ignore, so reluctantly he junked years of work and went back to square one.  After more years of work he found a Sun-centered model and his three laws, which fit Tycho's data almost exactly and was much simpler.
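
And the regularity Kepler found is easy to check today; with rounded textbook values (semi-major axis in astronomical units, period in years), T^2/a^3 comes out essentially 1 for every planet:

planets = {                  # (semi-major axis in AU, orbital period in years)
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}
for name, (a, T) in planets.items():
    print(name, round(T**2 / a**3, 3))  # Kepler's third law: all ~1.0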

John K Clark    See what's on my new list at  Extropolis

Terren Suydam

unread,
Feb 5, 2022, 3:51:22 PM2/5/22
to Everything List
On Fri, Feb 4, 2022 at 6:18 PM John Clark <johnk...@gmail.com> wrote:
On Fri, Feb 4, 2022 at 5:34 PM Terren Suydam <terren...@gmail.com> wrote:

>> Look at this code for a subprogram and make something that does the same thing but is smaller or runs faster or both. And that's not a toy problem, that's a real problem. 

> "does the same thing" is problematic for a couple reasons. The first is that AlphaCode doesn't know how to read code,

Huh? We already know AlphaCode can write code, how can something know how to write but not read? It's easier to read a novel than write a novel.

This is one case where your intuitions fail. I dug a little deeper into how AlphaCode works. It generates millions of candidate solutions using a model trained on github code. It then filters out 99% of those candidate solutions by running them against test cases provided in the problem description and removing the ones that fail. It then uses a different technique to whittle down the candidate solutions from several thousand to just ten. Nobody, neither the AI nor the humans running AlphaCode, knows if the 10 solutions picked are correct.
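
To make the shape of that pipeline concrete, here is a tiny self-contained toy in Python. This is emphatically not AlphaCode's actual code - the templates and helper names are all invented - it just shows the generate / filter / whittle structure:

import random

# Toy sketch of the pipeline described above (not AlphaCode itself):
# blindly sample candidate programs, keep only those that pass the
# example tests, then dedupe and submit at most ten.

TEMPLATES = ["x + {k}", "x * {k}", "x * x + {k}", "{k} - x"]

def sample_candidate():
    # Stand-in for sampling from a model trained on github code.
    return random.choice(TEMPLATES).format(k=random.randint(-5, 5))

def passes(program, tests):
    return all(eval(program, {"x": x}) == y for x, y in tests)

def alphacode_like(tests, n_candidates=10_000, n_final=10):
    candidates = [sample_candidate() for _ in range(n_candidates)]
    passing = [c for c in candidates if passes(c, tests)]  # filters ~99%
    unique = list(dict.fromkeys(passing))                  # whittle: dedupe
    return unique[:n_final]                                # submit up to ten

# Example: the only template that fits these test cases is "x * x + 1".
print(alphacode_like([(0, 1), (2, 5), (3, 10)]))           # ['x * x + 1']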
 
AlphaCode is not capable of reading code. It's a clever version of monkeys typing on typewriters until they bang out a Shakespeare play. Still counts as AI, but cannot be said to understand code.

 
> The other problem is that with that problem description, it won't evolve except in the very narrow sense of improving its efficiency.

It seems to me the ability to write code that was smaller and faster than anybody else is not "very narrow", a human could make a very good living indeed from that talent.  And if I was the guy that signed his enormous paycheck and somebody offered me a program that would do the same thing he did I'd jump at it.

This actually already exists in the form of optimizing compilers - the programs that translate human-readable code like C++ or Java into the machine instructions that processors ultimately execute. Optimizing compilers can make human code more efficient, but those gains are only available in very well-understood and limited ways. To do what you're suggesting requires machine intelligence capable of understanding things in a much broader context.
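
To give a flavor of "well-understood and limited": below is constant folding, a classic rewrite a compiler can prove safe and therefore apply mechanically. It's sketched in Python for readability; real optimizing compilers do this on an intermediate representation, not on source like this.

def seconds_before(hours_list):
    total = 0
    for h in hours_list:
        total += h * (60 * 60)    # 60 * 60 is (conceptually) recomputed each pass
    return total

def seconds_after(hours_list):
    SECONDS_PER_HOUR = 3600       # constant folding: 60 * 60 evaluated once
    total = 0
    for h in hours_list:
        total += h * SECONDS_PER_HOUR
    return total

# The rewrite is provably behavior-preserving, which is why a compiler
# can do it without any broader "understanding" of the program's purpose.
assert seconds_before([1, 2, 3]) == seconds_after([1, 2, 3]) == 21600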
 

> The kind of problem description that might actually lead to a singularity is something like "Look at this code and make something that can solve ever more complex problem descriptions". But my hunch there is that that problem description is too complex for it to recursively self-improve towards.

Just adding more input variables would be less complex than figuring out how to make a program smaller and faster.

Think about it this way. There's diminishing returns on the strategy to make the program smaller and faster, but potentially unlimited returns on being able to respond to ever greater complexity in the problem description.
 

>> I think if Steven Spielberg's movie had been called AGI instead of AI some people today would no longer like the acronym AGI, because too many people would know exactly what it means and thus it would lack that certain aura of erudition and mystery that they crave. Everybody knows what AI means, but only a small select cognoscenti know the meaning of AGI. A classic case of jargon creep.

>Do you really expect a discipline as technical as AI to not use jargon? 

When totally new concepts come up, as they do occasionally in science, jargon is necessary because no previously existing word or short phrase describes them. But that is not the primary generator of jargon, and it is not the generator in this case, because a very short word that describes the idea already exists: everybody already knows what AI means, but very few know that AGI means the same thing. And some see that as AGI's great virtue; it's mysterious and sounds brainy. 
 
> You use physics jargon all the time.

I do try to keep that to a minimum, perhaps I should try harder.  

I don't hold it against you, and I certainly don't think you're trying to cultivate an aura of erudition and mystery when you do. I'm not sure why you seem to have an axe to grind about the use of AGI, but it is a useful distinction to make. It's clear we have AI today. And it's equally clear we do not have AGI.

Terren
 

John K Clark    See what's on my new list at  Extropolis




Quentin Anciaux

unread,
Feb 5, 2022, 4:34:04 PM2/5/22
to everyth...@googlegroups.com
The only thing I hope AI will achieve is to be less condescending... if it achieves true understanding, I hope it will be humble... and as much as John Clark dislikes religions and God, the singularity will be God... 

Quentin

John Clark

unread,
Feb 5, 2022, 5:18:32 PM2/5/22
to 'Brent Meeker' via Everything List
On Sat, Feb 5, 2022 at 3:51 PM Terren Suydam <terren...@gmail.com> wrote:

> I dug a little deeper into how AlphaCode works. It generates millions of candidate solutions using a model trained on github code. It then filters out 99% of those candidate solutions by running them against test cases provided in the problem description and removing the ones that fail. It then uses a different technique to whittle down the candidate solutions from several thousand to just ten. [...] AlphaCode is not capable of reading code.

How on earth can it filter out 99% of the code because it is bad code if it cannot read code? Closer to home, how could somebody on this list tell the difference between a post they like and a post they didn't like if they couldn't read English?

> Nobody, neither the AI nor the humans running AlphaCode, knows if the 10 solutions picked are correct.

As Alan Turing said  "If a machine is expected to be infallible, it cannot also be intelligent."

> It's a clever version of monkeys typing on typewriters until they bang out a Shakespeare play. Still counts as AI,

  A clever version indeed!! In fact I would say that William Shakespeare himself was such a version.

> Still counts as AI, but cannot be said to understand code.

I am a bit confused by your use of one word; you seem to be giving it a very unconventional meaning.  If you, being a human, "understand" code, but the code you write is inferior to the code written by an AI that doesn't "understand" code, then I fail to see why any human or any machine would want to have an "understanding" of anything.

>> Just adding more input variables would be less complex than figuring out how to make a program smaller and faster.

> Think about it this way. There's diminishing returns on the strategy to make the program smaller and faster, but potentially unlimited returns on being able to respond to ever greater complexity in the problem description.
 
You're talking about what would be more useful; I was talking about what would be more complex. In general, finding the smallest and fastest program that can accomplish a given task is infinitely complex; that is to say, in general it's impossible to find the smallest program and prove it's the smallest program.  Code optimization is very far from a trivial problem.

John K Clark    See what's on my new list at  Extropolis

Brent Meeker

unread,
Feb 5, 2022, 6:24:36 PM2/5/22
to everyth...@googlegroups.com


On 2/5/2022 12:51 PM, Terren Suydam wrote:


On Fri, Feb 4, 2022 at 6:18 PM John Clark <johnk...@gmail.com> wrote:
On Fri, Feb 4, 2022 at 5:34 PM Terren Suydam <terren...@gmail.com> wrote:

>> Look at this code for a subprogram and make something that does the same thing but is smaller or runs faster or both. And that's not a toy problem, that's a real problem. 

> "does the same thing" is problematic for a couple reasons. The first is that AlphaCode doesn't know how to read code,

Huh? We already know AlphaCode can write code, how can something know how to write but not read? It's easier to read a novel than write a novel.

This is one case where your intuitions fail. I dug a little deeper into how AlphaCode works. It generates millions of candidate solutions using a model trained on github code. It then filters out 99% of those candidate solutions by running them against test cases provided in the problem description and removing the ones that fail. It then uses a different technique to whittle down the candidate solutions from several thousand to just ten. Nobody, neither the AI nor the humans running AlphaCode, knows if the 10 solutions picked are correct.

Just like we don't know which interpretation of quantum mechanics is correct.  But we use it anyway.


 
AlphaCode is not capable of reading code. It's a clever version of monkeys typing on typewriters until they bang out a Shakespeare play. Still counts as AI, but cannot be said to understand code.

What does it mean "to read code"?  It can execute code from github, apparently, so it must read code well enough to execute it.  What more does it need to read?  You say it cannot be said to understand code.  Can you specify what would show it understands the code? 

I think you mean it tells you a story about how this part does that and this other part does something else, etc.  But that's just catering to your weak human brain that can't just "see" that the code solves the problem.  The problem is that there is no species-independent meaning of "understand" except "make it work".  AlphaCode doesn't understand code like you do, because it doesn't think like you do and doesn't have the context you do.

Brent

Terren Suydam

unread,
Feb 5, 2022, 7:15:02 PM2/5/22
to Everything List
On Sat, Feb 5, 2022 at 5:18 PM John Clark <johnk...@gmail.com> wrote:
On Sat, Feb 5, 2022 at 3:51 PM Terren Suydam <terren...@gmail.com> wrote:

> I dug a little deeper into how AlphaCode works. It generates millions of candidate solutions using a model trained on github code. It then filters out 99% of those candidate solutions by running them against test cases provided in the problem description and removing the ones that fail. It then uses a different technique to whittle down the candidate solutions from several thousand to just ten. [...] AlphaCode is not capable of reading code.

How on earth can it filter out 99% of the code because it is bad code if it cannot read code? Closer to home, how could somebody on this list tell the difference between a post they like and a post they didn't like if they couldn't read English?

Let's take a more accessible analogy. Let's say the problem description is: "Here's a locked door. Devise a key to unlock the door."

A simplified analog to what Alphacode does is the following:
  • generate millions of different keys of various shapes and sizes.
  • for each key, try to unlock the door with it
  • if it doesn't work, toss it
In order to say that the key-generating AI understands keys and locks, you'd have to believe that a strategy that involves creating millions of guesses until one works entails some kind of understanding.

To your point that AlphaCode must have the ability to read code if it knows how to toss incorrect candidates, that's like saying that the key-generator must understand locks because it knows how to test if a key unlocks the door.
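
Spelled out as a runnable toy in Python, with every detail invented: notice that the only feedback the generator ever receives is whether the door opened, nothing about why a key failed.

import random

# Toy key-generator (all details invented). The "AI" never inspects the
# lock's innards; it just stamps out keys and tries the door.

SECRET_BITTING = (3, 1, 4, 1, 5)      # the lock's actual pin depths

def random_key():
    return tuple(random.randint(0, 9) for _ in range(5))

def unlocks(key):
    return key == SECRET_BITTING      # try the door; that is all we learn

key = next((k for k in (random_key() for _ in range(2_000_000)) if unlocks(k)),
           None)
print(key)   # almost always (3, 1, 4, 1, 5), after ~100,000 tries on average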
 

> Nobody, neither the AI nor the humans running AlphaCode, knows if the 10 solutions picked are correct.

As Alan Turing said  "If a machine is expected to be infallible, it cannot also be intelligent."

> It's a clever version of monkeys typing on typewriters until they bang out a Shakespeare play. Still counts as AI,

  A clever version indeed!! In fact I would say that William Shakespeare himself was such a version.

If you think AlphaCode and Shakespeare have anything in common, then I don't think your assertions about AI are worth much.
 

> Still counts as AI, but cannot be said to understand code.

I am a bit confused by your use of one word; you seem to be giving it a very unconventional meaning.  If you, being a human, "understand" code, but the code you write is inferior to the code written by an AI that doesn't "understand" code, then I fail to see why any human or any machine would want to have an "understanding" of anything.

If you think a brute-force "generate a million guesses until one works" strategy has the same understanding as an algorithm that employs a detailed model of the domain and uses that model to generate a reasoned solution, regardless of the results, then it's you that is employing the unconventional meaning of "understand".

In the real world, you usually don't get to try something a million times until something works.
 

>> Just adding more input variables would be less complex than figuring out how to make a program smaller and faster.

> Think about it this way. There's diminishing returns on the strategy to make the program smaller and faster, but potentially unlimited returns on being able to respond to ever greater complexity in the problem description.
 
You're talking about what would be more useful; I was talking about what would be more complex. In general, finding the smallest and fastest program that can accomplish a given task is infinitely complex; that is to say, in general it's impossible to find the smallest program and prove it's the smallest program.  Code optimization is very far from a trivial problem.

I'm surprised you're focusing on the less useful direction to go in. If anything, your thinking tends to be very pragmatic. Who cares if you can squeeze a few extra milliseconds out of an algorithm, if you could instead spend that effort doing something far more useful?
 

John K Clark    See what's on my new list at  Extropolis


Terren Suydam

unread,
Feb 5, 2022, 7:23:25 PM2/5/22
to Everything List
On Sat, Feb 5, 2022 at 6:24 PM Brent Meeker <meeke...@gmail.com> wrote:
 
AlphaCode is not capable of reading code. It's a clever version of monkeys typing on typewriters until they bang out a Shakespeare play. Still counts as AI, but cannot be said to understand code.

What does it mean "to read code"?  It can execute code from github, apparently, so it must read code well enough to execute it.  What more does it need to read?  You say it cannot be said to understand code.  Can you specify what would show it understands the code? 


The github code is used to train a neural network that maps natural language to code, and this neural network is used to generate candidate solutions based on the natural language of the problem description. If you want to say that represents a form of understanding, ok. But I would still push back on the claim that it can "read code".
 
I think you mean it tells you a story about how this part does that and this other part does something else, etc.  But that's just catering to your weak human brain that can't just "see" that the code solves the problem.  The problem is that there is no species-independent meaning of "understand" except "make it work".  AlphaCode doesn't understand code like you do, because it doesn't think like you do and doesn't have the context you do.

There are times when I can read code and understand it, and times when I can't. When I can understand it, I can reason about what it's doing; I can find and fix bugs in it; I can potentially optimize it. I can see if this code is useful for other situations. And yes, I can tell you a story about what it's doing. AlphaCode is doing none of those things, because it's not built to.
 

Brent

John Clark

unread,
Feb 6, 2022, 7:20:29 AM2/6/22
to 'Brent Meeker' via Everything List
On Sat, Feb 5, 2022 at 7:15 PM Terren Suydam <terren...@gmail.com> wrote:

> Let's take a more accessible analogy. Let's say the problem description is: "Here's a locked door. Devise a key to unlock the door."
A simplified analog to what Alphacode does is the following:
  • generate millions of different keys of various shapes and sizes.
  • for each key, try to unlock the door with it
  • if it doesn't work, toss it
 
A better analogy would be a lockpicker that breaks a complex task down into a series of much simpler actions. Insert two lockpicks into the keyhole, one to provide torque and the other to manipulate the pins. Find the first pin and see if it produces any resistance; if it doesn't, ignore it for the time being and go on to the second pin; if it shows resistance, increase the pressure until you hear a click or feel a slight rotation of the lock. Continue this procedure until you get to the last pin (most locks have 6 pins and few have more than 8), then go back to pin one and go down the line again. After three or four passes down the line you should be able to fully rotate the lock and open the door.  
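
The difference, in code: each probe returns information that guides the next action, so the search takes dozens of probes instead of millions of blind guesses. A toy sketch - the Lock class and its numbers are invented:

import random

# Feedback-driven search, unlike the blind key generator: every probe
# tells us something (a click or not) that narrows the next step.

class Lock:
    def __init__(self, n_pins=6):
        self.heights = [random.randint(1, 5) for _ in range(n_pins)]
        self.pin_set = [False] * n_pins

    def probe(self, pin, pressure):
        # "Click": enough pressure at this pin sets it.
        if pressure >= self.heights[pin]:
            self.pin_set[pin] = True
        return self.pin_set[pin]

def pick(lock, max_passes=4):
    for _ in range(max_passes):                  # a few passes down the line
        for pin in range(len(lock.pin_set)):
            if lock.pin_set[pin]:
                continue                         # already set, move on
            for pressure in range(1, 6):         # increase until it clicks
                if lock.probe(pin, pressure):
                    break
    return all(lock.pin_set)                     # all pins set: lock rotates

print(pick(Lock()))   # True, after at most a few dozen probes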
 
> If you think AlphaCode and Shakespeare have anything in common, then I don't think your assertions about AI are worth much.

I see no evidence of a secret sauce that only human geniuses have, the difference between a Turing Machine named William Shakespeare and a Turing Machine named AlphaCode is a difference in degree and not of kind. 
 
> If you think a brute-force "generate a million guesses until one works" ...

Terren, we both know that's not the way AlphaCode works, it's a bit more complicated than that.   
 
> ... strategy has the same understanding as an algorithm that employs a detailed model of the domain and uses that model to generate a reasoned solution, regardless of the results, then it's you that is employing the unconventional meaning of "understand".

You can insult that poor program all you want but it doesn't change the fact that the very first time it was tried it managed to write better code in that competition than half the humans who make their living by writing code. And do you seriously doubt that AlphaCode will get better in the next few years, a lot better? I humbly suggest you stop worrying about how a computer can do what it does and start looking at what it is actually able to do; if I can come up with the correct answer then from your point of view it shouldn't matter how I was able to do it because it's still the correct answer.  

> In the real world, you usually don't get to try something a million times until something works.

I think that's probably untrue. I have a hunch that when you're trying to solve a problem your brain proposes many trillions of different neural patterns, and nearly all of them are rejected because they don't work very well. Most of the time you are not consciously aware of any of them, because you only have a finite memory capacity and there is little evolutionary value in remembering patterns that don't work, although some of them on the borderline between acceptance and rejection may leave a vague imprint. But a brain is fallible, and sometimes a neural pattern that would've solved the problem is rejected; perhaps smart people have a more reliable rejection mechanism than stupid people, or are able to retrieve that vague imprint and reconsider it. Einstein's notebooks show that about a year before he finished General Relativity, and after he had been working on it for nearly a decade, he came up with the correct solution, but then for some reason he rejected it; after another year of fruitless work he remembered it and decided to take a second look.  

 
>> You're talking about what would be more useful, I was talking about what would be more complex. In general finding the smallest and fastest program that can accomplish a given task is infinitely complex, that is to say in general it's impossible to find the smallest program and prove it's the smallest program.  Code optimization is very far from a trivial problem.

> I'm surprised you're focusing on the less useful direction to go in.

I focused on that example to demonstrate that what AlphaCode did was not trivial, far from it.
 
> If anything, your thinking tends to be very pragmatic.

Thank you.  
 
>Who cares if you can squeeze a few extra milliseconds out of an algorithm,

A lot of problems are solvable in principle but not in practice because they take too long, and time is money. The Fugaku Supercomputer in Japan, the fastest in the world, cost one billion dollars, so I figure that if you could make its operating system run a little more efficiently, so that the machine was just 10% faster, you could sell that program for about $100 million.  
John K Clark    See what's on my new list at  Extropolis


John Clark

unread,
Feb 7, 2022, 9:06:14 AM2/7/22
to 'Brent Meeker' via Everything List
Scott Aaronson is one of the world's top experts on quantum computers, but he also knows something about conventional computers and AI; yesterday he gave his opinion on AlphaCode. After listing some of the program's limitations he concludes with:

"Forget all that. Judged against where AI was 20-25 years ago, when I was a student, a dog is now holding meaningful conversations in English. And people are complaining that the dog isn’t a very eloquent orator, that it often makes grammatical errors and has to start again, that it took heroic effort to train it, and that it’s unclear how much the dog really understands.
It’s not obvious how you go from solving programming contest problems to conquering the human race or whatever, but I feel pretty confident that we’ve now entered a world where “programming” will look different."



John K Clark    See what's on my new list at  Extropolis


Terren Suydam

unread,
Feb 7, 2022, 2:34:13 PM2/7/22
to Everything List
On Sun, Feb 6, 2022 at 7:20 AM John Clark <johnk...@gmail.com> wrote:
On Sat, Feb 5, 2022 at 7:15 PM Terren Suydam <terren...@gmail.com> wrote:

> Let's take a more accessible analogy. Let's say the problem description is: "Here's a locked door. Devise a key to unlock the door."
A simplified analog to what Alphacode does is the following:
  • generate millions of different keys of various shapes and sizes.
  • for each key, try to unlock the door with it
  • if it doesn't work, toss it
 
A better analogy would be a lockpicker that breaks a complex task down into a series of much simpler actions. Insert two lockpicks into the keyhole, one to provide torque and the other to manipulate the pins. Find the first pin and see if it produces any resistance; if it doesn't, ignore it for the time being and go on to the second pin; if it shows resistance, increase the pressure until you hear a click or feel a slight rotation of the lock. Continue this procedure until you get to the last pin (most locks have 6 pins and few have more than 8), then go back to pin one and go down the line again. After three or four passes down the line you should be able to fully rotate the lock and open the door.  

Why is that a better analogy?  That's not at all how AlphaCode works.
 
 
> If you think AlphaCode and Shakespeare have anything in common, then I don't think your assertions about AI are worth much.

I see no evidence of a secret sauce that only human geniuses have, the difference between a Turing Machine named William Shakespeare and a Turing Machine named AlphaCode is a difference in degree and not of kind. 

They're both emulable by Turing machines in theory, yes. That's about where the commonalities end.
 
 
> If you think a brute-force "generate a million guesses until one works" ...

Terren, we both know that's not the way AlphaCode works, it's a bit more complicated than that.   

That is how it works. All I left out is the part about how it generates the guesses, which come by way of a neural network trained on existing human-written github code. My understanding is that neural network is used, essentially, to map natural-language descriptions of code to actual code.

Read the attached paper and tell me how it's different from my description.
 
 
> ... strategy has the same understanding as an algorithm that employs a detailed model of the domain and uses that model to generate a reasoned solution, regardless of the results, then it's you that is employing the unconventional meaning of "understand".

You can insult that poor program all you want but it doesn't change the fact that the very first time it was tried it managed to write better code in that competition than half the humans who make their living by writing code. And do you seriously doubt that AlphaCode will get better in the next few years, a lot better? I humbly suggest you stop worrying about how a computer can do what it does and start looking at what it is actually able to do; if I can come up with the correct answer then from your point of view it shouldn't matter how I was able to do it because it's still the correct answer.  

I have no problem with any of that. I think there's some room for marginal improvement, but I predict the current AlphaCode strategy will never lead to putting engineers out of work, or self-improvement of the type that would lead to the singularity. This is not like the AlphaZero strategy in which it teaches itself the game. If they come out with a new code-writing AI that teaches itself to code, then I will happily change my tune.
 

> In the real world, you usually don't get to try something a million times until something works.

I think that's probably untrue. I have a hunch that when you're trying to solve a problem your brain proposes many trillions of different neural patterns, and nearly all of them are rejected because they don't work very well. Most of the time you are not consciously aware of any of them, because you only have a finite memory capacity and there is little evolutionary value in remembering patterns that don't work, although some of them on the borderline between acceptance and rejection may leave a vague imprint. But a brain is fallible, and sometimes a neural pattern that would've solved the problem is rejected; perhaps smart people have a more reliable rejection mechanism than stupid people, or are able to retrieve that vague imprint and reconsider it. Einstein's notebooks show that about a year before he finished General Relativity, and after he had been working on it for nearly a decade, he came up with the correct solution, but then for some reason he rejected it; after another year of fruitless work he remembered it and decided to take a second look.  

That's an interesting and novel hypothesis about how the brain works. I don't think there's much evidence for it though.

 
>Who cares if you can squeeze a few extra milliseconds out of an algorithm,
A lot of problems are solvable in principle but not in practice because they take too long, and time is money. The Fugaku Supercomputer in Japan, the fastest in the world, cost one billion dollars, so I figure that if you could make its operating system run a little more efficiently, so that the machine was just 10% faster, you could sell that program for about $100 million.  

The $1B price tag isn't because of the OS, which will only use a tiny fraction of the available computing power.  Anyway, marginally improving the performance of supercomputers doesn't get us closer to the singularity, but an AI that can teach itself to operate in ever widening domains just might.

Terren
 
John K Clark    See what's on my new list at  Extropolis

competition_level_code_generation_with_alphacode(2).pdf

John Clark

unread,
Feb 7, 2022, 4:24:51 PM2/7/22
to 'Brent Meeker' via Everything List
On Mon, Feb 7, 2022 at 2:34 PM Terren Suydam <terren...@gmail.com> wrote:

>> Terren, we both know that's not the way AlphaCode works, it's a bit more complicated than that.
 
   >That is how it works. All I left out is the part about how it generates the guesses

Besides that, Mrs. Lincoln, how did you like the play?  
 
> I predict the current AlphaCode strategy will never lead to putting engineers out of work, or self-improvement of the type that would lead to the singularity. This is not like the AlphaZero strategy in which it teaches itself the game. If they come out with a new code-writing AI that teaches itself to code, then I will happily change my tune.

When you learned how to code did you have to reinvent all the programming languages and techniques and do it all on your own with no help from teachers or friends or books or fellow coders? Did you have to rediscover the wheel?

A lot of problems are solvable in principle but not in practice because they take too long, and time is money. The Fugaku Supercomputer in Japan, the fastest in the world, cost one billion dollars, so I figure that if you could make its operating system run a little more efficiently, so that the machine was just 10% faster, you could sell that program for about $100 million.  

> The $1B price tag isn't because of the OS, which will only use a tiny fraction of the available computing power. 
 
The Fugaku Supercomputer has 7,630,848 cores and, like all massively parallel computers, most of the time most of those cores are not doing anything because they're waiting to receive some bit of information they need before they can start calculating; there's a big difference between the peak performance of a supercomputer and the actual performance it demonstrates when solving most problems. Any OS that can assign resources more efficiently so those inactive periods are reduced, even slightly, would have a big effect on the overall performance of the machine in solving real-world problems, not running FLOP-measuring benchmark programs.
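
Back-of-the-envelope, with illustrative numbers rather than Fugaku's measured utilization:

# Illustrative arithmetic only: when cores sit idle most of the time, a
# tiny scheduling improvement is worth a lot of delivered throughput.

PEAK_FLOPS = 5e17          # order of magnitude of Fugaku's peak (~500 PFLOPS)

def delivered(busy_fraction):
    # Real-world throughput = peak x the fraction of time cores do work.
    return PEAK_FLOPS * busy_fraction

base = delivered(0.10)     # suppose cores are busy only 10% of the time
tuned = delivered(0.11)    # an OS that trims waiting by one percentage point

print(f"base:  {base:.2e} FLOPS")
print(f"tuned: {tuned:.2e} FLOPS  ({tuned / base - 1:.0%} more real work)")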

John K Clark    See what's on my new list at  Extropolis


Terren Suydam

unread,
Feb 7, 2022, 5:12:34 PM2/7/22
to Everything List
On Mon, Feb 7, 2022 at 4:24 PM John Clark <johnk...@gmail.com> wrote:
On Mon, Feb 7, 2022 at 2:34 PM Terren Suydam <terren...@gmail.com> wrote:

>> Terren, we both know that's not the way AlphaCode works, it's a bit more complicated than that.
 
   >That is how it works. All I left out is the part about how it generates the guesses

Besides that, Mrs. Lincoln, how did you like the play?  

LOL good one. My point is that it must generate millions of guesses just to get a handful that actually work. And I'll readily admit that the fact that it only takes a million (or whatever) tries to get something that works is actually damned impressive, and makes AlphaCode worthy of the attention it's getting.

We can argue about how much improvement in the guesswork is possible - you might argue that in the future it will only need 1000 guesses, or fewer. My argument is that you won't get there without a fundamentally different strategy.

 
> I predict the current AlphaCode strategy will never lead to putting engineers out of work, or self-improvement of the type that would lead to the singularity. This is not like the AlphaZero strategy in which it teaches itself the game. If they come out with a new code-writing AI that teaches itself to code, then I will happily change my tune.

When you learned how to code did you have to reinvent all the programming languages and techniques and do it all on your own with no help from teachers or friends or books or fellow coders? Did you have to rediscover the wheel?

Did AlphaZero get any help from humans?

Terren

John K Clark    See what's on my new list at  Extropolis



John Clark

unread,
Feb 7, 2022, 5:25:01 PM2/7/22
to 'Brent Meeker' via Everything List
On Mon, Feb 7, 2022 at 5:12 PM Terren Suydam <terren...@gmail.com> wrote:

>> When you learned how to code did you have to reinvent all the programming languages and techniques and do it all on your own with no help from teachers or friends or books or fellow coders? Did you have to rediscover the wheel?

> Did AlphaZero get any help from humans?

No, but then writing good code is fundamentally more difficult than playing good chess. The rules of chess can be learned in five minutes, learning the rules for writing computer code would take considerably longer.
 
John K Clark    See what's on my new list at  Extropolis

Terren Suydam

unread,
Feb 7, 2022, 6:34:12 PM2/7/22
to Everything List
On Mon, Feb 7, 2022 at 5:25 PM John Clark <johnk...@gmail.com> wrote:
On Mon, Feb 7, 2022 at 5:12 PM Terren Suydam <terren...@gmail.com> wrote:

>> When you learned how to code did you have to reinvent all the programming languages and techniques and do it all on your own with no help from teachers or friends or books or fellow coders? Did you have to rediscover the wheel?

> Did AlphaZero get any help from humans?

No, but then writing good code is fundamentally more difficult than playing good chess. The rules of chess can be learned in five minutes, learning the rules for writing computer code would take considerably longer.

That's exactly my point. Chess represents a narrow enough domain, with a limited enough set of operations, that it is possible for an AI to teach itself, at least since AlphaZero anyway (and this itself was a huge achievement).

The problem with the real world of human enterprise (i.e. the domain in which talk of replacing human programmers is relevant) is that AIs currently cannot even be taught what the rules are, much less teach themselves to improve within the constraints of those rules. One day that will change, but we're not there yet. I say we're not even close.

Terren
 
John K Clark    See what's on my new list at  Extropolis


Brent Meeker

unread,
Feb 7, 2022, 7:39:58 PM2/7/22
to everyth...@googlegroups.com


On 2/7/2022 3:33 PM, Terren Suydam wrote:


On Mon, Feb 7, 2022 at 5:25 PM John Clark <johnk...@gmail.com> wrote:
On Mon, Feb 7, 2022 at 5:12 PM Terren Suydam <terren...@gmail.com> wrote:

>> When you learned how to code did you have to reinvent all the programming languages and techniques and do it all on your own with no help from teachers or friends or books or fellow coders? Did you have to rediscover the wheel?

> Did AlphaZero get any help from humans?

No, but then writing good code is fundamentally more difficult than playing good chess. The rules of chess can be learned in five minutes, learning the rules for writing computer code would take considerably longer.

That's exactly my point. Chess represents a narrow enough domain, with a limited enough set of operations, that it is possible for an AI to teach itself, at least since AlphaZero anyway (and this itself was a huge achievement).

The problem with the real world of human enterprise (i.e. the domain in which talk of replacing human programmers is relevant) is that AIs currently cannot even be taught what the rules are, much less teach themselves to improve within the constraints of those rules. One day that will change, but we're not there yet. I say we're not even close.

The question is how much of human intelligence is inherent/genetic, having evolved over half a million years or longer (we didn't start from zero), vs how much a human programmer learns by example/experience/etc.  I don't think the latter is so great that an AI can't match it pretty quickly, if not now.  The former, "hard-wired" part hasn't really been tackled by neural network AIs.  In animals and humans it evolved over a long time, and it was shaped by the environment and the sensors available.  From the machine learning standpoint it's like making a robot with sensors and actuators and neural nets, and then programming in its goal: "Go forth and multiply."  Its randomized neural nets are short a few million years of training by nature.  But given the electronic vs. wet speed difference, plus some starting point better than just randomized, I expect it will get there in a decade or less.

Brent


Terren
 
John K Clark    See what's on my new list at  Extropolis

John Clark

unread,
Feb 8, 2022, 6:58:32 AM2/8/22
to 'Brent Meeker' via Everything List
On Mon, Feb 7, 2022 at 6:34 PM Terren Suydam <terren...@gmail.com> wrote:

> The problem with the real world of human enterprise (i.e. the domain in which talk of replacing human programmers is relevant) is that AIs currently cannot even be taught what the rules are

I don't know what you mean by that, because silicon-based AlphaCode must know something about code writing: the very first time it was released into the wild, despite the difficulties involved in becoming a good program writer that you correctly point out, it managed to write better code than half the human programmers who used older meat-based technology. And as time goes by AlphaCode will get better, but humans will not get smarter.

John K Clark    See what's on my new list at  Extropolis

Terren Suydam

unread,
Feb 8, 2022, 9:23:52 AM2/8/22
to Everything List
I was talking about the rules of human enterprise, not coding.

Terren
 
John K Clark    See what's on my new list at  Extropolis
