--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/bfbf9c0d-4df0-4ed7-9e58-ab8a68ded0e6n%40googlegroups.com.
> the code generated by the AI still needs to be understandable

Once AI starts to get really smart that's never going to happen. Even today nobody knows how a neural network like AlphaZero works or understands the reasoning behind it making a particular move, but that doesn't matter: understandable or not, AlphaZero can still play chess better than anybody alive, and if humans don't understand how that can be then that's just too bad for them.

> The hard part is understanding the problem your code is supposed to solve, understanding the tradeoffs between different approaches, and being able to negotiate with stakeholders about what the best approach is.

You seem to be assuming that the "stakeholders", those who intend to use the code once it is completed, will always be humans, and I think that is an entirely unwarranted assumption. The stakeholders will certainly have brains, but they may be hard and dry rather than wet and squishy.

> It'll be a very long time before we're handing that domain off to an AI.

I think you're whistling past the graveyard.
On Thu, Feb 3, 2022 at 5:23 PM Terren Suydam <terren...@gmail.com> wrote:

> AlphaCode can potentially improve its code, but to what end? What problem is it trying to solve? How does it know?

I don't understand your questions.
> Imagine an AI tasked with making as much money in the stock market as it can. Pretty clear signals for winning and losing (like chess). And perhaps there's some easy wins there for an AI that can take advantage of e.g. arbitrage (this exists already I believe) or other patterns that are not exploitable by human brains. But it seems to me that actual comprehension of the world of investment is key. Knowing how earnings reports will affect the stock price of a company, relative to human expectations about that earnings report.

I agree, but if humans, or at least some extraordinary humans like Warren Buffett, can understand the stock market, or at least understand it well enough to pick stocks better than chance, then I see absolutely no reason why an AI couldn't do the same thing, and do it better.

> You have to import a universe of knowledge of the human domain to be effective

Yeah, when you're born you don't know anything, but over time you gain knowledge from the environment.

> a universe we take for granted since we've acquired it over decades of training.

Yeah, with a human that process takes many decades, but even today computers can process many times more information than a human can; not surprising when you consider that the signals inside a human brain travel only about 100 miles an hour while the signals in a computer travel close to the speed of light, 186,000 miles a second.
> And I'm not talking about mere information, but models that can be simulated in what-if scenarios, true understanding. You need real AGI.

I can't think of a more flagrant example of moving goalposts. I clearly remember when nearly everybody said it would require "real understanding" for a computer to play chess at the grandmaster level, never mind the superhuman level, but nobody says that anymore. Much more recently people said image recognition would require "real intelligence", but few say that anymore; now they say coding requires "real intelligence". "Real AGI" is a machine that can do what a computer cannot do, YET.
> I think the problem of AGI is much harder than most assume.

As I've mentioned before, the entire human genome is only 750 megabytes, the new Mac operating system is about 20 times that size, the genome contains instructions to build an entire human body, not just a brain, and the genome is loaded with massive redundancy; so whatever algorithm the brain uses to extract information from the environment, there is simply no way it can be all that complicated.
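The 750 MB figure can be sanity-checked with a back-of-the-envelope calculation, assuming roughly 3.1 billion base pairs at 2 bits each (the exact base-pair count varies slightly by source):

```python
# Back-of-the-envelope: raw information content of the human genome.
# Each base pair is one of 4 bases (A, C, G, T), i.e. log2(4) = 2 bits.
base_pairs = 3.1e9
bits_per_base = 2
total_bits = base_pairs * bits_per_base
megabytes = total_bits / 8 / 1e6
print(f"{megabytes:.0f} MB")  # roughly 775 MB, in line with the ~750 MB figure
```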
> To get to the point where machines are the stakeholders, we're already past the singularity.

Machines move so fast that at breakfast the singularity could look to a human like it's a very long way off, but by lunchtime the singularity could be ancient history.
I think "able to grasp the problem domain we're talking about" is giving us way too much credit. Every study of stock traders I've seen says that they do no better than simple rules of thumb, like buying index funds.
Brent
On Thu, Feb 3, 2022 at 7:20 PM Terren Suydam <terren...@gmail.com> wrote:

>>> AlphaCode can potentially improve its code, but to what end? What problem is it trying to solve? How does it know?

>> I don't understand your questions

> What part is confusing?

I'll make you a deal: I'll tell you "what problem it is trying to solve" if you first tell me how long a piece of string is. And if you don't wanna do that, just rephrase the question more clearly.
>> Yeah, with a human that process takes many decades, but even today computers can process many times more information than a human can; not surprising when you consider that the signals inside a human brain travel only about 100 miles an hour while the signals in a computer travel close to the speed of light, 186,000 miles a second.

> Much of our learning takes place via interactions with other humans, and those cannot be sped up.

Sure it can: an AI could have a detailed intellectual conversation with 1,000 people at the same time, or a million, or a billion.
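For what it's worth, the speed gap quoted above (100 mph neural signals vs. 186,000 miles per second in a wire) works out to a factor of several million:

```python
# Ratio of the two signal speeds quoted in the post.
neural_mph = 100                     # nerve impulses, ~100 miles per hour
light_miles_per_sec = 186_000        # electrical signals, near light speed

neural_miles_per_sec = neural_mph / 3600   # convert mph to miles/second
ratio = light_miles_per_sec / neural_miles_per_sec
print(f"{ratio:.1e}")  # ~6.7e+06: millions of times faster
```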
> I'm not talking about facts and information,

You may not be talking about facts and information but I sure as hell am, because information is as close as you can get to the traditional idea of the soul without entering the realm of religion or some other form of idiocy.

> but about theories of mind, understanding human motivations, forming and testing hypotheses about how to get goals met by interacting with other humans, and other animals for that matter.

If humans can do it then an AI can do it too, because knowledge is just highly computed information, and wisdom is just highly computed knowledge.
> And I'm not talking about mere information,

Mere information? Mere?!
> but models that can be simulated in what-if scenarios, true understanding. You need real AGI.

You need AI; AGI is just loquacious technobabble used to make things sound more inscrutable.
> We probably need to define what understanding/comprehension actually means if we're going to take this much further.

I don't think that would help one bit, because fundamentally definitions are not important in language; examples are. After all, examples are where lexicographers get the knowledge to write the definitions in their books. So I'd say that "understanding" is the thing that Einstein had about physics to a greater extent than anybody else of his generation.
> Regardless, to operate in the free-form world of humans, an AI needs to be able to understand and react to a problem space that is constantly changing. Changing rules (implicit and explicit), players, goals, dynamics, etc.

Well sure, but AIs have been able to do that for years, since the 1950s.
> Is that possible to do without real understanding?

No. If I can answer some questions and perform some tasks in a certain area then I could be confident in saying I have some "real understanding" of that area of knowledge, and if you can answer more questions and perform more tasks in that area than I can, then I would say you have an even greater understanding than I do; and I don't care if your brain is wet and squishy or dry and hard.
>> As I've mentioned before, the entire human genome is only 750 megabytes, the new Mac operating system is about 20 times that size, and the genome contains instructions to build an entire human body not just a brain, and the genome is loaded with massive redundancy; so whatever the algorithm is that the brain uses to extract information from the environment there is simply no way it can be all that complicated.

> The thing that makes intelligence intelligence is not simply extracting information from the environment.

How do you figure that? If human intelligence doesn't come from the 750 MB in our genome and it doesn't come from the environment, then where does this secret sauce come from? From an invisible man in the sky? If so, why does He only give it to brains that are wet and squishy?
>> Machines move so fast that at breakfast the singularity could look to a human like it's a very long way off, but by lunchtime the singularity could be ancient history.

> Do you think the singularity can occur with an AI that doesn't have real understanding?

Of course not! I have no objection to the term "real understanding"; I only object when the term is used in a silly way, such as when my accomplishing something in a certain field demonstrates "real understanding", but an AI doing things in that same field, better and faster than I can, demonstrates nothing but a mindless reflex because its brain is dry and hard and not wet and squishy.
> The point I'm making is that intelligence, operationally speaking, is about far more than simply extracting information from the environment.
> It's about making models of the world that can be used for prediction, explanation, making plans, coordinating, etc.
> Information extraction is necessary but not sufficient for intelligence.
> To the larger point, where I think we disagree is how easy it is for an AI to achieve real understanding of the real world of human interaction.
On Fri, Feb 4, 2022 at 12:36 PM Terren Suydam <terren...@gmail.com> wrote:

>> I'll make you a deal: I'll tell you "what problem it is trying to solve" if you first tell me how long a piece of string is. And if you don't wanna do that, just rephrase the question more clearly.

> lol ok. The worry you're articulating is that AlphaCode will turn its coding abilities on itself and improve its own code, and that this could lead to the singularity. First, it must be said that AlphaCode is a tool with no agency of its own.

We're talking about fundamentals here, and in that context I don't know what you mean by "agency". Any information processing mechanism can be reduced logically to a Turing Machine; some machines will stop and produce an answer and some will never stop, some will produce a correct answer and some will not, and in general there's no way to know what a Turing Machine is going to do: you just have to watch it and see, and you might be waiting forever for it to stop and produce an answer.

> Left to its own devices, it will do... nothing.

There's no way you could know that. Even if you knew the exact state a huge neural net like AlphaZero was in, which is very unlikely, there is no way you could predict which state it would evolve into unless you could play chess as well as it can, which you cannot. In general the only way to know what a large neural network (which can always be logically reduced to a Turing Machine) will do is to just watch it and see; there is no shortcut. For a long time it might look like it's doing nothing and then suddenly start doing something, and that something might be something you don't like.
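The "just watch it and see" point has a concrete face even for tiny programs. The Collatz iteration below is a toy illustration (not a proof of anything): nobody has proved that this loop terminates for every starting value, so the only known way to find out what it does for a given input is to run it:

```python
def collatz_steps(n):
    """Iterate the Collatz map (halve if even, else 3n+1) until reaching 1.
    No one has proved this loop halts for every starting n, so watching
    it run is the only known way to learn its behavior for a given n."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(6))   # 8
print(collatz_steps(27))  # 111 -- climbs past 9000 before falling back to 1
```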
> But let's say the DeepMind team wanted to improve AlphaCode by applying AlphaCode to itself. My question to you is, what is the "toy problem" they would feed to AlphaCode? How do you define that problem?

"Look at this code for a subprogram and make something that does the same thing but is smaller, or runs faster, or both." And that's not a toy problem, that's a real problem.
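A minimal sketch of what "same thing but smaller or faster" can look like (a toy example, not AlphaCode's actual workflow): replace an O(n) loop with a closed-form formula and accept the rewrite only if the outputs match on a batch of inputs.

```python
def sum_naive(n):
    # Original subprogram: O(n) loop summing 1..n.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_fast(n):
    # Candidate replacement: same result in O(1) via Gauss's formula.
    return n * (n + 1) // 2

# The optimizer's acceptance test: identical output on a batch of inputs.
assert all(sum_naive(n) == sum_fast(n) for n in range(200))
```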
>> an AI could have a detailed intellectual conversation with 1000 people at the same time, or a million, or a billion.

> Sure, but those interactions still take time, perhaps days or even months. And you're assuming that many people will want to have conversations with an AI.

Yes, I am assuming that, and I think it's a very reasonable assumption. If an intelligent AI thinks it could learn important stuff from talking to people, it can simply turn up its charm variable so that people want to talk to her (or him). I suggest you take a look at the movie "Her", which covers the exact theme I'm talking about: a charismatic and brilliant AI having interesting and intimate conversations with thousands of people at exactly the same time. I think it's one of the best science-fiction movies ever made, even though some say it has a depressing ending. I disagree; I didn't find it depressing at all.

> Have you ever tried listening to a 6 year old try and tell a story?

Have you ever listened to a genius tell a story?
>> If humans can do it then an AI can do it too because knowledge is just highly computed information, and wisdom is just highly computed knowledge.

> Sure, I can hand-wave things away too. "Highly computed" means what exactly?

It means exactly that a high number of FLOPS is necessary but not sufficient.

> I can reverse every word in this post. If I did that a million times in a row it would be "highly computed" but it wouldn't result in knowledge, much less wisdom.

Obviously the computation must be done intelligently. I've had debates of this sort before, and at this point it is traditional for my opponent to demand that I define "intelligently"; I will be happy to do so if you first define "define", and then define "define 'define'", and then...
> And I'm not talking about mere information,

>> Mere information? Mere?!

> As opposed to knowledge, wisdom, the ability to model aspects of the world and simulate them, the ability to explain things, etc.

How do you expect to be able to do any of this without processing information?!
>> You need AI; AGI is just loquacious technobabble used to make things sound more inscrutable.

> Doesn't seem all that loquacious to me. AGI just adds the word "general",

I think if Steven Spielberg's movie had been called AGI instead of AI, some people today would no longer like the acronym AGI, because too many people would know exactly what it means and thus it would lack that certain aura of erudition and mystery that they crave. Everybody knows what AI means, but only a small select cognoscenti know the meaning of AGI. A classic case of jargon creep.
> to highlight the fact that today's AI isn't able to apply its intelligence to anything but narrow domains.

Even human geniuses have rather narrow domains; Einstein loved the violin but was only a mediocre player.
>>> We probably need to define what understanding/comprehension actually means if we're going to take this much further.

>> I don't think that would help one bit because fundamentally definitions are not important in language; examples are. After all, examples are where lexicographers get the knowledge to write the definitions in their books. So I'd say that "understanding" is the thing that Einstein had about physics to a greater extent than anybody else of his generation.

> Sure, that works for me. Einstein was able to predict and explain things that nobody before him was able to. Prediction and explanation are hallmarks of understanding.

I agree. And the only way we can tell if somebody else has a greater understanding than we do is to see if they can answer questions or do things that we cannot.

>>> to operate in the free-form world of humans, an AI needs to be able to understand and react to a problem space that is constantly changing. Changing rules (implicit and explicit), players, goals, dynamics, etc.

>> Well sure, but AIs have been able to do that for years, since the 1950s.

> Care to give an example of AI in the 1950s that could do that?

A tic-tac-toe board is constantly changing, but a computer in the 1950s could play that game perfectly. And a computer in the late 1950s or early 60s could play checkers well enough to beat most children and adult novice players.
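Perfect tic-tac-toe really does need nothing beyond exhaustive search; a minimal minimax sketch (illustrative, not any 1950s program's actual code):

```python
def winner(b):
    """Return 'X' or 'O' if that player has three in a row on board b
    (a 9-character string of 'X', 'O', ' '), else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Exhaustive game-tree search: +1 if X can force a win, -1 if O can,
    0 if best play is a draw. The full tree is small enough to solve."""
    w = winner(b)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in b:
        return 0
    scores = [minimax(b[:i] + player + b[i+1:], 'O' if player == 'X' else 'X')
              for i in range(9) if b[i] == ' ']
    return max(scores) if player == 'X' else min(scores)

# From the empty board, perfect play by both sides is a draw:
print(minimax(' ' * 9, 'X'))  # 0
```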
I think the larger point is that for a superintelligent AI, humans and their interactions will not be at the top of its priority list; a superhuman AI will have bigger fish to fry than us. And even if the singularity doesn't happen for 1,000 years (and I can't imagine why it would take that long), in 999 years it will still seem like it's a long way off, yet more progress will be made in that last year than in the previous 999 combined. So whenever the singularity occurs, it will come as a big surprise to most.
> From the point of view of an Earth-based observer, planets are just more points of light in the night sky, only moving strangely. Such an observer has no way to tell whether, for example, Mars is really different, or even a planet, or whether it is perhaps some star looking for a place to stick and stop forever. Only with a telescope can one see that Mars is a disc, and that the disc changes with time, but periodically, etc. An objective observer has no way to say that Mars is a planet based on naked-eye observations alone.
On Fri, Feb 4, 2022 at 5:34 PM Terren Suydam <terren...@gmail.com> wrote:

>> Look at this code for a subprogram and make something that does the same thing but is smaller, or runs faster, or both. And that's not a toy problem, that's a real problem.

> "does the same thing" is problematic for a couple reasons. The first is that AlphaCode doesn't know how to read code,

Huh? We already know AlphaCode can write code; how can something know how to write but not read? It's easier to read a novel than to write one.
> The other problem is that with that problem description, it won't evolve except in the very narrow sense of improving its efficiency.

It seems to me the ability to write code that is smaller and faster than anybody else's is not "very narrow"; a human could make a very good living indeed from that talent. And if I were the guy who signed his enormous paycheck and somebody offered me a program that would do the same thing he did, I'd jump at it.
> The kind of problem description that might actually lead to a singularity is something like "Look at this code and make something that can solve ever more complex problem descriptions". But my hunch there is that that problem description is too complex for it to recursively self-improve towards.

Just adding more input variables would be less complex than figuring out how to make a program smaller and faster.
>> I think if Steven Spielberg's movie had been called AGI instead of AI, some people today would no longer like the acronym AGI, because too many people would know exactly what it means and thus it would lack that certain aura of erudition and mystery that they crave. Everybody knows what AI means, but only a small select cognoscenti know the meaning of AGI. A classic case of jargon creep.

> Do you really expect a discipline as technical as AI to not use jargon?

When totally new concepts come up, as they occasionally do in science, jargon is necessary because no previously existing word or short phrase describes them. But that is not the primary generator of jargon, and it is not the generator in this case, because a very short word for the idea already exists: everybody knows what AI means, but very few know that AGI means the same thing. And some see that as AGI's great virtue; it's mysterious and sounds brainy.

> You use physics jargon all the time.

I do try to keep that to a minimum; perhaps I should try harder.
On Fri, Feb 4, 2022 at 6:18 PM John Clark <johnk...@gmail.com> wrote:
On Fri, Feb 4, 2022 at 5:34 PM Terren Suydam <terren...@gmail.com> wrote:
>> Look at this code for a subprogram and make something that does the same thing but is smaller or runs faster or both. And that's not a toy problem, that's a real problem.
> "does the same thing" is problematic for a couple reasons. The first is that AlphaCode doesn't know how to read code,
> Huh? We already know AlphaCode can write code, how can something know how to write but not read? It's easier to read a novel than write a novel.
This is one case where your intuitions fail. I dug a little deeper into how AlphaCode works. It generates millions of candidate solutions using a model trained on GitHub code. It then filters out 99% of those candidate solutions by running them against test cases provided in the problem description and removing the ones that fail. It then uses a different technique to whittle the candidate solutions down from several thousand to just ten. Nobody, neither the AI nor the humans running AlphaCode, knows if the 10 solutions picked are correct.
AlphaCode is not capable of reading code. It's a clever version of monkeys typing on typewriters until they bang out a Shakespeare play. Still counts as AI, but cannot be said to understand code.
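The generate-and-filter pipeline described above can be caricatured in a few lines. This is a toy stand-in, not DeepMind's code: the "generator" here just picks random one-liners, where AlphaCode's real generator is a large trained model.

```python
import random

def passes_tests(candidate, tests):
    """Keep a candidate only if it passes every provided example test case."""
    try:
        return all(candidate(x) == expected for x, expected in tests)
    except Exception:
        return False

# Toy stand-in for the trained model: emit random one-line "programs".
OPS = [lambda x: x + 1, lambda x: x * 2, lambda x: x * x, lambda x: x - 1]

def generate_candidates(k):
    return [random.choice(OPS) for _ in range(k)]

# Example tests from the problem description: "double the input".
tests = [(1, 2), (3, 6), (10, 20)]
survivors = [c for c in generate_candidates(10_000) if passes_tests(c, tests)]
# Some survivors almost surely remain, and they pass the examples -- but
# passing the examples still doesn't prove a program is correct in general.
print(len(survivors))
```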
On Sat, Feb 5, 2022 at 3:51 PM Terren Suydam <terren...@gmail.com> wrote:

> I dug a little deeper into how AlphaCode works. It generates millions of candidate solutions using a model trained on github code. It then filters out 99% of those candidate solutions by running them against test cases provided in the problem description and removing the ones that fail. It then uses a different technique to whittle down the candidate solutions from several thousand to just ten. [...] AlphaCode is not capable of reading code.

How on earth can it filter out 99% of the candidates for being bad code if it cannot read code? Closer to home, how could somebody on this list tell the difference between a post they liked and a post they didn't like if they couldn't read English?
> Nobody, neither the AI nor the humans running AlphaCode, know if the 10 solutions picked are correct.

As Alan Turing said, "If a machine is expected to be infallible, it cannot also be intelligent."

> It's a clever version of monkeys typing on typewriters until they bang out a Shakespeare play. Still counts as AI,

A clever version indeed! In fact, I would say that William Shakespeare himself was such a version.
> Still counts as AI, but cannot be said to understand code.

I am a bit confused by your use of one word; you seem to be giving it a very unconventional meaning. If you, being a human, "understand" code, but the code you write is inferior to the code written by an AI that doesn't "understand" code, then I fail to see why any human or any machine would want an "understanding" of anything.
>> Just adding more input variables would be less complex than figuring out how to make a program smaller and faster.

> Think about it this way. There's diminishing returns on the strategy to make the program smaller and faster, but potentially unlimited returns on being able to respond to ever greater complexity in the problem description.

You're talking about what would be more useful; I was talking about what would be more complex. In general, finding the smallest and fastest program that can accomplish a given task is infinitely complex; that is to say, in general it's impossible to find the smallest program and prove it's the smallest program. Code optimization is very far from a trivial problem.
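The exhaustive flavor of that search shows up even in a toy setting. The sketch below (the three-instruction language, `OPS`, `run`, and `shortest_program` are all invented for illustration) brute-forces the shortest instruction sequence matching an input/output spec; it only terminates because the language and the length bound are tiny, which is exactly why the general problem is intractable.

```python
from itertools import product

# A tiny straight-line language: each op maps the accumulator to a new value.
OPS = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "neg": lambda x: -x,
}

def run(program, x):
    """Execute a sequence of op names on input x."""
    for op in program:
        x = OPS[op](x)
    return x

def shortest_program(spec_pairs, max_len=5):
    """Exhaustively search, shortest first, for an op sequence matching
    the (input, output) spec. Proving minimality requires checking every
    shorter program; with real instruction sets the search space explodes."""
    for n in range(max_len + 1):
        for program in product(OPS, repeat=n):
            if all(run(list(program), x) == y for x, y in spec_pairs):
                return list(program)
    return None

# Spec: compute 2*x + 2 on a few sample inputs.
prog = shortest_program([(0, 2), (1, 4), (5, 12)])
```

Even here, length n costs 3**n trials, and the only "proof" that the answer is smallest is having exhausted every shorter length first.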
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAJPayv2o1%2BL7E%3Da8nNFpu4vEVmuSJMfOHUGeJBhUsw7GEfbytw%40mail.gmail.com.
> AlphaCode is not capable of reading code. It's a clever version of monkeys typing on typewriters until they bang out a Shakespeare play. Still counts as AI, but cannot be said to understand code.
What does it mean "to read code"? It executes candidate code, and apparently it was trained on code from github, so it must read code well enough to execute it. What more does it need to read? You say it cannot be said to understand code. Can you specify what would show that it understands the code?
I think you mean it tells you a story about how this part does that and this other part does something else, etc. But that's just catering to your weak human brain, which can't just "see" that the code solves the problem. The problem is that there is no species-independent meaning of "understand" except "make it work". AlphaCode doesn't understand code the way you do, because it doesn't think the way you do and doesn't have the context you do.
Brent
>> You're talking about what would be more useful, I was talking about what would be more complex. In general finding the smallest and fastest program that can accomplish a given task is infinitely complex, that is to say in general it's impossible to find the smallest program and prove it's the smallest program. Code optimization is very far from a trivial problem.

> I'm surprised you're focusing on the less useful direction to go in. If anything, your thinking tends to be very pragmatic. Who cares if you can squeeze a few extra milliseconds out of an algorithm,
On Sat, Feb 5, 2022 at 7:15 PM Terren Suydam <terren...@gmail.com> wrote:

> Let's take a more accessible analogy. Let's say the problem description is: "Here's a locked door. Devise a key to unlock the door." A simplified analog to what AlphaCode does is the following:
> - generate millions of different keys of various shapes and sizes.
> - for each key, try to unlock the door with it
> - if it doesn't work, toss it
A better analogy would be a lockpicker who breaks a complex task down into a series of much simpler actions. Insert two lockpicks into the keyhole, one to provide torque and the other to manipulate the pins. Find the first pin and see if it produces any resistance; if it doesn't, ignore it for the time being and go on to the second pin; if it shows resistance, increase the pressure until you hear a click or feel a slight rotation of the lock. Continue this procedure until you get to the last pin (most locks have 6 pins and few have more than 8), then go back to pin one and go down the line again. After three or four passes down the line you should be able to fully rotate the lock and open the door.
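The contrast between the two analogies, guessing whole keys versus setting one pin at a time with feedback, can be made concrete in a toy simulation. Everything here is illustrative: `pick_lock` and its per-pin "click" feedback are invented, and real single-pin picking is far messier than this.

```python
def pick_lock(pins):
    """Set one pin at a time using feedback, as in the lockpicking
    procedure described above (simplified so that every pin gives
    feedback immediately, making a single pass suffice)."""
    lifts = [0] * len(pins)
    probes = 0
    for i in range(len(pins)):
        # Raise this pin one step at a time until it "clicks"
        # at its secret height.
        while lifts[i] != pins[i]:
            lifts[i] += 1
            probes += 1
    return lifts, probes

# A hypothetical 6-pin lock with pin heights between 1 and 5.
secret = [3, 1, 4, 1, 5, 2]
opened, probes = pick_lock(secret)
```

With 5 possible heights and 6 pins, whole-key guessing could need up to 5**6 = 15625 keys, while the incremental picker needs at most 5*6 = 30 probes, because feedback lets it solve each pin independently instead of the whole key at once.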
> If you think AlphaCode and Shakespeare have anything in common, then I don't think your assertions about AI are worth much.

I see no evidence of a secret sauce that only human geniuses have; the difference between a Turing Machine named William Shakespeare and a Turing Machine named AlphaCode is a difference of degree, not of kind.
> If you think a brute-force "generate a million guesses until one works" ...

Terren, we both know that's not the way AlphaCode works; it's a bit more complicated than that.
> ... strategy has the same understanding as an algorithm that employs a detailed model of the domain and uses that model to generate a reasoned solution, regardless of the results, then it's you that is employing the unconventional meaning of "understand".

You can insult that poor program all you want, but it doesn't change the fact that the very first time it was tried it managed to write better code in that competition than half the humans who make their living writing code. And do you seriously doubt that AlphaCode will get better in the next few years, a lot better? I humbly suggest you stop worrying about how a computer can do what it does and start looking at what it is actually able to do; if I can come up with the correct answer, then from your point of view it shouldn't matter how I arrived at it, because it's still the correct answer.
> In the real world, you usually don't get to try something a million times until something works.

I think that's probably untrue. I have a hunch that when you're trying to solve a problem your brain proposes many trillions of different neural patterns, and nearly all of them are rejected because they don't work very well. Most of the time you are not consciously aware of any of them, because you have only a finite memory capacity and there is little evolutionary value in remembering patterns that don't work, although some on the borderline between acceptance and rejection may leave a vague imprint. But a brain is fallible, and sometimes a neural pattern that would have solved the problem is rejected; perhaps smart people have a more reliable rejection mechanism than stupid people, or are better able to retrieve that vague imprint and reconsider it. Einstein's notebooks show that about a year before he finished General Relativity, and after he had been working on it for nearly a decade, he came up with the correct solution but then for some reason rejected it; after another year of fruitless work he remembered it and decided to take a second look.
> Who cares if you can squeeze a few extra milliseconds out of an algorithm,
A lot of problems are solvable in principle but not in practice because they take too long, and time is money. The Fugaku Supercomputer in Japan, the fastest in the world, cost one billion dollars, so I figure if you could make its operating system run a little more efficiently so the machine was just 10% faster you could sell that program for about $100 million.
>> A lot of problems are solvable in principle but not in practice because they take too long, and time is money. The Fugaku Supercomputer in Japan, the fastest in the world, cost one billion dollars, so I figure if you could make its operating system run a little more efficiently so the machine was just 10% faster you could sell that program for about $100 million.

> The $1B price tag isn't because of the OS, which will only use a tiny fraction of the available computing power.
On Mon, Feb 7, 2022 at 2:34 PM Terren Suydam <terren...@gmail.com> wrote:

>> Terren, we both know that's not the way AlphaCode works, it's a bit more complicated than that.

> That is how it works. All I left out is the part about how it generates the guesses.

Besides that, Mrs. Lincoln, how did you like the play?
> I predict the current AlphaCode strategy will never lead to putting engineers out of work, or self-improvement of the type that would lead to the singularity. This is not like the AlphaZero strategy in which it teaches itself the game. If they come out with a new code-writing AI that teaches itself to code, then I will happily change my tune.

When you learned how to code, did you have to reinvent all the programming languages and techniques, and do it all on your own with no help from teachers or friends or books or fellow coders? Did you have to rediscover the wheel?
On Mon, Feb 7, 2022 at 5:12 PM Terren Suydam <terren...@gmail.com> wrote:

>> When you learned how to code did you have to reinvent all the programming languages and techniques and do it all on your own with no help from teachers or friends or books or fellow coders? Did you have to rediscover the wheel?

> Did AlphaZero get any help from humans?

No, but then writing good code is fundamentally more difficult than playing good chess. The rules of chess can be learned in five minutes; learning the rules for writing computer code would take considerably longer.
John K Clark    See what's on my new list at Extropolis
On Mon, Feb 7, 2022 at 5:25 PM John Clark <johnk...@gmail.com> wrote:
On Mon, Feb 7, 2022 at 5:12 PM Terren Suydam <terren...@gmail.com> wrote:
>> When you learned how to code did you have to reinvent all the programming languages and techniques and do it all on your own with no help from teachers or friends or books or fellow coders? Did you have to rediscover the wheel?
> Did AlphaZero get any help from humans?
> No, but then writing good code is fundamentally more difficult than playing good chess. The rules of chess can be learned in five minutes, learning the rules for writing computer code would take considerably longer.
That's exactly my point. Chess represents a narrow enough domain, with a limited enough set of operations, that makes it possible for an AI to teach itself, at least since AlphaZero anyway (and this itself was a huge achievement).
The problem with the real world of human enterprise (i.e. the domain in which talk of replacing human programmers is relevant) is that AIs currently cannot even be taught what the rules are, much less teach themselves to improve within the constraints of those rules. One day that will change, but we're not there yet. I say we're not even close.
--
Terren
> The problem with the real world of human enterprise (i.e. the domain in which talk of replacing human programmers is relevant) is that AIs currently cannot even be taught what the rules are