> A lot of the excitement around LLMs is due to confusing skill/competence (memory based) with the unsolved problem of intelligence,
> There is a difference between completing strings of words
> As there isn't a perfect test for intelligence, much less consensus on its definition,
> you can always brute force some LLM through huge compute and large, highly domain specific training data, to "solve" a set of problems;
> you might find the following interview with Chollet interesting
Francois Chollet - LLMs won’t lead to AGI - $1,000,000 Prize to find true solution
John, as you enjoyed that podcast with Aschenbrenner, you might find the following one with Chollet interesting. Imho you cannot scale past not having a more advanced approach to program synthesis (which nonetheless could be informed or guided by LLMs to deal with the combinatorial explosion of possible program synthesis).
> you can always brute force some LLM through huge compute and large, highly domain specific training data, to "solve" a set of problems;

I don't know what those quotation marks are supposed to mean, but if you are able to "solve" a set of problems, then the problems have been solved; the method of doing so is irrelevant. Are you sure you're not whistling past the graveyard?
LLMs are not AGI (yet), but it's hard to ignore that they're (sometimes astonishingly) competent at answering multi-modal questions across most, if not all, domains of human knowledge.
> Here's probably the best result but I'm not sure there's anything actually novel there. Despite that, it's still quite impressive, and to John's point, it's clearly an intelligent response, even if there are aspects of "cheating off of humans" in it.
A lot of the excitement around LLMs is due to confusing skill/competence (memory based) with the unsolved problem of intelligence, its most optimal/perfect test, etc. There is a difference between completing strings of words/prompts (relying on memorization, interpolation, and pattern recognition based on training data) and actually synthesizing a novel generalization through reasoning, or synthesizing the appropriate program on the fly.

As there isn't a perfect test for intelligence, much less consensus on its definition, you can always brute-force some LLM, through huge compute and large, highly domain-specific training data, to "solve" a set of problems, even highly complex ones. But as soon as there's novelty you'll have to keep doing that. Personally, that doesn't feel like intelligence yet. I'd want to see these abilities combined with the program-synthesis ability, without the need for ever vaster, more specific databases etc., to be more convinced that we're genuinely on the threshold.
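To make the "synthesizing the appropriate program on the fly" idea concrete, here is a hypothetical toy sketch of brute-force program synthesis (not any real system's code): enumerate every composition of a few primitive operations until one fits all the given input/output examples. The DSL, the function names, and the examples are all made up for illustration; the point is the combinatorial explosion, since the search space grows as 4^depth even for this four-primitive language.

```python
from itertools import product

# A tiny, hypothetical DSL of primitive functions (illustration only).
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Brute-force search: try every pipeline of primitives up to
    max_depth and return the first one consistent with all examples."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(program(i) == o for i, o in examples):
                return names  # the synthesized program, as a pipeline of ops
    return None  # nothing in the search space fits

# Find a program mapping 2 -> 9 and 3 -> 16, i.e. square(inc(x)).
print(synthesize([(2, 9), (3, 16)]))  # ('inc', 'square')
```

Even this toy shows why unguided enumeration cannot scale, and why one might want an LLM (or some other prior) to steer the search, as suggested above.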
John, as you enjoyed that podcast with Aschenbrenner, you might find the following one with Chollet interesting. Imho you cannot scale past not having a more advanced approach to program synthesis (which nonetheless could be informed or guided by LLMs to deal with the combinatorial explosion of possible program synthesis).

On Friday, June 14, 2024 at 7:28:50 PM UTC+2 John Clark wrote:

> Sabine Hossenfelder came out with a video attempting to discredit Leopold Aschenbrenner. She failed. I wrote this in the comment section of the video:
>
> "You claim that AI development will slow because we will run out of data, but synthetic data is already being used to train AIs, and it actually works! AlphaGo was able to go from knowing nothing about the most complicated board game in the world, called "GO", to being able to play it at a superhuman level in just a few hours by using synthetic data: it played games against itself. As for power, during the last decade the total power generation of the US has remained flat, but during that same decade the power generation of China has not; in just that same decade China constructed enough new power stations to equal the power generated by the entire US. So a radical increase in electrical generation capacity is possible; the only thing that's lacking is the will to do so. When it becomes obvious to everybody that the first country to develop a superintelligent computer will have the capability to rule the world, there will be a will to build those power generating facilities as fast as humanly possible. Perhaps they will use natural gas, perhaps they will use nuclear fission."
>
> John K Clark    See what's on my new list at Extropolis
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/1a991958-5828-4405-83b1-5c8a6671dad6n%40googlegroups.com.
On Sun, Jun 16, 2024, 10:26 PM PGC <multipl...@gmail.com> wrote:

> A lot of the excitement around LLMs is due to confusing skill/competence (memory based) with the unsolved problem of intelligence, its most optimal/perfect test etc. There is a difference between completing strings of words/prompts relying on memorization, interpolation, pattern recognition based on training data and actually synthesizing novel generalization through reasoning or synthesizing the appropriate program on the fly. As there isn't a perfect test for intelligence, much less consensus on its definition, you can always brute force some LLM through huge compute and large, highly domain specific training data, to "solve" a set of problems; even highly complex ones. But as soon as there's novelty you'll have to keep doing that. Personally, that doesn't feel like intelligence yet. I'd want to see these abilities combined with the program synthesis ability; without the need for ever vaster, more specific databases etc. to be more convinced that we're genuinely on the threshold.

I think there is no more to intelligence than pattern recognition and extrapolation (essentially, the same techniques required for improving compression). It is also the same thing science is concerned with: compressing observations of the real world into a small set of laws (patterns) which enable predictions. And prediction is the essence of intelligent action, as all goal-centered action requires predicting the probable outcomes of each of a set of possible behaviors, and then choosing the behavior with the highest expected reward.

I think this can explain why even a problem as seemingly basic as "word prediction" can (when mastered to a sufficient degree) break through into general intelligence. This is because any situation can be described in language, and being asked to predict next words requires understanding the underlying reality to a sufficient degree to accurately model the things those words describe. I confirmed this by describing an elaborate physical setup and asking GPT-4 to predict and explain what it thought would happen over the next hour. It did so perfectly, and also explained the consequences of various alterations I later proposed.

Since thousands, or perhaps millions, of patterns exist in the training corpus, language models can come to learn, recognize, and extrapolate all of those thousands or millions of patterns. This is what we think of as generality (a sufficiently large repertoire of pattern recognition that it appears general).

Jason
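The "word prediction as pattern extrapolation" idea can be illustrated with the crudest possible predictor, a hypothetical bigram model (nothing like a real LLM, and the corpus below is made up): it already exhibits the memorization and interpolation being discussed, and its failure on unseen words shows the limits of pure pattern lookup.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    model = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        model[w1][w2] += 1
    return model

def predict_next(model, word):
    """Return the most frequent continuation seen in training, or None."""
    if word not in model:
        return None  # pure memorization: no generalization to unseen words
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # 'cat' (followed 'the' twice, 'mat' only once)
print(predict_next(model, "dog"))  # None: never seen in training
```

Real LLMs replace the lookup table with learned, compressed representations, which is exactly where the disagreement in this thread lies: how much of that compression amounts to understanding.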
> I don't foresee real engineers, AI researchers, or IT departments being replaced in the short to mid-term.
> Take engineers, for example. Much of their work relies on practical experience and intuition developed over years.
Terren
> Just the other day (on another list), I proposed that the problem of "hallucination" is not really a bug; rather, it is what we have designed LLMs to do (when we consider the training regime we subject them to). We train these models to produce the most probable extrapolations of text given some sample.
>
> Now consider if you were placed in a box and rewarded or punished based on how accurately you guessed the next character in a sequence. You are given the following sentence and asked to guess the next character:
>
> "Albert Einstein was born on March, "
>
> True, you could break the fourth wall and protest "But I don't know! Let me out of here!" But that would only lead to your certain punishment. Or you could take a guess: there's a decent chance the first digit is a 1 or 2. You might guess one of those and have at least a 1/3 chance of getting it right.
>
> This is how we have trained the current crop of LLMs. We don't reward them for telling us they don't know; we reward them for having the highest accuracy possible in making educated guesses.
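The incentive argument above reduces to one line of arithmetic. Under a hypothetical scoring rule (1 point for a correct next-character guess, 0 otherwise; the probabilities below are made up), guessing the most probable character always dominates abstaining:

```python
# Hypothetical scoring: 1 for a correct next-character guess, 0 otherwise.
# Made-up beliefs about the next digit of the date in the example sentence.
beliefs = {"1": 0.5, "2": 0.3, "other": 0.2}

# A reward-maximizing guesser picks the mode of its belief distribution.
expected_reward_guess = max(beliefs["1"], beliefs["2"])  # 0.5

# Saying "I don't know" is never the correct next character, so it scores 0.
expected_reward_abstain = 0.0

print(expected_reward_guess > expected_reward_abstain)  # True: guessing dominates
```

Any positive belief, however small, makes the guess strictly better than abstaining under this rule, which is the claimed origin of confident hallucination.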
> Thank you! Feel welcome to use it. :-)
You can always add some randomness to a computer program. LLMs aren't deterministic now. Human intelligence may very well be memory plus randomness, although I'd bet on the inclusion of some inference algorithms. The randomness doesn't even have to be in the brain: people interact with their environment, which provides a lot of effective randomness plus some relevant prompts.
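The non-determinism of LLMs mentioned here comes from sampling: instead of always emitting the most probable token, implementations draw from the output distribution, usually reshaped by a temperature parameter. A minimal sketch of that mechanism (a hypothetical illustration, not any real model's API):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample an index from unnormalized scores. Low temperature
    approaches deterministic argmax; high temperature adds randomness."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

# At low temperature the argmax token (index 2) wins essentially always.
picks = [sample_with_temperature([1.0, 2.0, 5.0], temperature=0.1)
         for _ in range(100)]
print(picks.count(2))
```

Raising the temperature spreads the samples across the other indices, which is the knob that makes the same model give different answers to the same prompt.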
On Wed, Jun 19, 2024 at 6:05 PM Brent Meeker <meeke...@gmail.com> wrote:

> You can always add some randomness to a computer program. LLM's aren't deterministic now. Human intelligence may very well be memory plus randomness, although I'd bet on the inclusion of some inference algorithms. The randomness doesn't even have to be in the brain. People interact with their environment which provides a lot of effective randomness plus some relevant prompts.

Yes, I think there is no great mystery to creativity. It requires only: 1. random permutation/combination, and 2. an evaluation function (how much better is this new thing compared to the previous thing?). This is the driver behind all the innovation in biology produced by natural selection. And this same mechanism is replicated in the technique of "genetic programming." Koza, who invented genetic programming, used it to create his "invention machine," which has created patent-worthy improvements across multiple domains of technology.

I use genetic programming to evolve bots, and in only a few generations they move from stumbling around at random to deriving unique, environment-specific strategies that maximize their ability to feed themselves while avoiding obstacles. There is no intelligence imparted to the design of the bots. They evolve purely by random variation of the traits of the top performers (as evaluated by how much they ate during their lives).
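The two-ingredient recipe described here (random variation plus an evaluation function) fits in a few lines of code. This is a hypothetical toy, not the actual bot-evolving code: it evolves bit strings toward a fixed target, with the bit string standing in for a bot's traits and the match count standing in for how much it ate.

```python
import random

random.seed(0)  # reproducible run

TARGET = [1] * 20  # stand-in for an "ideal" set of traits

def fitness(genome):
    """Evaluation function: how many traits match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Random variation: flip each trait with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a random population; keep the champion, vary the top five, repeat.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    population = [population[0]] + [
        mutate(random.choice(population[:5])) for _ in range(29)
    ]

print(fitness(population[0]))  # 20: a perfect match, from variation + selection alone
```

No line of this program knows what a "good" genome looks like except through the evaluation function, which is the sense in which no intelligence is imparted to the design.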
On Thursday, June 20, 2024 at 4:13:25 AM UTC+2 Jason Resch wrote:

> Yes, I think there is no great mystery to creativity. It requires only 1. random permutation/combination, and 2. an evaluation function: how much better is this new thing compared to the previous thing? [...] There is no intelligence imparted to the design of the bots. They evolve purely based on random variation of traits of the top performers (as evaluated based on how much they ate during their life).

Your addition about randomness is interesting. It's true that LLMs incorporate some degree of randomness, and human intelligence might also be influenced by randomness and inference algorithms. Interaction with our environment introduces effective randomness that contributes to our decision-making processes. The notion that creativity stems from random permutation/combination plus an evaluation function resonates with the principles of natural selection and genetic programming. The example of genetic programming evolving bots to optimize their behavior through random variation and evaluation showcases this mechanism effectively.
However, we should differentiate between speculation and fact in your statements. While randomness and evaluation are essential components of genetic programming, the assertion that there is "no great mystery to creativity" oversimplifies: what you're describing is a kind of creativity, one constrained by its iterative limitations. A change here, a small new feature there... this is creativity on a budget, making only the smallest adaptations necessary for survival rather than yielding radically new designs from the ground up. The latter kind is what is found, and most sought after, in boundary-breaking science and art, even if everybody stands on shoulders: not every PhD has a Newtonian impact on the world.
Randomness + evaluation = creativity looks rhetorically simple and clear. However, I see two problems:
1. Who/what is evaluating? Evaluation can be completely deterministic and mechanical; it can be effective at levels like natural selection; or it can come from a subject with intuition, experience, and a refined (or more rudimentary) sense of taste. It can involve a particular psychology, some world-based or even multiverse-based ontology in which to embed said subject, and more. The questions raised encompass our entire history and all qualia, if not more. Therefore, evaluation is not as simple or clear as that seemingly factual statement suggests. "Evaluation," as you sketch it rather vaguely, merely hides the problem of subject and reality behind a rhetorical mirage of clarity.
2. Oversimplification of creativity: By all means, build the creativity machine, order the randomness and evaluation in bottles from Amazon, and win every prize from science to the arts by cranking it up to 11. But this oversimplification doesn't capture the full depth of human creativity, which involves more than random variations and evaluations: cognitive processes we have difficulty describing, emotional influences, and the ability to synthesize disparate ideas into something further along the novelty spectrum.
Ultimately, while LLMs and AI can significantly augment our capabilities, they remain, for now, advanced assistants rather than autonomous intelligences capable of independent breakthroughs. The future may bring further integration and enhancement, but the unique qualities of human intelligence (our ability to synthesize thought, exercise creativity, and approach problems from unstructured perspectives with imperfect information, to name just a few) are not yet replicable by anything people have built.
I'm sure Quentin, Telmo, and Russell are reading this and shaking their heads. But they have probably been fired and replaced by LLMs much smarter than them. That should provide additional motivation to build that machine though, Jason; they need our support. Then again, given the way we/people behave in the world... it's best we don't develop it, IF it is possible in the first place.

I'm not saying we won't see fascinating developments. The threshold for me is overwhelming evidence that something can independently formulate and learn to solve problems effectively, with a notable degree of originality, in unspecified environments and on problems it hasn't been trained on. Synthetic data or not. Superintelligence is more like the thing that can spit out 3000 years' worth of mathematical/scientific discoveries in a second. The problem with this, presupposing optimistically and irrationally that it is possible, is that I'm not sure we would understand it.