Yesterday some people were saying that the improvement in large language models had hit a wall, but they can't say that today, because Claude 3.5 Sonnet came out today and it beats GPT-4o on most benchmarks. But the really amazing thing is that it's MUCH smaller than GPT-4o, and thus much faster and much cheaper to operate. The company that makes it, Anthropic, says it will come out with its far larger version, Claude 3.5 Opus, sometime later this year. I think it's going to be amazing.
> With all the headlines proclaiming that AI achieved this or surpassed that milestone, does the absence of the basic distinction between narrow and general AI not appear conspicuous to us? LLMs are narrow AI, designed to perform specific tasks.
> Marketing often blurs this distinction,
> we were promised perfect autonomous driving by Elon Musk years ago,
> Do note, I am not making a claim that there is nothing to LLMs and their recent boost in applications, capacity, conveniences offered, and similar developments. But it’s narrow AI by definition
On Sun, Jun 23, 2024 at 3:01 PM PGC <multipl...@gmail.com> wrote:

> Do note, I am not making a claim that there is nothing to LLMs and their recent boost in applications, capacity, conveniences offered, and similar developments. But it's narrow AI by definition

By what definition?
> And for everybody here assuming the Mechanist ontology, which implies the Strong AI thesis, i.e. the assertion that a machine can think,
> I am curious as to why any of you would assume that general intelligence and mind would arise from a narrow AI.
Your excitement about Claude 3.5 Sonnet's performance is understandable. It's an impressive development, but it's crucial to remember that beating benchmarks or covering a wide range of conversational topics does not equate to general intelligence. I wish we lived in a context where I could encourage you to provide evidence for your claims about AI capabilities and future predictions, but Claude, OpenAI, etc. are... not exactly open. Then we could discuss empirical data and trends instead of betting: I don't know what the capability ceiling is for narrow AI development behind closed doors, now or in the next few years, nor have I pretended to. Wide/general is not narrow/specific and brittle.

But I am happy for you if you feel that you can converse intelligently with it; I know what you mean. For my taste it's a tad obsequious and not very original, i.e. I am providing all the originality of a conversation that some large corporation is sucking up without my getting paid for it.

"I don't want clever conversation / I never want to work that hard, mmm" - Billy Joel
On Monday, June 24, 2024 at 11:02:05 PM UTC+2 John Clark wrote:

On Mon, Jun 24, 2024 at 10:00 AM PGC <multipl...@gmail.com> wrote:

> And for everybody here assuming the Mechanist ontology, which implies the Strong AI thesis, i.e. the assertion that a machine can think,

I don't know about everybody, but I certainly have that view, because the only alternative is vitalism: the idea that only life, especially human life, has a special secret sauce that is not mechanistic, that is to say, does not follow the same laws of physics as non-living things. And that view has been thoroughly discredited since 1859, when Darwin wrote "The Origin of Species".

> I am curious as to why any of you would assume that general intelligence and mind would arise from a narrow AI.

If a human could converse with you as intelligently as Claude can on such a wide number of unrelated topics, you would never call his range of interests narrow; but because Claude's brain is hard and dry rather than soft and squishy, you do. I'll tell you what, let's make a bet: I bet that an AI will win the International Mathematical Olympiad in less than 3 years, perhaps much less. I also bet that in less than 3 years the main political issue in every major country will not be unlawful immigration or crime or even an excess of wokeness; it will be what to do about AI, which is taking over jobs at an accelerating rate. What do you bet?
> it's crucial to remember that beating benchmarks or covering a wide range of conversational topics does not equate to general intelligence
...
Would you consider the aggregate capabilities of all AIs that have been created to date as a general intelligence? In the spirit of what Minsky said here:
"Each practitioner thinks there’s one magic way to get a machine to be smart, and so they’re all wasting their time in a sense. On the other hand, each of them is improving some particular method, so maybe someday in the near future, or maybe it’s two generations away, someone else will come around and say, ‘Let’s put all these together,’ and then it will be smart."
-- Marvin Minsky
I wrote that human general intelligence consists of the following abilities:
- Communicate via natural language
- Learn, adapt, and grow
- Move through a dynamic environment
- Recognize sights and sounds
- Be creative in music, art, writing and invention
- Reason with logic and rationality to solve problems
I think there has been progress in each of these domains. While the best humans may still beat the best AIs in their areas of expertise, it is arguable that the AI systems that exist in these domains are already better than the average human in each area.
This article I wrote in 2020 is quite dated, but it shows that even back then we had machines that could be called creative:
If we could somehow cobble together all the AIs that we have made so far and integrate them into a robot body, would that be something we could regard as generally intelligent? And if not, what else would need to be done?
Jason
Jason,
There's no universal consensus on intelligence beyond the broad outlines of the narrow vs. general distinction. This is reflected in our informal discussion: some emphasize that effective action should be the result and are satisfied with a certain set and level of capabilities. However, I'm less sure whether that paints a complete picture. "General" should mean what it means. Brent talks about an integration system that does the modelling. But reflection, even the redundant kind that doesn't immediately yield anything, may lead to a Russell coffee break moment. That seems to play a role, with people taking years, decades, generations, and even entire civilizations to discover that a problem may be ill-posed, unsolvable, or solvable.
We look at historical developments and ask whether all of it is required to have one Newton appear every now and then, or whether we could have had ten every generation with different values or politics, for instance. Those would be gigantic simulations to run, but who knows? Maybe we could get to Euclidean geometry far more cheaply than we did. Instead, we are making gigantic investments in known machine learning techniques with huge hardware boosts, calling it AI for marketing reasons (with many marketing MBA types becoming "Chief AI Officer" because they have a ChatGPT subscription), to build robots to be our servants, maids, assistants, and secretaries.
I'm not trying to play jargon police or anything; everyone has a right to take part in the intelligence discussion. But imho it's misleading to equate advances in machine learning driven by hardware with true intelligence. Of course, there can be the synergistic effects Minsky speculates about, but we can hardly manage resource allocation for all the people with actual general intelligence alive today, which makes me pretty sure that this isn't what most people want. They want servants that are maximally intelligent yet do what they are told, which reveals something about our own desires: the desire for people as tools.
Personally, I lean towards viewing intelligence as the potential to reflect, plus remaining open to novel approaches to any problem. Sure, capability/ability is needed to solve a problem, and intelligence is required to see that, but at some point in acquiring abilities folks seem to lose the capacity to consider fundamentally novel approaches, often ridiculing them, etc. There seems to be a point where ability limits the potential for new approaches to a problem. Children and individuals less constrained by personal beliefs and ideologies are often more intelligent in this view, because their potential to change and synthesize new approaches genuinely reflects access to a potentially infinite possibility space of problem formulations/solutions, or even the choice to let a problem be and drop it. I prefer this approach because it keeps the subject as a first principle, instead of labeling them as dumb for failing a memorization test or as an inadequate slave for some task.
Even though it's a clever marketing strategy for Silicon Valley to extract money and value from us while training their models, I don't dismiss LLMs in principle, and I think the discussions they raise can be beneficial. It's revealing to see where they fail and how we can make them appear intelligent by "cheating." This opens up new problems, such as designing tests that require on-the-fly creativity, potentially better than the ARC test. Could we tune mathematical or STEM education to be more creative through such problems, allowing for many possible solutions? This might shed light on different creative and/or reasoning styles and open the door to optimizing education for them (instead of the memorization-maximization paradigms in place with most testing, which is why the pedagogical community is panicking over its students using AI).
In this way or similarly, education could approach the creative component in STEM fields and perform research on whether this enhances general human problem formulation and solving, supplementing the less constrained, more open approaches found in the arts. This isn't about revolutionizing anything but seeing research potential.
To address your question: even if we could combine all existing AIs into a single robot, I doubt it would constitute general intelligence. The aggregated capabilities might seem impressive, but I speculate that general intelligence involves continuous learning, adaptation, and particularly reflection beyond current AI's capacity. It would require integrating these capabilities in a way that mirrors human cognitive processes as Brent suggested, which I feel we are far from achieving. But now, who knows what happens behind closed doors with a former NSA person on the board of OpenAI? We can guess.
> I also see it as surprising that through hardware improvements alone, and without specific breakthroughs in algorithms, we should see such great strides in AI.
> Humans no longer write the algorithms these neural networks derive; the training process comes up with them. And much like the algorithms implemented in the human brain, they are in a representation so opaque that they escape our capacity to understand. So I would argue there have been massive breakthroughs in the algorithms that underlie the advances in AI; we just don't know what those breakthroughs are.
> I think the human brain, with its 600T connections, might signal an upper bound for how many are required, but the brain does a lot of other things too, so the bound could be lower.
On Tue, Jul 2, 2024 at 12:52 PM Jason Resch <jason...@gmail.com> wrote:

> I also see it as surprising that through hardware improvements alone, and without specific breakthroughs in algorithms, we should see such great strides in AI.

I was not surprised, because the entire human genome only has the capacity to hold 750 MB of information; that's about the amount of information you could fit on an old-fashioned CD, not a DVD, just a CD. The true number must be considerably less than that, because that's the recipe for building an entire human being, not just the brain, and the genome contains a huge amount of redundancy; 750 MB is just the upper bound.
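As a quick sanity check, the arithmetic behind that 750 MB bound can be written out in a few lines of Python. This is my own back-of-envelope sketch, not part of the original message; the ~3.1 billion base-pair count is the standard estimate for the human genome:

    # Rough information capacity of the human genome (assumed figures).
    base_pairs = 3.1e9      # approximate human genome length in base pairs
    bits_per_base = 2       # 4 possible bases (A, C, G, T) -> 2 bits each

    total_bits = base_pairs * bits_per_base
    total_megabytes = total_bits / 8 / 1e6
    print(f"Raw capacity: ~{total_megabytes:.0f} MB")  # ~775 MB, close to the ~750 MB bound

That raw figure ignores redundancy and non-coding structure, so, as noted above, the usable information content is considerably less.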
> Humans no longer write the algorithms these neural networks derive; the training process comes up with them. And much like the algorithms implemented in the human brain, they are in a representation so opaque that they escape our capacity to understand. So I would argue there have been massive breakthroughs in the algorithms that underlie the advances in AI; we just don't know what those breakthroughs are.

That is a very interesting way to look at it, and I think you are basically correct.
> I think the human brain, with its 600T connections, might signal an upper bound for how many are required, but the brain does a lot of other things too, so the bound could be lower.

The human brain has about 86 billion neurons with 7*10^14 synaptic connections (a more generous estimate than yours), but the largest supercomputer in the world, the Frontier Computer at Oak Ridge, has 2.5*10^15 transistors, over three times as many. And we know from experiments that a typical synapse in the human brain "fires" between 1 and 50 times per second, while a typical transistor in a computer "fires" about 4 billion times a second (4*10^9). Frontier also has 9.2*10^15 bytes of fast memory. That's why it can perform 1.1*10^18 double-precision floating-point calculations per second and the human brain cannot.
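To make the comparison concrete, here is a rough sketch using only the figures quoted above; synaptic events and transistor switches are of course not equivalent units of computation, so treat the ratio as illustrative:

    # Raw "event rate" comparison, using the estimates cited above.
    synapses = 7e14            # synaptic connections in the human brain
    synapse_hz = 50            # upper end of the 1-50 Hz firing range

    transistors = 2.5e15       # Frontier transistor count
    transistor_hz = 4e9        # switching rate cited above

    brain_events = synapses * synapse_hz            # ~3.5e16 events/s
    frontier_events = transistors * transistor_hz   # ~1e25 switches/s
    print(f"Frontier/brain ratio: ~{frontier_events / brain_events:.1e}x")  # ~2.9e8x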
By way of comparison, Ray Kurzweil estimates that the hardware needed to emulate a human mind would need to be able to perform 10^16 calculations per second and have 10^12 bytes of memory.
And the calculations would not need to be 64-bit double-precision floating point; 8-bit or perhaps even 4-bit precision would be sufficient. So in the quest to develop a superintelligence, insufficient hardware is no longer a barrier.
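Putting Kurzweil's estimate next to Frontier's published specs makes the point explicit. This sketch is built only on the numbers above, with no claim that FLOPS map cleanly onto the "calculations" of a mind:

    # Headroom of Frontier over Kurzweil's brain-emulation estimate.
    kurzweil_ops = 1e16        # calculations per second required (Kurzweil's estimate)
    kurzweil_mem = 1e12        # bytes of memory required (Kurzweil's estimate)

    frontier_flops = 1.1e18    # Frontier double-precision FLOPS
    frontier_mem = 9.2e15      # Frontier bytes of fast memory

    print(f"Compute headroom: ~{frontier_flops / kurzweil_ops:.0f}x")  # ~110x
    print(f"Memory headroom:  ~{frontier_mem / kurzweil_mem:.0f}x")    # ~9200x

By this arithmetic, today's largest machine already exceeds Kurzweil's hardware threshold by roughly two orders of magnitude, which is the point being made above.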