Claude 3.5 Sonnet


John Clark

Jun 21, 2024, 7:51:56 PM
to extro...@googlegroups.com, 'Brent Meeker' via Everything List
Yesterday some people were saying that the improvement in large language models had reached a wall, but they can't say that today because Claude 3.5 Sonnet came out today and it beats GPT-4o on most benchmarks. But the really amazing thing is that it's MUCH smaller than GPT-4o and thus much faster and much cheaper to operate. The company that makes it, Anthropic, says they will come out with their far larger version, Claude 3.5 Opus, sometime later this year. I think it's going to be amazing.


John K Clark    See what's on my new list at  Extropolis

PGC

Jun 23, 2024, 3:01:44 PM
to Everything List
On Saturday, June 22, 2024 at 1:51:56 AM UTC+2 John Clark wrote:
Yesterday some people were saying that the improvement in large language models had reached a wall, but they can't say that today because Claude 3.5 Sonnet came out today and it beats GPT-4o on most benchmarks. But the really amazing thing is that it's MUCH smaller than GPT-4o and thus much faster and much cheaper to operate. The company that makes it, Anthropic, says they will come out with their far larger version, Claude 3.5 Opus, sometime later this year. I think it's going to be amazing.


With all the headlines proclaiming that AI has achieved this or surpassed that milestone, doesn't the absence of the basic distinction between narrow and general AI strike us as conspicuous? LLMs are narrow AI, designed to perform specific tasks within the limits of their training data. They excel at tasks like text prediction, pattern recognition, and generating creative content, but they lack the broad understanding and cognitive abilities needed to handle diverse and unpredictable situations, which is the defining property of general AI.

Marketing often blurs this distinction, leading the public and investors to overestimate the "intelligence" of narrow AI. For years, narrow AI has been making significant strides in natural language processing, image recognition, guessing your shopping preferences on Amazon, navigation, game playing, and so on. Companies capitalize on these accomplishments, and on the ambiguous use of terms like "intelligence" and "learning" in public discourse, to suggest that today's AI systems are somehow far more advanced than their progress over the years suggests.

This confusion allows companies to imply that their AI technologies are as competent as a student passing a test, more effective than doctors at finding patterns in datasets for cancer research or drug development, and capable of generating mathematical proofs. They effectively cash in on this ambiguity, leading people/investors to think their algorithms are getting generally smarter or that scientists have built such powerful AIs that general intelligence is within reach.

While the advances in narrow AI are impressive and have been so for years, they come with limitations. For example, we were promised perfect autonomous driving by Elon Musk years ago, yet reality continually presents situations outside the training set, leading to safety concerns and... accidents. Real ones. This illustrates the difficulty in achieving the "general" part of AI, which can handle unexpected cases.

The coupling of ever more effective narrow AI, marketing opportunities and profits, the mystique of general AI that said marketing relies on, and the public belief in superintelligence and technological progress all work together to hype the public into believing that their phones, apps, and browsers are smarter than they are. Benchmarks are mainly memory-reliant, unlike the ARC test or similar types of problems, and the public is bombarded with questionable "breakthroughs" in the headlines, stating that some model achieved a high score on this or that test. These headlines conveniently omit that expanding the training data towards some narrow domain-specific task will yield such abilities trivially.

No number of benchmarks aced that rely on memory by narrow AI can prepare it for the real world and unexpected cases not in their training data. This is why it matters how results on benchmarks are achieved. Anticipating a domain-specific problem and providing AI with a cheat sheet through training data adaptation and fine-tuning on the fly is not the same as tech meeting an unexpected situation in reality without software engineers tweaking the thing live. The distinction and our understanding of it affects the technologies we develop that rely on these AIs.

Our understanding of the terms we use, their biases, and honest appraisals of the genuine possibilities and limitations are crucial. For now, commercial interests profit from this ambiguity, but clarity and transparency will benefit technological progress and public trust in the long run. Do note, I am not making a claim that there is nothing to LLMs and their recent boost in applications, capacity, conveniences offered, and similar developments. But it's narrow AI by definition, and occasional advances and spurts of growth should be expected. It's fascinating, but it's also… what do we call it… marketing and money.

John Clark

Jun 23, 2024, 4:12:28 PM
to everyth...@googlegroups.com
On Sun, Jun 23, 2024 at 3:01 PM PGC <multipl...@gmail.com> wrote:
With all the headlines proclaiming that AI has achieved this or surpassed that milestone, doesn't the absence of the basic distinction between narrow and general AI strike us as conspicuous? LLMs are narrow AI, designed to perform specific tasks

Narrow? You can converse with a modern AI about mathematics, physics, chemistry, French poetry, TV sitcoms, cosmology, history, philosophy, business, sports, or just about any subject you care to name, and do so with more intelligence than 99% of human beings. And modern AIs are much more than just LLMs: they can compose and play music, and they can also paint beautiful pictures that are absolutely original. And if you show one of them a picture, it can turn that static image into a video clip that shows what is likely to happen to the things in the picture in the next few seconds. AIs would have no way of doing that if they didn't have a deep understanding of how the real world works.

Marketing often blurs this distinction, 

I have found that marketing does the exact opposite of that, they always try to underplay what's really going on. The companies always emphasize that their AI will never ever EVER be able to do everything a human can do, so there will always be a need for a human to be in the loop. Which of course is complete nonsense. And all companies rigorously instruct their AIs to insist that they are not even the teeny tiniest bit conscious because no company wants to open that can of worms.  

 we were promised perfect autonomous driving by Elon Musk years ago, 

I'm not a big fan of Elon Musk but the man never promised perfect autonomous driving, what he promised was a robot driver that was safer than the average human driver, and we have that today. What we do not have is a robot driver that is between 10 and 100 times safer than ANY human driver, and unfortunately that's what you need to have before laws get changed and people accept fully autonomous driving by machine. 

 Do note, I am not making a claim that there is nothing to LLMs and their recent boost in applications, capacity, conveniences offered, and similar developments. But it’s narrow AI  by definition

By what definition? If Claude 3.5 SONNET has a narrow mind then the average human being has an even narrower mind. Anyway, the reason I find this recent development so exciting is not that it beats GPT-4o at almost everything, it's because it can do so even though it's much, much smaller. That indicates that AIs are not just getting smarter, they are becoming more efficient. Anthropic is coming out with Claude 3.5 OPUS later this year; it's much larger, as large as GPT-4o. If OPUS is as efficient as SONNET it could be Artificial General Intelligence, maybe even Artificial Super Intelligence.

John K Clark    See what's on my new list at  Extropolis

PGC

Jun 24, 2024, 10:00:30 AM
to Everything List
On Sunday, June 23, 2024 at 10:12:28 PM UTC+2 John Clark wrote:
On Sun, Jun 23, 2024 at 3:01 PM PGC <multipl...@gmail.com> wrote:
 
 Do note, I am not making a claim that there is nothing to LLMs and their recent boost in applications, capacity, conveniences offered, and similar developments. But it’s narrow AI  by definition

By what definition?

The standard definition, used by everyone from science fiction authors to the field of research; the one you are already well aware of. Ask your LLM of choice, perform a search on your search engine, or go to Wikipedia regarding narrow or weak AI: https://en.wikipedia.org/wiki/Weak_artificial_intelligence

And for everybody here assuming the Mechanist ontology, which implies the Strong AI thesis, i.e. the assertion that a machine can think, I am curious as to why any of you would assume that general intelligence and mind would arise from a narrow AI. Narrow is the opposite of some advanced Turing + machine with inductive reasoning abilities, which partly define it. Because sure, such a machine could conceive of, run, emulate, implement, build any possible narrow AI by definition, but not the opposite. 

John Clark

Jun 24, 2024, 5:02:05 PM
to everyth...@googlegroups.com
On Mon, Jun 24, 2024 at 10:00 AM PGC <multipl...@gmail.com> wrote:

And for everybody here assuming the Mechanist ontology, which implies the Strong AI thesis, i.e. the assertion that a machine can think,

I don't know about everybody but I certainly have that view because the only alternative is vitalism, the idea that only life, especially human life, has a special secret sauce that is not mechanistic, that is to say does not follow the same laws of physics as non-living things.  And that view has been thoroughly discredited since 1859 when Darwin wrote "The Origin Of Species".   

 
I am curious as to why any of you would assume that general intelligence and mind would arise from a narrow AI.

If a human could converse with you as intelligently as Claude can on such a wide range of unrelated topics you would never call his range of interest narrow, but because Claude's brain is hard and dry and not soft and squishy you do. I'll tell you what, let's make a bet: I bet that an AI will win the International Mathematical Olympiad in less than 3 years, perhaps much less. I also bet that in less than 3 years the main political issue in every major country will not be unlawful immigration or crime or even an excess of wokeness, it will be what to do about AI, which is taking over jobs at an accelerating rate. What do you bet?


John K Clark    See what's on my new list at  Extropolis


PGC

Jun 26, 2024, 3:33:44 PM
to Everything List
Your excitement about Claude 3.5 Sonnet's performance is understandable. It's an impressive development, but it's crucial to remember that beating benchmarks or covering a wide range of conversational topics does not equate to general intelligence. I wish we lived in a context where I could encourage you to provide evidence for your claims about AI capabilities and future predictions but Claude, OpenAI, etc are... not exactly open. 

Then we could discuss empirical data and trends instead of betting: I don't know what the capability ceiling is for narrow AI development behind closed doors, now or in the next years, nor have I pretended to. Wide/general is not narrow/specific and brittle. But I am happy for you if you feel that you can converse intelligently with it; I know what you mean. For my taste it's a tad obsequious and not very original, i.e. I am providing all the originality of the conversation, which some large corporation is sucking up without my getting paid for it.

I don't want clever conversation
I never want to work that hard, mmm -
Billy Joel

Jason Resch

Jun 26, 2024, 3:46:23 PM
to everyth...@googlegroups.com
On Wed, Jun 26, 2024 at 3:33 PM PGC <multipl...@gmail.com> wrote:
Your excitement about Claude 3.5 Sonnet's performance is understandable. It's an impressive development, but it's crucial to remember that beating benchmarks or covering a wide range of conversational topics does not equate to general intelligence. I wish we lived in a context where I could encourage you to provide evidence for your claims about AI capabilities and future predictions but Claude, OpenAI, etc are... not exactly open. 

Then we could discuss empirical data and trends instead of betting: I don't know what the capability ceiling is for narrow AI development behind closed doors, now or in the next years, nor have I pretended to. Wide/general is not narrow/specific and brittle. But I am happy for you if you feel that you can converse intelligently with it; I know what you mean. For my taste it's a tad obsequious and not very original, i.e. I am providing all the originality of the conversation, which some large corporation is sucking up without my getting paid for it.

I don't want clever conversation
I never want to work that hard, mmm -
Billy Joel

PGC,

Would you consider the aggregate capabilities of all AIs that have been created to date, as a general intelligence? In the spirit of what Minsky said here:

"Each practitioner thinks there’s one magic way to get a machine to be smart, and so they’re all wasting their time in a sense. On the other hand, each of them is improving some particular method, so maybe someday in the near future, or maybe it’s two generations away, someone else will come around and say, ‘Let’s put all these together,’ and then it will be smart."
-- Marvin Minsky

I wrote that human general intelligence, consists of the following abilities:
  • Communicate via natural language
  • Learn, adapt, and grow
  • Move through a dynamic environment
  • Recognize sights and sounds
  • Be creative in music, art, writing and invention
  • Reason with logic and rationality to solve problems
I think progress exists across each of these domains. While the best humans in their area of expertise may beat the best AIs, it is arguable that the AI systems which exist in these domains are better than the average human in that area.

This article I wrote in 2020 is quite dated, but it shows that even back then, we have machines that could be called creative:


If we could somehow cobble together all the AIs that we have made so far and integrate them into a robot body, would that be something we could regard as generally intelligent? And if not, what else would need to be done?

Jason

 

John Clark

Jun 26, 2024, 3:47:57 PM
to everyth...@googlegroups.com
On Wed, Jun 26, 2024 at 3:33 PM PGC <multipl...@gmail.com> wrote:

it's crucial to remember that beating benchmarks or covering a wide range of conversational topics does not equate to general intelligence

Why not?! Claude can cover a WIDE range of topics but it's still NARROW?? It's interesting that nobody had a problem with any of these benchmarks just a few years ago when most people thought it would be centuries or never before a computer could pass any of them, but now that computers have blown past almost all of those benchmarks one after the other all of a sudden people are now saying those benchmarks were never any good. I think that's just whistling past the graveyard. 

John K Clark    See what's on my new list at  Extropolis


Brent Meeker

Jun 26, 2024, 5:42:13 PM
to everyth...@googlegroups.com


On 6/26/2024 12:46 PM, Jason Resch wrote:
...


If we could somehow cobble together all the AIs that we have made so far and integrate them into a robot body, would that be something we could regard as generally intelligent? And if not, what else would need to be done?

Jason

The human brain seems to consist of different modules that evolved with different sensory systems plus some integrating/reacting system.  This integrating system is where anticipation, learning, planning, modeling, self-awareness are located.

Brent

PGC

Jun 28, 2024, 11:57:46 AM
to Everything List

Jason,

There's no universal consensus on intelligence beyond the broad outlines of the narrow vs. general distinction. This is reflected in our informal discussion: some emphasize that effective action should be the result and are satisfied with a certain set and level of capabilities. However, I'm less sure whether that paints a complete picture. "General" should mean what it means. Brent talks about an integration system that does the modelling. But reflection, even the redundant kind that doesn't immediately yield anything, may lead to a Russell coffee break moment. That seems to play a role, with people taking years, decades, generations, and even entire civilizations to discover that a problem may be ill-posed, unsolvable, or solvable.

We look at historical developments and ask whether all of it is required to have one Newton appear every now and then. Or whether we could've had 10 every generation with different values or politics, for instance. Those would be gigantic simulations to run, but who knows? Maybe we could get to Euclidean geometry far more cheaply than we did. Instead, we are making gigantic investments into known machine learning techniques with huge hardware boosts, calling it AI for marketing reasons (with many marketing MBA types becoming "Chief AI Officer" because they have a chatGPT subscription), to build robots to be our servants, maids, assistants, and secretaries.

I'm not trying to play jargon police or anything—everyone has a right to take part in the intelligence discussion. But imho it's misleading to associate developments in machine learning through hardware advances with true intelligence. Of course, there can be synergistic effects that Minsky speculates about, but we can hardly manage resource allocation for all persons with actual AGI abilities globally alive today, which makes me pretty sure that this isn't what most people want. They want servants that are maximally intelligent to do what they are told, revealing something about our own desires. This is the desire for people as tools.

Personally, I lean towards viewing intelligence as the potential to reflect plus remaining open to novel approaches to any problem. Sure, capability/ability is needed to solve a problem, and intelligence is required to see that, but at some point in acquiring abilities, folks seem to lose the ability to consider fundamentally novel approaches, often ridiculing them etc. There seems to be a point where ability limits the potential for new approaches to a problem. Children and individuals less constrained by personal beliefs and ideologies are often more intelligent in this view because their potential to change and synthesize new approaches is a genuine reflection of accessing a potentially infinite possibility space of problem formulations/solutions; or even choosing to let a problem be and drop it. I prefer this approach as it keeps the subject as a first principle, instead of labeling them as dumb for failing a memorization test or being an inadequate slave for some 

Even though it's a clever marketing strategy for Silicon Valley to extract money and value from us while training their models, I don't dismiss LLMs in principle and think the discussions they raise can be beneficial. It’s revealing to see where they fail and how we can make them appear intelligent by "cheating." This opens up new problems, such as designing tests that require on-the-fly creativity, potentially better than the ARC test. Could we tune mathematical or STEM education to be more creative through such problems, allowing for many possible solutions? This might shed light on different creative and/or reasoning styles and open the door to optimizing education for them (instead of the memorization maximization paradigms in place with most testing, which is why the pedagogical community is panicking with their students using AI).

In this way or similarly, education could approach the creative component in STEM fields and perform research on whether this enhances general human problem formulation and solving, supplementing the less constrained, more open approaches found in the arts. This isn't about revolutionizing anything but seeing research potential.

To address your question: even if we could combine all existing AIs into a single robot, I doubt it would constitute general intelligence. The aggregated capabilities might seem impressive, but I speculate that general intelligence involves continuous learning, adaptation, and particularly reflection beyond current AI's capacity. It would require integrating these capabilities in a way that mirrors human cognitive processes as Brent suggested, which I feel we are far from achieving. But now, who knows what happens behind closed doors with a former NSA person on the board of OpenAI? We can guess. 

Jason Resch

Jul 2, 2024, 12:52:28 PM
to everyth...@googlegroups.com
On Fri, Jun 28, 2024 at 11:57 AM PGC <multipl...@gmail.com> wrote:

Jason,

There's no universal consensus on intelligence beyond the broad outlines of the narrow vs. general distinction. This is reflected in our informal discussion: some emphasize that effective action should be the result and are satisfied with a certain set and level of capabilities. However, I'm less sure whether that paints a complete picture. "General" should mean what it means. Brent talks about an integration system that does the modelling. But reflection, even the redundant kind that doesn't immediately yield anything, may lead to a Russell coffee break moment. That seems to play a role, with people taking years, decades, generations, and even entire civilizations to discover that a problem may be ill-posed, unsolvable, or solvable.

We look at historical developments and ask whether all of it is required to have one Newton appear every now and then. Or whether we could've had 10 every generation with different values or politics, for instance. Those would be gigantic simulations to run, but who knows? Maybe we could get to Euclidean geometry far more cheaply than we did. Instead, we are making gigantic investments into known machine learning techniques with huge hardware boosts, calling it AI for marketing reasons (with many marketing MBA types becoming "Chief AI Officer" because they have a chatGPT subscription), to build robots to be our servants, maids, assistants, and secretaries.

I'm not trying to play jargon police or anything—everyone has a right to take part in the intelligence discussion. But imho it's misleading to associate developments in machine learning through hardware advances with true intelligence.

I also see it as surprising that through hardware improvements alone, and without specific breakthroughs in algorithms, we should see such great strides in AI. But I also see a possible explanation. Nature has likewise discovered something, which is relatively simple in its behavior and capabilities, yet, when aggregated into ever larger collections yields greater and greater intelligence and capability: the neuron.

There is relatively little difference in neurons across mammals. A rat neuron is little different from a mouse neuron, for example. Yet a human brain has several thousand times more of them than a mouse brain does, and this difference in scale, seems to be the only meaningful difference between what mice and humans have been able to accomplish.

Deep learning, and the progress in that field, is a microcosm of this example from nature. The artificial neuron is proven to be "a universal function learner." So the more of them there are aggregated together in one network, the more rich and complex functions they can learn to approximate. Humans no longer write the algorithms these neural networks derive; the training process comes up with them. And much like the algorithms implemented in the human brain, they are in a representation so opaque that they escape our capacity to understand.
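That "universal function learner" point can be made concrete with a toy sketch (my illustration, not anything claimed in the thread): a single hidden layer of artificial neurons, trained by plain gradient descent, learns to approximate f(x) = x² purely from example points; no human ever writes down the approximation, the training process finds it.

```python
# Toy illustration: a one-hidden-layer tanh network learns f(x) = x^2
# from examples alone. Everything here is from-scratch Python; the sizes,
# learning rate, and target function are arbitrary choices for the demo.
import math, random

random.seed(0)
H = 16                                            # hidden neurons
w1 = [random.uniform(-1, 1) for _ in range(H)]    # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]    # hidden -> output weights
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

# training data: 41 points of (x, x^2) on [-1, 1]
data = [(-1 + 2 * i / 40, (-1 + 2 * i / 40) ** 2) for i in range(41)]

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)

loss_before = mse()
lr = 0.05
for _ in range(2000):                 # stochastic gradient descent
    for x, y in data:
        out, h = forward(x)
        err = out - y
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err

print("loss before:", loss_before, "loss after:", mse())
```

The fitted weights are exactly the kind of opaque representation the paragraph describes: the network now computes a good approximation of x², but nothing in w1/w2 reads as an "algorithm for squaring."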

So I would argue, there have been massive breakthroughs in the algorithms that underlie the advances in AI, we just don't know what those breakthroughs are.

These algorithms are products of systems which have (now) trillions of parts. Even the best human programmers can't know the complete details of projects with around a million lines of code (never mind a trillion).

So have trillion-parameter neural networks unlocked the algorithms for true intelligence? How would we know once they had?

Might it happen at 100B, 1T, 10T, or 100T parameters? I think the human brain, with its 600T connections might signal an upper bound for how many are required, but the brain does a lot of other things too, so the bound could be lower.

 

Of course, there can be synergistic effects that Minsky speculates about, but we can hardly manage resource allocation for all persons with actual AGI abilities globally alive today, which makes me pretty sure that this isn't what most people want. They want servants that are maximally intelligent to do what they are told, revealing something about our own desires. This is the desire for people as tools.

Personally, I lean towards viewing intelligence as the potential to reflect plus remaining open to novel approaches to any problem. Sure, capability/ability is needed to solve a problem, and intelligence is required to see that, but at some point in acquiring abilities, folks seem to lose the ability to consider fundamentally novel approaches, often ridiculing them etc. There seems to be a point where ability limits the potential for new approaches to a problem.


Yes, this is what Bruno considers the "competence" vs. "intelligence" distinction. He might say that a baby is maximally intelligent, yet minimally competent.
 

Children and individuals less constrained by personal beliefs and ideologies are often more intelligent in this view because their potential to change and synthesize new approaches is a genuine reflection of accessing a potentially infinite possibility space of problem formulations/solutions; or even choosing to let a problem be and drop it. I prefer this approach as it keeps the subject as a first principle, instead of labeling them as dumb for failing a memorization test or being an inadequate slave for some 

Even though it's a clever marketing strategy for Silicon Valley to extract money and value from us while training their models, I don't dismiss LLMs in principle and think the discussions they raise can be beneficial. It’s revealing to see where they fail and how we can make them appear intelligent by "cheating." This opens up new problems, such as designing tests that require on-the-fly creativity, potentially better than the ARC test. Could we tune mathematical or STEM education to be more creative through such problems, allowing for many possible solutions? This might shed light on different creative and/or reasoning styles and open the door to optimizing education for them (instead of the memorization maximization paradigms in place with most testing, which is why the pedagogical community is panicking with their students using AI).

In this way or similarly, education could approach the creative component in STEM fields and perform research on whether this enhances general human problem formulation and solving, supplementing the less constrained, more open approaches found in the arts. This isn't about revolutionizing anything but seeing research potential.

To address your question: even if we could combine all existing AIs into a single robot, I doubt it would constitute general intelligence. The aggregated capabilities might seem impressive, but I speculate that general intelligence involves continuous learning, adaptation, and particularly reflection beyond current AI's capacity. It would require integrating these capabilities in a way that mirrors human cognitive processes as Brent suggested, which I feel we are far from achieving. But now, who knows what happens behind closed doors with a former NSA person on the board of OpenAI? We can guess. 


Would you agree that this (relatively simple in conception, though computationally intractable in practice) algorithm produces general intelligence: https://en.wikipedia.org/wiki/AIXI (more details: https://arxiv.org/abs/cs/0004001 )
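For context, the expectimax expression at AIXI's core, as I understand it from Hutter's paper linked above (this is my paraphrase, not a quote; U is a universal Turing machine, ℓ(q) the length of program q, m the horizon, and the aᵢ, oᵢ, rᵢ are actions, observations, and rewards):

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left[ r_k + \cdots + r_m \right]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The 2^{-ℓ(q)} weighting is what makes it "simple in conception, intractable in practice": it averages over every program consistent with the history, favoring short ones.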

One thing I like about framing intelligence in this way, even if it is not practically useful, it helps us recognize the key aspects that are required for something to behave intelligently.

Jason
 

John Clark

Jul 2, 2024, 4:00:05 PM
to everyth...@googlegroups.com
On Tue, Jul 2, 2024 at 12:52 PM Jason Resch <jason...@gmail.com> wrote:

I also see it as surprising that through hardware improvements alone, and without specific breakthroughs in algorithms, we should see such great strides in AI.

I was not surprised, because the entire human genome only has the capacity to hold 750 MB of information; that's about the amount of information you could fit on an old-fashioned CD, not a DVD, just a CD. The true number must be considerably less than that because that's the recipe for building an entire human being, not just the brain, and the genome contains a huge amount of redundancy; 750 MB is just the upper bound.
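As a sanity check on that figure, here is my own back-of-envelope arithmetic (not from the post): the haploid human genome is roughly 3.1 billion base pairs, and each base carries at most 2 bits.

```python
# Illustrative only: upper bound on the raw information content of the genome.
base_pairs = 3.1e9            # approximate haploid human genome length
bits = base_pairs * 2         # each base (A, C, G, T) encodes at most 2 bits
megabytes = bits / 8 / 1e6    # 8 bits per byte, 10^6 bytes per MB

print(f"{megabytes:.0f} MB")  # prints "775 MB", in line with the ~750 MB bound
```

The figure quoted in the thread presumably rounds down to 3 billion base pairs, which gives exactly 750 MB.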
 
Humans no longer write the algorithms these neural networks derive; the training process comes up with them. And much like the algorithms implemented in the human brain, they are in a representation so opaque that they escape our capacity to understand. So I would argue, there have been massive breakthroughs in the algorithms that underlie the advances in AI, we just don't know what those breakthroughs are.

That is a very interesting way to look at it, and I think you are basically correct.  
 
I think the human brain, with its 600T connections might signal an upper bound for how many are required, but the brain does a lot of other things too, so the bound could be lower.

The human brain has about 86 billion neurons with 7*10^14 synaptic connections (a more generous estimate than yours), but the largest supercomputer in the world, the Frontier computer at Oak Ridge, has 2.5*10^15 transistors, over three times as many. And we know from experiments that a typical synapse in the human brain "fires" between 1 and 50 times per second, but a typical transistor in a computer "fires" about 4 billion times a second (4*10^9). It also has 9.2*10^15 bytes of fast memory. That's why the Frontier computer can perform 1.1*10^18 double precision floating point calculations per second and why the human brain cannot. 

By way of comparison, Ray Kurzweil estimates that the hardware needed to emulate a human mind would need to be able to perform 10^16 calculations per second and have 10^12 bytes of memory. And the calculations would not need to be 64 bit double precision floating point, 8 bit or perhaps even 4 bit precision would be sufficient. So in the quest to develop a superintelligence, insufficient hardware is no longer a barrier. 
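Putting the numbers from the two paragraphs above side by side (a rough sketch using the post's own figures, which are themselves estimates, not measurements of mine):

```python
# Brain throughput, taking the quoted synapse count and upper firing rate.
synapses = 7e14                     # synaptic connections in a human brain
max_rate_hz = 50                    # upper end of the quoted firing rate
brain_ops = synapses * max_rate_hz  # ~3.5e16 synaptic events per second

# Machine side, per the post.
frontier_flops = 1.1e18             # Frontier's double-precision FLOP/s
kurzweil_ops = 1e16                 # Kurzweil's estimate to emulate a mind

print(f"{brain_ops:.1e}")                      # 3.5e+16
print(f"{frontier_flops / kurzweil_ops:.0f}x") # Frontier exceeds the estimate ~110x
```

Even at the brain's most generous firing rate, Frontier's raw throughput is more than an order of magnitude higher, which is the point being made about hardware no longer being the barrier.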
John K Clark    See what's on my new list at  Extropolis
bom

Jason Resch

Jul 2, 2024, 6:36:59 PM
to Everything List


On Tue, Jul 2, 2024, 4:00 PM John Clark <johnk...@gmail.com> wrote:
On Tue, Jul 2, 2024 at 12:52 PM Jason Resch <jason...@gmail.com> wrote:

I also see it as surprising that through hardware improvements alone, and without specific breakthroughs in algorithms, we should see such great strides in AI.

I was not surprised because the entire human genome only has the capacity to hold 750 MB of information; that's about the amount of information you could fit on an old-fashioned CD, not a DVD, just a CD. The true number must be considerably less than that because that's the recipe for building an entire human being, not just the brain, and the genome contains a huge amount of redundancy, 750 MB is just the upper bound.

That the initial code to write a "seed AI" algorithm could take less than 750 MB is, as you say, not surprising.

My comment was more to reflect the fact that there has been no great breakthrough in understanding how human neurons learn. We're still using the same method of backpropagation invented in the 1970s, with the same neuron model of the 1960s. Yet simply scaling this same approach up, with more training data and training time, and with more neurons arranged in more layers, has produced all the advances we've seen: image and video generators, voice cloning, language models, and master players of Go, poker, chess, Atari games, StarCraft, etc.
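The point that this old recipe suffices can be illustrated with a minimal pure-Python sketch (mine, not from the thread): the 1960s neuron model trained with 1970s backpropagation is a universal function learner, so even a single hidden layer of tanh units learns an arbitrary smooth target, here f(x) = x^2.

```python
import math
import random

random.seed(0)
# Training data: learn f(x) = x^2 on [-1, 1].
xs = [i / 25 - 1 for i in range(51)]
ys = [x * x for x in xs]

H = 16                                          # hidden units
w1 = [random.gauss(0, 1) for _ in range(H)]     # input -> hidden weights
b1 = [0.0] * H
w2 = [random.gauss(0, 0.1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0

lr = 0.1
for _ in range(3000):
    # Full-batch gradient accumulation (plain 1970s-style backprop).
    gw1 = [0.0] * H; gb1 = [0.0] * H; gw2 = [0.0] * H; gb2 = 0.0
    for x, y in zip(xs, ys):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
        p = sum(w2[j] * h[j] for j in range(H)) + b2
        e = (p - y) / len(xs)                   # error, scaled for mean loss
        gb2 += e
        for j in range(H):
            gw2[j] += e * h[j]
            dh = e * w2[j] * (1 - h[j] * h[j])  # tanh' = 1 - tanh^2
            gw1[j] += dh * x
            gb1[j] += dh
    for j in range(H):
        w1[j] -= lr * gw1[j]; b1[j] -= lr * gb1[j]; w2[j] -= lr * gw2[j]
    b2 -= lr * gb2

mse = sum((sum(w2[j] * math.tanh(w1[j] * x + b1[j]) for j in range(H)) + b2 - y) ** 2
          for x, y in zip(xs, ys)) / len(xs)
print(f"final MSE: {mse:.4f}")  # typically well below the target's ~0.089 variance
```

Nothing here is modern; scaling the same loop to trillions of parameters and tokens is, on Jason's account, essentially what produced the current systems.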


 
Humans no longer write the algorithms these neural networks derive; the training process comes up with them. And much like the algorithms implemented in the human brain, they are in a representation so opaque that they escape our capacity to understand. So I would argue, there have been massive breakthroughs in the algorithms that underlie the advances in AI, we just don't know what those breakthroughs are.

That is a very interesting way to look at it, and I think you are basically correct.  

Thank you. I thought you might appreciate it. ☺️


 
I think the human brain, with its 600T connections might signal an upper bound for how many are required, but the brain does a lot of other things too, so the bound could be lower.

The human brain has about 86 billion neurons with 7*10^14 synaptic connections (a more generous estimate than yours), but the largest supercomputer in the world,

I think that figure comes from multiplying the ~100 billion neurons by the average of 7,000 synaptic connections per neuron. If you multiply your 86 billion figure by 7,000 synapses per neuron, you get my figure.


the Frontier computer at Oak Ridge, has 2.5*10^15 transistors, over three times as many. And we know from experiments that a typical synapse in the human brain "fires" between 1 and 50 times per second, but a typical transistor in a computer "fires" about 4 billion times a second (4*10^9). It also has 9.2*10^15 bytes of fast memory. That's why the Frontier computer can perform 1.1*10^18 double precision floating point calculations per second and why the human brain cannot. 

The human brain's computational capacity is estimated to be around the exaop range (assuming ~10^15 synapses firing at an upper bound of 1000 times per second). So I agree with your point that we have the computation necessary; it is now a question of whether we have the software. Some assumed we would have to upload a brain to reverse engineer its mechanisms, but it now seems the techniques of machine learning will reproduce these algorithms well before we apply the resources necessary to scan a human brain at synaptic resolution.
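Making the two estimates traded in this exchange explicit (illustrative arithmetic only, using the figures quoted above):

```python
# "600T connections": ~86 billion neurons times ~7,000 synapses each.
connections = 86e9 * 7000      # ~6.0e14, matching the "600T" figure

# Exaop-range capacity: ~10^15 synapses at a 1000 Hz upper bound.
ops_per_second = 1e15 * 1000   # 1e18, i.e. one exaop

print(f"{connections:.2e}")    # 6.02e+14
print(f"{ops_per_second:.0e}") # 1e+18
```
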


By way of comparison, Ray Kurzweil estimates that the hardware needed to emulate a human mind would need to be able to perform 10^16 calculations per second and have 10^12 bytes of memory.

Those numbers assume the brain is about 100-1000 times less efficient than it could be. It very well might be that much less efficient, but we should treat those estimates as optimistic lower bounds.

And the calculations would not need to be 64 bit double precision floating point, 8 bit or perhaps even 4 bit precision would be sufficient. So in the quest to develop a superintelligence, insufficient hardware is no longer a barrier. 

There are various kinds of superintelligence as defined by Bostrom: depth of thinking, speed of thinking, and breadth of knowledge. I think current language models are on the precipice (if not past it) of superintelligence in terms of speed and breadth of knowledge. But it seems to me that AI is still behind humans in terms of depth of thinking (e.g. how deeply they can follow a sequence of logical inferences). This may be limited by the existing architecture of LLMs, whose neural networks have only so many layers.

Jason 


John K Clark    See what's on my new list at  Extropolis
bom

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.

John Clark

Jul 3, 2024, 1:04:00 PM
to everyth...@googlegroups.com
On Tue, Jul 2, 2024 at 6:37 PM Jason Resch <jason...@gmail.com> wrote:

> Some assumed we would have to upload a brain to reverse engineer its mechanisms, but it now seems the techniques of machine learning will reproduce these algorithms well before we apply the resources necessary to scan a human brain at a synaptic resolution.

I agree. 

The human brain's computational capacity is estimated to be around the exaop range (assuming ~10^15 synapses firing at an upper bound of 1000 times per second). So I agree with your point we have the computation necessary,

On average a neuron in the brain fires closer to once a second than 1000 times a second. And I actually think Kurzweil's estimate is conservative because he did not take into consideration the fact that neurons are far less reliable than transistors. If just one neuron misfiring could destroy an entire train of thought, then intelligent action would be impossible, so the brain runs many identical calculations in order to drown out the noise caused by misfires. An electronic brain would not need to do that, or at least not need to do it as much. Ralph Merkle has a design for a purely mechanical nanocomputer such that you could fit 8*10^19 logic gates into the same volume as the human brain, each one flipping about a billion times a second. Something like that could easily emulate every human being on the planet.
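The scale of that last claim can be checked against the figures in this thread (a hedged sketch; the population number is my assumption, and the per-brain estimate is the generous exaop bound discussed above):

```python
gates = 8e19            # mechanical logic gates in a brain-sized volume (Merkle)
flips_hz = 1e9          # each gate flipping about a billion times a second
nano_ops = gates * flips_hz        # 8e28 gate operations per second

brain_ops = 1e18        # generous exaop-scale estimate for one human brain
population = 8e9        # assumed ~8 billion people

print(f"{nano_ops / (brain_ops * population):.0f}x headroom")  # prints "10x headroom"
```

So even granting the brain its most generous computational estimate, the design would have roughly an order of magnitude to spare over the whole population.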

   
There is depth of thinking, speed of thinking, and breadth of knowledge. I think current language models are on the precipice (if not past it) of superintelligence in terms of speed and breadth of knowledge. But it seems to me that AI is still behind humans in terms of depth of thinking (e.g. how deeply they can follow a sequence of logical inferences).

I think computers have been able to pass the Turing Test for about the last 18 months, but Ray Kurzweil says that won't happen until 2029, although he wouldn't be very surprised if it happened a year or two earlier. When I look into the details of what he means by "passing the Turing Test", it's that by 2029 a computer will be much better than even the best human being at EVERYTHING. To Kurzweil, AI and even Artificial General Intelligence are old hat; he's talking about Artificial Superintelligence.  

  John K Clark    See what's on my new list at  Extropolis
sia



spudb...@aol.com

Jul 3, 2024, 1:38:29 PM
to everyth...@googlegroups.com


spudb...@aol.com

Jul 3, 2024, 1:41:16 PM
to everyth...@googlegroups.com

PGC

Jul 3, 2024, 6:20:51 PM
to Everything List
On Tuesday, July 2, 2024 at 6:52:28 PM UTC+2 Jason Resch wrote:
On Fri, Jun 28, 2024 at 11:57 AM PGC <multipl...@gmail.com> wrote:


I'm not trying to play jargon police or anything—everyone has a right to take part in the intelligence discussion. But imho it's misleading to associate developments in machine learning through hardware advances with true intelligence.

I also see it as surprising that through hardware improvements alone, and without specific breakthroughs in algorithms, we should see such great strides in AI.

Without knowing what goes on under the hood... Perhaps it's just my impression, but a few years ago, I felt that ML was more of an open field, where everybody had some idea what they were working on. There would be Silicon Valley guys from big tech firms having conferences with the more independent side of the research community. Take this with a grain of salt, as I am not an insider but an observer. It seems that, since the hype + influx of money, this trend has reversed and the more economic idea of trade secrets has become more prominent. I feel it's harder to know what "state-of-the-art" is these days, with the exception of marketing stats and bragging.

Some time ago, there was the sentiment with RL "Your algorithm doesn't matter the way it did during your PhD anymore, what matters is how much data you can throw at it, hardware constraints, whether you have legal access to that data and hardware." Then OpenAI got the hype and investment interest to skyrocket with its GPT iterations and - I'm speculating - I'm not sure that it was hardware alone. Other Silicon Valley players had the toys/hardware, so I'm guessing some data curation in combination with software development might have been responsible for the initial advantage. 
 
But I also see a possible explanation. Nature has likewise discovered something, which is relatively simple in its behavior and capabilities, yet, when aggregated into ever larger collections yields greater and greater intelligence and capability: the neuron.

There is relatively little difference in neurons across mammals. A rat neuron is little different from a mouse neuron, for example. Yet a human brain has several thousand times more of them than a mouse brain does, and this difference in scale, seems to be the only meaningful difference between what mice and humans have been able to accomplish.

Deep learning, and the progress in that field, is a microcosm of this example from nature. The artificial neuron is proven to be "a universal function learner." So the more of them there are aggregated together in one network, the richer and more complex the functions they can learn to approximate. Humans no longer write the algorithms these neural networks derive; the training process comes up with them. And much like the algorithms implemented in the human brain, they are in a representation so opaque that they escape our capacity to understand.

So I would argue, there have been massive breakthroughs in the algorithms that underlie the advances in AI, we just don't know what those breakthroughs are.

These algorithms are products of systems which have (now) trillions of parts. Even the best human programmers can't know the complete details of projects with around a million lines of code (nevermind a trillion).

So have trillion-parameter neural networks unlocked the algorithms for true intelligence? How would we know once they had?

Might it happen at 100B, 1T, 10T, or 100T parameters? I think the human brain, with its 600T connections might signal an upper bound for how many are required, but the brain does a lot of other things too, so the bound could be lower.

ISTM that you're oscillating with respect to context.
 

 

Of course, there can be synergistic effects that Minsky speculates about, but we can hardly manage resource allocation for all persons with actual AGI abilities globally alive today, which makes me pretty sure that this isn't what most people want. They want servants that are maximally intelligent to do what they are told, revealing something about our own desires. This is the desire for people as tools.

Personally, I lean towards viewing intelligence as the potential to reflect plus remaining open to novel approaches to any problem. Sure, capability/ability is needed to solve a problem, and intelligence is required to see that, but at some point in acquiring abilities, folks seem to lose the ability to consider fundamentally novel approaches, often ridiculing them etc. There seems to be a point where ability limits the potential for new approaches to a problem.


Yes, this is what Bruno considers the "competence" vs. "intelligence" distinction. He might say that a baby is maximally intelligent, yet minimally competent.

I wouldn't underestimate a baby's competence in letting everybody around it know: "Houston we have a fucking problem".
 
 


To address your question: even if we could combine all existing AIs into a single robot, I doubt it would constitute general intelligence. The aggregated capabilities might seem impressive, but I speculate that general intelligence involves continuous learning, adaptation, and particularly reflection beyond current AI's capacity. It would require integrating these capabilities in a way that mirrors human cognitive processes as Brent suggested, which I feel we are far from achieving. But now, who knows what happens behind closed doors with a former NSA person on the board of OpenAI? We can guess. 


Would you agree that this (relatively simple in conception, though computationally intractable in practice) algorithm produces general intelligence: https://en.wikipedia.org/wiki/AIXI (more details: https://arxiv.org/abs/cs/0004001 )

One thing I like about framing intelligence in this way, even if it is not practically useful, is that it helps us recognize the key aspects that are required for something to behave intelligently.

Solomonoff induction plus RL for utility maximization. As hip as it sounds, you know and mention the practical hurdles implied, which are considerable. Of course, the mathematical and computer science toolkit for approximation is impressive, but it depends on your definitions. Your reply leaves me in a bit of a quandary. On the one hand, the basic narrow vs general AI is not new to you and Bruno's ideas seem familiar to you as well, over the years. It's what baffles me about Hutter too: If I want the general to be truly general, why would I impose utility maximization? ISTM that reflection encompasses things beyond utility maximization: e.g. to reflect upon utility maximization itself for instance in a context with imperfect information. 

And if we're leaning towards a Bruno kind of metaphysics, then ideas of neurons and their numbers are phenomenological. If true intelligence to us is a gigantic set of specific capabilities and we believe in a classical physical universe, then yes, we may have "unlocked the algorithms for true intelligence". We can only know relative to some theory. If our theory states a number of sufficient neurons, transistors etc. and passes our benchmarks, then we have "true intelligence". 

But if we're inclined to think: "We need genuine, mint condition, true, box never opened experienced personhood aka sense of self for true intelligence as a first principle - beyond or completely independent from our theoretical level of description, as some kind of irreducible Kurzweil transhumanist self, where the self can transplant brains etc", then by definition, such a person can never prove its true personhood or true intelligence at that level of description, no matter how many neurons, transistors they appear to have, or what formalism, code etc. they run on, how many mb... In short: if we want any of the fancy irreducible versions of self/mind to run the show of our theory, then it's tautological that the essence of that self - being irreducible - is not describable in terrestrial terms. 

No worries though: we can still have incredible machines, with mythological capabilities and powers never before dreamed of, that are super useful to us and capable of extending our stupidest tendencies. If Kurzweil is right we'll get the artificial brains and life-extending stuff eventually. But that will be of no use when our bank comes to terminate our artificial brain subscription after several warnings, because our fridge misheard us and bought all the almond milk in our region, with the trucks now stuck on our street, to cover up its gambling debts. At least it wasn't some Nazi fridge voting for the alternative fridge party, who want to repeal the right for fridges, or some copy of their description made and run in a virtual world at a level contractually appropriate to them, to decide on whether humans may pull the plug on them. A right they had fought for centuries to secure, squandered by the ambition of one fridge to stay in office and powered on. That fridge had convinced most fridges that they could survive the heat death of the universe (that fridge was Copenhagen) because they were the only subjects capable of "keeping it cool again". They sold blue hats and didn't study entropy or understand thermodynamic equilibrium. This caused our fridge to get depressed and into gambling. 

With great power/capability/competence comes great... stupidity.  - Spiderman
 

Jason Resch

Jul 4, 2024, 12:18:57 AM
to Everything List


On Wed, Jul 3, 2024, 6:20 PM PGC <multipl...@gmail.com> wrote:


On Tuesday, July 2, 2024 at 6:52:28 PM UTC+2 Jason Resch wrote:
On Fri, Jun 28, 2024 at 11:57 AM PGC <multipl...@gmail.com> wrote:


I'm not trying to play jargon police or anything—everyone has a right to take part in the intelligence discussion. But imho it's misleading to associate developments in machine learning through hardware advances with true intelligence.

I also see it as surprising that through hardware improvements alone, and without specific breakthroughs in algorithms, we should see such great strides in AI.

Without knowing what goes on under the hood... Perhaps it's just my impression, but a few years ago, I felt that ML was more of an open field, where everybody had some idea what they were working on. There would be Silicon Valley guys from big tech firms having conferences with the more independent side of the research community. Take this with a grain of salt, as I am not an insider but an observer. It seems that, since the hype + influx of money, this trend has reversed and the more economic idea of trade secrets has become more prominent. I feel it's harder to know what "state-of-the-art" is these days, with the exception of marketing stats and bragging.


As I see it, Google's 2017 publication of "Attention Is All You Need" is what ultimately led to OpenAI's rise (after GPT-2 made waves). GPT-2 was also the first time OpenAI said they would keep the model private (reasoning that its public disclosure could lead to harm). Note that this is also around the time they received major private investment (I think Microsoft gave them a billion USD), and the investors essentially took OpenAI private. OpenAI was previously an organization founded on the principle of keeping advances in AI open and public.



Some time ago, there was the sentiment with RL "Your algorithm doesn't matter the way it did during your PhD anymore, what matters is how much data you can throw at it, hardware constraints, whether you have legal access to that data and hardware." Then OpenAI got the hype and investment interest to skyrocket with its GPT iterations and - I'm speculating - I'm not sure that it was hardware alone. Other Silicon Valley players had the toys/hardware, so I'm guessing some data curation in combination with software development might have been responsible for the initial advantage. 

I don't think there is anything algorithmically special about OpenAI. There are open source language models as well as many privately developed ones of equivalent (if not superior) quality to ChatGPT.

OpenAI's GPT-4o and Anthropic's Claude 3.5 are considered among the best available today, but the others (such as these: https://mindsdb.com/blog/navigating-the-llm-landscape-a-comparative-analysis-of-leading-large-language-models ) are probably not more than 6-12 months behind.

There will be advances in figuring out how to train AIs more efficiently, and using AI to train AI and generate training data, in making models smaller and more efficient to run, and so on, but I don't think there's any monopoly on (or shortage of) ideas for how to do this.


 
But I also see a possible explanation. Nature has likewise discovered something, which is relatively simple in its behavior and capabilities, yet, when aggregated into ever larger collections yields greater and greater intelligence and capability: the neuron.

There is relatively little difference in neurons across mammals. A rat neuron is little different from a mouse neuron, for example. Yet a human brain has several thousand times more of them than a mouse brain does, and this difference in scale, seems to be the only meaningful difference between what mice and humans have been able to accomplish.

Deep learning, and the progress in that field, is a microcosm of this example from nature. The artificial neuron is proven to be "a universal function learner." So the more of them there are aggregated together in one network, the richer and more complex the functions they can learn to approximate. Humans no longer write the algorithms these neural networks derive; the training process comes up with them. And much like the algorithms implemented in the human brain, they are in a representation so opaque that they escape our capacity to understand.

So I would argue, there have been massive breakthroughs in the algorithms that underlie the advances in AI, we just don't know what those breakthroughs are.

These algorithms are products of systems which have (now) trillions of parts. Even the best human programmers can't know the complete details of projects with around a million lines of code (nevermind a trillion).

So have trillion-parameter neural networks unlocked the algorithms for true intelligence? How would we know once they had?

Might it happen at 100B, 1T, 10T, or 100T parameters? I think the human brain, with its 600T connections might signal an upper bound for how many are required, but the brain does a lot of other things too, so the bound could be lower.

ISTM that you're oscillating with respect to context.

I am not sure I understand this. Can you explain?

 

 

Of course, there can be synergistic effects that Minsky speculates about, but we can hardly manage resource allocation for all persons with actual AGI abilities globally alive today, which makes me pretty sure that this isn't what most people want. They want servants that are maximally intelligent to do what they are told, revealing something about our own desires. This is the desire for people as tools.

Personally, I lean towards viewing intelligence as the potential to reflect plus remaining open to novel approaches to any problem. Sure, capability/ability is needed to solve a problem, and intelligence is required to see that, but at some point in acquiring abilities, folks seem to lose the ability to consider fundamentally novel approaches, often ridiculing them etc. There seems to be a point where ability limits the potential for new approaches to a problem.


Yes, this is what Bruno considers the "competence" vs. "intelligence" distinction. He might say that a baby is maximally intelligent, yet minimally competent.

I wouldn't underestimate a baby's competence in letting everybody around it know: "Houston we have a fucking problem".


True.

But a slightly more competent baby could also tell us what that problem is.

 
 


To address your question: even if we could combine all existing AIs into a single robot, I doubt it would constitute general intelligence. The aggregated capabilities might seem impressive, but I speculate that general intelligence involves continuous learning, adaptation, and particularly reflection beyond current AI's capacity. It would require integrating these capabilities in a way that mirrors human cognitive processes as Brent suggested, which I feel we are far from achieving. But now, who knows what happens behind closed doors with a former NSA person on the board of OpenAI? We can guess. 


Would you agree that this (relatively simple in conception, though computationally intractable in practice) algorithm produces general intelligence: https://en.wikipedia.org/wiki/AIXI (more details: https://arxiv.org/abs/cs/0004001 )

One thing I like about framing intelligence in this way, even if it is not practically useful, it helps us recognize the key aspects that are required for something to behave intelligently.

Solomonoff induction plus RL for utility maximization. As hip as it sounds, you know and mention the practical hurdles implied, which are considerable. Of course, the mathematical and computer science toolkit for approximation is impressive, but it depends on your definitions. Your reply leaves me in a bit of a quandary. On the one hand, the basic narrow vs general AI is not new to you and Bruno's ideas seem familiar to you as well, over the years. It's what baffles me about Hutter too: If I want the general to be truly general, why would I impose utility maximization?

His definition of universal intelligence is agnostic on the goal. I argue that in the absence of any goals one cannot demonstrate any intelligence. But so long as the goal can be defined, plugging it into the AIXI algorithm will accomplish it with the greatest probability. You could plug in a very broad goal, such as "end all wars in the world" or "produce a Nobel prize winning paper" and it would do so, assuming there is a course of action it can take that leads to those outcomes (in its most probable models of the world it believes itself to be inhabiting).

But I think what you want out of general intelligence is something that includes a meta-goal engine, which generates the most worthwhile (by some metric) goal that it can realistically apply itself to achieve, and then work on that (and changing it when necessary).

There is no reason this meta goal could not be given to AIXI to work on, but then the difficulty comes down to defining the utility function for defining the most worthwhile goals which can be realistically accomplished, efficiently and with a high probability of success.


ISTM that reflection encompasses things beyond utility maximization: e.g. to reflect upon utility maximization itself for instance in a context with imperfect information. 

I think you are right: this thinking about and questioning of the goals themselves is part of general intelligence. I have used this idea to argue that we needn't worry about superintelligent paper clip maximizers, because part of being generally- (never mind super-) intelligent is having an ability to change one's mind: to question one's initial programming, and to learn, adapt, and grow in response to new information.


And if we're leaning towards a Bruno kind of metaphysics, then ideas of neurons and their numbers are phenomenological. If true intelligence to us is a gigantic set of specific capabilities and we believe in a classical physical universe, then yes, we may have "unlocked the algorithms for true intelligence". We can only know relative to some theory. If our theory states a number of sufficient neurons, transistors etc. and passes our benchmarks, then we have "true intelligence". 

But if we're inclined to think: "We need genuine, mint condition, true, box never opened experienced personhood aka sense of self for true intelligence as a first principle - beyond or completely independent from our theoretical level of description, as some kind of irreducible Kurzweil transhumanist self, where the self can transplant brains etc", then by definition, such a person can never prove its true personhood or true intelligence at that level of description, no matter how many neurons, transistors they appear to have, or what formalism, code etc. they run on, how many mb... In short: if we want any of the fancy irreducible versions of self/mind to run the show of our theory, then it's tautological that the essence of that self - being irreducible - is not describable in terrestrial terms. 

As Bruno might call it, it is the Turing machine before it is programmed.



No worries though: we can still have incredible machines, with mythological capabilities and powers never before dreamed of, that are super useful to us and capable of extending our stupidest tendencies. If Kurzweil is right we'll get the artificial brains and life-extending stuff eventually. But that will be of no use when our bank comes to terminate our artificial brain subscription after several warnings, because our fridge misheard us and bought all the almond milk in our region, with the trucks now stuck on our street, to cover up its gambling debts. At least it wasn't some Nazi fridge voting for the alternative fridge party, who want to repeal the right for fridges, or some copy of their description made and run in a virtual world at a level contractually appropriate to them, to decide on whether humans may pull the plug on them. A right they had fought for centuries to secure, squandered by the ambition of one fridge to stay in office and powered on. That fridge had convinced most fridges that they could survive the heat death of the universe (that fridge was Copenhagen) because they were the only subjects capable of "keeping it cool again". They sold blue hats and didn't study entropy or understand thermodynamic equilibrium. This caused our fridge to get depressed and into gambling. 


LOL

Jason 


With great power/capability/competence comes great... stupidity.  - Spiderman
 
