Elon Musk's Grok 3


John F Sowa

Feb 21, 2025, 5:20:41 PM
to ontolo...@googlegroups.com, CG
Elon has a new version: Grok 3.

But it is based on the old idea of ever more computing power:  200,000 Nvidia chips and a new data center in Memphis, TN.  And it still suffers from the same old problems of other GPT systems:

"However, some limitations emerged during testing. Karpathy noted that the model sometimes fabricates citations and struggles with certain types of humor and ethical reasoning tasks. These challenges are common across current AI systems and highlight the ongoing difficulties in developing truly human-like artificial intelligence."  

Nadin, Mihai

Feb 21, 2025, 5:48:00 PM
to ontolo...@googlegroups.com

It's an energy guzzler. I worked with it: higher performance than the rest of the available models. The others will catch up soon.

Mihai Nadin

--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/ontolog-forum/6c7788c7fe2549a1943b59318a7154f3%40608a7159f80942d5a7fbb9d617bb1e77.

Kingsley Idehen

Feb 21, 2025, 7:32:57 PM
to ontolo...@googlegroups.com

Hi John,

I don’t listen to the tech media—it’s much more fun playing around myself. Grok 3 is proving to be a game-changer in how LLMs enhance research and analysis. It feels like the world is being challenged (or even teased) right now, navigating the tricky tension between tight vs. loose coupling of technology innovation and politics.  Like many, I dislike Elon's political shenanigans.

First off, it's much better bang for the buck than anything I've tested. Why? Because X.AI is offering "Deep Search" and inference (on par with the GPT family) for a fraction of the cost. In addition, it's an Easy Button for the vision behind the Semantic Web Project; a minimal sketch of that idea follows the demo links below. 🚀


Demo 1: Grok 3 + Semantic Web

 

Demo 2: Grok 3 + Semantic Web Notetaking

 

See Also

🔗 Deep Search generating a Tesla Vehicles Registration Report 
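
To make the "Easy Button" point concrete, here is a minimal sketch of the pattern behind those demos, assuming nothing about xAI's actual API: ask the model for RDF in Turtle syntax, then validate and query the result locally with rdflib. The grok_complete() stub, the prompt, and the canned reply are illustrative placeholders, not Grok's real interface or output.

# Sketch only: an LLM as a natural-language front end to RDF.
# grok_complete() is a placeholder, NOT xAI's API; its canned reply stands in
# for the kind of Turtle such a prompt might return.
from rdflib import Graph

def grok_complete(prompt: str) -> str:
    return """
    @prefix schema: <http://schema.org/> .
    <urn:note:1> a schema:Event ;
        schema:name "Ontology draft discussion" ;
        schema:startDate "2025-02-21" ;
        schema:attendee "Alice" .
    """

prompt = ("Summarize this note as RDF in Turtle, using schema.org terms: "
          "Met Alice on 2025-02-21 to discuss the ontology draft.")

g = Graph()
g.parse(data=grok_complete(prompt), format="turtle")  # a parse error means: reject and re-prompt

for row in g.query("SELECT ?p ?o WHERE { <urn:note:1> ?p ?o }"):
    print(row.p, row.o)

Once the reply parses, the note is ordinary linked data, and any SPARQL engine or reasoner can take it from there.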

Kingsley


-- 
Regards,

Kingsley Idehen	      
Founder & CEO 
OpenLink Software   
Home Page: http://www.openlinksw.com
Community Support: https://community.openlinksw.com

Social Media:
LinkedIn: http://www.linkedin.com/in/kidehen
Twitter : https://twitter.com/kidehen


Mark Underwood

Feb 21, 2025, 9:57:24 PM
to ontolo...@googlegroups.com
Kingsley,

Agree with your characterization.

I spend most of my time with Gemini for practical reasons, and I must grudgingly admit that Grok is better at mimicking the tracking and integration of disparate threads in a "conversation."  I asked it to integrate "coincidentia oppositorum" notions into a short story I wrote.  Both platforms produced useful results, but Grok's was a bit more extensive.  Both have limits, though, after which you have to prompt them to return to a dropped topic.

Now, why was I writing this email? . . .

-Mark
Also, Grok is less of a prude in the current releases.



--
Mark Underwood 
Linkedin  


Alex Shkotin

Feb 22, 2025, 3:37:16 AM
to ontolo...@googlegroups.com
Kingsley,

I agree about playing and testing. I have been using all these genAI systems as a kind of talking encyclopedia, and I use only the free (non-paid) versions.
Grok 3 looks very good; it seems better than Gemini, which is also good.
And Grok 3 can paint:
[image: a painting generated by Grok 3]
So we have fantastic tools🏋️

Alex


Alex Shkotin

Feb 22, 2025, 5:33:18 AM
to ontolo...@googlegroups.com
IN ADDITION: I discussed one of my favorite ideas with Grok 3, and at the end of phase 1 I got this from it: "Sure, think about it at your own discretion! If you have any ideas or questions, we'll be happy to continue the discussion. While you're thinking, I can offer a small summary: we've laid the foundation for a model where the electron moves along non-differentiable trajectories, and a Lagrangian with a geometrically dependent potential and stochastic noise explains the differences between one and two slits. This is an interesting bridge between classical stochastics and quantum effects. I look forward to your thoughts!"
At least it is cooperative 💡

Alex


John F Sowa

Feb 22, 2025, 2:54:51 PM
to Andras Kornai, ontolo...@googlegroups.com, CG
Andras,

I agree that Elon's new system is a big improvement over earlier systems of its kind.  But note what you said below:

AK:  Yes, they all need big iron, AI is still in the "make it work" stage. Yes, they still hallucinate (and this will not be easy to get rid of, as humans do too).

That is the point of the talk that Arun and I will present on Wednesday:  Our Permion system is a hybrid of LLM technology with symbolic AI.  And it is a MAJOR improvement over "big iron".    It detects and ELIMINATES hallucinations, and it produces reliable results that have precise citations of sources. 

With that huge amount of big iron, Elon's system still generates false citations of its sources.  That means it's impossible to use it to detect the source of accidents, disasters, crimes, hackers, or brilliant achievements.  If and when it produces a brilliant answer to a question, it cannot tell you what sources it used or how and why it combined information from those sources to produce its answers.  Permion can do that with a tiny fraction of the amount of iron.  (But it can use more, if available.)

Humans can tell you where they got their info, and they can answer your questions about their method of reasoning to derive those answers.  In that regard, our old VivoMind system from 2000 to 2010 could do reasoning with the precision that Elon's system CANNOT produce today.  And even if he could double his 200,000 Nvidia chips, Elon still could not guarantee the precision that VivoMind produced in 2010.

For a summary of the old VivoMind system with examples of what it could do, see https://jfsowa.com/talks/cogmem.pdf .

Our new Permion Inc. system is a major upgrade of the VivoMind system from 2000 to 2010.  You can skip the first 44 slides, which show how the VivoMind Cognitive Memory system works.  The slides from 45 to 64 show three applications that no LLM-based system can do today.  That system could run on a laptop, but it scales linearly in performance with the speed and number of CPUs available.

With the addition of LLMs, the symbolic power of Permion can do everything that VivoMind could do and do it better and faster.  But it can also do the kinds of things that big iron systems do with a tiny  fraction of the amount of iron.  If more iron is available, it can use it.  

My recommendation:  Sell any Nvidia stock you (or anybody else) may own.

John
 


From: "Andras Kornai" <kor...@ilab.sztaki.hu>

John,

[without condoning Musk's practices in the larger world] I think this is missing the point, which is catching up to the state of the art from zero in less than two years. Compare this to the European Union, which is still incapable of fielding a SOTA system (Mistral, in spite of its laudable goals, is not quite there yet, still playing catch-up). Yes, they all need big iron; AI is still in the "make it work" stage. Yes, they still hallucinate (and this will not be easy to get rid of, as humans do too). But clearly xAI has organized a large enough group of bespoke engineers and given them enough hardware to do this, whereas the EU is structurally incapable of doing so, spending all its energy on wordsmithing resolution after resolution.

The EU is vastly better resourced than Musk. But it is a captive of a smooth-talking bureaucracy (I specifically blame CAIRNE, formerly known as CLAIRE).

Andras

deddy

Feb 22, 2025, 2:58:32 PM
to ontolo...@googlegroups.com
Andras -

>
> AK: Yes, they all need big iron,
>

In your context, what is the meaning of "big iron"?

______________________
David Eddy

Search behind the z/OS firewall.



Dan Brickley

Feb 22, 2025, 3:26:16 PM
to ontolo...@googlegroups.com, Andras Kornai, CG
On Sat, Feb 22, 2025 at 19:54 John F Sowa <so...@bestweb.net> wrote:

Humans can tell you where they got their info,

Not me!

and they can answer your questions about their method of reasoning to derive those answers. 

I can’t! (for 99.x% of my knowledge of the world)

Dan

John F Sowa

Feb 23, 2025, 10:45:09 PM
to Andras Kornai, CG, ontolo...@googlegroups.com
Andras,

Did you look at the slides I cited for our system from 2010?  That system could run on a laptop with an attached drive that would fit in your pocket.  When run on a larger server, its speed would scale linearly with the number of CPUs in the server.

AK:   the devil is in the acquisition of rules and representations. MuZero can learn these, but not without very significant hardware investment (especially in environments where self-play makes no sense) so selling NVIDIA stock appears premature.

But they are using LLMs to acquire rules and representations.  That is NOT what we do.

Please reread the cogmem.pdf slides cited below.  That system does NOT use LLMs to acquire rules and representations.  It is much, much more efficient to acquire rules and representations by the methods discussed in those slides (and further citations for more detail).  Then look at the three examples starting at slide 44.

There is no LLM-based system available today that could do those three applications.  They require precise symbolic methods.  LLM-based methods are of ZERO value for those applications.

A hybrid system that combines LLMs with symbolic reasoning provides the best of both worlds.  And it does so with just a tiny fraction of the number of Nvidia chips -- or even with 0 Nvidia chips.  It can take advantage of a reasonable amount of LLM technology, but the most advanced and complicated reasoning methods are done much better, faster, and more precisely WITHOUT using LLMs.

I am not saying that a reasonable number of Nvidia chips would be useless.  But I am saying that 200,000 chips is a terrible waste of hardware, electricity, and cooling water.  When you have symbolic AI to do the precise reasoning, a modest number of Nvidia chips can provide enough power for translating languages (natural, symbolic, diagrammatic, and perceptual in multiple dimensions).

In short, use the Nvidia chips for what they do best:  translating languages of any kind.  Then use the symbolic reasoning for what it does best:  precise symbolic reasoning.  For that, a laptop can outperform Elon Musk's behemoth.
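
To illustrate that division of labor in the simplest possible terms -- this is a generic sketch, not Permion or VivoMind, and the LLM step is stubbed out -- the language model only translates text into symbolic facts, while a tiny rule engine does the inference, so every conclusion is traceable to an explicit rule:

# Generic hybrid sketch (NOT Permion or VivoMind): the LLM translates
# language into triples; a small symbolic engine does the reasoning.

def llm_extract_facts(text: str):
    # Stand-in for the translation step an LLM would handle; returns
    # hand-coded triples so the sketch is self-contained.
    return {("Socrates", "is_a", "human")}

# Rule: if (?x, is_a, human) then (?x, is_a, mortal)
RULES = [
    (("?x", "is_a", "human"), ("?x", "is_a", "mortal")),
]

def forward_chain(facts, rules):
    # Tiny forward-chaining loop: every derived fact comes from an explicit
    # rule, so the chain of reasoning can always be reported.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (_, pp, po), (_, cp, co) in rules:
            for (s, p, o) in list(derived):
                if p == pp and o == po:
                    new_fact = (s, cp, co)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(forward_chain(llm_extract_facts("Socrates is a human."), RULES))
# {('Socrates', 'is_a', 'human'), ('Socrates', 'is_a', 'mortal')}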

John
 


From: "Andras Kornai" <kor...@ilab.sztaki.hu>

John,

I am completely on board with the idea that a symbol-manipulation system can be both more reliable and less hardware-intense, by orders of magnitude. But as we have all learned in GOFAI, the devil is in the acquisition of rules and representations. MuZero can learn these, but not without very significant hardware investment (especially in environments where self-play makes no sense) so selling NVIDIA stock appears premature.

Andras



Kingsley Idehen

Feb 24, 2025, 3:05:48 PM
to ontolo...@googlegroups.com

Hi John,

On 2/23/25 10:44 PM, John F Sowa wrote:
A hybrid system that combines LLMs with symbolic reasoning provides the best of both worlds.  And it does so with just a tiny fraction of the number of Nvidia chips -- or even with 0 Nvidia chips.  It can take advantage of a reasonable amount of LLM technology, but the most advanced and complicated reasoning methods are done much better, faster, and more precisely WITHOUT using LLMs.


Yes!


The strange part is that this should-be-obvious fact is getting lost in the current LLM noise. Give Silicon Valley and Wall Street a reason to burn investor money, and they’ll jump at it—reserving their skepticism for alternative approaches that actually work!

I’m still scratching my head as to why this isn’t obvious to the very people who spend all their time analyzing market opportunities and growth metrics. I certainly hope it's pretty obvious to technical participants on this forum.

As already stated in many prior posts, LLMs simply introduce multimodal natural-language interaction as an additional layer to the existing UI/UX computing stack. That's good, but it isn't the only component of AI :)

John F Sowa

Feb 24, 2025, 7:54:05 PM
to ontolo...@googlegroups.com, CG
Dan,

I certainly agree that 99.x% of our knowledge of the world is based on an integration of a lifetime of experience.  There is no way that any of us can say exactly where we first acquired the huge number of ideas that are thoroughly built into our view of life and the world.  But we can often associate different kinds of knowledge with different periods of our lives, and with different kinds of people.

For example, if you name any kind of food, I can tell you whether I first encountered it as a child at home, at the home of some friend or relative, at a restaurant, at school, in my garden, while traveling away from home, in what country or kind of restaurant, etc.

There are many kinds of things I can say I learned from my father, my mother,  my grandmother, friends, relatives, schools, etc.  I might not be able to pin down many specific items, but I can classify many kinds of things I first discovered in what kinds of place, time of life, with what kinds of people, etc.

And if these are recent things, I can say whether I got them from TV, from reading, from email, or from browsing, and often from exactly which person or publication.  When the source is important, I remember it.  But even if I don't remember the exact individual, I usually remember enough that I can find the source with a bit of computer searching.

I certainly have much more background knowledge than I can get from ChatGPT.  And I have downloaded a lot of information onto my computer, and I can find or search for its origin quite easily.  But ChatGPT is one of the few computer systems that cannot tell me anything about where it got the information it uses.

That is definitely not a good feature.  It's a very strong reason for using hybrid systems for reasoning.  LLMs are good for translating different languages and formats.  But sources are extremely important, and computer systems should be able to keep or find info about sources.  It's helpful to know whether some item came from Vladimir Putin or the FBI.  (My belief in those two is the opposite of Trump's.)
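
Here is a minimal illustration of what "keeping info about sources" means in practice; the names are my own, not taken from any particular system.  Every stored statement carries its source, so every answer can report its provenance:

# Sketch: store the source next to every statement, so answers carry provenance.
from dataclasses import dataclass

@dataclass(frozen=True)
class Statement:
    subject: str
    predicate: str
    obj: str
    source: str   # URL, document ID, or person the statement came from

store = [
    Statement("Grok 3", "runs_on", "about 200,000 Nvidia chips",
              source="press reports on the Memphis, TN data center"),
]

def answer(predicate: str):
    # Return matching statements together with where they came from.
    return [(s.obj, s.source) for s in store if s.predicate == predicate]

print(answer("runs_on"))
# [('about 200,000 Nvidia chips', 'press reports on the Memphis, TN data center')]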

Symbolic AI is far better than humans in this regard, and LLMs are far worse than humans.  That is not a good point in favor of LLMs as the primary source of intelligence.  They are certainly useful for what they do.  But much more is necessary.

John
 


From: "Dan Brickley' via ontolog-forum" <ontolo...@googlegroups.com>
