An energy guzzler: I have worked with it. It delivers higher performance than the rest of the available models, but they will catch up soon.
Mihai Nadin
--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to
ontolog-foru...@googlegroups.com.
To view this discussion visit
https://groups.google.com/d/msgid/ontolog-forum/6c7788c7fe2549a1943b59318a7154f3%40608a7159f80942d5a7fbb9d617bb1e77.
Hi John,
I don’t listen to the tech media; it’s much more fun playing around with these tools myself. Grok 3 is proving to be a game-changer in how LLMs enhance research and analysis. It feels like the world is being challenged (or even teased) right now, navigating the tricky tension between tight vs. loose coupling of technology innovation and politics. Like many, I dislike Elon's political shenanigans.
First off, it offers much better bang for the buck than anything I’ve tested. Why? Because X.AI is offering “Deep Search” and inference (on par with the GPT family) for a fraction of the cost. In addition, it’s an Easy Button for the vision behind the Semantic Web Project. 🚀
🔗 Deep Search generating a Tesla Vehicles Registration Report
Kingsley
--
--
Regards,
Kingsley Idehen
Founder & CEO, OpenLink Software
Home Page: http://www.openlinksw.com
Community Support: https://community.openlinksw.com
LinkedIn: http://www.linkedin.com/in/kidehen
Twitter: https://twitter.com/kidehen
To view this discussion visit https://groups.google.com/d/msgid/ontolog-forum/7d70d6ab-c8d1-4895-9c3b-cb231d57fd4f%40openlinksw.com.
Humans can tell you where they got their info,
and they can answer your questions about their method of reasoning to derive those answers.
Hi John,
A hybrid system that combines LLMs with symbolic reasoning provides the best of both worlds. And it does so with just a tiny fraction of the number of Nvidia chips, or even with zero Nvidia chips. It can take advantage of a reasonable amount of LLM technology, but the most advanced and complicated reasoning methods are done much better, faster, and more precisely WITHOUT using LLMs.
Yes!
I’m still scratching my head as to why this isn’t obvious to the
very people who spend all their time analyzing market
opportunities and growth metrics. I certainly hope it's pretty
obvious to technical participants on this forum.
As already stated in many prior posts, LLMs simply introduce multimodal natural-language interaction as an additional layer atop the existing UI/UX computing stack. That's good, but it isn't the only component of AI. :)
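The hybrid pattern John describes can be illustrated with a short, self-contained sketch. Everything here is an assumption for illustration only: the toy grammar, the `parse_utterance` stub, and the two ancestor rules stand in for a real LLM front end and a real symbolic reasoner.

```python
def parse_utterance(text):
    """Stand-in for the LLM layer: maps "X is a parent of Y." to a
    (predicate, subject, object) fact. A real system would call a
    language model here instead of this toy pattern match."""
    words = text.lower().rstrip(".").split()
    return (words[3], words[0], words[-1])

# Symbolic layer: each rule is a function from a fact set to the
# set of facts it can derive from that set.
RULES = [
    # parent(x, y) implies ancestor(x, y)
    lambda facts: {("ancestor", x, y)
                   for (p, x, y) in facts if p == "parent"},
    # ancestor(x, y) and ancestor(y, z) imply ancestor(x, z)
    lambda facts: {("ancestor", x, z)
                   for (p1, x, y1) in facts if p1 == "ancestor"
                   for (p2, y2, z) in facts
                   if p2 == "ancestor" and y1 == y2},
]

def forward_chain(facts):
    """Apply all rules repeatedly until no new facts appear (fixpoint)."""
    facts = set(facts)
    while True:
        new = set().union(*(rule(facts) for rule in RULES)) - facts
        if not new:
            return facts
        facts |= new

# Natural language in, exact symbolic inference out.
utterances = ["Alice is a parent of Bob.", "Bob is a parent of Carol."]
derived = forward_chain(parse_utterance(u) for u in utterances)
```

The division of labor is the point: the language-model layer is a replaceable front end, while the rule engine does the inference deterministically, so the system can show exactly which facts and rules produced an answer, the kind of explainability the quoted passage about humans citing their sources asks for.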