M5 & M6


A J

Jan 5, 2025, 6:46:37 PM
to HomeBrew Robotics Club
Hi Team,

It looks like interesting times are ahead for robots before 2030.

The progression of Apple chips could push TOPS for tablets and laptops from 38 on the M4 to roughly 70-80 on the M5 and 120-160 on the M6.

The newer CPUs have AVX, an NPU, and an FPU, while some also include an integrated GPU.

So the next 3-5 years will give us lots of compute power for our Bots.


How does AI scale in a small Bot from 50 to 500 TOPS?




Chris Albertson

Jan 6, 2025, 12:26:53 PM
to hbrob...@googlegroups.com
I’ve run a few of the well-known models on my Apple M2-powered Mac mini. I can get better than real-time performance.
An M2 Pro with 16 GB RAM seems to perform about like a mid-range Nvidia GPU, but it probably costs less and certainly uses a lot less power.

I would say that running a capable LLM on local, affordable hardware is already a solved problem.

What I’ve not yet worked out is how my “hello world” robot would work. My test case: I say “Robbie, pick up the green cube and place it in the cup,” and the robot does as told.

The missing link (for me) is the connection between the output of the LLM and a conventional motion planning system like MoveIt.
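One common pattern for bridging that gap (a sketch, not a claim about how anyone's robot actually does it): constrain the LLM to emit structured JSON rather than free text, then translate each JSON command into motion-planner goals. Everything below is hypothetical, including the JSON schema and the dispatch() helper; a real system would turn each branch into MoveIt planning requests over ROS.

```python
# Sketch: spoken command -> LLM -> structured JSON -> planner goals.
# The schema and dispatch() are hypothetical placeholders; a real robot
# would replace the returned strings with MoveIt planning calls.
import json

# Imagine the LLM was prompted to answer ONLY with JSON of the form
# {action, object, destination}, and produced this for the spoken
# command "Robbie pick up the green cube and place it in the cup":
llm_output = '{"action": "pick_and_place", "object": "green cube", "destination": "cup"}'

def dispatch(command_json: str) -> str:
    """Translate one LLM command into (placeholder) planner goals."""
    cmd = json.loads(command_json)
    if cmd["action"] == "pick_and_place":
        # Real version: look up the object's pose from perception, then
        # issue a grasp plan and a place plan through MoveIt's interface.
        return f"plan grasp of {cmd['object']}, then place in {cmd['destination']}"
    raise ValueError(f"unknown action: {cmd['action']}")

print(dispatch(llm_output))
```

The key design choice is that the LLM never talks to the planner directly: it only fills in a small, validated vocabulary of actions, so a hallucinated command fails loudly at the dispatcher instead of moving the arm.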

As for how best to run an LLM on the M2-based Mini, Google “llama.cpp”. This software will run most models on common hardware. 3-billion-parameter models are good enough for the kind of conversation you might have with a domestic robot.


