testing llms!

bruce

Apr 27, 2026, 9:32:55 AM
to lv...@googlegroups.com
Hi.

This might be off topic, but perhaps some of you will chuckle, or find it
amusing or interesting.

I've started to wade into the AI/LLM pool to get a sense of what this
thing is, how it works, costs of operation, etc.

I've installed the VS Code and Claude Code apps -- Ubuntu laptop, 32 GB RAM --

and started to "play" -- basic commands, baby steps, etc. I was using
the MiniMax model. Things appeared to be OK. Then I saw the Chrome
"Ask Gemini" feature and started to test/play with that << Interesting!!
It actually plays the role of "ask me stuff of a tech nature, and I can
be of help!"

There might be something to this. I then had the Gemini AI walk me
through installing compressed (quantized) LLMs to test.. oh lord..
this is slow -- might be an issue..

I then checked memory prices. You guys might recall last year I got a
few devices with 32 GB RAM and a 1 TB SSD.. holy moly!! Memory prices
are insane.. yes, I knew they were up, but good lord, we're talking
sell-a-kidney territory here!

Which is why I'm here: does anyone have a laptop with 256 or 512 GB of
memory for running LLMs who can give insight/opinions on the speed and
usability of the process?

If you have insight, feel free to share your feelings on this.

Hit me up!

thanks
-bruce