Compatible Linux models


Julius Gunter

Oct 1, 2025, 10:56:16 PM
to uc...@googlegroups.com
If you know someone who isn't using Linux but is thinking of trying it, and is asking questions about hardware for Linux, you may want to recommend this YouTube video.

Rob Braxman
"Find a Compatible Linux Computer for $200" 

Julius Gunter 

Arnold Silvernail

Oct 2, 2025, 8:55:48 AM
to uc...@googlegroups.com
No link :(

--
You received this message because you are subscribed to the Google Groups "Upstate Carolina Linux Users Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to uclug+un...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/uclug/CADE1exD3zx_txmisdVm%3DuH9N%3DKO4rr_9_8cHRzku_k1M5dhKiA%40mail.gmail.com.

Bill Jacqmein

Oct 2, 2025, 2:29:42 PM
to uc...@googlegroups.com
https://www.youtube.com/watch?v=Zt9gH34Zw2Q <- part 1

I do fall into the camp of wanting my AI local, particularly if
it will hear what I hear and see what I see...

Bill Jacqmein

Oct 2, 2025, 2:33:16 PM
to uc...@googlegroups.com
I don't think I have had issues with Linux hardware support for at
least 20 years, since I stopped trying to use DECnet Ethernet cards
and X Windows autoconfiguration (Mepis is the first distro I can
remember doing this) saved me from myself.

George Law

Oct 2, 2025, 9:15:04 PM
to uc...@googlegroups.com

I think Bill replied with the link - https://www.youtube.com/watch?v=Zt9gH34Zw2Q

Bill said something about hosting his own AI

I tried that with a couple of different containers, but I think you need a higher-end system than $200 is going to get you :)
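That "$200 isn't enough" intuition can be sanity-checked with some back-of-the-envelope math. The figures below (0.5 bytes per parameter for 4-bit quantization, 20% overhead for the KV cache and runtime buffers) are rough rules of thumb of mine, not numbers from the thread or the video:

```python
# Back-of-the-envelope memory estimate for hosting an LLM locally.
# Weights take roughly params * bytes_per_param; the 1.2x overhead
# factor (KV cache, runtime buffers) is a rough guess.

def memory_estimate_gb(params_billion: float,
                       bytes_per_param: float = 0.5,
                       overhead: float = 1.2) -> float:
    """Rough GB of RAM/VRAM needed; 0.5 bytes/param ~ 4-bit quantization."""
    return params_billion * bytes_per_param * overhead

# A 12B model at 4-bit quantization needs roughly 7 GB:
print(round(memory_estimate_gb(12), 1))  # 7.2
```

Even quantized, a 12B model wants around 7 GB of VRAM, which rules out the cheap GPUs you'd find in a $200 machine; CPU-only inference in system RAM works, but much more slowly.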

Arnold Silvernail

Oct 3, 2025, 7:31:25 AM
to uc...@googlegroups.com
Thanks Bill..........thanks George :)


Bill Jacqmein

Oct 3, 2025, 10:28:16 AM
to uc...@googlegroups.com
For a local LLM, the easy cost is $1,800 (two $900-ish Nvidia cards),
plus a power supply to drive those cards and a CPU and memory that can
keep up with them, if you want really stellar performance.
NetworkChuck had a good overview (pre-50-series release) -
https://www.youtube.com/watch?v=Wjrdr0NU4Sk - and it was a year ago,
so about 20 years in AI time.

I have a Radeon Instinct -
https://www.reddit.com/r/LocalLLaMA/comments/1b5ie1t/interesting_cheap_gpu_option_instinct_mi50/
- and I'm learning the shortcomings of ROCm (or of my understanding of
it... probably a combo) while trying to replicate something similar
without buying a used car's worth of parts :)

Some of the newer processors (mainly AMD from what I have read, and I
think Intel is getting in there too) have AI co-processor
capabilities.

https://semiengineering.com/the-rise-of-ai-co-processors/


Crow

Oct 3, 2025, 11:13:11 AM
to Upstate Carolina Linux Users Group
That comment got me interested, so I'm pulling a laptop-focused model (gemma3n:e4b) on one of my old ThinkPads to see how it performs. From just a quick test it's running okayish on my i7-8650U (no GPU on this laptop). Definitely slower than my desktop. The gemma3n:e2b was much more usable speed-wise.

I benched the laptop with both of the small 3n models, and then benched my desktop with the usual model I run (gemma3:12b) and the largest of the 3n models. Here are those reports.

# Laptop
-------Linux----------

No NVIDIA GPU detected.
rocminfo failed: [Errno 2] No such file or directory: 'rocminfo'
Total memory size : 15.49 GB
cpu_info: Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
gpu_info: no_gpu
os_version: "NixOS 25.05 (Warbler)"
ollama_version: 0.11.10
----------
running custom benchmark from models_file_path: benchmark.yaml
Disabling sendinfo for custom benchmark
LLM models file path:benchmark.yaml
Checking and pulling the following LLM models
gemma3n:e4b
gemma3n:e2b
----------
Running custom-model
model_name =    gemma3n:e4b
prompt = Summarize the key differences between classical and operant conditioning in psychology.
eval rate:            7.11 tokens/s
prompt = Translate the following English paragraph into Chinese and elaborate more -> Artificial intelligence is transforming various industries by enhancing efficiency and enabling new capabilities.
eval rate:            6.68 tokens/s
prompt = What are the main causes of the American Civil War?
eval rate:            6.63 tokens/s
prompt = How does photosynthesis contribute to the carbon cycle?
eval rate:            6.64 tokens/s
prompt = Develop a python function that solves the following problem, sudoku game.
eval rate:            6.56 tokens/s
--------------------
Average of eval rate:  6.724  tokens/s
----------------------------------------
model_name =    gemma3n:e2b
prompt = Summarize the key differences between classical and operant conditioning in psychology.
eval rate:            11.24 tokens/s
prompt = Translate the following English paragraph into Chinese and elaborate more -> Artificial intelligence is transforming various industries by enhancing efficiency and enabling new capabilities.
eval rate:            11.23 tokens/s
prompt = What are the main causes of the American Civil War?
eval rate:            11.24 tokens/s
prompt = How does photosynthesis contribute to the carbon cycle?
eval rate:            11.25 tokens/s
prompt = Develop a python function that solves the following problem, sudoku game.
eval rate:            11.12 tokens/s
--------------------
Average of eval rate:  11.216  tokens/s
----------------------------------------

# Desktop
-------Linux----------

No NVIDIA GPU detected.
Total memory size : 30.46 GB
cpu_info: AMD Ryzen 5 7600 6-Core Processor
gpu_info: AMD Ryzen 5 7600 6-Core Processor
AMD Radeon RX 7800 XT
AMD Radeon Graphics
os_version: "NixOS 25.05 (Warbler)"
ollama_version: 0.11.10
----------
running custom benchmark from models_file_path: benchmark.yaml
Disabling sendinfo for custom benchmark
LLM models file path:benchmark.yaml
Checking and pulling the following LLM models
gemma3:12b
gemma3n:e4b
----------
Running custom-model
model_name =    gemma3:12b
prompt = Summarize the key differences between classical and operant conditioning in psychology.
eval rate:            35.54 tokens/s
prompt = Translate the following English paragraph into Chinese and elaborate more -> Artificial intelligence is transforming various industries by enhancing efficiency and enabling new capabilities.
eval rate:            34.07 tokens/s
prompt = What are the main causes of the American Civil War?
eval rate:            33.89 tokens/s
prompt = How does photosynthesis contribute to the carbon cycle?
eval rate:            36.10 tokens/s
prompt = Develop a python function that solves the following problem, sudoku game.
eval rate:            33.89 tokens/s
--------------------
Average of eval rate:  34.698  tokens/s
----------------------------------------
model_name =    gemma3n:e4b
prompt = Summarize the key differences between classical and operant conditioning in psychology.
eval rate:            46.62 tokens/s
prompt = Translate the following English paragraph into Chinese and elaborate more -> Artificial intelligence is transforming various industries by enhancing efficiency and enabling new capabilities.
eval rate:            46.55 tokens/s
prompt = What are the main causes of the American Civil War?
eval rate:            46.42 tokens/s
prompt = How does photosynthesis contribute to the carbon cycle?
eval rate:            46.14 tokens/s
prompt = Develop a python function that solves the following problem, sudoku game.
eval rate:            45.68 tokens/s
--------------------
Average of eval rate:  46.282  tokens/s
----------------------------------------
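For anyone curious how the "Average of eval rate" summary lines come out, the arithmetic is just a mean over the per-prompt numbers. A minimal sketch that pulls those rates out of a report like the ones above (the regex is mine, not the benchmark tool's actual code):

```python
import re

# Extract the per-prompt "eval rate: N tokens/s" figures from a
# benchmark report and average them, like the summary line at the
# end of each run above.
def average_eval_rate(report: str) -> float:
    rates = [float(m) for m in
             re.findall(r"eval rate:\s+([\d.]+) tokens/s", report)]
    return sum(rates) / len(rates)

# The laptop's gemma3n:e4b run from the report above:
report = """\
eval rate:            7.11 tokens/s
eval rate:            6.68 tokens/s
eval rate:            6.63 tokens/s
eval rate:            6.64 tokens/s
eval rate:            6.56 tokens/s
"""
print(round(average_eval_rate(report), 3))  # 6.724, matching the summary line
```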

Monty Craig

Oct 6, 2025, 5:20:02 PM
to uc...@googlegroups.com
Rob Braxman is a great channel! 
