Cheap/free server wanted


Jack Coats

Mar 5, 2026, 10:09:24 PM (2 days ago) Mar 5
to NLUG
I don't have a Linux machine anymore.  I want to set up an OpenClaw machine with Ollama and Postgres, probably all running in Docker.

Suggestions?  This is a toy, not for real work.

Anyone else done this?

Distribution suggestions appreciated.
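For anyone sketching this out, the stack described above (Ollama plus Postgres in Docker) can be stood up with plain `docker run` commands. This is a minimal, untested outline using the official `ollama/ollama` and `postgres` images; the model name, password, and image tags are placeholders, not recommendations:

```shell
# Ollama, with its model cache in a named volume so pulls survive restarts:
docker run -d --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama

# Pull a small model to start with (a 1-3B model runs on modest hardware):
docker exec ollama ollama pull llama3.2:1b

# Postgres, also with persistent storage (password is a placeholder):
docker run -d --name pg \
  -e POSTGRES_PASSWORD=changeme \
  -v pgdata:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:16
```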

><> ... Jack

If you are not paying for something, you are not a consumer, you are the product. - Chamath Palihapitiya
"Tell me and I forget. Teach me and I remember. Involve me and I learn." - Ben Franklin
Tesla Referral code: "https://www.tesla.com/referral/jack84455" - save money on a Tesla

Kent Perrier

Mar 6, 2026, 9:26:10 AM (20 hours ago) Mar 6
to nlug...@googlegroups.com
Cheap/free and AI workloads do not mix.

--
You received this message because you are subscribed to the Google Groups "NLUG" group.
To post to this group, send email to nlug...@googlegroups.com
To unsubscribe from this group, send email to nlug-talk+...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/nlug-talk?hl=en


Tommy Kelly

Mar 6, 2026, 10:48:44 AM (19 hours ago) Mar 6
to nlug...@googlegroups.com
Yeah, I agree. Expect to spend at least $2k for what you described if you want to run a 34B model or the 120B ChatGPT MoE model. The video card to run that alone is around that price, before the machine you put it in.

Jack Coats

Mar 6, 2026, 12:25:38 PM (17 hours ago) Mar 6
to nlug...@googlegroups.com
I agree, they don't mix, but if you don't ask, you don't find a nugget in the wild.

I'm planning on running it headless (after setup), but plans are made to be changed.  To start, I might buy some tokens from a vendor, but I'm hoping to run at least small things locally.  I know even a Raspberry Pi can run OpenClaw, but with no real 'performance'.  This is just personal training for me (a retiree who does it for entertainment and to keep my mind busy).

I know I should just do a cloud-based version, but I have an inherent dislike of subscription-based services (not that I don't have several; yes, I'm an oxymoron personified <<grin>>).

Thanks for the feedback. ... Jack




Tommy Kelly

Mar 6, 2026, 1:17:11 PM (16 hours ago) Mar 6
to nlug...@googlegroups.com
Ahh, okay. If you're just using it for messing around and training, you could try a Google Coral dongle on a Raspberry Pi.
I have no idea how good it is, and I doubt it will power any Ollama model worth running for general text over 1B parameters, but if you've got a small use case and want to test training on a ~1B model or so, it could fit.

You're right, though: RunPod time is pretty cheap to experiment with, and you just buy time. It's not a subscription.

Good luck, and I hope that helps!

Jack Coats

Mar 6, 2026, 1:30:44 PM (16 hours ago) Mar 6
to nlug...@googlegroups.com