AI getting better at MUMPS and VistA


Greg Kreis

Apr 1, 2025, 12:13:40 PM4/1/25
to Hard...@googlegroups.com
Has anyone been playing with Grok3 or Google's Gemini 2.5 Pro for MUMPS
and VistA?

What these two models know about writing VistA-aware MUMPS is bracing.
If they can write it, they can read it. In Grok's case, it can cite the
specific sections of the VistA documentation and the SAC that explain
its actions!

The prompt was to write a MUMPS routine, following the SAC standard and
using VistA libraries. The code ran....

I then asked it to swap out the $$GET1 calls for the actual piece
functions, and it did! When I asked Grok how it knew the values to
use, it said the data dictionaries!
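
For readers new to FileMan, the swap described above looks roughly like this sketch. The choice of file 200 (NEW PERSON), the ^VA(200 global, and DUZ as the record number are illustrative assumptions, not the actual routine from the prompt:

```
DEMO    ; Sketch: FileMan API call vs. direct piece read (illustrative)
        N NAME
        ; SAC-friendly way: $$GET1^DIQ(file,ien,field) - note the
        ; trailing comma on the IEN string
        S NAME=$$GET1^DIQ(200,DUZ_",",.01)
        W !,"Via $$GET1^DIQ: ",NAME
        ; Direct way: the .01 field of file 200 sits in piece 1
        ; of the record's 0 node
        S NAME=$P($G(^VA(200,DUZ,0)),"^",1)
        W !,"Via $PIECE:     ",NAME
        Q
```

The FileMan call is what the SAC encourages, since it goes through the data dictionary; the $PIECE version is the direct global read the model produced when asked to bypass it.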

--
-------------------------
Greg Kreis, President
Pioneer Data Systems, Inc
678-525-5397 (mobile)
770-509-2583

Greg Kreis

Apr 1, 2025, 12:22:04 PM4/1/25
to hard...@googlegroups.com
Right now you can use Gemini 2.5 Pro beta for free.

https://aistudio.google.com/prompts/new_chat

If someone can drop an Open Source model that comes close to what these
two can do....  look out!

Benjamin Irwin

Apr 1, 2025, 2:25:47 PM4/1/25
to Hardhats
That is interesting, Greg, and I think it explains what I have been seeing on Nancy's edu2.opensourcevista.net server.

Human users normally use the following link to access the dictionaries:  https://edu2.opensourcevista.net/vista/dictionary.php.  Then select the dictionaries from the list.

However, I am now seeing more direct access to the dictionaries using:  https://edu2.opensourcevista.net/vista/dictionary.php?fn=stan&ien=.9 for a "Standard" listing or https://edu2.opensourcevista.net/vista/dictionary.php?fn=glob&ien=.9 for a "Global" listing.

It doesn't look like the systems are indexing these listings, but rather using them as needed for specific dictionary names and numbers. It's as if the AI has learned where and how to find the information on an as-needed basis. Cool.

Kekoa

Apr 2, 2025, 2:26:00 AM4/2/25
to Hardhats
Have you evaluated any open models that are available on https://huggingface.co/ ?

I briefly tried Mistral and it was decent. I'm thinking it could be improved upon by taking the source code and building an embedding index from it. I have yet to do this, but I'm excited for the future... I'm confident someone will eventually. 😉

Greg Kreis

Apr 2, 2025, 11:22:35 AM4/2/25
to hard...@googlegroups.com

I would love to join or start a group that has the chops to fine-tune a model for MUMPS and VistA. I am very interested in AI and models, but I am weak in Linux, Python, etc., and the years have taken the edge off my motivation to deep-dive solo into new things. It would be great to have a mix of ages and skill sets.

I've tried Phi4, llama3, QwQ, Mistral, DeepSeek r1, Qwen 2.5 Coder and Gemma 3.  They have their moments, but they fall short of what I have seen from the latest large frontier models.  Surely one of these would be a suitable platform for fine-tuning, and then some of the newer RAG techniques could supplement it on specific tasks.  OpenAI said they are going to release the weights of a model in the coming months, though that is not the same as publishing under the MIT or Apache 2.0 license.

Anyone interested?  Do you have experience fine-tuning models?  Maybe we could find some graduate students willing to make this a work/study project?

--
--
http://groups.google.com/group/Hardhats
To unsubscribe, send email to Hardhats+u...@googlegroups.com

---
You received this message because you are subscribed to the Google Groups "Hardhats" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hardhats+u...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/hardhats/ca736a44-6ec5-413c-8b97-a0a4a70a0059n%40googlegroups.com.

Coty Embry

Apr 2, 2025, 12:23:57 PM4/2/25
to hard...@googlegroups.com, Hardhats
That someone will be the AI itself eventually haha ;)


On Apr 2, 2025, at 1:26 AM, Kekoa <chris....@gmail.com> wrote:

Have you evaluated any open models that are available on https://huggingface.co/ ?

Kimball Bighorse

Apr 2, 2025, 3:19:20 PM4/2/25
to Hardhats
Hi Greg,

I'm just a MUMPS noob, but I'm starting a group effort to modernize RPMS. Happy to join forces if you can put up with some novices.

warmly,

Kimball

David Blackstone

Apr 2, 2025, 3:24:48 PM4/2/25
to hard...@googlegroups.com
Hi Greg, 

Would love to help out if I can. I don't really know how to train a model per se, but I've definitely played with the tech before. Lmk :)

- David B

Greg Kreis

Apr 2, 2025, 4:09:26 PM4/2/25
to hard...@googlegroups.com

Let me think about how to organize this. Open to suggestions.

Michael Rupp

Apr 7, 2025, 10:13:17 PM4/7/25
to hard...@googlegroups.com
You can run models locally on your server using Ollama.
Ollama.com/models has LLMs for vision as well as coding. I've played with dolphin-llama3. It wasn't trained with the biases that GPT or Gemini have.

