AI getting better at MUMPS and VistA


Greg Kreis

Apr 1, 2025, 12:13:40 PM
to Hard...@googlegroups.com
Has anyone been playing with Grok3 or Google's Gemini 2.5 Pro for MUMPS
and VistA?

What these two models know about writing VistA-aware MUMPS is bracing.
If they can write it, they can read it. In Grok's case, it can cite the
specific sections of the VistA documentation and the SAC that explain
its actions!

The prompt was to write a MUMPS routine following the SAC standard and
using VistA libraries. The code ran....

I then asked it to swap out the $$GET1 calls for the actual piece
functions, and it did it! When I asked Grok how it knew which values to
use, it said the data dictionaries!
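For anyone who hasn't seen the swap in question, here is a minimal sketch of the two equivalent forms (my own illustration of the idea, not Grok's actual output) against the PATIENT file (#2), where .01 NAME and .03 DATE OF BIRTH sit in pieces 1 and 3 of the zero node of ^DPT:

```
EXFLDS ; illustration only - FileMan API vs. direct global access
 N DFN,NAME,DOB
 S DFN=1 ; an assumed patient IEN
 ; FileMan Database Server form: external values via $$GET1^DIQ
 S NAME=$$GET1^DIQ(2,DFN_",",.01)
 S DOB=$$GET1^DIQ(2,DFN_",",.03)
 ; direct-global form: NAME is piece 1, DOB is piece 3 of ^DPT(DFN,0)
 S NAME=$P($G(^DPT(DFN,0)),"^",1)
 S DOB=$P($G(^DPT(DFN,0)),"^",3) ; internal FileMan date, not external
 Q
```

Note the forms are not quite interchangeable: the direct read returns the internal value, while $$GET1^DIQ returns the external (formatted) value, which is part of why the SAC steers application code toward the API.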

--
-------------------------
Greg Kreis, President
Pioneer Data Systems, Inc
678-525-5397 (mobile)
770-509-2583

Greg Kreis

Apr 1, 2025, 12:22:04 PM
to hard...@googlegroups.com
Right now you can use Gemini 2.5 Pro beta for free.

https://aistudio.google.com/prompts/new_chat

If someone drops an open-source model that comes close to what these
two can do... look out!

Benjamin Irwin

Apr 1, 2025, 2:25:47 PM
to Hardhats
That is interesting, Greg, and I think it explains what I have been seeing on Nancy's edu2.opensourcevista.net server.

Human users normally access the dictionaries through the following link: https://edu2.opensourcevista.net/vista/dictionary.php, then select a dictionary from the list.

However, I am now seeing more direct access to the dictionaries using:  https://edu2.opensourcevista.net/vista/dictionary.php?fn=stan&ien=.9 for a "Standard" listing or https://edu2.opensourcevista.net/vista/dictionary.php?fn=glob&ien=.9 for a "Global" listing.

It doesn't look like the systems are indexing these listings; rather, they are fetching them as needed for specific dictionary names and numbers. It's as if the AI has learned where and how to find the information on an as-needed basis. Cool.

Kekoa

Apr 2, 2025, 2:26:00 AM
to Hardhats
Have you evaluated any open models that are available on https://huggingface.co/ ?

I briefly tried Mistral and it was decent. I'm thinking it could be improved upon by taking the source code and building up an embedding model. I have yet to do this but I'm excited for the future... I'm confident someone will eventually. 😉

Greg Kreis

Apr 2, 2025, 11:22:35 AM
to hard...@googlegroups.com

I would love to join or start a group that has the chops to fine-tune a model for MUMPS and VistA. I am very interested in AI and models, but I am weak in Linux, Python, etc., and the years have taken the edge off my motivation to solo deep-dive into new things. It would be great to have a mix of ages and skill sets.

I've tried Phi-4, Llama 3, QwQ, Mistral, DeepSeek R1, Qwen 2.5 Coder, and Gemma 3.  They have their moments, but they fall short of what I have seen from the latest large frontier models.  Surely one of these would be a suitable platform for fine-tuning, and some of the newer RAG techniques could then supplement it on specific tasks.  OpenAI said they are going to release the weights of a model in the coming months, though that is not the same as publishing under the MIT or Apache 2.0 license.

Anyone interested?  Do you have experience fine-tuning models? Maybe we could find some graduate students willing to make this a work/study project?

--
--
http://groups.google.com/group/Hardhats
To unsubscribe, send email to Hardhats+u...@googlegroups.com

---
You received this message because you are subscribed to the Google Groups "Hardhats" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hardhats+u...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/hardhats/ca736a44-6ec5-413c-8b97-a0a4a70a0059n%40googlegroups.com.

Coty Embry

Apr 2, 2025, 12:23:57 PM
to hard...@googlegroups.com, Hardhats
That someone will be the AI itself eventually haha ;)


Kimball Bighorse

Apr 2, 2025, 3:19:20 PM
to Hardhats
Hi Greg,

I'm just a MUMPS noob, but I'm starting a group effort to modernize RPMS. Happy to join forces if you can put up with some novices.

warmly,

Kimball

David Blackstone

Apr 2, 2025, 3:24:48 PM
to hard...@googlegroups.com
Hi Greg, 

Would love to help out if I can. I don't really know how to train a model per se, but I've definitely played with the tech before. Lmk :)

- David B

Greg Kreis

Apr 2, 2025, 4:09:26 PM
to hard...@googlegroups.com

Let me think about how to organize this. Open to suggestions.

Michael Rupp

Apr 7, 2025, 10:13:17 PM
to hard...@googlegroups.com
You can run models locally on your server using Ollama.
Ollama.com/models lists LLMs for vision as well as coding. I've played with dolphin-llama3; it wasn't trained with the biases that GPT or Gemini have.


rrichards

Apr 17, 2026, 9:13:15 AM
to Hardhats
Hi Greg,

I have been using Claude and ChatGPT extensively to analyze the VistA documentation, and I am now very comfortable managing VistA artifacts based on transparent, deterministic reference to the actual 8,300-plus documents, which are now in an LLM-friendly form: Markdown with all the metadata in the front matter.
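To make that concrete, the front matter of a converted document might look something like the following (the field names here are my own guess at a reasonable schema, not the actual one in use):

```yaml
---
title: "PIMS V. 5.3 Technical Manual"   # hypothetical example document
package: "Registration"
doc-type: "technical-manual"
source-format: "pdf"
vista-version: "5.3"
---
```

Keeping the metadata structured like this lets an LLM (or a retrieval layer in front of it) filter by package, document type, or version before it ever reads the body text.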

A key to managing VistA artifacts with LLMs is putting the material in a form the model can consume and always reference as an authoritative source of truth, rather than letting it redevelop heuristics from raw data or code.

As a starting point, I would be most interested in putting all of the FileMan data dictionaries, packages, and other things that are hoovered up by XINDEX and other tools into a form that is best consumed by an LLM.

My weakness is not knowing the VistA MUMPS tool chain, but I'm very facile with using raw data with LLMs.

I'd love to team up with you and Benjamin and all the other Hardhats in extracting this metadata out of VistA and then refining it in a pipeline that I can share on GitHub for everyone to use for refining VistA artifacts.

Rafael 

Sam Habiel

Apr 20, 2026, 10:53:39 AM
to hard...@googlegroups.com
Rafael,

Vivian has all the information you need.


You may need to generate Vivian on your own and feed the output to the LLM.


--Sam


Greg Kreis

Apr 20, 2026, 3:20:08 PM
to hard...@googlegroups.com

Hello,

Sounds like you have been busy!!  I have been reading that preparing the data for the most reliable access is very important. We would like to just hand the AI documents to read, as we would a person, but we aren't there yet. (Though with all the progress on agentic harnesses, like Hermes, I wonder how far off that is.)

Putting the parts of the system, like DDs, options, etc., into a well-structured format sounds powerful and could work well against static repositories. Have you thought about exposing them dynamically via APIs so the AI can query them in a MUMPS account? I could see troubleshooting tools having access this way to help with error-trap analysis. Projects like ViViaN already have much of the code needed to parse this information.
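As a sketch of what such a query surface could return, here is a hypothetical extrinsic function (the name and shape are mine, just to illustrate) that pulls a field's label and type specifiers straight out of ^DD:

```
DDFLD(FILE,FIELD) ; sketch only - return label^type for one DD field
 ; in ^DD(file,field,0), piece 1 is the field label and
 ; piece 2 holds the type/specifier codes
 N NODE
 S NODE=$G(^DD(FILE,FIELD,0))
 Q:NODE="" ""
 Q $P(NODE,"^",1)_"^"_$P(NODE,"^",2)
```

Called as $$DDFLD(2,.01), for example, it would return the PATIENT file's .01 field label plus its specifier codes; an HTTP wrapper around calls like this would let the AI interrogate a live account instead of a static dump.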

VistA Package Dependency



Benjamin Irwin

Apr 20, 2026, 5:21:36 PM
to Hardhats
Many of the items on Nancy's edu2.opensourcevista.net server, with my simple coding, are based on Node.js APIs that run MUMPS routines.

I would be happy to attempt to develop them further to better accommodate feeding AI.

Some examples that already exist include the following.

To view the FDA output of the ^DD and ^DIC globals for a specific dictionary use the following link, placing the dictionary IEN in the ien variable.

To get directly to the global presentation of a data dictionary use the following link with the dictionary IEN in the ien variable.

To get directly to the standard presentation of a data dictionary use the same link as above, but change the "fn" variable to "stan" 
https://edu2.opensourcevista.net/vista/dictionary.php?fn=stan&ien=200

I am getting a little "mature", but I think I could still make a good attempt to support JSON outputs for some of the VistA information as needed.
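A first cut at such a JSON output could be as simple as walking ^DD for one file and writing an object per field. A sketch (the routine name is made up, and it doesn't escape quotes in labels or descend into sub-files):

```
DDJSON(FILE) ; sketch: emit field numbers and labels for FILE as JSON
 N FLD,NODE,FIRST
 S FLD=0,FIRST=1
 W "["
 ; numeric field subscripts collate first, so quit at "B" etc.
 F  S FLD=$O(^DD(FILE,FLD)) Q:'FLD  D
 . S NODE=$G(^DD(FILE,FLD,0)) Q:NODE=""
 . W:'FIRST ","
 . S FIRST=0
 . ; note: field numbers like .01 print without a leading zero,
 . ; which strict JSON parsers reject - real code should format them
 . W "{""field"":"_FLD_",""label"":"""_$P(NODE,"^",1)_"""}"
 W "]",!
 Q
```

A Node.js endpoint like the existing dictionary.php ones could run this and return the stream as application/json, which is about as LLM-friendly as the raw globals get.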

Thanks,
Ben