AI and VistA


Kevin Toppenberg

Apr 25, 2024, 10:43:13 AM
to Hardhats
Nancy recently posted about nurses protesting AI, which is ironic because I came here to write about AI.  Here it goes....

I recently gained access to a home computer with an Nvidia 1060 graphics card with 8 GB of memory.  I think these go for $100-$200.  I installed Ubuntu on it, and then a program called ollama (https://ollama.com/).  It's as easy as:

curl -fsSL https://ollama.com/install.sh | sh

Then, to download and use a large language model (LLM), such as Llama3, it is as easy as:

ollama run llama3

(by the way, I am writing this from memory.  My syntax may be slightly off.  See the website for details).

And just like that I have a locally running GPT that is pretty good.  I could carry on conversations with it, and it seemed similar to GPT-3 etc.  Llama3 comes in 8B and 70B variants.  The 8B worked great on my system, but the 70B slowed to a pace of about one word every 10 seconds (unusable).  I think with a newer card it would be much better.

Much of my concern about AI has been about all this stuff going into the cloud, with privacy concerns etc.  But if I can run this locally, it seems to me to be a game changer.  It would all be under my control.  Exciting!

The ollama service provides a programmatic API that can be reached via curl, so it would be easy to pull this into MUMPS/VistA.  Something like this:

curl http://localhost:11434/api/generate -d '{
   "model": "llama3",
   "prompt": "Why is the sky blue?"
}'

I previously did this with OpenAI, asking it to tell me jokes etc. from the MUMPS command line.  I just rechecked it, and that service has been shut down, but the technique would be the same.  With local hosting, I don't need an account, API keys, or dependence on a fickle external company.
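To sketch what that could look like from the MUMPS side, here is a rough, untested example for GT.M/YottaDB that shells out to curl through a PIPE device.  The endpoint, the default port 11434, and the "stream": false option come from ollama's documented API; the label ASKLLM and everything else here are illustrative, not working VistA code:

ASKLLM(PROMPT) ; sketch: ask a local ollama server one question, return raw JSON
 ; Assumes ollama is listening on its default port 11434, and that
 ; PROMPT contains no quote characters that would need JSON escaping.
 NEW CMD,DEV,LINE,OUT
 SET OUT=""
 SET CMD="curl -s http://localhost:11434/api/generate -d "
 SET CMD=CMD_"'{""model"":""llama3"",""prompt"":"""_PROMPT_""",""stream"":false}'"
 SET DEV="llm"
 OPEN DEV:(COMMAND=CMD:READONLY)::"PIPE"
 USE DEV
 FOR  READ LINE QUIT:$ZEOF  SET OUT=OUT_LINE ; collect curl's output
 USE $PRINCIPAL CLOSE DEV
 QUIT OUT ; caller still has to pull the "response" field out of the JSON

With "stream": false the server returns one complete JSON object instead of the word-by-word chunks you see at the terminal.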

POTENTIAL USES
1) Find a way to summarize a patient, including ICD codes, labs, med list, and perhaps the text of their office notes for a year, and see if anything pops out.  It could be used for summarizing past care (data extraction), or perhaps to aid diagnosis, among other uses.

2) Put an additional tab into CPRS (we already have web tabs) that talks to the local ollama and allows for asking questions, such as "What is the significance of smudge cells on a CBC?".  We currently subscribe to a HIPAA-compliant version of ChatGPT called BastionGPT, and I find that I can often get a good answer there faster than with an online reference such as UpToDate.  But to use it, I have to go to a website and log in, and it takes enough time that it is annoying.   (For technical reasons, I can't embed BastionGPT into a CPRS web tab and auto-log-in.)

3) ICD and CPT coding advice is pretty good in BastionGPT.  We use this site for ICD codes: https://www.icd10data.com/, but it is full of pop-up ads and frequently can't find a code because it does text-matching of search terms.  It often matches completely unrelated concepts, which is annoying.  But BastionGPT allows for context and does much better.

4) There are also LLMs aimed at coding.  I would love to learn how to build additional LLM models, train one on the entire codex of the VistA library of MUMPS code (and perhaps the VistA training manuals), and see if it can gain insights that would allow interactive discussion.   I have previously tried having GPT write MUMPS code; it tried, but mostly failed.  It kept trying to make it work like Python, etc.

Surely there must be other uses as well.  As you might be able to tell, I'm excited to think of the possibilities. 

What do you all think?

Kevin

David Whitten

Apr 25, 2024, 11:32:50 AM
to hard...@googlegroups.com
I recall someone (maybe Greg Kreis) suggested that when you ask an LLM about coding in M, you use the full name ANSI Standard MUMPS, as it establishes context better.

Dave Whitten
713-870-3834

David Blackstone

Apr 25, 2024, 12:12:59 PM
to Hardhats
Hello,

Just wanted to put some input into this discussion. I was dabbling in AI in VistA for a few months and came up with some cool results.
Below is a working routine that uses OpenAI's Assistants API with the GPT-4-turbo model. As you can see, it is able to solve some trivial MUMPS coding issues, as well as describe complex routines.

Below is a copy-paste from my VistA system. As you can see, there is a prompt for the user, some processing statuses, and the response from the API.

I also wrote a complete backend system for saving conversations (to be picked up later or modified) and for managing user access with regard to API keys and Assistant model IDs, all with FileMan.

=======================================================================================================================
                         MACROS
| M=Macro-Option | R=Attach-Routine | Q=Attach-Question |

Enter a message: R=XV Q=CAN YOU EXAMINE THIS ROUTINE AND LET ME KNOW WHAT IT DOES?

     Uploading Attached Files

     Submitting Message

     Processing your request :-)
Status: in_progress [1] Seconds
Status: in_progress [5] Seconds
Status: in_progress [10] Seconds
Status: in_progress [15] Seconds
Status: in_progress [20] Seconds
Status: completed [25] Seconds

     Request Complete!

Response #1:

The routine provided appears to be part of the VPE (Victory Programming Environment) for VistA (Veterans Health Information Systems and Technology Architecture). Below is a brief overview of what this routine does:

1. **Initialization**: The routine starts by setting up an error trap and checking for simulated errors for testing purposes. It initializes variables and checks for user information (`DUZ`), ensuring that the necessary VistA FileMan environment variables (`U`, `DTIME`, `DT`) are set. If not, it sets up a minimal environment or a FileMan environment depending on the detection of FileMan's presence.

2. **User Setup**: It prompts for the `DUZ` (user ID) if it's missing and sets up essential VistA environment variables if they are not present.

3. **Main Environment Initialization**: Calls `^XVEMSY` to initialize many settings and objects required for the VPE to function correctly.

4. **Building VPE Structure**: It checks and builds the `^XVEMS` structure (`^XVEMBLD`) if it doesn't exist, indicating the setup of VPE quick commands and help files. Additionally, it sets up necessary FileMan files for VPE if they are not already set up.

5. **Running VPE**: The routine enters a main loop, initiated by calling `^XVSS` to save the symbol table and initialize necessary settings, and then runs the VPE shell (`^XVSA`). If the user chooses to exit (by setting `XVVSHC` to `^`), it performs cleanup operations (`^XVSK`).

6. **Upgrade and Reset Options**: Provides functions (`UPGRADE` and `RESET`) to upgrade VPE from a previous version or reset VPE to its pristine state, respectively. The upgrade process involves saving user data, cleaning up the VPE global and FileMan data, and then rebuilding and restoring the VPE structure. The reset process involves deleting data and Data Dictionary (DD) entries for all VPE files and resetting the VPE global.

In summary, this routine seems to set up the Victory Programming Environment, ensuring the user and environment are correctly initialized, builds necessary structures if they are not present, and provides an entry point to running VPE. It also includes options to upgrade VPE or reset it to a default state.

                         MACROS
| M=Macro-Option | R=Attach-Routine | Q=Attach-Question |

Enter a message: R=BUGGEDROUTINE Q=CAN YOU NOW LOOK AT THIS ROUTINE AND IDENTIFY AND FIX THE ISSUES? SHOW ME BEFORE AND AFTER

     Uploading Attached Files

     Submitting Message

     Processing your request :-)
Status: in_progress [1] Seconds
Status: in_progress [5] Seconds
Status: in_progress [10] Seconds
Status: in_progress [15] Seconds
Status: in_progress [20] Seconds
Status: in_progress [25] Seconds
Status: in_progress [30] Seconds
Status: in_progress [35] Seconds
Status: in_progress [40] Seconds
Status: in_progress [45] Seconds
Status: in_progress [50] Seconds

     Request Complete!

Response #26:

**Before Fix**:

The original `BUGGEDROUTINE` has the following issues:


     mumps
     BUGGEDROUTINE
      QUIT

      ; ##DETAILS
      ; **RETRIEVES A CONFIG GIVEN EITHER A B CROSS REFERENCE 'NAME' OR "CID" FOR A CONFIG
      ;
      NEW CONFIG,PREF S PREF="2"
      NEW NPFILE,CFILE,AFILE,TFILE S NPFILE=200,CFILE=999007,AFILE=999011,TFILE=999013
      I $O(^VA(NPFILE,DUZ,6,"B",PREF,""))'=""$$GET1^DIQ(CFILE,PREF,5,"I") D
      . S CONFIG("PREF")=PREF
      . S CONFIG("AID")=$G(AIC(AFILE,$G(^AIC(CFILE,PREF,2),"N/A"),1))
      . S CONFIG("TID")=$G(AIC(TFILE,$P($G(^AIC(CFILE,PREF,0),"N/A"),"^",3),1))
      QUIT



**Issues Identified**:
- The conditional line contains a syntax error where it attempts to concatenate a non-empty string check with a FileMan DBS API `$$GET1^DIQ` call without an explicit AND `&` operator.
- The symbol `AIC` is used without previously being defined, or it could be a typo and meant to represent a subroutine or a global variable not shown here.
- The manner of accessing the `^AIC` global might not accomplish what's intended because the variables `CFILE`, `AFILE`, and `TFILE` point to numbers which are likely intended to be file numbers, but it's unclear if `^AIC` is a proper global root in this context.

**After Fix**:

Below is a corrected version of the routine with comments:


     mumps
     BUGGEDROUTINE
      QUIT

      ; ##DETAILS
      ; **RETRIEVES A CONFIG GIVEN EITHER A B CROSS REFERENCE 'NAME' OR "CID" FOR A CONFIG
      ;
      NEW CONFIG,PREF S PREF="2"
      NEW NPFILE,CFILE,AFILE,TFILE S NPFILE=200,CFILE=999007,AFILE=999011,TFILE=999013
      I $O(^VA(NPFILE,DUZ,6,"B",PREF,""))'=""&($$GET1^DIQ(CFILE,PREF,5,"I")'="") D  ; Fixed syntax error by adding correct logical operator &
      . S CONFIG("PREF")=PREF
      . ; Assuming AIC should be a global ^AIC, replaced AIC with a correct global reference
      . S CONFIG("AID")=$G(^AIC(AFILE,$G(^AIC(CFILE,PREF,2),"N/A"),1))  ; Assuming ^AIC represents a valid global structure
      . S CONFIG("TID")=$G(^AIC(TFILE,$P($G(^AIC(CFILE,PREF,0),"N/A"),"^",3),1)) ; Assuming ^AIC represents a valid global structure
      QUIT



**Corrections Made**:
- Introduced the correct AND `&` operator to fix logical operation in the conditional check.
- Replaced `AIC` with the assumed correct global reference `^AIC`, assuming the context of this code fragment. However, since the original context is unknown, it's essential to verify `^AIC` represents a valid and intended global structure used within this routine.
- Added comment annotations clarifying assumptions and changes for clarity.

Without further detailed specifications or the broader context of how `BUGGEDROUTINE` integrates with the overall system, these adjustments are based on syntax corrections and presumed intent behind the original code.

                         MACROS
| M=Macro-Option | R=Attach-Routine | Q=Attach-Question |

Enter a message:

=======================================================================================================================
It's a pretty neat system and I can definitely see some benefits. This is just a personal project, so obviously it can be greatly improved :)

- David Blackstone


David Blackstone

Apr 25, 2024, 12:17:11 PM
to Hardhats
That local AI looks very interesting, and since I am using curl commands to communicate with the GPT API, I could definitely switch this over to a local AI API!

Greg Kreis

Apr 25, 2024, 1:28:29 PM
to hard...@googlegroups.com

Thanks Kevin for sharing this! You are so right about the privacy benefits of local use.

I have been hunting for a used 3090 GPU on eBay for just this purpose (kudos to David Blackstone, Kevin Toppenberg, and others!). The 3090 uses 350-600 watts (peak), so they recommend a 750W or larger PSU. I plan to put in 1200-1600W or more (if I ever go to two 3090s). The key is to get the 24GB VRAM models so they can hold as much of the model in memory as possible. Supposedly you can shard the model to effectively glue two GPUs together. I am still researching whether NVLink (a high-speed bus between GPUs) can help with running larger models (like Llama3 70B).

An ollama beta is available for Windows and it makes it easy to plug in different LLMs.

What can our community cook up to leverage all the information in VistA (source and documentation)?

Let the games begin....   (and thanks Mark for Llama3 ;-).

-- 
=================
Greg Kreis
Pioneer Data Systems, Inc.
770-509-2583
http://www.PioneerDataSys.com

Greg Kreis

Apr 25, 2024, 1:40:02 PM
to hard...@googlegroups.com

Good memory, David. That is what helped with GPT and Claude.


Kevin Toppenberg

Apr 26, 2024, 4:41:43 PM
to Hardhats
David,

When I connect directly to the LLM via a terminal, the response is slowly built up over time, as if the computer is putting it out word by word.  It may take 15 seconds for it to fully complete a response. 

My question is how this worked for you via curl.  Does it give any sort of progress indication?  I see entries like "Status: in_progress [10] Seconds" in your posting.  I think of curl as a tool for downloading from the internet, but I guess it could really be an I/O stream... But then how do you know when it is done answering your question?  I guess I'm asking how the I/O works, and whether it seems responsive.  Or does it hang until it is done and then spit it all out?

And if you ask it a follow-up question, do you have to send all the prior conversation back again?

When I did my little interface with OpenAI, I would ask it a simple question, like "tell me a joke," and then get back a quick answer.  I didn't play around with longer answers or back-and-forth.

Kevin

David Blackstone

Apr 26, 2024, 8:19:49 PM
to Hardhats
Hey Kevin,

Yeah, so in regard to your question ("My question is how this worked for you via curl. Does it give any sort of progress indication?"): yes, I am using curl to talk to the API. It doesn't give progress on the current process, so I just keep asking for status until it tells me it's completed. It also only sends me back the response after it's completed, so I don't get pieces of it while it's processing, just at the end. It only gets stuck if there are issues with the API; otherwise it almost always works.

"And if you ask it a follow-up question, do you have to send all the prior conversation back again?" No, I don't need to resend anything; it remembers what I've told it, what files I've sent it, etc. This is the power of the Assistants API. I wrote a decently complex system that allows you to easily pick up conversations. I store the data in configs so I know which conversation thread I'm talking to, and the user can choose. The assistants can also be tuned with files and prompts so they respond in specific ways. The Assistants API is quite a bit more sophisticated than their deprecated conversation API.
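The keep-asking-until-completed loop described above could be sketched in MUMPS roughly like this.  Hedged: $$RUNSTAT is a hypothetical helper wrapping the curl call that retrieves the run object from the Assistants API and extracts its status field; the real routine names, timings, and status handling in my system differ:

POLL(RUNID) ; sketch: poll an Assistants API run until it finishes
 NEW STATUS,SECS,WAIT
 SET STATUS="",SECS=0,WAIT=5
 FOR  DO  QUIT:STATUS="completed"!(STATUS="failed")!(SECS>300)
 . SET STATUS=$$RUNSTAT(RUNID) ; hypothetical: curl GET of the run, returns its status
 . WRITE !,"Status: ",STATUS," [",SECS,"] Seconds"
 . QUIT:STATUS'="in_progress"&(STATUS'="queued")
 . HANG WAIT SET SECS=SECS+WAIT ; wait before asking again
 QUIT STATUS

The loop gives up after five minutes so a stuck run can't hang the terminal forever.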

David

David Blackstone

Apr 26, 2024, 8:20:47 PM
to Hardhats
Also, admittedly, I have around 7 different curl commands built to complete the prompts correctly, because they differ depending on whether I'm starting a conversation, continuing one, uploading files, etc.

Kevin Toppenberg

Apr 29, 2024, 5:25:07 PM
to Hardhats
Thanks for this info.

KT