An MCP server that integrates with a self-hosted Ollama container and returns information related to VistA patients.
git clone https://github.com/RamSailopal/VistAAI.git
cd VistAAI
docker-compose up -d
This will run a number of containers:
- An Ollama container
- A "sidecar" container that pulls the llama3.2 model into Ollama
- A VistA container along with FMQL (FileMan Query Language)
- An mcp-server container
You will need to wait for the containers to fully initialise before proceeding, so monitor their progress with:
docker-compose logs -f
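As an alternative to tailing the logs, a short script can poll the service endpoints until they answer. The sketch below is only illustrative: 11434 is Ollama's default API port, while the FMQL host, port and query string are assumptions and may not match this compose file.

```python
import time
import requests

# Endpoints to poll. 11434 is Ollama's default API port; the FMQL URL
# (host, port and query) is an assumption for illustration only.
SERVICES = {
    "ollama": "http://localhost:11434/api/tags",
    "fmql": "http://localhost:9000/fmqlEP?fmql=ABOUT",
}

def wait_for_services(timeout_seconds: int = 600) -> None:
    """Poll each service until it responds successfully or the timeout expires."""
    deadline = time.time() + timeout_seconds
    pending = dict(SERVICES)
    while pending and time.time() < deadline:
        for name, url in list(pending.items()):
            try:
                if requests.get(url, timeout=5).ok:
                    print(f"{name} is up")
                    del pending[name]
            except requests.RequestException:
                pass  # container not ready yet, keep polling
        time.sleep(5)
    if pending:
        raise TimeoutError(f"Services not ready: {', '.join(pending)}")

if __name__ == "__main__":
    wait_for_services()
```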
Once all the containers have initialised and the log output stops, press Ctrl+C to stop following the logs. You can now access the AI mcp-server console via:
./mcp-server.sh
You can now begin asking questions about VistA.
The AI model is intelligent enough to recognise that the returned details contain confidential/sensitive information and will refuse to disclose them.
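Under the hood, the console hands your question to the llama3.2 model along with the VistA tools that the MCP server exposes; the model either answers directly, requests a tool call, or refuses when the data is sensitive. The sketch below illustrates that round trip with the Ollama Python client; the tool name and schema are assumptions for illustration, not the repository's actual code.

```python
import ollama  # Ollama Python client

# A tool definition similar in spirit to what the MCP server advertises.
# The name "get_patient" and its parameters are illustrative assumptions.
tools = [{
    "type": "function",
    "function": {
        "name": "get_patient",
        "description": "Return demographic details for a VistA patient by IEN.",
        "parameters": {
            "type": "object",
            "properties": {"patient_ien": {"type": "string"}},
            "required": ["patient_ien"],
        },
    },
}]

response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "When was patient 1 admitted?"}],
    tools=tools,
)

# With recent versions of the client, any tool requests appear on the message.
# The MCP server would execute the call and feed the result back to the model.
for call in response.message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```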
In addition, an Ollama web UI container also runs. This container references the Ollama llama3.2 model directly, without an MCP server and with no interaction with VistA. The web UI can be accessed via the web address:
NOTE - This is a self-hosted AI and the speed of responses will depend on the hardware on which the AI model is running.
The Python code mcp/vista.py provides the context about VistA to the AI model. When writing the code, Python function docstrings/comments are important for helping the AI model understand that context.
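As a rough illustration of that idea, a tool in mcp/vista.py might look something like the sketch below, written with the FastMCP helper from the MCP Python SDK. The FMQL host, port and query string are assumptions, and this is not the repository's actual code; the point is that the docstring is what the model reads when deciding whether the tool answers a question.

```python
from mcp.server.fastmcp import FastMCP
import requests

mcp = FastMCP("vista")

@mcp.tool()
def get_patient(patient_ien: str) -> str:
    """Return demographic details for the VistA patient with the given IEN.

    The model uses this docstring to decide when the tool is relevant,
    so it should describe the VistA data the tool returns as plainly as possible.
    """
    # FMQL exposes FileMan data over HTTP. File 2 is the VistA PATIENT file;
    # the host, port and endpoint below are assumptions for illustration.
    resp = requests.get(
        "http://vista:9000/fmqlEP",
        params={"fmql": f"DESCRIBE 2-{patient_ien}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    mcp.run(transport="stdio")
```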