VA Intelligence: AI FAQs

Explain Symbol performs a Find References under the hood, then opens an Explain panel that summarizes what the symbol is and how it is used in your project, and highlights key relationships. If you prefer, you can still run a normal Find References without AI. The Explain panel is generated 100 percent locally on your machine using Ollama with the Gemma3 model. No code or prompts leave your computer.

The first time you invoke the feature, Visual Assist prompts you to accept the third-party AI software package terms and the Whole Tomato AI Supplemental Terms.
  • If you accept, the feature is enabled and Visual Assist installs the local AI runtime Ollama and the Gemma3 model.
  • If you do not accept, no AI features are enabled. Nothing related to AI runs or is installed by default.

No. After installing Visual Assist 2025.4, there are no AI components present or active until you accept the Terms.

Visual Assist downloads and installs:
  • Ollama. Local AI runtime.
  • Gemma3. Local model packaged for compatibility with 2025.4.
The download comes from a Whole Tomato server that we control and is pinned to the exact versions required by 2025.4.

Yes. You can disable the AI feature at any time in the Visual Assist Options dialog. You can re-enable it later in the same place.

To uninstall VA Intelligence completely, open the Visual Assist Options dialog, go to VA Intelligence, and click Remove files. This removes all AI-related files that Visual Assist installed.

No. Processing is 100 percent local. Your source code, prompts, and results never leave your computer. Visual Assist does not send any AI data to Whole Tomato servers or any other server. We do not use your data to train any model.

Only for the initial download of Ollama and the Gemma3 model from our server. After the one-time install, all AI work runs locally with no external calls.

  • Execution is local on your machine. Your code never leaves your environment.
  • Visual Assist communicates with Ollama through its local web server interface. Traffic stays on the local host.
  • The runtime and model are downloaded from a Whole Tomato server that we manage and version-pin.
  • We do not transmit your data. We do not log prompts or responses. We do not use your data for training.
  • Gemma3 is a Google model. While we cannot audit its internals, it runs locally under your control.
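As a sanity check on the local-host claim above, you can inspect where the runtime's endpoint actually points from any script. This is a minimal sketch, not part of Visual Assist itself; the port 11434 is Ollama's documented default.

```python
import socket
from urllib.parse import urlparse

# Ollama's documented default local endpoint.
OLLAMA_ENDPOINT = "http://localhost:11434"

host = urlparse(OLLAMA_ENDPOINT).hostname  # "localhost"
addr = socket.gethostbyname(host)          # resolves via the loopback interface

print(addr)  # 127.0.0.1 on a typical configuration
```

Because the host resolves to the loopback interface, requests to this endpoint never traverse an external network.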

Gemma3 running under Ollama. Visual Assist manages the compatible versions for 2025.4.

Open the Options dialog in Visual Assist to enable or disable the AI feature and to review its status.

Visual Assist collects local code context for the selected symbol and its usages. It sends that context to the local model (Gemma3) through Ollama. The model generates an explanation that appears in the IDE. There are no network calls during this operation and no data leaves your machine.
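The flow above amounts to a single local HTTP request. The sketch below illustrates the shape of such a request; the function and prompt wording are hypothetical, while the `/api/generate` endpoint and request fields follow Ollama's public API.

```python
import json

# Local Ollama endpoint (default port per Ollama's documentation).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_explain_request(symbol: str, context_snippets: list[str]) -> bytes:
    """Assemble the JSON body sent to the local model.

    Hypothetical helper for illustration: the prompt format Visual Assist
    actually uses is internal; only the request structure is Ollama's.
    """
    prompt = (
        f"Explain the symbol `{symbol}` given these usages:\n"
        + "\n".join(context_snippets)
    )
    body = {
        "model": "gemma3",  # the locally installed model
        "prompt": prompt,
        "stream": False,    # ask for one complete response
    }
    return json.dumps(body).encode("utf-8")

payload = build_explain_request("ParseConfig", ["cfg = ParseConfig(path)"])
# The actual call would be e.g. urllib.request.urlopen(OLLAMA_URL, payload),
# which targets the loopback interface -- no external network traffic.
```

Everything in the payload stays on your machine: the request goes to localhost, and the model's response is rendered directly in the IDE.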