
If you’ve been following Visual Assist for a while, you might have noticed something a little different about the last few releases. Alongside the navigation improvements, refactoring updates, and parser optimizations, there’s a quieter but important thread running through it all: VA Intelligence.
We introduced Explain with AI a few releases back, and with 2026.3 we just shipped Change Code with AI. You can download the latest release to try it.
These are two features with the same underlying idea: use AI to complement the tasks you normally do with Visual Assist.
This post is a quick, honest look at what VA Intelligence can do for you right now, where it works best, and where things are heading next.
Two features, one idea
VA Intelligence currently has two functions.
Explain with AI does what it says. Select a symbol or a block of code, run it, and get a plain-language explanation of what it does. Selecting and highlighting parts of your code provides the necessary context to our local LLM parser.
It’s useful when you’re dropped into an unfamiliar codebase and you’re trying to understand a dense macro chain, or you just want a second opinion on what a piece of logic is actually doing before you make changes to it manually.

VA Intelligence’s Explain Symbol provides the context and a description of the symbol directly in the Find References dialog.
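As a concrete illustration, here is a hypothetical snippet of the kind of dense macro chain mentioned above. The macro names and helper function are invented for this example, not taken from any real codebase; the comment sketches the sort of plain-language summary Explain with AI aims to give.

```cpp
#include <cassert>

// Illustrative macro chain -- terse on its own, easy to misread.
#define BIT(n)         (1u << (n))
#define FLAG_DIRTY     BIT(0)
#define FLAG_VISIBLE   BIT(1)
#define SET_FLAG(s, f) ((s) |= (f))
#define HAS_FLAG(s, f) (((s) & (f)) != 0)

// The kind of explanation you might get back: "SET_FLAG turns on
// bit f in state s; HAS_FLAG tests whether that bit is set."
inline unsigned mark_dirty(unsigned state) {
    SET_FLAG(state, FLAG_DIRTY);
    return state;
}
```

Selecting the whole chain, rather than a single macro, gives the model enough to explain how the pieces compose.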
Change Code with AI is the newest addition, shipped in VA 2026.3. It lets you describe how you want to transform code in natural language and get a suggested replacement. Select some code, type a prompt, review the changes in a diff view, and decide whether to keep them. The diff view is always the last step before your code is touched.
Both features work the same way under the hood: you select some code, VA passes it to a local AI model along with some surrounding context, and the model responds. That “surrounding context” part is important, and we’ll come back to it.
Where they actually shine
Here’s the thing about both features: they work really well on focused, localized tasks. If you’ve been hesitant to try them because you’re not sure what to ask, here’s a prompt list to give you a feel for where they land:
- Optimize a selected function
- Improve readability of a messy method
- Rename variables for clarity throughout a block
- Add comments to code that has none
- Refactor a small piece of logic
- Convert an old-style loop to something more modern
- Fix a simple bug in a method
- Generate a unit test for a selected function
- Add error handling to a snippet
- Convert code from one style or pattern to another
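To make one of those prompts concrete, here is a hypothetical before/after for “convert an old-style loop to something more modern.” The function names and the exact output shape are illustrative; this is the kind of local, selection-scoped transformation the feature is suited to, not a transcript of its actual output.

```cpp
#include <numeric>
#include <vector>

// Before: index-based accumulation, the style you might select.
int sum_old(const std::vector<int>& v) {
    int total = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        total += v[i];
    return total;
}

// After: the same logic expressed with a standard algorithm,
// the kind of replacement you would review in the diff view.
int sum_new(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0);
}
```

Because both versions live entirely inside the selection, the model needs nothing beyond what you highlighted to get this right.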
There’s a pattern here: all of these operate on a selection, not on your whole file or project. And that’s where we’re headed next: we’re working to add project context so suggestions become more relevant and timely. More on this below.
Our future target: Getting more context
Both Explain with AI and Change Code with AI share the same constraint right now. When you make a selection, VA sends that selection to the model along with some context to help it understand what it’s looking at. It doesn’t yet have the broader picture of your project: the context that, as we all know, is the lifeblood of AI and LLMs. It doesn’t know about your other classes, the patterns your team follows, or the architecture decisions baked into your codebase.
For the tasks listed above, it’s usually fine. A function is a function. You can ask it to clean up a loop or add error handling, and the model has everything it needs to do that well.
But anything that requires understanding how those pieces connect (cross-file refactoring, project-wide consistency, or changes that depend on knowing how the code is referenced elsewhere) needs more context. The model can only reason about what you’ve shown it.
One practical tip: the more code you select, the more context the model has. You can select a single function, a full class, or a larger block, and results tend to improve as the selection grows. It’s a workaround for the limitation, and it helps with more complex local tasks.
Where VA Intelligence is heading
The idea is to use Visual Assist itself as a context layer. VA already has a deep understanding of your codebase — it’s been parsing your symbols, tracking your includes, and navigating your project structures for years. The next step is putting that understanding to work for the AI, feeding it richer, faster context so it can perform well even with local, smaller but more secure models.
The goal is an orchestration layer that brings the kind of project awareness VA already has into every AI interaction. It should have minimal footprint, be efficient, and follow the spirit of how Visual Assist has always worked.
That’s the direction. The two features in VA Intelligence today are a real, working foundation. We believe they’re useful now, and they’re the starting point for something that’s going to get meaningfully deeper. We envision a workflow in which VA Intelligence complements classic Visual Assist features as well as other AI and copilot tools.
Try it and tell us what you find
If you haven’t set up VA Intelligence yet, it ships with Visual Assist, but the language model needs to be explicitly installed and enabled. Head to Extensions → VAssistX → VA Intelligence and run the installer. You’ll need a compatible GPU. Full setup requirements are on the VA Intelligence setup page.
Once you’re in, give Change Code with AI a real workout. The feature is intentionally open-ended so you can try a lot of things (we want to see what you actually do with it). Use the in-app feedback form (VAssistX → Help → Show Welcome Page) to tell us what worked, what didn’t, and what you wish it could do.
That feedback is genuinely shaping what comes next.
