## TL;DR
Via is six standalone Go CLIs that share a common brain through Claude Code's plugin architecture. Instead of building a central router, I let Claude Code read each plugin's capabilities from CLAUDE.md and dispatch tasks naturally. Pattern reuse across plugins — especially the FTS5 + embeddings hybrid search — means new plugins take days instead of weeks.
## The Ecosystem
Via consists of six active plugins. Each is a standalone Go CLI tool that can be developed, tested, and deployed independently. What makes them a system rather than a collection is the shared intelligence layer — common search patterns, shared learnings, and Claude Code as the dispatch mechanism.
Plugin Ecosystem
| Plugin | Domain | What It Does |
|---|---|---|
| orchestrator | dev | Multi-phase mission execution with persona selection and parallel scheduling |
| obsidian | personal | Vault read/write/search with hybrid FTS5 + semantic embeddings |
| ynab | personal | Budget tracking and spending analysis via the YNAB API |
| todoist | work | Task management, project queries, and priority management via Todoist API |
| scout | work | Intel gathering on configurable topics — competitors, technologies, trends |
| aissistant | creative | YouTube Shorts content pipeline — scripting, scheduling, metadata |
Each plugin covers a different domain of my life. Together, they span development, personal knowledge, finances, tasks, research, and content creation. The whole point of Via is that intelligence should flow across these domains, not pool in silos.
## The "No-Router" Router
The biggest architectural insight from the evolution of Via was realizing I didn't need a complex central router to dispatch commands. Claude Code already provides routing.
Here's how it works: each plugin registers its commands and capabilities in the global CLAUDE.md file. Claude Code reads this file at startup and understands what tools are available. When I ask a natural language question, Claude picks the right tool automatically.
If I ask "How much did I spend on coffee this month?", Claude picks up ynab. If I ask "What are my notes on distributed systems?", it picks up obsidian. If I ask "Research the latest changes to the Go embed package", it picks up scout. No routing code required.
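The registration mechanism can be sketched as a fragment of the global CLAUDE.md. This excerpt is purely illustrative: the actual command names, flags, and wording of the real file aren't shown in this post.

```markdown
<!-- Hypothetical excerpt from the global CLAUDE.md; names and flags are illustrative -->
## Available plugins

- **ynab**: budget and spending queries. Example: `ynab spending --category coffee --month 2025-02`
- **obsidian**: vault search and read/write. Example: `obsidian search "distributed systems"`
- **scout**: topic research. Example: `scout research "Go embed package changes"`
```

Because this is just context Claude reads at startup, adding a plugin to the dispatch layer is a documentation change, not a code change.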
This decentralization makes the system genuinely robust. If the ynab plugin breaks, the obsidian plugin keeps working. If I'm offline and can't hit the Todoist API, the orchestrator still runs missions against local code. Each plugin fails independently.
Compare this to a monolithic system where a bug in the finance module could crash the orchestrator. Independence isn't just cleaner architecture — it's operational resilience.
## Pattern Reuse: The Obsidian CLI
The Obsidian plugin is the clearest example of how shared patterns accelerate development. I needed a way to search my personal knowledge vault — about 2,000 markdown notes spanning years of accumulated knowledge.
I had already built the FTS5 + Gemini embedding hybrid search for the orchestrator's learnings system. The same pattern — index text into SQLite FTS5, generate Gemini embeddings, combine keyword and semantic scores at query time — applied directly to Obsidian notes.
So I ported it. The entire Obsidian CLI — vault indexing, full-text search, semantic search, read, and write — was built in a single day (February 7). Not because it was simple, but because every hard problem had already been solved in the orchestrator. The FTS5 table schema was identical. The embedding generation pipeline was identical. The hybrid scoring formula (0.3 * keyword + 0.7 * semantic) was identical.
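The scoring step is simple enough to sketch. Here is a minimal version, assuming both inputs are already normalized to [0, 1]; the post states the weights but not the normalization scheme, so that part is an assumption.

```go
package main

import "fmt"

// hybridScore combines a normalized FTS5 keyword score with a semantic
// similarity score from embeddings, using the 0.3/0.7 weighting described
// above. How raw BM25 ranks are scaled into [0, 1] is an assumption here.
func hybridScore(keyword, semantic float64) float64 {
	return 0.3*keyword + 0.7*semantic
}

func main() {
	// A note that matches the query terms exactly but is semantically weak...
	fmt.Printf("%.2f\n", hybridScore(1.0, 0.2)) // 0.44
	// ...loses to a paraphrase with strong semantic similarity.
	fmt.Printf("%.2f\n", hybridScore(0.3, 0.9)) // 0.72
}
```

The 0.7 weight on the semantic score is what lets a query find notes that never use the query's exact words, while the 0.3 keyword weight keeps exact-term matches from being drowned out.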
This is the payoff of building reusable patterns instead of one-off solutions. When the next plugin needs search — and it will — it'll take hours instead of days.
## Configuring Personas
The plugins share a persona system defined in YAML. This configures not just what each tool does, but the behavioral traits of the AI agent that wields it:
```yaml
personas:
  architect:
    description: "Systems thinker, plans before acting"
    traits:
      - "Prioritize scalability"
      - "Document decisions as ADRs"
      - "Consider failure modes explicitly"
  researcher:
    description: "Finds facts, validates sources"
    traits:
      - "Cite everything with URLs"
      - "Cross-reference multiple sources"
      - "Flag confidence levels"
  security-auditor:
    description: "Think like an attacker"
    traits:
      - "Check OWASP Top 10"
      - "Validate all user inputs"
      - "Review auth flows for bypass"
```

There are 10 personas in total, covering roles from performance-engineer to tech-writer. When the orchestrator assigns a persona to a mission phase, these traits get injected into the agent's system prompt. The researcher agent actually cites sources because its persona says "Cite everything." The architect actually writes ADRs because its persona says "Document decisions."
This configuration layer means I can tune the "personality" of the system without changing Go code. Adding a new persona is a YAML edit, not a code change.
## How the Plugins Talk
The plugins don't talk to each other directly. They communicate through shared artifacts:
- The learnings database — every plugin's agents contribute learnings, and every plugin's agents receive them. The orchestrator's researcher might discover something useful for the Obsidian CLI's indexer.
- The workspace filesystem — mission phases write output to a shared directory structure. Phase 1's research output becomes Phase 2's input context.
- Claude Code's context — because Claude reads the global CLAUDE.md, it has awareness of all plugins simultaneously. It can synthesize information across domains without the plugins needing to implement inter-plugin communication.
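To make the workspace artifact concrete, the directory convention might look something like this; the mission name and file names are hypothetical, not taken from the actual system.

```
workspace/
  mission-042/
    phase-1/research.md    # written by the Phase 1 researcher
    phase-2/plan.md        # Phase 2 reads phase-1/research.md as its input context
```

Handing context between phases as plain files means any tool (or a human) can inspect, edit, or replay a phase's input without going through an API.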
This is intentionally loose coupling. The plugins don't need to know about each other's internals. They share a brain (the learnings database and Claude's context), not a nervous system.
## The Honest Limitations
**Not all plugins are equally mature.** The orchestrator and obsidian plugins are robust, battle-tested across hundreds of missions. The scout and aissistant plugins are newer and rougher around the edges. Feature parity across plugins isn't a goal — each evolves at its own pace.
**Keyword matching for persona selection is a bottleneck.** The current persona selector uses simple keyword matching. A task that says "investigate performance" gets the researcher persona when performance-engineer would be better. The system's meta-learnings have flagged 9 persona mismatches — the system knows it's getting this wrong, but the fix (semantic matching) isn't implemented yet.
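To make the failure mode concrete, here's a sketch of the kind of naive keyword matcher described; the keyword table, ordering, and fallback are my invention, not the actual selector.

```go
package main

import (
	"fmt"
	"strings"
)

// selectPersona is a deliberately naive keyword matcher of the kind the
// post calls a bottleneck. The keyword table below is hypothetical.
func selectPersona(task string) string {
	keywords := []struct {
		word    string
		persona string
	}{
		{"investigate", "researcher"},
		{"research", "researcher"},
		{"optimize", "performance-engineer"},
		{"audit", "security-auditor"},
	}
	lower := strings.ToLower(task)
	for _, k := range keywords {
		if strings.Contains(lower, k.word) {
			return k.persona // first hit wins: no semantic weighting at all
		}
	}
	return "architect" // arbitrary fallback
}

func main() {
	// "investigate" matches before any performance signal is considered,
	// so the task gets researcher even though performance-engineer fits better.
	fmt.Println(selectPersona("investigate performance regression")) // prints "researcher"
}
```

A semantic matcher would embed the task description and compare it against persona descriptions, which is exactly the hybrid-search machinery the plugins already have; the fix is plumbing, not research.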
**No real-time synchronization.** The plugins share a database but don't notify each other of changes. If the orchestrator captures a new learning mid-mission, the Obsidian CLI won't see it until its next query. This is fine for the current batch-oriented workflow but wouldn't work for real-time use cases.
Next: What 1,600+ AI Learnings Reveal
## Related Reading
- LifeOS: Building an AI-Powered Personal Operating System — The Obsidian-based personal system that Via's obsidian plugin extends
- Why I Built a Multi-LLM Orchestration System — The Claude Swarm system that preceded Via's plugin architecture