
Why I Built a Personal Intelligence OS

Five AI projects, none talking to each other. The frustration that led me to build Via — a system where intelligence compounds instead of evaporating.

ai · vision · personal-intelligence · orchestration

TL;DR

I had five AI-powered projects that each worked in isolation. My orchestrator couldn't learn. My personal tools couldn't talk to my dev tools. I built Via — a personal intelligence OS — to make intelligence compound across every domain of my life.


Five Projects, None Talking to Each Other

By late January 2026, my AI tooling had become a graveyard of good intentions.

I had a bash-and-Redis orchestrator prototype. A Go swarm rewrite that had successfully run 50+ missions but couldn't learn from a single one. A LifeOS system managing personal knowledge through Obsidian. Budget tracking through YNAB. Task management through Todoist.

Each worked in isolation. None knew the others existed.

The irony wasn't lost on me. I was building AI systems that could research, implement, and review code across multiple models — but I couldn't get them to share a single insight. My orchestrator would make the same mistakes across missions because nothing persisted. My personal knowledge tools couldn't leverage the development intelligence I'd accumulated. My budget tool had no idea what my task manager was doing.

The problem wasn't capability. It was amnesia and isolation.

We're Using AI Wrong

It's not that the models are bad. Claude Opus is the best reasoning engine I've ever used. Gemini handles research brilliantly at a fraction of the cost.

We're using them wrong because we treat every conversation as a blank slate.

Think about how you work with AI today. You open Claude. You explain your project. You provide context. You get a good answer. Then you close the tab. Tomorrow, you do it again. The AI that helped you debug a concurrency issue on Monday doesn't know about Tuesday's database schema redesign. The assistant that planned your OAuth implementation last week can't remember the security tradeoffs you discussed.

Every session starts from zero. Every piece of context, re-explained. Every decision, re-justified.

Now multiply this across every domain of your life. Your budget. Your tasks. Your knowledge base. Your code. Each lives in its own silo. Each AI conversation is an island. The intelligence never compounds.

This is the core problem. Not model quality — model memory. Not individual capability — systemic integration.

What a Personal Intelligence OS Should Be

An operating system manages resources, provides abstractions, and enables programs to work together. A personal AI operating system should do the same for intelligence:

Manage resources. Route tasks to the cheapest capable model. Use Gemini for research at a fraction of a cent instead of sending everything to Opus at $15 per million tokens. The difference is orders of magnitude. For someone using AI daily across multiple domains, cost optimization isn't a nice-to-have — it's what makes intensive AI use sustainable long-term.
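In code, "cheapest capable model" is just a lookup over a capability table. Here's a minimal sketch in Go; the model names, prices, and task kinds are illustrative assumptions, not Via's actual configuration:

```go
package main

import "fmt"

// Hypothetical cost table: dollars per million input tokens.
// Prices are illustrative, not real billing figures.
var costPerMTok = map[string]float64{
	"gemini-flash": 0.10,
	"claude-opus":  15.00,
}

// capabilities lists which models can handle each task kind.
var capabilities = map[string][]string{
	"research":  {"gemini-flash", "claude-opus"},
	"deep-code": {"claude-opus"},
}

// pickModel returns the cheapest model capable of the task kind.
func pickModel(kind string) string {
	best, bestCost := "", 0.0
	for _, m := range capabilities[kind] {
		c := costPerMTok[m]
		if best == "" || c < bestCost {
			best, bestCost = m, c
		}
	}
	return best
}

func main() {
	fmt.Println(pickModel("research"))  // cheap model wins when both qualify
	fmt.Println(pickModel("deep-code")) // only the expensive model qualifies
}
```

The point isn't the table itself; it's that the routing decision lives in one place instead of being re-made by hand in every conversation.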

Provide abstractions. I shouldn't need to know whether my query about "last month's spending on groceries" goes to the YNAB plugin, the Obsidian CLI, or both. I should just ask the question. The system should understand domains, detect intent, and route accordingly.
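A crude version of that abstraction layer is keyword-based intent detection. This Go sketch is a deliberate simplification (real intent detection would use a model); the plugin names mirror the post, but the keyword lists are made up:

```go
package main

import (
	"fmt"
	"strings"
)

// domains maps each plugin to trigger keywords. Purely illustrative;
// Via's actual intent detection is not keyword matching.
var domains = map[string][]string{
	"ynab":     {"spending", "budget", "groceries"},
	"obsidian": {"note", "vault"},
	"todoist":  {"task", "due", "todo"},
}

// route returns every plugin whose keywords appear in the query.
func route(query string) []string {
	q := strings.ToLower(query)
	var targets []string
	for _, plugin := range []string{"ynab", "obsidian", "todoist"} {
		for _, kw := range domains[plugin] {
			if strings.Contains(q, kw) {
				targets = append(targets, plugin)
				break
			}
		}
	}
	return targets
}

func main() {
	fmt.Println(route("last month's spending on groceries"))
}
```

A query can legitimately match more than one plugin, and that's the point: "what did I spend on that project?" might fan out to both YNAB and Obsidian, with the system merging the answers.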

Enable programs to work together. When I capture a learning from a code review, that learning should be available to every future code review agent. When I discover a useful API during research, every researcher agent should know about it. Intelligence should flow across the system, not pool in individual conversations.

Remember everything. Not in a creepy surveillance way. In the way that a competent colleague remembers context. "Last time you tried FTS5 triggers before the main table, it broke" is the kind of institutional knowledge that should persist. When you've accumulated 1,600+ learnings from real agent work, that collective memory becomes genuinely valuable.
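Recall over accumulated learnings is, at its core, nearest-neighbor search over embeddings. A toy sketch in Go, assuming tiny hand-made 3-d vectors in place of the real Gemini embeddings and in-memory slices in place of SQLite:

```go
package main

import (
	"fmt"
	"math"
)

// Learning pairs an insight with an embedding vector. In the real
// system the vectors come from an embedding model and live in a
// database; the 3-d vectors here are fake, for illustration only.
type Learning struct {
	Text string
	Vec  []float64
}

// cosine computes cosine similarity between two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// recall returns the stored learning most similar to the query vector.
func recall(store []Learning, query []float64) string {
	best, bestSim := "", -2.0
	for _, l := range store {
		if s := cosine(l.Vec, query); s > bestSim {
			best, bestSim = l.Text, s
		}
	}
	return best
}

func main() {
	store := []Learning{
		{"Create FTS5 triggers after the main table, not before", []float64{0.9, 0.1, 0}},
		{"Batch external API calls to avoid rate limits", []float64{0, 0.2, 0.9}},
	}
	fmt.Println(recall(store, []float64{0.8, 0.2, 0.1}))
}
```

With 1,600+ learnings the store needs indexing and relevance thresholds, but the shape of the operation is exactly this: embed the current situation, find the closest past insight, surface it.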

The Solution: Via

I needed something different. Not another orchestrator rewrite. Not a single clever agent. A system — one that could route tasks to the right AI model, remember what worked, and treat my personal and professional tools as a unified intelligence layer.

That became Via.

Via is six Go CLI plugins backed by a shared learnings database. An orchestrator decomposes complex tasks into phases, assigns specialized AI personas, and captures insights from every run. An Obsidian CLI searches my personal knowledge vault. A YNAB plugin tracks my finances. A Todoist plugin manages my tasks. A scout gathers intel. An assistant pipeline handles content creation.

Each plugin is independent. They share a common brain through SQLite, Gemini embeddings, and the orchestration layer. The result is a system that feels like a single cohesive intelligence, even though it's built from six distinct tools.
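One way to picture "independent plugins, shared brain" is a common Go interface plus a store every plugin writes into. This is a guess at the shape, not Via's real API; the interface, type names, and stubbed answer are all hypothetical:

```go
package main

import "fmt"

// Plugin is a hypothetical common shape for the six CLIs; the real
// interface is not described in this post.
type Plugin interface {
	Name() string
	Handle(query string) string
}

// Learnings is the shared brain: every plugin reads and writes it.
// A slice stands in for the SQLite-backed store.
type Learnings struct{ notes []string }

func (l *Learnings) Add(n string) { l.notes = append(l.notes, n) }
func (l *Learnings) Count() int   { return len(l.notes) }

// YNABPlugin is one plugin wired to the shared store.
type YNABPlugin struct{ brain *Learnings }

func (p YNABPlugin) Name() string { return "ynab" }
func (p YNABPlugin) Handle(q string) string {
	p.brain.Add("budget query: " + q) // insight flows into the shared store
	return "stubbed budget answer"    // a real plugin would call the YNAB API
}

func main() {
	brain := &Learnings{}
	var p Plugin = YNABPlugin{brain}
	fmt.Println(p.Handle("last month's groceries"))
	fmt.Println(brain.Count()) // the shared brain captured the interaction
}
```

The design choice that matters is the dependency direction: plugins depend on the shared store, never on each other, which is what lets each one stay an independent CLI while the intelligence still pools in one place.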

It's not a product for others — at least not yet. It's a personal system built for my exact workflow. But the architecture — how it captures learnings, how it routes tasks, how it connects tools — is a pattern worth sharing. That's what this series is about.

What's Ahead

This is the first in a series of articles about building Via. Here's the journey:

