

From ChatGPT to Claude Code: The Evolution

The messy, nonlinear path from copy-paste prompts to a working personal intelligence OS. Failed experiments, painful pivots, and the 16-day sprint that built Via.

ai · history · evolution · claude-code

TL;DR

Via was built in a 16-day sprint (Jan 26 – Feb 10, 2026). The key breakthroughs: documentation-as-specification, deleting a 3-day custom serialization experiment, and pivoting from a monolithic binary to Claude Code plugins. The biggest lesson was that the documentation binge I thought was procrastination turned out to be the most productive phase.


The Starting Line: January 26, 2026

I ran git init in a new directory called ~/via on January 26. The goal was modest: "Just get something running."

I'd already built three versions of an orchestrator. Claude Swarm had proven the multi-LLM concept — routing research to Gemini, implementation to Claude, coordination to Opus. It worked, but every mission started from scratch. No memory. No accumulated intelligence. Each run was an island.

This time, I vowed not to over-engineer. I started with a simple detection feature — figuring out what domain a task belongs to — and a basic worker pool.

For the first few days, it felt like I was moving fast. But I was building the wrong thing.

The Documentation Binge (Days 2–5)

From January 27 to 30, I barely wrote any code. Instead, I wrote documentation.

I dissected every feature of previous systems I'd admired. I wrote a "Vision" document. I wrote a "Mission" document. I wrote a ~730-line file called ORIGINAL-INTENT.md that captured every frustration I had with existing tools — every friction point, every workaround, every time I'd had to re-explain context to a new AI conversation.

It felt like procrastination. In retrospect, it was the most productive part of the entire sprint. When I finally started coding the core system, I didn't have to stop and ask "what should this do?" The documentation was the specification. Every design question had already been answered in prose.

If you're building something ambitious, I'd recommend spending your first week writing about it instead of coding it. The clarity compounds.

The Failed Experiment: TOON (Days 5–8)

Every project has that one idea you fall in love with that turns out to be a disaster. For me, it was TOON — Token-Optimized Object Notation.

I was obsessed with reducing API costs. If I could compress the context that gets sent to each model, I'd save on every single API call. So I spent three days designing a custom serialization format. I wired it into everything — the orchestrator, the router, the learnings module. Every piece of data flowed through TOON encoding and decoding.

It saved maybe 15% on tokens. In exchange, it made the code unreadable, turned debugging into a nightmare, and bloated every test case. Every time I wanted to inspect what the system was doing, I had to mentally decode the TOON format first.

On February 4, I deleted it all.

git log (Feb 4)
commit a3f2e1b: Removed toon references from orchestrator
commit 8d4c2af: Removed toon references from router

Two commits to undo three days of work. Painful, but necessary. The lesson was clear: readability and simplicity beat micro-optimizations every time. The real efficiency gains came later, from multi-LLM routing — routing research to Gemini's free tier so Claude's rate limits stay available for the work that needs them.

The Pivot: Day 8

The turning point came on February 3. I was struggling to build a routing system that could dispatch tasks to different tools. Domain detection, intent parsing, tool selection — I was building an entire dispatch layer from scratch.

Then I realized: Claude Code already does this.

Claude Code reads a CLAUDE.md file to understand available tools and capabilities. It already knows how to pick the right tool for a given task. Why was I rebuilding this from the ground up?

I pivoted. Instead of a monolithic "Via" binary that tried to do everything, I broke the system into independent plugins:

plugins/orchestrator/.claude-plugin/plugin.json
{
  "name": "orchestrator",
  "version": "2.0.0",
  "commands": [
    {
      "name": "via run",
      "description": "Execute a multi-phase mission"
    }
  ]
}

The orchestrator became just one tool among many. Obsidian became another. YNAB another. Todoist another. Each was a standalone Go CLI that could be developed, tested, and deployed independently. Claude Code handled the dispatch layer for free.
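A standalone plugin CLI in this style can be tiny. Here's a sketch of what the Obsidian plugin's entry point might look like (the subcommand names and stubbed actions are my invention, not the real implementation): all the binary needs is its own dispatch over subcommands, because Claude Code handles deciding *when* to call it.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// dispatch maps a subcommand to its (stubbed) action. In a real plugin
// these branches would call into the vault read/write/search logic.
func dispatch(args []string) (string, error) {
	if len(args) == 0 {
		return "", fmt.Errorf("usage: obsidian <read|write|search> [args]")
	}
	switch args[0] {
	case "read":
		return "read " + strings.Join(args[1:], " "), nil
	case "write":
		return "write " + strings.Join(args[1:], " "), nil
	case "search":
		return "search " + strings.Join(args[1:], " "), nil
	default:
		return "", fmt.Errorf("unknown command %q", args[0])
	}
}

func main() {
	out, err := dispatch(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Println(out)
}
```

Because each plugin is just a binary with a small command surface like this, it can be developed and tested in isolation, with no shared dispatch code between plugins.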

This decision saved the project. What had been a monolithic tangle became a modular ecosystem overnight.

The Sprint by the Numbers

The sprint lasted from January 26 to February 10. In those 16 days:

  • 154 commits across the codebase
  • 6 plugins built and integrated
  • 1 failed experiment deleted (TOON)
  • 1 architectural pivot (monolith to plugins)

The busiest day was January 31, when I wrote the vision documents — pure documentation, zero code. The most technical day was February 8, when I implemented the parallel execution scheduler that lets independent mission phases run simultaneously.
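The core idea behind that scheduler can be sketched in a few dozen lines, assuming phases declare explicit dependencies (the `Phase` type and its fields here are hypothetical, not Via's actual structures): group phases into waves of mutually independent work, then run each wave's phases concurrently with goroutines.

```go
package main

import (
	"fmt"
	"sync"
)

// Phase is a hypothetical mission phase with explicit dependencies.
type Phase struct {
	Name string
	Deps []string
	Run  func()
}

// waves groups phases into batches: each batch contains only phases
// whose dependencies were satisfied by earlier batches.
func waves(phases []Phase) [][]Phase {
	done := map[string]bool{}
	var out [][]Phase
	remaining := phases
	for len(remaining) > 0 {
		var batch, rest []Phase
		for _, p := range remaining {
			ready := true
			for _, d := range p.Deps {
				if !done[d] {
					ready = false
					break
				}
			}
			if ready {
				batch = append(batch, p)
			} else {
				rest = append(rest, p)
			}
		}
		if len(batch) == 0 {
			break // dependency cycle; give up rather than loop forever
		}
		for _, p := range batch {
			done[p.Name] = true
		}
		out = append(out, batch)
		remaining = rest
	}
	return out
}

// runWaves executes each wave's phases concurrently, waiting for the
// whole wave to finish before starting the next.
func runWaves(batches [][]Phase) {
	for _, batch := range batches {
		var wg sync.WaitGroup
		for _, p := range batch {
			wg.Add(1)
			go func(p Phase) {
				defer wg.Done()
				p.Run()
			}(p)
		}
		wg.Wait()
	}
}

func main() {
	say := func(n string) func() { return func() { fmt.Println(n) } }
	phases := []Phase{
		{Name: "research", Run: say("research")},
		{Name: "design", Run: say("design")},
		{Name: "implement", Deps: []string{"research", "design"}, Run: say("implement")},
	}
	// research and design run in parallel; implement waits for both.
	runWaves(waves(phases))
}
```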

But the most magical day was February 7. I needed an Obsidian CLI — a tool to read, write, and search my personal knowledge vault. Because I had already established the plugin pattern and built the FTS5 + embeddings search for the orchestrator's learnings system, I ported the entire pattern to Obsidian in a single day. Read, write, hybrid search — all working by evening.
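The "hybrid" in hybrid search just means blending keyword ranking with embedding similarity. As a rough sketch of the idea (the 0.5/0.5 weights are arbitrary, the keyword score is a crude stand-in for FTS5's BM25 ranking, and the types are invented for illustration):

```go
package main

import (
	"fmt"
	"math"
	"sort"
	"strings"
)

// Doc pairs a note's text with a precomputed embedding vector.
type Doc struct {
	Path      string
	Text      string
	Embedding []float64
}

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// keywordScore is a crude stand-in for BM25: the fraction of query
// terms that appear anywhere in the document text.
func keywordScore(query, text string) float64 {
	terms := strings.Fields(strings.ToLower(query))
	if len(terms) == 0 {
		return 0
	}
	lower := strings.ToLower(text)
	hits := 0
	for _, t := range terms {
		if strings.Contains(lower, t) {
			hits++
		}
	}
	return float64(hits) / float64(len(terms))
}

// hybridSearch ranks docs by a blend of keyword and embedding scores.
func hybridSearch(query string, queryEmb []float64, docs []Doc) []Doc {
	out := append([]Doc(nil), docs...)
	score := func(d Doc) float64 {
		return 0.5*keywordScore(query, d.Text) + 0.5*cosine(queryEmb, d.Embedding)
	}
	sort.Slice(out, func(i, j int) bool { return score(out[i]) > score(out[j]) })
	return out
}

func main() {
	docs := []Doc{
		{Path: "ai.md", Text: "notes on claude orchestration", Embedding: []float64{1, 0}},
		{Path: "cooking.md", Text: "pasta recipes", Embedding: []float64{0, 1}},
	}
	top := hybridSearch("claude orchestration", []float64{1, 0}, docs)
	fmt.Println(top[0].Path) // prints "ai.md"
}
```

Once a pattern like this exists for one corpus (the orchestrator's learnings), pointing it at another corpus (an Obsidian vault) is mostly plumbing, which is why the port took a day.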

The patterns were reusing themselves. That's when I knew the architecture was right.

Ship the Ugly Version

Via today isn't perfect. The persona selector uses simple keyword matching when it should use semantic similarity. The meta-learnings system captures noise alongside signal. The directory structure sprawls a bit.

But it works. It runs missions. It captures learnings. It orchestrates six plugins across two AI providers. And most importantly, it's producing real value — every mission makes the next one slightly better because intelligence persists.

If I had waited for the perfect architecture — if I hadn't been willing to ship the "ugly" version with keyword matching — I'd still be drawing diagrams. Perfect is the enemy of shipped.

Next in series: How Multi-Agent Orchestration Works

