
The post hit Hacker News at 4:24 AM UTC on a Saturday. "MCPs are dead — CLIs won." No hedging, no question mark. By the time I read it over coffee, it had pulled in dozens of comments and a reference to Peter Steinberger — the creator of OpenClaw — saying essentially the same thing on Lex Fridman's podcast.
I stared at it for a while. Not because it was surprising, but because I'd spent the last three months building a system on that exact premise and never once articulated it as a thesis. I just kept reaching for Go CLIs and never reaching for MCPs. The architecture had made the argument before I did.
Via — the personal intelligence OS I've been writing about in this series — runs 9 plugins across budget tracking, note-taking, task management, content publishing, and intel gathering. It has executed 175 missions across five domains. Every single one of those plugins talks to Claude Code through a CLI binary. Zero MCPs. Zero schemas. Zero translation layers.
I didn't set out to prove anything. I set out to build something that worked.
The Post That Named the Thesis
The Hacker News poster, umairnadeem123, opened with a concrete claim: they'd used OpenClaw with Claude Opus on a 20,000-member subreddit, with only CLI access, and achieved 2 top posts, 3,000+ karma, 70+ waitlist signups, and 300+ inbox messages — all autonomous execution in one week.
"No protocol made that happen," they wrote. "A CLI and a capable model did."
The core argument is deceptively simple. LLMs already think in text. CLIs already speak text. The protocol layer that MCP introduces — JSON-RPC 2.0 over HTTP, with method names, params objects, and id fields — is a translation tax between two systems that were already speaking the same language.
Steinberger made the architectural version of this argument on Lex Fridman's podcast. MCPs suffer from what he called "context pollution" — large structured responses that flood the model's context window, requiring complex filtering logic that becomes its own engineering problem. Sub-agents are a workaround, not a solution. CLIs, by contrast, return text that the model can read natively. No deserialization. No schema validation. No overhead.
What I Actually Built Instead
Via's plugin architecture is almost aggressively simple. Each plugin is a Go CLI binary that lives in ~/skills/ and gets symlinked to ~/bin/. The orchestrator calls them with exec.Command. The interface contract is: text in, text out.
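As a rough sketch of that contract (not Via's actual orchestrator code), the entire integration layer fits in a few lines of Go. The plugin name and arguments below are placeholders; the demo uses echo so the sketch runs anywhere.

```go
package main

import (
	"fmt"
	"os/exec"
)

// runPlugin shells out to a CLI binary and treats its combined output
// as the entire response. Text in, text out: no schema, no envelope.
func runPlugin(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// In Via this would be something like runPlugin("scout", "gather");
	// echo stands in so the example is self-contained.
	out, err := runPlugin("echo", "feed: 3 new items")
	if err != nil {
		fmt.Println("plugin failed:", err)
		return
	}
	fmt.Print(out)
}
```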
Here's what that looks like in practice. When Via runs a content creation mission — say, "scout the latest AI news and write a blog post about it" — the orchestrator decomposes it into phases. Phase 1 calls scout gather, which returns a text feed. Phase 2 reads that feed and selects a story. Phase 3 calls obsidian capture to park notes. Phase 4 writes the article. Each phase is a Claude Code session that inherits the previous phase's output through a shared workspace directory.
No protocol negotiation. No capability discovery. No handshake. The model reads a SKILL.md file that documents the CLI's commands, and it calls them. If the CLI changes, the SKILL.md changes. If the SKILL.md is wrong, the model fails — and that failure is immediately visible in the terminal output, not buried in a protocol error response.
The total interface surface for all 9 plugins is a markdown file per plugin. That's it. That's the entire "protocol."
The Numbers That Made Me Pay Attention
I went back through Via's mission logs to see whether the CLI-only architecture had produced any measurable friction. Out of 175 completed missions, I counted how many failed due to CLI interface issues versus other causes.
CLI-related failures: 7. All seven were the same category — the model hallucinated a flag that didn't exist. scout gather --since 48h when the actual flag is --hours 48. Each time, the error message from the CLI was clear enough that the model self-corrected on the next attempt.
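That self-correction loop falls out of how Go's standard flag package reports unknown flags: the error names the offending flag, so the model has something concrete to retry against. A generic sketch, not Via's actual scout code:

```go
package main

import (
	"flag"
	"fmt"
	"io"
)

// parseGather defines a hypothetical gather subcommand's flags and
// returns the parse error instead of exiting, so callers can inspect it.
func parseGather(args []string) error {
	fs := flag.NewFlagSet("gather", flag.ContinueOnError)
	fs.SetOutput(io.Discard) // suppress usage text; the error alone suffices here
	fs.Int("hours", 24, "look-back window in hours")
	return fs.Parse(args)
}

func main() {
	// A hallucinated flag produces an error that names the flag,
	// which is enough for the model to retry with --hours.
	if err := parseGather([]string{"--since", "48h"}); err != nil {
		fmt.Println(err)
	}
}
```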
For comparison, the learnings system captured 64 AGENT_ISSUE meta-learnings across those same 175 missions. The top issue wasn't interface friction — it was persona-task mismatch, where the orchestrator assigned an architect persona to an implementation task. The CLI layer was the most reliable part of the stack.
Seven failures out of 175 missions is a 4% CLI error rate. And all seven self-corrected. The effective failure rate — missions that actually stalled due to the CLI interface — was zero.

The Counterarguments That Hit
The HN thread wasn't one-sided, and the best counterarguments stuck with me.
One commenter, p_ing, made the user-experience case: "Users don't understand CLI nor want to manage the systems." MCPs provide a user-friendly data access layer for non-technical users. This is true — and completely irrelevant to Via's use case, where the "user" is Claude Code, not a human. But it matters for the broader ecosystem.
The stronger argument was security. Giving an AI agent full CLI access means giving it full system access. Another commenter pointed out that the Reddit agent in the original post had unsupervised access to post, comment, and vote on a public platform. "That's a security nightmare," they wrote. MCPs at least provide sandboxing — controlled permissions, scoped capabilities, a boundary between what the agent wants to do and what it's allowed to do.
This one I can't dismiss. Via's plugins currently run with whatever permissions the user's shell has. The orchestrator doesn't scope access per-mission. If a creative writing mission wanted to, it could call ynab add and move money between budget categories. Nothing prevents it except the model's instruction-following. That's a trust boundary held together by prompt engineering, not architecture.
What MCP Gets Right That I Don't Have
I'm going to say something that undermines the thesis of this article: MCP's permission model is better than mine.
Via relies on two things for safety: Claude Code's built-in permission prompts (which ask before executing commands the user hasn't pre-approved) and the model's own judgment about which tools are appropriate for a given task. In practice, this works — I've never had a budget plugin called during a content mission. But "it works in practice" is the argument people make right before it stops working.
MCP's structured capability declaration — this tool can read but not write, this tool has access to this API scope but not that one — is a real architectural advantage. It's the difference between "the model chooses not to" and "the system prevents it." The first is behavioral. The second is structural.
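For what it's worth, the structural version doesn't require a protocol either. Here is a hypothetical sketch of per-mission scoping in Via's own terms — none of this exists in Via today, and the mission and plugin names are illustrative:

```go
package main

import "fmt"

// mission carries an allowlist of plugins it may invoke. A disallowed
// call fails in code, not in the model's judgment.
type mission struct {
	kind    string
	allowed map[string]bool
}

// call refuses any plugin outside the mission's allowlist before
// anything executes. The actual exec.Command would run past the check.
func (m mission) call(plugin string, args ...string) (string, error) {
	if !m.allowed[plugin] {
		return "", fmt.Errorf("%s mission may not call %s", m.kind, plugin)
	}
	// ...exec.Command(plugin, args...) would run here...
	return "ok", nil
}

func main() {
	content := mission{
		kind:    "content",
		allowed: map[string]bool{"scout": true, "obsidian": true},
	}
	if _, err := content.call("ynab", "add"); err != nil {
		fmt.Println("blocked:", err) // structural, not behavioral
	}
}
```

A dozen lines like these would close most of the gap this section describes, which is part of why I don't think the gap is an argument for the protocol so much as for the discipline.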
I chose behavioral because it was faster to build. I don't regret it. But I notice the gap.
Why CLIs Won Anyway
The permission gap is real. But here's what tips the balance for me.
Every MCP server I've evaluated introduces three categories of friction that CLIs don't:
First, schema maintenance. An MCP server requires a JSON schema for every tool, every parameter, every return type. When the underlying service changes, the schema needs updating. Via's SKILL.md files serve the same purpose — documenting what commands exist and what they do — but they're freeform markdown that the model reads with the same comprehension it applies to any other text. I don't need to keep a schema in sync with a specification. I need to keep a readme in sync with reality.
Second, context window cost. Steinberger's "context pollution" observation matches my experience exactly. MCP tool responses are structured JSON that the model must parse, interpret, and hold in context. CLI output is text that the model reads natively. When scout gather returns 700 items across 10 topics, it comes back as a markdown document — the same format the model already thinks in. No parsing overhead. No structural interpretation layer.
Third, debugging transparency. When a Via mission fails, I can read the terminal output. The exact command that ran, the exact output it produced, the exact error it threw. With MCP, failures happen inside a protocol layer — JSON-RPC error codes, transport-level issues, serialization mismatches. The debugging surface area is larger and less visible.
These three frictions compound. Schema maintenance slows iteration. Context pollution wastes tokens. Opaque debugging slows recovery. Over 175 missions, avoiding these three costs has been worth more than the permission model I gave up.

The Uncomfortable Middle
Here's where I land, and it's less clean than the Hacker News post title suggests.
For agent-to-tool communication where the agent has full system trust — which is Via's entire operating model — CLIs are strictly better. They're simpler, faster to build, easier to debug, and they speak the model's native language. The "protocol" is a markdown file and a binary.
For multi-tenant environments where agents serve untrusted users, MCPs earn their complexity. The permission scoping, capability declaration, and structured interface boundaries solve real problems that CLIs don't address.
The mistake the HN post makes — and the mistake I nearly made by agreeing too quickly — is treating these as competing answers to the same question. They're answers to different questions. "How should a trusted agent talk to tools?" and "How should an untrusted agent be constrained?" are different architectural problems with different appropriate solutions.
Via answers the first question. MCP answers the second. The confusion comes from a moment in AI tooling where nobody's sure which question they're asking.
Honest Limitations
I've been building on the CLI thesis for three months, and this article is the first time I've examined it critically. That timing should make you suspicious. I went looking for validation — the HN post, the mission logs, the error rates — and I found it. Confirmation bias is the oldest debugging failure mode in engineering.
The 4% CLI error rate comes from 175 missions run by a single user on a system that user built. I know the CLI interfaces because I wrote them. A different developer encountering Via's plugins for the first time would produce a very different error rate. The model's ability to self-correct on bad flags depends on clear error messages, which exist because I wrote both the CLI and the error handling. That's a loop, not a generalizable result.
Via currently runs 9 plugins. It's possible that CLI-only works at 9 and breaks at 90. I don't know where the complexity ceiling is because I haven't hit it. The fact that I haven't needed MCPs doesn't prove I won't.
And the security argument is the one I keep coming back to. My system works because I trust the model. If I stopped trusting the model tomorrow — if it hallucinated a destructive command, if a prompt injection reached it through a scouted web page — the CLI architecture would offer zero protection. MCPs would have caught it at the permission boundary. "It hasn't happened yet" is the most dangerous sentence in systems engineering.