I was sitting in a team standup last Wednesday — 9:47 AM, one of those standups that should have been an email — when our lead engineer said something that made me genuinely laugh: "We ripped out all our MCP integrations last month. Best decision we ever made." Fourteen seconds of awkward silence. Then our CTO said, "We are putting them back in next sprint." Even more awkward silence.
That exchange, in miniature, is the entire MCP discourse right now. And a thoughtful blog post that hit 92 points on Hacker News this week argues that both positions are simultaneously correct. Which sounds like fence-sitting until you actually read the argument.
How MCP Went from Messiah to Pariah in Six Months
Six months ago, Model Context Protocol was the hottest thing in AI tooling. Every vendor wanted to sell you an MCP wrapper. Every tutorial started with "first, set up your MCP server." My colleague Rachel, who does developer relations at a mid-size AI startup, told me she was fielding three MCP partnership pitches per day at the peak. "It was like the API gold rush except everyone was selling shovels for a mine that might not exist," she said over a $6.75 cortado.
Then reality hit. Teams started realizing that for most use cases, MCP was just an extra layer between their agent and the API they were already calling. Charles Chen, the author of the blog post, summed it up perfectly: "It is just an API; why do I need a wrapper around that when I can just call the API directly?"
The Influencer Hype Machine
Chen makes a sharp observation about how AI industry discourse actually works in 2026. Much of the landscape is driven by individuals marketing their companies and themselves. The constant need for content creates artificial hype cycles — the same dynamics that killed Digg, incidentally — where influencers grab onto whatever the current zeitgeist is and amplify it past the point of usefulness.
The cycle went: MCP is amazing → MCP is overhead → CLIs are better → MCP is dead. Each phase generated thousands of posts, dozens of conference talks, and approximately zero nuance. Chen compares some of these influencers (including people like Garry Tan and Andrew Ng, who should know better) to people peddling Ivermectin — "simple ignorance" dressed up as authority.
Look, I spent $340 on MCP-related conference tickets in the last year. I am not thrilled about this comparison. But I also cannot argue with it.
The Part Everyone Keeps Missing — Local vs Server MCP
Here is where Chen's argument gets genuinely interesting, and where I think most of the discourse goes wrong. There are actually two very different things called "MCP":
Local MCP over stdio — This is what most developers interact with. Your coding agent talks to a local MCP server via stdin/stdout. For this use case, a CLI tool often makes more sense. The overhead of MCP's structured protocol buys you almost nothing when everything is running on the same machine.
Server MCP over HTTP — This is the enterprise use case. A centralized MCP server that manages tools, prompts, and resources across an organization. And this is where MCP is not just alive — it is genuinely important.
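The two flavors are easy to conflate because the payload is the same in both: MCP speaks JSON-RPC 2.0, and only the transport changes. Here is a schematic sketch of that distinction; the `tools/call` method name follows the MCP spec, but the tool and arguments are made up for illustration:

```python
import json

# One tool invocation, expressed as a JSON-RPC 2.0 request.
# Method name follows the MCP spec; the tool itself is hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}

# Local MCP: newline-delimited JSON written to the server's stdin.
stdio_frame = json.dumps(request) + "\n"

# Server MCP: the same JSON as the body of an HTTP POST to a shared endpoint.
http_body = json.dumps(request)

print(stdio_frame.strip() == http_body)  # → True: same payload, different pipe
```

The point of the sketch is that the protocol itself is not where the value lives. Locally, the envelope is pure overhead around a subprocess call; on a shared server, that same envelope is what lets one endpoint do auth, logging, and routing for every team.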
My friend Derek works on infrastructure at a 200-person company that builds with agentic AI pipelines. Last month, he tried replacing their server-side MCP with direct CLI calls. "It worked for about three days," he told me over lunch. "Then we had 14 different teams running 14 different versions of the same tool with 14 different configurations, and nobody could tell me which team was calling which API or how often." He put MCP back in on day four.
Why Centralization Matters More Than You Think
The CLI-maximalist crowd (and I was one of them, admittedly) tends to ignore the organizational problems that MCP actually solves:
Auth and security. When every agent uses its own CLI to call APIs directly, you end up with API keys scattered across dozens of agent configurations. A centralized MCP server with proper auth means one place to manage credentials, one place to rotate keys, one place to audit access. Anyone who has done incident response knows this is not a theoretical concern.
Telemetry and observability. You cannot optimize what you cannot measure. With direct CLI calls, understanding org-wide tool usage requires instrumenting every single agent. With MCP, you get it for free. Derek told me their MCP telemetry data saved them $14,000 per month by identifying redundant API calls — 247,000 of them — that nobody knew were happening.
Prompts and resources. This is the feature most people do not even know MCP has. Beyond tools, MCP supports prompts and resources — organizational templates, shared context, standardized instructions. This is how you move from cowboy vibe-coding to organizationally aligned agentic engineering.
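All three of these benefits fall out of the same architectural move: putting one gateway between agents and tools. This toy sketch is not a real MCP server and every name in it is hypothetical, but it shows why credentials, telemetry, and shared prompts come almost for free once calls are centralized:

```python
from collections import Counter

class ToolGateway:
    """Toy stand-in for a centralized MCP server: one place for
    credentials, call counts, and org-wide prompt templates."""

    def __init__(self, credentials, prompts):
        self._credentials = credentials  # rotated here, never copied to agents
        self._prompts = prompts          # shared templates, not cowboy prompts
        self.call_counts = Counter()     # telemetry as a side effect of routing

    def call_tool(self, team, tool, handler, **args):
        self.call_counts[(team, tool)] += 1
        # The API key is injected server-side; agents never hold raw keys.
        return handler(api_key=self._credentials[tool], **args)

    def get_prompt(self, name, **fields):
        return self._prompts[name].format(**fields)

gw = ToolGateway(
    credentials={"search": "sk-rotate-me"},
    prompts={"review": "Review {repo} against the org style guide."},
)
result = gw.call_tool("infra", "search",
                      lambda api_key, q: f"results for {q}", q="mcp")
print(gw.call_counts[("infra", "search")])   # → 1
print(gw.get_prompt("review", repo="billing"))
```

Key rotation means editing one dict, and finding redundant calls means reading one `Counter`. The sketch is a few lines; the organizational problems it stands in for are the ones Derek hit on day three.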
The Token Savings Argument Is More Complicated Than It Looks
One of the main arguments for CLIs over MCP is token savings. CLI output is typically more compact than MCP's structured JSON responses. Fair point. But Chen argues — and I think he is right — that people are not thinking this through.
Consider: a CLI that outputs unstructured text still needs to be parsed by the model. The model still needs context about what the tool does, what arguments it accepts, what the output means. With MCP, that context arrives once, as a structured schema; with a CLI, the model has to re-derive it from raw text on every call, which often means more tokens spent on reasoning.
I ran some rough numbers on our own stack. MCP tool calls averaged 1,847 tokens per interaction. Equivalent CLI calls averaged 1,203 tokens for the tool output but 2,100 tokens total when you included the extra prompting needed to interpret unstructured output. Your mileage will vary, but the "CLIs are cheaper" claim deserves more scrutiny than it usually gets.
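You can sanity-check the shape of this argument yourself. The sketch below uses the rough rule of thumb of ~4 characters per token rather than a real tokenizer, and the payloads are invented; only the accounting matters: the CLI's total cost includes the extra prompting needed to explain its output format.

```python
# Back-of-envelope comparison using the common ~4 chars/token heuristic.
# Payloads are made up for illustration; the point is the accounting:
# total cost = tool output + whatever prompting explains that output.

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

mcp_result = '{"temp_c": 21, "conditions": "partly cloudy", "wind_kph": 14}'
cli_result = "21C partly cloudy wind 14kph"

# Structured output: the tool's schema already told the model what each
# field means, so the result stands on its own.
mcp_total = approx_tokens(mcp_result)

# Unstructured output: shorter payload, but you pay for an interpretation
# prompt on top of it.
interpret_prompt = ("The tool prints temperature in Celsius, then conditions, "
                    "then wind speed in km/h, space-separated.")
cli_total = approx_tokens(cli_result) + approx_tokens(interpret_prompt)

print(mcp_total, cli_total)  # CLI payload is smaller; CLI *total* is not
```

None of this proves MCP is always cheaper; it proves the comparison is not settled by eyeballing payload sizes.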
Ephemeral Agent Runtimes — The Real Future
The most forward-looking part of Chen's analysis is about ephemeral agent runtimes. As agents become more sophisticated, they will not just call tools — they will spin up entire environments, execute complex workflows, and tear everything down. MCP provides a framework for this that raw CLI calls simply do not.
Think of it like containers versus running everything on bare metal. Sure, bare metal is faster for any single task. But containers give you isolation, reproducibility, and orchestration. MCP is heading in the same direction for AI tool use — and if you have ever tried to debug an agent that called 47 CLI tools in sequence with no structured logging, you know why that matters.
Sandra from our platform team put it best: "MCP is not dead. The toy version of MCP is dead. The enterprise version is just getting started." I am going to use that line in every meeting for the next six months. (Sorry, Sandra.)
So What Should You Actually Do
Here is my honest, non-hype-cycle-influenced take:
If you are a solo developer or small team: Use CLIs. Seriously. Local AI tool use does not need the overhead of MCP. Direct API calls or simple CLI wrappers will serve you well.
If you are at a company with more than 20 people using AI tools: Invest in server-side MCP. The auth, telemetry, and standardization benefits are real and will save you from chaos as adoption scales.
If you are evaluating MCP products: Ask whether they are solving a local problem or a server problem. If a vendor is selling you local MCP and you could just use a CLI — you are buying a solution to a problem you do not have.
If you are building an AI product: Support both. Your power users want CLIs. Your enterprise customers need MCP. As context windows expand, the structured context that MCP provides becomes more valuable, not less.
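For product builders, "support both" does not have to mean maintaining the logic twice. One pattern is to keep the tool itself transport-agnostic and bolt thin adapters on top. Everything in this sketch is hypothetical, and the MCP-style call shape is simplified, but the structure is the point:

```python
import argparse
import json

def summarize(text: str, max_words: int = 10) -> dict:
    """The actual tool: pure logic, no transport assumptions."""
    words = text.split()
    return {"summary": " ".join(words[:max_words]),
            "truncated": len(words) > max_words}

def handle_mcp_call(params: dict) -> dict:
    """Adapter for an MCP-style structured call (shape is illustrative)."""
    return summarize(params["text"], params.get("max_words", 10))

def run_cli(argv=None):
    """Adapter for power users: the same logic behind a plain CLI."""
    parser = argparse.ArgumentParser(description="Summarize text.")
    parser.add_argument("text")
    parser.add_argument("--max-words", type=int, default=10)
    args = parser.parse_args(argv)
    print(json.dumps(summarize(args.text, args.max_words)))

if __name__ == "__main__":
    run_cli()
```

The CLI adapter is what your power users script against; the structured handler is what a server-side MCP deployment wraps with auth and telemetry. Either way, there is exactly one `summarize` to test.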
The protocol is not dead. The hype is dead. And honestly, that is the best thing that could have happened to it. Now the people building serious infrastructure can do their work without the influencer circus drowning out signal with noise.
MCP is dead. Long live MCP.
Quick Reference — When to Use MCP vs CLI
| Scenario | Best Choice | Why |
|---|---|---|
| Solo developer, local tools | CLI | Less overhead, faster iteration |
| Small team, few integrations | CLI | Simplicity wins at this scale |
| Org with 20+ AI users | Server MCP | Auth, telemetry, standardization |
| Multi-team agentic workflows | Server MCP | Centralized tool management |
| Customer-facing AI product | Both | CLI for devs, MCP for enterprise |
| Compliance-heavy industry | Server MCP | Audit trails, access control |
Save that table somewhere. I have a feeling we are going to be having this exact conversation for another two years, and it is nice to have a cheat sheet that does not come from someone trying to sell you something.
And for the record — our team meeting ended with the CTO winning the argument. We are putting MCP back in. But this time, server-side only. The local MCP servers are getting replaced with CLI wrappers. Both sides got what they wanted. Compromise: the rarest and most beautiful creature in tech.