MCP Changed How We Think About Integrations
Early pain points, rapid progress, and why CLI tools don't make MCP obsolete

With the popularity of projects like OpenClaw, more people are asking whether CLI tools make MCP obsolete. They don't. Both approaches solve real problems, and they complement each other well. But MCP provides something CLI tools can't easily replicate: a structured protocol with typed schemas, which makes it possible to build deterministic workflows and control which tools are available to which agents.
This post covers how MCP changed how we think about integrations at Spawnbase, what's improved, what's still rough, and where CLI tools fit in.
MCP solves the N×M integrations problem by converting it into an N+M model. Instead of every model needing a custom integration for every tool, a tool provider exposes a single MCP server, and any compatible model can use it.
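To make that arithmetic concrete, here's a quick sketch (the counts are hypothetical, just for illustration):

```python
# Hypothetical counts to illustrate the integration math.
models = 10    # AI clients / agents
services = 50  # external tools and platforms

# Without a shared protocol: every model needs a custom
# integration for every service.
point_to_point = models * services   # N x M

# With MCP: each model implements one client, each service
# exposes one server.
shared_protocol = models + services  # N + M

print(point_to_point, shared_protocol)  # 500 vs 60
```

The gap widens as both sides grow, which is why a shared protocol compounds in value.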
Simple idea, but early on nobody knew if it would stick. A year later, MCP has rough edges but real momentum. Here's what we saw building on it.
The Early Days
Getting It Running
To set up an MCP server with your agent, you had to clone a repository from GitHub, install its dependencies, and wire it into your agent's configuration. Authentication was messy: you had to hunt for credentials, create an OAuth app in the service's dashboard, configure client IDs and secrets, or dig up API keys. After all that it would sort of work, but the whole experience was clumsy.
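The wiring typically lived in a client config file. A sketch in the style of Claude Desktop's `mcpServers` config (the server name, paths, and env variable here are hypothetical):

```json
{
  "mcpServers": {
    "github": {
      "command": "node",
      "args": ["/path/to/mcp-server-github/dist/index.js"],
      "env": { "GITHUB_TOKEN": "<paste a personal access token here>" }
    }
  }
}
```

Every server meant another locally installed process, another credential to find, and another config block to keep in sync.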
Using It Day to Day
The quality of many servers wasn't great. Some implemented only a small slice of the underlying API, while others dumped far too much information into the already limited LLM context. I often found myself modifying MCP server code just to make it work well.
The clients weren't much better. Connections were unreliable and would often break, forcing you to reconnect them manually.
Another issue was context pollution. If you connected too many MCP servers, all of their tool definitions were injected into the model's context window, consuming tens of thousands of tokens before you even typed a prompt. That made requests slower and costlier, and reduced the space available for the model's reasoning.
Rapid Progress
Over the last few months, the improvements have been dramatic.
Hosted MCP Servers
You can now find higher-quality MCP servers on platforms like Smithery, and authentication has improved significantly. Many remote servers now handle OAuth end-to-end. You click "connect," authorize in the browser, and you're done. No more going to a service provider's developer portal, creating an OAuth app, copying client IDs and secrets into config files, and hoping you got the redirect URI right.
Reliable MCP Connections
Clients got better too. Early on, connections would drop and you'd have to manually restart them. Now tools like Claude Code manage MCP connections reliably, reconnecting automatically and handling auth refreshes in the background. Not every client is there yet, but the gap is closing fast.
Leaner LLM Context
Agents have also gotten much better at handling large numbers of MCP servers. Instead of injecting every tool into the LLM context, clients now search for tools when needed and only add the matching tools to the context. This allows agents to work efficiently with many MCP servers connected.
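A minimal sketch of that idea, using naive keyword scoring (real clients use embeddings or smarter retrieval; the tool names and descriptions are made up):

```python
# Hypothetical tool registry: name -> description.
TOOLS = {
    "jira_create_issue": "Create a new issue in a Jira project",
    "slack_post_message": "Post a message to a Slack channel",
    "stripe_list_charges": "List recent charges from Stripe",
    "hubspot_update_contact": "Update a contact record in HubSpot",
}

def select_tools(query: str, limit: int = 2) -> list[str]:
    """Return the tools whose descriptions best match the query,
    so only those definitions are injected into the context."""
    words = set(query.lower().split())
    scored = [
        (len(words & set(desc.lower().split())), name)
        for name, desc in TOOLS.items()
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:limit] if score > 0]

print(select_tools("post a summary message to slack"))
```

Instead of paying tokens for every connected server upfront, the client pays only for the handful of tools relevant to the current request.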
Another emerging pattern is code mode, where agents write small scripts that orchestrate multiple tools locally instead of repeatedly passing data through the model context. This cuts token usage significantly for data-heavy workflows.
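To illustrate why code mode saves tokens, here is a sketch of the kind of script an agent might emit. `fetch_invoices` is a hypothetical stand-in for a data-heavy tool call:

```python
def fetch_invoices() -> list[dict]:
    # Stand-in for a data-heavy tool call; imagine thousands of rows.
    return [
        {"customer": "acme", "amount": 1200, "status": "overdue"},
        {"customer": "globex", "amount": 300, "status": "paid"},
        {"customer": "initech", "amount": 450, "status": "overdue"},
    ]

# The script does the heavy lifting outside the context window...
overdue = [i for i in fetch_invoices() if i["status"] == "overdue"]
total = sum(i["amount"] for i in overdue)

# ...and only this one-line summary goes back to the model.
summary = f"{len(overdue)} overdue invoices totaling ${total}"
print(summary)
```

Only the final summary enters the model's context; the raw records never do.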
All of this reduces latency and cost, and frees up more of the context window for reasoning.
The Real Value
The real value MCP provides is the ability to talk to thousands of servers with the same interface.
Ad Hoc Tasks Across Services
You can give an agent access to multiple MCP servers and let it figure out how to combine them based on what you ask.
Ask it to "find all open bugs assigned to me and summarize them in Slack," and it will pull from Jira, format the summary, and post it to Slack without you specifying the steps. Ask it to "check if any customers churned this week and log them in our CRM," and it will query Stripe, cross-reference HubSpot, and update records on its own. The order and combination of tools is determined at runtime by the agent.
Workflow Automation
MCP is not only useful for ad hoc tasks performed through a copilot. It also changes the picture for workflow automation tools like Zapier and n8n. Those tools do two things: provide a UI to build workflows, and maintain hundreds of integrations to connect platforms together. Building the UI is the easy part. The hard part is building and maintaining all those integrations: keeping up with API changes, handling authentication flows, and supporting edge cases across hundreds of services.
MCP shifts that balance. The integration burden moves to the MCP server, and automation tools can focus on the workflow experience instead of spending most of their engineering effort on connectors. If you need to connect a service that isn't supported yet, you can create an MCP server for it instead of waiting for the automation provider to build an integration.
For example:
- When a new Sentry error comes in, summarize the stack trace and create a Jira ticket.
- Every morning, pull yesterday's customer data from Salesforce, generate a report, and send it to a Slack channel.
- When a new HubSpot lead comes in, draft a follow-up email in Gmail.
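The first example can be sketched as a fixed pipeline of tool calls. The functions below are hypothetical stubs; in practice each would be an MCP tool invocation on the relevant server:

```python
def sentry_get_error(error_id: str) -> dict:
    # Stub standing in for a Sentry MCP tool call.
    return {"id": error_id, "stack": "ZeroDivisionError at billing.py:42"}

def summarize(stack: str) -> str:
    # Stub standing in for an LLM summarization step.
    return f"Crash summary: {stack.splitlines()[0]}"

def jira_create_ticket(title: str, body: str) -> dict:
    # Stub standing in for a Jira MCP tool call.
    return {"key": "BUG-101", "title": title, "body": body}

def on_new_sentry_error(error_id: str) -> dict:
    """A fixed workflow: the step order is wired in, not decided
    by an LLM at runtime."""
    error = sentry_get_error(error_id)
    summary = summarize(error["stack"])
    return jira_create_ticket(title=summary, body=error["stack"])

ticket = on_new_sentry_error("evt_123")
print(ticket["title"])
```

The same tools could instead be handed to an agent that decides the order itself, which is the ad hoc mode described earlier.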
At Spawnbase, we use MCP as a backbone for workflows. You can chain together MCP tools in a fixed workflow or make them available for an agent to determine the flow at runtime.
Current Limitations
While the MCP ecosystem has evolved quickly, it still has limitations for acting as an integration layer. I do expect rapid progress in these areas.
Long-Running Agents
Agents can now work autonomously for longer periods and can be scheduled to run daily or weekly. However, many MCP servers still require you to refresh credentials through a browser-based OAuth flow, which can get in the way of building reliable automations.
At Spawnbase we handle this by proactively refreshing connections so users don't have to manually reconnect agents. This is a use case that MCP standards could evolve to support better.
Inconsistent MCP Server Implementations
There are now many MCP servers available, but their quality and implementations vary significantly.
- Some servers are hosted, others must be run locally.
- Some support OAuth, others require digging through settings to find an API key.
- Some are actively maintained by a corporate team, others were quickly written as side projects and never updated.
- Some return JSON payloads, others return formatted text.
What the ecosystem really needs is a curated registry of high-quality MCP servers so developers can easily find reliable integrations.
No Support for Triggers
MCP is a request-response protocol. A client can call tools on a server, but a server has no way to notify a client when something happens. There's no equivalent of a webhook.
This matters for automation. Many useful workflows are event-driven: a new support ticket comes in, a deployment fails, a payment is received. Today, an MCP server can't push that event to a connected client. The client has to poll, or the trigger has to live outside MCP entirely.
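A sketch of the polling fallback, with a hypothetical `list_tickets` stub in place of a real MCP tool call:

```python
def list_tickets() -> list[dict]:
    # Stand-in for an MCP tool that lists support tickets.
    return [{"id": 1, "status": "open"}, {"id": 2, "status": "closed"}]

seen: set[int] = set()

def poll_once() -> list[dict]:
    """One poll cycle: return only tickets not seen before.
    A scheduler would call this on an interval, since the server
    cannot push events to us."""
    new = [t for t in list_tickets() if t["id"] not in seen]
    seen.update(t["id"] for t in new)
    return new

first = poll_once()   # both tickets are new on the first pass
second = poll_once()  # nothing new on the second pass
print(len(first), len(second))
```

Polling works, but it adds latency between the event and the reaction and wastes calls when nothing has changed, which is exactly what server-initiated notifications would fix.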
For MCP to work well as an automation backbone, it will need some form of server-initiated notifications. This is already on MCP's roadmap, but not a top priority. Until then, platforms that support triggers have to build that capability outside the protocol.
Where This Is Going
The Incentive for MCP Adoption
Zooming out, another interesting shift is how companies are opening up their APIs. In the pre-AI world some platforms were fairly restrictive about what they exposed programmatically. But with the rise of agents and MCP, companies now have a strong incentive to make their products usable by agents.
Software is changing. Agents are beginning to orchestrate software, and this is only the beginning. If an agent cannot interact with your product easily, there will likely be alternatives that it can use.
That dynamic is already pushing companies to create their own official MCP servers, and adoption of the protocol is growing quickly.
MCP at Internet Scale
At first I thought MCP might look more like npm packages: a central directory where people publish integrations, some good and some bad.
But the scale it's growing at suggests something different. There is no universal catalog of HTTP endpoints, and MCP is following a similar path: no single registry, but a protocol that becomes ubiquitous.
In late 2025, the protocol was donated to the Agentic AI Foundation under the Linux Foundation with backing from companies like Anthropic, OpenAI, Microsoft, and Google. MCP has become a neutral industry standard for AI communication.
MCP vs CLI Tools
While the MCP ecosystem has been growing rapidly, it's not the only way to build integrations. Agents are excellent at using CLI tools, and projects like OpenClaw are making this approach more accessible. Instead of running a persistent server, each integration is a standalone CLI tool that an agent can invoke directly. CLI tools are easier to package and distribute, they don't need you to manage server connections or persistent state, and they follow a straightforward input/output model that agents handle naturally.
The tradeoff is that CLI tools need a sandboxed compute environment to run in, which adds infrastructure overhead. MCP servers can run remotely and be shared across clients without each client needing its own execution environment.
But the bigger difference is structural. MCP is a typed protocol. Every tool exposes an input schema that describes exactly what parameters it accepts, what types they are, and which ones are required. This matters for two reasons:
Deterministic workflows. MCP tools declare their input and output schemas upfront. You know exactly what parameters a tool accepts, what types they are, and what it returns. This means you can wire tools together in a fixed pipeline without an LLM interpreting each step. A workflow builder can show the user exactly what fields need to be filled in, catch misconfigurations at build time, and prevent invalid calls from ever reaching the external service. With CLI tools, you find out the input was wrong when the command fails.
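Here's a sketch of that build-time check, validating arguments against an MCP-style input schema. This uses a simplified subset of JSON Schema and a hypothetical tool definition; a real client could run a full JSON Schema validator:

```python
# Hypothetical MCP tool definition with a declared input schema.
TOOL_SCHEMA = {
    "name": "jira_create_issue",
    "inputSchema": {
        "type": "object",
        "properties": {
            "project": {"type": "string"},
            "title": {"type": "string"},
            "priority": {"type": "string"},
        },
        "required": ["project", "title"],
    },
}

PY_TYPES = {"string": str, "number": (int, float), "boolean": bool}

def validate(args: dict, schema: dict) -> list[str]:
    """Return a list of problems; an empty list means the call is
    valid and safe to dispatch."""
    errors = []
    for field in schema.get("required", []):
        if field not in args:
            errors.append(f"missing required field: {field}")
    for field, value in args.items():
        spec = schema["properties"].get(field)
        if spec is None:
            errors.append(f"unknown field: {field}")
        elif not isinstance(value, PY_TYPES[spec["type"]]):
            errors.append(f"{field}: expected {spec['type']}")
    return errors

print(validate({"project": "OPS"}, TOOL_SCHEMA["inputSchema"]))
```

A workflow builder can run exactly this kind of check as the user fills in fields, so an invalid call never reaches the external service.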
Access control. MCP makes it straightforward to control which tools are available to a given agent. You can scope an agent's access to a specific set of MCP servers, or even specific tools within a server, and enforce that at the platform level. With CLI tools, access control means restricting which binaries an agent can execute in its sandbox, which is coarser and harder to audit.
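A sketch of what platform-level scoping can look like: each agent gets an explicit allowlist of tools, checked before any call is dispatched (agent and tool names are hypothetical):

```python
# Per-agent allowlists of MCP tools, enforced by the platform.
AGENT_SCOPES = {
    "support-bot": {"zendesk_reply", "kb_search"},
    "billing-bot": {"stripe_list_charges", "stripe_refund"},
}

def call_tool(agent: str, tool: str, args: dict) -> str:
    """Dispatch a tool call only if the agent is scoped to it."""
    allowed = AGENT_SCOPES.get(agent, set())
    if tool not in allowed:
        raise PermissionError(f"{agent} may not call {tool}")
    # Stand-in for the real MCP tool invocation.
    return f"dispatched {tool} for {agent}"

print(call_tool("billing-bot", "stripe_refund", {"charge": "ch_1"}))
```

Because the check happens at the tool level, every allowed and denied call is easy to log and audit, which is much harder to do when the boundary is "which binaries exist in the sandbox."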
None of this means CLI tools are worse. For ad hoc tasks where an agent is figuring things out on the fly, CLI tools are great. They're simple, portable, and agents are already good at using them. The two approaches will likely coexist: MCP for structured, always-on integrations where you want typed schemas and access control; CLI tools for portable, self-contained tools where simplicity matters more than structure.
Conclusion
MCP is still early, and the ecosystem has rough edges. But the pace of improvement over the last year has been remarkable. It turned integrations from something every platform had to build and maintain into a shared protocol that agents compose at runtime.
The rise of CLI tools doesn't change that. They solve a different problem. MCP gives you typed schemas, deterministic workflows, and access control. CLI tools give you simplicity and portability. Most real-world agent platforms will use both.
At Spawnbase, our platform is MCP-native: you can connect to thousands of MCP servers, with the protocol as a first-class capability, making it possible to build a wide range of agentic and automated workflows quickly.