What is MCP (Model Context Protocol)?
The Model Context Protocol (MCP) is an open standard — originally created by Anthropic — that defines how AI models communicate with external tools, data sources, and services. Think of it as a USB-C port for AI: a single, universal interface that any AI client can plug into to access any capability you expose.
Before MCP, every AI integration was bespoke. If you wanted Claude to query your database, you wrote a custom function. If you wanted Cursor to manage your tasks, you wrote a different custom function. MCP replaces that with a standardized server that any compatible client can discover and use — no custom glue code per client.
Why MCP Matters in 2026
AI agents in 2026 are no longer just chatbots. They manage projects, write and deploy code, handle customer support, and orchestrate multi-step workflows. But agents are only as useful as the tools they can access. An agent without tools is just a text generator.
MCP matters because it solves the integration problem at scale. Instead of building N integrations for N clients, you build one MCP server and every MCP-compatible client — Claude Desktop, Cursor, OpenCode, Claude Code, Windsurf, custom agents — gets access instantly. The ecosystem is growing fast: as of early 2026, Claude, Cursor, Windsurf, Cline, and dozens of other tools support MCP natively.
For businesses, this means your internal tools, databases, and APIs can be exposed to AI agents through a single protocol. For developers, it means the MCP servers you build today will work with AI clients that don't even exist yet.
MCP Architecture: The 3 Primitives
Every MCP server exposes three types of capabilities. Understanding these is the foundation of everything that follows.
Tools
Tools are functions that the AI can call. They take structured input (defined with JSON Schema, usually via Zod), perform some action, and return a result. Examples: create a task, query a database, send an email, deploy a release. Tools are the most common primitive — if you're building an MCP server, you're probably building tools.
Resources
Resources provide read-only data that the AI can pull into its context. They're identified by URIs (like myapp://projects/123/tasks) and return structured content. Resources are useful for giving the AI background information — project state, documentation, configuration — without the AI needing to "call" anything.
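Resource URIs parse with the standard WHATWG URL class, which makes routing inside a server straightforward. A small sketch — the myapp:// scheme is the example URI from above; the parsing behavior itself is standard JavaScript:

```typescript
// Parse a custom-scheme resource URI into its routing parts
const uri = new URL("myapp://projects/123/tasks");

const scheme = uri.protocol; // "myapp:"
const root = uri.host;       // "projects"
const path = uri.pathname;   // "/123/tasks"

// A server can route on these parts, e.g. extract the project ID
const projectId = path.split("/")[1]; // "123"

console.log(scheme, root, projectId);
```

Routing on `host` and `pathname` keeps resource handlers as simple as HTTP route handlers.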
Prompts
Prompts are reusable message templates that clients can surface to users. They let you package common workflows — like "review this PR" or "plan next sprint" — as named, parameterized templates that the AI can execute with a single command.
Building Your First MCP Server (TypeScript)
Project Setup
Start by creating a new TypeScript project and installing the MCP SDK:
```shell
mkdir my-mcp-server && cd my-mcp-server
bun init -y
bun add @modelcontextprotocol/sdk zod
```
The @modelcontextprotocol/sdk package gives you the server framework. zod handles input validation and generates JSON Schema automatically.
Defining a Tool with Zod
Let's build a simple weather tool. Create src/index.ts:
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({
  name: "weather-server",
  version: "1.0.0",
});

// Define a tool with Zod schema for input validation
server.tool(
  "get_weather",
  "Get current weather for a city",
  {
    city: z.string().describe("City name, e.g. 'Toronto'"),
    units: z
      .enum(["celsius", "fahrenheit"])
      .default("celsius")
      .describe("Temperature units"),
  },
  async ({ city, units }) => {
    // In production, call a real weather API here
    const temp = units === "celsius" ? "18°C" : "64°F";
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            city,
            temperature: temp,
            condition: "Partly cloudy",
            humidity: "62%",
          }),
        },
      ],
    };
  }
);
```
The Zod schema does double duty: it validates inputs at runtime and generates the JSON Schema that clients use for discovery. The .describe() calls become the parameter descriptions that AI models read to understand how to call your tool.
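For reference, the city/units schema above compiles to JSON Schema roughly like this — a sketch of the generated output; the exact envelope depends on the SDK and Zod versions:

```jsonc
{
  "type": "object",
  "properties": {
    "city": {
      "type": "string",
      "description": "City name, e.g. 'Toronto'"
    },
    "units": {
      "type": "string",
      "enum": ["celsius", "fahrenheit"],
      "default": "celsius",
      "description": "Temperature units"
    }
  },
  "required": ["city"]
}
```

This is what a connecting client actually sees during tool discovery — another reason the `.describe()` strings matter.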
Adding a Resource
Resources give AI clients read-only access to data. Here's how to expose a project's configuration:
```typescript
server.resource(
  "project-config",
  "config://project",
  async (uri) => ({
    contents: [
      {
        uri: uri.href,
        mimeType: "application/json",
        text: JSON.stringify({
          name: "my-app",
          version: "2.1.0",
          environment: "production",
          features: ["auth", "payments", "notifications"],
        }),
      },
    ],
  })
);
```
Adding a Prompt
Prompts are reusable templates. Here's one that helps an agent review a deploy:
```typescript
server.prompt(
  "review-deploy",
  "Review a deployment before shipping",
  { version: z.string().describe("Version being deployed") },
  ({ version }) => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: [
            `Review the deployment plan for version ${version}.`,
            "Check for: breaking changes, missing migrations,",
            "environment variable updates, and rollback plan.",
            "Summarize risks and give a go/no-go recommendation.",
          ].join("\n"),
        },
      },
    ],
  })
);
```
Starting the Server
For local development with stdio transport (what Claude Desktop and most clients use):
```typescript
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const transport = new StdioServerTransport();
await server.connect(transport);
console.error("Weather MCP server running on stdio");
```
Note the console.error — MCP uses stdout for protocol messages, so your logs must go to stderr.
Deployment Options
Cloudflare Workers (Recommended)
This is what we use at Snowlabs for our production MCP servers. Cloudflare Workers give you global edge deployment, zero cold starts, and built-in KV/Vectorize/AI bindings. The MCP SDK has first-class support for the Workers runtime.
Here's a real wrangler.jsonc from a production MCP server:
```jsonc
{
  "name": "my-mcp-server",
  "main": "src/index.ts",
  "compatibility_date": "2025-04-03",
  "compatibility_flags": ["nodejs_compat"],
  // MCP servers use Durable Objects for session state
  "durable_objects": {
    "bindings": [
      {
        "name": "MCP_OBJECT",
        "class_name": "McpObject"
      }
    ]
  },
  "migrations": [
    {
      "tag": "v1",
      "new_classes": ["McpObject"]
    }
  ],
  // Optional: KV for caching, AI for embeddings
  "kv_namespaces": [
    { "binding": "CACHE_KV", "id": "abc123" }
  ],
  "ai": { "binding": "AI" }
}
```
Your Worker entry point extends the McpAgent class from Cloudflare's agents package:
```typescript
import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

export class McpObject extends McpAgent {
  server = new McpServer({
    name: "my-mcp-server",
    version: "1.0.0",
  });

  async init() {
    // Register tools, resources, prompts here
    this.server.tool("get_weather", /* ... */);
  }
}

export default {
  async fetch(request: Request, env: Env) {
    // Route /sse and /mcp to the Durable Object
    const url = new URL(request.url);
    if (url.pathname === "/sse" || url.pathname === "/mcp") {
      const id = env.MCP_OBJECT.idFromName("default");
      const obj = env.MCP_OBJECT.get(id);
      return obj.fetch(request);
    }
    return new Response("MCP Server", { status: 200 });
  },
};
```
Deploy with a single command:
```shell
bunx wrangler deploy
```
Node.js Server
If you prefer a traditional server, the stdio transport works with any Node.js host. For HTTP-based transport, you can use the SSE adapter with Express or any HTTP framework:
```typescript
import express from "express";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

const app = express();

// Track live transports by session so POSTed messages reach the right one
const transports: Record<string, SSEServerTransport> = {};

app.get("/sse", async (req, res) => {
  const transport = new SSEServerTransport("/messages", res);
  transports[transport.sessionId] = transport;
  await server.connect(transport);
});

app.post("/messages", async (req, res) => {
  // Look up the transport for this session and hand it the incoming message
  const transport = transports[req.query.sessionId as string];
  if (!transport) {
    res.status(400).send("Unknown session");
    return;
  }
  await transport.handlePostMessage(req, res);
});

app.listen(3001);
```
Docker
For containerized deployments, package your Node.js server in a Dockerfile. The stdio transport works with any container orchestration. For remote access, use the SSE transport behind a reverse proxy with TLS.
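A minimal Dockerfile might look like the following sketch — the base image, paths, and the assumption of a `build` script that compiles src/ to dist/ are all placeholders to adapt to your setup:

```dockerfile
FROM node:22-slim
WORKDIR /app

# Install dependencies first to take advantage of layer caching
COPY package.json package-lock.json ./
RUN npm ci

# Compile TypeScript (assumes a "build" script producing dist/)
COPY . .
RUN npm run build

# stdio transport: the orchestrator attaches to stdin/stdout.
# For SSE, EXPOSE a port and start the HTTP server instead.
CMD ["node", "dist/index.js"]
```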
Connecting to Claude, Cursor, and Other Clients
Claude Desktop
Add your server to ~/Library/Application Support/Claude/claude_desktop_config.json:
```json
{
  "mcpServers": {
    "weather": {
      "command": "bun",
      "args": ["run", "/path/to/my-mcp-server/src/index.ts"]
    }
  }
}
```
For remote servers (like Cloudflare Workers), use the URL directly:
```json
{
  "mcpServers": {
    "weather": {
      "url": "https://my-mcp-server.workers.dev/sse"
    }
  }
}
```
Cursor
In Cursor, go to Settings > MCP Servers and add your server. Cursor supports both stdio (local) and SSE (remote) transports:
```jsonc
// .cursor/mcp.json
{
  "mcpServers": {
    "weather": {
      "url": "https://my-mcp-server.workers.dev/sse"
    }
  }
}
```
Claude Code / OpenCode
For CLI-based agents like Claude Code, add the server to your project's .mcp.json or the global config at ~/.claude.json:
```json
{
  "mcpServers": {
    "weather": {
      "type": "url",
      "url": "https://my-mcp-server.workers.dev/mcp"
    }
  }
}
```
Once configured, the agent discovers your tools automatically. You'll see them listed when the agent starts up, and it can call them as needed during any conversation.
Real-World MCP Server: VersionPill's 17-Tool MCP
To show what a production MCP server looks like at scale, here's VersionPill's MCP server — a product management tool we built at Snowlabs. It runs on Cloudflare Workers and exposes 17 tools that let AI agents manage an entire product lifecycle.
Here's a sampling of the tools:
- context — Pull project state, current sprint, docs, and diffs in various modes (state, work, full, smart)
- task — Create, update, move, estimate, groom, relate, and archive tasks with full backlog management
- release — Smart release planning that groups bugs into patches, features into minors, and detects breaking changes for majors
- brain_dump — End-of-session sync that persists discoveries, blockers, and follow-ups as searchable knowledge
- brain — Semantic search across all project knowledge (memories, sessions, decisions, docs, tasks, releases)
- memory — Store, recall, correct, and forget project-specific memories with vector embeddings
- decision — Propose, check, and resolve architectural decisions with full history
- learn — Record mistakes, patterns, and checklists that surface as preflight checks on future tasks
The key insight: each tool is focused and composable. The task tool doesn't try to do releases. The release tool doesn't try to manage memories. The agent composes them together based on what it needs.
This architecture lets a single AI agent — Claude Code, for example — act as a full product manager. It starts a session, checks context, works through tasks, records decisions, and dumps learnings at the end. All through MCP tools, all stored in Convex with vector search for semantic retrieval.
Best Practices
Keep Tools Focused and Composable
Each tool should do one thing well. Don't build a do_everything tool. Build create_task, update_task, move_task. AI models are good at composing multiple small tools — better, in fact, than navigating complex multi-mode tools.
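As an illustration — the names and the in-memory store below are hypothetical, not part of any SDK — here is the shape of three focused task tools that an agent can chain:

```typescript
// Hypothetical in-memory task store, to illustrate focused, composable tools
type Task = { id: number; title: string; column: string };
const tasks = new Map<number, Task>();
let nextId = 1;

// Each tool does exactly one thing; the agent composes them as needed
function createTask(title: string): Task {
  const task = { id: nextId++, title, column: "backlog" };
  tasks.set(task.id, task);
  return task;
}

function updateTask(id: number, title: string): Task {
  const task = tasks.get(id);
  if (!task) throw new Error(`Task ${id} not found`);
  task.title = title;
  return task;
}

function moveTask(id: number, column: string): Task {
  const task = tasks.get(id);
  if (!task) throw new Error(`Task ${id} not found`);
  task.column = column;
  return task;
}

// An agent chains the three: create, refine, then promote
const t = createTask("Draft release notes");
updateTask(t.id, "Draft v2.1 release notes");
moveTask(t.id, "in-progress");
console.log(tasks.get(t.id));
```

A single `manage_task` tool with a `mode` parameter could do all of this, but each mode dilutes the description the model reads — three sharp descriptions beat one sprawling one.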
Use Zod for Everything
Zod schemas are your contract with AI clients. Use .describe() on every field — these descriptions are what the AI reads to understand your tool. Be specific: "City name, e.g. 'Toronto'" is better than "city". Use enums where possible to constrain inputs.
Rate Limiting and Auth
For remote MCP servers, always add authentication. The common pattern is OAuth 2.0 or bearer tokens in the initial connection handshake. Cloudflare Workers make this straightforward with the workers-oauth-provider package for full OAuth flows, or simple API key validation in the fetch handler.
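The simple API-key variant can be sketched as a gate at the top of a Worker-style fetch handler — the header convention and the `MCP_API_KEY` binding are assumptions, not SDK requirements:

```typescript
// Hypothetical API-key gate for a Worker-style fetch handler
function checkAuth(request: Request, expectedKey: string): Response | null {
  const header = request.headers.get("Authorization") ?? "";
  const token = header.replace(/^Bearer\s+/i, "");
  if (token !== expectedKey) {
    // Returning a Response short-circuits the request before any tool runs
    return new Response("Unauthorized", { status: 401 });
  }
  return null; // null means: authorized, continue to MCP routing
}

// Usage inside fetch(request, env):
//   const denied = checkAuth(request, env.MCP_API_KEY);
//   if (denied) return denied;
const denied = checkAuth(new Request("https://example.com/mcp"), "secret");
console.log(denied?.status); // 401 — no Authorization header was sent
```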
Rate limiting matters especially for tools that call external APIs or perform writes. Use Cloudflare's built-in rate limiting or implement a simple token bucket in KV.
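The token bucket mentioned above fits in a few lines of plain TypeScript — the capacity and refill numbers here are arbitrary, and a KV-backed version would persist `tokens` and `lastRefill` per client instead of keeping them in memory:

```typescript
// Simple in-memory token bucket: `capacity` tokens, refilled at `refillPerSec`
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSec: number,
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the call is allowed, false if rate-limited
  tryRemove(now: number = Date.now()): boolean {
    // Refill proportionally to elapsed time, capped at capacity
    const elapsed = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Burst of 2 requests, refilling 1 token per second
const bucket = new TokenBucket(2, 1, 0);
console.log(bucket.tryRemove(0));    // true  (1 token left)
console.log(bucket.tryRemove(0));    // true  (0 tokens left)
console.log(bucket.tryRemove(0));    // false (bucket empty)
console.log(bucket.tryRemove(1000)); // true  (1 token refilled after 1s)
```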
Testing with the MCP Inspector
The MCP Inspector is an interactive debugging tool that lets you connect to any MCP server and test tools, resources, and prompts manually:
```shell
bunx @modelcontextprotocol/inspector
```
Point it at your server (stdio or SSE) and you get a UI to call each tool with sample inputs, inspect responses, and verify your schemas. Use it before connecting to Claude or Cursor — it catches schema issues, missing descriptions, and runtime errors early.
Error Handling
Return clear, structured error messages. The AI will read them and adjust its approach. Instead of throwing generic errors, return content with isError: true and a human-readable message explaining what went wrong and how to fix it.
```typescript
server.tool("create_task", /* schema */, async (input) => {
  if (!input.title.trim()) {
    return {
      content: [{ type: "text", text: "Title cannot be empty" }],
      isError: true,
    };
  }
  // ... create the task
});
```
What's Next: MCP + AI Agents
MCP is still early, but it's becoming the de facto standard for AI-tool integration. Here's where things are heading:
- Streamable HTTP transport is replacing SSE as the default for remote servers — better bidirectional communication and simpler infrastructure
- Tool composition — agents will chain tools across multiple MCP servers in a single workflow, enabling cross-product orchestration
- Auth standardization — OAuth 2.1 flows are becoming the standard, with per-tool permission scopes so agents only access what they need
- MCP marketplaces — expect directories of MCP servers that agents can discover and connect to dynamically, similar to app stores
The developers building MCP servers today are building the infrastructure that AI agents will run on tomorrow. If your product has an API, it should have an MCP server. If you're building internal tools, MCP is the fastest way to make them agent-accessible.