AI voice · March 11, 2026 · 16 min read

MCP Server Integrations Explained: Connecting AI Tools to Your Data

What MCP (Model Context Protocol) servers are, how they work, and how to build one. Real example from HeySeo's integration with Claude, Cursor, and Windsurf.

Loic Bachellerie

Senior Product Engineer

Introduction

Imagine opening Claude and asking it: "Which landing pages are losing the most organic traffic this week?" and getting back a real answer - with specific URLs, percentage drops, and the keywords responsible. Not a generic response about SEO. Your actual data.

That is what MCP unlocks. The Model Context Protocol is the missing layer between AI tools and the systems your business actually runs on. I built the MCP server for HeySeo, an SEO analytics SaaS, and it changed how I use Claude, Cursor, and Windsurf every single day.

In this guide I will walk through what MCP is, how the architecture works, and how to build one from scratch - with the real TypeScript code I use in production.

What Is MCP?

MCP, short for Model Context Protocol, is an open standard created by Anthropic in late 2024. The core idea is simple: give AI assistants a standardized way to call tools and read data from external systems.

Before MCP, every tool integration was bespoke. You would write a custom Claude plugin, a Cursor extension, or a Windsurf integration separately. Each had its own API contract, its own authentication scheme, its own format for passing context. It was the same duplication problem that REST API clients had before OpenAPI - and MCP solves it the same way: with a shared protocol.

An MCP server exposes tools, resources, and prompts over a standard JSON-RPC interface. An AI client (Claude Desktop, Cursor, Windsurf, or your own app) connects to that server and uses whatever capabilities it exposes. Build the server once, connect every client.

What you can expose via MCP:

  • Tools - callable functions the AI can invoke (fetch analytics, search, run queries)
  • Resources - readable data sources the AI can subscribe to (live metrics, documents)
  • Prompts - pre-built prompt templates your users can invoke by name
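Under the hood, every one of these capabilities travels over JSON-RPC 2.0. Here is a sketch of what a tool invocation looks like on the wire - the method name comes from the MCP spec, while the tool name and arguments are illustrative, borrowed from the HeySeo example later in this post:

```typescript
// Illustrative JSON-RPC 2.0 messages for an MCP tool invocation.
// "tools/call" is the MCP method; "get_top_keywords" is a hypothetical
// tool from this article, not part of the protocol itself.

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

const request: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_top_keywords",
    arguments: { siteUrl: "https://example.com", limit: 5 },
  },
};

// The server replies with a result whose content array mirrors
// what a tool handler returns to the SDK.
const response = {
  jsonrpc: "2.0" as const,
  id: 1,
  result: {
    content: [{ type: "text", text: '{"keywords":[]}' }],
  },
};
```

The client never needs to know how the server produced the result - only that the request and response follow this shape.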

Why MCP Matters for Builders

If you are building a SaaS product, an internal tool, or even a personal productivity setup, MCP is the most practical way to connect AI to the data you own.

Before MCP, I would copy-paste data into Claude's context window. Monthly traffic report? Export to CSV, paste the top rows, ask questions. Tedious, lossy, and capped at whatever fits in context.

After MCP, I ask Claude directly. The AI calls my MCP server, which queries the right database tables, runs the analysis, and returns structured data. Claude then synthesizes it into an answer. The whole loop takes seconds.

The shift is significant in three ways:

  • Freshness - the AI sees live data, not a stale export from last Tuesday
  • Depth - the AI can ask follow-up queries without me manually fetching more data
  • Consistency - the same server works across every AI tool I use

For HeySeo specifically, this meant our users could connect Claude or Cursor to their own site's analytics and get genuinely contextual SEO advice - not generic best practices.

Architecture Overview

Before writing any code, it helps to understand how the pieces connect.

MCP Architecture (diagram): how AI clients connect to your data through a standard protocol.

  • AI clients - Claude Desktop, Cursor, Windsurf, Slack (via bot), and custom apps
  • MCP server - exposes Tools (callable functions), Resources (readable data streams), and Prompts (named prompt templates), speaking JSON-RPC 2.0
  • Your data - PostgreSQL / Firestore, Google Search Console, analytics APIs, internal APIs, file systems
  • Transports - stdio, HTTP/SSE, or WebSocket between client and server

The protocol supports two transport modes:

  • stdio - the client spawns the server as a subprocess and communicates over stdin/stdout. Fast, zero network overhead, ideal for local developer tools like Claude Desktop, Cursor, and Windsurf.
  • HTTP with SSE - the server runs as a standard HTTP service. Better for production deployments where multiple users need concurrent access, or where the server must be hosted separately from the client.

For HeySeo, we use stdio for developers connecting through their IDE and HTTP/SSE for the production server that our Slack bot connects to.
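The SDK handles transport framing for you, but it helps to know that stdio transport is conceptually just newline-delimited JSON-RPC: one JSON message per line on stdin/stdout. A simplified sketch (not the SDK's actual implementation) that also shows why stray stdout output would corrupt the stream:

```typescript
// Conceptual sketch of stdio framing: one JSON-RPC document per line.
// Simplified for illustration; the SDK's transport does this for you.

function frame(message: object): string {
  // The serialized message must not contain raw newlines
  return JSON.stringify(message) + "\n";
}

function parseFrames(buffer: string): { messages: object[]; rest: string } {
  const lines = buffer.split("\n");
  const rest = lines.pop() ?? ""; // incomplete trailing chunk, kept for later
  const messages = lines
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line));
  return { messages, rest };
}

const wire = frame({ jsonrpc: "2.0", id: 1, method: "tools/list" });
// A partial second message is held back until its newline arrives
const { messages, rest } = parseFrames(wire + '{"jsonrpc":"2.0"');
```

Any non-JSON line you print to stdout lands in the middle of this stream and breaks the parser - which is why logging must go to stderr in stdio servers, as covered later in the deployment section.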

Building an MCP Server Step by Step

The official SDK for TypeScript is @modelcontextprotocol/sdk. It handles the protocol framing, connection lifecycle, and type safety, so you can focus on your actual business logic.

Install Dependencies

npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node tsx

Initialize the Server

// src/server.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
 
const server = new McpServer({
  name: "heyseo",
  version: "1.0.0",
});

The McpServer class is the foundation. You register tools and resources on it, then attach a transport.

Register Your First Tool

A tool is a function the AI can call with typed parameters. Here is a real tool from HeySeo that fetches top-performing keywords for a site:

// src/tools/top-keywords.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
import { getTopKeywords } from "../data/keywords.js";
 
export function registerTopKeywordsTool(server: McpServer) {
  server.tool(
    "get_top_keywords",
    "Fetch the top-ranking keywords for a site, with click and impression data from Google Search Console.",
    {
      siteUrl: z
        .string()
        .url()
        .describe("The verified site URL, e.g. https://example.com"),
      limit: z
        .number()
        .int()
        .min(1)
        .max(100)
        .default(20)
        .describe("Number of keywords to return"),
      dateRange: z
        .enum(["7d", "28d", "90d"])
        .default("28d")
        .describe("Date range for the data"),
    },
    async ({ siteUrl, limit, dateRange }) => {
      const keywords = await getTopKeywords({ siteUrl, limit, dateRange });
 
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(keywords, null, 2),
          },
        ],
      };
    }
  );
}

The Zod schema does double duty: it validates inputs at runtime and generates the JSON Schema that the AI client uses to understand what parameters the tool accepts. This is one of those details that matters in practice - if your schema is vague, the AI will call your tool with bad arguments.
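For reference, the Zod schema above compiles to roughly the following JSON Schema - hand-written here for illustration (the SDK generates it automatically, and its exact output may differ in detail):

```typescript
// Approximately the JSON Schema the client receives for get_top_keywords.
// Hand-written for illustration; the SDK derives it from the Zod schema.
const inputSchema = {
  type: "object",
  properties: {
    siteUrl: {
      type: "string",
      format: "uri",
      description: "The verified site URL, e.g. https://example.com",
    },
    limit: {
      type: "integer",
      minimum: 1,
      maximum: 100,
      default: 20,
      description: "Number of keywords to return",
    },
    dateRange: {
      type: "string",
      enum: ["7d", "28d", "90d"],
      default: "28d",
      description: "Date range for the data",
    },
  },
  required: ["siteUrl"],
} as const;
```

This is the document the AI model actually reads when deciding how to call your tool - which is why descriptive `.describe()` strings pay off.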

Register a Resource

Resources are different from tools. A tool is imperative (do something). A resource is declarative (here is data you can read). Think of it as an RSS feed the AI can subscribe to.

// src/resources/site-overview.ts
import {
  McpServer,
  ResourceTemplate,
} from "@modelcontextprotocol/sdk/server/mcp.js";
import { getSiteOverview } from "../data/overview.js";
 
export function registerSiteOverviewResource(server: McpServer) {
  server.resource(
    "site-overview",
    // A ResourceTemplate makes {siteUrl} a variable the client can fill in;
    // the SDK extracts it from the requested URI for us
    new ResourceTemplate("heyseo://sites/{siteUrl}/overview", {
      list: undefined,
    }),
    async (uri, { siteUrl }) => {
      const overview = await getSiteOverview(
        decodeURIComponent(String(siteUrl))
      );
 
      return {
        contents: [
          {
            uri: uri.href,
            mimeType: "application/json",
            text: JSON.stringify(overview, null, 2),
          },
        ],
      };
    }
  );
}
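The `{siteUrl}` template syntax is worth demystifying. Here is a minimal matcher for URI templates of that shape - a simplified, illustrative stand-in for what the SDK does internally, not its actual code:

```typescript
// Minimal URI-template matcher for patterns like
// "heyseo://sites/{siteUrl}/overview". Illustrative only; the SDK's
// template handling is more complete than this.

function matchTemplate(
  template: string,
  uri: string
): Record<string, string> | null {
  const names: string[] = [];
  const pattern = template
    .replace(/[.*+?^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\\\{(\w+)\\\}/g, (_, name) => {
      names.push(name);
      return "([^/]+)"; // each {variable} matches one path segment
    });
  const match = uri.match(new RegExp(`^${pattern}$`));
  if (!match) return null;
  const vars: Record<string, string> = {};
  names.forEach((name, i) => (vars[name] = decodeURIComponent(match[i + 1])));
  return vars;
}

const vars = matchTemplate(
  "heyseo://sites/{siteUrl}/overview",
  "heyseo://sites/example.com/overview"
);
```

One template can thus serve an unbounded family of resources - one URI per site - which is exactly what makes resources feel like subscribable data feeds.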

Connect the Transport and Start

// src/server.ts (continued)
import { registerTopKeywordsTool } from "./tools/top-keywords.js";
import { registerSiteOverviewResource } from "./resources/site-overview.js";
 
registerTopKeywordsTool(server);
registerSiteOverviewResource(server);
 
const transport = new StdioServerTransport();
await server.connect(transport);
 
// Keep alive - the server runs until the client disconnects

For production HTTP deployments, swap the transport:

import express from "express";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
 
const app = express();
// One transport per SSE session, so POSTed messages route back
// to the right connection
const transports: Record<string, SSEServerTransport> = {};
 
app.get("/sse", async (_req, res) => {
  const transport = new SSEServerTransport("/messages", res);
  transports[transport.sessionId] = transport;
  res.on("close", () => delete transports[transport.sessionId]);
  await server.connect(transport);
});
 
// No express.json() here - handlePostMessage reads the raw body itself
app.post("/messages", (req, res) => {
  const transport = transports[req.query.sessionId as string];
  if (!transport) return void res.status(400).send("Unknown session");
  void transport.handlePostMessage(req, res);
});
 
app.listen(3000);

Connecting to Claude Desktop

Claude Desktop has native MCP support. You configure servers in ~/Library/Application Support/Claude/claude_desktop_config.json on macOS:

{
  "mcpServers": {
    "heyseo": {
      "command": "node",
      "args": ["/path/to/heyseo-mcp/dist/server.js"],
      "env": {
        "HEYSEO_API_KEY": "your-api-key"
      }
    }
  }
}

Restart Claude Desktop after editing. You will see a small tools icon in the input bar when a server is connected. From that point, any conversation can call your tools - Claude decides autonomously when to invoke them based on context.

In practice, I ask questions like "What pages are cannibalizing each other for the keyword 'best crm for startups'?" and Claude calls search_keyword_cannibalization on my MCP server, gets back structured data, and synthesizes a clear recommendation. No copy-pasting required.

Connecting to Cursor

Cursor added MCP support in version 0.43. The config lives at ~/.cursor/mcp.json:

{
  "mcpServers": {
    "heyseo": {
      "command": "node",
      "args": ["/path/to/heyseo-mcp/dist/server.js"],
      "env": {
        "HEYSEO_API_KEY": "your-api-key"
      }
    }
  }
}

Within Cursor, MCP tools are available in the Composer (Cmd+I) and Chat (Cmd+L) panels. The key difference from Claude Desktop is context: when you are working on code in Cursor, the AI also has your file context. So you can ask "based on this component's analytics, which of these keywords should I target in the copy?" and the AI can reason about both the code on screen and the live data from your MCP server simultaneously.

For HeySeo, I use this when building new landing pages. The MCP server tells me the current keyword opportunities, and Cursor helps me implement the content structure that targets them.

Connecting to Windsurf

Windsurf (Codeium's agentic IDE) uses the same config format. Open Settings, navigate to MCP, and add your server entry. Windsurf's Cascade agent is particularly good at multi-step tasks that combine code changes with external data lookups - which is exactly what MCP enables.

One thing I noticed with Windsurf specifically: its agent is more aggressive about calling MCP tools automatically without you asking. When I am working on a page that has a URL matching something in our site map, Cascade will proactively fetch keyword performance data and surface it in the context panel. It is a small thing, but it means the data is there before you even realize you need it.

Real Example: HeySeo's MCP Server

HeySeo is an SEO analytics SaaS that connects to Google Search Console and GA4. The MCP integration we built exposes the platform's core analytical capabilities to any AI tool the user prefers.

Here are the tools we ship:

  • get_top_keywords - Top keywords by clicks, impressions, CTR
  • get_landing_pages - Page-level performance breakdown
  • get_ranking_history - Keyword position over time
  • search_serp - Live SERP data for any query
  • find_opportunities - Keyword gaps and quick wins
  • run_onpage_audit - Technical SEO audit for a URL
  • query_gsc - Raw GSC query with custom date range and dimensions

Here is the opportunity finder, which is one of the most useful tools in practice:

// src/tools/find-opportunities.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
import { findKeywordOpportunities } from "../data/opportunities.js";
 
export function registerFindOpportunitiesTool(server: McpServer) {
  server.tool(
    "find_opportunities",
    "Find keyword opportunities for a site: queries with high impressions but low CTR (position 4-20), indicating quick wins with content improvements.",
    {
      siteUrl: z.string().url().describe("Site URL to analyze"),
      minImpressions: z
        .number()
        .int()
        .min(10)
        .default(100)
        .describe("Minimum impression threshold"),
      maxPosition: z
        .number()
        .min(1)
        .max(50)
        .default(20)
        .describe("Maximum average position to include"),
    },
    async ({ siteUrl, minImpressions, maxPosition }) => {
      const opportunities = await findKeywordOpportunities({
        siteUrl,
        minImpressions,
        maxPosition,
      });
 
      const formatted = opportunities.map((kw) => ({
        query: kw.query,
        impressions: kw.impressions,
        clicks: kw.clicks,
        ctr: `${(kw.ctr * 100).toFixed(1)}%`,
        position: kw.position.toFixed(1),
        estimatedTrafficGain: kw.estimatedGain,
      }));
 
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(
              {
                total: formatted.length,
                opportunities: formatted.slice(0, 50),
              },
              null,
              2
            ),
          },
        ],
      };
    }
  );
}

The data layer (findKeywordOpportunities) queries our Firestore database, which syncs from Google Search Console nightly. The MCP server does not care about the database - it just calls functions and returns structured data. This separation keeps the server simple and the business logic testable.
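To make that separation concrete, here is a hypothetical in-memory version of the filtering logic a findKeywordOpportunities implementation might run. The field names and the ranking heuristic are assumptions for illustration, not HeySeo's actual query:

```typescript
// Hypothetical in-memory version of the opportunity filter, to show
// the logic the database query implements. Field names are assumed.

interface KeywordRow {
  query: string;
  impressions: number;
  clicks: number;
  ctr: number;      // 0..1
  position: number; // average SERP position
}

function filterOpportunities(
  rows: KeywordRow[],
  minImpressions: number,
  maxPosition: number
): KeywordRow[] {
  return rows
    .filter(
      (row) =>
        row.impressions >= minImpressions &&
        row.position >= 4 && // already ranking, but below the top spots
        row.position <= maxPosition
    )
    // Rank by potential: high impressions with weak CTR first
    .sort(
      (a, b) => b.impressions * (1 - b.ctr) - a.impressions * (1 - a.ctr)
    );
}

const sample: KeywordRow[] = [
  { query: "best crm", impressions: 5000, clicks: 50, ctr: 0.01, position: 8.2 },
  { query: "crm login", impressions: 900, clicks: 700, ctr: 0.78, position: 1.1 },
  { query: "crm pricing", impressions: 40, clicks: 2, ctr: 0.05, position: 12.0 },
];

const hits = filterOpportunities(sample, 100, 20);
```

Because the logic is a pure function over rows, it is trivially unit-testable without a database - which is the point of keeping it out of the MCP server itself.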

The Slack Integration

For the Slack bot, we run HeySeo's MCP server over HTTP/SSE and call it from a Slack slash command handler. Users type /heyseo opportunities site:example.com in Slack, our bot POSTs to the MCP server, gets back the data, and formats it into a Slack Block Kit message.

This was about 150 lines of code on top of the existing MCP server. The server itself required zero changes - that is the value of the protocol abstraction.
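A sketch of what that formatting step can look like. The opportunity fields are the hypothetical ones from this article, while the section/mrkdwn structure is Slack's standard Block Kit format:

```typescript
// Sketch: turn opportunity data into Slack Block Kit blocks.
// Data shape is illustrative; "section" blocks with "mrkdwn" text
// are Slack's standard Block Kit building blocks.

interface Opportunity {
  query: string;
  impressions: number;
  position: string;
}

function toSlackBlocks(siteUrl: string, opps: Opportunity[]) {
  return [
    {
      type: "section",
      text: {
        type: "mrkdwn",
        text: `*Keyword opportunities for ${siteUrl}*`,
      },
    },
    // One section per opportunity keeps messages scannable
    ...opps.map((opp) => ({
      type: "section",
      text: {
        type: "mrkdwn",
        text: `• *${opp.query}* - ${opp.impressions} impressions, avg position ${opp.position}`,
      },
    })),
  ];
}

const blocks = toSlackBlocks("example.com", [
  { query: "best crm for startups", impressions: 5000, position: "8.2" },
]);
```

The bot simply POSTs this blocks array to Slack's chat.postMessage API; the MCP server never knows Slack exists.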

Security Considerations

Exposing business data through an AI tool interface introduces real security surface area. Here is how I think about it for HeySeo.

Authentication. For stdio transports (Claude Desktop, Cursor, Windsurf), the server runs as the user's process. Environment variables for API keys are acceptable here - the key never leaves the machine. For HTTP transports, use bearer token authentication on every request.

// src/middleware/auth.ts
export function requireApiKey(apiKey: string | undefined): void {
  const expectedKey = process.env.HEYSEO_API_KEY;
 
  if (!expectedKey) {
    throw new Error("HEYSEO_API_KEY environment variable is not set");
  }
 
  if (apiKey !== expectedKey) {
    throw new Error("Invalid API key");
  }
}

Scope limiting. Tools should only expose what is necessary. If a user connects their personal site, the server should only return data for that site - not any other site in the system. We enforce this by binding each API key to a specific siteUrl at the database level.

Input validation. The Zod schemas in your tool definitions are your first line of defense. Treat all tool inputs as untrusted and validate them strictly. Never pass raw string inputs directly to a database query.

// Never do this
const results = await db.query(`SELECT * FROM keywords WHERE site = '${siteUrl}'`);
 
// Always do this
const results = await db.collection("keywords")
  .where("siteUrl", "==", siteUrl)
  .limit(100)
  .get();

Rate limiting. AI tools can call your MCP server aggressively, especially during multi-step agent runs. Add per-key rate limiting to protect your backend and your API quotas.

// src/middleware/rate-limit.ts
const requestCounts = new Map<string, { count: number; resetAt: number }>();
 
export function checkRateLimit(apiKey: string, maxPerMinute = 30): void {
  const now = Date.now();
  const entry = requestCounts.get(apiKey);
 
  if (!entry || entry.resetAt < now) {
    requestCounts.set(apiKey, { count: 1, resetAt: now + 60_000 });
    return;
  }
 
  if (entry.count >= maxPerMinute) {
    throw new Error("Rate limit exceeded. Try again in a minute.");
  }
 
  requestCounts.set(apiKey, { ...entry, count: entry.count + 1 });
}

Error messages. Do not return internal stack traces or database error messages to the AI client. Return user-friendly error strings that explain what went wrong without leaking implementation details.
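One way to enforce this systematically is a wrapper applied to every tool handler: log the real error server-side, return a generic message to the client. A sketch - the ToolResult shape mirrors what MCP tool handlers return, and isError is part of the protocol's tool result format:

```typescript
// Sketch of an error-sanitizing wrapper for tool handlers. Internal
// detail goes to server logs (stderr for stdio servers); the AI client
// only ever sees a generic message.

type ToolResult = {
  content: { type: "text"; text: string }[];
  isError?: boolean;
};

function safeHandler<A>(
  fn: (args: A) => Promise<ToolResult>
): (args: A) => Promise<ToolResult> {
  return async (args) => {
    try {
      return await fn(args);
    } catch (err) {
      console.error("tool failed:", err); // full detail stays server-side
      return {
        isError: true,
        content: [
          {
            type: "text",
            text: "The request failed. Please check your inputs and try again.",
          },
        ],
      };
    }
  };
}

// Hypothetical usage: the raw database error never reaches the client
const lookup = safeHandler(async ({ siteUrl }: { siteUrl: string }) => {
  if (!siteUrl.startsWith("https://")) throw new Error("pg: bad site key");
  return { content: [{ type: "text" as const, text: "{}" }] };
});
```

Returning isError: true (rather than throwing) lets the AI see that the call failed and adjust its next step, without ever seeing your stack trace.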

Production Deployment

For a local server (stdio transport), deployment is just publishing the npm package or shipping a binary. Users install it and configure their client.

For a hosted server (HTTP/SSE transport), I run HeySeo's MCP server on Railway with the following setup:

FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY dist/ ./dist/
ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "dist/server.js"]

A few production considerations worth calling out:

Graceful shutdown. MCP server processes need to handle SIGTERM cleanly. If a client is mid-conversation and your server dies without a clean disconnect, the client may enter a broken state.

process.on("SIGTERM", async () => {
  await server.close();
  process.exit(0);
});

Structured logging. For stdio servers, all output to stdout becomes part of the MCP protocol. Never console.log to stdout in a stdio server - use console.error or a structured logger that writes to a file or stderr only.

Health checks. For HTTP deployments, expose a /health endpoint that checks your database connection and any upstream API dependencies. Railway and Fly.io will use this to route traffic and restart unhealthy instances.
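The endpoint handler itself is framework-specific, but the aggregation logic behind it can be plain TypeScript. A sketch, with example check names - run each dependency check under a timeout and report overall status:

```typescript
// Sketch of a health-check aggregator: each dependency check runs
// with a timeout, and the report is healthy only if every check passes.
// Check names here are examples.

type Check = { name: string; run: () => Promise<void> };

async function healthReport(checks: Check[], timeoutMs = 2000) {
  const results = await Promise.all(
    checks.map(async ({ name, run }) => {
      const timeout = new Promise<never>((_, reject) =>
        setTimeout(() => reject(new Error("timeout")), timeoutMs)
      );
      try {
        await Promise.race([run(), timeout]);
        return { name, ok: true };
      } catch {
        return { name, ok: false };
      }
    })
  );
  return { healthy: results.every((r) => r.ok), results };
}
```

The /health handler then maps `healthy` to an HTTP 200 or 503, which is what platform routers key on when deciding to restart an instance.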

Versioning. The MCP server version you declare in the constructor matters. Clients use it to decide whether to re-cache tool schemas. Bump the version whenever you add, remove, or change the signature of a tool.

What Is Next for MCP

MCP reached 1.0 in early 2025 and the ecosystem has grown fast. A few things I am watching:

Multi-server composition. Today, each client connects to servers independently. There is early work on proxy servers that aggregate multiple MCP servers behind a single endpoint - useful when you have ten internal tools and do not want to configure each IDE separately.

Sampling and streaming. The protocol already supports streaming responses, but most servers return everything at once. As data sizes grow, incremental streaming will matter more. The SDK has the hooks; most servers just have not implemented them yet.

Standardized auth. The community is converging on OAuth 2.0 as the standard for remote MCP servers, which will make connecting to third-party MCP servers much simpler than sharing raw API keys.

IDE-native AI panels. Both Cursor and Windsurf are building deeper UI surfaces for MCP data - not just text responses, but charts, diffs, and structured views. HeySeo is experimenting with returning Markdown tables and chart data that renders directly in the IDE.

The trajectory is clear: MCP is becoming the standard plumbing layer between AI tools and the systems they need to interact with. If you are building a SaaS product in 2026, shipping an MCP server is quickly becoming table stakes - the same way shipping a REST API became expected in 2015.

FAQ

Q: Do I need to build an MCP server to use MCP? A: No. There are hundreds of pre-built MCP servers for common tools (GitHub, Notion, PostgreSQL, Slack, and more). Check the official registry at modelcontextprotocol.io/servers. You only need to build one when you have custom data or proprietary APIs to expose.

Q: Is MCP only for Anthropic's Claude? A: No. MCP is an open standard and any AI client can implement it. Cursor (which uses Claude and OpenAI models), Windsurf (Codeium), and several others already support it. OpenAI has not adopted it yet as of this writing, but community pressure is growing.

Q: Can my MCP server handle multiple users? A: Yes, with HTTP/SSE transport. Each client connection gets its own session. You authenticate per-connection and scope data access to that user's permissions. The stdio transport is inherently single-user since it runs as the user's own process.

Q: How does MCP compare to OpenAI's function calling or plugin spec? A: OpenAI function calling is a per-request API feature. MCP is a long-lived connection protocol. MCP also works across multiple AI providers without any changes to your server, which is the main practical advantage.

Q: What happens when my MCP server is down? A: The AI client fails gracefully and tells the user the tool is unavailable. Claude Desktop, Cursor, and Windsurf all handle this correctly. Build in health checks and fast restarts to minimize downtime.


Building the HeySeo MCP server took a weekend and immediately changed how the team uses AI tools. If your product has data that could inform better AI-assisted decisions - SEO analytics, CRM records, code metrics, financial data - the investment is absolutely worth it.

If you want to talk through your MCP integration, reach out and let's build it together.
