You have read about the MCP protocol. You understand the architecture diagrams. Now you want to build something real. This tutorial takes you from an empty directory to a fully functional MCP Server in under five minutes, with tools, resources, and prompts that you can connect to Claude Desktop, Cursor, or Trae immediately.

No theory preamble. No architecture deep-dives. Just working code, explained step by step.

If you need the conceptual foundation first, start with the MCP Protocol Deep Dive. If you are ready to build, keep reading.

Key Takeaways

  • Build a working MCP Server with three primitives (tools, resources, prompts) in a single file
  • Validate tool inputs with JSON Schema at the protocol level
  • Test everything locally with MCP Inspector before connecting to any client
  • Connect your server to Claude Desktop, Cursor, and Trae with copy-paste configuration
  • Avoid the five most common mistakes that trip up first-time MCP developers

What Is an MCP Server (Quick Recap)

An MCP Server is a process that exposes capabilities to AI agents through the Model Context Protocol. Think of it as a plugin that any MCP-compatible LLM client can discover and use, without custom integration code for each client.

The server communicates over JSON-RPC 2.0 and exposes three types of primitives:

  • Tools: Functions the LLM can call with arguments (similar to function calling but standardized across clients)
  • Resources: Read-only data the LLM can reference (files, database records, API responses)
  • Prompts: Reusable message templates that guide the LLM for specific tasks

The client (Claude Desktop, Cursor, Trae, or your own application) connects to the server, discovers its capabilities through a handshake, and then invokes them as part of the conversation flow. This is the tool use pattern made portable.
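On the wire, this handshake is plain JSON-RPC 2.0. A representative initialize exchange looks roughly like the following (the client sends the first message and the server answers with the second; the id values and the protocolVersion date vary by SDK release):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2024-11-05",
    "capabilities": { "tools": {}, "resources": {}, "prompts": {} },
    "serverInfo": { "name": "my-first-mcp-server", "version": "1.0.0" }
  }
}
```

After this exchange, the client issues tools/list, resources/list, and prompts/list requests to discover what the server offers.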

For the full architectural breakdown, see the MCP Protocol Deep Dive.

Prerequisites

Before you start, make sure you have:

  • Node.js 18+ (run node --version to check)
  • npm (comes with Node.js)
  • A terminal and a text editor

That is it. No Docker, no cloud accounts, no complex build toolchains.

Step 1: Project Setup

Create a new directory and initialize the project:

bash
mkdir my-mcp-server && cd my-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod

The @modelcontextprotocol/sdk package is the official SDK maintained by Anthropic. It handles all protocol-level concerns: JSON-RPC framing, capability negotiation, transport management, and the lifecycle handshake. The zod library provides runtime schema validation that integrates cleanly with the SDK's input validation.

If you prefer TypeScript, add the dev dependencies:

bash
npm install -D typescript ts-node @types/node

Create a tsconfig.json:

json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./dist",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src/**/*"]
}

For this tutorial, we will use TypeScript. Create the source directory:

bash
mkdir src

Step 2: Create the Server Skeleton

Create src/index.ts with the minimal server boilerplate:

typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new McpServer({
  name: "my-first-mcp-server",
  version: "1.0.0",
});

// We will add tools, resources, and prompts here

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("MCP Server running on stdio");
}

main().catch(console.error);

Key details:

  • McpServer is the high-level API that registers handlers and manages the protocol lifecycle.
  • StdioServerTransport communicates over standard input/output. This is the default transport for local development and desktop clients.
  • We log to stderr because stdout is reserved for the JSON-RPC protocol messages.

Step 3: Implement Your First Tool

Tools are the most commonly used primitive. They let the LLM call functions with validated arguments and receive structured results. Let us add a tool that calculates the reading time for a given text:

typescript
import { z } from "zod";

server.tool(
  "calculate-reading-time",
  "Calculate the estimated reading time for a given text",
  {
    text: z.string().describe("The text content to analyze"),
    wordsPerMinute: z
      .number()
      .optional()
      .default(200)
      .describe("Reading speed in words per minute"),
  },
  async ({ text, wordsPerMinute }) => {
    const wordCount = text.trim().split(/\s+/).length;
    const minutes = Math.ceil(wordCount / wordsPerMinute);

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(
            {
              wordCount,
              wordsPerMinute,
              estimatedMinutes: minutes,
              humanReadable:
                minutes === 1 ? "1 minute" : `${minutes} minutes`,
            },
            null,
            2
          ),
        },
      ],
    };
  }
);

Let us break down the server.tool() signature:

  1. Name ("calculate-reading-time"): The identifier clients and LLMs use to invoke this tool.
  2. Description: A natural language description the LLM reads to decide when to use this tool. Write it clearly — this is essentially prompt engineering for tool selection.
  3. Input schema: A Zod schema that the SDK automatically converts to JSON Schema for the protocol. Invalid inputs are rejected before your handler runs.
  4. Handler: An async function that receives the validated arguments and returns content blocks.
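For reference, the Zod shape above is converted to roughly this JSON Schema on the wire (the exact output depends on the SDK's zod-to-json-schema conversion):

```json
{
  "type": "object",
  "properties": {
    "text": {
      "type": "string",
      "description": "The text content to analyze"
    },
    "wordsPerMinute": {
      "type": "number",
      "default": 200,
      "description": "Reading speed in words per minute"
    }
  },
  "required": ["text"]
}
```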

Now add a second tool that performs a simple string transformation:

typescript
server.tool(
  "convert-case",
  "Convert text between different cases (uppercase, lowercase, title case)",
  {
    text: z.string().describe("The text to convert"),
    targetCase: z
      .enum(["uppercase", "lowercase", "titlecase"])
      .describe("The target case format"),
  },
  async ({ text, targetCase }) => {
    let result: string;
    switch (targetCase) {
      case "uppercase":
        result = text.toUpperCase();
        break;
      case "lowercase":
        result = text.toLowerCase();
        break;
      case "titlecase":
        result = text.replace(
          /\b\w/g,
          (char) => char.toUpperCase()
        );
        break;
    }

    return {
      content: [{ type: "text", text: result }],
    };
  }
);

Notice how the z.enum() constrains the targetCase parameter to exactly three valid values. The LLM sees these options in the schema and can choose correctly. If a client sends an invalid value, the SDK rejects it before your handler executes.
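The transforms themselves are plain string operations you can sanity-check outside the server. One caveat worth knowing about the title-case regex: `\b\w` matches the first word character after every word boundary, and apostrophes and hyphens count as boundaries too:

```typescript
// Same transform as the "titlecase" branch above.
function titleCase(text: string): string {
  return text.replace(/\b\w/g, (char) => char.toUpperCase());
}

console.log(titleCase("hello mcp world")); // "Hello Mcp World"
console.log(titleCase("it's a test"));     // "It'S A Test" (apostrophe is a boundary)
```

If that edge case matters for your use, split on whitespace and uppercase only the first character of each word instead.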

Step 4: Implement a Resource

Resources provide read-only data that the LLM can pull into its context. Unlike tools, resources do not perform actions — they serve information. Let us expose a server status resource:

typescript
server.resource(
  "server-status",
  "status://current",
  async (uri) => {
    const status = {
      serverName: "my-first-mcp-server",
      version: "1.0.0",
      uptime: process.uptime(),
      nodeVersion: process.version,
      memoryUsage: process.memoryUsage().heapUsed,
      timestamp: new Date().toISOString(),
    };

    return {
      contents: [
        {
          uri: uri.href,
          mimeType: "application/json",
          text: JSON.stringify(status, null, 2),
        },
      ],
    };
  }
);

The second argument is the URI that identifies this resource. Clients use this URI to request the data. The contents array can include multiple items, each with a MIME type so the client knows how to interpret the data.

You can also expose resource templates for dynamic URIs:

typescript
import { ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";

server.resource(
  "env-variable",
  new ResourceTemplate("env://{name}", { list: undefined }),
  async (uri, { name }) => {
    const value = process.env[name as string];

    if (!value) {
      return {
        contents: [
          {
            uri: uri.href,
            mimeType: "text/plain",
            text: `Environment variable '${name}' is not set`,
          },
        ],
      };
    }

    return {
      contents: [
        {
          uri: uri.href,
          mimeType: "text/plain",
          text: value,
        },
      ],
    };
  }
);

The {name} placeholder in the URI template lets clients request any environment variable by name. The SDK extracts the parameter and passes it to your handler.
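A client reads the templated resource with an ordinary resources/read request (message shape per the MCP spec; HOME is used here as a variable that exists on most Unix systems):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": { "uri": "env://HOME" }
}
```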

Step 5: Implement a Prompt

Prompts are reusable message templates. They help standardize how the LLM approaches specific tasks. Let us create a prompt for code review:

typescript
server.prompt(
  "code-review",
  "Generate a structured code review for the given code snippet",
  {
    code: z.string().describe("The code to review"),
    language: z
      .string()
      .optional()
      .default("unknown")
      .describe("The programming language"),
    focus: z
      .enum(["security", "performance", "readability", "all"])
      .optional()
      .default("all")
      .describe("What aspect to focus the review on"),
  },
  async ({ code, language, focus }) => {
    return {
      messages: [
        {
          role: "user",
          content: {
            type: "text",
            text: [
              `Review the following ${language} code with a focus on ${focus}:`,
              "",
              "```" + language,
              code,
              "```",
              "",
              "Provide your review in this structure:",
              "1. Summary (one paragraph)",
              "2. Issues Found (numbered list with severity)",
              "3. Suggestions for Improvement",
              "4. What Was Done Well",
            ].join("\n"),
          },
        },
      ],
    };
  }
);

When a client expands this prompt, it receives a complete message array ready to send to the LLM. The LLM does not invoke prompts directly — the client or user selects a prompt, fills in the arguments, and then sends the expanded messages.
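Concretely, the client sends a prompts/get request and receives the expanded message array back. A sketch of the request (shape per the MCP spec; the code argument is a stand-in):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "prompts/get",
  "params": {
    "name": "code-review",
    "arguments": {
      "code": "function add(a, b) { return a + b; }",
      "language": "javascript",
      "focus": "readability"
    }
  }
}
```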

Step 6: The Complete Server File

Here is the full src/index.ts with all three primitives combined:

typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "my-first-mcp-server",
  version: "1.0.0",
});

// --- Tools ---

server.tool(
  "calculate-reading-time",
  "Calculate the estimated reading time for a given text",
  {
    text: z.string().describe("The text content to analyze"),
    wordsPerMinute: z
      .number()
      .optional()
      .default(200)
      .describe("Reading speed in words per minute"),
  },
  async ({ text, wordsPerMinute }) => {
    const wordCount = text.trim().split(/\s+/).length;
    const minutes = Math.ceil(wordCount / wordsPerMinute);
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(
            { wordCount, wordsPerMinute, estimatedMinutes: minutes },
            null,
            2
          ),
        },
      ],
    };
  }
);

server.tool(
  "convert-case",
  "Convert text between different cases",
  {
    text: z.string().describe("The text to convert"),
    targetCase: z
      .enum(["uppercase", "lowercase", "titlecase"])
      .describe("The target case format"),
  },
  async ({ text, targetCase }) => {
    let result: string;
    switch (targetCase) {
      case "uppercase":
        result = text.toUpperCase();
        break;
      case "lowercase":
        result = text.toLowerCase();
        break;
      case "titlecase":
        result = text.replace(/\b\w/g, (c) => c.toUpperCase());
        break;
    }
    return { content: [{ type: "text", text: result }] };
  }
);

// --- Resources ---

server.resource("server-status", "status://current", async (uri) => ({
  contents: [
    {
      uri: uri.href,
      mimeType: "application/json",
      text: JSON.stringify(
        {
          serverName: "my-first-mcp-server",
          version: "1.0.0",
          uptime: process.uptime(),
          nodeVersion: process.version,
          timestamp: new Date().toISOString(),
        },
        null,
        2
      ),
    },
  ],
}));

// --- Prompts ---

server.prompt(
  "code-review",
  "Generate a structured code review",
  {
    code: z.string().describe("The code to review"),
    language: z.string().optional().default("unknown"),
    focus: z
      .enum(["security", "performance", "readability", "all"])
      .optional()
      .default("all"),
  },
  async ({ code, language, focus }) => ({
    messages: [
      {
        role: "user" as const,
        content: {
          type: "text" as const,
          text: `Review this ${language} code (focus: ${focus}):\n\n\`\`\`${language}\n${code}\n\`\`\`\n\nStructure: 1) Summary 2) Issues 3) Suggestions 4) Positives`,
        },
      },
    ],
  })
);

// --- Start ---

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("MCP Server running on stdio");
}

main().catch(console.error);

Add a start script to package.json (inside scripts, locally installed binaries such as ts-node are already on the PATH, so npx is not needed):

json
{
  "scripts": {
    "start": "ts-node src/index.ts",
    "build": "tsc",
    "start:built": "node dist/index.js"
  }
}

If Node reports an ES module loading error at startup, add "type": "module" to package.json and run ts-node in ESM mode (ts-node --esm src/index.ts).

Step 7: Testing with MCP Inspector

Before connecting to any client, verify your server works correctly using the official MCP Inspector:

bash
npx @modelcontextprotocol/inspector npx ts-node src/index.ts

This launches a browser-based UI that connects to your server over stdio. From the Inspector, you can:

  1. View the capability handshake: Confirm your server advertises tools, resources, and prompts.
  2. List tools: Click "Tools" to see calculate-reading-time and convert-case with their schemas.
  3. Invoke a tool: Fill in the arguments and click "Run". Verify the response matches expectations.
  4. Read resources: Navigate to "Resources", select status://current, and confirm the JSON output.
  5. Expand prompts: Select the code-review prompt, provide arguments, and inspect the generated messages.

The Inspector is the fastest feedback loop for MCP development. Use it before every client integration.

Step 8: Connecting to Clients

Claude Desktop

Edit the Claude Desktop configuration file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

Add your server:

json
{
  "mcpServers": {
    "my-first-mcp-server": {
      "command": "npx",
      "args": ["ts-node", "/absolute/path/to/my-mcp-server/src/index.ts"]
    }
  }
}

Restart Claude Desktop. You should see a hammer icon indicating available tools. Ask Claude something like "What is the reading time for this paragraph?" and it will invoke your calculate-reading-time tool.

Cursor

Open Cursor settings and navigate to the MCP section (Settings > MCP). Add a new server:

json
{
  "mcpServers": {
    "my-first-mcp-server": {
      "command": "npx",
      "args": ["ts-node", "/absolute/path/to/my-mcp-server/src/index.ts"]
    }
  }
}

Cursor will discover the tools automatically. You can verify by opening the MCP panel and checking that your tools appear in the list.

Trae

Trae supports MCP server configuration in a similar fashion. Add your server through the Trae MCP settings panel:

json
{
  "mcpServers": {
    "my-first-mcp-server": {
      "command": "npx",
      "args": ["ts-node", "/absolute/path/to/my-mcp-server/src/index.ts"]
    }
  }
}

After restarting the IDE, Trae will connect to your server and expose the tools within the AI assistant context.

For all clients, replace /absolute/path/to/my-mcp-server/ with the actual path to your project directory. For production use, build the TypeScript first (npm run build) and point the command to node with dist/index.js instead.

Debugging Tips

1. Check stderr Output

Since the protocol uses stdout, all your debug logging must go to stderr:

typescript
console.error("[DEBUG] Tool called with:", JSON.stringify(args));

In Claude Desktop, check the logs at:

  • macOS: ~/Library/Logs/Claude/mcp-server-my-first-mcp-server.log
  • Windows: %APPDATA%\Claude\logs\

2. Validate the Handshake

If a client cannot discover your tools, the issue is almost always in the initialization handshake. Run the Inspector and check whether the initialize response includes the correct capabilities:

json
{
  "capabilities": {
    "tools": {},
    "resources": {},
    "prompts": {}
  }
}

If a capability is missing, the corresponding server.tool(), server.resource(), or server.prompt() call was not registered before server.connect().

3. Use Structured Error Responses

When a tool encounters an error, return it as content with isError: true rather than throwing an exception:

typescript
server.tool("risky-operation", "...", { /* schema */ }, async (args) => {
  try {
    const result = await doSomethingRisky(args);
    return { content: [{ type: "text", text: result }] };
  } catch (error) {
    return {
      isError: true,
      content: [
        {
          type: "text",
          text: `Operation failed: ${error instanceof Error ? error.message : "Unknown error"}`,
        },
      ],
    };
  }
});

This lets the LLM see the error and potentially retry or adjust its approach, rather than crashing the entire server.
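If several tools need the same try/catch shape, you can factor it into a small wrapper. This is a sketch, not an SDK API — ToolResult and withErrorHandling are names invented here:

```typescript
type ToolResult = {
  isError?: boolean;
  content: { type: "text"; text: string }[];
};

// Hypothetical helper (not part of the SDK): converts thrown
// exceptions into structured isError results so one failing tool
// call never crashes the whole server.
function withErrorHandling<A>(
  fn: (args: A) => Promise<ToolResult>
): (args: A) => Promise<ToolResult> {
  return async (args: A) => {
    try {
      return await fn(args);
    } catch (error) {
      return {
        isError: true,
        content: [
          {
            type: "text",
            text: `Operation failed: ${
              error instanceof Error ? error.message : "Unknown error"
            }`,
          },
        ],
      };
    }
  };
}
```

You would then pass `withErrorHandling(async (args) => { ... })` as the handler argument to `server.tool()`.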

4. Watch for Process Lifecycle Issues

The server process must stay alive as long as the client connection is active. Common pitfalls:

  • Calling process.exit() in a handler terminates the server mid-conversation.
  • Unhandled promise rejections terminate the process by default (Node.js 15 and later). Always wrap async handlers in try/catch.
  • Forgetting to await server.connect() can cause the process to exit before the handshake completes.

Common Pitfalls

Pitfall 1: Logging to stdout

This is the number one mistake. Any console.log() call writes to stdout, which corrupts the JSON-RPC message stream. The client receives malformed data and disconnects. Use console.error() exclusively.
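A habit that helps: route all diagnostics through one helper that can only write to stderr, so a stray stdout call is impossible (formatLog and log are names made up for this sketch):

```typescript
// All diagnostic output goes through process.stderr;
// stdout stays reserved for JSON-RPC frames.
function formatLog(level: string, message: string): string {
  return `[${level}] ${new Date().toISOString()} ${message}`;
}

function log(level: string, message: string): void {
  process.stderr.write(formatLog(level, message) + "\n");
}

log("DEBUG", "server starting");
```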

Pitfall 2: Missing Input Validation

Without Zod schemas, the SDK passes raw unvalidated input to your handler. A malformed request from a client (or a hallucinated argument from the LLM) can cause unexpected behavior. Always define schemas for every parameter.

Pitfall 3: Returning Non-Text Content Without MIME Types

When resources return binary data or specialized formats, the mimeType field is essential. Omitting it forces clients to guess the format, often incorrectly. Always set it explicitly.

Pitfall 4: Hardcoding Absolute Paths in Tool Handlers

If your tools read files or access directories, use paths relative to the project root or accept them as parameters. Hardcoded paths break when the server runs on a different machine or in a container.

Pitfall 5: Ignoring the Description Field

The tool description is what the LLM reads to decide when and how to use your tool. A vague description like "does stuff" means the LLM will not know when to invoke it. Write descriptions as if you are explaining the tool to a colleague who has never seen it before.

Next Steps

You now have a working MCP Server with tools, resources, and prompts. Here is where to go from here:

  • Add authentication: For production deployments, implement OAuth 2.1 as described in the 2025 MCP spec update.
  • Switch to Streamable HTTP: Move beyond stdio for remote access. The Go SSE transport guide covers transport architecture patterns that apply to Node.js as well.
  • Scale with a gateway: For high-concurrency scenarios, put an MCP gateway in front of multiple server instances.
  • Benchmark your implementation: See how Node.js compares to Go in the MCP Server performance showdown.
  • Explore advanced patterns: The Advanced MCP Protocol Practice guide covers JWT authentication, streaming, and enterprise architecture.

For a comprehensive understanding of the protocol internals, return to the MCP Protocol Deep Dive and work through the architecture sections with your new hands-on experience.

Conclusion

Building an MCP Server is straightforward once you understand the three primitives. The @modelcontextprotocol/sdk handles the protocol complexity, leaving you to focus on what your server actually does. Start with stdio and the Inspector for fast iteration, add tools one at a time, and validate each with the Inspector before connecting to a real client.

The MCP ecosystem is expanding rapidly. Every tool you build today works with Claude Desktop, Cursor, Trae, and any future client that implements the protocol. That portability is what makes MCP worth investing in — you write the integration once, and it works everywhere.

Start small, ship something useful, and iterate from there.