MCP Servers for Beginners

MCP (Model Context Protocol) is an open standard designed to simplify how AI applications communicate with external tools, data sources, and systems. For beginners, MCP servers act as bridges, exposing functions, data, and prompts that large language models (LLMs) can invoke dynamically. In this post, we'll cover what MCP is, why MCP servers matter, the key components of a server, how servers work at a high level, and where to go from here.

What Is MCP?

The Model Context Protocol (MCP) was introduced by Anthropic in November 2024 as an open, standardized way for AI systems to connect with external context (files, APIs, databases, etc.). Just as USB-C provides a universal physical connector for devices, MCP provides a “USB-C port” for AI applications, transforming what would otherwise be an M×N integration problem into an M + N architecture by separating clients (AI hosts) from servers (external tools). For example, wiring 4 hosts directly to 6 services would require 24 bespoke integrations; with MCP, it requires only 4 clients and 6 servers.

  • Host: The end-user application where an LLM runs (e.g., Claude Desktop, Cursor IDE); it contains everything above the client.
  • Client: The component inside the host that knows how to speak MCP and manages the connection to a server.
  • Server: An external program or service that implements the MCP spec, exposing “tools,” “resources,” and “prompts.”

By decoupling clients from servers, MCP allows any client to discover and use any server without building bespoke integrations for each combination.

Why MCP Servers Matter

Before MCP, if you wanted an LLM to fetch GitHub issues, query a database, call a weather API, or read a local file, you needed separate integrations for each AI host (e.g., Claude Desktop, Microsoft Copilot) × each service (GitHub, Postgres, AWS S3). MCP collapses this work:

  • Reusability: Once you build a GitHub-MCP server, any MCP-compliant client can use it.
  • Scalability: Adding new capabilities means building one server, not multiple host-specific integrations.
  • Consistency: The protocol enforces a standard discovery and invocation flow, reducing implementation errors.

In practice, major platforms (Anthropic’s Claude, OpenAI’s Agents SDK, Cursor IDE, and soon Windows AI Foundry) are adopting MCP so that LLMs can call functions and fetch data from external systems without custom plumbing.

Key Components of an MCP Server

MCP servers expose three main types of capabilities to clients:

  • Tools (Model-Controlled Functions)

    • Defined as function signatures (e.g., fetch_github_issues(repo: str) → List[Issue]).
    • The LLM decides when to call a tool based on user intent.
    • Example: A get_weather(city: str) tool that calls a weather API.
  • Resources (Application-Controlled Data Endpoints)

    • Simple data sources similar to REST “GET” endpoints—no side effects.
    • Provide context (e.g., a database query result, a file’s contents).
    • Example: A resource that returns the latest sales figures from a Postgres table.
  • Prompts (Preconfigured Templates)

    • Reusable templates or system messages to guide the LLM when using tools/resources.
    • Example: A “review code” prompt that wraps user-provided code snippets in a standard review template.

By categorizing capabilities this way, MCP enforces a clear separation of concerns between “read-only data,” “actionable functions,” and “template-based guidance.”
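
To see how these three capability types look in code, here is a minimal server sketch assuming the FastMCP class from the official MCP Python SDK (pip install mcp). The server name, the stubbed weather and sales logic, and the review prompt are illustrative placeholders, not real integrations:

```python
# Minimal MCP server sketch using FastMCP from the official Python SDK.
# All names and return values below are illustrative stand-ins.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def get_weather(city: str) -> str:
    """Tool (model-controlled): return a short weather summary for a city."""
    # A real server would call a weather API here; this is a stub.
    return f"Sunny and 22 °C in {city}"

@mcp.resource("sales://latest")
def latest_sales() -> str:
    """Resource (application-controlled): read-only data, no side effects."""
    # A real server would query Postgres or another data source here.
    return "2025-06-01: 1,204 units"

@mcp.prompt()
def review_code(code: str) -> str:
    """Prompt: wraps a user-provided snippet in a standard review template."""
    return f"Please review the following code for bugs and style:\n\n{code}"

if __name__ == "__main__":
    # STDIO transport suits local, same-machine integrations (see below).
    mcp.run(transport="stdio")
```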

How MCP Servers Work (High-Level)

  1. Handshake & Version Negotiation

    • When a client (LLM host) starts, it connects to one or more MCP servers.
    • They exchange protocol versions to ensure compatibility.
  2. Capability Discovery

    • The client sends a “What can you do?” request.
    • The server responds with metadata for each tool, resource, and prompt (names, parameters, descriptions).
  3. Context Provision

    • The host may send additional context (e.g., user preferences, environment variables) to the server.
    • The server updates its internal state if needed.
  4. Invocation Flow

    • During a conversation, the LLM decides it needs an external action (e.g., “List all open issues in repo X”).
    • The client formats a JSON-RPC request (sketched below) and sends it to the appropriate MCP server.
    • The server executes the underlying logic—calling GitHub’s REST API in this example.
  5. Response Handling

    • The server returns a JSON-RPC response containing the result (list of issues).
    • The client forwards this result to the LLM, which then generates a user-facing response.
  6. Completion

    • The LLM uses the fresh data to complete the user’s request (e.g., summarizing open issues).

MCP servers typically communicate over either STDIO (for local, same-machine integrations) or HTTP + Server-Sent Events (SSE) (for remote or long-running connections). Both modes are supported by popular implementations such as FastMCP (Python) and community SDKs in TypeScript, Rust, and Java.
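
To make the discovery and invocation steps above concrete, here is a small sketch of the JSON-RPC 2.0 messages involved. The tools/list and tools/call method names come from the MCP specification; the fetch_github_issues tool and its repo argument are hypothetical:

```python
import json

# Discovery (step 2): the client asks the server which tools it exposes.
# "tools/list" is the method name defined by the MCP specification.
discovery_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Invocation (step 4): the client asks the server to run a single tool.
# The tool name and arguments here are hypothetical examples.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "fetch_github_issues",
        "arguments": {"repo": "octocat/hello-world"},
    },
}

for message in (discovery_request, call_request):
    print(json.dumps(message, indent=2))
```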

High-Value Resources and Links

Below is a table of authoritative resources to deepen your MCP knowledge:

| Resource Type | Name/Link | Description |
| --- | --- | --- |
| Official MCP Website | modelcontextprotocol.io | Home for the MCP specification, version history, tutorials, and quickstarts |
| GitHub – Reference Impl. | modelcontextprotocol/servers | Reference implementations in Python, TypeScript, Rust, and more |
| GitHub – Community List | punkpeye/awesome-mcp-servers | Curated list of community-built servers, including examples for GitHub, databases, and custom tools |
| Tutorial – DigitalOcean | An Introduction to Model Context Protocol | Step-by-step guide to MCP basics, architecture, and how to get started |
| Tutorial – Phil Schmid | MCP Introduction | Concise technical walkthrough covering clients, servers, and code examples in Python |
| Tutorial – DataCamp | MCP: A Guide With Demo Project | Walks through building an MCP server that integrates with Claude for PR reviews |
| Tutorial – OpenCV Blog | A Beginner’s Guide to MCP | Explains core concepts, example workflows, and MCP’s JSON-RPC foundations |
| News – Anthropic | Introducing the Model Context Protocol | Official launch post from Anthropic (Nov 2024) |
| News – Axios | Hot new protocol glues together AI and apps | Overview of MCP’s reception, pros, and security considerations (Apr 2025) |
| News – The Verge | Windows gets support for ‘USB-C of AI apps’ | How Microsoft is integrating MCP into Windows AI Foundry (May 2025) |

Next Steps

  1. Explore Example Servers

    • Clone the modelcontextprotocol/servers repository and run the sample apps (e.g., the GitHub or weather server).
    • Modify or extend an existing server to connect to your own data source (Postgres, MySQL, etc.).
  2. Build a Minimal Client

    • Use the MCP Python client or a TypeScript SDK to connect to your server.
    • Practice discovery: list available tools/resources, then invoke a simple function (see the client sketch after this list).
  3. Integrate with an AI Host

    • Test with Claude Desktop, OpenAI’s Agents SDK, or a custom Python script to see how an LLM picks and uses tools.
    • Observe how the LLM’s responses change when it can fetch real-time data (e.g., “Show me yesterday’s sales figures”).
  4. Join the MCP Community

    • Participate in the MCP Discord or GitHub discussions.
    • Share your server implementations and contribute to “awesome-mcp-servers.”
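
As a starting point for step 2, here is a minimal client sketch assuming the STDIO client from the official MCP Python SDK, a local server script named server.py (hypothetical), and a get_weather tool like the one sketched earlier:

```python
import asyncio

# Minimal MCP client sketch using the official Python SDK (pip install mcp).
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch a local server over STDIO; "server.py" is a hypothetical script.
server_params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()  # handshake & version negotiation

            # Discovery: list the tools the server exposes.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Invocation: call one tool by name with arguments.
            result = await session.call_tool("get_weather", {"city": "Berlin"})
            print(result.content)

if __name__ == "__main__":
    asyncio.run(main())
```

Pointed at the server sketch from earlier, this should print the tool list followed by the stubbed weather string.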

Conclusion

MCP servers form the backbone of a modular, reusable AI integration framework—allowing any MCP-compliant client to discover and invoke external tools, data endpoints, and prompts without custom coding for each combination. With official docs at modelcontextprotocol.io and a growing set of community resources, now is a great time to dive in and build your first MCP server. Good luck!

Illiana Reed