Model Context Protocol (MCP) Explained In 10 Minutes

By 10xdev team August 17, 2025

What is Model Context Protocol (MCP)?

Model Context Protocol (MCP) is a specification that is rapidly gaining attention. This article offers a crisp, no-nonsense introduction to MCP, designed for absolute beginners with no prior knowledge required.

We'll cover everything from the fundamentals, including:

  • Why MCPs are necessary.
  • The core concepts of MCPs.
  • The MCP client-server architecture.
  • How to use an existing MCP server.
  • How to build an MCP server and client from scratch.
  • Hands-on examples and resources to help you follow along.

Why Do We Need MCPs?

Let's start with a familiar concept: Large Language Models (LLMs) like GPT. When you send a message to an LLM, it provides a response as generated text, an image, audio, or another supported format. However, LLMs cannot perform actions on their own.

Imagine building a "FlightGPT" application. If a user says, "I would like to fly to North London," the application should do more than just respond with text; it should book a flight. Taking action means the application must interact with third-party flight services, retrieve flight details, compare them against user preferences (e.g., budget, seat choice, meal preference), and make a decision to book the flight.

To enable this, we need something more powerful: AI Agents.

What Exactly is an AI Agent?

AI agents can interact with third-party platforms, gather information, and combine it with memory from previous conversations. They use an LLM as their "brain" to make decisions and complete tasks. An AI agent can interact with various tools, maintain its own memory, and communicate with an LLM iteratively until its assigned task is complete.

If you have ever built long-running automation scripts—using tools like VMware vRealize Orchestrator, Microsoft System Center Orchestrator, Zapier, or even a series of Python scripts—you already understand the basic mechanics of an AI agent. The key difference is that an AI agent relies on an LLM to make decisions. The agent consults the LLM to determine:

  • Which path to take in a conditional decision.
  • How many times to iterate in a loop.
  • Which third-party tools to use.
  • How to process user input.
  • When the task's goal is achieved.

In short, an AI agent is an automation script that can think.

A Typical AI Agent Workflow

Here’s how an AI agent might handle our flight booking request:

  1. User Request: The user sends a request to the AI agent.
  2. Input Parsing: The agent asks the LLM to extract key details (e.g., destination: London) from the user's input.
  3. Tool Selection: The agent asks the LLM to identify the correct third-party tools to use (e.g., airline APIs, not hotel or car rental APIs).
  4. Data Retrieval: The agent interacts with the airline APIs to retrieve flight details.
  5. Next Step Decision: The agent asks the LLM what to do next. The LLM might suggest fetching user preferences from a database.
  6. Preference Fetching: The agent retrieves the user's preferences.
  7. Final Decision: All collected details are sent to the LLM, which identifies the best flight.
  8. Action & Confirmation: The agent books the flight and sends the confirmation details to the user.

As you can see, the LLM interacts with the agent at various stages to process data and make decisions.

Here is a super-simplified pseudo-code example of an AI agent in Python:

# Note: This is pseudo-code for illustrative purposes.

def flight_booking_agent():
    # 1. Get user input
    user_input = get_user_input("Where would you like to fly?")

    # 2. Call LLM to extract details
    travel_details = llm.extract_info(user_input) # {origin, destination, date}

    # 3. Fetch flight details from third-party websites
    flight_options = fetch_flight_details(travel_details)

    # 4. Fetch user preferences from memory/database
    user_preferences = get_user_preferences()

    # 5. Send all details to LLM to make a decision
    chosen_flight = llm.make_decision(flight_options, user_preferences)

    # 6. Book the flight via a third-party API call
    booking_confirmation = book_flight(chosen_flight)

    # 7. Inform the user
    print(f"Flight booked! Details: {booking_confirmation}")

This workflow is simplified and omits decision-making loops and complex error handling, which frameworks like LangChain and LangGraph help manage.

The Problem: A Mess of APIs

An agent interacts with third-party platforms through tools. For our flight booker, the agent would have a tool for each airline.

However, every airline has its own API standard. One might use API/flights, another flights-list, and a third list_flights. Their response formats also differ. With hundreds of airlines and millions of other third-party sites, writing custom code for each one is unsustainable. In the age of AI, we shouldn't have to.
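To make this concrete, here is a hedged sketch of the glue code an agent would otherwise need. The airline endpoints and field names are invented for illustration; the point is that every provider requires its own adapter:

```python
# Hypothetical adapters: each airline returns flights in a different shape,
# so each one needs bespoke parsing code.

def parse_airline_a(response: dict) -> list:
    # Airline A: API/flights -> {"flights": [{"from": ..., "to": ...}]}
    return [{"origin": f["from"], "destination": f["to"]}
            for f in response["flights"]]

def parse_airline_b(response: dict) -> list:
    # Airline B: flights-list -> {"data": [{"dep": ..., "arr": ...}]}
    return [{"origin": f["dep"], "destination": f["arr"]}
            for f in response["data"]]

# One adapter per provider -- multiplied across hundreds of airlines.
normalized = parse_airline_a({"flights": [{"from": "LAX", "to": "JFK"}]})
print(normalized)  # [{'origin': 'LAX', 'destination': 'JFK'}]
```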

This is where Model Context Protocols (MCPs) come in.

The Solution: Model Context Protocols (MCPs)

Think of MCPs as a universal guide for AI agents, providing the context they need to choose the right APIs and interact with any third-party platform.

MCPs follow a client-server architecture. An AI agent uses an MCP client to interact with an MCP server. These clients are often embedded within coding agents in IDEs like Cursor and Windsurf, or in tools like Claude Code.

Use Cases:

  • Development: Connect an agent to a local database or browser developer tools to debug an application. For example, an agent could analyze Git history and browser console logs to find the commit that broke a UI element.
  • Data Engineering: Provide an MCP server with read-only access to data sources like Stripe, BigQuery, and Data Studio. An agent can then investigate issues like missing data by combining information from these different sources.

Who Builds MCP Servers?

Any business or application owner who wants AI agents to interact with their service can build and maintain an MCP server. The Model Context Protocol repository on GitHub lists official integrations from various vendors.

Because MCP is an open standard, anyone can build a server for any API. Community-built servers exist for services that don't yet have official ones, though they should be used with caution.

Understanding the MCP Specification

To understand how anyone can build a server, we need to look at the protocol itself.

  • Model: Refers to the AI, or the Large Language Model (LLM).
  • Context: Refers to giving the model context about a third-party service.
  • Protocol: A set of standards.

MCP is a specification that defines how AI applications and external services work with each other. You can find the full details at modelcontextprotocol.io. The key rules are:

  • Communication must use the JSON-RPC format.
  • The connection must be stateful.
  • The server must offer features like resources, prompts, and tools.
  • The client must offer features like sampling, roots, and elicitation.

MCP Architecture: A Deeper Look

If MCPs didn't exist, a developer building a client would need to read API documentation to understand a server's capabilities. MCP standardizes this discovery process.

An MCP server exposes three main features:

  1. Tools: These are the server's capabilities or actions. The server must list all its tools in a specific format, each with a description, input schema, and output schema.
  2. Resources: These are data artifacts needed for decision-making, such as FAQs, policy documents, or city guides. A resource has a URI (which can point to a URL, a local file, or a Git repo), a name, a title, and a description.
  3. Prompts: As the developer of the MCP server, you know the best way to prompt an LLM to use your tools correctly. You can pre-define these optimal prompts. For example, a prompt could instruct the AI: "You are a travel assistant. When the user asks about flights, you must call the search_flights tool with the origin, destination, and date. Format all dates as YYYY-MM-DD."
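For example, a single entry in a server's tool listing might look like the following. The top-level field names (name, description, inputSchema) follow the MCP specification; the flight-specific schema contents are illustrative:

```python
import json

# Illustrative entry from a server's tool listing.
search_flights_tool = {
    "name": "search_flights",
    "description": "Search for available flights between two airports.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "origin": {"type": "string"},
            "destination": {"type": "string"},
            "date": {"type": "string", "description": "YYYY-MM-DD"},
        },
        "required": ["origin", "destination"],
    },
}
print(json.dumps(search_flights_tool, indent=2))
```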

The Communication Layer: JSON-RPC

The server and client communicate using JSON-RPC (Remote Procedure Call). It's a simple protocol where a client invokes a method on a remote server.

  • Client Request: A JSON object specifying the jsonrpc version ("2.0"), the method to call, the params to pass, and a request id.
  • Server Response: A JSON object with the jsonrpc version, the result of the call, the same id, and an optional error field.

JSON-RPC is stateless and doesn't define the data transport layer. MCP supports HTTP and Standard I/O for communication.

Here is a simple Python example of a JSON-RPC call over HTTP, using the requests library (the add method and the port are illustrative):

import requests

# Build a JSON-RPC 2.0 request that calls a remote "add" method
request = {
    "jsonrpc": "2.0",
    "method": "add",
    "params": {"a": 5, "b": 10},
    "id": 1,
}

# Send it to the server; a successful response echoes the id
# and carries the result: {"jsonrpc": "2.0", "result": 15, "id": 1}
response = requests.post("http://localhost:5000", json=request).json()
print(response["result"])  # Output: 15

How to Use an Existing MCP Server

With MCP, you no longer write code to interact with an API directly. Instead, your application makes a call to an MCP server, which handles the endpoint interaction.

Hosting Models

  • Local Server: You can run an MCP server instance on your local machine and connect to it via Standard I/O (for performance) or HTTP. This is great for development and testing.
  • Remote Server: The server can be hosted remotely, either by your organization or by a third-party vendor. You connect to it via HTTP. When using a remote server, always consider authentication, data privacy, and trustworthiness.

Configuration in an IDE

IDEs like Cursor or tools like Claude Code use a configuration file (e.g., mcp.json) to connect to MCP servers. This file contains a list of servers, specifying the command and arguments used to start each one.

Here is an example of an mcp.json file for a local flight booking server:

{
  "mcpServers": {
    "flight-booking": {
      "command": "uv",
      "args": ["run", "python", "server.py"],
      "env": {}
    }
  }
}

When configured, the IDE starts the MCP server and communicates with it over Standard I/O. The agent can then list the available tools and you can chat with it to perform actions.
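Under the hood, the stdio transport exchanges newline-delimited JSON-RPC messages over the server process's standard input and output. A minimal sketch of what a client writes to the server's stdin:

```python
import json

# A tools/list request as it travels over the stdio transport:
# one JSON-RPC message per line, with no embedded newlines.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
wire_message = json.dumps(request) + "\n"
print(wire_message, end="")
```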

How to Build an MCP Server from Scratch

Let's outline the components for our flight booking server:

  • Resources: List of airports, flight statuses, weather data, booking policies, loyalty programs.
  • Tools: search_for_flights, get_flight_details, create_booking, check_in, select_seats.
  • Prompts: Pre-defined prompts for finding the best flight, optimizing for budget, or handling disruptions.

The modelcontextprotocol.io website provides official SDKs to simplify development. Using the Python SDK (the mcp package), the overall approach is:

  1. Import FastMCP from the SDK and initialize a server.
  2. Define tools, resources, and prompts using function decorators.
  3. Run the server, specifying a transport protocol (Standard I/O or HTTP).

Here’s how you define the components in code:

from mcp.server.fastmcp import FastMCP

mcp_server = FastMCP("flight-booking")

# Define a Resource (identified by a URI)
@mcp_server.resource("airports://list")
def get_airport_info() -> dict:
    """List of airport codes."""
    return {"LAX": "Los Angeles", "JFK": "New York"}

# Define a Tool (input/output schemas are derived from the type hints)
@mcp_server.tool()
def search_flights(origin: str, destination: str) -> dict:
    """Search for available flights."""
    # Logic to search for flights and return results
    return {"flights": []}

# Define a Prompt
@mcp_server.prompt()
def find_best_flight() -> str:
    """Find the best flight."""
    return "You are a helpful travel assistant..."

# Run the server
if __name__ == "__main__":
    mcp_server.run(transport="stdio")

To run the server, specify the transport as stdio or as streamable HTTP (configuring a host and port for the latter).

How to Build an MCP Client

While many IDEs and agents have built-in MCP clients, you might want to build your own for a custom AI agent.

Here is a simple example of a Python client built with the official SDK, connecting over the streamable HTTP transport:

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main():
    # Connect to the server
    async with streamablehttp_client("http://localhost:8000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List available tools
            tools = await session.list_tools()
            print(tools)

            # Call a specific tool
            flight_results = await session.call_tool(
                "search_flights", {"origin": "LAX", "destination": "JFK"}
            )
            print(flight_results)

            # Read a resource by its URI
            airport_data = await session.read_resource("airports://list")
            print(airport_data)

            # List available prompts
            prompts = await session.list_prompts()
            print(prompts)

asyncio.run(main())

Advanced Client-Server Interaction

The MCP specification includes several advanced features for more complex interactions.

Context: Server-to-Client Communication

Context allows the server to talk back to the client, which is useful for sending progress updates during long-running tasks like a flight booking. On the server, you can use the context object to send info, debug messages, or progress reports to the client.

Client Feature: Roots

Roots are like shared folders. The client can expose specific local directories to the server, which is necessary for tools like code linters or compilers that need to read project files. This is a security feature to prevent the server from accessing the entire file system.

Client Feature: Sampling

At times, the server might need to interact with an LLM—for example, to summarize a resource. The server doesn't call the LLM directly. Instead, it sends a "sampling" request to the client. The client controls the model selection, token limits, and all other aspects of the LLM interaction, keeping the server lightweight and decoupled.
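On the wire, a sampling request is just another JSON-RPC message, this time sent from the server to the client. The method name sampling/createMessage comes from the MCP specification; the message content and id here are illustrative:

```python
import json

# Server -> client sampling request: the client decides which model
# to use and enforces its own token limits.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Summarize the booking policy resource.",
                },
            }
        ],
        "maxTokens": 200,
    },
}
print(json.dumps(sampling_request, indent=2))
```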

Client Feature: Elicitation

Sometimes the server needs more information from the end-user to make a decision. It can send an "elicitation" request to the client, which then prompts the user for the required input and sends the response back to the server. This ensures the user remains in the loop for critical decisions.

Join the 10xdev Community

Subscribe and get 8+ free PDFs that contain detailed roadmaps with recommended learning periods for each programming language or field, along with links to free resources such as books, YouTube tutorials, and courses with certificates.
