Building Your First MCP Client and Server in Under 10 Minutes
In this article, we'll be writing an MCP client as well as an MCP server from scratch. This guide will walk you through a simple implementation to help you grasp the fundamental concepts.
What is MCP?
MCP (Model Context Protocol) is an open protocol designed to standardize how applications provide context to LLMs. Developed by Anthropic, it's often referred to as the "USB-C port for AI applications."
Think about the USB-C port on your laptop. A single, standard port can connect to displays, phones, headsets, and numerous other peripherals. MCP aims to bring this same level of standardized, simplified connectivity to Large Language Models (LLMs). Today, connecting an LLM to the many tools, APIs, and data sources it might need requires significant custom code and one-off integrations. MCP abstracts this complexity, creating a standardized way for LLMs to interact with various tools and APIs.
In this guide, we'll create a basic MCP client and server to demonstrate these principles. We'll start with the fundamentals and build upon them with more complex features in the future.
Setting Up the Environment
To begin, we need two key Python libraries: the MCP SDK from Anthropic and the OpenAI client. You can install them using pip:
pip install "mcp[cli]"
pip install openai
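To confirm both packages are installed, you can run a quick import check (this assumes nothing beyond the two installs above):
from importlib.metadata import version

import mcp      # noqa: F401 -- just verifying the import works
import openai   # noqa: F401

print("mcp:", version("mcp"), "| openai:", version("openai"))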
Creating the MCP Server
We'll start by building the server. The goal is to implement a tool on the server that a client can then call. For this example, we'll create a server that calculates Body Mass Index (BMI). This is a straightforward mathematical calculation that doesn't require complex API calls.
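Before wiring anything into MCP, here is the formula itself applied to an example person (70 kg, 1.75 m), just as plain Python:
# BMI = weight / height^2
weight_kg, height_m = 70.0, 1.75
print(weight_kg / (height_m ** 2))  # ~22.86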
First, create a file named BMI_server.py.
To build the server, we use the FastMCP class from the mcp.server.fastmcp module. This class simplifies the process: we just need to instantiate it and give our server a name.
import sys

from mcp.server.fastmcp import FastMCP
# Create an instance of the FastMCP class
mcp = FastMCP("BMI_server")
Defining a Tool
Next, we'll define a tool. In the MCP framework, you can expose a function as a callable tool using the @mcp.tool() decorator. Any function decorated this way becomes accessible over the MCP protocol.
Our tool will be a function called calculate_bmi. It will accept two arguments: weight_kg (a float) and height_m (a float), and it will return the calculated BMI as a float. It's crucial to add a clear docstring to the function, as this description provides context to the LLM, similar to how tool descriptions work in AI agents.
@mcp.tool()
def calculate_bmi(weight_kg: float, height_m: float) -> float:
    """
    Calculates BMI given weight in kg and height in meters.
    """
    # BMI formula: weight / (height^2)
    bmi = weight_kg / (height_m ** 2)
    return bmi
Running the Server
To ensure the server runs only when the script is executed directly, we use a standard if __name__ == '__main__': block. Inside this block, we start the server and specify the transport mechanism. For this example, we'll use standard input/output (stdio), which lets a client launch the server as a subprocess and exchange protocol messages over its stdin and stdout.
if __name__ == "__main__":
    # With the stdio transport, stdout carries the protocol messages,
    # so send diagnostic output to stderr instead.
    print(f"Starting server: {mcp.name}", file=sys.stderr)
    mcp.run(transport="stdio")
When you run this script directly (python BMI_server.py), you will see the message Starting server: BMI_server printed to stderr, indicating it's running correctly.
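If you installed the CLI extras (mcp[cli]), you can also test the server interactively with the bundled MCP Inspector before writing any client code (the Inspector UI requires Node.js):
mcp dev BMI_server.py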
Creating the MCP Client
With our simple server ready, it's time to create a client to communicate with it and use the calculate_bmi tool. Let's create a new file named BMI_client.py.
Imports and Initial Setup
First, we'll import the necessary modules. We need openai to interact with the LLM, and several components from the mcp library, including ClientSession, StdioServerParameters, and the stdio_client helper. We'll also need json for parsing the LLM's response.
import asyncio
import json

from openai import OpenAI
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# It's assumed your OPENAI_API_KEY is set as an environment variable
llm_client = OpenAI()
Building a Dynamic Prompt for Tool Discovery
A key step is to create a well-structured prompt for the LLM. This prompt must include the user's query along with instructions and context about the available tools. The goal is for the LLM to correctly identify which tool to use and what arguments to pass to it based on the user's request.
This function dynamically builds a prompt that includes the name, description, and input schema for each available tool.
def get_prompt_to_identify_tool(query: str, tools: list) -> str:
    """
    Generates a prompt for the LLM to identify the correct tool and arguments.
    """
    prompt = "You are a helpful assistant with access to these tools:\n\n"
    for tool in tools:
        prompt += f"- Tool: {tool.name}\n"
        prompt += f"  Description: {tool.description}\n"
        prompt += f"  Input Schema: {tool.inputSchema['properties']}\n\n"
    prompt += "Based on the user's query, identify the appropriate tool and the arguments to use. "
    prompt += "Return the response as a JSON object with 'tool' and 'arguments' keys.\n\n"
    prompt += f"User Query: \"{query}\""
    return prompt
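To preview what this prompt looks like, here is a quick test using a hypothetical stand-in object that mimics the name, description, and inputSchema attributes of the Tool objects the server will return:
# Hypothetical stand-in for a Tool object, used only to preview the prompt text
from types import SimpleNamespace

demo_tool = SimpleNamespace(
    name="calculate_bmi",
    description="Calculates BMI given weight in kg and height in meters.",
    inputSchema={"properties": {"weight_kg": {"type": "number"},
                                "height_m": {"type": "number"}}},
)
print(get_prompt_to_identify_tool("Calculate BMI for 70 kg and 1.75 m", [demo_tool]))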
Connecting to the Server and Executing the Tool
The main logic will live in an asynchronous run function, which handles the entire workflow:
- Connect to the Server: We use the stdio_client helper with StdioServerParameters pointing to our BMI_server.py script.
- Create a Session: We open a ClientSession over the resulting read/write streams and initialize the connection.
- List Tools: We ask the server for its list of available tools using session.list_tools().
- Generate the Prompt: We call our get_prompt_to_identify_tool function, passing the user's query and the list of tools.
- Invoke the LLM: We send the generated prompt to the OpenAI API. The LLM returns a JSON string containing the identified tool name and its required arguments.
- Execute the Tool: After parsing the JSON response, we use session.call_tool(). The MCP SDK handles the communication with the server to execute the function and return the result.
Putting It All Together: The Client Code
Here is the complete code for the BMI_client.py file, which combines tool discovery, LLM prompting, and tool invocation into a simple, elegant flow.
import asyncio
import json

from openai import OpenAI
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# It's assumed your OPENAI_API_KEY is set as an environment variable
llm_client = OpenAI()


def get_prompt_to_identify_tool(query: str, tools: list) -> str:
    """
    Generates a prompt for the LLM to identify the correct tool and arguments.
    """
    prompt = "You are a helpful assistant with access to these tools:\n\n"
    for tool in tools:
        prompt += f"- Tool: {tool.name}\n"
        prompt += f"  Description: {tool.description}\n"
        prompt += f"  Input Schema: {tool.inputSchema['properties']}\n\n"
    prompt += "Based on the user's query, identify the appropriate tool and the arguments to use. "
    prompt += "Return the response as a JSON object with 'tool' and 'arguments' keys.\n\n"
    prompt += f"User Query: \"{query}\""
    return prompt


async def run(query: str):
    # Launch BMI_server.py as a subprocess and talk to it over stdio
    server_parameters = StdioServerParameters(
        command="python",
        args=["BMI_server.py"],
    )

    async with stdio_client(server_parameters) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Discover the tools the server exposes
            tools = (await session.list_tools()).tools
            print("Available tools:", [tool.name for tool in tools])

            # Ask the LLM which tool to call and with which arguments
            prompt = get_prompt_to_identify_tool(query, tools)
            llm_response = llm_client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
                response_format={"type": "json_object"},
            )
            tool_to_call = json.loads(llm_response.choices[0].message.content)
            print(f"LLM identified tool: {tool_to_call}")

            # Execute the chosen tool on the server and read the text result
            result = await session.call_tool(
                tool_to_call["tool"],
                arguments=tool_to_call["arguments"],
            )
            print(f"\nBMI Result: {result.content[0].text}")


if __name__ == "__main__":
    user_query = "Calculate BMI for a person with weight 70 kg and height 1.75 m"
    asyncio.run(run(user_query))
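If everything is wired up, running python BMI_client.py produces output roughly like the following (the exact JSON the LLM returns may vary from run to run):
Available tools: ['calculate_bmi']
LLM identified tool: {'tool': 'calculate_bmi', 'arguments': {'weight_kg': 70, 'height_m': 1.75}}

BMI Result: 22.857142857142858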
Conclusion
That's how you build an intelligent MCP client that can dynamically call tools from an MCP server using an LLM. This example demonstrates the power of MCP in combining tool discovery, LLM prompting, and remote tool invocation into a seamless workflow. We will explore more advanced features in future articles.