Creating an MCP Server and Client with Spring AI in 5 Minutes
In this article, we are going to create an MCP server and an MCP client using Spring AI. Communication between an MCP client and an MCP server happens at the transport layer, and there are two primary transports: Standard I/O and Server-Sent Events.
- Standard I/O (Stdio): This transport communicates over the standard input and output streams. It's particularly useful for local integrations and command-line tools, where the client launches the server as a subprocess.
- Server-Sent Events (SSE): This transport enables server-to-client streaming over HTTP, with HTTP POST requests used for client-to-server messages. It is the recommended approach when client and server run as separate processes, such as in production environments.
In this guide, we will build both an MCP server and client using Spring AI with the Standard I/O transport method.
Building the Spring AI MCP Server
First, let's create a Spring Boot project for our Spring AI MCP server operating in Standard I/O mode. We will use Maven with Java 21, although Java 17 is also compatible. The key dependency to include is the MCP Server starter. A quick look at the project's dependencies reveals that the Spring AI version is 1.0.0-SNAPSHOT and that we have the spring-ai-starter-mcp-server dependency, as sketched below.
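For reference, here is a minimal sketch of the relevant pom.xml entries, assuming the Spring AI BOM is used to manage versions:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-bom</artifactId>
            <version>1.0.0-SNAPSHOT</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-starter-mcp-server</artifactId>
    </dependency>
</dependencies>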
Application Properties Configuration
Now, let's configure the project. In the application.properties file, it's crucial to disable the banner and console logging: the server talks to the client over standard output, so any stray console output would corrupt the protocol messages.
spring.main.banner-mode=off
logging.pattern.console=
These two lines are essential for running an MCP server in Standard I/O mode. Next, we enable the MCP server for Standard I/O and assign it a name and version.
spring.ai.mcp.server.stdio=true
spring.ai.mcp.server.name=weather-server
spring.ai.mcp.server.version=1.0.0
We'll name our server 'weather-server'. The tool we create will accept latitude and longitude to provide weather information for that location.
Creating the Weather Service Tool
Let's create a service class that will house our weather tool. We'll annotate this class with @Service.
Note: For simplicity, this service will return dummy weather data. In a real-world application, you could make actual API or database calls to fetch real-time weather information. The goal here is to demonstrate how a Large Language Model (LLM) can invoke this tool for additional context.
We'll name the method getWeatherByLocation, and it will accept latitude and longitude as parameters. If the LLM determines it needs weather data for a given location, it will invoke this tool. The method body contains some simple conditional logic to return dummy weather data.
import org.springframework.ai.tool.annotation.Tool;
import org.springframework.stereotype.Service;

@Service
public class WeatherService {

    @Tool(description = "Get weather forecast for specific lat and long values")
    public String getWeatherByLocation(double latitude, double longitude) {
        // Dummy data: a real implementation would call a weather API here.
        if (latitude == 10 && longitude == 10) {
            return "The weather forecast is rainy.";
        } else if (latitude == 20 && longitude == 20) {
            return "The weather is stormy.";
        } else {
            return "The weather is sunny.";
        }
    }
}
Crucially, we annotate this method with @Tool and provide a clear description. This description helps the LLM understand when and how to use this tool.
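Spring AI also provides a @ToolParam annotation for describing individual parameters, which can help the model supply correct arguments. A minimal sketch (the parameter descriptions here are illustrative, not part of the original example):

import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.annotation.ToolParam;

@Tool(description = "Get weather forecast for specific lat and long values")
public String getWeatherByLocation(
        @ToolParam(description = "Latitude of the location") double latitude,
        @ToolParam(description = "Longitude of the location") double longitude) {
    // Dummy implementation; see the full conditional logic above.
    return "The weather is sunny.";
}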
Registering the Tool
The final step for the server is to register this service as a tool. In your main Spring Boot application class, create a Bean of type ToolCallbackProvider, using the MethodToolCallbackProvider builder.
import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.ai.tool.method.MethodToolCallbackProvider;

@Bean
public ToolCallbackProvider weatherTools(WeatherService weatherService) {
    // Every @Tool-annotated method on the passed objects is exposed as a tool.
    return MethodToolCallbackProvider.builder()
            .toolObjects(weatherService)
            .build();
}
This registration allows the MCP client to discover the available tools when it connects to the server.
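For context, here is a minimal sketch of what the complete main application class might look like (the class name McpServerApplication is illustrative):

import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.ai.tool.method.MethodToolCallbackProvider;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class McpServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(McpServerApplication.class, args);
    }

    // Expose the @Tool methods of WeatherService to connecting MCP clients.
    @Bean
    public ToolCallbackProvider weatherTools(WeatherService weatherService) {
        return MethodToolCallbackProvider.builder()
                .toolObjects(weatherService)
                .build();
    }
}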
Building the Server Executable
To produce the executable that the MCP client will launch, run a clean install of the application.
mvn clean install
This command builds the runnable artifact in the target directory (here, spring-mcp-server-stdio.sh or a similarly named file, depending on how the Spring Boot Maven plugin is configured). Copy the absolute path to this file, as the MCP client will use it to start the server.
Building the Spring AI MCP Client
Now, let's create a new Spring Boot project for the MCP client, also in Standard I/O mode. We'll use Maven and Java 21. For dependencies, we'll include Web (to expose a REST endpoint), MCP Client, and a Spring AI integration for an LLM, such as Ollama.
We are using Ollama for a locally running LLM, but Spring AI supports numerous other providers, like OpenAI. The beauty of Spring AI is that you can switch between LLMs through configuration changes without altering your code. The dependencies will include spring-boot-starter-web, spring-ai-starter-mcp-client, and spring-ai-starter-model-ollama.
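A minimal sketch of the client's dependency section, again assuming the Spring AI BOM manages the versions:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-starter-mcp-client</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-starter-model-ollama</artifactId>
    </dependency>
</dependencies>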
Client Configuration
Let's configure the client in application.properties. First, we'll add the settings for our chosen LLM provider, Ollama. Note that tool calling requires a model that supports it, such as llama3.1 or newer.
spring.ai.ollama.base-url=http://localhost:11434
spring.ai.ollama.chat.options.model=llama3.1
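If the model isn't present locally yet, pull it with Ollama before starting the client (this assumes Ollama is installed and listening on its default port):
ollama pull llama3.1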
Next, we add the configurations specific to the Spring MCP client, and turn up logging so we can observe the tool calls.
spring.ai.mcp.client.toolcallback.enabled=true
spring.ai.mcp.client.type=SYNC
spring.ai.mcp.client.stdio.servers-configuration=classpath:mcp-servers-config.json
logging.level.org.springframework.ai=DEBUG
To inform the client about the server, we specify a server configuration file via the servers-configuration property above. We'll name it mcp-servers-config.json.
MCP Server Configuration File
Let's create the mcp-servers-config.json file in the src/main/resources directory. This file declares our MCP servers.
{
    "mcpServers": {
        "spring-ai-mcp-weather": {
            "command": "/path/to/your/mcp-server/target/spring-mcp-server-stdio.sh",
            "args": []
        }
    }
}
Important: Ensure the command path points to the absolute path of the server executable you generated earlier.
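If your build produces a plain JAR rather than an executable script, you can launch it through java instead (the path below is illustrative):

{
    "mcpServers": {
        "spring-ai-mcp-weather": {
            "command": "java",
            "args": ["-jar", "/path/to/your/mcp-server/target/spring-mcp-server-stdio.jar"]
        }
    }
}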
Creating the Chat Controller
With the configuration complete, we'll create a ChatController to expose a REST endpoint. This class will be annotated with @RestController. We need a ChatClient to interact with the LLM, and we also inject the ToolCallbackProvider bean. This is the same type of bean we created on the server side; on the client, it is auto-configured from the tools discovered on the connected MCP servers, which makes the client aware of them.
We can set a system prompt to guide the LLM's behavior, instructing it to prioritize contextual information. The key step is registering the tool callbacks with the ChatClient builder.
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ChatController {

    private final ChatClient chatClient;

    public ChatController(ChatClient.Builder builder, ToolCallbackProvider toolCallbackProvider) {
        this.chatClient = builder
                .defaultSystem("You are a helpful assistant. Use the provided tools to answer questions.")
                .defaultToolCallbacks(toolCallbackProvider)
                .build();
    }

    @GetMapping("/chat")
    public String chat(@RequestParam(value = "query", defaultValue = "What is the weather like in Paris?") String query) {
        // Send the user query to the LLM; it decides whether to invoke a tool.
        return chatClient.prompt()
                .user(query)
                .call()
                .content();
    }
}
Finally, we expose a GET endpoint at /chat that accepts a query parameter. The endpoint takes the user's query, wraps it in a prompt, and sends it to the LLM. Notice that our code doesn't directly call the weather tool. We simply send the prompt to the LLM, which then decides whether to invoke the tool based on the context of the query.
Testing the End-to-End Flow
First, start the MCP client web application; via the stdio transport, the client launches the MCP server process itself. Now we can test the endpoint by sending a request with a prompt like: "My current location is latitude 10 and longitude 10. Can I play tennis in this weather?" Recall that our server is configured to return 'rainy' for these coordinates.
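For example, with curl (assuming the client runs on the default port 8080):
curl -G "http://localhost:8080/chat" --data-urlencode "query=My current location is latitude 10 and longitude 10. Can I play tennis in this weather?"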
Observing the application logs, you'll see that the LLM executes a tool call. It understands from the query that it needs additional context (the weather) to answer the question. It invokes our getWeatherByLocation tool, which returns that the weather is rainy. Using this new information, the LLM formulates a helpful response.
Example Response: "The forecast indicates it will rain, which could make outdoor activities like tennis challenging or even impossible."
This demonstrates the power of tool integration. We didn't explicitly ask for the weather, but the LLM intelligently used the available tool to get the necessary context and provide a comprehensive answer.