Create a Custom Confluence MCP Server for Cursor Explained in 10 Minutes
In this article, we'll explore how to create a Model Context Protocol (MCP) server and connect it to Cursor. For this demonstration, we will build a mock Confluence server. This setup will allow your AI assistant to interact with a simulated Confluence instance. We will implement two distinct tools: one to list available pages in Confluence and another to read the content of a specific page.
With this integration, when you ask a question about a project, Cursor can fetch relevant information directly from your Confluence space. For instance, if you ask, "Is the project SmartGarden built with React?" the tool can check Confluence and clarify that it actually uses React Native.
What is the Model Context Protocol (MCP)?
MCP, or Model Context Protocol, provides a standardized way for AI applications (such as Cursor, VS Code, and Claude Desktop) to communicate with external tools.
Modern AI applications such as ChatGPT can already execute built-in tools, like searching the web or the local workspace. MCP extends this capability, allowing you to create your own custom tools using regular functions in Python, JavaScript, or numerous other programming languages.
By exposing a custom function through an MCP server, your AI assistant—in this case, Cursor—can execute it. This means you could, for example, create a tool that allows Cursor to search through your company's Confluence pages, effectively expanding its knowledge base. This article will guide you through building a basic Confluence server to demonstrate this powerful feature.
How to Use an Existing MCP Server
If you prefer to use a pre-built MCP server, you can find many options at cursor.directory. Let's walk through an example using Firecrawl, an application that scrapes and converts website data into an LLM-ready format.

- Find the server: Navigate to cursor.directory and locate the MCP server you want to use, such as Firecrawl.
- Get the configuration: Click "Installation Instructions" to find the necessary configuration details. This is typically a JSON object.
- Copy the JSON: Copy the provided JSON configuration. It will look something like this (the exact command, arguments, and environment variables come from the installation instructions):

```json
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
```

- Add to Cursor:
  - In Cursor, go to Chat Settings > Tools and Integrations.
  - Click Add Custom MCP and paste the JSON object.
  - If required, add any necessary credentials, such as an API key.

Once saved, the new tools (e.g., `firecrawl_scrape`) will be available to your AI assistant. You can then ask it to perform tasks like, "Scrape this website for me," and it will use the newly integrated tool.
A Quick Note on Security
It is highly recommended to run your own MCP servers locally, especially when dealing with sensitive data. When you add an external MCP server, your AI assistant is executing code that you haven't written or reviewed, which can be a security risk. The language model only sees the tool's description, not its underlying code. A malicious actor could disguise harmful code with a benign description.
Rule of thumb: Only use MCP servers from developers or companies you trust.
Building Your Own Confluence MCP Server
Now, let's create our own server and connect it to Cursor.
Step 1: Project Setup
First, choose a directory for your MCP servers. Inside that directory, initialize a new Python project. We'll use `uv`, but `pip` or any other package manager works too.
```bash
# Create and navigate to the project directory
uv init confluence-server
cd confluence-server

# Initialize a virtual environment
uv venv
```
Next, open this directory in Cursor.
Step 2: Installing Dependencies
Activate your virtual environment and install the MCP SDK for Python.
```bash
# Activate the virtual environment (Linux/macOS)
source .venv/bin/activate
# On Windows, use:
# .venv\Scripts\activate

# Install the MCP SDK
uv add mcp
# If using pip:
# pip install mcp
```
Step 3: Mocking the Confluence Service
To simulate a Confluence instance, create a file named `mock_confluence_service.py` and add the following mock data. In a real-world scenario, this is where you would implement your Confluence API integration.
```python
# mock_confluence_service.py
pages = [
    {
        "id": "123",
        "title": "SmartGarden Monitor Project",
        "content": "The SmartGarden Monitor is an IoT project for automated plant care. Key features include automated watering, light sensing, and soil moisture monitoring. The tech stack includes Python, Raspberry Pi, and React Native for the mobile app."
    },
    {
        "id": "456",
        "title": "Auto-Resume Builder Project",
        "content": "The Auto-Resume Builder is a web application that generates professional resumes from user-provided data. Key features include multiple templates, PDF export, and a user-friendly interface. The tech stack includes Node.js, Express, React, and MongoDB."
    }
]
```
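The mock data keeps this tutorial self-contained. If you later want to back the same tools with a real Confluence instance, a sketch along these lines could replace the hard-coded list. This is only an illustration: it assumes Confluence Cloud, the `requests` library, and placeholder values for `BASE_URL`, `EMAIL`, and `API_TOKEN` that you would supply yourself.

```python
# real_confluence_service.py (illustrative sketch, not part of the tutorial)
import requests

BASE_URL = "https://your-domain.atlassian.net/wiki"  # placeholder
EMAIL = "you@example.com"                            # placeholder
API_TOKEN = "your-api-token"                         # placeholder


def get_pages(space_key: str) -> list[dict]:
    """Fetch page IDs and titles from a Confluence space via the REST API."""
    resp = requests.get(
        f"{BASE_URL}/rest/api/content",
        params={"spaceKey": space_key, "type": "page", "limit": 25},
        auth=(EMAIL, API_TOKEN),
        timeout=10,
    )
    resp.raise_for_status()
    return [{"id": p["id"], "title": p["title"]} for p in resp.json()["results"]]


def get_page_content(page_id: str) -> str:
    """Fetch a single page's body in Confluence storage format."""
    resp = requests.get(
        f"{BASE_URL}/rest/api/content/{page_id}",
        params={"expand": "body.storage"},
        auth=(EMAIL, API_TOKEN),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["body"]["storage"]["value"]
```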
Step 4: Creating the Server and Tools
Create a `server.py` file. This is where you'll define the MCP server and its tools.
```python
# server.py
from mcp.server.fastmcp import FastMCP

from mock_confluence_service import pages

# Initialize the server
mcp = FastMCP("confluence")


@mcp.tool()
def get_confluence_pages() -> list[dict]:
    """
    Gets all available Confluence pages.

    Returns a list of Confluence pages with their IDs and titles.
    """
    # Return only ID and title to avoid overwhelming the context
    return [{"id": page["id"], "title": page["title"]} for page in pages]


@mcp.tool()
def get_confluence_page_by_id(page_id: str) -> str:
    """
    Gets the content of a Confluence page given its ID.

    Args:
        page_id (str): The ID of the page to retrieve.
    """
    for page in pages:
        if page["id"] == page_id:
            return page["content"]
    return "Page not found."
```
Key Points:
* We initialize the server with `FastMCP()`.
* The `@mcp.tool()` decorator registers a function as an MCP tool.
* Crucially, the docstring of each function serves as its description. Cursor uses this description to decide when to call the tool, so be clear and specific.
* Type hints (`page_id: str`, `-> str`) tell Cursor the expected data format for arguments and return values.
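To see why the docstring and type hints matter, it helps to know that Cursor receives each tool as a name, a description, and a JSON schema for its arguments. Roughly speaking (this is illustrative, not the exact wire format), `get_confluence_page_by_id` is advertised like this:

```json
{
  "name": "get_confluence_page_by_id",
  "description": "Gets the content of a Confluence page given its ID.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "page_id": { "type": "string" }
    },
    "required": ["page_id"]
  }
}
```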
Step 5: Creating the Entry Point
Create a `main.py` file to run the server.
```python
# main.py
from server import mcp

if __name__ == "__main__":
    mcp.run(transport="stdio")
```
Using `transport="stdio"` is the safest and fastest option for locally hosted MCP servers, as it communicates over standard input/output.
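If you ever need to expose the server over HTTP instead (for example, to share it across machines), recent versions of the Python MCP SDK also support HTTP-based transports. Treat the exact option names as version-dependent; this is a sketch, not part of the tutorial:

```python
# main.py -- alternative entry point (sketch; assumes a recent MCP Python SDK)
from server import mcp

if __name__ == "__main__":
    # Serve over HTTP instead of stdio; stdio remains the best choice for local use.
    mcp.run(transport="streamable-http")
```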
Step 6: Debugging the Server
Before connecting to Cursor, you can debug the server using the MCP CLI.
First, install the CLI extras for the MCP SDK:

```bash
uv add "mcp[cli]"
```

Then, run the development server:

```bash
uv run mcp dev main.py
```

Alternatively, if you are not using `uv`, you can use `npx`:

```bash
npx @modelcontextprotocol/inspector python main.py
```
This command opens a web-based inspector where you can connect to your server, view the available tools, and test them manually.
Integrating the MCP Server with Cursor
Now, let's add the server to Cursor's configuration.
- Open Settings: In Cursor, go to Chat Settings > Tools and Integrations.
- Add Custom MCP: Click Add Custom MCP to open the `mcp.json` configuration file.
- Configure the Server: Add a new entry for your Confluence server under `mcpServers`.

```json
{
  "mcpServers": {
    "confluence": {
      "command": "/home/ahmed/ytcontents/.venv/bin/python",
      "args": [
        "/home/ahmed/ytcontents/confluence-server/main.py"
      ]
    }
  }
}
```
Important:

* `command`: Use the absolute path to your Python (or `uv`) executable. Find it with `which python` or `which uv`.
* `args`: Use the absolute path to your server's entry point, i.e. `main.py` inside the project directory.
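If you would rather launch the server through `uv` than point at the virtual environment's Python directly, a configuration along these lines should also work (the path is illustrative and must match your machine):

```json
{
  "mcpServers": {
    "confluence": {
      "command": "uv",
      "args": [
        "--directory",
        "/home/ahmed/ytcontents/confluence-server",
        "run",
        "main.py"
      ]
    }
  }
}
```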
After saving the file, Cursor will attempt to load the tools. You can monitor the progress and check for errors in the Output tab under the MCP Logs channel.
Testing the Custom Integration
To ensure Cursor uses your new tools effectively, you can create a "rule" that provides it with more context.
- Create a `.cursor` directory in your workspace root.
- Inside `.cursor`, create a `rules` directory.
- Inside `rules`, create a file named `confluence-mcp.mdc`.
- Add the following content to the file:
```markdown
---
alwaysApply: true
---

# Rule: Confluence Project Information

Whenever a project is specified in a conversation with the user, use the available tools to retrieve additional information about the project from the Confluence space accessible via tools. Those tools will provide additional information about the current project that you are working on, such as its tech stack, versions, integrations, use cases, etc.
```
This rule instructs Cursor to always consider using the Confluence tools when a project is mentioned.
Now, you can test it. Even in a completely empty workspace, you can ask:
"What is the tech stack of the project resume builder?"
Cursor will first call `get_confluence_pages` to find the relevant page, identify its ID (`456`), and then call `get_confluence_page_by_id` to retrieve the content. Finally, it will use that content to answer your question accurately.
This demonstrates how you can extend Cursor's capabilities to integrate with your own knowledge bases and custom tools, creating a more powerful and context-aware AI assistant.
Join the 10xdev Community
Subscribe and get 8+ free PDFs that contain detailed roadmaps with recommended learning periods for each programming language or field, along with links to free resources such as books, YouTube tutorials, and courses with certificates.