Supercharge Gemini CLI with MCP Servers: A 5-Minute Guide

By the 10xdev team, August 03, 2025

In this article, we'll explore how to use and configure MCP servers with Gemini CLI. Gemini CLI is a powerful AI tool recently released by Google. For those who need a quick introduction, our previous publications cover the basics of getting started.

This guide focuses specifically on setting up MCP servers with Gemini CLI. If you're unfamiliar with MCP, it's recommended to review introductory materials on the topic to get up to speed.

What Happens Without Configuration?

When you run the /mcp command in a fresh Gemini CLI installation, it typically opens a web browser and directs you to the official MCP documentation. This is the default behavior when no MCP servers are configured.

Checking the terminal confirms that no MCP servers are configured, which is why Gemini defaults to opening the documentation. To change this, you need to create and modify a settings.json file, which Gemini uses for storing its configurations. The documentation shows that adding an MCP server configuration to this settings.json file is a straightforward process. Let's walk through the steps.

Step-by-Step Configuration

First, stop the Gemini CLI by running the /quit command in your terminal. Next, navigate to your home directory.

cd ~

From there, change into the hidden Gemini configuration directory. This folder is automatically created the first time you run the CLI.

cd .gemini

Note: Don't forget the dot (.) at the beginning of the folder name.

To see the contents of this directory, list them using the ls -1 command. This will display each file or folder on a new line, where you will find the settings.json file we need to edit.

ls -1

Open this file in your preferred code editor, such as VS Code.

code settings.json

If the code command isn't available in your terminal, you can easily install it. In VS Code, open the Command Palette (Cmd+Shift+P on macOS or Ctrl+Shift+P on Windows/Linux) and search for Shell Command: Install 'code' command in PATH.

With the settings.json file open, you can now add your MCP server configurations. The structure is simple: you add an mcpServers object alongside the other existing settings. The specific content within this object depends entirely on the MCP servers you intend to use, as each has its own configuration. We will cover a couple of popular examples.
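For reference, a minimal settings.json with one server registered might look like the sketch below. The server name, package, and environment variable here are placeholders rather than a real MCP, and any existing keys (such as your theme setting) should be left in place. Note that Gemini CLI spells the key mcpServers in camelCase:

```json
{
  "theme": "Default",
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "example-mcp-package"],
      "env": {
        "EXAMPLE_API_KEY": "your-key-here"
      }
    }
  }
}
```

Each entry tells Gemini CLI how to launch the server: the command to run, its arguments, and any environment variables (typically API keys) it needs.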

Example 1: Installing Firecrawl for Web Scraping

Our first example is Firecrawl, a web scraping service designed to feed clean data from any website directly into your AI applications. The Firecrawl MCP is an excellent tool for providing the LLM with context from live websites, allowing it to scrape and process relevant information on demand.

To install it, you'll need to add its JSON configuration to your settings.json file. You can typically find the required configuration on the official Firecrawl website. Once you paste the configuration, you must replace the placeholder API key with your own.

To get an API key, sign up on the Firecrawl website, navigate to the API keys section in your dashboard, and generate a new key. Copy this key and paste it into the FIRECRAWL_API_KEY field of the server's env block in your settings.json file.
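Putting it together, the Firecrawl entry in settings.json looks roughly like the following. The package name firecrawl-mcp and the FIRECRAWL_API_KEY variable reflect Firecrawl's published MCP configuration, but double-check the official docs in case they have changed:

```json
{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "fc-YOUR-API-KEY"
      }
    }
  }
}
```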

After saving the file, restart Gemini CLI. You should see that the Firecrawl MCP is now registered. Running the /mcp command again will now display detailed information about the newly installed Firecrawl MCP.

Let's test it with a practical example. Prompt Gemini to scrape the top trending articles from Hacker News:

scrape the top 10 trending news from hacker news

Gemini will identify that it needs a scraping tool and will ask for permission to run Firecrawl. You can choose to allow it once or always. After granting permission, Firecrawl will execute the search and return the results directly in the terminal, providing a list of the top 10 trending articles from the Hacker News website. This, in a nutshell, demonstrates the power of an MCP server.

Example 2: Installing Context7 for Up-to-Date Documentation

Next, let's install Context7, a popular MCP server designed to solve the problem of outdated knowledge in language models.

A significant challenge with LLMs is their knowledge cut-off date. The model's information is frozen at a certain point in time, leaving it unaware of recent developments, especially with new tools, libraries, or API updates. Instead of waiting for model providers to release newer versions, you can use an MCP server like Context7 to feed the latest documentation directly to the AI in real time.

Installation is similar to Firecrawl. Add the Context7 configuration to the mcpServers object in your settings.json file. Once added, you can activate it by appending use context7 to your prompts.
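As a sketch, the Context7 entry is even simpler, since no API key is required. The package name @upstash/context7-mcp comes from Context7's published configuration; verify it against their docs before use:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```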

A Practical Use Case: Installing Tailwind CSS

Imagine you have a Next.js application and want to add Tailwind CSS. If you ask the standard Gemini CLI for the installation commands, it might provide an outdated response based on its training data. A quick check of the official Tailwind CSS documentation would reveal that the required packages and commands have changed.

To get the correct, up-to-date instructions, you can cancel the initial request and re-run the prompt, this time adding use context7 at the end.

install tailwind css to this next js app use context7

Now, Gemini will leverage Context7 to fetch the latest information. It will ask for permission to run the Context7 tool to retrieve the most current documentation from its database. With the fresh information, Gemini will suggest the correct packages to install and the necessary file modifications to configure Tailwind CSS. Simply approve the steps, and the tool will handle the setup. This is a perfect example of how Context7 keeps your AI assistant current.

Note: In some cases, minor adjustments might be needed. For instance, if a dependency like postcss is missed, you can quickly install it manually with npm install postcss.
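For context, the current Tailwind CSS documentation (v4 at the time of writing) describes a PostCSS-based setup along these lines. Treat this as an illustrative sketch and defer to whatever Context7 retrieves, since these commands have changed between major versions:

```shell
# Install Tailwind CSS v4 with its PostCSS plugin (verify against tailwindcss.com)
npm install tailwindcss @tailwindcss/postcss postcss
```

After installing, the docs have you register "@tailwindcss/postcss" as a plugin in postcss.config.mjs and add @import "tailwindcss"; to your global stylesheet.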

Shared Configuration with Gemini Code Assist

It's important to note that the MCP configuration in your settings.json is shared with the Gemini Code Assist extension for VS Code. If you open the extension and run the /mcp command, you will see the same servers available, providing a consistent experience across both tools.

Final Thoughts

In summary, using MCP servers in Gemini CLI involves a few key steps:

  1. Locate the settings.json file in your ~/.gemini directory.
  2. Add the mcpServers configuration to this file.
  3. Populate it with the specific JSON configurations for each MCP you want to use.

You can add numerous MCPs, and Gemini will intelligently use them when it deems them necessary or when you explicitly invoke them. While many configurations are simple copy-paste operations, some, like Firecrawl, require additional steps like adding an API key. Always consult the official documentation for each MCP for specific installation instructions.

