Agentic-Seek: A Deep Dive into the Open-Source Alternative to Manis AI


10xTeam July 14, 2025 11 min read

When Manis AI launched, it drove home how much of what we do on our computers these interconnected AI agents can now automate. Human-computer interaction felt like it was starting to disappear, and in some cases that’s genuinely true. The use cases they’ve shown are impressive: if you don’t want to do something yourself, you just ask an AI agent to handle it, and it gets done. These agents have access to all sorts of tools, like a browser and the local file system.

Think about it: what do you really do on your computer? You browse, process information, write it down somewhere, or apply it somewhere else. That’s exactly what these AI agents are doing. People are asking them to perform real tasks like building tax policy visualization tools or writing code. You can also ask them to do your taxes if the right data is given to them.

The basic plan is $19 per month. It gives you 1,900 total credits a month and 300 refresh credits per day. Credit usage adds up fast. Making a simple chart can cost up to 200 credits, and that’s most of your daily quota gone in one go. Developing a web app costs 900 credits, something you can’t even do on the basic plan. So if all you want is an AI agent to perform tasks on your computer, Manis becomes pretty costly very quickly.

That’s what led me to an open-source alternative called Agentic-Seek. It basically does what Manis does, but it’s completely local. If you’ve got a powerful enough computer, you can run it entirely on your machine without paying anything. No subscriptions, no credit limits. This is the power of open source, and it’s super exciting.

In this article, I’ll show you how Agentic-Seek works. I’ll walk you through real examples of the agent in action, and if you want to try it yourself, I’ll show you exactly how to set it up. Don’t worry if you don’t have a powerful computer; you can still use Agentic-Seek with an API connection, which is still far more affordable than Manis.

How Agentic-Seek Works: A Demo

First, let me show you a demo they’ve shared to give you a glimpse of how the agent works.

In the demonstration, the agent is asked to research the Agentic-Seek project itself. It first needs to figure out what skills the project requires. Then it is given a zip file of candidate CVs, and its task is to find the candidates who best match the project requirements.

The workflow is broken down across multiple specialized agents, and the one driving everything here is the Planner Agent. (A minimal sketch of this delegation pattern follows the list below.)

  1. First, it decides to go to the GitHub repository to identify the Agentic-Seek project and determine the required skills. It reads the README and the project description to extract that data.
  2. For some reason, it also fetches a comparison between Agentic-Seek and Manis AI. Not entirely sure why it does that, but it’s part of its planning flow.
  3. After gathering the necessary information, the Planner Agent proceeds to extract the contents of the cv_candidate.zip file.
  4. Once extracted, it navigates into the folder and starts reading the candidate files one by one. You can see it going through different profiles.
  5. After processing the files, it compares the candidates’ skills against those required by the Agentic-Seek project.
  6. It eventually concludes that the best matches are Aisha Khan, Longch, and a few others, and it even ranks them by fit.

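To make that delegation pattern concrete, here is a minimal sketch of how a planner agent might split a request into sub-tasks and hand them to specialized worker agents. Every class and method name below is illustrative only; none of it is taken from the Agentic-Seek codebase.

    # Minimal sketch of a planner delegating to specialized agents.
    # All names here are illustrative, not Agentic-Seek's real API.
    from dataclasses import dataclass

    @dataclass
    class Task:
        description: str
        tool: str  # "browser" or "file" in this toy example

    class BrowserAgent:
        def run(self, task: Task) -> str:
            # The real system drives an actual browser here.
            return f"[browser] gathered: {task.description}"

    class FileAgent:
        def run(self, task: Task) -> str:
            # The real system reads and extracts local files here.
            return f"[file] processed: {task.description}"

    class PlannerAgent:
        def __init__(self) -> None:
            self.workers = {"browser": BrowserAgent(), "file": FileAgent()}

        def plan(self, request: str) -> list:
            # A real planner would ask an LLM to produce this breakdown.
            return [
                Task("read the project README to identify required skills", "browser"),
                Task("extract cv_candidate.zip and read each profile", "file"),
                Task("rank candidates against the required skills", "file"),
            ]

        def execute(self, request: str) -> list:
            return [self.workers[task.tool].run(task) for task in self.plan(request)]

    if __name__ == "__main__":
        for step in PlannerAgent().execute("Find the best candidates for Agentic-Seek"):
            print(step)

In the real system, the plan comes from an LLM rather than being hard-coded, and the workers drive an actual browser and the local file system.
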
Normally, this would require a human to search for the project, gather the skill requirements, and feed everything into a tool like ChatGPT. But this agent automates the entire process from start to finish, with no manual input beyond the initial request. And the best part? You don’t need to pay for anything like you do with Manis AI. If you have a powerful enough computer, it all runs locally, fully private and completely free. That’s what makes it so incredible.

A Personal Test

I’ve even got this set up here myself, and this is the prompt I gave it:

It needs to search online for popular sci-fi movies from 2024 and pick three that I should watch tonight.

It did exactly that. The Planner Agent came online, broke my request into smaller tasks, and, since I had asked it to save the results to a movie_night.txt file, handled that as well. While it was running, the browser view showed how it was browsing and searching the web.

And here’s the result. It saved the output inside the agent’s folder in my developer directory, automatically creating the text file there with the movies it picked. So it’s pretty awesome and works really well.

Timing Analysis

I also timed it. The movie request above, searching the web and saving the results to a text file, took 4 minutes end to end. Feeling that three sources weren’t enough, I asked it to search 10 sources instead; it did, and took about 8 minutes before giving me a report. Overall it’s a pretty flexible agent, and the timing is really good considering it’s visiting all these sites and gathering the data I asked for.

How to Install Agentic-Seek

If this looks worthwhile and you want to install it, here’s what to do.

  1. Go to the GitHub Page: From the project’s main website, navigate to the GitHub page, which has all the installation commands.

  2. Clone the Repository: Just copy the first command and paste it into your terminal. This clones the repository, navigates into the directory, and copies the example .env file to your actual .env file.

    git clone https://github.com/agentic-seek/agentic-seek.git && cd agentic-seek && cp .env.example .env
    
  3. Create a Virtual Environment: Next, you’re going to create a virtual environment for Python inside that folder. Paste this command to set up your Python environment.

    python3 -m venv venv && source venv/bin/activate
    

    The best thing about this agent is that it supports installation on Windows, macOS, and Linux, so you’re not limited by your operating system.

  4. Install Dependencies: These are the two install scripts that set up all the dependencies. Choose the one that matches your OS. For me, that’s macOS, so I’ll paste that one and let it run.

    For macOS/Linux:

    ./scripts/setup_mac.sh
    

    For Windows:

    ./scripts/setup_windows.sh
    

    Note: You might get an error when you run this install script, just like I did. The error will ask you to install Python version 3.10. You’ll need to install that specific version for everything to work. The method differs depending on your operating system, but it shouldn’t be too difficult. For macOS, you can install it using this command:

    brew install python@3.10
    

    Once that’s done, you should be good to go. After Python 3.10 is installed and you’ve run the installation script, everything should be set up, and you’ll be ready to run the agent.
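
    If you want to double-check before re-running the install script, you can verify that the new interpreter is on your PATH, and, if you originally created your virtual environment with a different Python version, recreate it with 3.10 (an optional precaution; the setup script may already handle this for you):

    python3.10 --version
    python3.10 -m venv venv && source venv/bin/activate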

Configuration

Let me guide you through the configuration before running the agent because that’s quite important, depending on how you plan to use it.

Open the project directory in any code editor you like. Inside the codebase, you’ll find a file called config.ini, which contains the settings the agent uses while running. You’ll need to change a few things depending on how you plan to use it.

Local LLM vs. API

You can either run the agent locally or use it with an external API.

If you want to run an LLM locally, head over to the GitHub repository, which lists all the details. To get good performance out of this agent, you need large models running locally; without them it’s really not going to be useful. Running 14B models through Ollama, whether DeepSeek or Qwen, won’t help much here because the performance won’t be great. You need at least a 32B model for it to work well, which realistically means an RTX 4090. Even the 14B models I mentioned need at least an RTX 3060, which is cheaper than the 4090 but still pretty expensive.
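
For illustration, if you do have the hardware and go the local route through something like Ollama, pulling a larger model looks roughly like this. The model tag is just an example; check the repository for the providers and model sizes it actually recommends.

    ollama pull qwen2.5:32b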

If setting this up locally isn’t an option for you, move on to the API setup. Right now, it supports APIs from OpenAI, DeepSeek, Hugging Face, Together AI, and Google. I was hoping Claude would work with it, but Claude support isn’t available yet.

Editing config.ini

Now, go ahead and change the values in the config.ini file.

  1. First, set IS_LOCAL to false. It’s set to true by default.
  2. Then, define the PROVIDER_NAME. Choose any provider from the list. I chose openai, so I set that as mine.
  3. Next, define the PROVIDER_MODEL. I recommend using gpt-4o instead of gpt-4o-mini; I tested both, and gpt-4o performed a lot better. If you have access to other providers, the performance may be even better. I didn’t have credits for DeepSeek or the others, so I just used gpt-4o. It worked well, and stronger models would only improve things further.

These are the main settings you’ll need to update.
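
Putting those three changes together, the relevant lines of config.ini end up looking something like this. Treat it as a sketch and mirror the exact key names and casing used in the file that ships with the repository.

    IS_LOCAL = false
    PROVIDER_NAME = openai
    PROVIDER_MODEL = gpt-4o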

Editing the .env File

Then, open your .env file. In that file, paste your API key, whether it’s from OpenAI, DeepSeek, or whichever provider you’re using.

OPENAI_API_KEY=your_api_key_here

Once that’s done, your configuration will be ready for basic use.

Optional Settings

There are also some optional settings in config.ini.

  • Headless Browser: With HEADLESS_BROWSER enabled, the browser window won’t actually open while the agent runs; it still does everything, just quietly in the background. Leave this set to true.
  • Voice Interaction: You can also enable SPEAK_MODE and LISTEN_MODE, which let the agent talk back and listen to your voice so you can have a real conversation with it. Turn both options to true and it’ll start working (see the snippet after this list).
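
For reference, with the optional features above switched on, the corresponding config.ini lines would look something like this, again assuming the key names match your copy of the file:

    HEADLESS_BROWSER = true
    SPEAK_MODE = true
    LISTEN_MODE = true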

Running the Agent

Before you start up the agent, there are a few things you need to know.

  1. Start Docker: You’ll need Docker up and running before the agent can start, because it’s containerized; it fetches the containers and sets them up automatically.
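
    A quick way to confirm the Docker daemon is actually running before you continue:

    docker info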

  2. Start Services: Once Docker is running, you can start the services. Head back to the GitHub repository and grab this command; it differs between macOS and Windows, and the macOS command also works on Linux, so just run that one if you’re on Linux.

    ./scripts/start_services_mac.sh
    

    This will start a few backend services as well as the front end. If you’ve opened a new terminal and your Python environment isn’t activated, make sure to run the activation command first.
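
    The activation command is the same one from the virtual environment step earlier:

    source venv/bin/activate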

  3. Start the Backend: After running that, you’re going to start the backend, which is handled by api.py. You need to run both the front end and back end in separate terminal windows. So, the services command is running in one terminal, and the Python API command will run in another.

    Note: One more gotcha: the backend won’t start if you just use python or python3. You have to explicitly run python3.10 api.py for it to work. Paste that in, and it’ll start the backend.

    python3.10 api.py
    
  4. Access the Agent: Then, go to this address in your browser: http://localhost:3000. You’ll find your agent there, ready to use.

That brings us to the end of this article. As always, thank you for reading.


