Running Local AI Models with OpenClow on Proxmox and Ollama


10xTeam · February 01, 2026 · 8 min read

This article provides a comprehensive walkthrough for setting up OpenClow (the new name for Clawdbot) on a Proxmox server. We will configure it to run powerful, local AI models using Ollama, creating a private and efficient AI assistant.

The name change from Clawdbot to OpenClow occurred after Anthropic raised concerns about brand confusion. Now, let’s dive into the technical details.

Prerequisites

To follow this guide, you will need a few things:

  • A Proxmox Server: This setup requires a running Proxmox environment. If you’re new to Proxmox, there are many resources available to help you get started with installation on an old computer or a dedicated server.
  • Ollama: You must have the latest version of Ollama installed and running on a machine in your network.
  • An AI Model: A model should be downloaded within Ollama. For this tutorial, we’ll use gpt-oss, which is available through Ollama.
  • Sufficient Hardware: To run a model like gpt-oss effectively, a GPU with at least 16GB of VRAM is highly recommended.
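
As a quick sanity check before proceeding, you can pull and list the model, then hit the HTTP API that OpenClow will later use. The port is Ollama’s default; the model tag is an example, so substitute whatever `ollama list` shows on your machine:

```shell
# Assumes Ollama's default API port (11434); gpt-oss:20b is an example
# model tag — check `ollama list` for what you actually have installed.
OLLAMA_HOST=localhost
OLLAMA_PORT=11434
MODEL=gpt-oss:20b

if command -v ollama >/dev/null; then
  ollama pull "$MODEL"     # one-time download (large)
  ollama list              # confirm the model shows up
  # The same HTTP API OpenClow will talk to:
  curl -s "http://${OLLAMA_HOST}:${OLLAMA_PORT}/api/tags"
else
  echo "ollama not found on this machine"
fi
```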

Step 1: Network Isolation with VLANs (Optional)

For enhanced security, it’s a good practice to isolate the virtual machine that will run OpenClow from your main local network. This can be achieved using VLANs. This step is optional, and you can proceed without it if your network hardware doesn’t support VLANs or if you’re not familiar with the concept.

The core idea is to create a separate, virtual network that can only access the internet and specific internal services you explicitly allow, preventing it from seeing other devices like your computer or TV.

Example using UniFi:

  1. Create a New Virtual Network: In your UniFi controller, create a new network and name it something descriptive, like AI-Network.
  2. Assign a VLAN ID: Give it a unique ID, for instance, 2.
  3. Isolate the Network: In the manual settings, enable the “Isolate Network” option. This is the crucial step that separates it from your primary LAN.
  4. Enable Internet Access: Ensure the network has an internet connection.
  5. Configure DHCP: Set up a DHCP server for this VLAN to assign IP addresses automatically.
  6. Tag the Proxmox Port: In the port manager, ensure the physical switch port connected to your Proxmox server is configured to allow traffic from the new VLAN. In UniFi, this is often done by setting the port profile to allow all VLANs.
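
If you want to confirm the tagging works before creating any containers, you can bring up a temporary tagged sub-interface on the Proxmox host and ping the VLAN gateway. This is a sketch using the example names and addresses from this guide (vmbr0, VLAN 2, gateway 192.168.2.1); it requires root:

```shell
# Temporary VLAN connectivity check from the Proxmox host shell.
BRIDGE=vmbr0
VLAN_ID=2
VLAN_IF="${BRIDGE}.${VLAN_ID}"

if ip link show "$BRIDGE" >/dev/null 2>&1; then
  ip link add link "$BRIDGE" name "$VLAN_IF" type vlan id "$VLAN_ID"
  ip addr add 192.168.2.10/24 dev "$VLAN_IF"
  ip link set "$VLAN_IF" up
  # Should answer if the switch port actually carries VLAN 2:
  ping -c 3 192.168.2.1 || echo "no reply from VLAN gateway"
  ip link del "$VLAN_IF"   # clean up the temporary interface
else
  echo "bridge $BRIDGE not present on this machine"
fi
```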

Step 2: Configuring Proxmox and Creating the Container

With the network prepared, we can move to Proxmox.

First, you need to make your Proxmox host aware of the VLAN.

  1. Navigate to your Proxmox host’s System > Network settings.
  2. Select your primary network interface (e.g., vmbr0).
  3. Click Edit and check the “VLAN aware” box.
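
Under the hood, that checkbox corresponds to a bridge stanza in /etc/network/interfaces similar to the following. The eno1 port name and the addresses here are examples from a typical setup; yours will differ:

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```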

Next, we’ll create an LXC container, which is more lightweight than a full VM because it shares the host’s OS kernel.

  1. Click “Create CT” to start the container creation wizard.
  2. General:
    • Hostname: openclow
    • Enable “Unprivileged container”.
    • Set a secure Password for the root user.
  3. Template: Select an operating system template, such as Ubuntu 24.04.
  4. Disks: A disk size of 20GB or more is a good starting point.
  5. CPU: 2 cores are sufficient.
  6. Memory: While 1GB is the minimum recommendation, 2GB provides a better experience.
  7. Network:
    • If not using a VLAN: Choose your main bridge (e.g., vmbr0).
    • If using a VLAN: Enter the VLAN Tag you configured earlier (e.g., 2).
    • IPv4: Set a static IP. For our example VLAN, this would be 192.168.2.44/24.
    • Gateway: Set the gateway for the VLAN, e.g., 192.168.2.1.
  8. DNS: Leave the DNS settings as they are or use public DNS servers like 8.8.8.8 and 8.8.4.4.
  9. Confirm: Review the summary and check “Start after created”, then click Finish.
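
For reference, the same container can be created from the Proxmox host shell with pct. This is a sketch using the wizard’s example values; the template filename is illustrative (list yours with `pveam list local`), and local-lvm is assumed as the storage:

```shell
# Hypothetical CLI equivalent of the wizard above; run on the Proxmox host.
VMID=200
TEMPLATE="local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst"

if command -v pct >/dev/null; then
  pct create "$VMID" "$TEMPLATE" \
    --hostname openclow \
    --unprivileged 1 \
    --cores 2 \
    --memory 2048 \
    --rootfs local-lvm:20 \
    --net0 name=eth0,bridge=vmbr0,tag=2,ip=192.168.2.44/24,gw=192.168.2.1 \
    --nameserver 8.8.8.8 \
    --start 1
else
  echo "pct not found: run this on the Proxmox host"
fi
```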

Step 3: Initial Container Setup

Once the container is running, open its console from the Proxmox UI.

  1. Log in as root with the password you set.
  2. Verify internet connectivity:
    ping 8.8.8.8
    
  3. If you used a VLAN, confirm its isolation by trying to ping a device on your main network. The ping should fail, proving the container is properly isolated.
    # This should fail if isolation is working
    ping 192.168.1.100 
    
  4. Finally, update the system packages:
    apt update && apt upgrade -y
    

Step 4: Installing OpenClow

Now we’ll install the OpenClow software inside the container.

  1. The installation script requires curl, which may not be installed. Install it first:
    apt install curl -y
    
  2. Run the official OpenClow installation script. You can find the most up-to-date command on their official website. It will look similar to this:
    bash <(curl -sSL https://openclow.com/install.sh)
    
  3. The script will guide you through an interactive setup:
    • Acknowledge the warning about running a powerful tool by typing yes.
    • For Onboarding Mode, choose quickstart.
    • When asked to select a model, choose skip for now. We will configure our local model manually.
    • For the connection method, select whatsapp.
    • A QR code will appear in the terminal. Scan it using the “Linked Devices” feature in WhatsApp on your phone.
    • After scanning, you’ll be prompted to enter your phone number.
    • When asked to configure skills, select no for now.
    • Enable the bot and the command logger when prompted.

Step 5: Connecting OpenClow to Your Local Ollama Instance

With OpenClow installed, we need to tell it where to find our Ollama server. This is done by editing its main configuration file.

  1. Open the configuration file using a text editor like nano. The file is typically located in the user’s home directory.
    nano ~/.config/openclow/config.yaml
    
  2. Replace the file’s contents with your default configuration, then add the models section shown below at the end. This ensures a clean setup; the key addition is the models entry pointing at your Ollama server.

    # Paste the default configuration content here,
    # and add or modify the models section at the end.
        
    # ... (rest of the configuration)
    
    models:
      - provider: ollama
        url: http://192.168.7.7:11434
    

    Note: Replace 192.168.7.7 with the actual IP address of the machine running your Ollama instance.

  3. Save the file and exit the editor (Ctrl+X, then Y, then Enter in nano).
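A quick way to confirm the new section actually landed in the file (the path and keys match the ones used above):

```shell
# Print the models section from the OpenClow config, if present.
CFG="$HOME/.config/openclow/config.yaml"

if [ -f "$CFG" ]; then
  grep -n -A 2 "provider: ollama" "$CFG"
else
  echo "no config at $CFG"
fi
```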

Step 6: Creating a Firewall Rule (Optional)

If you isolated OpenClow in a separate VLAN, you must create a specific firewall rule to allow it to communicate with the Ollama server.

In your router/firewall settings (e.g., UniFi), create a new rule with the following logic:

  • Action: Allow
  • Source: The AI-Network VLAN.
  • Destination: The IP address of your Ollama server (e.g., 192.168.7.7).
  • Port: The Ollama port, 11434.

This rule creates a pinhole in the firewall, allowing only the necessary traffic to pass while keeping the networks otherwise isolated.
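
For readers running a plain Linux firewall instead of UniFi, the same pinhole can be sketched as an nftables rule. UniFi manages its own rule set, so this is only an illustration of the logic; the table and chain names assume a common inet filter/forward layout, and the subnet matches the example VLAN from this guide:

```shell
# Illustrative nftables pinhole: AI-Network VLAN -> Ollama server only.
SRC_NET=192.168.2.0/24    # the AI-Network VLAN
OLLAMA_IP=192.168.7.7
OLLAMA_PORT=11434

if command -v nft >/dev/null && nft list table inet filter >/dev/null 2>&1; then
  nft add rule inet filter forward \
    ip saddr "$SRC_NET" ip daddr "$OLLAMA_IP" \
    tcp dport "$OLLAMA_PORT" accept
else
  echo "nft (or the inet filter table) not available here"
fi
```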

Step 7: Verification and Testing

Let’s ensure everything is working correctly.

  1. From the OpenClow container, test the connection to the Ollama server using telnet. You may need to install it first (apt install telnet).
    telnet 192.168.7.7 11434
    

    If successful, you will see a “Connected” message.

  2. Restart the OpenClow gateway to apply the new configuration.
    openclow gateway stop
    openclow gateway start
    
  3. You can monitor the bot’s activity using its terminal user interface (TUI).
    openclow tui
    
  4. Now, send a message from your WhatsApp to the linked number. You should see the message appear in the TUI. If you ask it a question, it will now process it using your local Ollama model.

    For example, if you ask it to search for something, it might respond that it doesn’t have search capabilities yet. This is expected and confirms the bot is running and connected to your model, but simply lacks the specific skill for web searching.
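
If you prefer to script these checks, the verification steps above can be condensed into a short shell snippet: a telnet-free port probe using bash’s built-in /dev/tcp, and a direct call to the Ollama generate API that bypasses OpenClow entirely. The IP and model tag are the example values used throughout this guide:

```shell
# 1) Port probe without telnet, via bash's /dev/tcp.
port_open() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open 192.168.7.7 11434; then
  echo "Ollama port reachable"
else
  echo "cannot reach Ollama on 11434"
fi

# 2) Ask the model directly through Ollama's generate endpoint,
#    confirming the model itself answers independently of OpenClow.
curl -s --max-time 10 http://192.168.7.7:11434/api/generate \
  -d '{"model": "gpt-oss:20b", "prompt": "Say hello", "stream": false}' \
  || echo "could not reach Ollama"
```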

Conclusion and Next Steps

Congratulations! You now have a fully functional, self-hosted AI assistant running on your Proxmox server, connected to your local Ollama instance, and accessible via WhatsApp. This setup gives you complete control and privacy over your AI interactions.

In future articles, we will explore how to enhance this bot by adding powerful skills for web searching, browsing the internet, reading your emails, and even creating complex, automated workflows that combine multiple skills to perform tasks for you.

