This article provides a comprehensive walkthrough for setting up OpenClow (the new name for Clawdbot) on a Proxmox server. We will configure it to run powerful, local AI models using Ollama, creating a private and efficient AI assistant.
The name change from Clawdbot to OpenClow occurred after Anthropic raised concerns about brand confusion. Now, let’s dive into the technical details.
Prerequisites
To follow this guide, you will need a few things:
- A Proxmox Server: This setup requires a running Proxmox environment. If you’re new to Proxmox, there are many resources available to help you get started with installation on an old computer or a dedicated server.
- Ollama: You must have the latest version of Ollama installed and running on a machine in your network.
- An AI Model: A model should be downloaded within Ollama. For this tutorial, we’ll use a model like GPT-OSS, which is available through Ollama.
- Sufficient Hardware: To run a model like GPT-OSS effectively, a GPU with at least 16GB of VRAM is highly recommended.
Step 1: Network Isolation with VLANs (Optional)
For enhanced security, it’s a good practice to isolate the virtual machine that will run OpenClow from your main local network. This can be achieved using VLANs. This step is optional, and you can proceed without it if your network hardware doesn’t support VLANs or if you’re not familiar with the concept.
The core idea is to create a separate, virtual network that can only access the internet and specific internal services you explicitly allow, preventing it from seeing other devices like your computer or TV.
Example using UniFi:
- Create a New Virtual Network: In your UniFi controller, create a new network and name it something descriptive, like `AI-Network`.
- Assign a VLAN ID: Give it a unique ID, for instance, `2`.
- Isolate the Network: In the manual settings, enable the “Isolate Network” option. This is the crucial step that separates it from your primary LAN.
- Enable Internet Access: Ensure the network has an internet connection.
- Configure DHCP: Set up a DHCP server for this VLAN to assign IP addresses automatically.
- Tag the Proxmox Port: In the port manager, ensure the physical switch port connected to your Proxmox server is configured to allow traffic from the new VLAN. In UniFi, this is often done by setting the port profile to allow all VLANs.
Step 2: Configuring Proxmox and Creating the Container
With the network prepared, we can move to Proxmox.
First, you need to make your Proxmox host aware of the VLAN.
- Navigate to your Proxmox host’s `System > Network` settings.
- Select your primary network interface (e.g., `vmbr0`).
- Click `Edit` and check the “VLAN aware” box.
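For reference, the “VLAN aware” checkbox simply adds two lines to the bridge definition in `/etc/network/interfaces` on the Proxmox host. A VLAN-aware `vmbr0` looks roughly like this (the NIC name `enp1s0` and the addresses are placeholders for your own values):

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```

The last two lines are the relevant ones: they let the bridge carry tagged traffic for VLAN IDs 2 through 4094.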
Next, we’ll create an LXC container, which is more lightweight than a full VM because it shares the host’s OS kernel.
- Click “Create CT” to start the container creation wizard.
- General:
  - Hostname: `openclow`
  - Enable “Unprivileged container”.
  - Set a secure `Password` for the root user.
- Template: Select an operating system template, such as `Ubuntu 24.04`.
- Disks: A disk size of `20GB` or more is a good starting point.
- CPU: `2` cores are sufficient.
- Memory: While `1GB` is the minimum recommendation, `2GB` provides a better experience.
- Network:
  - If not using a VLAN: Choose your main bridge (e.g., `vmbr0`).
  - If using a VLAN: Enter the VLAN Tag you configured earlier (e.g., `2`).
  - IPv4: Set a static IP. For our example VLAN, this would be `192.168.2.44/24`.
  - Gateway: Set the gateway for the VLAN, e.g., `192.168.2.1`.
- DNS: Leave the DNS settings as they are or use public DNS servers like `8.8.8.8` and `8.8.4.4`.
- Confirm: Review the summary and check “Start after created”, then click `Finish`.
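If you prefer the command line, the same container can be created in one shot with `pct` from the Proxmox host shell. This is a sketch of the wizard choices above; the VM ID `200`, the storage names (`local`, `local-lvm`), and the exact template filename are assumptions to adapt to your setup:

```
# Create the container non-interactively (run on the Proxmox host).
# VM ID, storage names, and the template filename are assumptions;
# adjust them to match your environment.
pct create 200 local:vztmpl/ubuntu-24.04-standard_24.04-2_amd64.tar.zst \
  --hostname openclow \
  --unprivileged 1 \
  --cores 2 \
  --memory 2048 \
  --rootfs local-lvm:20 \
  --net0 name=eth0,bridge=vmbr0,tag=2,ip=192.168.2.44/24,gw=192.168.2.1 \
  --start 1
```

Drop `tag=2` from the `--net0` line if you are not using a VLAN.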
Step 3: Initial Container Setup
Once the container is running, open its console from the Proxmox UI.
- Log in as `root` with the password you set.
- Verify internet connectivity:

  ```shell
  ping 8.8.8.8
  ```

- If you used a VLAN, confirm its isolation by trying to ping a device on your main network. The ping should fail, proving the container is properly isolated.

  ```shell
  # This should fail if isolation is working
  ping 192.168.1.100
  ```

- Finally, update the system packages:

  ```shell
  apt update && apt upgrade -y
  ```
Step 4: Installing OpenClow
Now we’ll install the OpenClow software inside the container.
- The installation script requires `curl`, which may not be installed. Install it first:

  ```shell
  apt install curl -y
  ```

- Run the official OpenClow installation script. You can find the most up-to-date command on their official website. It will look similar to this:

  ```shell
  bash <(curl -sSL https://openclow.com/install.sh)
  ```

- The script will guide you through an interactive setup:
  - Acknowledge the warning about running a powerful tool by typing `yes`.
  - For `Onboarding Mode`, choose `quickstart`.
  - When asked to select a model, choose `skip for now`. We will configure our local model manually.
  - For the connection method, select `whatsapp`.
  - A QR code will appear in the terminal. Scan it using the “Linked Devices” feature in WhatsApp on your phone.
  - After scanning, you’ll be prompted to enter your phone number.
  - When asked to configure skills, select `no` for now.
  - Enable the bot and the command logger when prompted.
Step 5: Connecting OpenClow to Your Local Ollama Instance
With OpenClow installed, we need to tell it where to find our Ollama server. This is done by editing its main configuration file.
- Open the configuration file using a text editor like `nano`. The file is typically located in the user’s home directory.

  ```shell
  nano ~/.config/openclow/config.yaml
  ```

- Delete the entire contents of the file and replace it with the configuration below. This ensures a clean setup. The key is to add the `models` section pointing to your Ollama server.

  ```yaml
  # Paste the default configuration content here,
  # and add or modify the models section at the end.
  # ... (rest of the configuration)
  models:
    - provider: ollama
      url: http://192.168.7.7:11434
  ```

  Note: Replace `192.168.7.7` with the actual IP address of the machine running your Ollama instance.

- Save the file and exit the editor (Ctrl+X, then Y, then Enter in nano).
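If you prefer scripting the edit instead of using nano, the `models` section can be appended from the shell. This is a sketch using the path and keys shown in this guide; replace the IP with your own Ollama server's address:

```shell
# Append the models section to OpenClow's config file non-interactively.
# The path and the YAML keys follow the example in this guide.
CONF="$HOME/.config/openclow/config.yaml"
mkdir -p "$(dirname "$CONF")"
cat >> "$CONF" <<'EOF'
models:
  - provider: ollama
    url: http://192.168.7.7:11434
EOF
```

The single-quoted `'EOF'` delimiter prevents the shell from expanding anything inside the here-document, so the YAML lands in the file exactly as written.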
Step 6: Creating a Firewall Rule (Optional)
If you isolated OpenClow in a separate VLAN, you must create a specific firewall rule to allow it to communicate with the Ollama server.
In your router/firewall settings (e.g., UniFi), create a new rule with the following logic:
- Action: Allow
- Source: The `AI-Network` VLAN.
- Destination: The IP address of your Ollama server (e.g., `192.168.7.7`).
- Port: The Ollama port, `11434`.
This rule creates a pinhole in the firewall, allowing only the necessary traffic to pass while keeping the networks otherwise isolated.
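If your router runs a Linux-based firewall with nftables instead of UniFi, an equivalent pinhole might look roughly like this. It is only a sketch: the table and chain names (`inet filter`, `forward`) and the VLAN interface name `vlan2` are assumptions that depend on your existing ruleset:

```
# Allow the AI VLAN to reach only the Ollama server on its API port.
nft add rule inet filter forward \
  iifname "vlan2" ip daddr 192.168.7.7 tcp dport 11434 accept
```

The rule matches traffic entering from the isolated VLAN, destined for the Ollama host on port 11434, and accepts only that; everything else remains subject to your isolation rules.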
Step 7: Verification and Testing
Let’s ensure everything is working correctly.
- From the OpenClow container, test the connection to the Ollama server using `telnet`. You may need to install it first (`apt install telnet`).

  ```shell
  telnet 192.168.7.7 11434
  ```

  If successful, you will see a “Connected” message.

- Restart the OpenClow gateway to apply the new configuration.

  ```shell
  openclow gateway stop
  openclow gateway start
  ```

- You can monitor the bot’s activity using its terminal user interface (TUI).

  ```shell
  openclow tui
  ```

- Now, send a message from your WhatsApp to the linked number. You should see the message appear in the TUI. If you ask it a question, it will now process it using your local Ollama model.
For example, if you ask it to search for something, it might respond that it doesn’t have search capabilities yet. This is expected and confirms the bot is running and connected to your model, but simply lacks the specific skill for web searching.
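If the bot stays silent, it can help to rule out Ollama itself by sending a prompt directly to its REST API from the container. The IP and the model name below are just this guide's examples; substitute your own:

```
curl http://192.168.7.7:11434/api/generate \
  -d '{"model": "gpt-oss", "prompt": "Reply with one word: ready", "stream": false}'
```

A JSON response here confirms the model is loaded and answering. If this works but the bot still does not reply, the problem lies in the OpenClow configuration rather than in Ollama or the network.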
Conclusion and Next Steps
Congratulations! You now have a fully functional, self-hosted AI assistant running on your Proxmox server, connected to your local Ollama instance, and accessible via WhatsApp. This setup gives you complete control and privacy over your AI interactions.
In future articles, we will explore how to enhance this bot by adding powerful skills for web searching, browsing the internet, reading your emails, and even creating complex, automated workflows that combine multiple skills to perform tasks for you.