OpenClaw (Clawdbot) & Local LLMs: A Full Test & Setup Guide with LM Studio


10xTeam · January 26, 2026 · 11 min read

Note: The tool mentioned in this article, formerly known by various names including Clawdbot, has been rebranded to OpenClaw. This article has been updated to reflect the new name and installation instructions. For the latest information, please visit the official website: openclaw.ai.

OpenClaw (formerly Clawdbot) has generated significant buzz. This article takes a different approach to exploring its capabilities. Instead of relying on costly APIs, we’ll dive into setting it up with a powerful local model, GPT-OSS 120B, running on a dedicated machine.

The first part of this guide documents the initial experience—the setup, the discoveries, and the impressive results. The second half provides a detailed tutorial on how to get OpenClaw working on your own Linux system with LM Studio. The entire process is powered by offline, local AI models, demonstrating that you can harness OpenClaw’s power without ballooning API costs.

The Power of Local Browser Automation

Running locally with GPT-OSS 120B on a dedicated machine, OpenClaw proves to be highly proficient at using tools, especially for browser automation. This has been a topic of great interest for a long time, and the results here are compelling.

To start, a simple prompt was given:

"use the browser and search for me and tell me who he is."

Instantly, a Chrome instance dedicated to OpenClaw opened and ran the search on its own. The system then worked through the results to pull out the relevant details for its reply. The entire operation happens in the background, so you can minimize the browser and return to the main OpenClaw web UI.

While many have explored this tool using chat apps like Telegram or WhatsApp, this article focuses on a more technical demonstration. The goal is to showcase its performance with a local model, which is an exciting prospect for developers and tech enthusiasts.

Initial Hurdles and Troubleshooting

Using local models comes with trade-offs. You save on API costs, but performance can be slower than with hosted API services. During testing, the server stopped responding and the model froze. This is a common challenge with local setups.

To diagnose the issue, the logs were examined.

[INFO] Request received
[DEBUG] Processing prompt...
[DEBUG] Using browser tool...
[ERROR] Server stopped responding
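
When the model freezes like this, one quick sanity check is to hit the LM Studio endpoint directly, assuming the default port used later in this guide; if a plain request also hangs, the problem is the backend server rather than OpenClaw itself:

# Times out after 15 seconds instead of hanging forever.
curl -m 15 http://localhost:1234/v1/models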

Pasting these logs into another instance of GPT-OSS 120B for analysis yielded an interesting, though not entirely accurate, response: “Snippet you posted shows a completely successful request cycle.”

This wasn’t quite right. A simpler task was attempted:

"Just find the Apple stock price and tell me it."

The browser quickly opened and searched for “Apple stock price.” This is the magic of a local model driving the action. GPT-OSS is not a vision model, however: it has to work from the page content the browser tool reads back, not from taking and parsing screenshots. A vision model might handle some pages more smoothly, but this workflow is designed to function without vision capabilities.

The process hit another snag, getting stuck in a loop—an issue also observed with other models like GLM 4.7 Flash. This led to a full reset and a fresh setup process.

A Critical Note on Security

Before proceeding to the setup guide, a disclaimer is essential. The hype surrounding this tool can be reminiscent of crypto pump-and-dumps, and it’s wise to be skeptical. While this publication is not part of any paid campaign, it’s crucial to cover popular tech responsibly.

There are extreme security risks involved.

The system used for this test does not contain any important personal information and is isolated from the main network. The phone used for testing is a secondary device with a separate, non-personal number.

Be extremely conscientious when installing and using this tool. While it’s open-source, 99% of users will not audit the code. A tool like Codex was used to browse the repository for potential red flags like botnets or hidden malicious code. The conclusion was that there are no inherent risks aside from misuse.

Carelessly setting this up on your primary computer could expose API keys or other sensitive information through prompt injection and data exfiltration.

TL;DR: Be very careful. This tool centralizes many existing capabilities, making them accessible to a wider audience. That’s exciting, but it also opens the door to security vulnerabilities if not handled with caution. Don’t let a workflow update search your computer for crypto keys.

Step-by-Step Guide: Setting Up OpenClaw with LM Studio

This tutorial will guide you through setting up OpenClaw with a local LLM via LM Studio.

1. Initial Installation

First, install OpenClaw. The official documentation provides the necessary commands. If you are unfamiliar with this process, exercise caution. On a non-sensitive device, you can ask an LLM for guidance. On a primary device, if this looks new and concerning, it’s best not to proceed.

With that warning in mind, perform the installation. For this test, GPT-OSS 20B was also downloaded; it is the model used in the LM Studio tutorial later in this guide.

curl -fsSL https://openclaw.ai/install.sh | bash
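
If piping a remote script straight into bash makes you uneasy (and the security section above suggests it should), a more cautious variant is to download the installer first and skim it before running it. This is generic shell practice, not an official installation path:

# Download the installer, read what it is about to do, then run it deliberately.
curl -fsSL https://openclaw.ai/install.sh -o openclaw-install.sh
less openclaw-install.sh
bash openclaw-install.sh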

After the installation, run the setup command:

openclaw setup

This brings up the setup screen. You’ll be greeted with the disclaimer. Acknowledge it to continue.

2. Configuration Trick for LM Studio

This is where things can get frustrating. To get it working with LM Studio, a specific path was followed.

  1. When prompted for a model, scroll down and select “skip for now”.
  2. Next, select “all models”.
  3. Finally, choose “keep current model”.

This sequence allows you to later configure a custom local model from the openclaw.json file. After this, you can set up channels like WhatsApp. For now, we’ll skip those to focus on the LLM connection.

3. Solving the Context Growth Issue

During testing, a massive slowdown appeared after hooking up WhatsApp. The session context had grown to over 64,000 tokens, so every new message forced the model to re-process that entire payload. At local prompt-processing speeds, that can add minutes of latency before a reply even begins.

Here’s how to fix it:

  1. Navigate to your home folder and find the .openclaw directory.
  2. Inside, go to agents/main/sessions/.
  3. You will see multiple files with names like [random_string].jsonl. These are your session histories.
  4. Delete all of these .jsonl files.
  5. Restart the entire OpenClaw service.

After deleting these files, the prompt processing speed became snappy again. This is a crucial step for anyone using this locally.
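
If you prefer to do this from a terminal, here is a minimal bash sketch. It assumes the default ~/.openclaw layout described above and moves the session files into a backup folder rather than deleting them outright; this is plain shell housekeeping, not an OpenClaw command:

# Assumes the default ~/.openclaw layout; adjust the path if your install differs.
SESSIONS_DIR="$HOME/.openclaw/agents/main/sessions"
BACKUP_DIR="$HOME/.openclaw/sessions-backup-$(date +%Y%m%d-%H%M%S)"

# Move the old session histories aside instead of deleting them.
mkdir -p "$BACKUP_DIR"
mv "$SESSIONS_DIR"/*.jsonl "$BACKUP_DIR"/ 2>/dev/null

echo "Old sessions moved to $BACKUP_DIR. Now restart the OpenClaw service."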

4. Autonomous Control via WhatsApp

With the setup complete and troubleshooting handled, the true power of this system was unleashed. The goal was to control the browser from a phone, using a local AI model, without any API keys.

The first command:

"Use the browser tool, go to bjambboen.com and then give me a summary of what it finds."

Success. The phone was used to instruct OpenClaw to autonomously browse the web, extract information from a specific site, and send a summary back via WhatsApp. This demonstration, using a local model, is exactly what makes this setup so compelling.

Next, a more complex task:

"Send click send me a message."

This could have easily failed with a less capable model, but it worked. This is where things get dangerously powerful. The model navigated to the contact page and identified the form fields.

"I've opened the contact page. It contains a form with fields for your name, email, address, subject, and send an email button to send an inquiry."

This is hype-worthy, especially because it’s all running offline on a dedicated machine. The final test was to have it send a message.

"Fill out the form with [details] and then send it."

The model autonomously filled in the form fields, controlled entirely from the phone. Then, it sent the email.

Damn. This is cool. The hype is understandable.

This setup allows you to be out and about, controlling a machine to perform web tasks without touching it directly. It’s a truly agentic experience.

Of course, there are limits. When asked to perform a nefarious task, the model refused, which is a restriction built into GPT-OSS. It also failed to send a screenshot back through the chat, likely a limitation of the current configuration or model capability.

Tutorial: Connecting to a Local LLM with LM Studio

This light tutorial provides the essential information to connect OpenClaw to your local LLM using LM Studio’s server.

Prerequisites

  • You have LM Studio installed on your computer.
  • You have a powerful enough system to run the desired model. A 24 GB video card was used for this test.
  • You have Chrome installed.

Step 1: Install and Run Initial Setup

Follow the installation steps from the previous section. Run openclaw setup, accept the disclaimer, and use the “skip for now” -> “all models” -> “keep current model” sequence described earlier. Skip the channel and skill configurations for now. Finally, start the web UI.

Step 2: Modify the JSON Configuration

  1. In your home directory, find the .openclaw folder. (Ensure “Show Hidden Files” is enabled).
  2. Open openclaw.json in a text editor.
  3. This file is your setup profile. You will need to modify it to point to your LM Studio server.

Here is a template for the model provider configuration. You will add or modify this section in your openclaw.json.

{
  "modelProviders": {
    "lmstudio": {
      "label": "LM Studio",
      "provider": "openai",
      "url": "http://localhost:1234/v1",
      "apiKey": "lm-studio",
      "models": [
        "gpt-oss-20b"
      ],
      "defaultModel": "gpt-oss-20b"
    }
  },
  "defaultProvider": "lmstudio"
}

Note: The full, working openclaw.json will be provided in a GitHub Gist for simplicity.
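
After hand-editing the file, it is worth confirming that it is still valid JSON before relaunching anything, since a stray comma or missing brace is easy to introduce. One generic way to check, using Python rather than anything OpenClaw-specific:

# Prints the parsed config on success, or a parse error with a line number.
python3 -m json.tool ~/.openclaw/openclaw.json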

Step 3: Set Up LM Studio Server

  1. Open LM Studio and load your desired model (e.g., GPT-OSS 20B).
  2. Crucially, set a high context length. Local models need a large context window to function correctly with OpenClaw. Max it out if your system can handle it.
  3. Go to the server tab (the <-> icon) and start the server.
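
Before relaunching OpenClaw, it is worth confirming that the URL and model name you put in openclaw.json actually resolve. LM Studio's server exposes an OpenAI-compatible API, so two quick curl calls (assuming the default port 1234) show whether the server is reachable and whether the model identifier matches:

# List the models LM Studio is serving; the returned "id" values must match
# the "models" / "defaultModel" entries in openclaw.json.
curl http://localhost:1234/v1/models

# Send a one-off chat completion to confirm the loaded model responds.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-oss-20b", "messages": [{"role": "user", "content": "Reply with one short sentence."}]}'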

Step 4: Relaunch OpenClaw

Run openclaw setup one more time.

  1. Select “use existing values”. You should see your LM Studio configuration reflected.
  2. It will still make you go through the provider selection. Repeat the sequence: “skip for now” -> “all providers” -> “keep current model”.
  3. Restart the gateway service.

Now, when you send a message in the web UI, you should see verbose logs in your terminal indicating prompt processing, followed by a response from your local model.

Hooking Up the Browser

The browser interaction is one of the coolest features.

  1. Run the command to start the browser profile. This will automatically open a new Chrome instance.

    openclaw browser profile claude start
    
  2. Verify it’s running by checking the status.

    openclaw browser status
    
  3. To test it, you can have it navigate to a website.

    openclaw browser go https://news.ycombinator.com
    

    If it works, you’ll see the Chrome instance navigate to the page.

  4. Finally, you need to update your openclaw.json to give the model access to this browser tool. You can do this by asking an LLM (like Claude Opus 4.5) to help you merge the browser configuration into your existing JSON file.

After restarting the gateway and browser control server, you can instruct the model to use the browser. Be specific, especially with smaller models.

"Use the browser tool with profile claude to take a screenshot of google.com"

If successful, you’ll find a screenshot in your OpenClaw directory. This confirms that your local model is now configured to control the browser.

This setup is powerful, impressive, and opens up a world of possibilities for local AI agents. It’s a complex but doable process. Remember to keep security in mind, and don’t be afraid to use an LLM to help you navigate the configuration.


