DEV Community

udiko

Posted on


How to Run OpenHands with a Local LLM Using LM Studio

This guide will walk you through how to run OpenHands locally and integrate it with LM Studio, a local language model interface. This setup lets you use OpenHands with an offline LLM, giving you a privacy-friendly and powerful AI development environment.

What is OpenHands?

OpenHands is an open-source platform for AI-powered software development agents. Through a chat-style UI, its agents can modify code, run commands, browse the web, and work through development tasks much like a human developer would.

What is LM Studio?

LM Studio is a desktop app that lets you run large language models (LLMs) locally on your machine using models in the GGUF format. It supports models like Mistral, Qwen, and Llama, and exposes an OpenAI-compatible API so other apps can use those models.
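To illustrate that OpenAI-compatible API, here is a small sketch of calling a locally loaded model with curl. It assumes the LM Studio server is running on its default port 1234 and that the model name matches one you have loaded; it prints a hint instead of failing if the server is not up:

```shell
# Call LM Studio's OpenAI-compatible chat completions endpoint.
# Assumes the local server is on port 1234 and the model name below
# matches a model currently loaded in LM Studio.
curl -s http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen2.5-coder-14b-instruct",
        "messages": [{"role": "user", "content": "Say hello in one word."}]
      }' \
  || echo "LM Studio server not reachable on port 1234"
```

Any client that speaks the OpenAI API format can point at this endpoint, which is exactly what we will configure OpenHands to do below.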

Why Run Both OpenHands and LM Studio Locally?

Running both tools locally offers multiple advantages, especially in privacy-sensitive or regulated environments:

  • Data Privacy & Security
    Your prompts, code, and data never leave your machine—essential when working with confidential or proprietary information.

  • Compliance with Security Policies
    Some organizations do not allow sending code, user data, or logs to external APIs like OpenAI or Anthropic. Local setup helps you stay compliant.

  • Offline Development
    You can work without internet access, which is helpful in restricted environments or when on the move.

  • Cost
    There's no pay-per-token usage.

  • Custom Model Control
    You can run models fine-tuned to your needs and switch between different LLMs as required.

Prerequisites

Make sure you have the following installed:

  • Docker Desktop
  • LM Studio (latest version recommended)
  • A modern GPU or strong CPU for running large models locally (depending on your chosen model). In this guide, I'm using a MacBook Pro with an M1 Max chip and 32GB of RAM.
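A quick way to confirm the Docker prerequisites are in place before continuing (these commands only print versions; if either fails, install or update Docker Desktop first):

```shell
# Verify Docker and the Compose v2 plugin are installed.
docker --version || echo "Docker not found - install Docker Desktop first"
docker compose version || echo "Compose v2 plugin not found"
```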

Step-by-Step Setup

1. Create the Docker Compose file for running OpenHands locally

Create a file named docker-compose.yml with the following content:

version: "3.8"

services:
  openhands-app:
    image: docker.all-hands.dev/all-hands-ai/openhands:0.32
    container_name: openhands-app
    environment:
      SANDBOX_RUNTIME_CONTAINER_IMAGE: docker.all-hands.dev/all-hands-ai/runtime:0.32-nikolaik
      LOG_ALL_EVENTS: "true"
    ports:
      - "3000:3000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ~/.openhands-state:/.openhands-state
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: unless-stopped
    pull_policy: always

2. Start OpenHands

Run the following command in your terminal from the same directory as the docker-compose.yml file:

docker compose up

After a few moments, OpenHands will be running locally.
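If you want to confirm the UI has come up before opening a browser, you can poll port 3000. This is just a convenience check: a 200 status means the app is serving, while 000 means nothing is listening yet (the container may still be starting):

```shell
# Check whether the OpenHands UI is answering on port 3000.
# curl writes "000" as the status code when the connection is refused.
status=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:3000 2>/dev/null)
echo "HTTP status: ${status:-000}"
```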

3. Open the OpenHands UI

Visit the following URL in your browser:

http://localhost:3000

Connect OpenHands to LM Studio

Once OpenHands is running, you’ll need to configure it to use LM Studio as your local LLM provider. Let’s install an LLM first.

4. Install a Compatible LLM in LM Studio

In LM Studio:

  1. Go to the "Models" tab.
  2. Install a model with good coding capabilities. For a Mac with 32 GB of RAM, a good choice is qwen2.5-coder-14b-instruct.
  3. Once installed, increase the context length to 16384 tokens in the model settings.
  4. Start the model and ensure it’s running on the OpenAI-compatible local server (default port is 1234).
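You can sanity-check that the server is up and the model is exposed by hitting the models endpoint (assuming the default port 1234; the command prints a hint instead of failing if the server is not reachable):

```shell
# List the models LM Studio's local server is currently exposing.
curl -sf http://localhost:1234/v1/models \
  || echo "Could not reach LM Studio on port 1234 - is the server started?"
```

The installed model should appear in the returned list before you move on to the OpenHands configuration.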

5. Configure the LLM Provider in OpenHands

In OpenHands:

  1. Click on your avatar or settings menu.
  2. Go to "LLM Providers".
  3. Click "Advanced".
  4. Custom Model: lm_studio/qwen2.5-coder-14b-instruct, or follow the LiteLLM LM Studio setup guide to configure it: LiteLLM - LM Studio Setup
  5. Base URL: http://host.docker.internal:1234/v1. The extra_hosts entry in the compose file maps host.docker.internal to your host machine, so this is how the OpenHands container reaches LM Studio. Your machine's LAN IP can also work here, but host.docker.internal is the more reliable option.
  6. Click Save.
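Since OpenHands runs inside Docker, it's worth verifying connectivity from inside the container rather than from your host shell. A hedged check, assuming the openhands-app container from the compose file above is running and that curl is available inside its image:

```shell
# From inside the OpenHands container, hit LM Studio on the host.
# host.docker.internal resolves to the host machine thanks to the
# extra_hosts entry in the compose file.
docker exec openhands-app \
  curl -s http://host.docker.internal:1234/v1/models \
  || echo "Could not reach LM Studio from inside the container"
```

If this prints the model list, OpenHands will be able to talk to LM Studio with the Base URL configured above.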

6. Authenticate with GitHub (If Using a Repo)

To pull a repo into OpenHands:

  • Go to the settings and connect your GitHub account.
  • Add a GitHub Personal Access Token with appropriate read access.

7. Create a Project in OpenHands

  1. Go back to the main dashboard.
  2. Click “New Project”.
  3. Select a repository (if applicable).
  4. Open the workspace and start building with your offline LLM!

Final Thoughts

  • OpenHands is in Early Stages:
    OpenHands is still in its early development stages. While it's a promising tool, don’t expect a fully polished experience just yet. You may encounter occasional bugs or limitations as new features are added and refined.

  • Local LLM Limitations:
    While running an LLM locally gives you more privacy and control, the performance and sophistication of locally-run models may not match cloud-based models from providers like OpenAI or Anthropic. For tasks requiring advanced reasoning or up-to-date knowledge, cloud-based services might still be the better choice.

  • Start Small with Clear Tasks:
    It's best to focus on small, well-defined tasks when working with OpenHands and local LLMs. This helps avoid overwhelming the system and ensures that you can iterate and test your workflows more efficiently.

Top comments (1)

tedoge

Do not use a raw local IP as the Base URL in Step 5. After so many hours of hunting down IPs, what actually works is host.docker.internal:1234/v1 (you can verify it with the /v1/models endpoint). That is what reaches LM Studio on your main host PC. You're welcome, @JA_RON!