Apu Chakraborty

The Ultimate Guide to Running n8n with Ollama LLM Locally Using Docker

Want to automate tasks with AI locally, without relying on the cloud, paying for API calls, or risking data leakage?

This guide shows how to run n8n, a powerful open-source workflow automation tool, together with Ollama, a fast local LLM runtime that serves models such as Llama, Mistral, and others, all through Docker on your own machine.

Yes, you can build a fully local AI automation stack at zero cost. Let's dive in.

📁 1. Create a folder structure

mkdir n8n-ollama
cd n8n-ollama
touch docker-compose.yml

🧾 2. Create docker-compose.yml

Paste the following into docker-compose.yml:


services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"               # Ollama's HTTP API
    container_name: ollama
    networks:
      - n8n-network
    volumes:
      - ollama_data:/root/.ollama   # persist downloaded models across restarts

  n8n:
    image: n8nio/n8n
    container_name: n8n
    ports:
      - "5678:5678"                 # n8n editor UI
    networks:
      - n8n-network
    environment:
      - N8N_HOST=localhost
      - N8N_PORT=5678
      - N8N_EDITOR_BASE_URL=http://localhost:5678
      - WEBHOOK_URL=http://localhost:5678
      # let Function nodes require external npm modules ("*" is permissive; fine for a local setup)
      - NODE_FUNCTION_ALLOW_EXTERNAL=*
    volumes:
      - n8n_data:/home/node/.n8n    # persist workflows and credentials

networks:
  n8n-network:

volumes:
  ollama_data:
  n8n_data:

This puts both containers in the same Docker network so n8n can reach ollama using the hostname ollama.
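
Once both services are up (next step), you can sanity-check that the hostname resolves from inside the n8n container. This assumes the official n8n image ships BusyBox wget, which the Alpine-based image does:

docker exec -it n8n wget -qO- http://ollama:11434
# expected output: Ollama is running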

▶️ 3. Start both services

docker-compose up -d

You should see output like this:

[+] Running 2/2
 ✔ Container ollama  Started                                                                                        0.6s 
 ✔ Container n8n     Started           

✅ Verify the containers

#verify containers
docker ps
CONTAINER ID   IMAGE           COMMAND                  CREATED      STATUS         PORTS                                           NAMES
0d99d7a06ff9   n8nio/n8n       "tini -- /docker-ent…"   3 days ago   Up 2 minutes   0.0.0.0:5678->5678/tcp, :::5678->5678/tcp       n8n
c5eabfa39b70   ollama/ollama   "/bin/ollama serve"      3 days ago   Up 2 minutes   0.0.0.0:11434->11434/tcp, :::11434->11434/tcp   ollama

You should see both ollama and n8n containers running.
n8n - http://localhost:5678
ollama - http://localhost:11434
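
You can also hit both endpoints from your host with plain curl. Ollama's root endpoint answers with a short status string, and n8n serves its editor UI:

curl http://localhost:11434
# Ollama is running
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5678
# 200 (or a redirect, depending on your n8n version)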

🎉 Great! Both containers are running and n8n can reach Ollama over the shared network. The only thing still missing is a model for Ollama to serve.


⛓️ Pull the correct model inside the Ollama container

Open a terminal inside the Ollama container:

docker exec -it ollama bash

You're now inside the container.

Pull a valid model (e.g., llama3):

ollama pull llama3
# smaller alternatives:
ollama pull llama3.2
ollama pull deepseek-r1:1.5b

Then confirm the downloads:

root@c5eabfa39b70:/# ollama list
NAME                ID              SIZE      MODIFIED
deepseek-r1:1.5b    e0979632db5a    1.1 GB    3 days ago
llama3.2:latest     a80c4f17acd5    2.0 GB    3 days ago
llama3:latest       365c0bd3c000    4.7 GB    3 days ago

⭐ This will download the official llama3 model.
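
If you prefer not to shell in, the same list is exposed over Ollama's standard HTTP API via the /api/tags endpoint:

curl http://localhost:11434/api/tags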

Exit the container:

exit

In n8n, update your model name:

When setting up the Ollama node in n8n, use the exact name of a model you pulled:

llama3
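
One gotcha: inside the n8n container, localhost refers to the n8n container itself, not your host. When creating the Ollama credentials, point the base URL at the Docker service name (exact field labels may vary slightly between n8n versions):

Base URL: http://ollama:11434
Model:    llama3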

✌🏻 The model name must match one you pulled, or the node will fail to find it.

You can also test the model directly with curl against Ollama's generate endpoint:

curl http://localhost:11434/api/generate \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "prompt": "5+5 ?",
    "stream": false
}'
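
With "stream": false, Ollama returns a single JSON object and the answer lands in the response field. The reply looks roughly like this (the values here are illustrative):

{
  "model": "llama3.2",
  "created_at": "2025-01-01T00:00:00Z",
  "response": "5 + 5 = 10",
  "done": true
}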

To list the pulled models without entering the container, run from your host:

docker exec -it ollama ollama list
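
The same docker exec pattern works for pulling, so you never have to open an interactive shell:

docker exec -it ollama ollama pull llama3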

(Screenshots: the n8n-Ollama connection setup in the n8n editor.)


🛑 Stop the containers:

docker-compose down
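
This stops and removes the containers but keeps the named volumes, so your downloaded models and n8n workflows survive the next docker-compose up. Add -v only if you want a completely clean slate:

# also deletes the ollama_data and n8n_data volumes
docker-compose down -v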

You Did It! Now Build AI Agents Locally—Fast & Free.

With Ollama + n8n, you can:

  • Run AI like Llama offline (no APIs, no costs)
  • Automate content, support, or data tasks in minutes
  • Own your AI workflow (no limits, no middlemen)

Your turn—launch a workflow and see the magic happen. 🚀
