DEV Community

Eric Berry


How to Set Up Ollama on Windows for Network Access via Tailscale

Running large language models locally with Ollama is fantastic, but what if you want to access your powerful Windows machine's Ollama instance from other devices on your network? This guide shows you how to set up Ollama on Windows 11 and make it securely accessible to any device on your Tailscale network.

Why This Setup?

By the end of this tutorial, you'll have:

  • Ollama running automatically as a background service on Windows
  • Secure access from any device on your Tailscale network
  • No GUI clutter or manual startup required
  • A robust setup that survives reboots

Prerequisites

  • Windows 11 Pro with administrator access
  • Tailscale account and Tailscale installed on your devices
  • Basic familiarity with PowerShell

Step 1: Install Ollama on Windows

  1. Download the Windows installer from ollama.ai
  2. Run the installer and complete the setup
  3. Ollama will install and likely add itself to Windows startup
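
To confirm the install succeeded, open a new terminal and check that the Ollama CLI responds:

```shell
# Verify the Ollama CLI is on your PATH and responding
ollama --version
```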

Step 2: Install and Configure Tailscale

  1. Download Tailscale from tailscale.com
  2. Install and authenticate with your Tailscale account
  3. Note your Windows machine's Tailscale IP address:
   tailscale ip -4

You'll get something like 100.119.78.59

Step 3: Stop Default Ollama Processes

The Windows installer starts Ollama automatically, but it only listens on localhost by default. We need to stop all running instances:

# Check for running Ollama processes
tasklist | findstr ollama

# Kill a specific process (replace [PID_NUMBER] with the actual process ID)
taskkill /f /pid [PID_NUMBER]

# Or kill all Ollama processes by image name in one step
taskkill /f /im ollama.exe

Step 4: Configure Ollama for Network Access

Set the environment variable to make Ollama listen on all network interfaces:

# Set system-wide environment variable
[Environment]::SetEnvironmentVariable("OLLAMA_HOST", "0.0.0.0:14434", "Machine")

Note: We're using port 14434 instead of the default 11434 to avoid conflicts, but you can use any available port.
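
Before moving on, you can confirm the variable actually landed at machine scope. Note that shells opened before the change won't see it, so read it back from the machine scope directly:

```shell
# PowerShell: read the value back from the machine scope
[Environment]::GetEnvironmentVariable("OLLAMA_HOST", "Machine")
```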

Step 5: Create an Automatic Startup Task

Instead of running Ollama as a manual service, we'll create a scheduled task that starts automatically:

# Create scheduled task (run as administrator)
schtasks /create /tn "OllamaAutoStart" /tr "C:\Users\[YOUR_USERNAME]\AppData\Local\Programs\Ollama\ollama.exe serve" /sc onstart /ru SYSTEM

Replace [YOUR_USERNAME] with your actual Windows username.
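
If you'd rather not hardcode the username, you can let PowerShell expand the path when creating the task. This is a sketch that assumes the default per-user install location and an elevated PowerShell prompt:

```shell
# PowerShell: expand the install path at task-creation time
schtasks /create /tn "OllamaAutoStart" `
  /tr "$env:LOCALAPPDATA\Programs\Ollama\ollama.exe serve" `
  /sc onstart /ru SYSTEM
```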

Step 6: Test the Setup

Start the scheduled task manually to test:

# Run the task
schtasks /run /tn "OllamaAutoStart"

# Verify it's running
tasklist | findstr ollama

You should see Ollama running in the background.

Step 7: Test Network Access via Tailscale

From another device on your Tailscale network, test the connection:

curl http://[YOUR_TAILSCALE_IP]:14434/api/tags

Replace [YOUR_TAILSCALE_IP] with the IP address from Step 2.

If successful, you should get a JSON response listing available models.
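
Once /api/tags responds, you can also exercise inference end to end. This sketch assumes you've already pulled a model (llama2 here) on the Windows machine and are running curl from a macOS or Linux client:

```shell
# Request a single non-streaming completion from the remote server
curl http://[YOUR_TAILSCALE_IP]:14434/api/generate \
  -d '{"model": "llama2", "prompt": "Say hello in five words.", "stream": false}'
```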

Step 8: Disable GUI Auto-Start (Optional)

If you prefer Ollama to run invisibly without the system tray icon:

  1. Press Ctrl + Shift + Esc to open Task Manager
  2. Go to the "Startup apps" tab (labeled "Startup" on older Windows builds)
  3. Find "Ollama", right-click it, and choose "Disable"

Step 9: Configure Other Devices

On any other device in your Tailscale network, point Ollama to your Windows server:

export OLLAMA_HOST=http://[YOUR_TAILSCALE_IP]:14434

Now commands like ollama list, ollama pull llama2, and ollama run llama2 will use your Windows machine instead of requiring local installation.
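
The export above only lasts for the current shell session. To make it stick, append it to your shell profile (assuming bash; adjust the file for zsh, fish, etc.):

```shell
# Persist the remote server address for future shells
echo 'export OLLAMA_HOST=http://[YOUR_TAILSCALE_IP]:14434' >> ~/.bashrc
```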

Troubleshooting

Port Conflicts

If you get a "bind: Only one usage of each socket address" error:

# Check what's using the port
netstat -ano | findstr :14434

# Kill conflicting processes
taskkill /f /pid [PID]

Service Won't Start

If the scheduled task fails to start Ollama:

# Check task status
schtasks /query /tn "OllamaAutoStart"

# Delete the task (then recreate it as in Step 5)
schtasks /delete /tn "OllamaAutoStart" /f

Environment Variable Not Working

Ensure the environment variable is set system-wide:

# Check the machine-scoped value (a variable set after a shell was opened
# won't appear in that shell's $env:OLLAMA_HOST)
[Environment]::GetEnvironmentVariable("OLLAMA_HOST", "Machine")

# Reset if needed
[Environment]::SetEnvironmentVariable("OLLAMA_HOST", "0.0.0.0:14434", "Machine")

Security Considerations

  • Tailscale provides encryption: All traffic between your devices is end-to-end encrypted
  • No public internet exposure: Your Ollama instance is only accessible within your Tailscale network
  • Access control: Use Tailscale's ACL features to restrict which devices can access your Ollama server
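
As a sketch of the ACL idea, an entry in your Tailscale policy file could limit the Ollama port to a single user. The user and hostname below are placeholders; in a real policy, windows-server would be defined in the policy's hosts section or replaced with the machine's Tailscale IP:

```json
{
  "acls": [
    // Allow only this user to reach the Ollama port on the server
    {"action": "accept", "src": ["you@example.com"], "dst": ["windows-server:14434"]}
  ]
}
```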

Benefits of This Setup

  1. Centralized compute: Use your powerful Windows machine's GPU from lightweight devices
  2. Consistent models: All devices share the same model downloads and configurations
  3. Resource efficiency: No need to run Ollama on every device
  4. Secure remote access: Access your AI models from anywhere with Tailscale
  5. Automatic startup: Everything works after reboots without manual intervention

Conclusion

You now have a robust, automatically-starting Ollama server that's securely accessible across all your devices via Tailscale. This setup is perfect for scenarios where you have one powerful machine but want to access AI models from laptops, phones, or other devices anywhere you have internet access.

The combination of Ollama's simplicity, Windows' scheduled tasks, and Tailscale's secure networking creates a professional-grade AI inference setup that "just works."


Top comments (1)

Xavier Mac

Thanks for this detailed guide! Could you clarify if there are any extra firewall settings needed on Windows to allow Ollama to listen on the new port, or does Tailscale handle all the necessary exceptions automatically?
