Ayush Kumar for NodeShift


Zed + Ollama + LLMs on a GPU VM: The Ultimate Local Dev Setup for Serious Coders


Zed is a next-generation code editor built from the ground up in Rust for ultimate performance. Whether you’re working solo or collaborating with your team in real time, Zed delivers a buttery-smooth coding experience — from instant startup times to zero-lag typing.

With native support for Git, Jupyter, terminals, and remote development, it’s tailored for modern workflows. Zed also integrates deeply with the latest AI assistants, letting you generate, transform, and review code effortlessly through agentic editing and inline intelligence — all while keeping you in control.

What makes Zed stand out?

  • Intelligent — Seamlessly connect your favorite models to edit, refactor, and debug faster.
  • Ridiculously Fast — Built in Rust to leverage your machine’s full power, including GPU.
  • Truly Collaborative — Code together, chat, and share context in real-time.
  • Extensible — Hundreds of language extensions, themes, and integrations ready to go.

Zed just works — and it keeps getting better with weekly updates and a growing open-source ecosystem.

This project is open source under the Apache 2.0 License. If you’re an open-source contributor, you’re good to go — start exploring and contributing today!

Resources

Website
Link: https://zed.dev/

GitHub
Link: https://github.com/zed-industries/zed

Step-by-Step Process to Set Up Zed + Ollama + LLMs

For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.

Step 1: Sign Up and Set Up a NodeShift Cloud Account

Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.

Follow the account setup process and provide the necessary details and information.

Step 2: Create a GPU Node (Virtual Machine)

GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.

Navigate to the menu on the left side, select the GPU Nodes option, and click the Create GPU Node button in the Dashboard to deploy your first Virtual Machine.

Step 3: Select a Model, Region, and Storage

In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.

We will use 1 x RTXA6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.

Step 4: Select Authentication Method

There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.

Step 5: Choose an Image

Next, you will need to choose an image for your Virtual Machine. We will deploy everything on an NVIDIA CUDA Virtual Machine. This proprietary, closed-source parallel computing platform will allow you to install Ollama and run GPU-accelerated workloads on your GPU Node.

After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.

Step 6: Virtual Machine Successfully Deployed

You will get visual confirmation that your node is up and running.

Step 7: Connect to GPUs using SSH

NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.

Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.

Now open your terminal and paste the proxy SSH IP or direct SSH IP.
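
A direct connection command typically looks something like this (substitute the IP address and SSH port shown on your deployment page):
ssh root@<your-vm-ip> -p <your-ssh-port>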

Next, if you want to check the GPU details, run the command below:
nvidia-smi
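
If you just want a compact summary instead of the full table, nvidia-smi also supports query flags, for example:
nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv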


Step 8: Install Ollama

After connecting to the terminal via SSH, it’s now time to install Ollama from the official Ollama website.

Website Link: https://ollama.com/

Run the following command to install Ollama:
curl -fsSL https://ollama.com/install.sh | sh
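
Once the script finishes, you can confirm that the binary is available and check which version was installed:
ollama --version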


Step 9: Serve Ollama

Run the following command to serve Ollama so that it listens on all interfaces and can be reached from outside the VM:
OLLAMA_HOST=0.0.0.0:11434 ollama serve
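
Note that ollama serve occupies the terminal and stops when your SSH session closes. If you'd rather keep it running in the background, one option (a sketch; tmux or a systemd unit would work just as well) is:
OLLAMA_HOST=0.0.0.0:11434 nohup ollama serve > ollama.log 2>&1 &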


Step 10: Set Up SSH Port Forwarding (For Remote Models Like Ollama on a GPU VM)

If you’re running a model like Ollama on a remote GPU Virtual Machine (e.g. via NodeShift, AWS, or your own server), you’ll need to port forward the Ollama server to your local machine so Zed Editor can connect to it.

Here’s how to do it:

Example (Mac/Linux Terminal):
ssh -L 11434:localhost:11434 root@<your-vm-ip> -p <your-ssh-port>

Once connected, your local machine will treat http://localhost:11434 as if Ollama is running locally.

Replace <your-vm-ip> with your VM’s IP address
Replace <your-ssh-port> with the custom SSH port (e.g. 26055)

On Windows:

Use a tool like PuTTY or ssh from WSL/PowerShell with similar port forwarding.
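
On Windows 10/11 the OpenSSH client is built in, so the same command also works from PowerShell:
ssh -L 11434:localhost:11434 root@<your-vm-ip> -p <your-ssh-port>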

If you’re running large language models (like Llama 3, DeepSeek, or Qwen) on a remote GPU Virtual Machine, you’ll want Zed Editor on your local machine to talk to that remote Ollama instance.

But since the model is running on the VM — not on your laptop — we need to bridge the gap.

That’s where SSH port forwarding comes in.

Why use a GPU VM?

Large models require serious compute power. Your laptop might struggle or overheat trying to run them. So we spin up a GPU-powered VM in the cloud — it gives us:

  • Faster responses
  • Support for large models (7B, 13B, even 70B!)
  • More RAM + VRAM for smoother inference

Step 11: Run Your First Models in Ollama (Devstral + Qwen 2.5)

Now that Ollama is reachable at http://localhost:11434 through the SSH tunnel, let’s run our first models.

We’ll use two powerful open-source models:

Devstral by Mistral AI

A brand new model purpose-built for coding agents. Devstral isn’t just about code completion — it’s designed to handle real-world software engineering tasks, like resolving GitHub issues and working inside codebases.

To run it on Ollama:
ollama run devstral

Run this command on your GPU Virtual Machine, not your Mac.

Built by Mistral AI in collaboration with All Hands AI, Devstral is optimized for local use (even on a Mac with 32GB RAM or a single RTX 4090) and is fully open under the Apache 2.0 license.

If you want to dive deeper into Devstral, we’ve got a full step-by-step guide here:

Link: https://nodeshift.com/blog/a-step-by-step-to-install-devstral-mistrals-open-source-coding-agent

Qwen 2.5 VL by Qwen

Another great lightweight model you can try locally is Qwen 2.5 VL, specifically the 3B variant — perfect for fast inference and lower memory usage.

Run it with:
ollama run qwen2.5vl:3b
Again, run this command on your GPU Virtual Machine.

This is a solid pick for fast testing, multi-language reasoning, and creative coding tasks — without needing a huge GPU.

You can switch between models anytime by running a different ollama run command in the background. Once it’s active, Zed Editor will automatically detect and use the running model.
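
To check which models are currently loaded on the VM (and unload one you no longer need), recent Ollama releases also provide ps and stop subcommands:
ollama ps
ollama stop devstral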

Step 12: Check Available Models via curl (From Your Mac)

Once your Ollama backend is running on the remote GPU VM and connected to your Mac via SSH port forwarding, you can use a simple curl command from your local machine to check which models are currently available.

First, Pull the Models (like Devstral or Qwen 2.5 VL)
Before you can list anything, you’ll need to pull the models you plan to use. For example:

ollama pull devstral
ollama pull qwen2.5vl:3b

These commands run on the VM and download the models for Ollama to use.

Then, run this command on your Mac:
curl http://localhost:11434/api/tags

This command connects to your forwarded Ollama server and shows a list of all the models you’ve pulled so far. It gives you a response like:

{
  "models": [
    { "name": "devstral" },
    { "name": "qwen2.5vl:3b" }
  ]
}

Note: This command runs on your Mac, not on the VM — because we’ve already port-forwarded localhost:11434 to the remote GPU VM where Ollama is active.
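
If you want to go one step further than listing models, you can also send a quick test prompt through the same tunnel using Ollama’s /api/generate endpoint (the prompt below is just an example):
curl http://localhost:11434/api/generate -d '{"model": "devstral", "prompt": "Write a one-line hello world in Python", "stream": false}'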

So in short:

  • Ollama is running remotely on a GPU VM
  • You’ve connected it to your Mac via SSH
  • And now you can check, query, and chat with models — all from your local editor

Step 13: Download Zed Editor

Open Google or any web browser, search for “Zed Editor”, visit the official website, and click the “Download” button to download it.

Step 14: Open Zed Editor

Once the download is complete, open the Zed Editor app from your Applications folder or Start Menu. It should launch with a clean, minimal interface ready for setup.

Step 15: Choose Your LLM Provider

Head over to Zed → Settings → LLMs.

Here, you’ll find a list of supported large language model providers — both free and paid:

  • Google AI
  • Mistral AI
  • LM Studio
  • OpenAI
  • Ollama

For this setup, we’ll go with Ollama — it’s fast, flexible, and works perfectly with self-hosted models.

We’ll be running Ollama on a GPU-powered VM, because we’re planning to load and play with large models that need serious horsepower. This gives us much faster response times during code generation and edits.

Once Ollama is running on the GPU server, we’ll expose it to our Mac using SSH port forwarding — so we can interact with the models locally inside Zed, just like a native setup.

This lets you harness powerful cloud GPUs, while keeping your coding workflow smooth and private on your Mac.

Step 16: View Available Models Inside Zed Editor

Once your models (like devstral or qwen2.5vl:3b) are running in the background via Ollama on your GPU VM — and the port forwarding is active — Zed Editor will automatically detect them.

Where to Find Them?
Head to the “Models” section inside Zed Editor.
You’ll see a list of all available models that Ollama is currently serving.

Models like devstral, qwen2.5vl:3b, or any other you’ve pulled and started will show up here — ready to chat, code, or assist you inside the editor.

No need for any extra configuration — Zed listens to http://localhost:11434, detects the models, and makes them available in the dropdown or sidebar automatically.

You’re now all set to write, test, and build using real local models inside Zed — powered by your GPU VM!

Step 17: Select a Model and Start Running Prompts

You’re almost there — now it’s time to put everything into action!

How to Use:

  • Go to the Models section inside Zed Editor.
  • Select the model you want to use (e.g., devstral, gemma3:1b, etc.).
  • Jump into any file or open a new tab.
  • Use the built-in chat panel or prompt bar to ask questions, get suggestions, or generate code.

For example:

“How do I set up a Node.js server?”
“Refactor this function to use async/await.”
“Write a Python script to scrape a webpage.”


Whatever your task — Zed + Ollama + your remote GPU model is now fully connected and ready to respond.

Step 18: Use a Paid Provider (Like OpenAI) by Adding Your API Key

If you prefer to use OpenAI’s models (like GPT-4 or GPT-4o), Zed Editor also supports that — all you need is your OpenAI API key.

How to Set It Up:

  • Open Zed Editor
  • Go to the Settings panel
  • Navigate to the Providers or API Keys section
  • Choose OpenAI from the list
  • Paste your OpenAI API key in the input field

Your key stays local and is only used within the editor — privacy is respected.

Once added, you can start using OpenAI’s models alongside your local ones. The same prompt bar and model selection flow applies — just choose OpenAI from the model menu and start coding, chatting, or writing.

Step 19: Select OpenAI Model and Run Prompts

Now that your OpenAI API key is added in Zed Editor, it’s time to put it to use.

How to Use:

  • Head to the Models panel or dropdown inside Zed.
  • From the provider list, select OpenAI.
  • Choose the model you want — like gpt-4, gpt-4o, or gpt-3.5-turbo.
  • Open any file or tab and start typing your question or prompt.

For example:

“Explain what this Python function does.”
“Generate TypeScript types from this JSON.”
“Write unit tests for this function.”


The response will come directly from OpenAI’s API — integrated neatly into your Zed workflow.

Whether you’re coding, debugging, or brainstorming — it just works.

Conclusion

That’s it — your dream dev setup is now live. Zed Editor running on your Mac, Ollama hosting powerful models like Devstral and Qwen 2.5 VL on a GPU-powered VM, and everything connected seamlessly via SSH.

You get the best of both worlds:

  • A blazing-fast local code editor with smart inline LLM assistance
  • Backed by scalable, VRAM-heavy cloud GPUs that can actually run 7B+ models without breaking a sweat

This setup doesn’t just help you write code — it understands your workflow, adapts to your stack, and gives you full control from start to finish.

Whether you’re debugging legacy code, building a side project, or brainstorming wild ideas at 2AM — Zed + Ollama + LLMs + NodeShift GPU = your new secret weapon.

Top comments (1)

Dotallio

This is such an in-depth setup, love the level of detail! Has anyone pushed this even further with larger models or mixed in any local no-code/AI workflows alongside Zed and Ollama?
