# ollama

## Posts

- The Best of Both Worlds: Merging IBM’s Project Bob with Ollama’s Image Ecosystem · 10 min read
- How to Use ollama Remotely · 1 reaction · 4 min read
- Connecting Claude Code to Ollama to Use Local Models · 2 reactions · 1 min read
- Open WebUI: Self-Hosted LLM Interface · 13 min read
- Running Local AI with Flox and Ollama · 2 min read
- Building Self-Refining AI Agents with Ollama & Langfuse · 3 min read
- The Ultimate LLM Inference Battle: vLLM vs. Ollama vs. ZML · 1 reaction · 6 min read
- This Might Be the Best Ollama Chat Client: OllaMan · 1 reaction · 4 min read
- Update of “Fun project of the week, Mermaid flowcharts generator!” — V2 and more… · 10 min read
- The Complete Guide to Local AI Coding in 2026 · 2 reactions · 4 min read
- Choosing the Right LLM for Cognee: Local Ollama Setup · 3 min read
- Diagnose & Fix Painfully Slow Ollama: 4 Essential Debugging Techniques + Fixes · 20 reactions · 3 min read
- Securely Exposing LM Studio with Nginx Proxy + Auth + Manage loaded models · 3 min read
- Building an AI-Powered Log Analyser with RAG · 6 min read
- Stop Paying OpenAI: Free Local AI in .NET with Ollama · 7 reactions · 2 comments · 13 min read
- Complete Guide: Accessing Ollama on Windows from WSL for AI Agent Development · 10 min read
- AI Infrastructure on Consumer Hardware · 5 reactions · 9 min read
- Local LLM Hosting: Complete 2025 Guide - Ollama, vLLM, LocalAI, Jan, LM Studio & More · 1 reaction · 19 min read
- Using Ollama Web Search API in Python · 2 reactions · 2 comments · 9 min read
- Using Ollama Web Search API in Go · 11 min read
- 💻 Unlock RAG-Anything’s Power with Ollama on Your Machine (with Docling as Bonus) · 7 comments · 8 min read
- How to Build Your Own AI Platform with Ollama Cloud Models · 3 min read
- The Open Source PR code review copilot for Gitlab & ADO that Supports Ollama (without CI/CD setup) · 2 min read
- Pros and Cons using containerized Ollama vs. local setup for Generative AI Applications · 6 min read
- Meeting “mellea” !!! · 10 min read