Forem

# selfhosted

Posts

- Tired of overpaying for emails, I built Senddock: A Self-Hostable Email API and Campaign Platform (1 comment · 2 min read)
- llama.cpp MTP Beta, Gemma GGUF Fixes, & Sentinel Local-First AI Coding App (3 min read)
- My Backup Failed Twice: Docker Permissions, Then GitHub's 2 GiB Limit (4 min read)
- Why We Chose Self-Hosted AI Over Cloud for Business Data — Posted by the RagLeap team, building RagLeap, a private-server AI business platform (4 min read)
- Production AI Agent Wallet: GHCR Image with Auto-Provision and Healthcheck (5 min read)
- I gave my AI agent 3 tasks. Here's exactly what happened. (3 min read)
- I Built a Model Router That Picks the Right AI for Every Task — Here's Why You Should Too (4 min read)
- Why We Chose Self-Hosted AI Over Cloud for Business Data (4 min read)
- Qwen3.6-27B Local Inference on RTX 3090 with Native vLLM & Ollama Fallback (3 min read)
- Shipping Web Apps to a VPS Should Be This Simple (4 min read)
- How to plan a private Telegram AI assistant with OpenClaw (3 min read)
- Helicone is now in maintenance mode. Here is how to switch to a self-hosted alternative in 5 minutes. (2 min read)
- Building ChatNova: How I'm bringing IRC into 2026 with a modern webchat (2 min read)
- I built a self-hosted AI agent that markets itself. Here's how. (2 comments · 3 min read)
- PFlash Boosts llama.cpp Prefill; Ollama Sees Major Speed Gains; Llama 3.2 on Android (3 min read)