
# ollama

## Posts

- **How to Set Up a Local Ubuntu Server to Host Ollama Models with a WebUI** · 24 reactions · 4 comments · 5 min read
- **Ollama 0.5 Is Here: Generate Structured Outputs** · 2 reactions · 3 min read
- **Building AI-Powered Apps with SvelteKit: Managing HTTP Streams from Ollama Server** · 8 reactions · 6 min read
- **Building 5 AI Agents with phidata and Ollama** · 37 reactions · 2 comments · 6 min read
- **Run Ollama on Intel Arc GPU (IPEX)** · 42 reactions · 2 comments · 5 min read
- **Quick tip: Running OpenAI's Swarm locally using Ollama** · 2 reactions · 2 min read
- **Langchain4J musings** · 12 reactions · 8 min read
- **How to deploy SmolLM2 1.7B on a Virtual Machine in the Cloud with Ollama?** · 11 reactions · 6 min read
- **Ollama - Custom Model - llama3.2** · 22 reactions · 3 comments · 4 min read
- **Run Llama 3 Locally** · 1 reaction · 2 min read
- **Coding Assistants and Artificial Intelligence for the Rest of Us** · 1 min read
- **Using a Locally-Installed LLM to Fill in Client Requirement Gaps** · 1 reaction · 6 min read
- **Consuming HTTP Streams in PHP with Symfony HTTP Client and Ollama API** · 15 reactions · 2 comments · 3 min read
- **Llama 3.2 Running Locally in VSCode: How to Set It Up with CodeGPT and Ollama** · 44 reactions · 1 comment · 2 min read
- **Ollama Unveiled: Run LLMs Locally** · 1 reaction · 2 min read
- **No Bullshit Guide to Youtube shorts automation in NodeJS, OpenAI, Ollama, ElevanLabs & ffmpeg** · 1 reaction · 3 min read
- **OLLAMA + LLAMA3 + RAG + Vector Database (Local, Open Source, Free)** · 32 reactions · 2 min read
- **The 6 Best LLM Tools To Run Models Locally** · 3 reactions · 14 min read
- **Langchain Chat Assistant using Chainlit App** · 4 reactions · 2 min read
- **How to deploy Llama 3.1 405B in the Cloud?** · 30 reactions · 5 min read
- **LLM Support for Your PHP AI Projects** · 3 reactions · 1 min read
- **Run & Debug your LLM Apps locally using Ollama & Llama 3.1** · 6 reactions · 4 min read
- **How to deploy Llama 3.1 in the Cloud: A Comprehensive Guide** · 43 reactions · 5 min read
- **How to Get Automatic Code Review Using LLM Before Committing** · 93 reactions · 2 comments · 7 min read
- **Summary and Explanation of Plain English Papers** · 5 min read