# ollama

Posts

- How to Create a Node.js Proxy Server for Hosting the DeepSeek-R1 7B Model (14 reactions · 6 min read)
- Local AI WebAPI with Semantic Kernel and Ollama (2 min read)
- Extending Semantic Kernel: Creating Plugins for Dynamic Queries (3 reactions · 8 min read)
- Semantic Kernel: Create an API for Text Generation with Ollama and Aspire (4 reactions · 8 min read)
- Building an Ollama-Powered GitHub Copilot Extension (25 reactions · 5 min read)
- Working with LLMs in .NET using Microsoft.Extensions.AI (2 reactions · 6 min read)
- Local AI apps with C#, Semantic Kernel and Ollama (2 reactions · 2 min read)
- Step-by-Step Guide: Write Your First AI Storyteller with Ollama (llama3.2) and Semantic Kernel in C# (8 reactions · 2 comments · 5 min read)
- Running Out of Space? Move Your Ollama Models to a Different Drive 🚀 (3 reactions · 1 min read)
- Run LLMs Locally with Ollama & Semantic Kernel in .NET: A Quick Start (15 reactions · 6 min read)
- How to Set Up a Local Ubuntu Server to Host Ollama Models with a WebUI (23 reactions · 4 comments · 5 min read)
- Ollama 0.5 Is Here: Generate Structured Outputs (2 reactions · 3 min read)
- Building AI-Powered Apps with SvelteKit: Managing HTTP Streams from Ollama Server (8 reactions · 6 min read)
- Building 5 AI Agents with phidata and Ollama (37 reactions · 2 comments · 6 min read)
- Run Ollama on Intel Arc GPU (IPEX) (40 reactions · 2 comments · 5 min read)
- Quick tip: Running OpenAI's Swarm locally using Ollama (2 reactions · 2 min read)
- Langchain4J musings (12 reactions · 8 min read)
- How to deploy SmolLM2 1.7B on a Virtual Machine in the Cloud with Ollama? (11 reactions · 6 min read)
- Ollama - Custom Model - llama3.2 (22 reactions · 3 comments · 4 min read)
- Run Llama 3 Locally (1 reaction · 2 min read)
- Coding Assistants and Artificial Intelligence for the Rest of Us (1 min read)
- Using a Locally-Installed LLM to Fill in Client Requirement Gaps (1 reaction · 6 min read)
- Consuming HTTP Streams in PHP with Symfony HTTP Client and Ollama API (15 reactions · 2 comments · 3 min read)
- Llama 3.2 Running Locally in VSCode: How to Set It Up with CodeGPT and Ollama (44 reactions · 1 comment · 2 min read)
- Ollama Unveiled: Run LLMs Locally (1 reaction · 2 min read)