
# ollama

Posts

- Blossoming Intelligence: How to Run Spring AI Locally with Ollama (24 reactions, 3 min read)
- Running LLMs on Android: A Step-by-Step Guide (4 reactions, 3 comments, 2 min read)
- Quick tip: Ollama + SingleStore - LangChain = :-( (1 reaction, 5 min read)
- Run Large and Small Language Models locally with ollama (14 reactions, 1 comment, 3 min read)
- Setup REST-API service of AI by using Local LLMs with Ollama (22 reactions, 3 min read)
- Quick tip: How to Build Local LLM Apps with Ollama and SingleStore (14 reactions, 5 min read)
- OpenUI - One prompt UI Generation (16 reactions, 1 min read)
- Hindi-Language AI Chatbot for Enterprises Using Qdrant, MLFlow, and LangChain (2 reactions, 11 min read)
- Conversational AI for Everyone: Create Your Own LLM (2 reactions, 2 min read)
- Setup Llama 3 using Ollama and Open-WebUI (3 min read)
- Devika - Install in local (9 reactions, 1 min read)
- AI/ML - LangChain4j - AiServices (2 reactions, 9 min read)
- Empowering Local AI: Exploring Ollama LLM Model for Open Source Integration (2 comments, 5 min read)
- How to Install Ollama on Windows (109 reactions, 6 min read)
- AI/ML - Langchain4j - Chat Memory (2 reactions, 7 min read)
- How to get up & running a LLM locally - in 5 minutes (2 reactions, 2 min read)
- Supercharge Your Productivity with Ollama + Open Web UI and Large Language Models (42 reactions, 2 comments, 3 min read)
- Beginning the AI/ML Journey with Ollama, Langchain4J & JBang (2 reactions, 1 comment, 10 min read)
- Java & GenAI : The Ultimate Developer's joy with Quarkus, Langchain4j and Ollama (7 reactions, 1 comment, 3 min read)
- OLLAMA with AMD GPU (ROCm) (9 reactions, 2 min read)
- Locally hosted AI writing assistant (29 reactions, 5 comments, 3 min read)
- CoLlama 🦙 - ollama as your AI coding assistant (local machine and free) (63 reactions, 2 comments, 4 min read)
- How to Run Large Language Models Locally on a Windows Machine Using WSL and Ollama (10 reactions, 3 comments, 1 min read)
- Running Ollama 2 on NVIDIA Jetson Nano with GPU using Docker (10 reactions, 2 comments, 3 min read)