RajeevaChandra
Getting Started with Ollama and CrewAI

A Beginner-Friendly Guide to Building AI Agents on Your Own Machine

Have you ever wished for an intelligent assistant to help you plan trips, write blog posts, or do research? With tools like Ollama and CrewAI, you can now build your own AI-powered team on your laptop.

And the best part?
You don’t need to be an AI expert to get started.

In this guide, I’ll break things down in plain English and walk you through:

1) What Ollama is
2) What CrewAI does
3) How to set them up together
4) How to build your intelligent agents

What is Ollama?

Ollama is a simple tool that lets you run large language models (LLMs) — like ChatGPT-style models — locally on your machine.
No internet, no cloud, and no privacy concerns.

  • Simple installation and setup
  • Access to various open-source models
  • Local execution (no internet required after setup)
  • Command-line interface for interaction

🧠 Think of Ollama as your own personal AI lab on your laptop.

You can run models like:

  • LLaMA 3
  • Mistral
  • Code LLaMA

And more...

All with just one command.

👥 What is CrewAI?

While Ollama gives you the brain, CrewAI gives you the hands.

CrewAI helps you create a team of AI agents that can collaborate, communicate, and accomplish tasks for you.

  • Multi-agent collaboration
  • Role specialization for different agents
  • Task delegation and coordination
  • Integration with various LLMs (including those run via Ollama)

Each agent has:

  • A role
  • A goal
  • And a task

Together, they work as a crew to get things done.
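The role/goal/task split can be sketched in plain Python. This is a conceptual model only, not the CrewAI API — the class and field names below are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class SketchAgent:
    role: str   # who the agent is
    goal: str   # what it is trying to achieve

@dataclass
class SketchTask:
    description: str    # the assignment
    agent: SketchAgent  # who carries it out

researcher = SketchAgent(role="Researcher", goal="Find facts")
task = SketchTask(description="Research AI in healthcare", agent=researcher)

# A crew, conceptually, is just a list of such tasks run in order.
print(f"{task.agent.role} will: {task.description}")
```

CrewAI's real `Agent` and `Task` classes carry the same three ideas, plus the LLM that powers each agent.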

Why use them together? Ollama runs the AI models locally on your device — offline, private, and secure — while CrewAI organizes those models into agents that collaborate on complex, multi-step workflows.

Step 1: Install Ollama
First, let's install Ollama on your system:

For macOS:

brew install ollama

For Linux:

curl -fsSL https://ollama.com/install.sh | sh

For Windows (via Winget):

winget install Ollama.Ollama

Step 2: Run an AI Model

ollama run llama2


This downloads and runs the model right on your machine.
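Under the hood, `ollama run` talks to a local server that also exposes an HTTP API. The sketch below only builds the JSON body for Ollama's `/api/generate` endpoint — the actual POST (to `http://localhost:11434/api/generate` once the server is running) is left out so the snippet runs anywhere:

```python
import json

def build_generate_payload(model: str, prompt: str) -> dict:
    # Ollama's /api/generate endpoint accepts a JSON body with these fields;
    # stream=False asks for one complete response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_generate_payload("llama2", "Tell me a joke about AI")
print(json.dumps(payload))
```

This is handy later: CrewAI talks to the same local server, so anything that can reach port 11434 can use your model.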

Step 3: Install CrewAI
First, make sure Python is installed. Then open your terminal and run:

pip install crewai


Step 4: Your First Ollama Interaction
Let's test Ollama with a simple query:

ollama run llama2 "Tell me a joke about AI"

You should see the model generate a response directly in your terminal!

Step 5: Creating Your First CrewAI Project
Now let's build a simple CrewAI setup that uses Ollama as its LLM provider.

Create a new Python file, my_first_crew.py:

from crewai import Agent, Task, Crew, LLM

# Point CrewAI at the locally running Ollama model.
# The "ollama/" prefix tells CrewAI which provider to use;
# 11434 is Ollama's default port.
ollama_llm = LLM(model="ollama/llama2", base_url="http://localhost:11434")

# Define your agents
researcher = Agent(
    role='Senior Research Analyst',
    goal='Discover new insights',
    backstory="You're an expert at finding interesting information",
    llm=ollama_llm,
    verbose=True
)

writer = Agent(
    role='Content Writer',
    goal='Write engaging content',
    backstory="You're a talented writer who simplifies complex information",
    llm=ollama_llm,
    verbose=True
)

# Create tasks (recent CrewAI versions require expected_output)
research_task = Task(
    description='Find interesting facts about AI in healthcare',
    expected_output='A short list of interesting facts',
    agent=researcher
)

write_task = Task(
    description='Write a short blog post about AI in healthcare',
    expected_output='A short blog post',
    agent=writer
)

# Form the crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    verbose=True  # recent CrewAI versions expect a boolean here, not 2
)

# Execute the crew's tasks sequentially
result = crew.kickoff()

print("Here's the result:")
print(result)

Run it with:

python my_first_crew.py

Understanding the Code

  • Agents: Specialized AI workers with specific roles
  • Tasks: Individual assignments for each agent
  • Crew: The team that coordinates the agents
  • Ollama Integration: We're using the locally-run LLM via Ollama
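How the crew coordinates — each task's output becoming context for the next — can be mimicked with a tiny pipeline. This is plain Python, not CrewAI's internals; the function names are made up for illustration:

```python
def research(topic: str) -> str:
    # Stand-in for the researcher agent's LLM call.
    return f"Three facts about {topic}"

def write_post(notes: str) -> str:
    # Stand-in for the writer agent; it receives the researcher's output.
    return f"Blog post based on: {notes}"

# Sequential execution: the output of task 1 feeds task 2,
# which is what crew.kickoff() handles for you.
notes = research("AI in healthcare")
post = write_post(notes)
print(post)
```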

Troubleshooting Tips

  1. If Ollama isn't working, make sure the service is running (ollama serve)
  2. For CrewAI errors, check your Python version (CrewAI requires 3.10+)
  3. Start with small models if you have limited RAM
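For the first tip, you can also check from Python whether anything is listening on Ollama's default port (11434). A small helper, assuming the default host and port:

```python
import socket

def ollama_running(host: str = "localhost", port: int = 11434) -> bool:
    # Try a plain TCP connection to Ollama's default port; True means
    # something is listening there (normally `ollama serve`).
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False

print("Ollama reachable:", ollama_running())
```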

Conclusion

You've now taken your first steps with Ollama and CrewAI! These tools open up exciting possibilities for local AI development and multi-agent systems. As you become more comfortable, you can explore more advanced features and build increasingly sophisticated AI applications.

The full starter code for this guide is available at:
https://github.com/rajeevchandra/ollama-crewai-starter
