<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: shailendra khade</title>
    <description>The latest articles on Forem by shailendra khade (@shailendra_khade_df763b45).</description>
    <link>https://forem.com/shailendra_khade_df763b45</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3703773%2Fc554dfed-3a53-4622-af00-096998bc5e0e.png</url>
      <title>Forem: shailendra khade</title>
      <link>https://forem.com/shailendra_khade_df763b45</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/shailendra_khade_df763b45"/>
    <language>en</language>
    <item>
      <title>Build a Simple Local Pune Travel AI with FAISS + Ollama LLM - POC</title>
      <dc:creator>shailendra khade</dc:creator>
      <pubDate>Fri, 06 Feb 2026 07:40:45 +0000</pubDate>
      <link>https://forem.com/shailendra_khade_df763b45/build-a-simple-local-pune-travel-ai-with-faiss-ollama-llm-poc-3dd0</link>
      <guid>https://forem.com/shailendra_khade_df763b45/build-a-simple-local-pune-travel-ai-with-faiss-ollama-llm-poc-3dd0</guid>
      <description>&lt;p&gt;Ever wondered how to create your own local AI assistant for city tours or travel recommendations? In this POC, we build a Pune Grand Tour AI using FAISS vector database for embeddings and Ollama LLM for generating answers. No Docker, no cloud costs — just local Python and embeddings.&lt;/p&gt;

&lt;p&gt;Let’s go step by step.&lt;/p&gt;

&lt;p&gt;🔹 &lt;strong&gt;Purpose of this POC&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Local AI assistant:&lt;/strong&gt; a mini ChatGPT specialized for Pune tourism.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Quick retrieval:&lt;/strong&gt; embeddings enable fast similarity search over a curated dataset.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cost-efficient:&lt;/strong&gt; no cloud vector DB required; FAISS runs entirely locally.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Hands-on AI exploration:&lt;/strong&gt; a practical pipeline of embeddings → vector DB → LLM.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔹 &lt;strong&gt;Why FAISS?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;FAISS (Facebook AI Similarity Search) is a high-performance library for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storing vector embeddings&lt;/li&gt;
&lt;li&gt;Performing fast similarity search (e.g. nearest-neighbor search)&lt;/li&gt;
&lt;li&gt;Working locally, without cloud infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key point:&lt;/strong&gt; FAISS is ideal for a project like this because all the Pune data fits in memory, and retrieval is very fast.&lt;/p&gt;
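&lt;p&gt;Under the hood, the similarity search that FAISS accelerates is nearest-neighbor lookup over embedding vectors. A minimal pure-Python sketch of that idea (illustration only, with toy 3-dimensional vectors; FAISS does the same over millions of high-dimensional vectors using optimized indexes):&lt;/p&gt;

```python
import math

def cosine_sim(a, b):
    # Cosine similarity: dot product over the product of vector norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (all-MiniLM-L6-v2 actually produces 384 dims)
docs = {
    "Shaniwar Wada": [0.9, 0.1, 0.0],
    "Aga Khan Palace": [0.3, 0.8, 0.1],
    "Pune Airport": [0.0, 0.1, 0.9],
}

query = [0.85, 0.15, 0.05]  # pretend embedding of "historic fort in Pune"
best = max(docs, key=lambda name: cosine_sim(query, docs[name]))
print(best)  # Shaniwar Wada
```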

&lt;p&gt;🔹 &lt;strong&gt;Dataset&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We used a simple text dataset (&lt;code&gt;pune_places_chunks.txt&lt;/code&gt;) describing Pune's:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Historical forts&lt;/li&gt;
&lt;li&gt;Monuments&lt;/li&gt;
&lt;li&gt;Museums &lt;/li&gt;
&lt;li&gt;Tourist spots&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each line or chunk represents one document. Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[PLACE] Shaniwar Wada
Shaniwar Wada is a historic fort located in Pune, built in 1732 by Peshwa Bajirao I.
It served as the administrative center of the Maratha Empire.


[PLACE] Aga Khan Palace
The Aga Khan Palace is known for its association with Mahatma Gandhi and India's independence movement.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
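&lt;p&gt;Note: &lt;code&gt;TextLoader&lt;/code&gt; loads the whole file as a single document. If you want one document per place, a small pre-processing step (my sketch, not part of the original POC; &lt;code&gt;split_chunks&lt;/code&gt; is a hypothetical helper) could split on the &lt;code&gt;[PLACE]&lt;/code&gt; markers:&lt;/p&gt;

```python
def split_chunks(text):
    # One chunk per "[PLACE]" marker; each chunk becomes one document
    chunks = []
    for block in text.split("[PLACE]"):
        block = block.strip()
        if block:
            chunks.append("[PLACE] " + block)
    return chunks

sample = """[PLACE] Shaniwar Wada
Shaniwar Wada is a historic fort located in Pune, built in 1732.

[PLACE] Aga Khan Palace
The Aga Khan Palace is known for its association with Mahatma Gandhi.
"""
chunks = split_chunks(sample)
print(len(chunks))  # 2
```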



&lt;p&gt;🔹 &lt;strong&gt;Step 1: Create Embeddings &amp;amp; FAISS Vector Store&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Python script: ingest.py&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain_community.document_loaders import TextLoader
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS


# 1 Load processed Pune data
loader = TextLoader("../data/processed/pune_places_chunks.txt")
documents = loader.load()


# 2 Create embeddings
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)


# 3 Create FAISS vector store
vectorstore = FAISS.from_documents(documents, embeddings)


# 4 Save vector DB locally
vectorstore.save_local("../embeddings/pune_faiss")


print("Pune embeddings created successfully")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;python ingest.py&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Expected output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Pune embeddings created successfully
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;TextLoader&lt;/code&gt; loads our text chunks.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;HuggingFaceEmbeddings&lt;/code&gt; converts each document into a vector representation.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;FAISS.from_documents&lt;/code&gt; builds a searchable vector store.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;save_local&lt;/code&gt; persists the FAISS index for later retrieval.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔹 &lt;strong&gt;Step 2: Query FAISS with Ollama LLM&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Python script: chat.py&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_ollama import OllamaLLM


# 1 Load embeddings
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
print("Embeddings loaded")


# 2 Load FAISS DB (allow pickle deserialization)
vectordb = FAISS.load_local(
    "../embeddings/pune_faiss",
    embeddings,
    allow_dangerous_deserialization=True
)
print("FAISS Vector DB loaded")


# 3 Ask a question
question = "Tell me famous places to visit in Pune"
docs = vectordb.similarity_search(question, k=3)


if len(docs) == 0:
    print("No documents retrieved. Check embeddings folder.")
    exit(1)


context = "\n".join([d.page_content for d in docs])
print(f"Retrieved docs count: {len(docs)}")
print("Context preview (first 300 chars):")
print(context[:300])


# 4 Initialize Ollama LLM
llm = OllamaLLM(model="llama3")
print("Ollama LLM loaded")


# 5 Build prompt
prompt = f"""
You are a Pune travel guide AI.
Answer using only the context below.


Context:
{context}


Question:
{question}
"""


# 6 Generate AI response
response = llm.invoke(prompt)


print("\nPune AI says:\n")
print(response)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;python chat.py&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sample output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Embeddings loaded
FAISS Vector DB loaded
Retrieved docs count: 3
Context preview (first 300 chars):
[PLACE] Shaniwar Wada
Shaniwar Wada is a historic fort located in Pune, built in 1732...
Ollama LLM loaded


Pune AI says:


Pune is famous for Shaniwar Wada, Sinhagad Fort, Aga Khan Palace, and Dagdusheth Ganpati Temple.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;similarity_search&lt;/code&gt; retrieves the top 3 most relevant documents.&lt;/li&gt;
&lt;li&gt;The retrieved context is concatenated and sent as a prompt to the Ollama LLM.&lt;/li&gt;
&lt;li&gt;The LLM generates a natural-language answer grounded in the retrieved Pune knowledge.&lt;/li&gt;
&lt;/ul&gt;
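&lt;p&gt;Stripped of the libraries, the retrieve-then-prompt pattern is just string assembly (a sketch; &lt;code&gt;build_prompt&lt;/code&gt; is my hypothetical helper mirroring the prompt used in &lt;code&gt;chat.py&lt;/code&gt;):&lt;/p&gt;

```python
def build_prompt(question, retrieved_docs):
    # Concatenate retrieved chunks, then constrain the model to them
    context = "\n".join(retrieved_docs)
    return (
        "You are a Pune travel guide AI.\n"
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question:\n{question}"
    )

docs = ["[PLACE] Shaniwar Wada ...", "[PLACE] Aga Khan Palace ..."]
prompt = build_prompt("Tell me famous places to visit in Pune", docs)
print(prompt.startswith("You are a Pune travel guide AI."))  # True
```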

&lt;p&gt;🔹 &lt;strong&gt;Step 3: Make it Interactive with Streamlit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We can upgrade this to a fully interactive web app (app.py):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import streamlit as st
from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_ollama import OllamaLLM


st.title("Pune Grand Tour AI")
st.write("Ask about Pune's forts, monuments, and travel tips!")


@st.cache_resource
def load_vectorstore():
    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
    return FAISS.load_local("../embeddings/pune_faiss", embeddings, allow_dangerous_deserialization=True)


@st.cache_resource
def load_llm():
    return OllamaLLM(model="llama3")


vectordb = load_vectorstore()
llm = load_llm()


question = st.text_input("Ask a question about Pune:")


if question:
    docs = vectordb.similarity_search(question, k=3)
    if not docs:
        st.warning("No documents found!")
    else:
        context = "\n".join([d.page_content for d in docs])
        prompt = f"You are a Pune travel guide AI.\n\nContext:\n{context}\n\nQuestion:\n{question}"
        response = llm.invoke(prompt)
        st.subheader("Retrieved Context")
        st.text(context[:500] + ("..." if len(context) &amp;gt; 500 else ""))
        st.subheader("AI Answer")
        st.write(response)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip install streamlit&lt;/code&gt;&lt;br&gt;
&lt;code&gt;streamlit run app.py&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Result:&lt;br&gt;
A browser UI opens where you can ask any Pune-related question, and the AI responds interactively.&lt;/p&gt;

&lt;p&gt;🔹 &lt;strong&gt;Key Benefits of this POC&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fully Local — No cloud or Docker dependency.&lt;/li&gt;
&lt;li&gt;Fast Retrieval — FAISS provides instant similarity search.&lt;/li&gt;
&lt;li&gt;Context-aware AI — Ollama LLM answers based on curated Pune knowledge.&lt;/li&gt;
&lt;li&gt;Expandable — Add more documents, images, or travel tips.&lt;/li&gt;
&lt;li&gt;Interactive UI — Streamlit allows anyone to use the AI easily.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔹 &lt;strong&gt;Common Issues &amp;amp; Fixes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdaw806j78z6ittsvn5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdaw806j78z6ittsvn5z.png" alt=" " width="800" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔹 &lt;strong&gt;Use Cases&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Local city guide AI for tourism apps&lt;/li&gt;
&lt;li&gt;Educational assistant for geography/history lessons&lt;/li&gt;
&lt;li&gt;Personal knowledge assistant for any curated dataset&lt;/li&gt;
&lt;li&gt;Prototype for RAG (Retrieval-Augmented Generation) projects&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔹 &lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
With FAISS + Ollama LLM + Streamlit, you can build fast, local, context-aware AI assistants without relying on cloud services or Docker.&lt;/p&gt;

&lt;p&gt;This Pune AI POC demonstrates how a specialized knowledge base can power a chatbot capable of giving accurate, context-specific answers.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>rag</category>
    </item>
    <item>
      <title>Building a Containerized GenAI Chatbot with Docker, Ollama, FastAPI &amp; ChromaDB</title>
      <dc:creator>shailendra khade</dc:creator>
      <pubDate>Tue, 03 Feb 2026 15:17:03 +0000</pubDate>
      <link>https://forem.com/shailendra_khade_df763b45/building-a-containerized-genai-chatbot-with-docker-ollama-fastapi-chromadb-36c0</link>
      <guid>https://forem.com/shailendra_khade_df763b45/building-a-containerized-genai-chatbot-with-docker-ollama-fastapi-chromadb-36c0</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Modern AI systems are not just Python scripts — they are distributed systems involving:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LLM engines&lt;/li&gt;
&lt;li&gt;APIs&lt;/li&gt;
&lt;li&gt;UI applications&lt;/li&gt;
&lt;li&gt;Vector databases&lt;/li&gt;
&lt;li&gt;Container orchestration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this blog, I share how I built a GenAI Chatbot using Docker-based microservices, similar to real-world AI platforms, and the real DevOps issues I faced while building it.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;System Architecture (AI Architect View)&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User (Browser)
     |
     v
[ Streamlit UI ]
     |
     v
[ FastAPI Backend ]
     |
     +----&amp;gt; [ Ollama LLM Engine ]
     |
     +----&amp;gt; [ ChromaDB Vector Database ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why Microservices?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Separating AI components into services improves scalability, maintainability, and fault isolation.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Project Structure&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;genai-docker-project/
│
├── backend/            # FastAPI + AI logic
├── ui/                 # Streamlit UI
├── docker-compose.yml
└── README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Docker Compose (Core of System)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3.8"

services:

  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"

  backend:
    build: ./backend
    ports:
      - "8000:8000"
    depends_on:
      - ollama
      - chroma

  chroma:
    image: chromadb/chroma
    ports:
      - "8001:8000"

  ui:
    build: ./ui
    ports:
      - "8501:8501"
    depends_on:
      - backend

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
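&lt;p&gt;One hardening step worth considering (my addition, not part of the original file): &lt;code&gt;depends_on&lt;/code&gt; only orders container startup, it does not wait for a service to be ready. A healthcheck plus &lt;code&gt;condition: service_healthy&lt;/code&gt; closes that gap, assuming a shell and the &lt;code&gt;ollama&lt;/code&gt; CLI are available in the image:&lt;/p&gt;

```yaml
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    healthcheck:
      # Succeeds only once the Ollama server answers CLI requests
      test: ["CMD-SHELL", "ollama list || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 5

  backend:
    build: ./backend
    depends_on:
      ollama:
        condition: service_healthy
```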






&lt;p&gt;&lt;strong&gt;Backend (FastAPI + Ollama Integration)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from fastapi import FastAPI
import requests

app = FastAPI()

# "ollama" is the Compose service name, resolved via Docker's internal DNS
OLLAMA_URL = "http://ollama:11434/api/generate"

@app.post("/ask")
def ask_ai(question: str):
    # stream=False makes Ollama return a single JSON object
    payload = {"model": "mistral", "prompt": question, "stream": False}
    # Local LLM generation can be slow, so allow a generous timeout
    response = requests.post(OLLAMA_URL, json=payload, timeout=300)
    return response.json()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
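&lt;p&gt;ChromaDB appears in the compose file but not in the backend code above. A sketch of how the backend could use it for retrieval (my illustration, assuming the official &lt;code&gt;chromadb&lt;/code&gt; Python client; not part of the original project):&lt;/p&gt;

```python
import chromadb  # assumed installed in the backend image

# "chroma" is the Compose service name; inside the network it listens on 8000
client = chromadb.HttpClient(host="chroma", port=8000)
collection = client.get_or_create_collection("docs")

# Store a document, then retrieve the most relevant one for a question
collection.add(ids=["doc-1"], documents=["Docker isolates services into containers."])
results = collection.query(query_texts=["What does Docker do?"], n_results=1)
print(results["documents"][0][0])
```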






&lt;p&gt;&lt;strong&gt;UI (Streamlit)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import streamlit as st
import requests

st.title("GenAI Chatbot")

question = st.text_input("Ask a question:")

if st.button("Ask AI"):
    # "backend" is the Compose service name for the FastAPI container
    res = requests.post("http://backend:8000/ask", params={"question": question}, timeout=300)
    st.write(res.json())

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft33fi0huhpxlhajtj0pp.png" alt=" " width="800" height="331"&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real Errors &amp;amp; DevOps Solutions&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Error 1: Docker Permission Denied&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;permission denied while trying to connect to the Docker daemon socket&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Root Cause&lt;/strong&gt;&lt;br&gt;
User was not part of the docker group.&lt;br&gt;
&lt;strong&gt;Solution&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;sudo usermod -aG docker $USER&lt;br&gt;
newgrp docker&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error 2: Port Already in Use (11434)&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;failed to bind host port 0.0.0.0:11434: address already in use&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Root Cause&lt;/strong&gt;&lt;br&gt;
Ollama was already running on the host via Snap:&lt;br&gt;
&lt;code&gt;ps -ef | grep ollama&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;/snap/ollama/.../ollama serve&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Solution:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;sudo snap stop ollama&lt;br&gt;
sudo snap disable ollama&lt;/code&gt;&lt;br&gt;
(or change the Docker port mapping):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ports:
  - "21434:11434"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Error 3: Model Not Found in Ollama&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;"model 'mistral' not found"&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Root Cause&lt;/strong&gt;&lt;br&gt;
Ollama runtime was running, but model was not downloaded inside the container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;docker exec -it genai-docker-project-ollama-1 bash&lt;/code&gt;&lt;br&gt;
&lt;code&gt;ollama pull mistral&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error 4: Container Networking Issues&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Problem&lt;/strong&gt;&lt;br&gt;
Backend could not connect to Ollama.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Root Cause&lt;/strong&gt;&lt;br&gt;
Using localhost instead of container DNS name.&lt;br&gt;
&lt;strong&gt;Fix&lt;/strong&gt;&lt;br&gt;
OLLAMA_URL = "&lt;a href="http://ollama:11434/api/generate" rel="noopener noreferrer"&gt;http://ollama:11434/api/generate&lt;/a&gt;"&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Key Learnings (AI + DevOps)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI systems are distributed systems.&lt;/li&gt;
&lt;li&gt;Docker is essential for reproducible ML environments.&lt;/li&gt;
&lt;li&gt;LLM platforms require careful networking and resource management.&lt;/li&gt;
&lt;li&gt;MLOps is the bridge between DevOps and AI.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>ollama</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Autonomous DevOps AI Agent (FastAPI + Ollama)</title>
      <dc:creator>shailendra khade</dc:creator>
      <pubDate>Thu, 22 Jan 2026 16:05:39 +0000</pubDate>
      <link>https://forem.com/shailendra_khade_df763b45/autonomous-devops-ai-agent-fastapi-ollama-9ji</link>
      <guid>https://forem.com/shailendra_khade_df763b45/autonomous-devops-ai-agent-fastapi-ollama-9ji</guid>
      <description>&lt;p&gt;&lt;strong&gt;Building an Autonomous DevOps AI Agent using FastAPI and Ollama&lt;/strong&gt;&lt;br&gt;
 &lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DevOps is no longer just about CI/CD pipelines and monitoring dashboards.&lt;br&gt;
With the rise of Large Language Models (LLMs), DevOps is evolving into AI-powered automation (AIOps).&lt;/p&gt;

&lt;p&gt;In this project, I built an Autonomous DevOps AI Agent using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ollama (Local LLM – Mistral)&lt;/li&gt;
&lt;li&gt;FastAPI (API Layer)&lt;/li&gt;
&lt;li&gt;Linux System Tools&lt;/li&gt;
&lt;li&gt;Git &amp;amp; Deployment Scripts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This AI agent can analyze system metrics, interact with Git repositories, trigger deployments, and reason about DevOps tasks using an LLM.&lt;/p&gt;

&lt;p&gt;This post explains the architecture, implementation, challenges, and learnings.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;What is an Autonomous DevOps AI Agent?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An Autonomous DevOps AI Agent is a system that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Collects infrastructure data (CPU, RAM, logs, etc.)&lt;/li&gt;
&lt;li&gt;Executes DevOps operations (Git, scripts, deployments)&lt;/li&gt;
&lt;li&gt;Uses an LLM to analyze and respond intelligently&lt;/li&gt;
&lt;li&gt;Exposes everything via REST APIs&lt;/li&gt;
&lt;/ul&gt;
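&lt;p&gt;The loop these bullets describe can be sketched in a few lines of plain Python (illustrative only; the LLM call is stubbed here, the real one lives in &lt;code&gt;agent.py&lt;/code&gt; below):&lt;/p&gt;

```python
def collect_metrics():
    # Stand-in for system.py (the real version uses psutil)
    return {"cpu": 85, "memory": 40}

def llm_decide(metrics):
    # Stand-in for the Ollama call in agent.py
    if metrics["cpu"] > 80:
        return "CPU high: inspect top processes and consider scaling out"
    return "System healthy: no action needed"

def agent_step():
    # One collect -> reason -> report cycle of the agent
    metrics = collect_metrics()
    decision = llm_decide(metrics)
    return {"metrics": metrics, "decision": decision}

result = agent_step()
print(result["decision"])  # CPU high: inspect top processes and consider scaling out
```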

&lt;p&gt;In simple words:&lt;/p&gt;

&lt;p&gt;AI + DevOps + Automation = Autonomous Agent&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydsnx69ufoif85j6555e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydsnx69ufoif85j6555e.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Components:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;FastAPI – API Gateway and Controller&lt;/li&gt;
&lt;li&gt;Ollama – Local LLM runtime (Mistral model)&lt;/li&gt;
&lt;li&gt;Linux Tools – System metrics and logs&lt;/li&gt;
&lt;li&gt;Git &amp;amp; Shell Scripts – DevOps automation&lt;/li&gt;
&lt;/ul&gt;



&lt;p&gt;&lt;strong&gt;Step 1: Install Ollama and Model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Install Ollama on Linux:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -fsSL https://ollama.com/install.sh | sh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Pull an LLM model:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ollama pull mistral&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Verify:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ollama list&lt;/code&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Setup Python Environment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Create virtual environment:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;python3 -m venv ai-env&lt;br&gt;
source ai-env/bin/activate&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Install dependencies:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip install fastapi uvicorn requests psutil&lt;/code&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Project Structure&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;devops-ai-agent/
│
├── agent.py        # AI reasoning (Ollama)
├── system.py       # System metrics
├── git_ops.py      # Git automation
├── deploy.py       # Deployment scripts
└── main.py         # FastAPI entry point
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Core Implementation&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;system.py – System Metrics&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import psutil

def get_system_info():
    return {
        "cpu": psutil.cpu_percent(),
        "memory": psutil.virtual_memory().percent,
        "disk": psutil.disk_usage("/").percent
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;agent.py – AI Brain (Ollama Integration)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def ai_think(prompt):
    payload = {
        "model": "mistral",
        "prompt": prompt,
        "stream": False
    }
    # Local generation can be slow; fail fast on HTTP errors
    response = requests.post(OLLAMA_URL, json=payload, timeout=300)
    response.raise_for_status()
    return response.json()["response"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;git_ops.py – Git Automation&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import subprocess

def clone_repo(repo_url):
    # Pass arguments as a list to avoid shell injection via repo_url
    subprocess.run(["git", "clone", repo_url], check=True)
    return {"status": "repo cloned", "repo": repo_url}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;deploy.py – Deployment Trigger&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import subprocess

def deploy_app():
    # Placeholder: replace the echo with a real deployment script or CI trigger
    subprocess.run(["echo", "Deploying application..."], check=True)
    return {"status": "deployment triggered"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;main.py – FastAPI Autonomous Agent&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from fastapi import FastAPI
from system import get_system_info
from git_ops import clone_repo
from deploy import deploy_app
from agent import ai_think

app = FastAPI()

@app.get("/")
def home():
    return {"message": "Autonomous DevOps AI Agent Running"}

@app.get("/system")
def system():
    return get_system_info()

@app.post("/chat")
def chat(prompt: str):
    return {"ai_response": ai_think(prompt)}

@app.post("/git/clone")
def git_clone(repo_url: str):
    return clone_repo(repo_url)

@app.post("/deploy")
def deploy():
    return deploy_app()

@app.post("/agent")
def autonomous_agent(task: str):
    reasoning = ai_think(f"You are DevOps AI. Task: {task}")
    return {"task": task, "ai_decision": reasoning}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Step 5: Run the Agent&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;uvicorn main:app --host 0.0.0.0 --port 7000&lt;/code&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Example Use Cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1. Check system health&lt;br&gt;
&lt;code&gt;GET /system&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;2. Ask the AI about DevOps&lt;br&gt;
&lt;code&gt;POST /chat?prompt=Explain%20Kubernetes%20architecture&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;3. Clone a Git repository&lt;br&gt;
&lt;code&gt;POST /git/clone?repo_url=https://github.com/user/repo.git&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;4. Autonomous DevOps task&lt;br&gt;
&lt;code&gt;POST /agent?task=Check%20server%20health%20and%20suggest%20actions&lt;/code&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0z2yvhsqxllui5qk2vs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0z2yvhsqxllui5qk2vs.png" alt=" " width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Key Learnings&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ollama can act as a local LLM inference server.&lt;/li&gt;
&lt;li&gt;FastAPI is ideal for building AI-powered microservices.&lt;/li&gt;
&lt;li&gt;DevOps automation can be enhanced using LLM reasoning.&lt;/li&gt;
&lt;li&gt;This architecture is similar to real-world AIOps systems.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>ai</category>
      <category>api</category>
      <category>python</category>
    </item>
    <item>
      <title>Ollama + FastAPI API, Building My Own AI API Using Ollama and FastAPI on a Linux VM</title>
      <dc:creator>shailendra khade</dc:creator>
      <pubDate>Thu, 22 Jan 2026 10:54:23 +0000</pubDate>
      <link>https://forem.com/shailendra_khade_df763b45/ollama-fastapi-api-building-my-own-ai-api-using-ollama-and-fastapi-on-a-linux-vm-5a40</link>
      <guid>https://forem.com/shailendra_khade_df763b45/ollama-fastapi-api-building-my-own-ai-api-using-ollama-and-fastapi-on-a-linux-vm-5a40</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Large Language Models (LLMs) like ChatGPT are usually accessed via cloud APIs.&lt;br&gt;
But what if we could run our own AI model locally and expose it as an API?&lt;/p&gt;

&lt;p&gt;In this project, I built a custom AI API using Ollama + FastAPI on a Linux virtual machine.&lt;br&gt;
This API exposes LLM capabilities via REST endpoints, similar to how real-world AI microservices work.&lt;/p&gt;

&lt;p&gt;This post covers the architecture, implementation, challenges, and learnings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Ollama?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ollama is a tool that lets us run LLMs such as Mistral, Llama, and Gemma locally.&lt;/p&gt;

&lt;p&gt;It provides a local API endpoint:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http://localhost:11434&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We can wrap this with FastAPI to build our own AI service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi83c63ovbo0ol74jhib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi83c63ovbo0ol74jhib.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install Ollama on Linux VM&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;curl -fsSL https://ollama.com/install.sh | sh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Verify installation:&lt;br&gt;
&lt;code&gt;ollama --version&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Pull an LLM Model&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;ollama pull mistral&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Check available models:&lt;br&gt;
&lt;code&gt;ollama list&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Setup Python Environment&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;python3 -m venv ai-env&lt;br&gt;
source ai-env/bin/activate&lt;br&gt;
pip install fastapi uvicorn requests&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Build Ollama API using FastAPI&lt;/strong&gt;&lt;br&gt;
Create file &lt;code&gt;ollama_api.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from fastapi import FastAPI
import requests

app = FastAPI()

OLLAMA_URL = "http://localhost:11434/api/generate"

@app.get("/")
def home():
    return {"message": "Ollama AI API is running"}

@app.get("/health")
def health():
    return {"status": "UP", "model": "mistral"}

@app.post("/chat")
def chat(prompt: str):
    payload = {
        "model": "mistral",
        "prompt": prompt,
        "stream": False
    }
    # Local generation can be slow, so set a generous timeout
    response = requests.post(OLLAMA_URL, json=payload, timeout=300)
    return response.json()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 5: Run the API Server&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;uvicorn ollama_api:app --host 0.0.0.0 --port 9000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Test the AI API&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Test with curl&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;curl -X POST "http://localhost:9000/chat?prompt=Explain%20DevOps"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test from Host Machine&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;curl -X POST "http://&amp;lt;VM-IP&amp;gt;:9000/chat?prompt=explain%20AI"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rhpwisu11utbkviaem2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rhpwisu11utbkviaem2.png" alt=" " width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges Faced&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;1 Networking Issues in VM&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;0.0.0.0&lt;/code&gt; is a bind address, not something you can browse to.&lt;/li&gt;
&lt;li&gt;The API had to be accessed via the VM's actual IP address.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;2 HTTPS vs HTTP&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Browser attempted HTTPS while API was running on HTTP.&lt;/li&gt;
&lt;li&gt;Solved by explicitly using HTTP.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;3 Python PEP 668 Error&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;System Python was protected.&lt;/li&gt;
&lt;li&gt;Solved using Python virtual environment (venv).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Learnings&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ollama can be used to run LLMs locally.&lt;/li&gt;
&lt;li&gt;FastAPI is a great framework to expose AI models as microservices.&lt;/li&gt;
&lt;li&gt;Virtual environments are essential in modern Linux systems.&lt;/li&gt;
&lt;li&gt;Building APIs on VMs helps understand real DevOps workflows.&lt;/li&gt;
&lt;li&gt;This model-behind-an-API architecture mirrors how production AI services are structured.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>api</category>
      <category>devops</category>
    </item>
    <item>
      <title>Building My First REST API on Linux VM using FastAPI (with Real DevOps Troubleshooting)</title>
      <dc:creator>shailendra khade</dc:creator>
      <pubDate>Thu, 22 Jan 2026 08:10:13 +0000</pubDate>
      <link>https://forem.com/shailendra_khade_df763b45/building-my-first-rest-api-on-linux-vm-using-fastapi-with-real-devops-troubleshooting-5dnm</link>
      <guid>https://forem.com/shailendra_khade_df763b45/building-my-first-rest-api-on-linux-vm-using-fastapi-with-real-devops-troubleshooting-5dnm</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As a DevOps &amp;amp; AI enthusiast, I wanted to understand how APIs actually work at the system level — not just theory, but hands-on on a Linux VM.&lt;/p&gt;

&lt;p&gt;So I decided to build a simple REST API using FastAPI on a Linux virtual machine and expose it to my host system.&lt;br&gt;
During this journey, I faced real-world networking and Python environment issues — and solved them like a DevOps engineer.&lt;/p&gt;

&lt;p&gt;This post documents my complete journey, steps, and learnings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is an API?&lt;/strong&gt;&lt;br&gt;
An API (Application Programming Interface) allows applications or users to communicate with a server using HTTP requests.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Client → sends request&lt;/li&gt;
&lt;li&gt;Server → processes logic&lt;/li&gt;
&lt;li&gt;API → returns JSON response&lt;/li&gt;
&lt;/ul&gt;
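&lt;p&gt;The request/response cycle above can be sketched end to end with nothing but the Python standard library (a toy server for illustration, not FastAPI; the names and payload are made up):&lt;/p&gt;

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Server processes the request and returns a JSON response
        body = json.dumps({"message": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client sends a request and decodes the JSON response
with urlopen(f"http://127.0.0.1:{server.server_address[1]}/") as resp:
    data = json.load(resp)

server.shutdown()
print(data)  # {'message': 'hello'}
```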

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa7ay8k40lhwfuqhpb1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa7ay8k40lhwfuqhpb1u.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simple Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fwpk1wqflvdwbvx0e72.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fwpk1wqflvdwbvx0e72.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Setting up FastAPI on Linux VM&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Install Python and pip&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;sudo apt update&lt;br&gt;
sudo apt install python3 python3-pip -y&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Create Virtual Environment (Best Practice)&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;python3 -m venv api-env&lt;br&gt;
source api-env/bin/activate&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install FastAPI and Uvicorn&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;pip install fastapi uvicorn&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Create a Simple API&lt;/strong&gt;&lt;br&gt;
Create &lt;code&gt;main.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def home():
    return {"message": "Hello from Linux VM API"}

@app.get("/health")
def health():
    return {"status": "UP"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Run the API Server&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;uvicorn main:app --host 0.0.0.0 --port 8000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Test API&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;From VM terminal&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;curl http://localhost:8000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Output:&lt;br&gt;
{"message":"Hello from Linux VM API"}&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From Host Machine (Desktop Browser)&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;http://192.168.204.128:8000/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3im1emwya5ipjugfsr66.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3im1emwya5ipjugfsr66.png" alt=" " width="626" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ig9m3valtyoxtcjw9cd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ig9m3valtyoxtcjw9cd.png" alt=" " width="588" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhj0ty4ct759r3l21v3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhj0ty4ct759r3l21v3v.png" alt=" " width="621" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real Issues I Faced (and Solved)&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1. SSL / HTTPS Error&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Problem:&lt;br&gt;
The browser tried HTTPS but the API was served over HTTP.&lt;/p&gt;

&lt;p&gt;Solution:&lt;br&gt;
Use &lt;code&gt;http://&lt;/code&gt; explicitly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. &lt;code&gt;0.0.0.0&lt;/code&gt; Access Issue&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Problem:&lt;br&gt;
Tried accessing the API using &lt;code&gt;0.0.0.0&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Reality:&lt;br&gt;
&lt;code&gt;0.0.0.0&lt;/code&gt; is a bind address, not a client address.&lt;/p&gt;

&lt;p&gt;Solution:&lt;br&gt;
Use the VM's IP or &lt;code&gt;localhost&lt;/code&gt;.&lt;/p&gt;
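&lt;p&gt;The difference is easy to demonstrate with raw sockets (a minimal standard-library sketch; the function name is illustrative). The server binds to &lt;code&gt;0.0.0.0&lt;/code&gt;, but the client must dial a concrete address such as &lt;code&gt;127.0.0.1&lt;/code&gt; or the VM's IP:&lt;/p&gt;

```python
import socket

def demo_bind_vs_connect() -> bytes:
    # Server side: binding to 0.0.0.0 means "listen on ALL interfaces".
    server = socket.socket()
    server.bind(("0.0.0.0", 0))          # port 0 picks a free port
    server.listen(1)
    port = server.getsockname()[1]

    # Client side: 0.0.0.0 is not a destination, so we dial
    # loopback here (or the VM's IP from another machine).
    client = socket.create_connection(("127.0.0.1", port))
    conn, _ = server.accept()
    conn.sendall(b"ok")
    data = client.recv(2)

    for s in (conn, client, server):
        s.close()
    return data

print(demo_bind_vs_connect())  # b'ok'
```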

&lt;p&gt;&lt;strong&gt;3. Python &lt;code&gt;PEP 668&lt;/code&gt; Error&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;externally-managed-environment&lt;/code&gt;&lt;br&gt;
Reason:&lt;br&gt;
System Python is protected in modern Linux.&lt;/p&gt;

&lt;p&gt;Solution:&lt;br&gt;
Use virtual environment (venv).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Learnings&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;0.0.0.0 is for server binding, not browser access.&lt;/li&gt;
&lt;li&gt;Always use virtual environments in Linux.&lt;/li&gt;
&lt;li&gt;Networking issues are common in VM-based setups.&lt;/li&gt;
&lt;li&gt;FastAPI is powerful and developer-friendly.&lt;/li&gt;
&lt;li&gt;Real DevOps work is about debugging, not just coding.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>api</category>
      <category>linux</category>
      <category>python</category>
      <category>ai</category>
    </item>
    <item>
      <title>Linux for Beginners: A Clear, Simple, and Practical Introduction</title>
      <dc:creator>shailendra khade</dc:creator>
      <pubDate>Tue, 20 Jan 2026 16:12:19 +0000</pubDate>
      <link>https://forem.com/shailendra_khade_df763b45/linux-for-beginners-a-clear-simple-and-practical-introduction-f8g</link>
      <guid>https://forem.com/shailendra_khade_df763b45/linux-for-beginners-a-clear-simple-and-practical-introduction-f8g</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Linux is one of the most powerful and widely used operating systems in the world.&lt;br&gt;
From servers, cloud platforms, DevOps pipelines, and cybersecurity to embedded systems — Linux is everywhere.&lt;/p&gt;

&lt;p&gt;In this article, we will understand Linux in the simplest way possible:&lt;br&gt;
What it is, how it works, and why it matters for beginners.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Linux?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Linux is an open-source operating system based on the Unix architecture.&lt;br&gt;
Being open-source means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Its code is free to view, modify, and distribute&lt;/li&gt;
&lt;li&gt;Anyone can contribute&lt;/li&gt;
&lt;li&gt;Multiple distributions (Ubuntu, RedHat, CentOS, Fedora, etc.) exist&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Linux controls the system resources like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU&lt;/li&gt;
&lt;li&gt;Memory&lt;/li&gt;
&lt;li&gt;Storage&lt;/li&gt;
&lt;li&gt;Processes&lt;/li&gt;
&lt;li&gt;Network&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why is Linux Important?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Linux powers over 90% of cloud servers and is the backbone of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS, Azure, GCP&lt;/li&gt;
&lt;li&gt;Kubernetes &amp;amp; Docker&lt;/li&gt;
&lt;li&gt;DevOps CI/CD&lt;/li&gt;
&lt;li&gt;Cybersecurity &amp;amp; Ethical hacking&lt;/li&gt;
&lt;li&gt;High-performance computing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want a career in Cloud, DevOps, SRE, Platform Engineering, or Cybersecurity — &lt;strong&gt;Linux is mandatory&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Components of Linux&lt;/strong&gt;&lt;br&gt;
1️⃣ Kernel&lt;/p&gt;

&lt;p&gt;The core of Linux.&lt;br&gt;
It manages hardware, processes, memory, and system resources.&lt;/p&gt;

&lt;p&gt;2️⃣ Shell&lt;/p&gt;

&lt;p&gt;A command-line interface that interacts with the OS.&lt;br&gt;
Popular shells:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bash&lt;/li&gt;
&lt;li&gt;Zsh&lt;/li&gt;
&lt;li&gt;C shell (csh)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3️⃣ File System&lt;/p&gt;

&lt;p&gt;Linux follows a hierarchical structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/
├── bin
├── etc
├── home
├── var
└── usr

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4️⃣ Package Manager&lt;/p&gt;

&lt;p&gt;Helps install, update, and remove software.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apt&lt;/code&gt; → Ubuntu/Debian&lt;br&gt;
&lt;code&gt;yum/dnf&lt;/code&gt; → RHEL/CentOS/Fedora&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Basic Linux Commands for Beginners&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here are the most essential commands:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;List files&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ls&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Change directory&lt;/td&gt;
&lt;td&gt;&lt;code&gt;cd&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Create directory&lt;/td&gt;
&lt;td&gt;&lt;code&gt;mkdir&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;View file content&lt;/td&gt;
&lt;td&gt;&lt;code&gt;cat&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Check current path&lt;/td&gt;
&lt;td&gt;&lt;code&gt;pwd&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Copy file&lt;/td&gt;
&lt;td&gt;&lt;code&gt;cp source dest&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Move file&lt;/td&gt;
&lt;td&gt;&lt;code&gt;mv source dest&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Remove file&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rm&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;System info&lt;/td&gt;
&lt;td&gt;&lt;code&gt;uname -a&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;View running processes&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ps aux&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
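&lt;p&gt;These commands also compose well with scripting. As a small sketch (assumes a Unix-like system), the same &lt;code&gt;ls&lt;/code&gt; from the table can be driven from Python via &lt;code&gt;subprocess&lt;/code&gt;:&lt;/p&gt;

```python
import pathlib
import subprocess
import tempfile

# Create a file in a scratch directory, then list the directory
# with `ls`, exactly as you would type it at the shell.
with tempfile.TemporaryDirectory() as d:
    (pathlib.Path(d) / "notes.txt").write_text("hello")
    result = subprocess.run(["ls", d], capture_output=True, text=True)
    print(result.stdout.strip())  # notes.txt
```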

&lt;p&gt;&lt;strong&gt;Why Developers Prefer Linux&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stable&lt;/li&gt;
&lt;li&gt;Secure&lt;/li&gt;
&lt;li&gt;Highly customizable&lt;/li&gt;
&lt;li&gt;Better performance in servers&lt;/li&gt;
&lt;li&gt;Free and open-source&lt;/li&gt;
&lt;li&gt;Strong community support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Where to Practice Linux?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ubuntu (VMware/VirtualBox)&lt;/li&gt;
&lt;li&gt;WSL (Windows Subsystem for Linux)&lt;/li&gt;
&lt;li&gt;Cloud providers (AWS EC2, GCP VM)&lt;/li&gt;
&lt;li&gt;Online terminals such as Killercoda or Webminal (Katacoda has been discontinued)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Linux is not just an operating system — it is a foundation for modern technology.&lt;br&gt;
Learning Linux opens doors to Cloud, DevOps, SRE, Automation, and more.&lt;/p&gt;

&lt;p&gt;Start small, practice commands daily, and explore the power of open-source.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
