<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Metta Surendhar</title>
    <description>The latest articles on Forem by Metta Surendhar (@mettasurendhar).</description>
    <link>https://forem.com/mettasurendhar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1852199%2F97b37b94-31bd-4d69-a0c0-95dc8f6b7ab3.png</url>
      <title>Forem: Metta Surendhar</title>
      <link>https://forem.com/mettasurendhar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mettasurendhar"/>
    <language>en</language>
    <item>
      <title>Tired of Writing the Same Tests Again and Again? Meet Keploy</title>
      <dc:creator>Metta Surendhar</dc:creator>
      <pubDate>Tue, 09 Sep 2025 21:26:27 +0000</pubDate>
      <link>https://forem.com/mettasurendhar/stop-writing-repetitive-tests-how-keploy-generates-them-automatically-cim</link>
      <guid>https://forem.com/mettasurendhar/stop-writing-repetitive-tests-how-keploy-generates-them-automatically-cim</guid>
      <description>&lt;p&gt;When I first started learning about APIs and CRUD operations, I tested everything manually. I would run the application, perform an action in the UI or call an API, and then check if it behaved correctly. At that point, I didn’t even know what unit testing or or API tests was.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpbmoot8et012fsi4hpz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpbmoot8et012fsi4hpz.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Later, I realized that writing test cases is crucial. They help catch bugs early, reduce the risk of regressions, and give confidence when making changes. However, there was one big problem: &lt;strong&gt;writing tests felt repetitive and time-consuming&lt;/strong&gt;. Instead of focusing on building features, I was spending hours writing boilerplate code for tests.&lt;/p&gt;

&lt;p&gt;That’s when I came across &lt;strong&gt;Keploy&lt;/strong&gt;, a tool that &lt;strong&gt;automatically records API calls and generates test cases and data mocks&lt;/strong&gt;. This means you can test your application without writing traditional test scripts. &lt;/p&gt;




&lt;h2&gt;
  
  
  Why Automated Testing Matters
&lt;/h2&gt;

&lt;p&gt;Before diving into code, let’s briefly revisit why automated tests — whether &lt;strong&gt;unit tests&lt;/strong&gt; or &lt;strong&gt;API tests&lt;/strong&gt; — are important:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Catch bugs early&lt;/strong&gt; – Problems are detected before deployment.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prevent regressions&lt;/strong&gt; – Ensures that new changes don’t break existing features.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improve confidence&lt;/strong&gt; – Developers can refactor code without fear.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Save time in the long run&lt;/strong&gt; – Manual testing becomes unnecessary for routine checks.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The only challenge? Writing and maintaining tests manually is tedious. This is where &lt;strong&gt;Keploy&lt;/strong&gt; steps in.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is Keploy?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://keploy.io/" rel="noopener noreferrer"&gt;Keploy&lt;/a&gt; is an open-source testing toolkit designed primarily for &lt;strong&gt;API testing&lt;/strong&gt;. It helps you automate validation without writing traditional test scripts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Records API calls and responses while you use your application.
&lt;/li&gt;
&lt;li&gt;Automatically generates test cases in YAML format.
&lt;/li&gt;
&lt;li&gt;Creates mocks for external dependencies (databases, third-party APIs).
&lt;/li&gt;
&lt;li&gt;Provides a simple test mode to replay requests and validate responses.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of Keploy as a &lt;strong&gt;record-and-replay testing tool&lt;/strong&gt;. You interact with your app once, and Keploy creates reusable &lt;strong&gt;API-level tests&lt;/strong&gt; for you.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 1: Build the To-Do List App
&lt;/h2&gt;

&lt;p&gt;Before we dive into Keploy, let’s first build a simple To-Do List application. This app will serve as the foundation on which we’ll later generate automated test cases.&lt;/p&gt;

&lt;h3&gt;
  
  
  We’ll use:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Flask&lt;/strong&gt; → A lightweight Python web framework.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQLite&lt;/strong&gt; → A simple database to store tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flask-SQLAlchemy&lt;/strong&gt; → ORM (Object Relational Mapper) for managing database operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Jinja2 Templates&lt;/strong&gt; → To render HTML pages.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The app is a classic CRUD (Create, Read, Update, Delete) example, which makes it a perfect candidate for learning automated testing.&lt;/p&gt;




&lt;h3&gt;
  
  
  Features
&lt;/h3&gt;

&lt;p&gt;Our To-Do List app will support the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Add tasks&lt;/strong&gt;: Enter a task and save it into the database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;View tasks&lt;/strong&gt;: See all tasks with their creation time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Update tasks&lt;/strong&gt;: Edit an existing task’s content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Delete tasks&lt;/strong&gt;: Remove tasks you no longer need.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API endpoints&lt;/strong&gt;: Access the same functionality programmatically with REST APIs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This way, we’ll have both a web UI and a set of APIs for interaction.&lt;/p&gt;
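&lt;p&gt;To make those API endpoints concrete before we build the full app, here is a minimal, self-contained sketch of what the REST routes could look like. It uses an in-memory dict instead of the real SQLite store, and the handler names are illustrative, not the repo’s exact code:&lt;/p&gt;

```python
from itertools import count

from flask import Flask, jsonify, request

app = Flask(__name__)
TASKS = {}        # in-memory stand-in for the SQLite-backed Todo table
_ids = count(1)   # auto-incrementing ids, like the database would provide

@app.route("/api/tasks", methods=["POST"])
def create_task():
    task_id = next(_ids)
    TASKS[task_id] = request.get_json()["content"]
    return jsonify({"id": task_id, "content": TASKS[task_id]}), 201

@app.route("/api/tasks", methods=["GET"])
def list_tasks():
    return jsonify([{"id": i, "content": c} for i, c in TASKS.items()])

@app.route("/api/tasks/<int:task_id>", methods=["PUT"])
def update_task(task_id):
    TASKS[task_id] = request.get_json()["content"]
    return jsonify({"id": task_id, "content": TASKS[task_id]})

@app.route("/api/tasks/<int:task_id>", methods=["DELETE"])
def delete_task(task_id):
    TASKS.pop(task_id, None)
    return jsonify({"deleted": task_id})
```

&lt;p&gt;The real app swaps the dict for the SQLAlchemy model, but the request/response shapes stay the same.&lt;/p&gt;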




&lt;h3&gt;
  
  
  Project Structure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;D:.
├─ static
│  └─ css
├─ templates
├─ tests
└─ app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;app.py&lt;/code&gt; – Main Flask application containing:

&lt;ul&gt;
&lt;li&gt;Database model (Todo)&lt;/li&gt;
&lt;li&gt;UI routes (for web pages)&lt;/li&gt;
&lt;li&gt;API routes (for REST APIs)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;templates/&lt;/code&gt; – HTML templates used for rendering pages:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;base.html&lt;/code&gt; → Shared layout for all pages.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;index.html&lt;/code&gt; → Main page showing the task list.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;update.html&lt;/code&gt; → Page for updating an existing task.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;static/css/&lt;/code&gt; – Custom CSS for styling the UI.&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;tests/&lt;/code&gt; – Placeholder directory where automated test cases will be generated later using Keploy.&lt;/li&gt;

&lt;/ul&gt;




&lt;h3&gt;
  
  
  Database Model
&lt;/h3&gt;

&lt;p&gt;Inside app.py, we’ll define a simple Todo model using SQLAlchemy.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;id&lt;/code&gt;: Unique identifier for each task.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;content&lt;/code&gt;: The actual task description.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;date_created&lt;/code&gt;: Timestamp of when the task was added.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Task List UI
&lt;/h3&gt;

&lt;p&gt;Once you start the app and add some tasks, the home page (index.html) will display them in a neat table:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Column 1&lt;/em&gt; → Task description.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Column 2&lt;/em&gt; → Date created.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Column 3&lt;/em&gt; → Update/Delete actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the bottom, there’s a form input to quickly add new tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8ucmb42e2p4f3minwve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8ucmb42e2p4f3minwve.png" alt=" " width="800" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;✅ At this point, we’ve built a complete Flask To-Do List app with both UI and APIs. This will now serve as the base project for integrating Keploy and generating test cases automatically.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2: Run the Application
&lt;/h2&gt;

&lt;p&gt;You can clone the project from &lt;a href="https://github.com/MettaSurendhar/To-Do-List-Flask/tree/main" rel="noopener noreferrer"&gt;To-Do-List-Flask&lt;/a&gt; and follow the README to initialize and run the app.&lt;/p&gt;

&lt;p&gt;Install &lt;code&gt;virtualenv&lt;/code&gt; (optional; the commands below use Python’s built-in &lt;code&gt;venv&lt;/code&gt; module):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pip install virtualenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open a terminal in the project root directory and create a virtual environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ python -m venv .venv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then activate the virtual environment (on Windows):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ .venv\Scripts\activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then install the dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ (.venv) pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, start the web server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ (env) python app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The server will start on &lt;a href="http://127.0.0.1:8080/" rel="noopener noreferrer"&gt;http://127.0.0.1:8080/&lt;/a&gt; by default.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: Install Keploy
&lt;/h2&gt;

&lt;p&gt;If you’re on Windows, you’ll need WSL (Windows Subsystem for Linux).&lt;/p&gt;

&lt;p&gt;Initialize WSL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;C:\Users\Dell&amp;gt; wsl
unix@DESKTOP:/mnt/c/Users/Dell$ 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl --silent -O -L https://keploy.io/install.sh &amp;amp;&amp;amp; source install.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ keploy -v 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;👉 For Linux/macOS, follow the &lt;a href="https://keploy.io/docs/server/installation/" rel="noopener noreferrer"&gt;Keploy installation guide&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 4: Record with Keploy
&lt;/h2&gt;

&lt;p&gt;Now comes the fun part. Keploy has a record mode that listens to API calls while you use your application.&lt;/p&gt;

&lt;p&gt;Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;keploy record -c "python app.py"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwh1pwddtg4xow8urr9b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwh1pwddtg4xow8urr9b.png" alt=" " width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, perform some actions through both the UI and APIs.&lt;/p&gt;




&lt;h3&gt;
  
  
  UI Actions Captured
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a task&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0lduzkk2vthkgwiknjr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0lduzkk2vthkgwiknjr.png" alt=" " width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;View tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6owf29msi2qpdcfis98.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6owf29msi2qpdcfis98.png" alt=" " width="800" height="616"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update a task&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29gzp4srdrel4865apdv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29gzp4srdrel4865apdv.png" alt=" " width="800" height="435"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8snl2ztf8ntyzwpny67.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8snl2ztf8ntyzwpny67.png" alt=" " width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delete a task&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyilht6d37hdp30earzfr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyilht6d37hdp30earzfr.png" alt=" " width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terminal Output&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlkc7kgvtywz5oc89gva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnlkc7kgvtywz5oc89gva.png" alt=" " width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  API Actions Captured
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a task
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X POST http://localhost:8080/api/tasks \
  -H "Content-Type: application/json" \
  -d '{"content":"blog demo task"}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0ivdj0upvilh5hn7i6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0ivdj0upvilh5hn7i6v.png" alt=" " width="625" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update a task
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X PUT http://localhost:8080/api/tasks/1 \
  -H "Content-Type: application/json" \
  -d '{"content":"updated task"}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez5yaqv9uy2x1482f84g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez5yaqv9uy2x1482f84g.png" alt=" " width="571" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;View tasks
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X GET http://localhost:8080/api/task
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqzrg9xo7m242j4tut83.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqzrg9xo7m242j4tut83.png" alt=" " width="800" height="78"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delete a task
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X DELETE http://localhost:8080/api/tasks/1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foizbtm237jdkmmx9e207.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foizbtm237jdkmmx9e207.png" alt=" " width="650" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terminal Output&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhlh2egzsoijfrk4ib7w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhlh2egzsoijfrk4ib7w.png" alt=" " width="800" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each of these interactions gets stored as a test case in YAML format inside the &lt;code&gt;keploy-tests/&lt;/code&gt; folder.&lt;/p&gt;
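&lt;p&gt;For context, a recorded test case looks roughly like the sketch below. The exact schema depends on your Keploy version, so treat the field names and values here as illustrative rather than authoritative:&lt;/p&gt;

```yaml
# Approximate shape of one recorded Keploy test case (fields illustrative).
version: api.keploy.io/v1beta1
kind: Http
name: test-1
spec:
  req:
    method: POST
    url: /api/tasks
    header:
      Content-Type: application/json
    body: '{"content":"blog demo task"}'
  resp:
    status_code: 201
    body: '{"id":1,"content":"blog demo task"}'
```

&lt;p&gt;Because these are plain files, you can commit them to version control alongside your code.&lt;/p&gt;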


&lt;h2&gt;
  
  
  Step 5: Run Tests with Keploy
&lt;/h2&gt;

&lt;p&gt;Once tests are recorded, run them anytime with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ keploy test -c "python app.py"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Keploy will:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replay all recorded requests.&lt;/li&gt;
&lt;li&gt;Compare the actual responses with recorded ones.&lt;/li&gt;
&lt;li&gt;Generate a detailed test report.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures that your API behaves consistently over time.&lt;/p&gt;

&lt;p&gt;Generated Test Cases:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj2rk3kv5i97x48uz1yk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj2rk3kv5i97x48uz1yk.png" alt=" " width="259" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Generated Reports:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0zw58jozuwfagucm878.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0zw58jozuwfagucm878.png" alt=" " width="295" height="101"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Benefits of Using Keploy
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rybqrsj8mbdwhy17w5s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rybqrsj8mbdwhy17w5s.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Adopting Keploy in your development workflow brings several advantages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Automatic Test Generation&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No need to manually write repetitive test cases.
&lt;/li&gt;
&lt;li&gt;Keploy records your real API traffic and generates test cases in YAML format.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Keeps Tests Up-to-Date&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Whenever your API changes, just record again.
&lt;/li&gt;
&lt;li&gt;Test cases evolve with your application, reducing maintenance overhead.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data Mocks for External Services&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keploy automatically generates mocks for databases, third-party APIs, or external dependencies.
&lt;/li&gt;
&lt;li&gt;This allows tests to run reliably without depending on live external systems.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Language &amp;amp; Framework Agnostic&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Works with popular backend frameworks like Flask, Django, FastAPI, Spring Boot, Express.js, and more.
&lt;/li&gt;
&lt;li&gt;Flexible enough to integrate into diverse tech stacks.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;End-to-End Coverage&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Captures both UI-driven actions and API requests.
&lt;/li&gt;
&lt;li&gt;Provides comprehensive testing without extra setup.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;CI/CD Integration&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generated tests can be run in pipelines.
&lt;/li&gt;
&lt;li&gt;Ensures every deployment is validated with the same rigor as your local environment.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
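&lt;p&gt;Since the generated tests are plain files, wiring them into a pipeline is straightforward. A hypothetical GitHub Actions job is sketched below; the workflow names and steps are assumptions to adapt to your own setup, and the install script is the same one used in Step 3:&lt;/p&gt;

```yaml
# Illustrative CI job; adapt runner, Python version, and privileges
# (Keploy may need elevated permissions for eBPF on some setups).
name: api-tests
on: [push]

jobs:
  keploy-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      # Install Keploy with the same script used in Step 3
      - run: curl --silent -O -L https://keploy.io/install.sh && source install.sh
      # Replay the recorded test cases against the app
      - run: keploy test -c "python app.py"
```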




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Automated testing is critical for building robust, bug-free applications, but writing and maintaining test cases can often feel repetitive and time-consuming. This is where Keploy changes the game.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5ju42tefnvssq0p0ana.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5ju42tefnvssq0p0ana.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By automatically recording real-world &lt;strong&gt;API interactions&lt;/strong&gt; and generating tests with data mocks, Keploy ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your APIs behave consistently.
&lt;/li&gt;
&lt;li&gt;Tests evolve naturally with your application.
&lt;/li&gt;
&lt;li&gt;Development cycles become faster and more reliable.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this blog, we built a Flask To-Do List app and saw how easily Keploy can record interactions, generate API test cases, and validate our endpoints. The key takeaway is:&lt;/p&gt;

&lt;p&gt;👉 With &lt;a href="https://keploy.io/" rel="noopener noreferrer"&gt;Keploy&lt;/a&gt;, you record once and test forever.  &lt;/p&gt;

&lt;p&gt;If you’re working with Flask, Django, FastAPI, or any modern backend, I highly encourage you to give Keploy a try. It’s a huge productivity booster and ensures your applications remain reliable as they grow.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>testing</category>
      <category>opensource</category>
      <category>devrel</category>
    </item>
    <item>
      <title>Final Year, No Guarantee - What It's Really Like Looking for a Job</title>
      <dc:creator>Metta Surendhar</dc:creator>
      <pubDate>Fri, 05 Sep 2025 18:08:44 +0000</pubDate>
      <link>https://forem.com/mettasurendhar/the-honest-side-of-my-job-hunt-not-the-linkedin-version-l14</link>
      <guid>https://forem.com/mettasurendhar/the-honest-side-of-my-job-hunt-not-the-linkedin-version-l14</guid>
      <description>&lt;h2&gt;
  
  
  How tough is getting a job?
&lt;/h2&gt;

&lt;p&gt;How stressful is getting a job? How much preparation is needed? How much time should I spend? How much better should I become? Am I ready? Will I be suitable for this job? Should I apply or not?&lt;/p&gt;

&lt;p&gt;These are the questions running through my brain often. And honestly, I escape from them by procrastinating—scrolling through social media or distracting myself with other work instead of doing what’s needed.&lt;/p&gt;

&lt;p&gt;Yes, I know this isn’t the right way. But the truth is, I don’t really know what I should do or how I should handle it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;The irony?&lt;/strong&gt;&lt;/em&gt; I actually have good stories and experiences to share. I’m capable of competing. I’ve done the work needed for job seeking and building a career. All I need to do is put everything I’ve done over the past years together and prepare. But right now, it feels like a burden—or sometimes even useless.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Why?&lt;/strong&gt;&lt;/em&gt; Because I’m not getting opportunities in on-campus placements. Off-campus companies reject me right at the resume shortlisting round. And the job market itself isn’t great right now.&lt;/p&gt;

&lt;p&gt;Most companies seem to prefer experienced people. And even when there are fresher openings, many of them are purely for BE and BTech graduates.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where I Stand?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtqovudtqqtbzcbtx20m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtqovudtqqtbzcbtx20m.png" alt=" Where I Stand ? (cover image)" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;I’m not an Engineer&lt;/strong&gt;&lt;/em&gt;, but I study in one of the top engineering colleges in India: &lt;strong&gt;College of Engineering, Guindy (CEG), Anna University (AU)&lt;/strong&gt;. It’s a pioneer institution, built during the British era, with a long history and many achievements.&lt;/p&gt;

&lt;p&gt;Technically, I’m not called an “&lt;em&gt;engineer&lt;/em&gt;” because of my course. But the reality is—I follow almost the same syllabus, build the same skills, and work on similar projects as BE CS and BTech IT students. The only difference is the title.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;I’m not even a scientist&lt;/strong&gt;&lt;/em&gt;, though my degree is &lt;strong&gt;M.Sc. Information Technology&lt;/strong&gt;, an integrated course at CEG. Professionally, I don’t fit into the “&lt;em&gt;scientist&lt;/em&gt;” label either, since our course doesn’t involve research. Instead, it’s packed with the technical learning, tools, and hands-on practice that an engineer or IT professional would need.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;The tough part?&lt;/strong&gt;&lt;/em&gt; This course is managed by the Mathematics Department. So we don’t get CS scholars or experienced professors—only teaching fellows, most of whom aren’t strong enough to guide us properly.&lt;/p&gt;

&lt;p&gt;Because of this, our course doesn’t get proper on-campus placement opportunities. Most companies reject us immediately after seeing the course name. From the placement cell’s side, there’s also little to no support, even though we’ve been requesting it for years.&lt;/p&gt;

&lt;p&gt;The frustrating part is that our students are equally skilled, and some are even better than BE CS and BTech IT students in terms of projects and internships. Yet, we don’t get proper recognition or placements.&lt;/p&gt;

&lt;p&gt;With all this happening, I honestly don’t know how to face the situation. &lt;/p&gt;

&lt;p&gt;Have you ever felt like no matter how much effort you put in, the system just doesn’t see it?&lt;/p&gt;




&lt;h2&gt;
  
  
  My Journey So Far
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/metta-surendhar/" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncvsmzlmfcygg9zf99vx.png" alt=" My Journey So Far (cover image) " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From my &lt;strong&gt;&lt;em&gt;2nd year&lt;/em&gt;&lt;/strong&gt; onwards, I started taking small steps for the sake of my career and professional profile.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I created my LinkedIn account, began following students, alumni, professors, and tech pages.&lt;/li&gt;
&lt;li&gt;I started learning web development—first simple frontend projects, then functional pages. I built my portfolio and worked with a college team as a frontend developer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;During the &lt;strong&gt;&lt;em&gt;transition from 2nd to 3rd year&lt;/em&gt;&lt;/strong&gt;, I learned version control, handling packages, and frameworks. I moved into backend development, did a full-stack project, and took on more backend roles in different projects.&lt;/p&gt;

&lt;p&gt;By &lt;strong&gt;&lt;em&gt;3rd year&lt;/em&gt;&lt;/strong&gt;, I was deeply interested in backend development. I worked on a college project as a backend developer for 3 months. At the same time, I also took up responsibilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I served as the General Secretary of my college, co-organizing events and fests, and voicing student concerns to the management.&lt;/li&gt;
&lt;li&gt;I led a team of 5 on an alumni platform project, managing the entire backend myself.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I barely had personal time, but I didn’t mind. I loved what I was doing and stayed true to my responsibilities. Isn’t it funny how when you truly enjoy something, you forget about time?&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Between 3rd and 4th year&lt;/strong&gt;&lt;/em&gt;, I had to get an internship as part of my curriculum. During the internship drive, I attended only one company’s process—and I got selected.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;&lt;em&gt;4th year&lt;/em&gt;&lt;/strong&gt;, I worked as a Platform Engineering Intern at Invisibl Cloud Solutions. Some alumni from my college were there, so I got the chance to connect with them. I learned a lot—new tools, frameworks, and technologies—throughout those six months. Around this time, I also started attending tech meetups, made new connections, and gained great experiences.&lt;/p&gt;

&lt;p&gt;After the internship, I continued with my course. Around then, I took on new responsibilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I became the Head of Industry Relations for my department symposium, Mathrix, handling sponsorship and logistics.&lt;/li&gt;
&lt;li&gt;I worked hard, brought in sponsorships, managed logistics, and genuinely enjoyed the whole process.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Where I’m Stuck Now
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjx50l6mp5vxh309cr9n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjx50l6mp5vxh309cr9n.png" alt=" Where I’m Stuck Now (cover image)" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, I’m in my &lt;strong&gt;&lt;em&gt;final year&lt;/em&gt;&lt;/strong&gt;, preparing for placements. Alongside that, I’m handling two responsibilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I’m the Placement Representative, responsible for handling placements and bringing in companies.&lt;/li&gt;
&lt;li&gt;I’m also one of the Heads of Marketing and External Relations at Guindy Times, our college media club.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With all this going on, I still want to learn ML—but right now, I feel stuck and don’t know how to move forward.&lt;/p&gt;

&lt;p&gt;I know some of you reading this might be in the same state as me. And some of you may have already been through this and moved forward. If so, please share your experiences—it might help me, and others like me. After all, sometimes hearing someone else’s journey is all the push we need.&lt;/p&gt;

</description>
      <category>career</category>
      <category>beginners</category>
      <category>discuss</category>
      <category>learning</category>
    </item>
    <item>
      <title>This Event Cleared My Doubts About MLOps, DevOps &amp; Platform Engineering  -  OpsFusion 2024 Recap</title>
      <dc:creator>Metta Surendhar</dc:creator>
      <pubDate>Sat, 12 Jul 2025 20:32:21 +0000</pubDate>
      <link>https://forem.com/mettasurendhar/opsfusion-2024-insights-into-mlops-devops-and-platform-engineering-44m7</link>
      <guid>https://forem.com/mettasurendhar/opsfusion-2024-insights-into-mlops-devops-and-platform-engineering-44m7</guid>
      <description>&lt;p&gt;Recently, I had the opportunity to attend OpsFusion: Where Dev Meets ML—a technical meetup that brought together practitioners and enthusiasts across DevOps, MLOps, and Platform Engineering. The event was an excellent blend of hands-on sessions, real-world experiences, and emerging trends across these intersecting domains.&lt;/p&gt;

&lt;p&gt;In this blog, I’ve shared a structured summary of each session, along with key takeaways that resonated with me.&lt;/p&gt;

&lt;h2&gt;
  
  
  MLOps in Vertex AI – &lt;em&gt;by &lt;a href="https://www.linkedin.com/in/ACoAAAJ-XxUBPTKNK3_EdtDVJy-tU7cR9yO9GKs?lipi=urn%3Ali%3Apage%3Ad_flagship3_company_posts%3BuamMgHa0S0iASmsfzjWDdA%3D%3D" rel="noopener noreferrer"&gt;Navaneethan Gopal&lt;/a&gt;&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;This session focused on building end-to-end machine learning pipelines using Vertex AI, with a specific emphasis on automating the ML lifecycle beyond model development.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feceruu2lnljot3pp4hna.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feceruu2lnljot3pp4hna.jpg" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Highlights
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The demonstration used a multi-class classification problem (Dry Beans dataset) developed in Google Colab using Gemini for code assistance.&lt;/li&gt;
&lt;li&gt;It was emphasized that less than 1% of MLOps involves actual ML code; the rest lies in operations, such as infrastructure, orchestration, testing, and monitoring.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Core Components of MLOps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Data collection and validation&lt;/li&gt;
&lt;li&gt;Model training and testing&lt;/li&gt;
&lt;li&gt;Debugging and analysis&lt;/li&gt;
&lt;li&gt;Model monitoring post-deployment&lt;/li&gt;
&lt;li&gt;Cross-functional collaboration&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  MLOps Lifecycle Phases
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Discovery – problem and data exploration&lt;/li&gt;
&lt;li&gt;Development – feature engineering, dataset versioning, and integration with feature stores&lt;/li&gt;
&lt;li&gt;Deployment – serving the model through automated pipelines&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Maturity Levels in MLOps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Level 0: Manual build and deploy&lt;/li&gt;
&lt;li&gt;Level 1: Automated training workflows&lt;/li&gt;
&lt;li&gt;Level 2: Fully automated and reproducible pipelines across environments&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Vertex AI Pipeline Overview
&lt;/h3&gt;

&lt;p&gt;The speaker provided a walkthrough of how to build and deploy a Vertex AI pipeline triggered from Bitbucket or a cronjob. The steps included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a GCS (Google Cloud Storage) bucket&lt;/li&gt;
&lt;li&gt;Defining dataset and training components using XGBoost&lt;/li&gt;
&lt;li&gt;Initializing and deploying the pipeline via SDK integration&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Emerging Operations in ML
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;FMOps (Foundation Model Operations): Managing LLMs, latency, token usage, and cost&lt;/li&gt;
&lt;li&gt;LLMOps: Operations tailored to Retrieval-Augmented Generation (RAG) and large language models&lt;/li&gt;
&lt;li&gt;PromptOps: Monitoring and optimizing prompt performance and hallucination tracking&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Kubeflow
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Introduction to Kubeflow as a Kubernetes-native platform for ML workflows&lt;/li&gt;
&lt;li&gt;Creating custom components and reusable pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This session bridged the gap between foundational ML and scalable production pipelines, highlighting the growing need for robust, reproducible ML systems.&lt;/p&gt;




&lt;h2&gt;
  
  
  Trunk-Based Development with Terraform – &lt;em&gt;by &lt;a href="https://www.linkedin.com/in/ACoAABKQasYBihAT9MUPw_7aNk5I4BLhrbo7RcU?lipi=urn%3Ali%3Apage%3Ad_flagship3_company_posts%3BuamMgHa0S0iASmsfzjWDdA%3D%3D" rel="noopener noreferrer"&gt;Harini Muralidharan&lt;/a&gt;&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;This session covered the developer-driven DevOps model, focusing on enabling application developers to define and manage infrastructure using Infrastructure as Code (IaC).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6u4ph0vuul1j2kntm6y.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6u4ph0vuul1j2kntm6y.jpg" alt=" " width="800" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Context: Challenges in Traditional DevOps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Frequent inconsistencies between dev and production environments&lt;/li&gt;
&lt;li&gt;Developer reliance on operations teams for even minor infrastructure changes&lt;/li&gt;
&lt;li&gt;Lack of visibility and traceability in changes made to the system&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Principles of Developer-Driven DevOps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Developers define and version infrastructure alongside application code&lt;/li&gt;
&lt;li&gt;Early detection and mitigation of issues via automation&lt;/li&gt;
&lt;li&gt;Promotes ownership without expecting developers to become operations experts&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Introduction to Terraform
&lt;/h3&gt;

&lt;p&gt;The session provided a deep dive into Terraform, its ecosystem, and how it enables scalable infrastructure on GCP.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why Terraform?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Open-source and cloud-agnostic&lt;/li&gt;
&lt;li&gt;Declarative syntax (HCL)&lt;/li&gt;
&lt;li&gt;Native support for GCP&lt;/li&gt;
&lt;li&gt;Strong community adoption and extensibility&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Core Components
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Providers: Connect Terraform with cloud services&lt;/li&gt;
&lt;li&gt;Resources: Define infrastructure components&lt;/li&gt;
&lt;li&gt;Variables &amp;amp; Outputs: Parameterization and visibility&lt;/li&gt;
&lt;li&gt;State Management: Track infrastructure state across teams&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Common Workflow
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init → terraform plan → terraform apply → terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
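&lt;p&gt;To make those core components concrete, here is a minimal, illustrative Terraform configuration for GCP (the variable, bucket name, and region are placeholders of my own, not anything shown in the session):&lt;/p&gt;

```hcl
# Provider: connects Terraform to Google Cloud
provider "google" {
  project = var.project_id
  region  = "us-central1"
}

# Variable: parameterizes the configuration
variable "project_id" {
  type        = string
  description = "GCP project to deploy into"
}

# Resource: a single GCS bucket as the infrastructure component
resource "google_storage_bucket" "logs" {
  name     = "${var.project_id}-logs"
  location = "US"
}

# Output: exposes the bucket name after `terraform apply`
output "bucket_name" {
  value = google_storage_bucket.logs.name
}
```

&lt;p&gt;Running &lt;code&gt;terraform init&lt;/code&gt; and &lt;code&gt;terraform plan&lt;/code&gt; against a file like this previews the bucket creation before &lt;code&gt;terraform apply&lt;/code&gt; makes any change.&lt;/p&gt;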



&lt;h3&gt;
  
  
  Integrating Terraform with CI/CD
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Using CI/CD pipelines (YAML) to automate Terraform commands&lt;/li&gt;
&lt;li&gt;Promotes consistent, reliable infrastructure changes with version control&lt;/li&gt;
&lt;/ul&gt;
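&lt;p&gt;As a sketch of what such a pipeline could look like (shown in GitHub Actions syntax purely as an illustration; the session did not prescribe a specific CI system):&lt;/p&gt;

```yaml
# Illustrative CI workflow: plan on every push, apply only on the main branch
name: terraform
on: push
jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init -input=false
      - run: terraform plan -input=false
      - run: terraform apply -auto-approve -input=false
        if: github.ref == 'refs/heads/main'
```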

&lt;h3&gt;
  
  
  Best Practices
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Store code in Git with proper version control&lt;/li&gt;
&lt;li&gt;Use remote state storage (e.g., GCS or Terraform Cloud)&lt;/li&gt;
&lt;li&gt;Follow the principle of least privilege&lt;/li&gt;
&lt;li&gt;Modularize Terraform codebases for reusability&lt;/li&gt;
&lt;li&gt;Perform automated testing on infra modules&lt;/li&gt;
&lt;li&gt;Monitor for configuration drift and enforce corrective actions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This talk emphasized the benefits of empowering developers while maintaining operational integrity, security, and scalability.&lt;/p&gt;





&lt;h2&gt;
  
  
  Platform Engineering vs DevOps: Evolution or Revolution? – &lt;em&gt;by &lt;a href="https://www.linkedin.com/in/ACoAAC-62I8BWfvgj0Z5C_10Uh4ftmvyQUIZh_k?lipi=urn%3Ali%3Apage%3Ad_flagship3_company_posts%3BuamMgHa0S0iASmsfzjWDdA%3D%3D" rel="noopener noreferrer"&gt;Crystal Darling&lt;/a&gt;&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;This session helped clarify the difference between DevOps, SRE, and the growing field of Platform Engineering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faa6tqe7z9nvl9x0it7zr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faa6tqe7z9nvl9x0it7zr.jpg" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges in Traditional DevOps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Operations teams are often blocked by development timelines&lt;/li&gt;
&lt;li&gt;Developers submit tickets for operational support, resulting in slow turnaround&lt;/li&gt;
&lt;li&gt;Limited autonomy in environments, infrastructure, and tool usage&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What Is Platform Engineering?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The practice of building and maintaining Internal Developer Platforms (IDPs)&lt;/li&gt;
&lt;li&gt;Platform engineers build self-service tools and abstractions for developers&lt;/li&gt;
&lt;li&gt;Treat developers as clients, providing them with consistent and secure environments&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key Platform Engineering Skills
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes orchestration&lt;/li&gt;
&lt;li&gt;IaC tools like Terraform and Helm&lt;/li&gt;
&lt;li&gt;CI/CD systems&lt;/li&gt;
&lt;li&gt;CNCF tooling for observability, deployment, and monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Core Message
&lt;/h3&gt;

&lt;p&gt;Platform Engineering is not a rebranding of DevOps. It is a cultural and architectural evolution focused on developer experience, autonomy, and scalability.&lt;/p&gt;




&lt;h2&gt;
  
  
  Discussions on ML Research and Networking
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4g5x3gii028ifzmpi160.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4g5x3gii028ifzmpi160.jpeg" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;br&gt;
The event concluded with group discussions on recent research papers from Microsoft and Google—specifically those related to Copilot, RAG, and the inner workings of generative systems.&lt;/p&gt;

&lt;p&gt;It was a highly engaging session where I got to connect with fellow learners, exchange ideas, and hear how others are applying these concepts in real-world environments.&lt;/p&gt;




&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;Attending OpsFusion gave me a broader and more integrated view of how software systems are evolving—whether it’s about scaling ML models through MLOps, automating infrastructure with Terraform, or building robust internal platforms that make developer lives easier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxecjx0ihktiau3x6o7ot.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxecjx0ihktiau3x6o7ot.jpeg" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're someone who is navigating the intersection of ML, infrastructure, and deployment—or wants to bridge the gap between development and operations—events like these are immensely valuable.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>discuss</category>
      <category>cloud</category>
      <category>googlecloud</category>
    </item>
    <item>
      <title>Building Real-World Infrastructure as a Fresher: My Story with Logs, AI &amp; Observability</title>
      <dc:creator>Metta Surendhar</dc:creator>
      <pubDate>Sun, 29 Jun 2025 11:30:57 +0000</pubDate>
      <link>https://forem.com/mettasurendhar/engineering-logs-intelligence-my-internship-journey-at-invisibl-cloud-solutions-1lcd</link>
      <guid>https://forem.com/mettasurendhar/engineering-logs-intelligence-my-internship-journey-at-invisibl-cloud-solutions-1lcd</guid>
      <description>&lt;p&gt;I'm thrilled to share that I’ve successfully completed my six-month internship(June 2024 – December 2024) as a &lt;strong&gt;Platform Engineer&lt;/strong&gt; at &lt;strong&gt;Invisibl Cloud Solutions&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;What started as an exploration of unfamiliar tools and domains quickly turned into one of the most fulfilling technical journeys I’ve had so far.&lt;/p&gt;

&lt;p&gt;From building a log observability infrastructure to developing an AI-powered research agent, this internship helped me grow technically, professionally, and personally.&lt;/p&gt;




&lt;h3&gt;
  
  
  Internship Experience: &lt;em&gt;Learning, Growth &amp;amp; Gratitude&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;The internship was hybrid in nature—split between working from home and working at the office. What made the experience even more special was that many of the senior staff were alumni from our college, making the workspace incredibly friendly and collaborative.&lt;/p&gt;

&lt;p&gt;On the days I went to the office in person, I would ride along with Dinesh Kumar—it took us nearly an hour to reach the workspace, and we had some of the best conversations along the way.&lt;br&gt;
Sometimes, we’d reminisce about college days, exams, and projects. Other times, we’d discuss our ongoing work, explore technologies, and talk about careers, placements, and what the future holds. Those morning rides were truly special—casual, thoughtful, and always enriching.&lt;/p&gt;

&lt;p&gt;I personally loved going to the office because many of our seniors—alumni from our very own course—would be there. Since we shared that common ground, we had so much to talk about. Whether it was clearing doubts, learning about the industry, or just general chit-chat, they always made time for us.&lt;br&gt;
During lunch, we’d all sit together, gossip, joke around, and just have fun. Looking back, those were some of my favorite memories—I genuinely miss those days.&lt;/p&gt;

&lt;p&gt;Beyond office hours, I was deeply focused on learning and growing. Over the six months, I attended technical meetups, joined bootcamps, started blogging, and participated in hackathons. These experiences helped me not only sharpen my skills but also connect with the broader tech community.&lt;br&gt;
All of this complemented what I was learning at Invisibl Cloud, helping me grow both in depth and in direction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgoplv4to05w7q5hucra.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgoplv4to05w7q5hucra.webp" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;During my first three months, I worked on an &lt;strong&gt;&lt;em&gt;Observability Infrastructure Project&lt;/em&gt;&lt;/strong&gt;, where I dove deep into system logs, tools like Cribl and Grafana, and built a full-stack monitoring setup. Then, I transitioned to a &lt;strong&gt;&lt;em&gt;Generative AI-based project&lt;/em&gt;&lt;/strong&gt; centered around intelligent research paper discovery using RAG.&lt;/p&gt;

&lt;p&gt;I’m proud to share that the demos for both projects received positive feedback from the client, which was deeply satisfying, especially considering both domains were completely new to me when I started.&lt;/p&gt;

&lt;h4&gt;
  
  
  Gratitude
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;I owe a huge thank you to the entire &lt;a href="https://invisibl.io/" rel="noopener noreferrer"&gt;Invisibl Cloud Solutions&lt;/a&gt; team for this enriching opportunity.&lt;/li&gt;
&lt;li&gt;A heartfelt thank you to &lt;a href="https://www.linkedin.com/in/harishganesan/" rel="noopener noreferrer"&gt;Harish Ganesan&lt;/a&gt;, CEO of Invisibl Cloud Solutions, for not only trusting me with impactful work but also giving me the opportunity to work on the Gen AI project. His involvement and encouragement were truly motivating.&lt;/li&gt;
&lt;li&gt;A big thanks to &lt;a href="https://www.linkedin.com/in/vijayramh/" rel="noopener noreferrer"&gt;VijayRam Harinathan&lt;/a&gt; for his support and mentorship in the observability project—his feedback and belief in my work made a huge difference.&lt;/li&gt;
&lt;li&gt;Special appreciation to &lt;a href="https://www.linkedin.com/in/farhana-s-64b5b8212/" rel="noopener noreferrer"&gt;Farhana S&lt;/a&gt;, whose consistent mentorship helped me navigate the observability space for the very first time.&lt;/li&gt;
&lt;li&gt;I'm equally grateful to &lt;a href="https://www.linkedin.com/in/suryaa-azhakhiamanavalan-007468189/" rel="noopener noreferrer"&gt;Suryaa Azhakhiamanavalan&lt;/a&gt; for his guidance on the Generative AI project. His mentorship turned this challenge into a rewarding experience.&lt;/li&gt;
&lt;li&gt;And of course, &lt;a href="https://www.linkedin.com/in/harshita-miranda/" rel="noopener noreferrer"&gt;Harshita Miranda&lt;/a&gt;, my project partner from day one. Working with her on both projects was a joy—we shared ideas, solved challenges together, and supported each other throughout.&lt;/li&gt;
&lt;li&gt;Lastly, shoutout to my amazing friends who interned alongside me—&lt;a href="https://www.linkedin.com/in/dinesh-kumar-ch/" rel="noopener noreferrer"&gt;Dinesh Kumar&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/sree-varshan-m-328b45222/" rel="noopener noreferrer"&gt;Sree Varshan M&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/in/harini-s-995684248/" rel="noopener noreferrer"&gt;Harini S&lt;/a&gt;. You all made the workspace vibrant and the learning process fun!&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;em&gt;Project 1:&lt;/em&gt; Building Observability Infrastructure for System Logs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Objective:&lt;/em&gt;&lt;/strong&gt; To extend the existing metrics-based monitoring stack by incorporating log observability across Windows and Linux systems.&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Tech Stack:&lt;/em&gt;&lt;/strong&gt; Grafana, Loki, Cribl Edge, Cribl Stream, rsyslog, Prometheus&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Key Concepts:&lt;/em&gt;&lt;/strong&gt; Log collection, log routing, centralized logging, visualization&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwk396u14xp0dz28qaqpm.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwk396u14xp0dz28qaqpm.webp" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As someone &lt;a href="https://dev.to/mettasurendhar/observability-simplified-a-first-timers-guide-to-system-health-53nj"&gt;new to observability&lt;/a&gt;, I began with research into best practices and tools. The organization already had &lt;a href="https://dev.to/mettasurendhar/step-by-step-guide-to-configuring-cribl-and-grafana-for-data-processing-1j0f"&gt;metrics monitoring&lt;/a&gt;, and I was tasked with building the logs monitoring infrastructure from scratch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Windows Log Monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Leveraged native Event Logs (Application, System, Security).&lt;/li&gt;
&lt;li&gt;Collected logs using Cribl Agent.&lt;/li&gt;
&lt;li&gt;Processed and routed them through Cribl Edge and Cribl Stream.&lt;/li&gt;
&lt;li&gt;Stored in Grafana Loki.&lt;/li&gt;
&lt;li&gt;Visualized using &lt;a href="https://dev.to/mettasurendhar/getting-started-with-grafana-your-observability-superhero-awaits-okl"&gt;Grafana dashboards&lt;/a&gt;, with alerting and filtering options.&lt;/li&gt;
&lt;/ul&gt;
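&lt;p&gt;To give a flavor of the filtering step, a Loki query behind such a Grafana panel might look like this (the label names are illustrative, not the ones used in the project):&lt;/p&gt;

```
{host="win-server-01", channel="System"} |= "error"
```

&lt;p&gt;The braces select a log stream by its labels, and &lt;code&gt;|=&lt;/code&gt; keeps only the lines containing the given substring.&lt;/p&gt;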

&lt;p&gt;&lt;strong&gt;Linux Log Monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Linux was more challenging due to the absence of structured default logs.&lt;/li&gt;
&lt;li&gt;Created an Ubuntu virtual machine.&lt;/li&gt;
&lt;li&gt;Researched and implemented rsyslog to generate logs using custom templates.&lt;/li&gt;
&lt;li&gt;Integrated the logs into the same Cribl → Loki → Grafana pipeline.&lt;/li&gt;
&lt;/ul&gt;
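&lt;p&gt;For context, a custom rsyslog template of the kind described might look like the following (the format string and forwarding target are illustrative, not the project's actual configuration):&lt;/p&gt;

```
# /etc/rsyslog.d/50-custom.conf (illustrative)
# Define a custom per-line output format
template(name="CustomFormat" type="string"
         string="%timegenerated% %hostname% %syslogtag%%msg%\n")

# Forward all logs to a collector over TCP using that template
*.* action(type="omfwd" target="collector.example.com" port="514"
           protocol="tcp" template="CustomFormat")
```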

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Outcome:&lt;/em&gt;&lt;/strong&gt; Successfully built and delivered a cross-platform proof of concept for full-stack log observability, integrated seamlessly into the existing infrastructure.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;em&gt;Project 2:&lt;/em&gt; Generative AI Research Agent
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Objective:&lt;/em&gt;&lt;/strong&gt; To build an intelligent AI agent capable of retrieving and summarizing research papers based on user queries.&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Tech Stack:&lt;/strong&gt;&lt;/em&gt; Haystack, FastAPI, Streamlit, Python, Arxiv API, Gemini, OpenSearch&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Key Concepts:&lt;/strong&gt;&lt;/em&gt; Agent pipelines, Retrieval-Augmented Generation (RAG), API development, LLM integration&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxfe8mxmxb3gob4rkbxw.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxfe8mxmxb3gob4rkbxw.webp" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the second half of my internship, I worked on this exciting project with one other teammate. The goal was to help researchers find academic papers faster and more efficiently using Generative AI.&lt;/p&gt;

&lt;h3&gt;
  
  
  My Contributions
&lt;/h3&gt;

&lt;p&gt;🌟&lt;em&gt;Integrated the Arxiv API&lt;/em&gt; to fetch relevant research papers.&lt;br&gt;
🌟&lt;em&gt;Designed the agent pipeline&lt;/em&gt; using Haystack and Gemini, implementing RAG to combine retrieval with generation.&lt;br&gt;
🌟&lt;em&gt;Stored extracted data in OpenSearch&lt;/em&gt; for quick and context-aware access.&lt;br&gt;
🌟&lt;em&gt;Built a Streamlit-based POC&lt;/em&gt; to demo the functionality.&lt;br&gt;
🌟Later &lt;em&gt;developed a FastAPI version&lt;/em&gt; for production-level usage.&lt;/p&gt;
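&lt;p&gt;As a rough sketch of the retrieval side, a helper that builds an Arxiv API query plus a stub for the generation step might look like this (the function names are my own; the real pipeline used Haystack components and Gemini for generation):&lt;/p&gt;

```python
from urllib.parse import urlencode

# arXiv's public query endpoint
ARXIV_API = "http://export.arxiv.org/api/query"

def build_arxiv_query(topic: str, max_results: int = 5) -> str:
    """Build the URL used to fetch papers matching a topic from the Arxiv API."""
    params = {
        "search_query": f"all:{topic}",
        "start": 0,
        "max_results": max_results,
    }
    return f"{ARXIV_API}?{urlencode(params)}"

def answer_with_rag(question: str, retrieved_abstracts: list) -> str:
    """Stub for the RAG generation step: combine retrieved context with the
    question into a prompt. A real pipeline would send this to an LLM."""
    context = "\n".join(retrieved_abstracts)
    return f"Context:\n{context}\n\nQuestion: {question}"
```

&lt;p&gt;In the actual project, the retrieved abstracts were also indexed into OpenSearch, so later queries could reuse them with full context instead of refetching.&lt;/p&gt;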

&lt;p&gt;The first month of development was incredibly intense—we often worked for over 10 hours a day to shape the prototype. With consistent support and motivation from Suryaa Azhakhiamanavalan and Harish Ganesan, and after multiple review meetings and revisions, we (myself and Harshita Miranda) were able to complete the proof of concept within the first month.&lt;br&gt;
Even though the pace felt heavy at the time, it turned out to be one of the most rewarding learning experiences of my internship.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Outcome:&lt;/em&gt;&lt;/strong&gt; Delivered a fully functional prototype in under a month, and then enhanced it into an API-ready microservice with scalable architecture.&lt;/p&gt;




&lt;h4&gt;
  
  
  Key Takeaways
&lt;/h4&gt;

&lt;p&gt;This internship gave me a crash course in:&lt;br&gt;
✔️Observability tools and infrastructure, from system logs to dashboard visualization.&lt;br&gt;
✔️Generative AI workflows, agent chaining, and RAG pipelines.&lt;br&gt;
✔️Real-world problem solving across two very different but equally challenging domains.&lt;br&gt;
✔️Working in a collaborative team, presenting demos to clients, and adapting to fast-paced learning curves.&lt;br&gt;
✔️Most importantly, it showed me the importance of taking initiative, asking the right questions, and owning the full cycle of a product — from idea to implementation.&lt;/p&gt;

&lt;h4&gt;
  
  
  Final Thoughts
&lt;/h4&gt;

&lt;p&gt;Looking back, I’m proud of how much I was able to learn and build in just six months. The trust, guidance, and opportunities I received from Invisibl Cloud Solutions shaped this internship into something I’ll always remember.&lt;/p&gt;

&lt;p&gt;From configuring log protocols on Linux to chaining LLM agents for intelligent research—this journey has been transformative. I’m grateful for every challenge, every lesson, and every teammate who made it all worthwhile.&lt;/p&gt;

</description>
      <category>devjournal</category>
      <category>ai</category>
      <category>career</category>
      <category>learning</category>
    </item>
    <item>
      <title>Hackathon Realities: What It's Like to Build, Code &amp; Ship in a Weekend</title>
      <dc:creator>Metta Surendhar</dc:creator>
      <pubDate>Thu, 16 Jan 2025 14:00:00 +0000</pubDate>
      <link>https://forem.com/mettasurendhar/hackathon-highlights-story-from-hackz-2024-finalist-23np</link>
      <guid>https://forem.com/mettasurendhar/hackathon-highlights-story-from-hackz-2024-finalist-23np</guid>
      <description>&lt;p&gt;I had the honor of leading my team in Hackz 2024, an intense 24-hour hackathon that stretched from 10 AM on November 23rd to 10 AM on November 24th. The event was buzzing with energy, creativity, and an overwhelming sense of purpose. With over 1000 teams registered, 500+ submitted their ideas, and only 20 were selected for the final round, our journey to the top felt nothing short of incredible.&lt;/p&gt;

&lt;p&gt;What made it even more special? We were the only team from the College of Engineering, Guindy (CEG) to make it this far—a proud moment for all of us.&lt;/p&gt;

&lt;p&gt;Although the hackathon was hosted at our college and many teams from CEG had submitted ideas, we were the sole representatives from our campus in the finals. Representing CEG among teams from diverse colleges and states was both a responsibility and a privilege.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4fhh4h3xdn360pqsmw5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4fhh4h3xdn360pqsmw5.jpg" alt=" " width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Challenge: Building Financial Inclusion for Seniors
&lt;/h2&gt;

&lt;p&gt;Our problem statement was as compelling as it was challenging:&lt;/p&gt;

&lt;p&gt;Develop an AI-powered financial inclusion platform designed for elderly individuals to simplify digital banking and financial planning. The platform should support voice commands, provide timely alerts for financial milestones, and offer tailored scam protection. By addressing digital literacy challenges, this solution aims to enhance independence and promote safe, accessible engagement with financial services for seniors.&lt;/p&gt;

&lt;p&gt;When we read this, we knew this was more than just a technical challenge; it was an opportunity to create a meaningful impact. With the rise in digital banking and scams targeting vulnerable populations, creating a tool to empower the elderly felt deeply significant.&lt;/p&gt;




&lt;h2&gt;
  
  
  Our Solution: &lt;em&gt;A Fintech Platform for Empowering the Elderly&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;After a whirlwind of brainstorming, designing, and coding, we presented our prototype, packed with features designed specifically for seniors:&lt;/p&gt;

&lt;p&gt;✔️ Voice Commands: For intuitive, hands-free interactions—perfect for users unfamiliar with complex interfaces.&lt;br&gt;
✔️ AI Chat Assistant: A personalized guide to help with financial queries and planning.&lt;br&gt;
✔️ Scam Protection Education: To safeguard users against fraud and teach them to spot red flags.&lt;br&gt;
✔️ Expense, Savings, and Investment Insights: Tailored recommendations to support better financial management.&lt;br&gt;
✔️ Ease of Use: Every feature was designed with accessibility and simplicity in mind to bridge the digital literacy gap.&lt;/p&gt;




&lt;h2&gt;
  
  
  My Role as Team Lead: A Learning Curve
&lt;/h2&gt;

&lt;p&gt;As the team lead, I had the responsibility of steering our efforts. Coordinating a team in such a high-pressure environment was both thrilling and demanding. I learned the importance of quick decision-making, fostering collaboration, and staying calm under pressure.&lt;/p&gt;

&lt;p&gt;I’m incredibly grateful to my teammates—Harini S., Sundar Balamoorthy, and Adhithya—for their dedication and hard work. Sundar and Adhithya were new to hackathons and development, but they stepped up brilliantly, proving that a willingness to learn and contribute matters more than experience. I hope this journey encourages them to take part in more hackathons.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtc7kiqzq91npq9pqk7y.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtc7kiqzq91npq9pqk7y.jpg" alt=" " width="800" height="549"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Hackz 2024 was no ordinary hackathon. It brought together diverse minds from across states, creating a vibrant melting pot of ideas and innovation. The energy in the room as we worked through the night, fueled by adrenaline (and coffee!), was unlike anything else.&lt;/p&gt;

&lt;p&gt;We worked tirelessly for 24 hours, juggling ideas, implementing features, and debugging issues, but it was all worth it. By the end, we had a working prototype—a tangible result of our collaboration and effort.&lt;/p&gt;

&lt;p&gt;While we didn’t win, being among the top 20 teams out of 500+ submissions was a milestone we cherished.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuf6rkn3skd9uptgi2n4e.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuf6rkn3skd9uptgi2n4e.jpg" alt=" " width="800" height="561"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;I want to express my heartfelt gratitude to:&lt;/p&gt;

&lt;p&gt;Navaneethan, our mentor, for his unwavering support and guidance. His encouragement kept us motivated till the very end. Navaneethan, I’m sorry we couldn’t bring home a trophy, but I’ve learned so much from you, and your insights will stay with me as I take on future projects.&lt;/p&gt;

&lt;p&gt;CSEA for organizing such a well-structured and supportive event. From managing logistics to ensuring participants were cared for, they truly went above and beyond.&lt;/p&gt;

&lt;p&gt;Temenos for sponsoring Hackz 2024 and fostering innovation among young developers.&lt;/p&gt;




&lt;h2&gt;
  
  
  Takeaways and Reflections
&lt;/h2&gt;

&lt;p&gt;Hackz 2024 wasn’t just about creating a product; it was a journey of growth and discovery. Here’s what I’m taking away:&lt;/p&gt;

&lt;p&gt;🌟 Leadership Lessons: Leading a team under a tight deadline taught me how to manage people, tasks, and time effectively.&lt;/p&gt;

&lt;p&gt;🌟 Problem-Solving Skills: Tackling real-world challenges pushed me to think creatively and practically.&lt;/p&gt;

&lt;p&gt;🌟 Understanding Expectations: Getting feedback from judges and mentors helped me understand what it takes to impress industry experts.&lt;/p&gt;

&lt;p&gt;I also realized that mistakes are stepping stones to growth. Each bug we fixed, each feature we struggled to implement, and every moment of doubt taught me something valuable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnm6rf3ytznv11zbzq546.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnm6rf3ytznv11zbzq546.jpg" alt=" " width="800" height="671"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hackz 2024 has left me inspired and more determined than ever to create impactful tech solutions. The journey doesn’t end here; it’s just the beginning. I plan to build on these experiences, improve my skills, and continue pushing boundaries.&lt;/p&gt;

&lt;p&gt;To anyone considering participating in a hackathon: go for it! You’ll leave with more than just technical skills—you’ll gain memories, friendships, and a sense of achievement that’s hard to match.&lt;/p&gt;

&lt;p&gt;Once again, thank you to everyone who made this journey unforgettable—my team, mentor, organizers, and sponsors. Here’s to many more hackathons and challenges ahead! 💪&lt;/p&gt;

</description>
      <category>hackathon</category>
      <category>fintech</category>
      <category>ai</category>
      <category>aiops</category>
    </item>
    <item>
      <title>What IBM's SRE Expert Wants You to Know About Observability - A Beginner's Guide</title>
      <dc:creator>Metta Surendhar</dc:creator>
      <pubDate>Thu, 09 Jan 2025 14:00:00 +0000</pubDate>
      <link>https://forem.com/mettasurendhar/observability-unveiled-key-insights-from-ibms-sre-expert-4k1h</link>
      <guid>https://forem.com/mettasurendhar/observability-unveiled-key-insights-from-ibms-sre-expert-4k1h</guid>
      <description>&lt;p&gt;During the &lt;strong&gt;Grafana and Friends Meetup&lt;/strong&gt; in Chennai, I had the opportunity to attend an insightful session by &lt;a href="https://www.linkedin.com/in/manojkumar-g-27574a13/?lipi=urn%3Ali%3Apage%3Ad_flagship3_detail_base%3Bba6HJha5QsOLPgCk8Usz%2FA%3D%3D" rel="noopener noreferrer"&gt;&lt;strong&gt;Manojkumar&lt;/strong&gt;&lt;/a&gt;, an SRE professional from &lt;a href="https://www.linkedin.com/company/ibm/" rel="noopener noreferrer"&gt;IBM&lt;/a&gt;. His talk centered around observability and how IBM tackles real-world challenges using &lt;strong&gt;Grafana&lt;/strong&gt; and &lt;strong&gt;AI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Out of all the sessions that day, this one stood out as my personal favorite, and I couldn’t wait to share some key takeaways here! The talk covered four critical components in modern observability systems: &lt;strong&gt;logs&lt;/strong&gt;, &lt;strong&gt;metrics&lt;/strong&gt;, &lt;strong&gt;traces&lt;/strong&gt;, and &lt;strong&gt;profiling&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;These make up the foundation for any robust observability setup, and he explained how each one plays a role in monitoring and troubleshooting large-scale infrastructures.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Observability Stack - Logs, Metrics, Traces, and Profiling&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;1. &lt;strong&gt;Logs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Logs are often the first step in diagnosing issues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They provide a granular record of everything happening within the system, from user activities to errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;At IBM, logs are used to trace the precise sequence of events that can lead to potential failures or performance degradation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2. &lt;strong&gt;Metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Metrics come in when you need to track the overall health of your system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By monitoring things like &lt;strong&gt;CPU usage&lt;/strong&gt;, &lt;strong&gt;memory consumption&lt;/strong&gt;, and &lt;strong&gt;response times&lt;/strong&gt;, metrics give a top-level view of how different components are performing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;While logs help you understand the "what" and "when," metrics help you catch patterns before they escalate into critical issues.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3. &lt;strong&gt;Traces:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Traces become vital in &lt;strong&gt;distributed systems&lt;/strong&gt; where a single request might travel through multiple services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IBM uses traces to monitor each step of the request path, allowing them to pinpoint bottlenecks and understand complex interactions between microservices.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;4. &lt;strong&gt;Profiling:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Profiling takes observability to the next level by digging into the code execution itself.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It’s useful for spotting inefficiencies in &lt;strong&gt;resource usage&lt;/strong&gt; (like CPU or memory) at a granular level, making it easier to optimize and fine-tune system performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Profiling provides the precision needed to identify which parts of the code need optimization, especially in performance-critical applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
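&lt;p&gt;To make those signals concrete, here’s a toy Python sketch (my own illustration, not from the talk) of how a single request can emit a log record, a metric sample, and a trace span that all share one trace ID; profiling would additionally sample where CPU time goes inside the code:&lt;/p&gt;

```python
import time
import uuid

def observe(operation):
    """Toy illustration of three observability signals for one request:
    a log record (discrete event), a metric sample (numeric time-series
    point), and a trace span (timed unit of work with a correlation ID)."""
    trace_id = uuid.uuid4().hex
    start = time.monotonic()
    # ... the actual work for `operation` would run here ...
    duration_ms = (time.monotonic() - start) * 1000
    log_record = {"msg": f"{operation} completed", "trace_id": trace_id}
    metric_sample = {"name": f"{operation}.latency_ms", "value": duration_ms}
    trace_span = {"trace_id": trace_id, "span": operation, "duration_ms": duration_ms}
    return log_record, metric_sample, trace_span

log_record, metric_sample, trace_span = observe("charge_card")
# The shared trace_id is what lets a tracing backend stitch per-service
# logs and spans together in a distributed system.
```

&lt;p&gt;In a real setup these would flow into a log pipeline, a metrics store, and a tracing backend respectively, rather than plain dictionaries.&lt;/p&gt;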




&lt;h2&gt;
  
  
  &lt;strong&gt;Real-World Challenges &amp;amp; Solutions in Observability&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Manojkumar didn’t just talk theory; he also shared practical challenges he faced and the solutions implemented using Grafana and AI. Three problems, in particular, stood out:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Problem 1: Missing Logs in the Centralized Logging System&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  One of the biggest challenges they encountered was missing logs in their centralized logging system. Relying on CloudWatch metrics alone led to gaps in visibility, which made it hard to troubleshoot incidents.
&lt;/h4&gt;

&lt;p&gt;To close the gaps, they decided to &lt;strong&gt;&lt;em&gt;incorporate ElasticSearch metrics&lt;/em&gt;&lt;/strong&gt; alongside CloudWatch data. This approach gave them a more comprehensive view and reduced the chance of missed log entries, ensuring no critical data was lost in the process.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Problem 2: Where to Start Diagnostics?&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  With data pouring in from multiple sources—Prometheus, MySQL, Oracle, AWS, Azure—it can be overwhelming to know where to begin diagnosing a system issue.
&lt;/h4&gt;

&lt;p&gt;The team built a &lt;strong&gt;&lt;em&gt;collective dashboard&lt;/em&gt;&lt;/strong&gt; that aggregates data from all these different sources. This unified view streamlined their diagnostics process, allowing them to get a clearer picture faster. Instead of hunting for data in different places, everything was available in one interface, which reduced the mean time to recovery (MTTR).&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Problem 3: Multiple Alerts for a Single Issue&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Receiving multiple alerts for the same underlying issue was a common problem, leading to alert fatigue. This made it difficult to focus on the real issue amidst the flood of notifications.
&lt;/h4&gt;

&lt;p&gt;By utilizing &lt;strong&gt;&lt;em&gt;LLMs (Large Language Models)&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;KNN (K-Nearest Neighbors)&lt;/em&gt;&lt;/strong&gt; algorithms, they were able to intelligently group alerts. The system consolidated related alerts into one primary notification using &lt;strong&gt;&lt;em&gt;AI-driven operations&lt;/em&gt;&lt;/strong&gt; through &lt;strong&gt;&lt;em&gt;ClickHouse&lt;/em&gt;&lt;/strong&gt;, drastically cutting down on unnecessary noise. This way, the team could focus on solving the root cause without getting overwhelmed by redundant alerts.&lt;/p&gt;
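&lt;p&gt;As a rough picture of what similarity-based grouping does (a minimal sketch of the idea only; IBM’s actual pipeline uses LLMs, KNN, and ClickHouse), alerts whose messages are near-duplicates can be collapsed into one group:&lt;/p&gt;

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude text similarity in [0, 1]; a stand-in for the embedding
    distance a KNN-based grouper would use."""
    return SequenceMatcher(None, a, b).ratio()

def group_alerts(alerts, threshold=0.8):
    """Greedily attach each alert to the first group whose representative
    message is similar enough; otherwise start a new group."""
    groups = []
    for alert in alerts:
        for group in groups:
            if similarity(alert, group[0]) >= threshold:
                group.append(alert)
                break
        else:
            groups.append([alert])
    return groups

alerts = [
    "db-1 connection refused on port 5432",
    "db-1 connection refused on port 5433",
    "disk usage 91% on node-7",
]
groups = group_alerts(alerts)
# The two near-identical database alerts collapse into one group, so the
# on-call engineer sees one primary notification instead of two.
```

&lt;p&gt;The real system replaces the crude string ratio with learned similarity, but the consolidation step is the same in spirit.&lt;/p&gt;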




&lt;h2&gt;
  
  
  &lt;strong&gt;Why This Talk Stood Out for Me&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As someone deeply interested in &lt;strong&gt;observability&lt;/strong&gt; and system health, I found Manojkumar’s talk incredibly relevant and timely. I’ve worked with observability tools like &lt;strong&gt;Grafana&lt;/strong&gt; and &lt;strong&gt;Cribl&lt;/strong&gt;, but seeing how IBM integrates AI to enhance monitoring was eye-opening. Their ability to handle large-scale infrastructure challenges using observability and AI offered a glimpse into the future of system monitoring.&lt;/p&gt;

&lt;p&gt;The solutions they’ve implemented—whether it's creating multi-source dashboards or using AI for alert grouping—demonstrate how powerful modern observability tools have become. It also reinforced the idea that observability is not just about collecting data; it’s about making sense of it efficiently to keep systems running smoothly.&lt;/p&gt;




&lt;p&gt;His talk has inspired me to dive even deeper into observability. In the coming weeks, I’ll be exploring more advanced Grafana features and tools like &lt;strong&gt;LGTM Stack (Loki, Grafana, Tempo, Mimir)&lt;/strong&gt; and &lt;strong&gt;Cribl&lt;/strong&gt; for smarter log management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stay tuned&lt;/strong&gt; as I continue this journey into understanding how we can use observability to improve system reliability and performance.&lt;/p&gt;

</description>
      <category>observability</category>
      <category>ibm</category>
      <category>sre</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Cribl &amp; Grafana: Build a Full Observability Pipeline From Scratch</title>
      <dc:creator>Metta Surendhar</dc:creator>
      <pubDate>Thu, 02 Jan 2025 14:00:00 +0000</pubDate>
      <link>https://forem.com/mettasurendhar/step-by-step-guide-to-configuring-cribl-and-grafana-for-data-processing-1j0f</link>
      <guid>https://forem.com/mettasurendhar/step-by-step-guide-to-configuring-cribl-and-grafana-for-data-processing-1j0f</guid>
      <description>&lt;p&gt;Data is the pulse of any system, and effectively managing it can bring significant value to your business. In this blog, we'll guide you step-by-step through setting up &lt;strong&gt;Cribl Edge&lt;/strong&gt; for data collection, &lt;strong&gt;Cribl Stream&lt;/strong&gt; for processing, and &lt;strong&gt;Grafana&lt;/strong&gt; for visualizing your metrics. Whether you're new to Cribl or looking for a refresher, this guide will have you up and running in no time.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Here’s what we'll cover:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Setting Up Cribl Agent for Data Collection&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configuring Cribl Edge to Send Data to Cribl Stream&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Processing Data with Cribl Stream&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Utilizing Data in Grafana&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s dive in!&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 1: Setting Up Cribl Agent for Data Collection&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Efficient data collection is the first step towards real-time system monitoring. &lt;strong&gt;Cribl Edge&lt;/strong&gt; helps you capture system metrics and logs from multiple sources and send them to &lt;strong&gt;Cribl Stream&lt;/strong&gt; for processing.&lt;/p&gt;

&lt;p&gt;Follow these instructions to install and configure Cribl Edge on &lt;strong&gt;Linux&lt;/strong&gt; and &lt;strong&gt;Windows&lt;/strong&gt; systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1.1 Create an Account in Cribl Cloud&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before we begin, we need to set up an account in Cribl Cloud:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sign-Up Process&lt;/strong&gt;: Go to &lt;a href="https://cribl.cloud/" rel="noopener noreferrer"&gt;Cribl Cloud&lt;/a&gt; and create an account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Login&lt;/strong&gt;: After signing up, log into Cribl Cloud with your credentials. Cribl Cloud will be your primary interface for managing Edge nodes and data pipelines.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1k5td7bq1y0lz7pxi3lf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1k5td7bq1y0lz7pxi3lf.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;⚙️ Note:&lt;/strong&gt; For learning purposes, we will use Cribl Cloud to manage our data collection agents.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1.2 Access the Edge Fleet&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Navigate to Edge&lt;/strong&gt;: After logging in, select the “Manage” button in the Cribl Edge section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1m0hl8lurok5lwpaa850.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1m0hl8lurok5lwpaa850.png" alt=" " width="611" height="448"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Fleet Overview&lt;/strong&gt;: This will redirect you to the Edge page, where you can see a list of fleets and analytics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3osl2oet2tvgpaohdkp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3osl2oet2tvgpaohdkp.png" alt=" " width="298" height="246"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Navigate to Default Fleet:&lt;/strong&gt; In Cribl Cloud, only one fleet (&lt;code&gt;default_fleet&lt;/code&gt;) will be available by default. Click on &lt;code&gt;default_fleet&lt;/code&gt; to view the monitoring data for Edge nodes, sources, and destinations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1.3 Add an Edge Node&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Edge Node Overview&lt;/strong&gt;: Edge nodes are responsible for collecting and sending data from your system to Cribl Stream&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Edge Node Installation&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the &lt;code&gt;default_fleet&lt;/code&gt; page, click the "Add/Update Edge Node" button in the upper right corner
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmdev4a7qyla6xf8lwx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmdev4a7qyla6xf8lwx0.png" alt=" " width="266" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose an environment (Linux or Windows) where you want to install the Cribl Edge agent.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Linux Edge Node&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hover over the Linux tab, click "Add", and copy the installation script.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run the Script&lt;/strong&gt;: Open a terminal and execute the script as the root user.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start and Verify&lt;/strong&gt;: After installation, ensure the agent is running with the command: &lt;code&gt;systemctl status cribl&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tz45vw3llyfr2zfkccu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tz45vw3llyfr2zfkccu.png" alt=" " width="352" height="192"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Windows Edge Node&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hover over "Windows" and click "Add" to view the command prompt and PowerShell scripts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modify the Script&lt;/strong&gt;: Edit the script by changing &lt;code&gt;"/qn"&lt;/code&gt; to &lt;code&gt;"/q"&lt;/code&gt; to ensure the installation runs in the foreground.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Run as Admin&lt;/strong&gt;: Run the script with administrator privileges to install the agent&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzidd7g0xtuodd58sqb3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzidd7g0xtuodd58sqb3.png" alt=" " width="352" height="192"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1.4 Check the Data Flow&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Verify Agent Installation&lt;/strong&gt;: Once the Edge node is installed, monitor its status in the Cribl Cloud by navigating to the &lt;strong&gt;Edge Node Monitoring&lt;/strong&gt; page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-Time Data Monitoring&lt;/strong&gt;: Under &lt;strong&gt;Edge Fleet → default_fleet → Overview → Monitor&lt;/strong&gt;, you can view metrics such as &lt;strong&gt;events in&lt;/strong&gt; and &lt;strong&gt;bytes in&lt;/strong&gt; to verify that the Edge node is collecting data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;List View for Health Status&lt;/strong&gt;: Use the “List View” to check the health and status of each Edge node.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3jd1dfitt5razxz76uh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3jd1dfitt5razxz76uh.png" alt=" " width="513" height="135"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 2: Configuring Cribl Edge to Send Data to Cribl Stream&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Once the Cribl Edge agent is installed and collecting data, you need to configure it to send the collected data to &lt;strong&gt;Cribl Stream&lt;/strong&gt; for further processing.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2.1 Configure Source in Cribl Edge&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sources Overview&lt;/strong&gt;: Data sources represent the type of data being collected (e.g., system metrics, logs).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Navigate to &lt;strong&gt;default_fleet → More → Sources&lt;/strong&gt; to add a new data source. Depending on your environment, configure one of the following:    &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rkhgybnfv02cdrxs1jr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rkhgybnfv02cdrxs1jr.png" alt=" " width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Windows Metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9n9gbndfmzwlwoch7q3j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9n9gbndfmzwlwoch7q3j.png" alt=" " width="222" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable the &lt;code&gt;in_windows_metrics&lt;/code&gt; source and configure it by setting host metrics to "All."&lt;/li&gt;
&lt;li&gt;Under Processing Settings, set Fields to a field name and value such as &lt;code&gt;observ_data = 'edge_win_metrics'&lt;/code&gt;, the Preprocessing Pipeline to &lt;code&gt;passthru&lt;/code&gt;, and Connect Destination to &lt;code&gt;Send to Routes&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;Commit and deploy the changes.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;System Metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjk3af36irzh2pu78t9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjk3af36irzh2pu78t9y.png" alt=" " width="218" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable the &lt;code&gt;in_system_metrics&lt;/code&gt; source and configure processing settings.&lt;/li&gt;
&lt;li&gt;Set Fields to a field name and value such as &lt;code&gt;observ_data = 'edge_lin_metrics'&lt;/code&gt;, and the Preprocessing Pipeline to &lt;code&gt;passthru&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Ensure Connect Destination is set to &lt;code&gt;Send to Routes&lt;/code&gt;, then commit and deploy the changes.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ Note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You can’t enable both the Windows and Linux sources in the same fleet simultaneously.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The destination can also be connected via the interface using Quick Connect; for more details, check the &lt;a href="https://docs.cribl.io/stream/quickconnect/" rel="noopener noreferrer"&gt;docs&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2.2 Configure Destination in Cribl Edge&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Go to &lt;strong&gt;default_fleet → More → Destinations&lt;/strong&gt; to add a new destination.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq4auk2hyubv0rsxaxtw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq4auk2hyubv0rsxaxtw.png" alt=" " width="763" height="230"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use &lt;strong&gt;Cribl TCP&lt;/strong&gt; as the destination for both Windows and Linux sources:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwil0wtzixc1djy51hif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwil0wtzixc1djy51hif.png" alt=" " width="222" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set a unique output ID (e.g., &lt;code&gt;cribl_system&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Enter the IP address from Cribl Cloud’s Access Details (found under &lt;strong&gt;Cribl Cloud → Access Details → Ingress IPs&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;Enter the port number (e.g., &lt;code&gt;10300&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Commit and deploy the changes.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2.3 Verify Source and Destination Configuration&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Verify that both source and destination are enabled (indicated by a check mark).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If there’s an issue (indicated by a cross mark), check the logs to resolve &lt;a href="https://www.google.com/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=2ahUKEwjhitbkuuyIAxW5xjgGHeOdKjkQFnoECB4QAw&amp;amp;url=https%3A%2F%2Fdocs.cribl.io%2Fstream%2F4.6%2Fcommon-errors%2F%23%3A~%3Atext%3DCause%253A%2520Cribl%2520Stream%2520doesn%27t%2Cyour%2520Cribl%2520Stream%2520Sources%27%2520configuration.&amp;amp;usg=AOvVaw2jdYzkFyD8qENs-Ddu-ENg&amp;amp;opi=89978449" rel="noopener noreferrer"&gt;configuration errors&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2.4 Create the Data Route&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Route Overview&lt;/strong&gt;: The data route links the source (e.g., &lt;code&gt;Windows or Linux metrics&lt;/code&gt;) to the destination (Cribl Stream).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Route Configuration&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In Cribl Stream, go to &lt;strong&gt;default_fleet → More → Data Routes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F294mx6gkmaucd3knrey0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F294mx6gkmaucd3knrey0.png" alt=" " width="798" height="275"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new route that links the source and destination:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2flfjfs957jg2o813dsx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2flfjfs957jg2o813dsx.png" alt=" " width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name the route and set filter expressions (&lt;code&gt;observ_data == 'edge_win_metrics'&lt;/code&gt; for Windows and &lt;code&gt;observ_data == 'edge_lin_metrics'&lt;/code&gt; for Linux) to ensure only Windows/Linux metrics are sent through this route.&lt;/li&gt;
&lt;li&gt;Set the pipeline to &lt;code&gt;passthru&lt;/code&gt; (default pipeline that doesn't modify data) and output to the Cribl TCP destination created earlier (&lt;code&gt;cribl_tcp:cribl_system&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Save the changes, then commit and deploy to activate the route.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;
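The routing decision above can be sketched as follows. This is an illustrative Python sketch only: Cribl evaluates filter expressions as JavaScript against each event, and the route names here are hypothetical.

```python
# Illustrative sketch of how a data route matches events against
# filter expressions before handing them to a pipeline and output.
# Cribl evaluates JavaScript expressions internally; this Python
# version only mirrors the logic of the two routes described above.

ROUTES = [
    {"name": "win_metrics",
     "filter": lambda e: e.get("observ_data") == "edge_win_metrics",
     "pipeline": "passthru", "output": "cribl_tcp:cribl_system"},
    {"name": "lin_metrics",
     "filter": lambda e: e.get("observ_data") == "edge_lin_metrics",
     "pipeline": "passthru", "output": "cribl_tcp:cribl_system"},
]

def route_event(event):
    """Return the (pipeline, output) pair of the first matching route."""
    for route in ROUTES:
        if route["filter"](event):
            return route["pipeline"], route["output"]
    return None  # unmatched events fall through to the default route

print(route_event({"observ_data": "edge_win_metrics"}))
```

Because routes are evaluated top to bottom and only the first match applies, make the filter expressions specific enough that Windows and Linux metrics cannot both match the same route.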

&lt;h3&gt;
  
  
  &lt;strong&gt;2.5 Capture and Verify Data Flow&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Status Check:&lt;/strong&gt; Use the source and destination status and chart pages to view live data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3z03rch2ozaqzqeacu5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3z03rch2ozaqzqeacu5.png" alt=" " width="800" height="190"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Capture Events&lt;/strong&gt;: Monitor live data capture in the source, the destination, and the data route.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckk4miq29d7nxy1nbaof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckk4miq29d7nxy1nbaof.png" alt=" " width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Verify Routing&lt;/strong&gt;: Ensure that data flows seamlessly from source to destination by capturing data in the data route as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftr2zepaw1j70z227l9g0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftr2zepaw1j70z227l9g0.png" alt=" " width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Troubleshoot&lt;/strong&gt;: If data doesn’t flow as expected, check the logs in &lt;strong&gt;Cribl Edge&lt;/strong&gt; for potential configuration errors.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jyq35gadbiy1j35l1xj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jyq35gadbiy1j35l1xj.png" alt=" " width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3: Processing Data with Cribl Stream&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now that Cribl Edge is sending data to Cribl Stream, the next step is to configure &lt;strong&gt;Cribl Stream&lt;/strong&gt; to receive, process, and route this data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxi7ryc8pnpxicufn614.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxi7ryc8pnpxicufn614.png" alt=" " width="610" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3.1 Configure Source in Cribl Stream&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Setting Up the TCP Source&lt;/strong&gt;: Cribl Stream needs to listen for incoming data from Cribl Edge via a TCP connection.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Navigate to &lt;strong&gt;Cribl Stream → Default → Data → Sources&lt;/strong&gt; and add a source.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyz7qtpdspm9krwc1sjp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyz7qtpdspm9krwc1sjp.png" alt=" " width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;Cribl TCP Source&lt;/strong&gt; to match the configuration of the Cribl Edge TCP destination.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2waa4de19y3pvmujgeyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2waa4de19y3pvmujgeyl.png" alt=" " width="221" height="200"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add a new source with a unique input ID, set the bind address (the default, &lt;code&gt;0.0.0.0&lt;/code&gt;, listens on all interfaces), and configure it with the same port used in Cribl Edge (e.g., &lt;code&gt;10300&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Commit and deploy the changes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3.2 Configure Destination in Cribl Stream&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Destination Configuration&lt;/strong&gt;: The processed data will be sent to &lt;strong&gt;Grafana&lt;/strong&gt; using &lt;strong&gt;Prometheus Remote Write&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Go to &lt;strong&gt;Cribl Stream → Default → Data → Destinations&lt;/strong&gt; and select &lt;strong&gt;Prometheus&lt;/strong&gt; destination.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1x7iwzsfxycrj0aivl73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1x7iwzsfxycrj0aivl73.png" alt=" " width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new destination with a unique output ID like &lt;code&gt;prometheus-output&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w5nj883a2teqgmkst0y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w5nj883a2teqgmkst0y.png" alt=" " width="221" height="205"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set the remote write URL. You can get the &lt;strong&gt;Prometheus Remote Write URL&lt;/strong&gt; from your &lt;a href="https://grafana.com/auth/sign-up/create-user" rel="noopener noreferrer"&gt;Grafana Cloud&lt;/a&gt; account (found under &lt;strong&gt;Prometheus → Send Metrics → Write URL&lt;/strong&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Commit and deploy the changes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3.3 Create a Processing Pack&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Processing Packs&lt;/strong&gt;: A processing pack in Cribl Stream allows you to create modular pipelines to filter, enrich, or modify data before it reaches its destination.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Go to &lt;strong&gt;Cribl Stream → Default → Processing → Packs&lt;/strong&gt; and add a pack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3t3vc8gu0jf39wqemwf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3t3vc8gu0jf39wqemwf.png" alt=" " width="239" height="204"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new pack (e.g., &lt;code&gt;Cribl-Windows-Metrics&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgbo5n1j2jr474kkbf2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgbo5n1j2jr474kkbf2f.png" alt=" " width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use functions and routes within the pack to process data by adding pipelines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3xydwh38qhikj6og6si.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3xydwh38qhikj6og6si.png" alt=" " width="800" height="615"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For more details on packs and pipelines, refer to the Cribl &lt;a href="https://docs.cribl.io/stream/packs/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
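Conceptually, each pipeline in a pack is a chain of small transforms applied to every event. A minimal sketch of that idea follows; the field names are hypothetical examples, and this is not Cribl's actual function API (pipelines are built in the Cribl UI, not written in Python):

```python
# Illustrative sketch of what a pack's pipeline does: each step
# transforms an event dict before it reaches the destination.
# Field names ("_raw", "cpu", etc.) are made-up examples.

def drop_fields(event, fields):
    """Remove noisy fields the destination doesn't need."""
    return {k: v for k, v in event.items() if k not in fields}

def rename_field(event, old, new):
    """Rename a field, e.g. to match Prometheus naming conventions."""
    if old in event:
        event = dict(event)          # copy so the caller's dict is untouched
        event[new] = event.pop(old)
    return event

event = {"host": "win-01", "cpu": 42.5, "_raw": "..."}
event = drop_fields(event, {"_raw"})
event = rename_field(event, "cpu", "windows_cpu_percent_active")
print(event)  # {'host': 'win-01', 'windows_cpu_percent_active': 42.5}
```

Keeping each transform small and composable is the same design idea behind Cribl's functions: you can reorder, enable, or disable steps without rewriting the whole pipeline.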

&lt;h3&gt;
  
  
  &lt;strong&gt;3.4 Configure the Data Route&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create a Data Route&lt;/strong&gt;: Similar to Cribl Edge, create a data route that links the TCP source to the Prometheus destination.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In Cribl Stream, go to &lt;strong&gt;Default → Routing → Data Routes&lt;/strong&gt; and add a route.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qfjf7xfpc323uf5gack.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qfjf7xfpc323uf5gack.png" alt=" " width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set filter expressions based on the source tags (&lt;code&gt;observ_data=='edge_win_metrics'&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Link the pack (&lt;code&gt;Cribl-Windows-Metrics&lt;/code&gt;) to the source and set the output to &lt;strong&gt;Prometheus&lt;/strong&gt; (&lt;code&gt;prometheus:prometheus-output&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff44xx1r7bhu2xsntavbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff44xx1r7bhu2xsntavbs.png" alt=" " width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Commit and deploy the changes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3.5 Verify Data Flow&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Monitor Event Flow&lt;/strong&gt;: Use the data capture and status pages in &lt;strong&gt;Cribl Stream&lt;/strong&gt; to verify that events are flowing correctly from the sources to the destinations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn25y3avvzy2xqebcfvrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn25y3avvzy2xqebcfvrv.png" alt=" " width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Capture Data&lt;/strong&gt;: Monitor live data for around 50 minutes and ensure the data is being processed and sent to Grafana.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu8ohj0viv0mgixutf6r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu8ohj0viv0mgixutf6r.png" alt=" " width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogezoh66jtjw89snik4z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogezoh66jtjw89snik4z.png" alt=" " width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Troubleshoot&lt;/strong&gt;: If data doesn’t flow or isn’t processed as expected, check the logs for potential &lt;a href="https://www.google.com/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=2ahUKEwjhitbkuuyIAxW5xjgGHeOdKjkQFnoECBoQAQ&amp;amp;url=https%3A%2F%2Fdocs.cribl.io%2Fstream%2Fcommon-errors&amp;amp;usg=AOvVaw3IxyywzbFiHO9uTEbAHzBu&amp;amp;opi=89978449" rel="noopener noreferrer"&gt;configuration errors&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 4: Utilizing Data in Grafana&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Once the data has been processed by Cribl Stream, you can visualize it in &lt;strong&gt;Grafana&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4.1 Create a Dashboard in Grafana&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Log in to Grafana Cloud&lt;/strong&gt;: If you don’t have an account, sign up at &lt;a href="https://www.google.com/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=2ahUKEwjGy9eb1e2IAxXbnK8BHfYrPAMQFnoECAkQAQ&amp;amp;url=https%3A%2F%2Fgrafana.com%2Fauth%2Fsign-in&amp;amp;usg=AOvVaw3p5Bo4MhZ_R_HkXs6wXoGn&amp;amp;opi=89978449" rel="noopener noreferrer"&gt;Grafana Cloud&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create a Dashboard&lt;/strong&gt;: After logging in, go to &lt;strong&gt;Create Dashboard&lt;/strong&gt; and add a &lt;strong&gt;new panel&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdamw9hj5zfui2szmrj7k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdamw9hj5zfui2szmrj7k.png" alt=" " width="211" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data Source&lt;/strong&gt;: Set the data source to &lt;strong&gt;Prometheus&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi87zgfdog44b4vducdbb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi87zgfdog44b4vducdbb.png" alt=" " width="716" height="730"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Query Configuration&lt;/strong&gt;: Use PromQL queries to retrieve data from Prometheus. For example, query &lt;code&gt;windows_cpu_percent_active&lt;/code&gt; to visualize CPU usage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Customize the Panel&lt;/strong&gt;: Give the panel a meaningful name (e.g., &lt;code&gt;Windows CPU Metrics&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzo09mp4hz6hs0ph7zgmy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzo09mp4hz6hs0ph7zgmy.png" alt=" " width="800" height="648"&gt;&lt;/a&gt;       &lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
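Under the hood, a Grafana panel sends your PromQL query to Prometheus's HTTP API and renders the returned series. A hedged sketch of that request/response shape, using the example metric from above (the base URL and sample response are illustrative, not real endpoints):

```python
# Sketch of what a dashboard panel does: build an instant-query URL
# for the Prometheus HTTP API and extract values from the response.
# The base URL and the sample JSON below are illustrative only.
import json
from urllib.parse import urlencode

def build_query_url(base_url, promql):
    """Build an instant-query URL for the Prometheus HTTP API."""
    return f"{base_url}/api/v1/query?{urlencode({'query': promql})}"

def extract_values(response_json):
    """Pull (labels, value) pairs out of an instant-query response."""
    result = response_json["data"]["result"]
    # Prometheus returns each value as [timestamp, "string_value"].
    return [(series["metric"], float(series["value"][1])) for series in result]

url = build_query_url("https://prometheus.example.com", "windows_cpu_percent_active")
sample = json.loads(
    '{"status":"success","data":{"resultType":"vector",'
    '"result":[{"metric":{"instance":"win-01"},"value":[1700000000,"12.5"]}]}}'
)
print(extract_values(sample))  # [({'instance': 'win-01'}, 12.5)]
```

Seeing the raw API response like this also makes it easier to debug an empty panel: if the `result` array is empty, the problem is upstream (the metric never reached Prometheus), not in Grafana.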

&lt;h3&gt;
  
  
  &lt;strong&gt;4.2 Fine-Tuning Visualization&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Panel Customization&lt;/strong&gt;: Adjust time ranges, choose chart types (line, bar, etc.), and set thresholds for key metrics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multiple Panels&lt;/strong&gt;: Add panels for different metrics (memory, disk usage, network I/O).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deploy Dashboard&lt;/strong&gt;: Save and deploy the dashboard for real-time monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4.3 Monitoring and Analyzing Data&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-Time Data&lt;/strong&gt;: Grafana will now display real-time metrics based on the data collected, processed, and routed from Cribl Edge and Stream.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Alerts and Notifications&lt;/strong&gt;: Set up alerts in Grafana based on threshold values (e.g., high CPU usage).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;And there you have it! By following these steps, you can successfully set up Cribl Edge for data collection, Cribl Stream for processing, and Grafana for visualizing the data. This guide provides a foundation for customization of your data pipelines, allowing you to monitor, process, and visualize large-scale metrics effectively.&lt;/p&gt;

&lt;p&gt;In the next post, we will dive deeper into the detailed steps for creating dashboards, panels, and alerts in Grafana. Stay tuned!&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>elasticsearch</category>
      <category>grafana</category>
      <category>cribl</category>
    </item>
    <item>
      <title>New to Monitoring? Start With Grafana - Easy Steps to Visualize &amp; Alert</title>
      <dc:creator>Metta Surendhar</dc:creator>
      <pubDate>Thu, 26 Dec 2024 14:00:00 +0000</pubDate>
      <link>https://forem.com/mettasurendhar/getting-started-with-grafana-your-observability-superhero-awaits-okl</link>
      <guid>https://forem.com/mettasurendhar/getting-started-with-grafana-your-observability-superhero-awaits-okl</guid>
      <description>&lt;p&gt;Let’s dive into one of the powerful tools in the observability stack—&lt;a href="https://www.google.com/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=2ahUKEwjLqa__9r2IAxXgSmwGHVdkCiIQFnoECAsQAQ&amp;amp;url=https%3A%2F%2Fgrafana.com%2F&amp;amp;usg=AOvVaw3tMv8aYc48hHzH5iAKu3XU&amp;amp;opi=89978449" rel="noopener noreferrer"&gt;&lt;strong&gt;Grafana&lt;/strong&gt;&lt;/a&gt;. Whether you’re monitoring system health, exploring logs, or tracing requests, Grafana acts as your all-in-one &lt;strong&gt;observability superhero&lt;/strong&gt;. It helps you visualize, analyze, and act on your system’s data to gain deep insights, identify issues, and optimize performance.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why Grafana?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Grafana is much more than just a dashboard—it’s a tool that empowers you to &lt;strong&gt;understand your system in real-time&lt;/strong&gt;. Here’s why Grafana deserves a spot in your observability toolkit:&lt;/p&gt;

&lt;p&gt;💠&lt;strong&gt;Visual Storytelling at its Best&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Grafana transforms raw data into meaningful visualizations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From graphs to dashboards, you can reveal patterns, trends, and potential bottlenecks that would otherwise stay hidden.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It brings your data to life, giving you the insights you need to make informed decisions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💠&lt;strong&gt;Data from (Literally) Anywhere&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Grafana integrates with a wide array of data sources—such as Prometheus, InfluxDB, Elasticsearch, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No matter where your data comes from, Grafana brings it together in a unified view, allowing you to leverage existing data without switching tools or infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💠&lt;strong&gt;Serious Analytics Power&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Beyond visualizations, Grafana allows you to run complex queries, set custom alerts, and build tailored dashboards.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Whether you’re tracking metrics or responding to incidents, Grafana equips you with the analytic power to uncover actionable insights.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💠&lt;strong&gt;Open-Source Love ❤️and Community-Driven&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;As an open-source project, Grafana benefits from a vibrant, active community.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This means continuous improvements, access to hundreds of plugins, and endless community-driven resources to help you get the most out of Grafana.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Setting Alerts in Grafana 🔔&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;One of the most powerful features Grafana offers is alerting. It enables you to proactively respond to issues by setting up alerts that notify you when key metrics reach certain thresholds. Imagine being alerted to high CPU usage or memory exhaustion before it impacts your application. With Grafana, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create alert rules based on metric thresholds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose alert notifications — via email, Slack, Webhook, or other platforms—so you never miss a critical event.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define conditions that trigger alerts for the exact scenarios you want to monitor.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Setting alerts helps you catch potential problems early, ensuring your system is always performing at its best.&lt;/p&gt;

&lt;h3&gt;
  
  
  Silence When You Need It 🤫
&lt;/h3&gt;

&lt;p&gt;Alerts are essential, but sometimes you need a little quiet—especially during planned maintenance or when you’re troubleshooting known issues. That’s where Grafana’s &lt;strong&gt;silencing&lt;/strong&gt; feature comes in.&lt;/p&gt;

&lt;p&gt;With silences, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Temporarily mute alerts for specific services, applications, or instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Avoid unnecessary notifications during deployments or maintenance windows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensure critical alerts still get through by selectively silencing less important ones.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Silencing ensures that you’re only notified of issues when it truly matters—no more alert fatigue!&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;My Journey with Grafana and Observability&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff91qf8gcwhgef70z9ebx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff91qf8gcwhgef70z9ebx.png" alt=" " width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As someone who recently stepped into the world of observability, Grafana quickly became my go-to tool. While initially focused on metrics, I soon realized the true power of Grafana lies in its ability to combine logs, traces, and metrics into a single, cohesive observability platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Here’s how I’ve leveraged Grafana in my own projects:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I connected &lt;strong&gt;Prometheus&lt;/strong&gt; to Grafana to track key system metrics like CPU usage, memory consumption, and latency from Windows systems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I explored logs with &lt;strong&gt;Loki&lt;/strong&gt;, which allowed me to visualize log data, trace errors, and pinpoint issues from Linux and Windows systems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using Grafana Alerts and silences, I set up proactive monitoring that helped me identify and address issues before they affected users.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Grafana has transformed the way I approach observability, giving me the tools to see the big picture and act on the small details. It’s not just about visualization—it’s about &lt;strong&gt;control, insight, and continuous improvement&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Start Your Grafana Journey Today!&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Ready to embrace the power of Grafana? Here are a few steps to help you get started on your observability journey:&lt;/p&gt;

&lt;p&gt;💠 &lt;strong&gt;Download and Install&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Grafana is easy and straightforward to install. &lt;a href="https://www.google.com/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=2ahUKEwjjwOGS-72IAxU2UGwGHY5XFfEQFnoECAkQAQ&amp;amp;url=https%3A%2F%2Fgrafana.com%2Fgrafana%2Fdownload&amp;amp;usg=AOvVaw2KtPYWYhbJp5AVo-bR8cUB&amp;amp;opi=89978449" rel="noopener noreferrer"&gt;Download&lt;/a&gt; the appropriate package for your operating system and follow the setup instructions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💠 &lt;strong&gt;Connect Data Sources&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure Grafana to connect to your data sources, whether it’s Prometheus, Loki, Tempo, or others, and start building custom dashboards. Check out the &lt;a href="https://www.google.com/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=2ahUKEwjStfWL-72IAxVoRmwGHWoGHAAQFnoECBkQAQ&amp;amp;url=https%3A%2F%2Fgrafana.com%2Fdocs%2Fgrafana%2Flatest%2Fdatasources%2F&amp;amp;usg=AOvVaw0gKqqimBAKEw2XZfJh9hRE&amp;amp;opi=89978449" rel="noopener noreferrer"&gt;Grafana documentation&lt;/a&gt; for detailed setup guides.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💠 &lt;strong&gt;Explore Features&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dive into Grafana’s powerful features, from visualizations to alerting and beyond. The &lt;a href="https://www.google.com/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=2ahUKEwj958m8-r2IAxVOVWwGHXYbJC0QFnoECAgQAQ&amp;amp;url=https%3A%2F%2Fcommunity.grafana.com%2F&amp;amp;usg=AOvVaw2coHVsNBftjjV5CJLAn1z3&amp;amp;opi=89978449" rel="noopener noreferrer"&gt;Grafana community&lt;/a&gt; is full of plugins and tutorials to help you master the tool.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💠 &lt;strong&gt;Set Alerts and Silences&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up your &lt;a href="https://www.google.com/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=2ahUKEwizl7vT-r2IAxVoS2wGHaIMJ3wQFnoECAkQAQ&amp;amp;url=https%3A%2F%2Fgrafana.com%2Fdocs%2Fgrafana%2Flatest%2Falerting%2F&amp;amp;usg=AOvVaw3-2gfiOXHODUP8Vn6LH53Q&amp;amp;opi=89978449" rel="noopener noreferrer"&gt;alerts&lt;/a&gt; and don’t forget to use &lt;a href="https://www.google.com/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=2ahUKEwjmjZ7e-r2IAxXNdmwGHRPUMe8QFnoECBkQAQ&amp;amp;url=https%3A%2F%2Fgrafana.com%2Fdocs%2Fgrafana%2Flatest%2Falerting%2Fconfigure-notifications%2Fcreate-silence%2F&amp;amp;usg=AOvVaw3jA08iu8nFxqoolzTylJAC&amp;amp;opi=89978449" rel="noopener noreferrer"&gt;silences&lt;/a&gt; when necessary, so you can stay ahead of any issues without being overwhelmed by notifications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Grafana isn’t just a tool—it’s your partner in observability. It empowers you to visualize, act, and optimize your systems, all in one powerful platform. So, why wait? &lt;a href="https://grafana.com/docs/grafana/latest/getting-started/build-first-dashboard/" rel="noopener noreferrer"&gt;&lt;strong&gt;Start your Grafana journey today&lt;/strong&gt;&lt;/a&gt;, and unlock the full potential of your data!&lt;/p&gt;

&lt;h3&gt;
  
  
  Stay Tuned!
&lt;/h3&gt;

&lt;p&gt;In upcoming blogs, we'll delve deeper into creating effective Grafana dashboards, setting up alerts, and exploring more advanced features to enhance your observability capabilities. Don’t miss out!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why Your System Needs Observability - The Beginner Guide Every Dev Should Read</title>
      <dc:creator>Metta Surendhar</dc:creator>
      <pubDate>Thu, 19 Dec 2024 14:00:00 +0000</pubDate>
      <link>https://forem.com/mettasurendhar/observability-simplified-a-first-timers-guide-to-system-health-53nj</link>
      <guid>https://forem.com/mettasurendhar/observability-simplified-a-first-timers-guide-to-system-health-53nj</guid>
      <description>&lt;p&gt;Ever wondered how tech giants keep their systems running smoothly even when handling millions of users? Or maybe you're curious about how you can ensure your own projects are rock solid? The answer lies in a little magic called Observability—and today, we’re going to dive right into it!&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s the Buzz About Observability?
&lt;/h2&gt;

&lt;p&gt;Imagine you’re debugging your code without any tools—no console logs, no debuggers, nothing but the code itself. Frustrating, right? Now, scale that up to managing an entire application or a complex system. That’s where observability comes in—it’s like having a comprehensive debugger for your entire system.  &lt;/p&gt;

&lt;p&gt;Observability allows you to understand what's happening inside your applications by analyzing the data they generate. With observability, you can identify and resolve issues before they escalate, optimize performance, and ensure everything runs smoothly.&lt;/p&gt;




&lt;h2&gt;
  
  
  Observability vs. Monitoring: What’s the Difference?
&lt;/h2&gt;

&lt;p&gt;You might think, “Isn’t observability just a fancy term for monitoring?” Not quite. While both are critical for system reliability, they serve different purposes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; is like setting up health checks on your system. It watches specific metrics or logs and alerts you when something goes wrong, like high CPU usage or a failed API request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt; goes beyond that—it’s about understanding why things are happening. Think of it like having the ability to step through the running code of your system in real-time, understanding each decision and interaction. It’s not just knowing something went wrong but also how and why it happened.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In essence, monitoring tells you when there's an issue, while observability helps you understand the root cause.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Three Pillars of Observability
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1oxqi3dz798xv2gykcu1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1oxqi3dz798xv2gykcu1.png" alt=" " width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Observability works by collecting and analyzing three types of telemetry data—logs, metrics, and traces. To fully understand observability, it’s essential to grasp each of them:&lt;/p&gt;

&lt;h3&gt;
  
  
  Logs:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Logs are the detailed records of what’s happening inside your system. They capture events, errors, and other critical information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For developers, logs are like the print statements in your code—they help you trace the flow of execution and understand what happened when an issue occurred.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They help you understand what actions were taken at specific times. They’re invaluable for troubleshooting specific issues, like why a server crashed or why a user experienced an error.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
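
&lt;p&gt;To make this concrete, here’s a minimal sketch of a structured log line emitted from the shell. A real application would use a logging library, but the shape is the same: timestamp, level, message, context (the field names here are illustrative, not a fixed standard):&lt;/p&gt;

```shell
# Emit one structured (JSON) log line: timestamp, severity, message, context.
# Structured fields make logs searchable instead of free-form print statements.
ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
printf '{"time":"%s","level":"error","msg":"payment failed","order_id":42}\n' "$ts"
```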

&lt;h3&gt;
  
  
  Metrics:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Metrics are numerical data that represent the performance and health of your system. They include things like CPU usage, memory consumption, request latency, and error rates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They give you a quick snapshot of your system’s overall state. Think of them as the performance stats of your application, similar to how you’d monitor frame rates in a video game to ensure smooth performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They’re crucial for setting up alerts that notify you when something goes wrong, like a sudden spike in latency or a drop in request rates.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
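
&lt;p&gt;Concretely, most metric pipelines expose these numbers in the Prometheus exposition format: a metric name, optional labels, and a value, served as plain text on an endpoint that a scraper polls. A minimal sketch (the metric name and value are illustrative):&lt;/p&gt;

```shell
# A counter in Prometheus exposition format: HELP/TYPE comments, then
# name{labels} value. A scraper reads lines like these on every poll.
metrics='# HELP http_requests_total Total HTTP requests served.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027'
printf '%s\n' "$metrics"
```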

&lt;h3&gt;
  
  
  Traces:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Traces follow the path of a request as it moves through various services in your system. They help you visualize how different parts of your application interact and where bottlenecks or errors might occur.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They are like following a breadcrumb trail through your code, seeing exactly where each function call leads and how it impacts the system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This is especially important in microservices architectures, where understanding the interaction between services is key to diagnosing performance issues.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
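
&lt;p&gt;In practice, that breadcrumb trail is carried between services as trace context. The W3C &lt;code&gt;traceparent&lt;/code&gt; header, for example, packs a trace ID and a span ID into one line that every service forwards and extends (the IDs below are illustrative sample values):&lt;/p&gt;

```shell
# W3C trace context header layout: version-traceid-spanid-flags.
# Every service records its own span under the same 32-hex-digit trace id,
# which is what lets a tracing backend stitch the request path together.
tp='00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01'
printf 'traceparent: %s\n' "$tp"
```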

&lt;p&gt;By combining these data types, observability tools can offer a holistic view of your system’s health and behavior, allowing you to identify and fix problems faster.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Should You Care About Observability?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3d30uu182w6w7rt1916.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3d30uu182w6w7rt1916.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, why all the fuss about observability? Here’s why it matters:&lt;/p&gt;

&lt;h3&gt;
  
  
  Proactive Problem-Solving:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Observability lets you catch issues before your users do.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Instead of waiting for an error report, you can detect and resolve problems early, ensuring a smoother user experience and less downtime.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Optimized Performance:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;By keeping an eye on metrics and traces, you can identify inefficiencies and optimize your system to run faster and more efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This is crucial whether you're running a small application or a large-scale distributed system.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Enhanced Collaboration:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Observability data acts as a common language for your team.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Developers, DevOps engineers, and SREs can all work from the same data, making it easier to collaborate on solving problems and improving the system.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Getting Started with Observability
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszx2am5av78cq80coily.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszx2am5av78cq80coily.png" alt=" " width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ready to bring observability into your projects? Here’s how to get started:&lt;/p&gt;

&lt;h3&gt;
  
  
  Choose Your Tools:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Tools like Grafana, Prometheus, and Loki are great for getting your observability stack up and running.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each specializes in different aspects—metrics, logs, and traces—so you can tailor your setup to your needs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Set Up Monitoring:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Start small by setting up monitoring for your most critical systems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Track basic metrics like CPU, memory, and disk usage to understand your system’s normal behavior.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
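
&lt;p&gt;Before wiring up a full agent, you can get a feel for these numbers straight from the shell. A real setup would export them continuously with something like node_exporter; this is just a one-off snapshot:&lt;/p&gt;

```shell
# One-off snapshot of the basics: load average and root-disk usage.
# (An exporter or agent would publish these continuously instead.)
echo "load:  $(uptime)"
echo "disk:  $(df -P / | awk 'NR==2 {print $5 " used on /"}')"
```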

&lt;h3&gt;
  
  
  Implement Alerts:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Alerts are your early warning system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up thresholds for your metrics so you’ll be notified the moment something goes off the rails.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
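
&lt;p&gt;As a sketch, here’s what such a threshold looks like as a Prometheus alerting rule. It assumes node_exporter’s &lt;code&gt;node_cpu_seconds_total&lt;/code&gt; metric; the rule name and threshold are illustrative:&lt;/p&gt;

```yaml
# alert-rules.yml — fire when average CPU usage stays above 90% for 10 minutes.
groups:
  - name: host-health
    rules:
      - alert: HighCpuUsage
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "CPU above 90% for 10 minutes on {{ $labels.instance }}"
```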

&lt;h3&gt;
  
  
  Explore and Experiment:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Observability is a vast field, and there’s always more to learn.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Experiment with different tools and techniques to find what works best for your systems.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  My Journey into Observability
&lt;/h2&gt;

&lt;p&gt;In my work, I had the opportunity to explore and implement observability using various tools. I extracted metrics from Windows and Linux logs through the Cribl TCP source, processed them in Cribl Stream, and then used Prometheus to store and visualize the data on Grafana dashboard panels.&lt;/p&gt;

&lt;p&gt;I also set up alerts for key metrics like CPU, disk, and memory using Grafana Alertmanager and Mimir, ensuring that any critical issues were immediately flagged. Additionally, I utilized silences in Grafana to manage and suppress alerts during maintenance windows or non-critical periods.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why You Should Start Today
&lt;/h2&gt;

&lt;p&gt;Whether you’re managing a large-scale system or just starting out with a small project, observability is key to ensuring reliability and performance. It’s like having superpowers for your code—powers that let you see inside your systems and make sure everything’s running just the way it should.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stay Tuned for More!
&lt;/h2&gt;

&lt;p&gt;I’m excited to share more about observability in my upcoming posts, where we’ll dive deeper into specific tools and techniques. Whether you’re a beginner or a seasoned pro, there’s always more to learn, so stay tuned!&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>firstyearincode</category>
      <category>grafana</category>
      <category>observability</category>
    </item>
    <item>
      <title>Why Running Containers as Root Is Risky - Use Rootless Containers Instead</title>
      <dc:creator>Metta Surendhar</dc:creator>
      <pubDate>Thu, 12 Dec 2024 14:00:00 +0000</pubDate>
      <link>https://forem.com/mettasurendhar/rootless-containers-what-they-are-and-why-you-should-use-them-3p16</link>
      <guid>https://forem.com/mettasurendhar/rootless-containers-what-they-are-and-why-you-should-use-them-3p16</guid>
      <description>&lt;p&gt;Running containers with root privileges has long been recognized as a security risk. When a container operates with root access, it potentially exposes the host system to severe vulnerabilities. If that container is compromised, an attacker could gain root-level control over the entire host, which is why the concept of "rootless" containers is so important.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbp74ph6cb1sm0t4fvgk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgbp74ph6cb1sm0t4fvgk.png" alt=" " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Rootless Containers: What They Are and Why They Matter
&lt;/h2&gt;

&lt;p&gt;Rootless containers are designed to run without requiring root privileges on the host system. This means that even if a container is breached, the attacker wouldn't gain root access to the host. Rootless containers enhance security by significantly reducing the potential damage that could be done by a compromised container.&lt;/p&gt;

&lt;h3&gt;
  
  
  Here’s how they work:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;User Namespaces:&lt;/strong&gt; Rootless containers leverage user namespaces, a feature of the Linux kernel that maps user and group IDs within the container to different, non-root IDs on the host. So, even if a process runs as "root" inside the container, it’s actually operating as a non-root user on the host, ensuring the host remains protected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Control Groups (cgroups):&lt;/strong&gt; These manage and limit resource usage like CPU, memory, and disk I/O for containerized processes, preventing any single container from consuming too many resources on the host.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Seccomp (Secure Computing Mode):&lt;/strong&gt; This filters system calls made by containerized applications, restricting what actions they can perform on the host, thereby reducing the attack surface.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SELinux and AppArmor:&lt;/strong&gt; These are security modules that enforce access controls on containerized processes, further isolating them from the host system.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
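
&lt;p&gt;You can see the user-namespace mapping yourself: inside a rootless container, &lt;code&gt;/proc/self/uid_map&lt;/code&gt; shows which host IDs the container’s IDs translate to. Here’s a sketch of reading one mapping line (the sample values are illustrative):&lt;/p&gt;

```shell
# A uid_map line has three fields: container-uid, host-uid, range.
# "0 1000 1" means: uid 0 (root) inside the container is uid 1000 on the host.
line="0 1000 1"   # sample line; read the real one with: cat /proc/self/uid_map
set -- $line
echo "container uid $1 maps to host uid $2 (range $3)"
```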




&lt;h2&gt;
  
  
  Why You Should Care About Rootless Operation
&lt;/h2&gt;

&lt;p&gt;Running containers as root is risky business. Any exploit within a container running as root could allow an attacker to break out of the container and gain root access to your host. This could spell disaster for the entire system. Rootless containers, on the other hand, are designed to prevent this scenario. Even if an attacker manages to breach the container, they’ll find themselves with limited access and unable to escalate privileges on the host.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Check Container Privileges in Docker
&lt;/h3&gt;

&lt;p&gt;If you’re using Docker and want to see how a container behaves with root privileges, you can explicitly start one as the root user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;codedocker run —user root -it my-root-container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command explicitly starts the container as the root user, so every process inside it runs with root privileges—something you typically want to avoid in production due to the security risks involved.&lt;/p&gt;

&lt;p&gt;To run a Docker container in a more secure, non-root mode, use the &lt;code&gt;--user&lt;/code&gt; flag to specify a non-root user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;codedocker run --user 1000:1000 -it my-rootless-container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, &lt;code&gt;1000:1000&lt;/code&gt; refers to a non-root user and group ID. This ensures your container operates with limited privileges, enhancing overall security.&lt;/p&gt;
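
&lt;p&gt;Rather than passing &lt;code&gt;--user&lt;/code&gt; on every run, you can bake a non-root user into the image itself. A minimal sketch (the user, IDs, and base image are illustrative):&lt;/p&gt;

```dockerfile
# Create an unprivileged user and make it the default for every container
# started from this image; no --user flag is needed at run time.
FROM alpine:3.20
RUN addgroup -g 1000 app && adduser -u 1000 -G app -D -H app
USER app
```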




&lt;h2&gt;
  
  
  Why Podman Makes Rootless the Default
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0301a151yyudlnb5nifr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0301a151yyudlnb5nifr.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While Docker supports rootless containers, it wasn’t designed with this as the default setting. Podman, on the other hand, was built from the ground up with rootless operation as the standard. This makes Podman inherently more secure, especially for those environments where security is a top priority.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Benefits of Running Rootless
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduced Attack Surface:&lt;/strong&gt; Rootless containers minimize the chances of a successful privilege escalation attack.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compliance with Security Policies:&lt;/strong&gt; If your organization mandates that applications must not run as root, rootless containers help you stay compliant.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improved Security Posture:&lt;/strong&gt; By running containers with the least amount of privilege necessary, you’re actively reducing your risk exposure.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Ensuring that your containers are running without root privileges is a critical step toward securing your containerized environments. Whether you’re using Docker, Podman, or another container engine, adopting rootless containers represents a significant leap forward in security. By limiting the privileges of your containerized processes, you’re safeguarding your infrastructure against potential exploits and attacks.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>podman</category>
      <category>security</category>
    </item>
    <item>
      <title>Why Modern Apps Run in Containers - A Journey from FreeBSD Jail to Today</title>
      <dc:creator>Metta Surendhar</dc:creator>
      <pubDate>Thu, 05 Dec 2024 14:00:00 +0000</pubDate>
      <link>https://forem.com/mettasurendhar/the-evolution-and-power-of-containers-from-freebsd-jail-to-docker-and-beyond-2ikm</link>
      <guid>https://forem.com/mettasurendhar/the-evolution-and-power-of-containers-from-freebsd-jail-to-docker-and-beyond-2ikm</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;The world of software development has changed a lot in the past decade, mainly because of container technology. What started as a small solution for isolating processes is now a key part of how we develop and deploy apps. From the early days of FreeBSD Jail to the big impact of Docker, containers have changed how we build, deploy, and scale applications. In this blog, we’ll look at the power of containers, the important role of Docker, the history of container technology, and the ongoing efforts to standardize this growing field.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko3oxel27f5p8w7elegx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko3oxel27f5p8w7elegx.png" alt=" " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  The Power of Containers
&lt;/h3&gt;

&lt;p&gt;Containers are essential in modern software development, especially for scalable, cloud-native apps. Their power comes from providing lightweight, portable, and consistent environments, with key features being isolation and image immutability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Isolation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;One of the most significant advantages of containers is their ability to encapsulate applications and their dependencies in isolated environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each container runs independently, with its own filesystem, network interface, and process space.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This isolation ensures that applications operate uniformly regardless of the underlying infrastructure, preventing conflicts between applications and enabling consistent performance across different environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Isolation also enhances security by limiting the scope of access that an application has, reducing the risk of unauthorized interactions with the host system or other containers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Image Immutability:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A container image is an immutable, pre-packaged environment that includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application code&lt;/li&gt;
&lt;li&gt;Runtime&lt;/li&gt;
&lt;li&gt;Libraries&lt;/li&gt;
&lt;li&gt;Configuration files necessary to run the application&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Once an image is created, it remains unchanged across different stages of deployment—from development to production.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;This immutability ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consistency and reliability&lt;/li&gt;
&lt;li&gt;Elimination of the "it works on my machine" problem&lt;/li&gt;
&lt;li&gt;Simplified rollbacks in case of deployment issues&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;The ability to create a consistent environment that can be replicated across multiple platforms is a key reason why containers are vital in modern development practices.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;




&lt;h3&gt;
  
  
  Why Docker Became a Game-Changer
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fti19mrbz937b01eqeg3u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fti19mrbz937b01eqeg3u.png" alt=" " width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker, introduced in 2013 by the company then known as dotCloud, revolutionized the way developers build, ship, and run applications. Before Docker, deploying applications across different environments was fraught with challenges due to discrepancies in operating systems, software versions, and configurations. Docker addressed these issues by providing a simple, consistent way to package applications and their dependencies into a single, portable container image.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simplified Packaging and Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Docker enabled developers to adopt a "build once, run anywhere" approach.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By bundling an application and its dependencies into a single container image, Docker eliminated inconsistencies between development, testing, and production environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This significantly reduced the time and effort required to deploy applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It made it easier for developers to focus on building features rather than troubleshooting deployment issues.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Developer-Friendly Tools:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Docker provided a suite of tools that made containerization accessible to a broader audience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Docker CLI allowed developers to easily create, manage, and share container images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker Hub, a centralized repository for container images, offered a vast library of pre-built images that could be used to jumpstart development.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This ease of use, combined with the availability of a wide range of ready-to-use images, made Docker an attractive option for developers looking to streamline their workflows.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Ecosystem and Community Support:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Docker’s open-source nature and the vibrant community around it led to the rapid development of complementary tools and frameworks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker Compose simplified the management of multi-container applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker Swarm provided native container orchestration capabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The integration of these tools into existing workflows further solidified Docker's position as the go-to solution for containerization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker set the standard for the industry and inspired the development of many other container-related technologies and standards.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  The Origins of Container Technology
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2s763s6wbh0kz7e21gw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2s763s6wbh0kz7e21gw.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The concept of containers is rooted in the idea of process isolation, a principle that dates back to the early days of Unix. However, the evolution of containers as we know them today can be traced through several key developments, each building upon the last to create the sophisticated container environments we use today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FreeBSD Jail (2000):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The FreeBSD Jail was one of the first implementations of process isolation at the operating system level.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Introduced in 2000, it allowed administrators to partition a FreeBSD system into multiple independent "jails," each with its own filesystem, network interfaces, and users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This early form of containerization provided a way to securely isolate applications while sharing the same OS kernel.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It laid the groundwork for future container technologies.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Linux Containers (LXC) (2008):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Linux Containers (LXC) brought the concept of containers to the Linux ecosystem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enabled the creation of lightweight, isolated environments using kernel features like cgroups (control groups) and namespaces.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;LXC was the first comprehensive solution that allowed multiple Linux containers to run on a single host with a high degree of isolation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Marked a significant step forward in container-based virtualization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provided a more flexible and scalable alternative to traditional virtual machines.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Docker (2013):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Docker took the concepts introduced by LXC and built upon them, offering a more user-friendly and developer-focused approach.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By abstracting the complexities of LXC and adding features like Docker Hub and Docker Compose, Docker made containerization accessible to a wider audience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker’s innovation was not just in the technology itself but in how it packaged and presented that technology, making it easy to use and integrate into existing workflows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The introduction of Docker marked the beginning of widespread container adoption, transforming containers from a niche technology into a mainstream solution for modern application development.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The progression from FreeBSD Jail to Docker illustrates the evolution of container technology from a specialized solution for server partitioning to a fundamental component of modern application development and deployment.&lt;/p&gt;




&lt;h3&gt;
  
  
  Standardizing Containers: The Open Container Initiative
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1yffdbxvbajf74ews5g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1yffdbxvbajf74ews5g.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As containers gained popularity, the need for standardization became increasingly apparent. The rapid adoption of containers by the industry led to the creation of various container formats and runtimes, which, without standardization, could lead to fragmentation and compatibility issues. To address these concerns, the Open Container Initiative (OCI) was formed in 2015.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Image Specification:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The OCI Image Specification defines the format and structure of a container image.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This standardization ensures that container images are portable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Container images can be reliably used across different platforms and tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This fosters a more open and interoperable ecosystem.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Runtime Specification:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The OCI Runtime Specification defines how to run a container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It includes how to create, manage, and destroy container instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By standardizing the runtime environment, the OCI ensures that containers behave consistently across different environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This reduces the risk of incompatibilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It simplifies the deployment process.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
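&lt;p&gt;The runtime spec is likewise a JSON document: each container bundle carries a &lt;code&gt;config.json&lt;/code&gt; describing the process to run. A trimmed-down sketch, showing only a few of the fields the spec defines:&lt;/p&gt;

```json
{
  "ociVersion": "1.0.2",
  "process": {
    "terminal": false,
    "user": { "uid": 0, "gid": 0 },
    "args": [ "sh" ],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs",
    "readonly": true
  }
}
```

&lt;p&gt;Any spec-compliant runtime, such as runc or crun, can take this bundle and produce the same container behavior.&lt;/p&gt;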

&lt;p&gt;&lt;strong&gt;Distribution Specification:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The OCI Distribution Specification covers how to distribute and share container images, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transport&lt;/li&gt;
&lt;li&gt;Storage&lt;/li&gt;
&lt;li&gt;Retrieval&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;This specification standardizes the way container images are pushed to and pulled from registries.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;It ensures that containers can be easily shared and deployed across various platforms and environments.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
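&lt;p&gt;In practice, the distribution spec is an HTTP API. A hedged sketch of how a tag is resolved to a manifest; the registry host and repository name here are hypothetical:&lt;/p&gt;

```shell
# Ask the registry (v2 API) for the manifest behind a tag
curl -s \
  -H "Accept: application/vnd.oci.image.manifest.v1+json" \
  "https://registry.example.com/v2/myapp/manifests/latest"

# Each layer listed in the manifest is then fetched by digest:
#   GET /v2/myapp/blobs/sha256:...
```

&lt;p&gt;Push follows the same endpoints in reverse: blobs are uploaded first, then the manifest that references them.&lt;/p&gt;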

&lt;p&gt;These standards have played a crucial role in the continued growth and adoption of container technology, ensuring that containers remain a reliable and versatile tool for developers and organizations alike.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrapping Up
&lt;/h3&gt;

&lt;p&gt;The journey from the early days of process isolation to the modern era of containerization has been marked by significant advancements in technology, driven by the need for more efficient, scalable, and secure solutions. Containers have become an essential part of the software development lifecycle, offering unparalleled benefits in terms of isolation, consistency, and portability.&lt;/p&gt;

&lt;p&gt;Docker's introduction revolutionized the way developers approached containerization, setting the stage for widespread adoption and the development of a thriving ecosystem. As the industry continues to evolve, initiatives like the Open Container Initiative are ensuring that containers remain standardized, interoperable, and accessible to all.&lt;/p&gt;

&lt;p&gt;As developers and organizations look to the future, embracing the power of containers will be key to building and deploying applications that are not only robust and scalable but also secure and consistent across all environments. Whether you’re just starting with containers or looking to optimize your existing workflows, understanding the history, power, and potential of containers is essential for staying ahead in the rapidly changing world of software development.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Tired of Docker's Limitations? Here's Why Podman Should Be Your Next Container Engine</title>
      <dc:creator>Metta Surendhar</dc:creator>
      <pubDate>Thu, 28 Nov 2024 14:00:00 +0000</pubDate>
      <link>https://forem.com/mettasurendhar/exploring-podman-and-beyond-open-source-alternatives-to-docker-for-secure-containerization-59kd</link>
      <guid>https://forem.com/mettasurendhar/exploring-podman-and-beyond-open-source-alternatives-to-docker-for-secure-containerization-59kd</guid>
      <description>&lt;p&gt;Attending the CNCF Chennai Meetup was an enlightening experience, particularly with the insightful talk by Ram, Chief Evangelist at the Cloud Foundry Foundation. His presentation, titled &lt;strong&gt;"Cloud Native Containers: Myth, Truth, or Marketing?,"&lt;/strong&gt; provided a comprehensive overview of the evolution of containers, the rise of Docker, and the emergence of modern, efficient, open-source alternatives. &lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4trabpatchqek683dlmh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4trabpatchqek683dlmh.png" alt=" " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This blog summarizes and expands on Ram's talk, delving into the world of container technology and exploring the tools developers can use today to build and manage containers more securely and effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  Podman: The Leading Open Source Docker Alternative
&lt;/h2&gt;

&lt;p&gt;As Docker gained popularity, the demand for more flexible, secure, and open alternatives also grew. Enter Podman—an open-source container engine developed by Red Hat that has quickly become a preferred choice for many developers and organizations. Podman offers many of Docker’s features but introduces significant enhancements tailored to meet the needs of today’s security-conscious and compliance-driven environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring Podman
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://podman.io/" rel="noopener noreferrer"&gt;Podman&lt;/a&gt; provides a comprehensive, daemonless container management solution that is fully compatible with the Open Container Initiative (OCI) standards. This ensures that it can interoperate seamlessly with Docker images and other OCI-compliant tools.&lt;/p&gt;




&lt;h3&gt;
  
  
  Key features of Podman include:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Daemonless Architecture:&lt;/strong&gt; &lt;br&gt;
Unlike Docker, which relies on a central daemon to manage containers, Podman operates without a long-running background process. Each container is managed as an individual process by the user who initiated it. This approach not only improves resource efficiency but also enhances security by reducing the attack surface—there’s no single point of failure or a privileged process that can be exploited.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rootless Operation:&lt;/strong&gt; Security is at the heart of Podman’s design. By default, Podman runs containers as non-root users, minimizing the risk of privilege escalation attacks. This is a crucial advantage over Docker, where the daemon typically requires root privileges, posing potential security risks. While Docker can be configured to run in a rootless mode, Podman makes rootless operation the standard, ensuring a safer environment by default.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CLI Compatibility with Docker:&lt;/strong&gt; Podman’s command-line interface (CLI) is designed to be nearly identical to Docker’s, making it easy for developers to switch from Docker to Podman without having to learn new commands. This compatibility even extends to Docker Compose, thanks to the &lt;code&gt;podman-compose&lt;/code&gt; tool, which replicates Docker Compose functionality using Podman.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
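&lt;p&gt;To illustrate the CLI compatibility, here is a short session sketch; each command mirrors its Docker equivalent one-for-one:&lt;/p&gt;

```shell
# Pull and run an image exactly as you would with Docker
podman pull docker.io/library/nginx:alpine
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine
podman ps

# Clean up
podman stop web
podman rm web

# Many teams bridge the transition with a simple alias
alias docker=podman
```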

&lt;p&gt;These features make Podman a compelling alternative for developers and organizations looking to enhance security, reduce dependency on privileged processes, and maintain compatibility with existing container workflows.&lt;/p&gt;




&lt;h3&gt;
  
  
  Top Open Source Tools for Container Builds
&lt;/h3&gt;

&lt;p&gt;Beyond Podman, several other open-source tools offer unique features and capabilities for container builds. These tools are integral to modern DevOps practices, providing flexibility and efficiency in container creation and management.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://buildah.io/" rel="noopener noreferrer"&gt;1. Unleashing Buildah&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Buildah is the backbone of Podman for building container images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It allows users to create images from scratch or customize existing ones.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Offers the flexibility to build images without needing a Dockerfile.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Compliance with OCI standards ensures compatibility with any OCI-compliant container runtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;These qualities make Buildah a powerful and versatile tool for developers looking to streamline their image-building process.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
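&lt;p&gt;As a sketch of the Dockerfile-free workflow, Buildah lets you script an image build step by step; the package and image names below are illustrative:&lt;/p&gt;

```shell
# Start a working container from a base image
ctr=$(buildah from docker.io/library/alpine:latest)

# Mutate its filesystem with ordinary commands
buildah run "$ctr" -- apk add --no-cache python3
buildah config --entrypoint '["python3"]' "$ctr"

# Commit the result as an OCI image and clean up
buildah commit "$ctr" my-python-image
buildah rm "$ctr"
```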

&lt;h4&gt;
  
  
  &lt;a href="https://github.com/GoogleContainerTools/kaniko" rel="noopener noreferrer"&gt;2. Kaniko: Building in Kubernetes&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kaniko is a build tool designed to run within Kubernetes clusters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It enables developers to build container images inside containers or Kubernetes pods without requiring privileged access to the host.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This is particularly useful in CI/CD pipelines, where security is paramount, and running builds without elevated privileges is a best practice.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kaniko’s integration with Kubernetes makes it a go-to choice for teams leveraging Kubernetes as their primary platform.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
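&lt;p&gt;A typical way to run Kaniko is as an unprivileged pod. The sketch below assumes a hypothetical Git build context, a hypothetical destination registry, and a pre-created &lt;code&gt;regcred&lt;/code&gt; secret holding registry credentials:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/example/app.git
        - --destination=registry.example.com/app:latest
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
  volumes:
    - name: docker-config
      secret:
        secretName: regcred
```

&lt;p&gt;The pod needs no privileged flag or host Docker socket, which is exactly what makes this pattern attractive in CI/CD.&lt;/p&gt;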

&lt;h4&gt;
  
  
  &lt;a href="https://docs.docker.com/build/buildkit/" rel="noopener noreferrer"&gt;3. The Efficiency of BuildKit&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;BuildKit is a highly efficient and modern toolkit for building container images, originally developed as part of the Moby project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It’s now widely used within the Docker ecosystem and beyond.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;BuildKit supports advanced features like parallel builds, build caching, and multi-stage builds, significantly speeding up the image-building process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Its flexibility and performance enhancements make it a robust option for developers looking to optimize their container build workflows.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
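&lt;p&gt;A small Dockerfile sketch showing two BuildKit strengths together, multi-stage builds and cache mounts; the Go project layout is assumed for illustration:&lt;/p&gt;

```dockerfile
# syntax=docker/dockerfile:1
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# BuildKit-only: persist the Go build cache between builds
RUN --mount=type=cache,target=/root/.cache/go-build \
    CGO_ENABLED=0 go build -o /out/app .

# Final stage keeps only the compiled binary
FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

&lt;p&gt;Stages with no dependency on each other are built in parallel, and the cache mount survives across builds, so rebuilds touch only what changed.&lt;/p&gt;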

&lt;h4&gt;
  
  
  &lt;a href="https://nixos.org/" rel="noopener noreferrer"&gt;4. NixOS: A Unique Approach&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;NixOS is an innovative Linux distribution that takes a declarative approach to system configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;While not a traditional container tool, NixOS can take on VM-style workloads using containers, offering an unparalleled level of reproducibility and isolation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This makes NixOS ideal for developers who need a highly controlled and consistent environment for their applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is especially useful in scenarios where precise environment replication is critical.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;a href="https://ko.build/" rel="noopener noreferrer"&gt;5. Ko: Tailored for Go Developers&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ko is a specialized tool designed for Go developers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It simplifies the process of building and deploying Go applications as container images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ko eliminates the need for a Dockerfile.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It automates the creation of optimized, small container images directly from Go source code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This makes Ko particularly valuable in microservices architectures, where efficient and rapid deployment of Go applications is essential.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
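&lt;p&gt;A minimal Ko invocation might look like this; the registry and import path are hypothetical:&lt;/p&gt;

```shell
# Tell ko where to push images
export KO_DOCKER_REPO=registry.example.com/myteam

# Compile the Go package and publish it as a container image --
# no Dockerfile anywhere in the repository
ko build ./cmd/server
```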

&lt;h4&gt;
  
  
  &lt;a href="https://github.com/chainguard-dev/apko" rel="noopener noreferrer"&gt;6. Apko: Minimal and Secure&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Apko, developed by Chainguard, focuses on building minimal and secure container images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It emphasizes simplicity and ease of auditing, making it an excellent choice for security-conscious organizations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apko’s approach to container image creation helps minimize the attack surface, which is crucial in environments where security is a top priority.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
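&lt;p&gt;Apko images are described declaratively in YAML rather than imperatively in a Dockerfile. A hedged sketch of a minimal configuration; the exact package names depend on the repository you point at:&lt;/p&gt;

```yaml
contents:
  repositories:
    - https://packages.wolfi.dev/os
  packages:
    - wolfi-base
entrypoint:
  command: /bin/sh -l
```

&lt;p&gt;Because the entire image is enumerated up front, auditing what is inside it is straightforward.&lt;/p&gt;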

&lt;h4&gt;
  
  
  &lt;a href="https://buildpacks.io/" rel="noopener noreferrer"&gt;7. Buildpacks: Simplifying Container Creation&lt;/a&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Buildpacks offer a higher-level abstraction for building container images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Originally developed by Heroku, they are now part of the Cloud Native Buildpacks project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Developers can create optimized container images without writing Dockerfiles.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Buildpacks automatically detect the language and dependencies of an application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They generate a container image tailored to the application's needs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They support rebasing, allowing base image updates without rebuilding the entire application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This saves time and resources in the deployment process.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
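&lt;p&gt;With the &lt;code&gt;pack&lt;/code&gt; CLI, the whole detect-and-build flow is a single command; the application name and builder below are illustrative:&lt;/p&gt;

```shell
# Detect the language, select buildpacks, and produce an image
pack build my-app --builder paketobuildpacks/builder-jammy-base --path .

# Later, swap in a patched base image without rebuilding the app layers
pack rebase my-app
```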




&lt;h2&gt;
  
  
  Building Containers: Reproducibility, Isolation, and Security
&lt;/h2&gt;

&lt;p&gt;As container usage grows, the importance of reproducible, isolated, and secure builds becomes increasingly clear. Here are some critical considerations for achieving these goals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reproducibility:&lt;/strong&gt; A reproducible container build process ensures that the same container image can be generated consistently, with identical content, regardless of when or where it is built. This is vital for debugging, auditing, and maintaining compliance with regulations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Isolation:&lt;/strong&gt; Containers should be isolated not only during runtime but also throughout the build process. This prevents potential conflicts and ensures that the build environment does not influence the final container image, leading to more reliable and predictable deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Parameterless Builds:&lt;/strong&gt; To enhance build reliability, containers should be built without relying on external parameters or environment variables, which can introduce variability and unpredictability into the build process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Provenance:&lt;/strong&gt; Understanding the origin of every component within a container is essential for security and compliance. Provenance tools help track and verify the source of software components, ensuring that your containers are built from trusted and secure sources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
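&lt;p&gt;These principles translate into concrete habits in a Containerfile; the digest below is an illustrative placeholder for a real one:&lt;/p&gt;

```dockerfile
# Pin the base image by digest, not by a mutable tag, so every build starts
# from byte-identical inputs
FROM docker.io/library/alpine@sha256:...

# Avoid build args and environment-dependent steps; state dependencies explicitly
RUN apk add --no-cache python3
```

&lt;p&gt;Pinning by digest gives reproducibility, and keeping the file free of &lt;code&gt;ARG&lt;/code&gt;s keeps the build parameterless.&lt;/p&gt;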




&lt;h2&gt;
  
  
  Embracing the Future of Container Technology
&lt;/h2&gt;

&lt;p&gt;The container technology landscape has evolved significantly since the early days of Docker. Developers now have access to a wide array of open-source tools, each offering unique features that cater to different aspects of containerization. Whether you’re seeking a secure, rootless container engine like Podman or specialized build tools like Buildah, Kaniko, or Buildpacks, the open-source community has created robust alternatives that can meet the diverse needs of modern software development.&lt;/p&gt;

&lt;p&gt;As you explore these tools, it’s crucial to align them with your specific requirements—be it security, performance, ease of use, or integration with your existing CI/CD pipeline. By selecting the right tool for the job, you can ensure that your containerized applications are built and deployed efficiently, securely, and with minimal overhead.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>docker</category>
      <category>security</category>
      <category>podman</category>
    </item>
  </channel>
</rss>
