<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Das</title>
    <description>The latest articles on Forem by Das (@bsd).</description>
    <link>https://forem.com/bsd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3823681%2Fc5d2bd45-8483-4a1f-b579-4e347c12a438.png</url>
      <title>Forem: Das</title>
      <link>https://forem.com/bsd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/bsd"/>
    <language>en</language>
    <item>
      <title>Querying Your Test Results with OpenSearch MCP</title>
      <dc:creator>Das</dc:creator>
      <pubDate>Sat, 11 Apr 2026 16:23:45 +0000</pubDate>
      <link>https://forem.com/bsd/querying-your-test-results-with-opensearch-mcp-51no</link>
      <guid>https://forem.com/bsd/querying-your-test-results-with-opensearch-mcp-51no</guid>
      <description>&lt;p&gt;&lt;em&gt;Ask your OpenSearch data questions in plain English using any MCP-compatible AI assistant.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;&lt;em&gt;This works with any test framework — Robot Framework is used here because that is where the data already lives.&lt;/em&gt;&lt;/h2&gt;

&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/bsd/real-time-test-results-in-robot-framework-15nh"&gt;previous part of this series&lt;/a&gt;, every Robot Framework test result was streamed into OpenSearch the moment it completed — failures visible in a live dashboard without waiting for the suite to finish.&lt;/p&gt;

&lt;p&gt;That part solved visibility. This part solves what happens after the run ends.&lt;/p&gt;

&lt;p&gt;Every result is already in OpenSearch: test name, suite, status, failure message, tags, duration, run ID. The data is there. The question is how to use it efficiently. Building filters in Dashboards and writing DQL queries in Discover works, but it is slow and it breaks focus at exactly the moment you need to act.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://modelcontextprotocol.io" rel="noopener noreferrer"&gt;Model Context Protocol (MCP)&lt;/a&gt; is an open standard that lets AI assistants connect directly to external data sources. OpenSearch has an official MCP server. Connect it to any MCP-compatible AI assistant — Claude, Copilot, or others — and your test data becomes queryable in plain English, from wherever you are already working.&lt;/p&gt;




&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;After a long CI run, the questions are always the same. Which tests failed? Have they failed before? Do they share a tag or a suite? What does the error mean? Is this a test bug or an application bug?&lt;/p&gt;

&lt;p&gt;Answering those manually means switching tools: open Dashboards, build a filter, switch to Discover for the full message, run another query for the historical view. Each step is small. Across a team running multiple pipelines a day, it adds up — and context evaporates while it is happening.&lt;/p&gt;

&lt;p&gt;MCP removes those steps. The AI assistant queries the index directly. You ask in plain English, you get an answer, you stay in the editor.&lt;/p&gt;




&lt;h2&gt;What is MCP?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://modelcontextprotocol.io" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt; is an open standard, originally developed by Anthropic and now widely adopted, for connecting AI assistants to external tools and data sources. Instead of the assistant working from what you paste into a conversation, it calls out to your systems and works from live data.&lt;/p&gt;

&lt;p&gt;OpenSearch published an official MCP server: &lt;code&gt;opensearch-mcp-server-py&lt;/code&gt;. Register it with any MCP-compatible AI assistant and it can query any OpenSearch index directly — no custom integration code, no middleware.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The examples below use Claude Code. Any MCP-compatible assistant follows the same registration pattern.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;Architecture&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────┐
│     Robot Framework (local)     │
│     tests run as normal         │
└──────────────┬──────────────────┘
               │  end_test fires after each test
               ▼
┌─────────────────────────────────┐
│     opensearch_listener.py      │
│     indexes result immediately  │
└──────────────┬──────────────────┘
               │
               ▼
┌─────────────────────────────────┐
│     OpenSearch  (Docker)        │
│     port 9200                   │
│     index: robot-results        │
└──────────┬──────────────────────┘
           │
     ┌─────┴──────────┐
     │                │
     ▼                ▼
┌──────────┐    ┌─────────────────┐
│Dashboard │    │   MCP Server    │
│port 5601 │    │   (local proc)  │
│live view │    └────────┬────────┘
└──────────┘             │
                         ▼
                ┌─────────────────┐
                │  MCP-compatible │
                │  AI assistant   │
                └────────┬────────┘
                         │
                         ▼
                ┌─────────────────┐
                │  plain English  │
                │  queries + fixes│
                └─────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The dashboard and MCP paths run on the same OpenSearch instance. The dashboard stays useful for live monitoring during a run. MCP is the interface for investigation and action after the run.&lt;/p&gt;




&lt;h2&gt;Tech Stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Robot Framework&lt;/strong&gt; — test framework with a listener API for hooking into execution&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenSearch&lt;/strong&gt; — stores every test result as it happens (from part one)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;opensearch-mcp-server-py&lt;/strong&gt; — official OpenSearch MCP server, published by the OpenSearch project&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude Code&lt;/strong&gt; — MCP-compatible AI assistant used in this setup (any MCP-compatible assistant works)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Python&lt;/strong&gt; — ties it all together&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Step by Step&lt;/h2&gt;

&lt;h3&gt;Prerequisites&lt;/h3&gt;

&lt;p&gt;Everything from part one is in place: OpenSearch running in Docker, the listener shipping results, the &lt;code&gt;robot-results&lt;/code&gt; index populated. Verify OpenSearch is healthy before continuing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:9200
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A JSON response means it is ready. If OpenSearch is down when the MCP server registers, it will appear connected but return errors on every query.&lt;/p&gt;
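
&lt;p&gt;It is also worth confirming that the index actually holds documents before wiring up the assistant. A minimal sketch with the &lt;code&gt;opensearch-py&lt;/code&gt; client, using the index name from part one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from opensearchpy import OpenSearch

# Local development instance from part one, no auth.
client = OpenSearch(hosts=["http://localhost:9200"])

# A non-zero count confirms the listener has been shipping results.
count = client.count(index="robot-results")["count"]
print(f"robot-results holds {count} documents")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;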

&lt;h3&gt;1. The MCP Server Package&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;opensearch-mcp-server-py&lt;/code&gt; is already in &lt;code&gt;requirements.txt&lt;/code&gt; and was installed in part one. Nothing new to install.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Security note:&lt;/strong&gt; Only add MCP servers from trusted sources. This package is the &lt;a href="https://github.com/opensearch-project/opensearch-mcp-server-py" rel="noopener noreferrer"&gt;official OpenSearch MCP server&lt;/a&gt;, maintained by the OpenSearch project.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;2. Environment Variables&lt;/h3&gt;

&lt;p&gt;The MCP server reads connection details from environment variables. Create a &lt;code&gt;.env&lt;/code&gt; file in the project root (already in &lt;code&gt;.gitignore&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;OPENSEARCH_URL&lt;/span&gt;=&lt;span class="n"&gt;http&lt;/span&gt;://&lt;span class="n"&gt;localhost&lt;/span&gt;:&lt;span class="m"&gt;9200&lt;/span&gt;
&lt;span class="n"&gt;OPENSEARCH_NO_AUTH&lt;/span&gt;=&lt;span class="n"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;OPENSEARCH_NO_AUTH=true&lt;/code&gt; is for local development only. Never use it on a shared or production instance.&lt;/p&gt;

&lt;h3&gt;3. Register with Claude Code&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude mcp add opensearch &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;OPENSEARCH_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:9200 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;OPENSEARCH_NO_AUTH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--&lt;/span&gt; uv run &lt;span class="nt"&gt;--project&lt;/span&gt; /path/to/results-execution-monitoring python &lt;span class="nt"&gt;-m&lt;/span&gt; mcp_server_opensearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude mcp list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;opensearch: ... ✓ Connected
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;4. Permanent Config for CLI and VS Code&lt;/h3&gt;

&lt;p&gt;The registration above is session-scoped. To persist it across sessions — and have it work in both the Claude Code CLI and the VS Code extension — add it to &lt;code&gt;~/.claude/settings.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"opensearch"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"uv"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"run"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"--project"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"/path/to/results-execution-monitoring"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"python"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"-m"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"mcp_server_opensearch"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"OPENSEARCH_URL"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:9200"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"OPENSEARCH_NO_AUTH"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After saving, reload VS Code (&lt;code&gt;Ctrl+Shift+P&lt;/code&gt; → &lt;em&gt;Developer: Reload Window&lt;/em&gt;). Run &lt;code&gt;claude mcp list&lt;/code&gt; to confirm.&lt;/p&gt;




&lt;h2&gt;Querying Results in Practice&lt;/h2&gt;

&lt;p&gt;Once connected, the assistant queries &lt;code&gt;robot-results&lt;/code&gt; directly.&lt;/p&gt;

&lt;h3&gt;Isolating a specific run&lt;/h3&gt;

&lt;p&gt;Every document in a run carries the same &lt;code&gt;run_id&lt;/code&gt;, which is printed at the start of each run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;What tests failed in run 88b407bf?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The assistant returns test names, suite names, failure messages, and elapsed times as a readable summary — not raw JSON.&lt;/p&gt;
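
&lt;p&gt;For comparison, this is roughly the search the assistant runs on your behalf. A hand-written sketch with the &lt;code&gt;opensearch-py&lt;/code&gt; client, assuming the field names from part one; depending on how the index is mapped, keyword subfields such as &lt;code&gt;status.keyword&lt;/code&gt; may be needed in the filters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from opensearchpy import OpenSearch

client = OpenSearch(hosts=["http://localhost:9200"])

# All failures in one run, equivalent to the plain-English question above.
resp = client.search(
    index="robot-results",
    body={
        "query": {
            "bool": {
                "filter": [
                    {"term": {"run_id": "88b407bf"}},
                    {"term": {"status": "FAIL"}},
                ]
            }
        },
        "size": 50,
    },
)
for hit in resp["hits"]["hits"]:
    doc = hit["_source"]
    print(doc["test"], "-", doc["message"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;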

&lt;p&gt;In CI, pass the build number as the &lt;code&gt;run_id&lt;/code&gt; so results are traceable to a specific pipeline build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; robot &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--listener&lt;/span&gt; opensearch_listener.OpenSearchListener:url&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:9200:run_id&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BUILD_NUMBER&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  tests/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;What failed in build 42?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Understanding failure patterns over time&lt;/h3&gt;

&lt;p&gt;A single failure is a data point. The same test failing across five runs over a week is a problem that needs a decision — is it a flaky test, a broken feature, or an environmental issue?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Show me all failed tests from the last 7 days
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells you immediately whether today's failures are new or recurring. A test that first appeared today is a different priority from one that has been failing silently for a week.&lt;/p&gt;
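
&lt;p&gt;Behind that question is a date-range filter plus a terms aggregation. A hand-written sketch under the same field-name and mapping assumptions as the previous example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from opensearchpy import OpenSearch

client = OpenSearch(hosts=["http://localhost:9200"])

# Failures in the last 7 days, grouped by test name to surface repeat offenders.
resp = client.search(
    index="robot-results",
    body={
        "query": {
            "bool": {
                "filter": [
                    {"term": {"status": "FAIL"}},
                    {"range": {"indexed_at": {"gte": "now-7d/d"}}},
                ]
            }
        },
        "size": 0,
        "aggs": {"by_test": {"terms": {"field": "test.keyword", "size": 20}}},
    },
)
for bucket in resp["aggregations"]["by_test"]["buckets"]:
    print(bucket["key"], "failed", bucket["doc_count"], "times")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;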

&lt;h3&gt;Triage by tag&lt;/h3&gt;

&lt;p&gt;Not all failures carry the same weight. Tests tagged &lt;code&gt;smoke&lt;/code&gt; are meant to catch the most critical issues fastest. Knowing whether failing tests are smoke tests or deep regression tests changes how urgently you respond.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Show me failures with the smoke tag from the last 3 days
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
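
&lt;p&gt;In query-DSL terms this just adds a tags filter to the failure search from the earlier sketches, with the same mapping caveats:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from opensearchpy import OpenSearch

client = OpenSearch(hosts=["http://localhost:9200"])

# Failures tagged "smoke" in the last 3 days.
resp = client.search(
    index="robot-results",
    body={
        "query": {
            "bool": {
                "filter": [
                    {"term": {"status": "FAIL"}},
                    {"term": {"tags": "smoke"}},
                    {"range": {"indexed_at": {"gte": "now-3d/d"}}},
                ]
            }
        }
    },
)
print(resp["hits"]["total"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;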



&lt;h3&gt;Acting on failures — in the same conversation&lt;/h3&gt;

&lt;p&gt;Once the assistant has the failure list, the conversation continues without switching context. The failure message, test name, suite, and tags are all in the indexed document. The error is already in scope.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The Division By Zero Fails test is failing with ZeroDivisionError.
What should the Robot Framework keyword look like to handle that safely?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Which of these failures look like test bugs vs application bugs?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Generate a test case that correctly validates that dividing by zero raises an exception.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The path from "what failed" to "here is the fix" happens in one conversation, without copy-pasting anything.&lt;/p&gt;




&lt;h2&gt;Result&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Before MCP:&lt;/strong&gt; run finishes → open Dashboards → filter by run → open Discover for error messages → cross-reference previous runs manually → copy error into a chat → figure out the fix → switch back to editor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With MCP:&lt;/strong&gt; run finishes → ask what failed → ask if it has happened before → ask what the fix looks like → fix it.&lt;/p&gt;

&lt;p&gt;The time between "run finished" and "I know what to do" is shorter. Not because the failures changed, but because the path from data to action is direct.&lt;/p&gt;




&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;The listener from part one moved test result availability from the end of the run to the moment each test completes. This part moves analysis from dashboards and query consoles to plain English, in the tool you are already using.&lt;/p&gt;

&lt;p&gt;The approach is not specific to Robot Framework or Claude. Any test framework with a hook system can stream results to OpenSearch using the same listener pattern. Any MCP-compatible AI assistant can be registered against the same MCP server. The infrastructure stays the same regardless of what is being tested or which assistant is being used.&lt;/p&gt;

&lt;p&gt;Store results as they happen. Query them in plain English. Act on what you find.&lt;/p&gt;




&lt;h2&gt;Resources&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt; — &lt;a href="https://github.com/007bsd/results-execution-monitoring" rel="noopener noreferrer"&gt;github.com/007bsd/results-execution-monitoring&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Context Protocol&lt;/strong&gt; — &lt;a href="https://modelcontextprotocol.io" rel="noopener noreferrer"&gt;modelcontextprotocol.io&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Introducing MCP in OpenSearch&lt;/strong&gt; — &lt;a href="https://opensearch.org/blog/introducing-mcp-in-opensearch/" rel="noopener noreferrer"&gt;opensearch.org/blog/introducing-mcp-in-opensearch&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;opensearch-mcp-server-py&lt;/strong&gt; — &lt;a href="https://github.com/opensearch-project/opensearch-mcp-server-py" rel="noopener noreferrer"&gt;github.com/opensearch-project/opensearch-mcp-server-py&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Part one&lt;/strong&gt; — &lt;a href="https://dev.to/bsd/real-time-test-results-in-robot-framework-15nh"&gt;Real-Time Test Results in Robot Framework&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;The complete code is in the GitHub repo. If anything in the MCP setup behaves differently, leave a comment.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>robotframework</category>
      <category>python</category>
      <category>opensearch</category>
      <category>testing</category>
    </item>
    <item>
      <title>Real-Time Test Results in Robot Framework</title>
      <dc:creator>Das</dc:creator>
      <pubDate>Sat, 21 Mar 2026 20:44:19 +0000</pubDate>
      <link>https://forem.com/bsd/real-time-test-results-in-robot-framework-15nh</link>
      <guid>https://forem.com/bsd/real-time-test-results-in-robot-framework-15nh</guid>
      <description>&lt;p&gt;&lt;em&gt;No more waiting for the suite to finish to find out what failed.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;In a previous organisation, a proof of concept was built to stream Robot Framework test results into a live monitoring dashboard using the ELK stack (Elasticsearch, Logstash, and Kibana). The idea was simple: instead of waiting for the full run to complete and then reading the output report, results would appear in a dashboard the moment each test finished.&lt;/p&gt;

&lt;p&gt;The problem with ELK is that it is no longer fully open source. Elastic changed its licensing in 2021. OpenSearch is the community fork that stayed open. This project is a rebuild of that original setup using OpenSearch, with the same goal: live test visibility while the suite is still running.&lt;/p&gt;

&lt;p&gt;The approach works with any test framework that exposes hooks into the test lifecycle. Robot Framework is used here because that is where the original work was done, but the same pattern applies to pytest, Selenium-based suites, and other frameworks with comparable hooks.&lt;/p&gt;




&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;When a test suite runs, results are only written to &lt;code&gt;output.xml&lt;/code&gt; when the entire run completes. For short suites this is fine. For longer ones it creates a gap.&lt;/p&gt;

&lt;p&gt;During the run, the only signal is whatever the terminal prints. A failure shows up as &lt;code&gt;FAIL&lt;/code&gt; with a short message, but by the time the run finishes and the report is available, the context around that failure is gone. Was it a transient error? A timeout that only happens under load? Something that affected multiple tests in the same suite? There is no way to know without digging through logs after the fact.&lt;/p&gt;

&lt;p&gt;What would actually help is seeing each result as it happens, with the full message, the test name, the suite, and the duration, in a searchable dashboard that stays available after the run.&lt;/p&gt;




&lt;h2&gt;Tech Stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://robotframework.org/" rel="noopener noreferrer"&gt;&lt;strong&gt;Robot Framework&lt;/strong&gt;&lt;/a&gt; - test framework with a listener API for hooking into test execution&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://opensearch.org/" rel="noopener noreferrer"&gt;&lt;strong&gt;OpenSearch&lt;/strong&gt;&lt;/a&gt; - open-source search and analytics engine for storing results&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://opensearch.org/docs/latest/dashboards/" rel="noopener noreferrer"&gt;&lt;strong&gt;OpenSearch Dashboards&lt;/strong&gt;&lt;/a&gt; - visualisation layer for building live dashboards&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Docker&lt;/strong&gt;&lt;/a&gt; - runs OpenSearch and the dashboard in containers&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.python.org/" rel="noopener noreferrer"&gt;&lt;strong&gt;Python&lt;/strong&gt;&lt;/a&gt; - the listener that ships results to OpenSearch&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Architecture&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+-------------------------------+
|  Robot Framework (local)      |
|  listener fires after each    |
|  test and ships the result    |
+---------------+---------------+
                |
                v
+---------------+---------------+        +---------------------------+
|  OpenSearch (Docker)          | -----&amp;gt; |  OpenSearch Dashboards    |
|  port 9200                    |        |  port 5601                |
+-------------------------------+        +---------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Robot Framework and the tests run locally as usual. Docker handles OpenSearch and the dashboard, so nothing needs to be installed manually.&lt;/p&gt;




&lt;h2&gt;Step by Step&lt;/h2&gt;

&lt;h3&gt;Prerequisites&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/products/docker-desktop" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.python.org/downloads" rel="noopener noreferrer"&gt;Python 3.8+&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;1. How the Listener Works&lt;/h3&gt;

&lt;p&gt;Robot Framework exposes a listener API, a Python class it calls at specific points during execution. The hook used here is &lt;code&gt;end_test&lt;/code&gt;, which fires immediately after each test completes, before the next one starts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;OpenSearchListener&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;ROBOT_LISTENER_API_VERSION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;end_test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;run_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;run_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;suite&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;suite_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tags&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tags&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;start_time&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;...,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;elapsed_seconds&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;elapsedtime&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;indexed_at&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;utcnow&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;isoformat&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each test result is indexed into OpenSearch straight away. The document is in the index before the next test has started.&lt;/p&gt;

&lt;p&gt;Each document looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"run_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"a3f2c1..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"suite"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Tests.Login"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"test"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Valid credentials should log in"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"FAIL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Element not found: #submit-btn"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"tags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"smoke"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"auth"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"start_time"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-03-21T12:20:35.701Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"elapsed_seconds"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;3.001&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;run_id&lt;/code&gt; field is generated once per test run, which makes it straightforward to filter the dashboard to a specific run or compare runs over time.&lt;/p&gt;
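
&lt;p&gt;For reference, here is a minimal sketch of how the listener could generate that id and set up the client in its constructor. This is illustrative only, and the actual initialisation in the repo may differ:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import uuid

from opensearchpy import OpenSearch


class OpenSearchListener:
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self, url="http://localhost:9200", index="robot-results", run_id=None):
        # One short id per run; every document indexed during this run reuses it.
        self.run_id = run_id or uuid.uuid4().hex[:8]
        self.index = index
        self.client = OpenSearch(hosts=[url])
        self.suite_name = None
        print(f"OpenSearchListener started: run_id={self.run_id} index={self.index}")

    def start_suite(self, name, attrs):
        # Track the current suite so end_test can attach it to each document.
        self.suite_name = attrs.get("longname", name)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;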

&lt;h3&gt;2. Setup&lt;/h3&gt;

&lt;p&gt;Clone the repo and install dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/007bsd/results-execution-monitoring
&lt;span class="nb"&gt;cd &lt;/span&gt;results-execution-monitoring
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start OpenSearch and the dashboard:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The first run downloads around 1 GB of images. Once containers are up, confirm OpenSearch is ready with &lt;code&gt;curl http://localhost:9200&lt;/code&gt;. A JSON response means it is running.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Run the setup script once to create the index pattern and a starter dashboard:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python setup_dashboard.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is meant to get things running quickly. The dashboard it creates is a starting point, not the final word. See the section below on building dashboards manually.&lt;/p&gt;

&lt;h3&gt;3. Running Tests with the Listener&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; robot &lt;span class="nt"&gt;--listener&lt;/span&gt; opensearch_listener.OpenSearchListener tests/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The listener confirms it has started and prints the &lt;code&gt;run_id&lt;/code&gt; and index name:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6e7ltoszxwblfjdnuf48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6e7ltoszxwblfjdnuf48.png" alt="Screenshot: Terminal showing the listener starting, creating the index and printing run_id" width="800" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the run progresses, each result is confirmed in the terminal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq9lj2e3e81ulapit42b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq9lj2e3e81ulapit42b.png" alt="Screenshot: Terminal showing PASS and FAIL lines streaming from the OpenSearchListener as tests complete" width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open &lt;code&gt;http://localhost:5601&lt;/code&gt; before or during the run to watch results appear in real time.&lt;/p&gt;

&lt;h3&gt;4. Verifying Data in OpenSearch&lt;/h3&gt;

&lt;p&gt;To confirm results are being indexed correctly, the Dev Tools console at &lt;code&gt;http://localhost:5601/app/dev_tools#/console&lt;/code&gt; can be used to query the index directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;GET&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;robot-results/_mapping&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3oh2pe061r5bks66vvc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3oh2pe061r5bks66vvc.png" alt="Screenshot: Dev Tools console showing the robot-results index mapping with all fields correctly typed" width="800" height="348"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;GET&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;robot-results/_search&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"size"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;5. Exploring Results in Discover&lt;/h3&gt;

&lt;p&gt;OpenSearch Dashboards has a Discover view that lets you search and filter all indexed documents. It is useful for digging into specific failures, filtering by tag, run, or suite, and understanding patterns across multiple runs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvqv1ft6q97hs0jzlpe0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvqv1ft6q97hs0jzlpe0.png" alt="Screenshot: Discover view showing filtered failed tests with full details including test name, suite, message, and elapsed time" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;6. Building Dashboards Manually&lt;/h3&gt;

&lt;p&gt;The setup script creates a starter dashboard to get things going, but most people will want to build their own visualisations on top of the data. OpenSearch Dashboards has a full visual editor for this, no configuration files or scripts required.&lt;/p&gt;

&lt;p&gt;To create a visualisation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;code&gt;http://localhost:5601&lt;/code&gt; and open &lt;strong&gt;Visualize&lt;/strong&gt; from the menu&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create visualization&lt;/strong&gt; and choose a chart type&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;robot-results&lt;/code&gt; as the index&lt;/li&gt;
&lt;li&gt;Configure the metric (e.g. Count) and bucket (e.g. Terms on &lt;code&gt;status&lt;/code&gt;) to get a pass/fail breakdown&lt;/li&gt;
&lt;li&gt;Save the visualisation with a name&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To build a dashboard from those visualisations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open &lt;strong&gt;Dashboard&lt;/strong&gt; from the menu&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create new dashboard&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add from library&lt;/strong&gt; and select the saved visualisations&lt;/li&gt;
&lt;li&gt;Arrange the panels, set a title, and save&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Some useful combinations to start with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pie or donut chart of &lt;code&gt;status&lt;/code&gt; for an overall pass/fail ratio&lt;/li&gt;
&lt;li&gt;Data table of recent failures showing &lt;code&gt;test&lt;/code&gt;, &lt;code&gt;suite&lt;/code&gt;, and &lt;code&gt;message&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Bar chart of &lt;code&gt;elapsed_seconds&lt;/code&gt; by test name to spot slow tests&lt;/li&gt;
&lt;li&gt;A metric panel showing total fail count for the current run&lt;/li&gt;
&lt;li&gt;Filter by &lt;code&gt;run_id&lt;/code&gt; to isolate and compare specific runs&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Result&lt;/h2&gt;

&lt;p&gt;Once the stack is running, every test result appears in the dashboard the moment it is indexed. Failures are immediately visible with the full message, test name, suite, and duration, without waiting for the suite to complete.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foq7eold66hztbvtyuxel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foq7eold66hztbvtyuxel.png" alt="Screenshot: Robot Framework Results dashboard showing the failed tests table, Pass vs Fail donut chart, and Total Fail count" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g3j8nkl2y0i1op09bci.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g3j8nkl2y0i1op09bci.gif" alt="GIF: Dashboard with both panels side by side updating as test results come in" width="600" height="300"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;The standard approach to test reporting is batch: everything is available when the run finishes, and not before. For short suites this is acceptable. For longer ones it is a genuine visibility problem.&lt;/p&gt;

&lt;p&gt;Streaming results to OpenSearch as each test completes inverts that. Results are available immediately, failures are visible in context, and the history of every run is retained and searchable without any extra work.&lt;/p&gt;

&lt;p&gt;The listener pattern used here is Robot Framework-specific, but the underlying idea applies to any test framework with a comparable hook system.&lt;/p&gt;
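
&lt;p&gt;As a concrete illustration of that portability, here is a rough pytest equivalent: a &lt;code&gt;conftest.py&lt;/code&gt; hook in place of the Robot listener, writing the same document shape to the same index. This is a sketch, not part of the repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# conftest.py -- a sketch of the same pattern for pytest (not part of this repo)
import uuid
from datetime import datetime

from opensearchpy import OpenSearch

client = OpenSearch(hosts=["http://localhost:9200"])
RUN_ID = uuid.uuid4().hex[:8]


def pytest_runtest_logreport(report):
    # "call" is the phase in which the test body itself ran.
    if report.when != "call":
        return
    doc = {
        "run_id": RUN_ID,
        "suite": report.nodeid.rsplit("::", 1)[0],
        "test": report.nodeid,
        "status": "PASS" if report.passed else "FAIL",
        "message": report.longreprtext if report.failed else "",
        "elapsed_seconds": report.duration,
        "indexed_at": datetime.utcnow().isoformat() + "Z",
    }
    client.index(index="robot-results", body=doc)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;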




&lt;h2&gt;Resources&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt; - &lt;a href="https://github.com/007bsd/results-execution-monitoring" rel="noopener noreferrer"&gt;github.com/007bsd/results-execution-monitoring&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Robot Framework Listener API&lt;/strong&gt; - &lt;a href="https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#listener-interface" rel="noopener noreferrer"&gt;robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#listener-interface&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenSearch Documentation&lt;/strong&gt; - &lt;a href="https://opensearch.org/docs/latest/" rel="noopener noreferrer"&gt;opensearch.org/docs/latest&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenSearch Dashboards&lt;/strong&gt; - &lt;a href="https://opensearch.org/docs/latest/dashboards/" rel="noopener noreferrer"&gt;opensearch.org/docs/latest/dashboards&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt; - &lt;a href="https://docs.docker.com/" rel="noopener noreferrer"&gt;docs.docker.com&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;If you try setting this up and encounter any issues, please leave a comment. The complete code for this project is available on GitHub.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>robotframework</category>
      <category>python</category>
      <category>opensearch</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Real-Time Data Monitoring Using InfluxDB and Grafana</title>
      <dc:creator>Das</dc:creator>
      <pubDate>Sat, 14 Mar 2026 11:51:04 +0000</pubDate>
      <link>https://forem.com/bsd/real-time-data-monitoring-using-influxdb-and-grafana-3a1n</link>
      <guid>https://forem.com/bsd/real-time-data-monitoring-using-influxdb-and-grafana-3a1n</guid>
      <description>&lt;p&gt;&lt;em&gt;A practical guide to building a live monitoring dashboard for any data source.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;In a previous organisation, I built and maintained several internal monitoring dashboards. The idea came from patterns I had seen in engineering blogs about observability systems, and I experimented internally with a similar setup to bring different metrics into a single real-time view. The architecture behind those dashboards was straightforward and reusable.&lt;/p&gt;

&lt;p&gt;This project is a recreation of that setup using publicly available APIs, tracking weather across two cities, commodity prices, and the EUR-&amp;gt;INR exchange rate as a working example. It continuously collects data, stores it in a time-series database, and visualises it in a live Grafana dashboard. It is a starting point for anyone who wants to experiment with real-time data or build similar dashboards for their own projects.&lt;/p&gt;

&lt;p&gt;Organisations and individuals often have access to data from multiple sources that updates continuously. The challenge is not finding the data. Most of it is freely available. The challenge is aggregating it into a single, live view that is easy to read and act on.&lt;/p&gt;

&lt;p&gt;Without a proper pipeline, data either gets checked manually, sits in spreadsheets, or simply goes unmonitored. A real-time dashboard solves this by automating the collection, storage, and visualisation in one place.&lt;/p&gt;

&lt;p&gt;The same problem exists at different scales. Whether it is infrastructure monitoring inside a company, tracking external data feeds, or keeping an eye on any system that produces regular measurements, the approach is the same.&lt;/p&gt;

&lt;h2&gt;Tech Stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.influxdata.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;InfluxDB&lt;/strong&gt;&lt;/a&gt; — time-series database that stores all collected data&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Grafana&lt;/strong&gt;&lt;/a&gt; — visualisation layer that renders live dashboards&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Docker&lt;/strong&gt;&lt;/a&gt; — runs the entire stack in containers&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.python.org/" rel="noopener noreferrer"&gt;&lt;strong&gt;Python&lt;/strong&gt;&lt;/a&gt; — polling script that fetches data and writes to InfluxDB&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.influxdata.com/time-series-platform/telegraf/" rel="noopener noreferrer"&gt;&lt;strong&gt;Telegraf&lt;/strong&gt;&lt;/a&gt; — alternative data collection agent by InfluxData&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This project uses a Python script for data collection, but Telegraf or any other tool that can write to InfluxDB works just as well.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Architecture&lt;/h2&gt;

&lt;p&gt;There are two separate flows in the project.&lt;/p&gt;

&lt;h3&gt;Live data flow&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Data Sources (APIs)
        │
        ▼
   poller.py
   (runs every 15m)
        │
        ▼
     InfluxDB
     (storage)
        │
        ▼
     Grafana
        │
        ▼
     Browser
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Dashboard setup flow&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dashboard.yml
      │
      ▼
build_dashboard.py
      │
      ▼
dashboard.json
      │
      ▼
Grafana loads it automatically
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The live data flow runs continuously in the background. The dashboard setup flow runs once locally, and again whenever the dashboard configuration is updated.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The dashboard can also be created manually through the Grafana UI without using any configuration files or scripts. The code-based approach is used here to keep the setup version controlled and easy to modify.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywxi8erg5kl35lxzuw4a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywxi8erg5kl35lxzuw4a.png" alt="Screenshot: Architecture Diagram" width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Step by Step&lt;/h2&gt;

&lt;h3&gt;Prerequisites&lt;/h3&gt;

&lt;p&gt;Before getting started, make sure the following are installed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.docker.com/products/docker-desktop/" rel="noopener noreferrer"&gt;&lt;strong&gt;Docker Desktop&lt;/strong&gt;&lt;/a&gt; — required to run the stack&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.python.org/downloads/" rel="noopener noreferrer"&gt;&lt;strong&gt;Python&lt;/strong&gt;&lt;/a&gt; — required to generate the dashboard configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Docker is used here to demonstrate the full setup locally. InfluxDB and Grafana can also be installed directly on any machine or server without Docker if preferred.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;1. Data Sources&lt;/h2&gt;

&lt;p&gt;Three publicly available APIs are used in this project, covering weather, commodity prices, and exchange rates. No API keys or signups are required.&lt;/p&gt;

&lt;p&gt;Any API or data source that returns regularly updating values can be used here. The structure of the pipeline does not change regardless of what data is being collected.&lt;/p&gt;

&lt;h2&gt;2. Data Collection&lt;/h2&gt;

&lt;p&gt;The poller is a Python script that queries the APIs every fifteen minutes and writes the results directly to InfluxDB.&lt;/p&gt;
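
&lt;p&gt;The scheduling itself can be as simple as a loop. The repo may do something more robust; &lt;code&gt;poll_once&lt;/code&gt; here is a placeholder for one fetch-and-write cycle like the sketch further below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import time

POLL_INTERVAL_SECONDS = 15 * 60  # matches the fifteen-minute cadence above


def poll_once():
    # Placeholder: fetch from each API and write the points to InfluxDB.
    ...


while True:
    poll_once()
    time.sleep(POLL_INTERVAL_SECONDS)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;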

&lt;p&gt;One concept worth understanding in InfluxDB before writing data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tags&lt;/strong&gt; describe the data and are indexed for filtering. Examples: city name, source identifier, category.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fields&lt;/strong&gt; contain the actual numeric values that get stored and plotted.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Getting this distinction right makes querying and building dashboards significantly simpler.&lt;/p&gt;
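
&lt;p&gt;A minimal sketch of one fetch-and-write cycle with the &lt;code&gt;influxdb-client&lt;/code&gt; package. The bucket, org, and token are placeholders, Open-Meteo stands in for whichever APIs the repo polls, and the real poller will differ:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import requests
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details; match them to your docker-compose setup.
client = InfluxDBClient(url="http://localhost:8086", token="dev-token", org="dev-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# Open-Meteo needs no API key. The city goes in a tag (indexed, filterable);
# the reading goes in a field (the value that gets plotted).
resp = requests.get(
    "https://api.open-meteo.com/v1/forecast",
    params={"latitude": 52.52, "longitude": 13.41, "current_weather": "true"},
    timeout=10,
)
temperature = resp.json()["current_weather"]["temperature"]

point = Point("weather").tag("city", "Berlin").field("temperature_c", float(temperature))
write_api.write(bucket="metrics", record=point)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;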

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxwx6j88v5g015ktmbdo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxwx6j88v5g015ktmbdo.png" alt="Screenshot: docker compose logs -f poller" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;3. Storage with InfluxDB&lt;/h2&gt;

&lt;p&gt;InfluxDB is a time-series database built for handling measurements that arrive continuously. Data is written as timestamped points, which makes it well suited for infrastructure metrics, sensor readings, financial feeds, or any other regularly updating values.&lt;/p&gt;

&lt;p&gt;Each record written contains a measurement name, tags, numeric fields, and a timestamp.&lt;/p&gt;

&lt;p&gt;To verify the connection is working, the Grafana Explore view can be used to query the bucket directly. A simple query returning the last hour of data confirms that the poller is writing successfully and Grafana can read it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l1y4p6hoyiqffxwpqob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l1y4p6hoyiqffxwpqob.png" alt="Screenshot: Run Flux Query From Grafana" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;4. Visualisation with Grafana&lt;/h2&gt;

&lt;p&gt;Grafana reads from InfluxDB and renders the data as a live dashboard. It supports charts, stat panels, colour thresholds, gauges, and auto-refresh at configurable intervals.&lt;/p&gt;

&lt;p&gt;It also connects to many other data sources including Prometheus, PostgreSQL, and others, making it flexible beyond just this stack.&lt;/p&gt;

&lt;p&gt;One of the more useful features is the time range selector. The same dashboard can show data from the last 5 minutes all the way through to months of history, and the panels update instantly as the range changes.&lt;/p&gt;

&lt;p&gt;Grafana also supports alerting. Alerts can be configured on any panel to trigger notifications when a value crosses a threshold — useful for staying on top of data without having to watch the dashboard continuously. Notifications can be sent to email, Slack, PagerDuty, and many other channels.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxoxt2re7x4stce5ytb3.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxoxt2re7x4stce5ytb3.gif" alt="GIF: Grafana Dashboard" width="720" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;5. Dashboard as Code&lt;/h2&gt;

&lt;p&gt;Grafana dashboards are often created manually through the UI. That works for quick experiments, but the exported JSON becomes large and difficult to maintain over time.&lt;/p&gt;

&lt;p&gt;In this project the dashboard is defined in a &lt;code&gt;dashboard.yml&lt;/code&gt; configuration file. A script converts it into the JSON format Grafana expects, and Grafana loads it automatically on startup.&lt;/p&gt;

&lt;p&gt;This keeps the dashboard fully version controlled. Adding a new panel is a matter of updating one line in the config file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2jeiwstcn7mql4grnrk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2jeiwstcn7mql4grnrk.png" alt="Screenshot: dashboard.yml With Grafana" width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;6. Running the Stack&lt;/h2&gt;

&lt;p&gt;The entire system starts with one command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This launches three containers: InfluxDB, Grafana, and the poller. The dashboard is immediately available at &lt;code&gt;localhost:3000&lt;/code&gt;. Pre-configured, pre-seeded, already collecting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakdvpybhhbcpd7qb4p7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakdvpybhhbcpd7qb4p7e.png" alt="Screenshot: Docker Starting With Containers Running&amp;lt;br&amp;gt;
" width="800" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Result
&lt;/h2&gt;

&lt;p&gt;Once the stack is running, the dashboard provides a continuously updating view of all collected data. It refreshes automatically, is colour-coded by thresholds, and is queryable over any time range.&lt;/p&gt;

&lt;p&gt;The same setup can be pointed at any data source without changing the underlying infrastructure. The only part that changes is the polling script.&lt;/p&gt;
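
&lt;p&gt;Whatever replaces the poller ultimately just has to produce InfluxDB line protocol. As a minimal sketch, assuming InfluxDB 2.x on its default port and placeholder org, bucket, and token values, a single data point can be written with nothing but curl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Write one measurement via the InfluxDB 2.x HTTP write API
# (org, bucket, and token are placeholders; adjust to your setup)
curl -X POST "http://localhost:8086/api/v2/write?org=my-org&amp;amp;bucket=my-bucket" \
  -H "Authorization: Token $INFLUX_TOKEN" \
  --data-raw "temperature,source=sensor1 value=21.5"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
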

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The pattern covered in this project is simple: collect, store, visualise. It is a solid foundation for building active monitoring dashboards around any kind of live data.&lt;/p&gt;

&lt;p&gt;The data collection piece is intentionally flexible. Telegraf works well for infrastructure and system metrics with minimal configuration. A custom script makes sense when working with external APIs that need specific handling. A cloud function or a scheduled job fits just as well depending on the environment.&lt;/p&gt;

&lt;p&gt;What stays consistent is the rest of the stack. InfluxDB handles the time-series storage reliably regardless of where the data comes from. Grafana turns it into something usable, with the added ability to set up alerts and query across any time range without much overhead.&lt;/p&gt;

&lt;p&gt;The same approach applies whether the goal is monitoring application performance, tracking external data feeds, or building visibility into any system that produces regular measurements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt; — &lt;a href="https://github.com/007bsd/live-monitor" rel="noopener noreferrer"&gt;github.com/007bsd/live-monitor&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;InfluxDB Documentation&lt;/strong&gt; — &lt;a href="https://docs.influxdata.com/" rel="noopener noreferrer"&gt;docs.influxdata.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grafana Documentation&lt;/strong&gt; — &lt;a href="https://grafana.com/docs/" rel="noopener noreferrer"&gt;grafana.com/docs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grafana Alerting&lt;/strong&gt; — &lt;a href="https://grafana.com/docs/grafana/latest/alerting/" rel="noopener noreferrer"&gt;grafana.com/docs/grafana/latest/alerting&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Telegraf&lt;/strong&gt; — &lt;a href="https://www.influxdata.com/time-series-platform/telegraf/" rel="noopener noreferrer"&gt;influxdata.com/telegraf&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt; — &lt;a href="https://docs.docker.com/" rel="noopener noreferrer"&gt;docs.docker.com&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;If you try setting this up and encounter any issues, please leave a comment. The complete code for this project is available on GitHub.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>grafana</category>
      <category>influxdb</category>
      <category>python</category>
      <category>docker</category>
    </item>
    <item>
      <title>Building a Fully Local LaTeX Thesis Workflow with VS Code and MiKTeX</title>
      <dc:creator>Das</dc:creator>
      <pubDate>Sat, 14 Mar 2026 11:11:52 +0000</pubDate>
      <link>https://forem.com/bsd/building-a-fully-local-latex-thesis-workflow-with-vs-code-and-miktex-11hg</link>
      <guid>https://forem.com/bsd/building-a-fully-local-latex-thesis-workflow-with-vs-code-and-miktex-11hg</guid>
      <description>&lt;h2&gt;
  
  
  Why I Didn't Use Word
&lt;/h2&gt;

&lt;p&gt;I tried writing in Word at first. It works fine for small documents, but once the thesis started growing, it became harder to manage. Adjusting formatting, placing figures, and updating the table of contents sometimes caused other sections to shift unexpectedly. Fixing small layout issues was taking more time than actual writing.&lt;/p&gt;

&lt;p&gt;For a long academic document with references, numbered figures, and strict formatting requirements, I wanted something more structured.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I Didn't Use Overleaf Either
&lt;/h2&gt;

&lt;p&gt;Overleaf is the go-to recommendation for LaTeX beginners: it runs in the browser, there's nothing to install, and it has a live PDF preview. For a quick document, it's fine.&lt;/p&gt;

&lt;p&gt;But for a full thesis, it has real limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For security reasons, you may not want your research or unpublished work stored on a public platform&lt;/li&gt;
&lt;li&gt;The free tier has limits&lt;/li&gt;
&lt;li&gt;Your data lives on their servers&lt;/li&gt;
&lt;li&gt;You can't work offline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The alternative? Run everything locally. Your machine, your files, your Git repository. Free forever. Works offline. Private by default.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stack: LaTeX + MiKTeX + VS Code
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LaTeX&lt;/strong&gt; — the document preparation system. You write in plain text with markup, and LaTeX produces a perfectly formatted PDF&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MiKTeX&lt;/strong&gt; — a LaTeX distribution for Windows that manages all the packages you need, installing them automatically when required&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VS Code&lt;/strong&gt; — the code editor, used here as a LaTeX editor with the LaTeX Workshop extension&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;XeLaTeX&lt;/strong&gt; — the compiler. Better than pdfLaTeX for modern fonts and Unicode&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Biber&lt;/strong&gt; — the bibliography processor used by biblatex; works with BibTeX &lt;code&gt;.bib&lt;/code&gt; files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Makeglossaries&lt;/strong&gt; — handles abbreviation lists and acronyms&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Python + Pygments&lt;/strong&gt; — required for the &lt;code&gt;minted&lt;/code&gt; package for syntax-highlighted code blocks&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step-by-Step Setup
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Before you start:&lt;/strong&gt; This guide covers Windows. The same tools exist for macOS and Linux with slightly different installation steps.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step 1: Install MiKTeX
&lt;/h2&gt;

&lt;p&gt;MiKTeX is the LaTeX distribution. It comes with XeLaTeX, Biber, and Makeglossaries, and it will automatically download any missing LaTeX packages the first time you compile.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;a href="https://miktex.org/download" rel="noopener noreferrer"&gt;miktex.org/download&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Download the Windows installer and run it&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;Install for all users&lt;/strong&gt; if you have admin rights, otherwise install for yourself&lt;/li&gt;
&lt;li&gt;After installation, open a terminal and verify:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xelatex &lt;span class="nt"&gt;--version&lt;/span&gt;
biber &lt;span class="nt"&gt;--version&lt;/span&gt;
makeglossaries &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If anything is missing, MiKTeX's package manager can install it.&lt;/p&gt;
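
&lt;p&gt;Packages can also be pulled in from the terminal with MiKTeX's package manager (the package name below is just an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Install a specific package with the MiKTeX package manager
mpm --install=glossaries
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
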

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojeyzzvnsviyb46rsjyv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojeyzzvnsviyb46rsjyv.png" alt="Screenshot: MiKTeX Console" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr34v73ytd4d74zg52r4v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr34v73ytd4d74zg52r4v.png" alt="Screenshot: Terminal showing all three version outputs" width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Install Python and Pygments
&lt;/h2&gt;

&lt;p&gt;The template uses the &lt;code&gt;minted&lt;/code&gt; package for code syntax highlighting. Check if Python is already installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;--version&lt;/span&gt;
pygmentize &lt;span class="nt"&gt;-V&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If not, download from &lt;a href="https://www.python.org/downloads/" rel="noopener noreferrer"&gt;python.org/downloads&lt;/a&gt;. Then install Pygments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;pygments
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Install VS Code and Extensions
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Download from &lt;a href="https://code.visualstudio.com" rel="noopener noreferrer"&gt;code.visualstudio.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Install and open VS Code&lt;/li&gt;
&lt;li&gt;Search for and install the &lt;strong&gt;LaTeX Workshop&lt;/strong&gt; extension from the Extensions panel, or install it from the command line as shown below&lt;/li&gt;
&lt;/ol&gt;
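
&lt;p&gt;The command-line route uses the extension's marketplace identifier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Install LaTeX Workshop via the VS Code CLI
code --install-extension James-Yu.latex-workshop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
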

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5gy5buv57ywi3yo9e71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5gy5buv57ywi3yo9e71.png" alt="Screenshot: Required VS Code Extensions" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Clone the Template and Configure the Compiler
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/007bsd/metropolia-thesis-latex.git
&lt;span class="nb"&gt;cd &lt;/span&gt;metropolia-thesis-latex
code &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or download the ZIP from GitHub directly and open the folder in VS Code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q8t1wlqwjf683wfqtl2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q8t1wlqwjf683wfqtl2.png" alt="Screenshot: VS Code with the thesis folder open" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The project includes a &lt;code&gt;.vscode/settings.json&lt;/code&gt; that already configures LaTeX Workshop to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use XeLaTeX as the compiler&lt;/li&gt;
&lt;li&gt;Set &lt;code&gt;main.tex&lt;/code&gt; as the root document&lt;/li&gt;
&lt;li&gt;Auto-clean auxiliary files after building&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 5: Compile Your First PDF
&lt;/h2&gt;

&lt;p&gt;Open &lt;code&gt;main.tex&lt;/code&gt;. Click the green &lt;strong&gt;▶ Build LaTeX project&lt;/strong&gt; button in the top right, or press &lt;code&gt;Ctrl+Alt+B&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;LaTeX Workshop will run the full compilation sequence:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;xelatex &lt;span class="nt"&gt;-shell-escape&lt;/span&gt; &lt;span class="nt"&gt;-8bit&lt;/span&gt; main
biber main
makeglossaries main
xelatex &lt;span class="nt"&gt;-shell-escape&lt;/span&gt; &lt;span class="nt"&gt;-8bit&lt;/span&gt; main
xelatex &lt;span class="nt"&gt;-shell-escape&lt;/span&gt; &lt;span class="nt"&gt;-8bit&lt;/span&gt; main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first compile may take a minute as MiKTeX downloads any missing packages. When it succeeds, &lt;code&gt;main.pdf&lt;/code&gt; appears in the project root and opens automatically in the VS Code PDF preview panel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6xyhdrfqgygb8k9smbk.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6xyhdrfqgygb8k9smbk.gif" alt="GIF: Clicking the Build button in VS Code → compilation progress in the terminal → PDF preview appearing on the right" width="720" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Start Writing
&lt;/h2&gt;

&lt;p&gt;Open any file in the &lt;code&gt;chapters/&lt;/code&gt; folder and start editing. Every time you save (&lt;code&gt;Ctrl+S&lt;/code&gt;), LaTeX Workshop auto-recompiles and the PDF updates in real time.&lt;/p&gt;

&lt;p&gt;Project structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-thesis/
├── main.tex          ← Master file
├── chapters/         ← One .tex file per chapter
├── biblio.bib        ← Bibliography
├── illustration/     ← Images and figures
├── code/             ← Code files for minted snippets
└── style/            ← Formatting rules
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Tips From Actually Using This
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Manage references with Zotero.&lt;/strong&gt; Collect and organize references in Zotero, then export them directly as a &lt;code&gt;.bib&lt;/code&gt; file into &lt;code&gt;biblio.bib&lt;/code&gt;. It works seamlessly with Biber and saves a lot of manual formatting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep your Git commits small.&lt;/strong&gt; Commit chapter by chapter, not just at the end. You will want to roll back if you rewrite a section and change your mind.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't edit your style files.&lt;/strong&gt; These contain the formatting rules. If you change them and something breaks, debugging is painful.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Result
&lt;/h2&gt;

&lt;p&gt;After compilation succeeds you have a properly formatted PDF — table of contents, numbered figures, bibliography, and acronym list all generated automatically. No style settings touched manually. No Word. No cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xsrsmt9xhxytf3llwfo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xsrsmt9xhxytf3llwfo.png" alt="Screenshot: VS Code with generated PDF" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters Beyond the Thesis
&lt;/h2&gt;

&lt;p&gt;For students in technical fields, getting used to LaTeX means getting comfortable with plain text and structured writing. That mindset helps beyond just thesis work.&lt;/p&gt;

&lt;p&gt;The setup took me around 30–45 minutes. After that, it was just writing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Template repository:&lt;/strong&gt; &lt;a href="https://github.com/007bsd/metropolia-thesis-latex" rel="noopener noreferrer"&gt;github.com/007bsd/metropolia-thesis-latex&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MiKTeX:&lt;/strong&gt; &lt;a href="https://miktex.org" rel="noopener noreferrer"&gt;miktex.org&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VS Code:&lt;/strong&gt; &lt;a href="https://code.visualstudio.com" rel="noopener noreferrer"&gt;code.visualstudio.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LaTeX Workshop extension:&lt;/strong&gt; Search "LaTeX Workshop" in VS Code Extensions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pygments:&lt;/strong&gt; &lt;code&gt;pip install pygments&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zotero:&lt;/strong&gt; &lt;a href="https://www.zotero.org" rel="noopener noreferrer"&gt;zotero.org&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;If you run into any issues setting this up, feel free to leave a comment — happy to help based on my experience.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>latex</category>
      <category>thesis</category>
      <category>vscode</category>
      <category>writing</category>
    </item>
  </channel>
</rss>
