<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Sreenu Sasubilli</title>
    <description>The latest articles on Forem by Sreenu Sasubilli (@sreenu_sasubilli_f9289c4e).</description>
    <link>https://forem.com/sreenu_sasubilli_f9289c4e</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2891752%2Fe19b96e8-6295-417f-8658-7d0f0a16561d.jpg</url>
      <title>Forem: Sreenu Sasubilli</title>
      <link>https://forem.com/sreenu_sasubilli_f9289c4e</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sreenu_sasubilli_f9289c4e"/>
    <language>en</language>
    <item>
      <title>On-Call Incident Triage Panel</title>
      <dc:creator>Sreenu Sasubilli</dc:creator>
      <pubDate>Thu, 05 Feb 2026 19:08:33 +0000</pubDate>
      <link>https://forem.com/sreenu_sasubilli_f9289c4e/on-call-incident-triage-panel-id9</link>
      <guid>https://forem.com/sreenu_sasubilli_f9289c4e/on-call-incident-triage-panel-id9</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/algolia"&gt;Algolia Agent Studio Challenge&lt;/a&gt;: Consumer-Facing Non-Conversational Experiences&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;What I Built&lt;/h2&gt;

&lt;p&gt;I built an &lt;strong&gt;On-Call Triage Intelligence Panel&lt;/strong&gt; for SRE and DevOps teams.&lt;/p&gt;

&lt;p&gt;Instead of a chatbot, this system &lt;strong&gt;proactively surfaces the most relevant operational patterns, likely causes, and first-check actions&lt;/strong&gt; when an engineer is diagnosing an incident. The goal is to reduce cognitive load and mean time to resolution (MTTR) during high-stress on-call situations — without requiring back-and-forth conversation.&lt;/p&gt;

&lt;p&gt;Engineers already work inside dashboards, runbooks, and incident tools. This experience enhances that existing workflow by injecting &lt;strong&gt;AI-driven retrieval intelligence&lt;/strong&gt; directly into incident triage, rather than asking users to “chat” with a system.&lt;/p&gt;




&lt;h2&gt;Demo&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Live Index (Algolia Search Explorer):&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://dashboard.algolia.com/apps/BF4Z56HB7R/explorer/browse/oncall_triage_kb" rel="noopener noreferrer"&gt;https://dashboard.algolia.com/apps/BF4Z56HB7R/explorer/browse/oncall_triage_kb&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This demo uses Algolia’s Search Explorer to simulate how the &lt;strong&gt;On-Call Triage Panel&lt;/strong&gt; would operate when embedded inside an SRE workflow (alerting tools, observability dashboards, or internal runbooks).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example scenarios you can try:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High latency incident&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Query:&lt;/strong&gt; &lt;code&gt;orders-api p99 latency&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Filters:&lt;/strong&gt; &lt;code&gt;service = orders-api&lt;/code&gt;, &lt;code&gt;env = prod&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41vxroeuj0csr71cd8nj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41vxroeuj0csr71cd8nj.png" alt="High latency incident" width="800" height="562"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Network-related incident&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Query:&lt;/strong&gt; &lt;code&gt;packet loss&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Filters:&lt;/strong&gt; &lt;code&gt;service = orders-api&lt;/code&gt;, &lt;code&gt;env = staging&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbto7hd4mks44d8jcorj5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbto7hd4mks44d8jcorj5.png" alt="Packet Loss Incident" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Database error spike&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Query:&lt;/strong&gt; &lt;code&gt;db errors&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Filters:&lt;/strong&gt; &lt;code&gt;service = payments-api&lt;/code&gt;, &lt;code&gt;env = prod&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpit0xrv43phspeywjd6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpit0xrv43phspeywjd6.png" alt="Database Error Incident" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also explore by filtering on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;service&lt;/code&gt; (payments-api, search-api, orders-api)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;env&lt;/code&gt; (prod, staging, any)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;severity&lt;/code&gt; (high, medium, low)&lt;/li&gt;
&lt;/ul&gt;
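Under the hood, those facet selections reduce to an Algolia filter string. A minimal sketch of that translation, assuming the facet attributes listed above (the `build_filters` helper is illustrative, not part of the project):

```python
# Sketch: compose an Algolia-style filter string from the facets the
# panel exposes (service, env, severity). Attribute names come from the
# index described above; build_filters itself is a hypothetical helper.
def build_filters(service=None, env=None, severity=None):
    clauses = []
    for attr, value in (("service", service), ("env", env), ("severity", severity)):
        if value and value != "any":  # "any" means no environment filter
            clauses.append(f"{attr}:{value}")
    return " AND ".join(clauses)

# A "high latency" triage lookup would then send:
print(build_filters(service="orders-api", env="prod"))
# service:orders-api AND env:prod
```
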

&lt;p&gt;&lt;strong&gt;What happens:&lt;/strong&gt;&lt;br&gt;
Each search instantly surfaces the most relevant &lt;strong&gt;triage patterns&lt;/strong&gt;, showing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why the pattern was matched (contextual explainability)&lt;/li&gt;
&lt;li&gt;Likely root causes based on historical incidents&lt;/li&gt;
&lt;li&gt;A copy-ready &lt;strong&gt;“first checks”&lt;/strong&gt; checklist for immediate on-call action&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This demonstrates &lt;strong&gt;proactive, non-conversational assistance&lt;/strong&gt; — intelligence is injected directly into the workflow without requiring chat or back-and-forth interaction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mock Screenshot (UI Concept):&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Source: &lt;a href="https://github.com/sasubillis/oncall_triage_mock/blob/main/index.html" rel="noopener noreferrer"&gt;https://github.com/sasubillis/oncall_triage_mock/blob/main/index.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4eli2t8ev8ai6ga1sq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4eli2t8ev8ai6ga1sq6.png" alt="On-call triage panel mock" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;How I Used Algolia Agent Studio&lt;/h2&gt;

&lt;p&gt;Algolia Agent Studio was used to power the &lt;strong&gt;retrieval intelligence layer&lt;/strong&gt;, not a conversational UI.&lt;/p&gt;

&lt;h3&gt;Indexed Data&lt;/h3&gt;

&lt;p&gt;I indexed ~100 realistic SRE knowledge records, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Incident patterns&lt;/li&gt;
&lt;li&gt;Historical incidents&lt;/li&gt;
&lt;li&gt;Symptoms&lt;/li&gt;
&lt;li&gt;Services and environments&lt;/li&gt;
&lt;li&gt;Severity levels&lt;/li&gt;
&lt;li&gt;Likely causes&lt;/li&gt;
&lt;li&gt;First-check remediation steps&lt;/li&gt;
&lt;li&gt;Confidence and explanation metadata&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each record is structured to represent &lt;strong&gt;operational decision artifacts&lt;/strong&gt;, not free-form text.&lt;/p&gt;
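For illustration, one such decision artifact might look like the following. The field names are my assumption based on the list above, not the actual schema of the `oncall_triage_kb` index:

```python
# Hypothetical shape of one indexed triage record, mirroring the fields
# listed above (pattern, symptoms, service/env, severity, causes,
# first checks, confidence and explanation metadata).
record = {
    "objectID": "pattern-orders-p99-latency",
    "pattern": "p99 latency spike on orders-api",
    "symptoms": ["p99 over 2s", "request queue depth rising"],
    "service": "orders-api",
    "env": "prod",
    "severity": "high",
    "likely_causes": ["connection pool exhaustion", "slow downstream dependency"],
    "first_checks": [
        "Check connection pool saturation",
        "Compare latency spike against last deploy time",
    ],
    "confidence": 0.82,
    "explanation": "Matched 3 prior incidents with the same symptom profile",
}

# Everything the panel renders (causes, first checks, why-matched) is a
# field lookup on the retrieved record, not free-form text generation.
checklist = "\n".join(f"- {step}" for step in record["first_checks"])
```
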

&lt;h3&gt;Retrieval Strategy&lt;/h3&gt;

&lt;p&gt;Instead of prompting an LLM, the system relies on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI-powered semantic relevance&lt;/li&gt;
&lt;li&gt;Attribute weighting&lt;/li&gt;
&lt;li&gt;Typo tolerance&lt;/li&gt;
&lt;li&gt;Contextual ranking across multiple signals&lt;/li&gt;
&lt;/ul&gt;
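These behaviors map onto standard Algolia index settings. As an illustrative configuration only (the post does not publish the project's actual settings, and `severity_rank` is a hypothetical numeric field):

```python
# Illustrative Algolia index settings for this retrieval strategy.
# The keys are standard Algolia settings parameters; the attribute
# names are assumptions based on the record fields described above.
settings = {
    # Attribute weighting: earlier attributes count more on a match.
    "searchableAttributes": ["pattern", "symptoms", "likely_causes"],
    # Contextual ranking signals beyond textual relevance
    # (severity_rank would be a precomputed numeric field).
    "customRanking": ["desc(confidence)", "desc(severity_rank)"],
    # Typo tolerance, so a frantic "latancy spike" still matches.
    "typoTolerance": True,
    # Facets used by the triage filters.
    "attributesForFaceting": ["service", "env", "severity"],
}
```

With settings like these, a record matching in `pattern` outranks one matching only in `likely_causes`, and ties break on stored confidence.
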

&lt;p&gt;For example, a query like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;packet loss payments api&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;automatically retrieves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Relevant historical incidents&lt;/li&gt;
&lt;li&gt;Matching triage patterns&lt;/li&gt;
&lt;li&gt;Environment-appropriate remediation steps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No conversation is required — the intelligence is embedded in retrieval itself.&lt;/p&gt;

&lt;h3&gt;Targeted Prompting (Non-Conversational)&lt;/h3&gt;

&lt;p&gt;Agent Studio is used to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explain &lt;em&gt;why&lt;/em&gt; a pattern is shown&lt;/li&gt;
&lt;li&gt;Rank patterns by confidence and operational relevance&lt;/li&gt;
&lt;li&gt;Surface the most actionable next steps first&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is agentic behavior without dialogue.&lt;/p&gt;




&lt;h2&gt;Why Fast Retrieval Matters&lt;/h2&gt;

&lt;p&gt;In on-call scenarios, &lt;strong&gt;every second matters&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Sub-50ms retrieval allows triage guidance to appear instantly during active incidents.&lt;/p&gt;

&lt;p&gt;Algolia’s fast, contextual retrieval enables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sub-second access to operational knowledge&lt;/li&gt;
&lt;li&gt;Reduced time spent searching runbooks&lt;/li&gt;
&lt;li&gt;Fewer context switches during incidents&lt;/li&gt;
&lt;li&gt;Faster identification of known failure patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of engineers remembering where knowledge lives, &lt;strong&gt;the system remembers for them&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is especially critical in high-severity incidents where cognitive overload is common.&lt;/p&gt;




&lt;h2&gt;Why This Fits the Challenge&lt;/h2&gt;

&lt;p&gt;This project is intentionally &lt;strong&gt;non-conversational&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;There is no chat interface and no back-and-forth prompting. The AI value comes from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Learning-based relevance ranking&lt;/li&gt;
&lt;li&gt;Pattern recognition across historical incidents&lt;/li&gt;
&lt;li&gt;Proactive surfacing of the most useful information at the right moment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It demonstrates how &lt;strong&gt;AI-powered retrieval&lt;/strong&gt; can quietly enhance real workflows — exactly what the Consumer-Facing Non-Conversational Experiences category is about.&lt;/p&gt;




&lt;h2&gt;Closing Thoughts&lt;/h2&gt;

&lt;p&gt;Many AI demos focus on talking to users.&lt;/p&gt;

&lt;p&gt;This project focuses on &lt;strong&gt;helping users think less&lt;/strong&gt; during critical moments.&lt;/p&gt;

&lt;p&gt;By embedding AI directly into operational workflows through Algolia’s retrieval engine, this approach shows how intelligent systems can assist users &lt;em&gt;without ever asking a question back&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>algoliachallenge</category>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>CI Guardian: Safe Human-in-the-Loop AI CI Remediation</title>
      <dc:creator>Sreenu Sasubilli</dc:creator>
      <pubDate>Tue, 27 Jan 2026 08:40:39 +0000</pubDate>
      <link>https://forem.com/sreenu_sasubilli_f9289c4e/ci-guardian-safe-human-in-the-loop-ai-ci-remediation-2bj0</link>
      <guid>https://forem.com/sreenu_sasubilli_f9289c4e/ci-guardian-safe-human-in-the-loop-ai-ci-remediation-2bj0</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/github-2026-01-21"&gt;GitHub Copilot CLI Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;What I Built&lt;/h2&gt;

&lt;p&gt;CI Guardian is implemented as a GitHub CLI extension (&lt;code&gt;gh ci-guardian&lt;/code&gt;) and runs entirely from the terminal, integrating GitHub Actions logs with GitHub Copilot CLI for safe, human-in-the-loop remediation.&lt;/p&gt;

&lt;p&gt;Instead of blindly applying AI-generated patches, CI Guardian analyzes real CI logs, summarizes the failure, and attempts a minimal fix only if it’s low-risk. If the fix is unclear or unsafe, it stops and leaves the decision to a human.&lt;/p&gt;

&lt;p&gt;The tool can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Diagnose CI failures with structured root-cause analysis&lt;/li&gt;
&lt;li&gt;Attempt minimal, semantic fixes&lt;/li&gt;
&lt;li&gt;Automatically open PRs only when patches apply cleanly&lt;/li&gt;
&lt;li&gt;Refuse unsafe or low-confidence fixes and escalate to a human when necessary&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I tested CI Guardian on both a small demo repo and a real fork of Flask, including scenarios with fork permissions, pull-request-only CI, and multiple workflows.&lt;/p&gt;

&lt;h2&gt;Demo&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Repository:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/sasubillis/gh-ci-guardian" rel="noopener noreferrer"&gt;https://github.com/sasubillis/gh-ci-guardian&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The extension entrypoint maps directly to &lt;code&gt;ci_guardian/cli.py&lt;/code&gt;, which handles run discovery, log extraction, Copilot prompting, patch validation, and PR creation.&lt;/p&gt;

&lt;p&gt;All screenshots below were captured against real repositories with real failing CI runs, including a fork of Flask to demonstrate behavior on a production-scale codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example usage:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Diagnose the latest failing CI run
gh ci-guardian diagnose --latest --branch all

# Attempt a safe fix and open a PR if possible
gh ci-guardian fix --latest --branch all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;What the demo shows:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CI failures diagnosed into structured JSON&lt;/li&gt;
&lt;li&gt;Copilot-generated unified diffs&lt;/li&gt;
&lt;li&gt;Automatic PR creation when patches are safe&lt;/li&gt;
&lt;li&gt;Graceful refusal with preserved diffs when fixes are unsafe (human-in-the-loop)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This behavior was demonstrated on a real Flask fork where CI failures only surface on pull requests, not direct pushes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Diagnosis on Failing CI with demo repo&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmr4nkxpmamw0yq2npbil.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmr4nkxpmamw0yq2npbil.png" alt="Diagnosis on Failing CI with demo repo" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix made by ci-guardian on demo repo&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgam319sbf1x3hl07fk92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgam319sbf1x3hl07fk92.png" alt="Fix made by ci-guardian on demo repo" width="800" height="37"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PR opened in GitHub by ci-guardian&lt;/strong&gt;&lt;br&gt;
When a fix is safe and minimal, CI Guardian automatically opens a remediation pull request.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fei9jk08ih1rhv4qq0op7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fei9jk08ih1rhv4qq0op7.png" alt="PR opened in GitHub by ci-guardian" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Diagnosis on Failing CI on real repo (Flask)&lt;/strong&gt;&lt;br&gt;
CI Guardian converts a real failing GitHub Actions run into a structured, machine-readable diagnosis using GitHub Copilot CLI.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27o5ot7qlx75m846h52i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27o5ot7qlx75m846h52i.png" alt="Diagnosis on Failing CI on real repo (Flask)" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Human-in-the-loop Intervention&lt;/strong&gt;&lt;br&gt;
CI Guardian safely refuses to auto-fix an ambiguous CI failure on a real Flask fork and escalates to human review.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw8wcfbgtn6j8l56d437.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw8wcfbgtn6j8l56d437.png" alt="Human-in-the-loop Intervention" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;My Experience with GitHub Copilot CLI&lt;/h2&gt;

&lt;p&gt;GitHub Copilot CLI was used as a &lt;strong&gt;reasoning engine&lt;/strong&gt;, not a blind code generator. I used &lt;code&gt;copilot -p&lt;/code&gt; to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Summarize CI logs into structured root-cause explanations&lt;/li&gt;
&lt;li&gt;Generate minimal unified diffs grounded in real failure logs&lt;/li&gt;
&lt;li&gt;Draft concise pull request titles and descriptions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key insight was that Copilot is most effective when paired with &lt;strong&gt;strict guardrails&lt;/strong&gt;. CI Guardian treats Copilot output as a &lt;em&gt;proposal&lt;/em&gt;, not a command, and enforces safety checks before applying any change. This results in automation that accelerates debugging without sacrificing trust or correctness.&lt;/p&gt;
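As a sketch of that guardrail idea, a pre-apply check might inspect the proposed diff before anything touches the working tree. The helper name and thresholds below are illustrative, not CI Guardian's actual rules:

```python
# Sketch of the "proposal, not command" guardrail: inspect a
# Copilot-generated unified diff and refuse anything too broad.
# max_files / max_changed_lines are illustrative budgets.
def is_safe_patch(diff_text, max_files=2, max_changed_lines=20):
    # Files touched: every "+++ b/<path>" header names one target file.
    files = {
        line[6:] for line in diff_text.splitlines() if line.startswith("+++ b/")
    }
    # Changed lines: added/removed body lines, excluding file headers.
    changed = sum(
        1
        for line in diff_text.splitlines()
        if (line.startswith("+") or line.startswith("-"))
        and not line.startswith(("+++", "---"))
    )
    # Safe only if neither budget is exceeded (both deltas non-positive).
    return max(len(files) - max_files, changed - max_changed_lines, 0) == 0

diff = """--- a/app.py
+++ b/app.py
@@ -1,3 +1,3 @@
-import jsonn
+import json
"""
print(is_safe_patch(diff))  # True: one file, two changed lines
```

A failing check would leave the diff on disk for human review instead of opening a PR, which is the escalation path the screenshots above demonstrate.
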

</description>
      <category>devchallenge</category>
      <category>githubchallenge</category>
      <category>cli</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>Building an IPL Cricket Stats Assistant with Algolia Agent Studio</title>
      <dc:creator>Sreenu Sasubilli</dc:creator>
      <pubDate>Sat, 10 Jan 2026 01:02:05 +0000</pubDate>
      <link>https://forem.com/sreenu_sasubilli_f9289c4e/building-an-ipl-cricket-stats-assistant-with-algolia-agent-studio-44mg</link>
      <guid>https://forem.com/sreenu_sasubilli_f9289c4e/building-an-ipl-cricket-stats-assistant-with-algolia-agent-studio-44mg</guid>
      <description>&lt;h2&gt;
  
  
  IPL Cricket Stats Assistant — A Conversational AI Powered by Algolia
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/algolia"&gt;Algolia Agent Studio Challenge&lt;/a&gt;: &lt;strong&gt;Consumer-Facing Conversational Experiences&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;What I Built&lt;/h2&gt;

&lt;p&gt;I built an IPL Cricket Stats Assistant, a consumer-facing conversational AI that answers natural-language questions about IPL batting performance.&lt;/p&gt;

&lt;p&gt;Users can ask questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Rohit Sharma highest score”&lt;/li&gt;
&lt;li&gt;“Sharma highest score”&lt;/li&gt;
&lt;li&gt;“Virat Kohli at Chinnaswamy Stadium”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The assistant returns grounded, factual answers sourced directly from structured IPL match data.&lt;/p&gt;

&lt;p&gt;This assistant is designed for &lt;strong&gt;everyday cricket fans&lt;/strong&gt;, not analysts.&lt;br&gt;&lt;br&gt;
It supports natural language questions using familiar terms, nicknames, and partial names, allowing users to explore IPL statistics conversationally without needing structured filters or technical knowledge.&lt;/p&gt;




&lt;h2&gt;Demo&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Live Agent (Algolia Agent Studio):&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The agent is published and testable directly inside Algolia Agent Studio.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Frontend Demo:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A lightweight React + InstantSearch demo was built locally to validate real-world usage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Screenshots&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Example queries demonstrating alias resolution, ambiguity handling, and deterministic retrieval&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Alias handling
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoxlkbvmk5yq654nqpo9.png" alt="Alias handling" width="800" height="1168"&gt;
&lt;/li&gt;
&lt;li&gt;Nickname handling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvtp8xkb1ty595ajhaxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvtp8xkb1ty595ajhaxp.png" alt="Nickname handling" width="800" height="1164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Canonical name + venue filter&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdbyc8nblqik7lhuqkp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdbyc8nblqik7lhuqkp3.png" alt="Canonical name + venue filter" width="800" height="1130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ambiguity handling + Clarification follow-up&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F818ai187k6ombc9zd89w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F818ai187k6ombc9zd89w.png" alt="Ambiguity handling + Clarification follow-up" width="800" height="1130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Season filter&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftx4u6aqmtyxypem3v1yx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftx4u6aqmtyxypem3v1yx.png" alt="Season filter" width="800" height="1153"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;How I Used Algolia Agent Studio&lt;/h2&gt;

&lt;p&gt;Algolia Agent Studio serves as the orchestration layer between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A fast, structured &lt;strong&gt;Algolia Search index&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;A conversational &lt;strong&gt;LLM interface&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Carefully designed &lt;strong&gt;agent instructions&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Key design choices:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Every answer is retrieved using &lt;strong&gt;Algolia Search&lt;/strong&gt; (no guessing).&lt;/li&gt;
&lt;li&gt;Each record represents &lt;strong&gt;one batsman’s performance in one match&lt;/strong&gt;, enabling deterministic responses.&lt;/li&gt;
&lt;li&gt;Filters are applied whenever possible (batsman, season, venue, match_id).&lt;/li&gt;
&lt;li&gt;The agent explicitly handles ambiguous queries (e.g., “Sharma”) by asking for clarification instead of assuming intent.&lt;/li&gt;
&lt;/ul&gt;
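A minimal sketch of the alias and ambiguity behavior, assuming an alias table that mirrors the `batsman_aliases` field (the entries and the `resolve_batsman` helper here are illustrative):

```python
# Sketch: resolve a user's term to a canonical batsman name, or ask for
# clarification when a surname is ambiguous. Alias entries illustrative.
ALIASES = {
    "rohit": ["RG Sharma"],
    "hitman": ["RG Sharma"],
    "sharma": ["RG Sharma", "I Sharma", "MM Sharma"],  # ambiguous surname
    "virat kohli": ["V Kohli"],
}

def resolve_batsman(query):
    matches = ALIASES.get(query.strip().lower(), [])
    if len(matches) == 1:
        # Unique match: search with an exact Algolia filter, no guessing.
        return {"filter": f"batsman:'{matches[0]}'"}
    # Zero or multiple candidates: return them so the agent can ask
    # a clarifying question instead of assuming intent.
    return {"clarify": matches}

print(resolve_batsman("Hitman"))
# {'filter': "batsman:'RG Sharma'"}
```
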

&lt;p&gt;The result is a conversational experience that feels natural, but behaves like a reliable data system.&lt;/p&gt;




&lt;h2&gt;Data Source &amp;amp; Modeling&lt;/h2&gt;

&lt;p&gt;The original data comes from the publicly available &lt;strong&gt;IPL Complete Dataset&lt;/strong&gt; on Kaggle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.kaggle.com/datasets/patrickb1912/ipl-complete-dataset-20082020" rel="noopener noreferrer"&gt;https://www.kaggle.com/datasets/patrickb1912/ipl-complete-dataset-20082020&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The raw dataset contains &lt;strong&gt;ball-by-ball delivery data (150K+ rows)&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
For this project, I transformed the data in a Google Colab notebook to make it agent-friendly.&lt;/p&gt;

&lt;h3&gt;Modeling decisions:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Aggregated ball-level data into &lt;strong&gt;one record per batsman per match&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Precomputed runs, balls, fours, and sixes per match&lt;/li&gt;
&lt;li&gt;Added a &lt;code&gt;batsman_aliases&lt;/code&gt; field to support natural queries
(e.g., “Rohit”, “Hitman” → “RG Sharma”)&lt;/li&gt;
&lt;li&gt;Removed the need for cross-record arithmetic inside the agent&lt;/li&gt;
&lt;/ul&gt;
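The aggregation step can be sketched in a few lines of plain Python. The real notebook works from the Kaggle ball-by-ball CSV; the rows and field names here are illustrative:

```python
# Sketch: collapse ball-by-ball deliveries into one record per batsman
# per match, precomputing runs, balls faced, fours and sixes.
from collections import defaultdict

balls = [  # illustrative stand-ins for the Kaggle delivery rows
    {"match_id": 1, "batsman": "RG Sharma", "runs": 4},
    {"match_id": 1, "batsman": "RG Sharma", "runs": 6},
    {"match_id": 1, "batsman": "RG Sharma", "runs": 1},
    {"match_id": 1, "batsman": "V Kohli", "runs": 0},
]

records = defaultdict(lambda: {"runs": 0, "balls": 0, "fours": 0, "sixes": 0})
for b in balls:
    r = records[(b["match_id"], b["batsman"])]
    r["runs"] += b["runs"]
    r["balls"] += 1
    r["fours"] += b["runs"] == 4  # bool adds as 0/1
    r["sixes"] += b["runs"] == 6

print(records[(1, "RG Sharma")])
# {'runs': 11, 'balls': 3, 'fours': 1, 'sixes': 1}
```

Because each record is already a complete per-match innings, the agent answers "highest score" questions by retrieval and sorting alone, with no cross-record arithmetic at query time.
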

&lt;p&gt;This reduced the dataset to &lt;strong&gt;~9.5K clean, deterministic records&lt;/strong&gt;, optimized for fast retrieval and conversational accuracy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this mattered:&lt;/strong&gt;  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Modeling the data at the “one batsman, one match” level ensures the agent never invents statistics and can answer questions instantly using pure retrieval.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;(Optional: Google Colab notebook showing how raw IPL ball-by-ball data was aggregated into one-record-per-batsman-per-match for Agent Studio ingestion:&lt;br&gt;&lt;br&gt;
&lt;a href="https://colab.research.google.com/drive/1UXomb6vJfgX2aT8Patvb1HubTHk38eOG?usp=sharing" rel="noopener noreferrer"&gt;https://colab.research.google.com/drive/1UXomb6vJfgX2aT8Patvb1HubTHk38eOG?usp=sharing&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;Why Fast Retrieval Matters&lt;/h2&gt;

&lt;p&gt;Cricket statistics are &lt;strong&gt;fact-heavy and precision-sensitive&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
A single incorrect number breaks user trust.&lt;/p&gt;

&lt;p&gt;Algolia’s fast, contextual retrieval ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sub-100ms responses, even with filters&lt;/li&gt;
&lt;li&gt;Accurate grounding for every answer&lt;/li&gt;
&lt;li&gt;Clean handling of ambiguity and partial queries&lt;/li&gt;
&lt;li&gt;A conversational UX without sacrificing correctness&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of generating answers, the agent &lt;strong&gt;retrieves facts and explains them&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;Final Thoughts&lt;/h2&gt;

&lt;p&gt;This project demonstrates how &lt;strong&gt;Agent Studio + well-modeled data&lt;/strong&gt; can create conversational experiences that are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trustworthy&lt;/li&gt;
&lt;li&gt;Fast&lt;/li&gt;
&lt;li&gt;User-friendly&lt;/li&gt;
&lt;li&gt;Production-ready&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rather than building “just a chatbot,” I focused on designing an agent that behaves like a &lt;strong&gt;reliable statistical assistant&lt;/strong&gt;, grounded in real data and optimized for human queries.&lt;/p&gt;

&lt;p&gt;Thanks for checking it out!&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>algoliachallenge</category>
      <category>ai</category>
      <category>agents</category>
    </item>
  </channel>
</rss>
