<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Apoorv Gupta</title>
    <description>The latest articles on Forem by Apoorv Gupta (@apoorv_dev07).</description>
    <link>https://forem.com/apoorv_dev07</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3303920%2F1c19f46f-2c68-45c8-a812-ee9648c82224.png</url>
      <title>Forem: Apoorv Gupta</title>
      <link>https://forem.com/apoorv_dev07</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/apoorv_dev07"/>
    <language>en</language>
    <item>
      <title>Laptop/PC Device Anomaly Analyzer: Self-Healing System Powered by Agentic Postgres</title>
      <dc:creator>Apoorv Gupta</dc:creator>
      <pubDate>Mon, 10 Nov 2025 07:31:10 +0000</pubDate>
      <link>https://forem.com/apoorv_dev07/laptoppc-device-anomaly-analyzer-self-healing-system-powered-by-agentic-postgres-2n7m</link>
      <guid>https://forem.com/apoorv_dev07/laptoppc-device-anomaly-analyzer-self-healing-system-powered-by-agentic-postgres-2n7m</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/tigerdata-2025-10-15"&gt;Agentic Postgres Challenge with Tiger Data&lt;/a&gt;&lt;/em&gt; &lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Laptop Anomaly Analyzer&lt;/strong&gt; is an autonomous, AI-powered monitoring system built on &lt;strong&gt;Agentic Postgres + Tiger MCP + Gemini&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
It continuously collects local system metrics (CPU, RAM, Disk, Network I/O), detects performance anomalies in real time, and automatically explains &lt;em&gt;why&lt;/em&gt; they happened — all from within Postgres itself.  &lt;/p&gt;

&lt;p&gt;No external ML service, no Python inference engine — just &lt;strong&gt;Agentic Postgres acting as its own AI brain&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;The project began as a simple TimescaleDB-based logger for my laptop performance, but evolved into an &lt;strong&gt;agentic database experiment&lt;/strong&gt; where the DB doesn’t just store telemetry, it &lt;em&gt;understands, reasons, and reacts&lt;/em&gt;.  &lt;/p&gt;
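&lt;p&gt;&lt;em&gt;A rough idea of one collection tick, as a stdlib-only Python sketch; the field names and helper here are assumptions for illustration, not the repo's code:&lt;/em&gt;&lt;/p&gt;

```python
# A minimal, hypothetical sketch of one collection tick (stdlib only).
# The repo's actual collector and column names may differ, and it likely
# uses psutil for CPU/RAM/network counters.
import shutil
import time

def disk_used_pct(path="/"):
    """Percent of disk capacity in use at `path`."""
    usage = shutil.disk_usage(path)
    return round(100 * usage.used / usage.total, 2)

def make_sample(cpu_pct, ram_pct, disk_pct, ts=None):
    """Shape one telemetry row, ready to insert into a Timescale hypertable."""
    return {
        "ts": ts if ts is not None else time.time(),
        "cpu_pct": cpu_pct,
        "ram_pct": ram_pct,
        "disk_pct": disk_pct,
    }
```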




&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;🔗 &lt;strong&gt;GitHub Repository:&lt;/strong&gt; &lt;a href="https://github.com/StephCurry07/Device-Anomaly-Detector" rel="noopener noreferrer"&gt;github.com/StephCurry07/Device-Anomaly-Detector&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;(Repo includes collector script, MCP config, and dashboard setup)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outputs:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26wvmkjt9zl0rmizbwn9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26wvmkjt9zl0rmizbwn9.png" alt="Starting off with Gemini CLI" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8pcj8fkubqo7wtrfc1k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8pcj8fkubqo7wtrfc1k.png" alt="A basic ask to connect to TimescaleDB" width="800" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltdd166to3jsi75qtmrb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltdd166to3jsi75qtmrb.png" alt="Query returned" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3ay5ata1tnwlacz6uzb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3ay5ata1tnwlacz6uzb.png" alt="Uncovering the project's base" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dashboard Preview:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9nhv0phiwfbtk2iyvo4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9nhv0phiwfbtk2iyvo4.png" alt="Grafana Dashboard" width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How I Used Agentic Postgres
&lt;/h2&gt;

&lt;p&gt;This project combines several of &lt;strong&gt;Agentic Postgres’ most advanced features&lt;/strong&gt;:&lt;/p&gt;

&lt;h3&gt;
  
  
  ⚙️ &lt;strong&gt;Tiger MCP (Model Context Protocol)&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Runs three autonomous database agents:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;anomaly_detector&lt;/code&gt; → runs every 10 minutes to detect CPU/RAM anomalies.
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;root_cause_agent&lt;/code&gt; → triggers on new anomalies, uses vector search to find similar incidents.
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;daily_summary&lt;/code&gt; → summarizes system performance once a day using Gemini reasoning.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;All logic executes &lt;em&gt;inside the database&lt;/em&gt;, orchestrated through MCP — no external scripts required.&lt;/li&gt;

&lt;/ul&gt;
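&lt;p&gt;&lt;em&gt;The repo's detection SQL isn't reproduced here; as a rough Python stand-in, the per-window check that &lt;code&gt;anomaly_detector&lt;/code&gt; runs might amount to a z-score test:&lt;/em&gt;&lt;/p&gt;

```python
# Hypothetical stand-in for the anomaly_detector agent's check; the real
# agent runs as SQL inside Postgres over each 10-minute window.
import statistics

def zscore_anomalies(samples, threshold=3.0):
    """Return indices of samples whose z-score exceeds `threshold`."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples) or 1.0  # guard against flat windows
    return [i for i, x in enumerate(samples) if abs(x - mean) / stdev > threshold]
```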

&lt;h3&gt;
  
  
  💬 &lt;strong&gt;Tiger CLI&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Provides a natural-language interface:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; &lt;span class="s2"&gt;"Summarize anomalies in the last 24 hours"&lt;/span&gt;
 &lt;span class="s2"&gt;"Find similar CPU spikes from past week"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Overall Experience
&lt;/h2&gt;

&lt;p&gt;This was my first time using Postgres in a project, and building with Agentic Postgres completely changed how I think about data systems. Instead of pushing data out to an external model or pipeline, the database itself became the reasoning layer, thanks to MCP and TigerData.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stack Used
&lt;/h2&gt;

&lt;p&gt;Postgres + TimescaleDB&lt;br&gt;
Tiger MCP&lt;br&gt;
TigerData&lt;br&gt;
Gemini CLI&lt;br&gt;
Grafana (for visualization)&lt;br&gt;
Python (for metric collection)&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>agenticpostgreschallenge</category>
      <category>ai</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Beyond the Cache: AI-Driven Incident Management with Redis</title>
      <dc:creator>Apoorv Gupta</dc:creator>
      <pubDate>Mon, 11 Aug 2025 06:58:01 +0000</pubDate>
      <link>https://forem.com/apoorv_dev07/beyond-the-cache-ai-driven-incident-management-with-redis-2l18</link>
      <guid>https://forem.com/apoorv_dev07/beyond-the-cache-ai-driven-incident-management-with-redis-2l18</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/redis-2025-07-23"&gt;Redis AI Challenge&lt;/a&gt;: Beyond the Cache.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built an &lt;strong&gt;AI-powered incident management platform&lt;/strong&gt; that doesn’t just track incidents—it actively &lt;strong&gt;tries to fix them automatically&lt;/strong&gt; in real-time.&lt;br&gt;&lt;br&gt;
Instead of being a passive dashboard, the system reacts instantly to new issues by triggering an &lt;strong&gt;autofix workflow&lt;/strong&gt; backed by Redis Streams and JSON storage.  &lt;/p&gt;

&lt;p&gt;When an incident is detected, Redis:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stores it as the &lt;strong&gt;primary database&lt;/strong&gt; in JSON format.&lt;/li&gt;
&lt;li&gt;Pushes it into a &lt;strong&gt;Redis Stream&lt;/strong&gt; for processing.&lt;/li&gt;
&lt;li&gt;Supports &lt;strong&gt;real-time WebSocket updates&lt;/strong&gt; to all connected clients so the UI reflects changes instantly.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The system can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Detect&lt;/strong&gt;: New incidents are pushed into Redis JSON.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fix&lt;/strong&gt;: A service attempts context-specific fixes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Update&lt;/strong&gt;: The dashboard is refreshed via WebSocket with status changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes the platform not just a monitoring tool, but a self-healing system.&lt;/p&gt;
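&lt;p&gt;&lt;em&gt;A minimal sketch of the detect step, assuming redis-py and the key names above; the stub client used in testing keeps it runnable without a live server:&lt;/em&gt;&lt;/p&gt;

```python
# Sketch of the "Detect" step above. Key names (incident:{id},
# incident_stream) follow the article; `client` is any redis-py-style
# object, so nothing here needs a live Redis instance.
import json

def publish_incident(client, incident):
    """Store an incident as JSON and enqueue it for the autofix pipeline."""
    key = f"incident:{incident['id']}"
    # RedisJSON write: JSON.SET incident:{id} $ '{...}'
    client.execute_command("JSON.SET", key, "$", json.dumps(incident))
    # The stream entry carries just the key; consumers fetch the full document.
    client.xadd("incident_stream", {"key": key})
    return key
```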




&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Github Link: &lt;a href="https://github.com/StephCurry07/Redis-IncidentResponseDashboard" rel="noopener noreferrer"&gt;https://github.com/StephCurry07/Redis-IncidentResponseDashboard&lt;/a&gt;&lt;br&gt;&lt;br&gt;
📹 &lt;strong&gt;Video Walkthrough&lt;/strong&gt;: &lt;a href="https://youtu.be/00vPkSHT3Ac" rel="noopener noreferrer"&gt;YouTube Link&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Screenshots:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F709fwv8eowjji3zpla3d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F709fwv8eowjji3zpla3d.png" alt="Incident Dashboard" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How I Used Redis 8
&lt;/h2&gt;

&lt;p&gt;I used &lt;strong&gt;Redis 8&lt;/strong&gt; well beyond simple caching:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Primary Database (RedisJSON)&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Incidents are stored directly in RedisJSON with rich metadata (&lt;code&gt;severity&lt;/code&gt;, &lt;code&gt;status&lt;/code&gt;, &lt;code&gt;description&lt;/code&gt;, &lt;code&gt;tags&lt;/code&gt;).
&lt;/li&gt;
&lt;li&gt;Updates are instant and atomic—ideal for high-frequency changes from multiple services.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Real-time Streams for Autofix Pipeline&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every new incident is appended to a Redis Stream (&lt;code&gt;incident_stream&lt;/code&gt;).
&lt;/li&gt;
&lt;li&gt;A background AI service consumes from this stream to attempt automated fixes, logging results back to Redis.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Pub/Sub for UI Live Updates&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On every update (fix, status change), a message is published to &lt;code&gt;incident_updates&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;The frontend listens over WebSockets for instant UI refreshes without polling.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Full-Text Search (RediSearch)&lt;/strong&gt; &lt;em&gt;(optional enhancement)&lt;/em&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allows users to quickly filter incidents by description, tags, or owner with sub-millisecond search results.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
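&lt;p&gt;&lt;em&gt;Points 2 and 3 together might look like this single consumer pass (a sketch against redis-py's &lt;code&gt;xread&lt;/code&gt; and &lt;code&gt;publish&lt;/code&gt;, not the project's actual service):&lt;/em&gt;&lt;/p&gt;

```python
# One hypothetical pass of the background autofix consumer (point 2),
# ending with the UI notification (point 3). Method names match redis-py
# (xread, publish); the fix itself is elided.
def consume_once(client, last_id="0"):
    """Read pending stream entries, attempt a fix, publish a UI update."""
    handled = []
    for _stream, entries in client.xread({"incident_stream": last_id}, count=10, block=1000):
        for entry_id, fields in entries:
            # ...the context-specific autofix would run here...
            client.publish("incident_updates", fields["key"])
            handled.append(entry_id)
    return handled
```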




&lt;h2&gt;
  
  
  Why It’s Beyond the Cache
&lt;/h2&gt;

&lt;p&gt;Redis is the &lt;strong&gt;operational core&lt;/strong&gt; of this system—without it, real-time reaction, distributed event processing, and instant updates wouldn’t be possible.&lt;br&gt;&lt;br&gt;
It’s:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;primary DB&lt;/strong&gt; (RedisJSON)&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;data pipeline&lt;/strong&gt; (Streams)&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;search engine&lt;/strong&gt; (RediSearch)
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;... all in one.  &lt;/p&gt;

&lt;p&gt;Instead of just serving cached reads, Redis orchestrates the entire lifecycle from &lt;strong&gt;incident detection → AI fix → live UI updates&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Implement AI autofix logic to handle different infrastructure types.
&lt;/li&gt;
&lt;li&gt;Integrate with CI/CD pipelines for proactive rollback.
&lt;/li&gt;
&lt;li&gt;Add historical analytics using RedisTimeSeries.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>redischallenge</category>
      <category>devchallenge</category>
      <category>database</category>
      <category>ai</category>
    </item>
    <item>
      <title>Stock Search and Insights Using Algolia and n8n</title>
      <dc:creator>Apoorv Gupta</dc:creator>
      <pubDate>Mon, 28 Jul 2025 01:08:01 +0000</pubDate>
      <link>https://forem.com/apoorv_dev07/stock-search-and-insights-using-algolia-and-n8n-569m</link>
      <guid>https://forem.com/apoorv_dev07/stock-search-and-insights-using-algolia-and-n8n-569m</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/algolia-2025-07-09"&gt;Algolia MCP Server Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Stock Search &amp;amp; Insights Platform
&lt;/h1&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built a &lt;strong&gt;Stock Search &amp;amp; Insights Platform&lt;/strong&gt; powered by the &lt;strong&gt;Algolia MCP Server&lt;/strong&gt; for blazing-fast symbol and company name lookups.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Algolia&lt;/strong&gt; provides instant search across an index of stock names and symbols.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bun&lt;/strong&gt; serves as the lightweight backend to fetch data and interact with external APIs.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;n8n&lt;/strong&gt; acts as an &lt;strong&gt;orchestration backend&lt;/strong&gt;, managing workflows for technical analysis, AI-driven insights, and data enrichment.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TwelveData APIs&lt;/strong&gt; are used for fetching real-time prices, technical analysis, and SMA calculations.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chart-IMG API&lt;/strong&gt; generates &lt;strong&gt;advanced charts&lt;/strong&gt; with &lt;strong&gt;Bollinger Bands, RSI, and Volume&lt;/strong&gt; indicators.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPT‑4o‑mini&lt;/strong&gt; analyzes the data and produces quick, AI-driven insights for each stock.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result: A &lt;strong&gt;single interface&lt;/strong&gt; where users can &lt;strong&gt;search for a stock&lt;/strong&gt;, instantly view &lt;strong&gt;real-time price data&lt;/strong&gt;, &lt;strong&gt;AI-powered analysis&lt;/strong&gt;, and &lt;strong&gt;technical charts&lt;/strong&gt; — all in one place.  &lt;/p&gt;




&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Repo:&lt;/strong&gt; &lt;a href="https://github.com/StephCurry07/Algolia-StockSearch" rel="noopener noreferrer"&gt;https://github.com/StephCurry07/Algolia-StockSearch&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Video Walkthrough:&lt;/strong&gt; &lt;a href="https://youtu.be/BVtRHilMbRo" rel="noopener noreferrer"&gt;https://youtu.be/BVtRHilMbRo&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  How I Utilized the Algolia MCP Server
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Built an &lt;strong&gt;Algolia MCP Server&lt;/strong&gt; to manage and query a &lt;strong&gt;custom stock index&lt;/strong&gt; containing symbols, company names, and exchanges.
&lt;/li&gt;
&lt;li&gt;Exposed &lt;strong&gt;MCP-like endpoints&lt;/strong&gt; (&lt;code&gt;/mcp/searchStocks&lt;/code&gt;, &lt;code&gt;/mcp/analyzeStock&lt;/code&gt;) that act as a single entry point for the frontend, abstracting away multiple API calls and complex workflows.
&lt;/li&gt;
&lt;li&gt;Integrated &lt;strong&gt;Algolia InstantSearch&lt;/strong&gt; with my React frontend for &lt;strong&gt;fast, typo-tolerant, and responsive search&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Used the MCP server as a &lt;strong&gt;broker&lt;/strong&gt; between Algolia, &lt;strong&gt;n8n workflows&lt;/strong&gt; (for chart generation, technical analysis, and AI insights), and &lt;strong&gt;external APIs&lt;/strong&gt; (TwelveData &amp;amp; Chart-IMG).
&lt;/li&gt;
&lt;li&gt;This architecture &lt;strong&gt;decouples the frontend from multiple data sources&lt;/strong&gt; — the MCP server handles enrichment, error handling, and data aggregation before sending a unified response back to the UI.
&lt;/li&gt;
&lt;/ul&gt;
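&lt;p&gt;&lt;em&gt;A hypothetical sketch of the broker's aggregation behind &lt;code&gt;/mcp/analyzeStock&lt;/code&gt;, in Python for brevity (the real backend runs on Bun); the injected callables stand in for the external API calls:&lt;/em&gt;&lt;/p&gt;

```python
# Language-agnostic sketch of the /mcp/analyzeStock aggregation. The
# injected fetchers are hypothetical stand-ins for the TwelveData,
# Chart-IMG, and GPT-4o-mini calls the article describes.
def analyze_stock(symbol, fetch_quote, fetch_sma, fetch_chart, ask_ai):
    """Fan out to every data source, then return one unified response."""
    quote = fetch_quote(symbol)
    sma = fetch_sma(symbol)
    return {
        "symbol": symbol,
        "quote": quote,
        "sma": sma,
        "chart_url": fetch_chart(symbol),
        "insight": ask_ai(symbol, quote, sma),
    }
```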




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance is key:&lt;/strong&gt; Algolia MCP made stock searching instantaneous, which is crucial for a financial data app.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workflow automation saves time:&lt;/strong&gt; n8n helped me orchestrate data fetching (real-time quotes, SMA, technical indicators) and combine them into a single response for the frontend.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI adds value:&lt;/strong&gt; Using GPT‑4o‑mini, I transformed raw numbers into actionable insights for end-users.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Charting matters:&lt;/strong&gt; Integrating Chart-IMG allowed me to display &lt;strong&gt;professional-grade charts with key indicators&lt;/strong&gt; effortlessly.&lt;/li&gt;
&lt;li&gt;Learned how to combine Algolia + MCP + Bun + n8n + React + AI into a cohesive product pipeline.&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>algoliachallenge</category>
      <category>ai</category>
      <category>bunjs</category>
      <category>n8n</category>
    </item>
  </channel>
</rss>
