<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ashley Childress</title>
    <description>The latest articles on Forem by Ashley Childress (@anchildress1).</description>
    <link>https://forem.com/anchildress1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3224358%2F7f675c78-6aa0-466a-a5a7-c3e35440d53a.png</url>
      <title>Forem: Ashley Childress</title>
      <link>https://forem.com/anchildress1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/anchildress1"/>
    <language>en</language>
    <item>
      <title>Meet Hotfix—The Dragon Your Legacy Code Deserves</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Mon, 13 Apr 2026 04:44:55 +0000</pubDate>
      <link>https://forem.com/anchildress1/meet-hotfix-the-dragon-your-legacy-code-deserves-4141</link>
      <guid>https://forem.com/anchildress1/meet-hotfix-the-dragon-your-legacy-code-deserves-4141</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aprilfools-2026"&gt;DEV April Fools Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt; &lt;br&gt;
The permanent solution to every developer headache: thermal decommissioning.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upload a screenshot → &lt;em&gt;Hotfix&lt;/em&gt; roasts it&lt;/li&gt;
&lt;li&gt;Gemini generates structured incident reports&lt;/li&gt;
&lt;li&gt;Community votes via escalation system + shares&lt;/li&gt;
&lt;li&gt;Top incidents become global P0 disasters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Hotfix&lt;/em&gt; files serious incident reports. It does not understand that it is completely unhinged. That's what makes it so funny.&lt;/p&gt;

&lt;p&gt;Here’s a real incident report generated via live capture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F990bo4u6qqy7h8ppwc6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F990bo4u6qqy7h8ppwc6z.png" alt="Screenshot Legacy Smelter P0 example" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  I Am the Problem 🏚️
&lt;/h3&gt;

&lt;p&gt;I am the subject matter expert (SME) for several legacy applications at work, and every single time somebody stirs dust in the server room—since I can't come up with any other viable explanation—something breaks. After dealing with this nonsense in one form or another for well over a year, I announced &lt;strong&gt;the permanent fix: smelting.&lt;/strong&gt; I am fully confident that smelting those legacy servers will resolve my ongoing issues instantaneously.&lt;/p&gt;

&lt;p&gt;The one thing I've been lacking in my fantastical smelting solution is a dragon. Nobody seemed particularly invested in how serious I am about problem-solving, because so far not one person has offered me a dragon to get the job done. So I built my own—and I'm sharing it, because legacy code suffering is not a solo experience. Take a screenshot and let the Legacy Smelter handle the problem for you.&lt;/p&gt;
&lt;h3&gt;
  
  
  Asset Designation: &lt;em&gt;Hotfix&lt;/em&gt; 🪧
&lt;/h3&gt;

&lt;p&gt;Meet &lt;em&gt;Hotfix&lt;/em&gt;—and yes, I named the dragon &lt;em&gt;Hotfix&lt;/em&gt; because that is hilarious. Anything else would have been a giant missed opportunity for dragon naming. This app is more than a dragon, though—it's a whole incident management system. You can upload any screenshot—problematic code, poor UI designs, bugs that make you want to scream, or a selfie (if you can handle a little roasting)—and &lt;em&gt;Hotfix&lt;/em&gt; will smelt the problem and give you a detailed incident report memorializing the true fix, which is melting it into oblivion.&lt;/p&gt;

&lt;p&gt;The incident reports are added to a global manifest, where you can share them with friends who would appreciate your solution to the problem. Links are configured to unfurl properly on most platforms, including Slack and Discord. The system treats each share as a containment breach, which increases the overall Impact for that incident (wait seven seconds between shares to avoid rate limits). You can also escalate your favorite incidents, which carries even more weight. The top three global incidents with the highest Impact rating are displayed on the main page as P0 priority.&lt;/p&gt;
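&lt;p&gt;As a sketch of that scoring (the weights here are illustrative guesses, not the app's real values), the Impact math might look something like this:&lt;/p&gt;

```javascript
// Illustrative Impact scoring: shares (containment breaches) add a little,
// escalations add a lot. SHARE_WEIGHT and ESCALATION_WEIGHT are guesses.
function computeImpact(shares, escalations) {
  const SHARE_WEIGHT = 1;      // one point per containment breach
  const ESCALATION_WEIGHT = 5; // escalations carry more weight
  return shares * SHARE_WEIGHT + escalations * ESCALATION_WEIGHT;
}

// Rank incidents by Impact and keep the top three as global P0s.
function topThreeP0(incidents) {
  const ranked = incidents.slice().sort(function (a, b) {
    return computeImpact(b.shares, b.escalations) - computeImpact(a.shares, a.escalations);
  });
  return ranked.slice(0, 3);
}
```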

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Operational Notice:&lt;/strong&gt; Submitted images are processed by Gemini's paid API. Google does not use your uploaded images for training—they're retained only 55 days for abuse monitoring. Do not submit assets you do not own. Do not submit from a company device.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Live at &lt;strong&gt;&lt;a href="https://hotfix.anchildress1.dev" rel="noopener noreferrer"&gt;hotfix.anchildress1.dev&lt;/a&gt;&lt;/strong&gt;—head to the live site for camera uploads, since iframes don't have camera permissions.&lt;/p&gt;


&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://legacy-smelter-288489184837.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Try to Break It ⛓️‍💥
&lt;/h3&gt;

&lt;p&gt;Upload:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The worst UI you've ever seen&lt;/li&gt;
&lt;li&gt;Your most cursed code snippet&lt;/li&gt;
&lt;li&gt;A selfie (if you think you're emotionally prepared)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Share it&lt;/li&gt;
&lt;li&gt;Escalate it&lt;/li&gt;
&lt;li&gt;Win a sanction&lt;/li&gt;
&lt;li&gt;Try to get into the global P0 leaderboard&lt;/li&gt;
&lt;li&gt;Paste your output into the comments—it counts as a containment breach!&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;The repo includes the full React frontend, Express server, Cloud Functions for sanction judging, Firestore rules, and a docs/ folder with the design decisions and prompt files referenced in this post.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/anchildress1" rel="noopener noreferrer"&gt;
        anchildress1
      &lt;/a&gt; / &lt;a href="https://github.com/anchildress1/legacy-smelter" rel="noopener noreferrer"&gt;
        legacy-smelter
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A hardware-accelerated mobile web app that visually melts user-uploaded legacy tech into a puddle of slag. Built for the DEV April Fools Challenge.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://repository-images.githubusercontent.com/1201373945/f2802097-2afe-4c31-848f-a94cc13ca0b1"&gt;&lt;img width="1200" height="475" alt="Legacy Smelter" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frepository-images.githubusercontent.com%2F1201373945%2Ff2802097-2afe-4c31-848f-a94cc13ca0b1"&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Legacy Smelter&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;A satirical incident reporting system for condemned digital artifacts. Upload an image. Hotfix processes it. Output: molten slag.&lt;/p&gt;
&lt;p&gt;The system analyzes uploaded images using Gemini Vision and files a formal postmortem — classification, severity, failure origin, disposition, archive note — before thermally decommissioning the artifact via dragon-based remediation.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Features&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gemini Vision analysis&lt;/strong&gt; — 16-field structured incident schema delivered via Gemini's constrained JSON mode&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hotfix animation&lt;/strong&gt; — PixiJS dragon idle, fly-in, and smelt sequence with audio&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Incident postmortem&lt;/strong&gt; — full structured report overlay with social share (X, Bluesky, Reddit, LinkedIn) plus copy-link&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global incident manifest&lt;/strong&gt; — real-time Firestore feed of all thermally decommissioned artifacts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Decommission index&lt;/strong&gt; — live cumulative pixel count across all incidents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Camera support&lt;/strong&gt; — deploy field scanner via device camera or file upload&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Stack&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Framework&lt;/td&gt;
&lt;td&gt;React 19 + TypeScript + Vite&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Animation&lt;/td&gt;
&lt;td&gt;PixiJS 8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI&lt;/td&gt;
&lt;td&gt;Gemini (&lt;code&gt;gemini-3.1-flash-lite-preview&lt;/code&gt;) via &lt;code&gt;@google/genai&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Database&lt;/td&gt;
&lt;td&gt;Firebase Firestore&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;…&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/anchildress1/legacy-smelter" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;⚖️ This project is licensed under &lt;a href="https://github.com/anchildress1/legacy-smelter/tree/v2.0.0?tab=License-1-ov-file" rel="noopener noreferrer"&gt;Polyform Shield 1.0.0&lt;/a&gt; and is released for this challenge as &lt;a href="https://github.com/anchildress1/legacy-smelter/tree/v2.0.0?tab=readme-ov-file" rel="noopener noreferrer"&gt;v2.0.0&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Dragon 🥚
&lt;/h3&gt;

&lt;p&gt;Getting the animation right was the hardest part of the entire build, and I went into it knowing almost nothing about sprite animation beyond whether something looked right or not. I found the dragon sprites on &lt;a href="https://gamedevmarket.net" rel="noopener noreferrer"&gt;GameDevMarket.net&lt;/a&gt; and figured AI could handle the rest—which was optimistic of me, because AI is decidedly rough at producing smooth animation on the first try or the fifth. I picked up bits and pieces along the way, spent a humbling amount of time on what probably should have been a simpler problem, and I am still nowhere near an expert—but I am rather pleased with how &lt;em&gt;Hotfix&lt;/em&gt; turned out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcfql1crxhfsgfvnru5m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcfql1crxhfsgfvnru5m.png" alt="Screenshot of Hotfix—the Legacy Smelter dragon" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Stack 🧰
&lt;/h3&gt;

&lt;p&gt;The front end is React 19 and TypeScript on Vite, Tailwind v4 for styling, PixiJS 8 for the dragon animation because Canvas 2D was never going to give me the smoothness I needed, and Howler.js so the smelt actually feels like something is happening. On the backend, Firestore handles everything community-facing, Firebase Auth gates the upload endpoint, and a small Express server keeps my Gemini API key off the client.&lt;/p&gt;

&lt;p&gt;Gemini runs through the &lt;code&gt;@google/genai&lt;/code&gt; SDK with two models doing two different jobs. Sanction judging fires as a Cloud Functions v2 &lt;code&gt;onDocumentCreated&lt;/code&gt; trigger, claimed inside a Firestore transaction so concurrent invocations can't overlap.&lt;/p&gt;
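&lt;p&gt;The claim pattern boils down to "first writer wins." Here is a minimal stand-in sketch that uses a plain Map where the real code uses a Firestore transaction, just to show the shape:&lt;/p&gt;

```javascript
// Stand-in for the "claimed inside a Firestore transaction" pattern:
// the first invocation to claim a batch wins; concurrent retries see the
// existing claim and bail. A Map plays the role of Firestore here.
const claims = new Map();

function tryClaimBatch(batchId, workerId) {
  if (claims.has(batchId)) {
    return false; // another invocation already claimed this batch
  }
  claims.set(batchId, workerId);
  return true;
}
```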

&lt;p&gt;Deployment is Cloud Run, primarily because I like having the embeds available in these posts. I already have a strong deployment pipeline, which runs locally for this build instead of inside GHA—the setup is wired into Claude to build this flow for every app I create, so input from me is minimal.&lt;/p&gt;

&lt;p&gt;The downside is that Cloud Run is not the stack I would have picked for this application had AI Studio not wired it that way from the beginning. Cloud Run is expensive, cold starts can hurt performance, and I didn't want an always-on instance just to run background functions—which I never scheduled anyway, so it was ultimately unnecessary. But that's how Cloud Functions got involved and turned this toy project into a three-server special in GCP.&lt;/p&gt;

&lt;h3&gt;
  
  
  Global Smelt Accumulation 🌋
&lt;/h3&gt;

&lt;p&gt;Every image uploaded is converted into a total pixel count and added to a running Firestore counter. It's displayed at the top of every page and is a completely useless metric that I enjoy seeing—a completely valid use case.&lt;/p&gt;
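&lt;p&gt;The counter logic itself is trivial: each upload contributes width times height pixels to the running total. A hedged sketch (the real version increments a Firestore counter instead of a local variable):&lt;/p&gt;

```javascript
// Each upload adds width * height pixels to the running decommission index.
function addToDecommissionIndex(total, image) {
  return total + image.width * image.height;
}

function decommissionIndex(images) {
  let total = 0;
  for (const img of images) {
    total = addToDecommissionIndex(total, img);
  }
  return total;
}
```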

&lt;h3&gt;
  
  
  Vibing a Solution 🫠
&lt;/h3&gt;

&lt;p&gt;I was convinced I didn't need to write tests for a toy project I didn't expect to last, and I failed miserably at that conviction. I ended up using Vitest with Testing Library and the Firebase emulator, because fighting AI to stop making the same mistakes gets expensive much faster than just writing a test suite. The majority of my time was spent validating output and complaining to Claude, ChatGPT, and Gemini that the UI still wasn't finished. I think the four of us together somehow managed to not embarrass me, which I have categorized as a win.&lt;/p&gt;

&lt;h3&gt;
  
  
  Credits 🪙
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Hotfix&lt;/em&gt; owes his entire existence to the artists whose work makes up the core of the experience. All assets sourced from &lt;a href="https://gamedevmarket.net" rel="noopener noreferrer"&gt;GameDevMarket.net&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dragon animation sprites&lt;/strong&gt; — &lt;a href="https://www.gamedevmarket.net/asset/animated-dragon" rel="noopener noreferrer"&gt;Animated Dragon&lt;/a&gt; by RobertBrooks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slag/liquid effects&lt;/strong&gt; — &lt;a href="https://www.gamedevmarket.net/asset/flowing-gooliquid-5653" rel="noopener noreferrer"&gt;Flowing Goo-Liquid&lt;/a&gt; by RobertBrooks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sound effects&lt;/strong&gt; — &lt;a href="https://www.gamedevmarket.net/asset/dark-fantasy-studio-dragon" rel="noopener noreferrer"&gt;Dark Fantasy Studio – Dragon&lt;/a&gt; by DFS (Nicolas Jeudy)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Prize Category
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Best Google AI Usage 🏅
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What Gemini Powers ⚙️
&lt;/h4&gt;

&lt;p&gt;Two Gemini models power the live experience. Every upload is processed by &lt;code&gt;gemini-3.1-flash-lite-preview&lt;/code&gt;, which:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;identifies the subject and draws a bounding box around the primary artifact&lt;/li&gt;
&lt;li&gt;extracts five hex colors as a chromatic profile&lt;/li&gt;
&lt;li&gt;generates a 15-field structured incident report under strict voice and word-count constraints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Hotfix&lt;/em&gt; then uses that bounding box to smelt only the portion of the image Gemini actually flagged.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gemini-3-flash-preview&lt;/code&gt; handles sanction selection on a separate path, grading batches of five incidents based on comedic scoring rules—more on that below.&lt;/p&gt;

&lt;p&gt;The voice was a complete accident. The first pass at the prompt was a plain "read the image and return a structured report" instruction, which worked fine right up until I tried to trick the system with a selfie just to see what would happen. It roasted me. Thoroughly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1y4gng596hti77k6ppu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1y4gng596hti77k6ppu.png" alt="Screenshot of Legacy Smelter Postmortem Incident Report—Archive Note" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I spent the rest of the build optimizing for that exact energy—an enterprise postmortem entirely convinced of its own importance. The voice rules at the top of the prompt file are the load-bearing ones:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Voice&lt;/span&gt;

Enterprise incident report. Postmortem tone: dry, precise, operational, concise. Accusatory toward the artifact and its history.

The system treats absurd subjects as routine incidents. It is filing an incident report. It does not know it is funny.

&lt;span class="gu"&gt;## Comedy mechanics&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; Specificity over generality. "Also, the green paint" is funny. Find the one weird concrete thing in the image and call it out.
&lt;span class="p"&gt;-&lt;/span&gt; The deadpan afterthought. End a technical assessment with a flat, too-honest trailing observation.
&lt;span class="p"&gt;-&lt;/span&gt; Commit beyond the point of reason. Start institutional, then dramatically escalate without changing tone.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;"The system does not know it is funny" is the whole design philosophy in one sentence. That's the entire premise in a nutshell.&lt;/p&gt;

&lt;p&gt;Every one of the 15 returned fields has its own word-count cap and voice constraint baked into the prompt—without them, Gemini defaults to generic corporate language and the bit falls apart. The full prompt file is in the repo in &lt;a href="https://github.com/anchildress1/legacy-smelter/blob/v2.0.0/server.js#L144" rel="noopener noreferrer"&gt;&lt;code&gt;server.js&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
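&lt;p&gt;To make the cap mechanism concrete, here is a tiny validator over a hypothetical subset of the schema (the field names and caps are illustrative, not the real ones):&lt;/p&gt;

```javascript
// Hypothetical subset of the incident schema with per-field word caps.
const FIELD_WORD_CAPS = {
  classification: 4,
  severity: 2,
  archiveNote: 25,
};

function wordCount(text) {
  return text.trim().split(/\s+/).filter(Boolean).length;
}

// Return the names of fields whose text blows past their cap.
function violations(report) {
  const over = [];
  for (const field of Object.keys(FIELD_WORD_CAPS)) {
    const words = wordCount(report[field] || "");
    if (Math.max(words - FIELD_WORD_CAPS[field], 0) !== 0) {
      over.push(field);
    }
  }
  return over;
}
```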
&lt;h4&gt;
  
  
  The Sanction Logic 📛
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;gemini-3-flash-preview&lt;/code&gt; handles the sanction path—Flash Lite falls apart on comparison judging across a batch, and Pro is overkill that actually loses some of the unhinged quality Flash is known for.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftoolo42suf5v5lrb23w6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftoolo42suf5v5lrb23w6.png" alt="Screenshot of a Gemini sanction" width="800" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The original image is never stored, so Gemini can't grade accuracy against the source—it can only judge the writing. The first draft used strict grading criteria and kept picking the most technically accurate report instead of the funniest. Version two mostly lets Gemini run wild, and it picks the funny one now. The guidelines that survived:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Signals that a record may deserve sanction:
&lt;span class="p"&gt;
-&lt;/span&gt; disproportionate institutional seriousness applied to an ordinary software or workplace failure
&lt;span class="p"&gt;-&lt;/span&gt; precise, concrete details that make the situation feel embarrassingly real
&lt;span class="p"&gt;-&lt;/span&gt; escalation from a small defect, design choice, or human workaround into procedural absurdity
&lt;span class="p"&gt;-&lt;/span&gt; wording that implies everyone involved has accepted something obviously unreasonable as normal
&lt;span class="p"&gt;-&lt;/span&gt; dry phrasing that lands harder the straighter it is read

Do not reward a record merely for being:
&lt;span class="p"&gt;
-&lt;/span&gt; wordy
&lt;span class="p"&gt;-&lt;/span&gt; random
&lt;span class="p"&gt;-&lt;/span&gt; technically dense
&lt;span class="p"&gt;-&lt;/span&gt; surreal without a clear comedic turn
&lt;span class="p"&gt;-&lt;/span&gt; mildly clever but interchangeable with the others
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The full sanction prompt file is in the repo in &lt;a href="https://github.com/anchildress1/legacy-smelter/blob/v2.0.0/functions/sanction.js#L76" rel="noopener noreferrer"&gt;&lt;code&gt;functions/sanction.js&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  Building with Google AI 🧪
&lt;/h4&gt;

&lt;p&gt;I touched nearly every Google AI tool during this build. Gemini Chat for brainstorming and prompt iteration, but it couldn't hold context long enough to be useful past the first few rounds. AI Studio for the initial scaffold—which checked my live API key into the repo on init, so that was fun until GitHub's secret detection caught it before I did. The CLI for animation work, though the accessibility skill was broken and I ended up routing around it. Antigravity until the free tier ran out mid-animation pass. Gemini Pro for the social banner, though it couldn't iterate accurately on edits. Each one ran out of steam before I was done, which is how I ended up reaching for all of them.&lt;/p&gt;

&lt;p&gt;What actually shipped runs on Gemini. Every postmortem is &lt;code&gt;gemini-3.1-flash-lite-preview&lt;/code&gt; doing exactly what it's good at, live, in production. Every sanction is &lt;code&gt;gemini-3-flash-preview&lt;/code&gt; reading a batch of five and picking the one a dev would quote to a coworker. Two models, two jobs, both in constrained JSON mode, both doing real work on every request.&lt;/p&gt;

&lt;p&gt;Gemini's version of this project is released as &lt;a href="https://github.com/anchildress1/legacy-smelter/tree/v0.0.1" rel="noopener noreferrer"&gt;v0.0.1&lt;/a&gt; and produced this rather useless but very funny animation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8s3yu5kqzy5j23spsrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8s3yu5kqzy5j23spsrv.png" alt="Screenshot of Gemini's version of the app for v0.0.1" width="792" height="1374"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  Community Favorite 🪩
&lt;/h3&gt;

&lt;p&gt;Legacy Smelter is a system designed to be shared, escalated, and collectively abused. Every incident lands on a global manifest, links unfurl on Slack and Discord, shares rack up breach points, escalations carry real weight, and the top three P0 incidents are permanent shrines to whatever the community found most absurd. If that sounds like something you'd enjoy, you're exactly who I built it for.&lt;/p&gt;


&lt;h3&gt;
  
  
  The Permanent Fix
&lt;/h3&gt;

&lt;p&gt;All in all, I'm more than thrilled to finally have my dragon accessible whenever I'm fed up with something. It's a nice way to relieve some stress, and the output can be genuinely hilarious overkill.&lt;/p&gt;

&lt;p&gt;Some problems just aren’t meant to be fixed...&lt;/p&gt;

&lt;p&gt;They’re meant to be smelted.&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__3224358"&gt;
    &lt;a href="/anchildress1" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=150,height=150,fit=cover,gravity=auto,format=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3224358%2F7f675c78-6aa0-466a-a5a7-c3e35440d53a.png" alt="anchildress1 image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/anchildress1"&gt;Ashley Childress&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/anchildress1"&gt;Distributed backend specialist. Perfectly happy playing second fiddle—it means I get to chase fun ideas, dodge meetings, and break things no one told me to touch, all without anyone questioning it. 😇&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;






&lt;h4&gt;
  
  
  🛡️ Thermally Decommissioned with Assistance
&lt;/h4&gt;

&lt;p&gt;This post was written by me with collaborative editing from Claude, ChatGPT, and Gemini. The code for &lt;em&gt;Legacy Smelter&lt;/em&gt; was built using Claude Code—who also wrote the tests, the deployment pipeline, the Cloud Functions, and then got put to work on this submission post because I don't believe in downtime. &lt;/p&gt;

&lt;p&gt;ChatGPT and Gemini were consulted at various stages, though "consulted" is generous for how often they were told they were wrong. No AI was harmed in the making of this project, but one of them has now been through every phase of the software development lifecycle in a single sprint and may need to file its own incident report.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>418challenge</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Forged Between Coal and Code</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Fri, 03 Apr 2026 05:49:31 +0000</pubDate>
      <link>https://forem.com/anchildress1/forged-between-coal-and-code-phi</link>
      <guid>https://forem.com/anchildress1/forged-between-coal-and-code-phi</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/wecoded-2026"&gt;2026 WeCoded Challenge&lt;/a&gt;: Frontend Art&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Show us your Art
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Carbon Trace&lt;/em&gt; is an immersive memoir that I designed, wrote, narrated, and produced. I used my native Appalachian accent throughout since the origin story starts at home in a small coal town in Southwest Virginia.&lt;/p&gt;

&lt;p&gt;For the full experience, visit my website at &lt;a href="https://carbon-trace.anchildress1.dev" rel="noopener noreferrer"&gt;https://carbon-trace.anchildress1.dev&lt;/a&gt; and be sure to turn on your sound. &lt;/p&gt;

&lt;p&gt;Pay attention to the ambient audio shifting between scenes. Watch the circuit traces grow from barely visible to full coverage. The ghost-drift text is intentionally out of sync with the narration—it's not a subtitle, it's a feeling.&lt;/p&gt;

&lt;p&gt;Built with Canvas 2D, WebGL displacement effects, GSAP timelines, layered Howler.js audio, and accessibility-first interaction design—no frameworks, no shortcuts.&lt;/p&gt;


&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://carbon-trace-288489184837.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;💡 This submission reflects the &lt;a href="https://github.com/anchildress1/carbon-trace/tree/v1.0.1" rel="noopener noreferrer"&gt;v1.0.1&lt;/a&gt; release used for the competition build.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Inspiration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Origins of &lt;em&gt;Carbon Trace&lt;/em&gt; 🪨
&lt;/h3&gt;

&lt;p&gt;When I first saw this challenge, I felt what I wanted to draw almost immediately. The first obstacle was figuring out how to translate that feeling into code.&lt;/p&gt;

&lt;p&gt;I wasn't inspired by any one thing. I was inspired by &lt;em&gt;everything&lt;/em&gt;. To accurately convey the depth of gender roles in my life, I had to start at the beginning in the small Appalachian coal town where I grew up. Life there has clear binary boundaries: men work in the mines and women take care of the home. I've been pushing back on this ideal for as long as I can remember—starting with the toy kitchen gift I had zero interest in as a toddler.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Carbon Trace&lt;/em&gt; isn't another generic idea of equality. It's the fight I went through to be treated as an equal in a male-dominated world. The diamond is a metaphor for my life and moves through its own journey in pictures as I tell you mine. I wrote and narrated the script in the exact same dialect I grew up speaking. Each scene has independent ambient audio designed to embody a specific emotion. There are small animations throughout that help bring the static images alive. Each individual component adds a layer of depth to the overall story.&lt;/p&gt;

&lt;p&gt;Every image builds on the previous one as the narrative progresses and follows the same constraints: circuitry begins barely perceptible and grows scene by scene until it covers the entire frame. The diamond starts black, buried in coal, and shines brighter until it reaches full power at the end. Strategic lighting throughout obscures faces to keep the focus on the diamond as the primary character and to prevent this from being about any one person—it's designed to be about women in the industry as a whole, because my experience is not unique. It's one of many.&lt;/p&gt;

&lt;p&gt;So the diamond is my story zoomed out and abstracted so every individual can feel themselves inside the experience while I tell you about mine.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Numbers Haven't Changed 📉
&lt;/h3&gt;

&lt;p&gt;Even though we have grown from ideas like the ones I grew up with—where women belong in the kitchen, not in the coal mines—the inequality is still glaringly apparent in the tech space.&lt;/p&gt;

&lt;p&gt;In college I served as president of the &lt;a href="https://www.westga.edu/news/student-success/cs-wow.php" rel="noopener noreferrer"&gt;CS WoW&lt;/a&gt; club, which aims to improve the visibility of tech-related careers for young grade school girls through community outreach. Even with programs like this throughout the US, only one woman earns a CS degree for every four men (&lt;a href="https://nces.ed.gov/programs/digest/d23/tables/dt23_325.35.asp" rel="noopener noreferrer"&gt;NCES, 2021–22&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;In tech, and more specifically engineering, men currently outnumber women 4 to 1 (&lt;a href="https://www.bls.gov/cps/cpsaat39.htm" rel="noopener noreferrer"&gt;BLS, CPS Table 39&lt;/a&gt;). The women who do work these jobs earn approximately 12% less on average than their male counterparts (&lt;a href="https://www.bls.gov/opub/reports/womens-earnings/2023/" rel="noopener noreferrer"&gt;BLS, Highlights of Women's Earnings 2023&lt;/a&gt;). I built &lt;em&gt;Carbon Trace&lt;/em&gt; as a long-lasting impact piece. It records the state of things in 2026 the same way these statistics do. I didn't build it to raise awareness. I built &lt;em&gt;Carbon Trace&lt;/em&gt; to make you feel what these numbers can't.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why It Had to Be Immersive 🌊
&lt;/h3&gt;

&lt;p&gt;I imagine there's at least one person reading this wondering why I needed a full-scale production to tell a story I could just as easily have written down. My answer is because &lt;strong&gt;I needed you to feel it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every technical layer in &lt;em&gt;Carbon Trace&lt;/em&gt; exists to carry a piece of that feeling. The ambient audio shifts between scenes to set an emotional tone that words alone can't establish—mine dust settling, water running, wind through an empty room. The ghost-drift text floats fragments of thought across the screen like the things you almost say out loud but don't. The circuit trace shimmer starts nearly invisible and grows brighter every scene because the potential was always there—it just needed the right conditions to be seen. The PixiJS displacement effects make the world around the diamond physically respond: water flows, heat rises, and the diamond glows with increasing intensity. None of these layers are decorative. Each one is a narrative instrument, and &lt;em&gt;Carbon Trace&lt;/em&gt; is what happens when they all play at once.&lt;/p&gt;




&lt;h2&gt;
  
  
  My Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/anchildress1" rel="noopener noreferrer"&gt;
        anchildress1
      &lt;/a&gt; / &lt;a href="https://github.com/anchildress1/carbon-trace" rel="noopener noreferrer"&gt;
        carbon-trace
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Deterministic scene engine for an interactive narrative experience using GSAP, Howler, Canvas 2D and PixiJS. Built for WeCoded 2026 Frontend Art.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/anchildress1/carbon-trace/public/assets/images/carbon-trace-banner-gh-e897ebe7.webp"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fanchildress1%2Fcarbon-trace%2FHEAD%2Fpublic%2Fassets%2Fimages%2Fcarbon-trace-banner-gh-e897ebe7.webp" alt="Banner"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Carbon Trace: An Immersive Art Experience&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://github.com/anchildress1/carbon-trace/actions/workflows/ci.yml" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/anchildress1/carbon-trace/actions/workflows/ci.yml/badge.svg" alt="CI"&gt;&lt;/a&gt; &lt;a href="https://github.com/anchildress1/carbon-trace/LICENSE" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/2cb6e0aa35fa3e38e0e2b58f8f6f5e63b4a57f080945f2f6d61b1966fb7542d5/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6c6963656e73652d506f6c79666f726d253230536869656c642d626c7565" alt="License: Polyform Shield"&gt;&lt;/a&gt; &lt;a href="https://sonarcloud.io/project/overview?id=anchildress1_carbon-trace" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/b073cc108d821fb439d8fe79837a0d5b16732f256c17c31df7c6e148c226ad7b/68747470733a2f2f736f6e6172636c6f75642e696f2f6170692f70726f6a6563745f6261646765732f6d6561737572653f70726f6a6563743d616e6368696c6472657373315f636172626f6e2d7472616365266d65747269633d616c6572745f737461747573" alt="Quality Gate"&gt;&lt;/a&gt; &lt;a href="https://sonarcloud.io/project/overview?id=anchildress1_carbon-trace" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/d81ceb39c2bc2bcffce9819b98bbd21ce6956fd6f1200ed2d91989525e319130/68747470733a2f2f736f6e6172636c6f75642e696f2f6170692f70726f6a6563745f6261646765732f6d6561737572653f70726f6a6563743d616e6368696c6472657373315f636172626f6e2d7472616365266d65747269633d636f766572616765" alt="Coverage"&gt;&lt;/a&gt; &lt;a href="https://developer.chrome.com/docs/lighthouse/accessibility" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/d4a75227d1a6ef2a77b7d4fdeb80a300ce48b56b66dac82bbc204a8164b25980/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6163636573736962696c6974792d39352532352532422532304c69676874686f7573652d627269676874677265656e" alt="Accessibility"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;An immersive visual narrative told from the awareness of a diamond trapped in a coal seam—12 painted scenes with ghost-drift text, narrated audio, and pixel-level visual effects. Built for &lt;a href="https://dev.to/devteam/join-the-2026-wecoded-challenge-and-celebrate-underrepresented-voices-in-tech-through-writing--4828" rel="nofollow"&gt;WeCoded 2026 DEV Challenge&lt;/a&gt; Frontend Art.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Experience it live: &lt;a href="https://carbon-trace.anchildress1.dev" rel="nofollow noopener noreferrer"&gt;https://carbon-trace.anchildress1.dev&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;The Story 💎&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;A diamond wakes up inside a coal seam. It doesn't know what it is yet—just pressure, darkness, and the sense that something isn't right. Over 12 scenes it moves through tunnels, furnaces, pockets, sinks, and silence. It gets carried, stored, forgotten, and found again. By the end, it isn't just a diamond anymore. It's a circuit. It's music. It's light.&lt;/p&gt;
&lt;p&gt;The narrative follows a carbon cycle that isn't chemistry—it's personal. Coal to diamond to circuit to light. Each scene is a painted image (Leonardo AI, Flux 2 Pro) with narration I recorded, ambient textures, ghost-drift text that pours in and blows out…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/anchildress1/carbon-trace" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;⚖️ This project is licensed under &lt;a href="https://github.com/anchildress1/carbon-trace/blob/main/LICENSE" rel="noopener noreferrer"&gt;Polyform Shield 1.0.0&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  What I Am Not 🔧
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;I am not a frontend developer.&lt;/strong&gt; I'm a backend-focused engineer who had never heard of Canvas 2D, Howler.js, PixiJS, or GSAP before this project. I spent just as much time learning what these tools do as I did designing the system around them. AI explained what each tool could do and offered alternatives; I decided what to do with that from there.&lt;/p&gt;

&lt;p&gt;I'm also not an artist. I used &lt;a href="https://leonardo.ai" rel="noopener noreferrer"&gt;Leonardo.ai&lt;/a&gt; to generate all images and dusted off some old GIMP skills to build the image layer masks by hand. Everything you see in &lt;em&gt;Carbon Trace&lt;/em&gt; was built by someone who doesn't do this every day—which is exactly why it took a full production pipeline, 13 ADRs, and four competing AI reviewers to ship it.&lt;/p&gt;

&lt;p&gt;What I am is a backend engineer who brought backend discipline to a frontend art project. The ADR process, the adversarial review gauntlet, the CI/CD pipeline, 685 unit tests, 220 E2E tests—that's what happens when someone who builds production systems decides to build something meaningful instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Voice You Can't Generate 🎙️
&lt;/h3&gt;

&lt;p&gt;I wrote and narrated the script myself because real Appalachian speech is something AI simply can't produce, even when handed a list of words it's allowed to use in reference to the area I still call home. Words like "holler" (hollow) or "sangle" (single) and phrases like "ain't got a pot to piss in" (little financial means) are all authentic to the Southwestern Virginia and Eastern Kentucky regions.&lt;/p&gt;

&lt;p&gt;I know enough about recording to know I never wanted to learn it myself. However, &lt;em&gt;Carbon Trace&lt;/em&gt; could not exist without quality recordings you don't get from QuickTime. So I taught myself just enough GarageBand to record every track, and no, it wasn't much fun. I got in and out with the basics, then used &lt;code&gt;ffmpeg&lt;/code&gt; to slice the ambient sounds from &lt;a href="http://freesound.org" rel="noopener noreferrer"&gt;FreeSound.org&lt;/a&gt;. AI stayed in the background to help me iterate ideas until I was happy with the end result.&lt;/p&gt;

&lt;h3&gt;
  
  
  How I Wrangled the Robots 🦾
&lt;/h3&gt;

&lt;p&gt;Since this project lives entirely outside my usual stack, I leaned heavily on my AI friends to get the job done, but this was not a prompt-and-go solution.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Review Gauntlet ⚔️
&lt;/h4&gt;

&lt;p&gt;My primary workflow used Claude Code as the implementer, Codex and Antigravity as adversarial reviewers on every branch to flag inconsistencies and bugs, and Copilot for the final review sign-off on all changes. Sonar and Trivy ran on every PR along with a suite of tests, including Playwright and Lighthouse.&lt;/p&gt;

&lt;p&gt;This is one of many examples of why the approach works: no single LLM is good at everything, and putting them in competition with each other raises code quality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flejvtljh477qu3sv5lft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flejvtljh477qu3sv5lft.png" alt="Screenshot showing adversarial review findings (P1/P2 bugs, tests passing) in Codex" width="800" height="785"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The adversarial reviews were critical to the final build, because no single AI was allowed to operate unchecked in a repo where I didn't plan to personally review the code. Beyond architecture bugs, the gauntlet caught frontend-specific issues: a mask-processing loop that froze the page during scene loads, and repeated layout calculations that caused animation stutter. It also proved to be a pain, because every time I thought I was done with a feature, another hour or more of AI wrangling followed. The back and forth continued until all of my helper reviewers agreed on the final solution, and only then was the branch merged.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgapdlukampka1489v2g0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgapdlukampka1489v2g0.png" alt="Screenshot Antigravity catching Claude's PausableTimer hallucination with mathematical proof" width="800" height="771"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  From Design Doc to Decision Records 🗂️
&lt;/h4&gt;

&lt;p&gt;I started with a simple design document in markdown that was converted into an &lt;code&gt;AGENTS.md&lt;/code&gt; file and wired to each AI individually. By the time I reached version 5 of the "simple" design, I decided I needed something with more structure. That's when I switched to writing architecture decision records and added them to the repo for tracking. I ended up with 13 ADRs, most of which were updated after one or more decisions I made proved impossible given the constraints I defined. This forced every major technical decision to be intentional instead of experimental.&lt;/p&gt;

&lt;p&gt;Alongside the repo work, ChatGPT and Claude Cowork helped me with image generation prompts and gave me all the info I needed about GSAP, Howler.js, PixiJS, and Canvas 2D to be able to make design decisions. They had competing reviews between them, as well, just to make sure all the pertinent information was available to me when I needed it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 For a full breakdown of every architectural decision made during the build, &lt;a href="https://github.com/anchildress1/carbon-trace/tree/v1.0.1/docs/ADRs" rel="noopener noreferrer"&gt;the ADRs&lt;/a&gt; are available in the repo.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Hundreds of Wrong Diamonds 🔮
&lt;/h3&gt;

&lt;p&gt;Leonardo wasn't easy to wrangle either; I generated literally hundreds of images to perfect each scene. ChatGPT and Claude often helped with wording, and each had its own best-practice instructions generated from research, covering several different image flows across models including Flux Pro 2.0, Nano Banana, and GPT Image 1.5.&lt;/p&gt;

&lt;p&gt;I had several hilarious outtakes during image generation, too. I learned that specific words like "rough faceted" or "silhouette" did not mix well with some models. I ended up with a somewhat extensive set of rules for prompt generation to ensure the diamond's story was properly told through each picture.&lt;/p&gt;

&lt;p&gt;Here are a couple of my favorite outtake images:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39fmxh13fuqmhmh9glr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39fmxh13fuqmhmh9glr5.png" alt="Diamond in jeans pocket with coal scrip coins—wrong context/scale, funny failure" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgni9og02f0oj5ounrdbz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgni9og02f0oj5ounrdbz.png" alt="Man reaching for tiny diamond by firelight—face visible, directly violates the " width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Under the Hood ⚙️
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Application architecture:&lt;/strong&gt; Vanilla JS (no framework), 14 ES modules orchestrated by a 5-state machine (Loading → Paused → Scene Active → Transitioning → Credits). My goal was to make this a production-level application without over-engineering or introducing abstraction where it doesn't belong. This is a static single page, one-flow-only application and the entire flow for each scene is controlled by &lt;code&gt;scenes.json&lt;/code&gt;—every frame's image, narration lines, ambient audio, audio cues, effects, and transition config lives in one file for easy edits that don't interfere with code structure. Every scene difference is expressed as configuration, not logic, which means adding a new per-scene behavior is adding a config key, not an if-block.&lt;/p&gt;
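&lt;p&gt;To make the config-over-logic idea concrete, a single scene entry might look something like this. The field names below are illustrative assumptions for this sketch, not the project's exact schema:&lt;/p&gt;

```javascript
// Illustrative sketch of one scenes.json entry. Field names are
// assumptions for this example, not the project's exact schema.
const scene = {
  id: "scene-03",
  image: "assets/images/scene-03.webp",
  narration: { src: "audio/narration-03.m4a", delayMs: 500 },
  ambient: { src: "audio/mine-dust.m4a", volume: 0.4, loop: true },
  effects: ["heat"], // which masked PixiJS effect regions to enable
  transition: { type: "crossfade", durationMs: 800 },
};

// Adding a new per-scene behavior means adding a key here,
// not an if-block in the engine.
```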

&lt;p&gt;The state machine isn't just a label—it controls how every subsystem behaves at any given moment. When a user pauses, audio, canvas transitions, PixiJS effects, shimmer dots, GSAP timelines, and auto-advance timers all freeze in sync. When they resume, everything restarts from exactly where it left off. An unconditional auto-advance timer fires regardless of whether the narration &lt;code&gt;end&lt;/code&gt; event arrives, eliminating a race condition where scenes could stall if the browser swallowed the event. Every timer in the system is pause-aware through a shared &lt;code&gt;PausableTimer&lt;/code&gt; utility so nothing leaks across scene boundaries.&lt;/p&gt;
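&lt;p&gt;A minimal sketch of what a pause-aware timer can look like. This is my simplified version of the idea, not the project's actual &lt;code&gt;PausableTimer&lt;/code&gt; API:&lt;/p&gt;

```javascript
// Simplified pause-aware timer: pause() banks the time that is left,
// resume() restarts the countdown from exactly that remainder.
class PausableTimer {
  constructor(callback, delayMs) {
    this.callback = callback;
    this.remaining = delayMs;
    this.startedAt = null;
    this.id = null;
  }
  start() {
    this.startedAt = Date.now();
    this.id = setTimeout(this.callback, this.remaining);
  }
  pause() {
    if (this.id === null) return;
    clearTimeout(this.id);
    this.id = null;
    // Bank whatever time is left so resume() picks up where we stopped.
    this.remaining -= Date.now() - this.startedAt;
  }
  resume() {
    if (this.id !== null) return;
    this.start();
  }
  cancel() {
    clearTimeout(this.id);
    this.id = null;
  }
}
```

Because every subsystem shares one utility like this, a pause freezes narration delays, auto-advance, and effect timers as a unit instead of each module inventing its own countdown.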

&lt;p&gt;&lt;strong&gt;The audio system is the most complex piece.&lt;/strong&gt; I wanted to include different emotional ambient tracks for each scene designed to play just under the narration layer. I sourced all tracks from &lt;a href="https://freesound.org" rel="noopener noreferrer"&gt;FreeSound.org&lt;/a&gt;, but no two sound effects have the same volume, which meant I needed the ability to mix on demand from the backend in addition to fade controls and delayed timing. Two independent Howler.js channels are responsible for running each track concurrently:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Channel&lt;/th&gt;
&lt;th&gt;Format&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Ambient&lt;/td&gt;
&lt;td&gt;m4a, looped&lt;/td&gt;
&lt;td&gt;Crossfades between scenes (800ms), pauses with all channels during nav&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Narration&lt;/td&gt;
&lt;td&gt;m4a, one-shot&lt;/td&gt;
&lt;td&gt;Per-scene voiceover with configurable delay, pre-buffers next scene's audio during current playback&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I also implemented buffer recovery escalation through three distinct stages: nudge (no-op seek to force browser re-eval), reload (preserve position → reset source → restore), exhaustion (log warning, clear state, prevent UI lockup). All timers use unified pause/resume logic to prevent cross-scene leakage. The final scene layers in a licensed track from Bridge City Sinners that fades in before the narration ends, then boosts in volume with a 3-second fade once the voiceover completes—a cinematic handoff from story to music.&lt;/p&gt;
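&lt;p&gt;The escalation ladder itself is simple to express. Here is a sketch of just the stage selection; the stage names mirror the description above, while the actual seek and reload mechanics are omitted:&lt;/p&gt;

```javascript
// Sketch of the three-stage buffer recovery escalation described above.
// Only the ladder is shown; the recovery mechanics themselves are omitted.
function nextRecoveryStage(attempt) {
  if (attempt === 0) return "nudge";   // no-op seek to force browser re-evaluation
  if (attempt === 1) return "reload";  // preserve position, reset source, restore
  return "exhaustion";                 // log a warning, clear state, keep the UI alive
}
```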

&lt;p&gt;&lt;strong&gt;Rendering architecture:&lt;/strong&gt; The visual stack is four layers composited on top of each other—a Canvas 2D scene layer for images, a PixiJS/WebGL canvas for displacement effects, a separate Canvas 2D overlay for the shimmer trace dots, and a DOM layer on top for text, captions, and controls. Each layer has its own render loop and pauses independently with the state machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PixiJS visual effects:&lt;/strong&gt; A separate WebGL-powered canvas handles pixel-level scene animations—water displacement, heat distortion, glow, and shockwave—each confined to mask-based regions so only targeted areas of the image animate. Effect parameters modulate in real time from audio frequency data via a Web Audio AnalyserNode. The entire PixiJS bundle (~330 KB) is lazy-loaded after the initial paint so it never blocks the first screen the user sees, and if WebGL fails entirely, the experience degrades gracefully to static images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Circuit trace overlays:&lt;/strong&gt; The circuit traces aren't just static images—they're a live shimmer overlay rendered on a dedicated canvas. Each scene loads a hand-authored PNG mask that I drew in GIMP, where dark pixels define walkable paths. &lt;code&gt;shimmer.js&lt;/code&gt; spawns glowing dots that navigate those paths using 8-directional pathfinding, pulsing in warm amber tones that shift per scene. The opacity ramps from 5% in the opening to full coverage by the finale—the circuitry was always there, it just needed the right conditions to be seen. &lt;/p&gt;
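&lt;p&gt;The pathfinding itself is a classic 8-neighbor walk over a boolean grid. A rough sketch of the neighbor lookup, much simplified from whatever &lt;code&gt;shimmer.js&lt;/code&gt; actually does:&lt;/p&gt;

```javascript
// 8-directional neighbor lookup over a walkable mask (true = path pixel).
// A shimmer dot picks its next step from whatever this returns.
const DIRS = [
  [-1, -1], [0, -1], [1, -1],
  [-1,  0],          [1,  0],
  [-1,  1], [0,  1], [1,  1],
];

function walkableAt(mask, x, y) {
  const row = mask[y];
  if (row === undefined) return false; // off the grid
  return row[x] === true;
}

function walkableNeighbors(mask, x, y) {
  const out = [];
  for (const [dx, dy] of DIRS) {
    if (walkableAt(mask, x + dx, y + dy)) out.push([x + dx, y + dy]);
  }
  return out;
}
```

In the real overlay the mask comes from sampling the dark pixels of the hand-drawn PNG; here it is just a small 2D array.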

&lt;p&gt;The first design couldn't produce what I had in mind, so I deferred it, rewrote the ADR, and came back with a completely different approach that would get the job done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GSAP timeline orchestration:&lt;/strong&gt; The ghost-drift text was designed to keep the audience engaged in the narration in real time. I set up positioning as a percentage value relative to the container and originally allowed for alignment options. Later, I decided that was unnecessary and removed the extra noise from the codebase.&lt;/p&gt;

&lt;p&gt;All captions sync directly into the GSAP timeline via callbacks instead of independent timers. That way when the user pauses or resumes any scene, the captions are automatically included.&lt;/p&gt;

&lt;p&gt;The credits overlay has its own ADR and runs a GSAP-driven scroll with touch, wheel, and keyboard input, focus management for links, and full reduced-motion support.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessibility (WCAG AA):&lt;/strong&gt; Any time I do any front-end work, accessibility is top of mind. This project was no different. I made sure all standard best practices were followed after AI helped to research what that looks like in 2026, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;aria-live="polite"&lt;/code&gt; region to announce full narration text on scene change&lt;/li&gt;
&lt;li&gt;Roving tabindex for the scene progress bar&lt;/li&gt;
&lt;li&gt;Standard media keyboard nav: Space (play/pause), Enter/Arrow (advance), Escape (pause)&lt;/li&gt;
&lt;li&gt;Screen reader narration separate from visual ghost-drift text (&lt;code&gt;aria-hidden="true"&lt;/code&gt; on visual elements to prevent duplication)&lt;/li&gt;
&lt;li&gt;A persistent caption toggle via localStorage&lt;/li&gt;
&lt;li&gt;Reduced motion is fully supported—&lt;code&gt;prefers-reduced-motion&lt;/code&gt; disables all canvas effects, freezes shimmer dots, cuts transitions instantly, and responds to live preference changes mid-session&lt;/li&gt;
&lt;/ul&gt;
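&lt;p&gt;The whole reduced-motion branch collapses to one decision applied everywhere. A rough sketch of that gate, where the helper and field names are mine rather than the real module's:&lt;/p&gt;

```javascript
// Illustrative gate: one preference flag fans out to every motion layer.
// Helper and field names are assumptions, not the project's real API.
function effectSettings(prefersReducedMotion) {
  return {
    canvasEffects: !prefersReducedMotion,   // PixiJS displacement, heat, glow
    shimmerAnimated: !prefersReducedMotion, // freeze trace dots in place
    transitionMs: prefersReducedMotion ? 0 : 800, // cut transitions instantly
  };
}
```

In the browser the flag would come from `matchMedia("(prefers-reduced-motion: reduce)")`, with a change listener so the settings update mid-session.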

&lt;p&gt;I used AI to research and implement accessibility standards, then tested the final result and orchestrated changes to prevent repo chaos.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shipping It 🚢
&lt;/h3&gt;

&lt;p&gt;Underneath the story is a production-grade engineering process. Since I already have a pretty solid workflow with Release Please and Cloud Run, I provided the examples to AI and had the full CI/CD pipeline configured early on. That allowed me to track each shippable feature as a new deployed version for the final round of testing.&lt;/p&gt;

&lt;p&gt;The setup for me was minimal, but it was the last piece of turning this fancy art project into a small scale production build. The final build is ~5,500 lines of code (LOC) backed by ~14,500 LOC of tests across 685 unit tests and 220 E2E tests. Five CI workflows cover linting, automated tests, Lighthouse CI for both mobile and desktop performance, security scanning via Trivy and CodeQL, static analysis through SonarCloud, and release automation.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the Diamond Knows Now 💎
&lt;/h2&gt;

&lt;p&gt;I took an unconventional path to get here, but looking back, I was always going to end up exactly where I am. The circuit traces in every scene of &lt;em&gt;Carbon Trace&lt;/em&gt; didn't appear out of nowhere—they were there from the start, just waiting to be seen. That's my story too. I was made to solve problems, even when nobody around me expected that from a girl growing up in a poor coal town.&lt;/p&gt;

&lt;p&gt;It's not always easy. But I've never been afraid of hard work to get the job done. The end result is a full circuit—built from pressure, time, and a refusal to stay small.&lt;/p&gt;

&lt;p&gt;The fact that I'm a female engineer shouldn't matter. It only matters that I'm a good one.&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__3224358"&gt;
    &lt;a href="/anchildress1" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=150,height=150,fit=cover,gravity=auto,format=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3224358%2F7f675c78-6aa0-466a-a5a7-c3e35440d53a.png" alt="anchildress1 image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/anchildress1"&gt;Ashley Childress&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/anchildress1"&gt;Distributed backend specialist. Perfectly happy playing second fiddle—it means I get to chase fun ideas, dodge meetings, and break things no one told me to touch, all without anyone questioning it. 😇&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;





&lt;h3&gt;
  
  
  🛡️ Pressure-Tested by More Than One Brain
&lt;/h3&gt;

&lt;p&gt;This post was written by me with collaborative editing from Claude, ChatGPT, and Gemini. The code for &lt;em&gt;Carbon Trace&lt;/em&gt; was built using Claude Code, Codex, Antigravity, and Copilot, and it was directed by a human who refused to let any of them off easy. All images were generated with Leonardo.ai under my art direction. All narration is my actual voice. No AI was harmed in the making of this post, but all were argued with repeatedly and extensively.&lt;/p&gt;

</description>
      <category>wecoded</category>
      <category>devchallenge</category>
      <category>frontend</category>
      <category>css</category>
    </item>
    <item>
      <title>I Let AI Write to My Database (With Guardrails)🔬</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Fri, 13 Mar 2026 02:20:52 +0000</pubDate>
      <link>https://forem.com/anchildress1/i-let-ai-write-to-my-database-with-guardrails-473o</link>
      <guid>https://forem.com/anchildress1/i-let-ai-write-to-my-database-with-guardrails-473o</guid>
      <description>&lt;p&gt;My System Notes project started as a DEV Challenge and turned into a three-part systems experiment. Like most of my projects, it didn’t stay small.&lt;/p&gt;

&lt;p&gt;It started as a simple idea: let the system capture engineering decisions as they happen and make them easy to reference later. Mostly as a future-me record of “what was I thinking?” for any given build.&lt;/p&gt;

&lt;p&gt;You can read through my progression of thoughts across these challenge submissions, if you're curious:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e"&gt;My Portfolio Doesn’t Live on the Page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/anchildress1/from-static-portfolio-to-indexed-decisions-46bf"&gt;From Static Portfolio to Indexed Decisions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/anchildress1/conversational-retrieval-when-chat-becomes-navigation-2gij"&gt;Conversational Retrieval: When Chat Becomes Navigation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The portfolio site does more than just display indexed decisions. It serves as my AI playground for pushing systems behind the scenes, just to see what happens. Over the last few weeks, that playground exposed a very boring problem. The exact kind that quietly slows everything down:&lt;/p&gt;

&lt;p&gt;✋ Someone still has to &lt;strong&gt;write artifacts into the system&lt;/strong&gt;—a less than thrilling, highly repetitive job I never actually wanted.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bottleneck I Accidentally Built ⚙️
&lt;/h2&gt;

&lt;p&gt;The thinking process for the System Notes index already looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;idea  
↓  
conversation with AI  
↓  
decision
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of the reasoning happens in that conversation. ChatGPT helps challenge ideas, organize the thinking, and refine the direction. Turning those decisions into indexed artifacts required an extra step, and it got worse after I migrated from JSON to Supabase.&lt;/p&gt;

&lt;p&gt;Originally, I handled it all manually but that got tiresome quickly. So, I let AI identify and summarize decisions that were made at the end of a session. From there I’d copy, paste, edit, and insert the record.&lt;/p&gt;

&lt;p&gt;Later, I gave ChatGPT strict artifact instructions to format the output as a SQL insert. That removed one step and technically worked. In practice, not so much.&lt;/p&gt;

&lt;p&gt;It was far from a perfect system and was often buggy. Even worse—it still required me to context switch, copy the output, paste it into a query, and fix whatever the AI inevitably messed up along the way.&lt;/p&gt;

&lt;p&gt;So before tinkering too much, the second half of my workflow looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;decision
↓
AI generates SQL
↓
copy
↓
paste
↓
I fix SQL
↓
insert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which is not exactly the frictionless system I had in mind…&lt;/p&gt;




&lt;h2&gt;
  
  
  Supascribe: Letting AI Write Data Artifacts 🏗️
&lt;/h2&gt;

&lt;p&gt;Since AI was already doing most of the heavy lifting, I saw no reason not to remove several of those steps with a little upfront structure. So, I wrote Supascribe—a small devtool designed to remove the manual translation layer eating into my build time.&lt;/p&gt;

&lt;p&gt;Supascribe does one unconventional thing: it lets the AI collaborator &lt;strong&gt;write directly to the database&lt;/strong&gt;, with a human-in-the-loop review step.&lt;/p&gt;

&lt;p&gt;Risky? &lt;em&gt;Probably.&lt;/em&gt; Uncontrolled? &lt;em&gt;No.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;The pipeline now looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AI collaboration
↓
artifact proposal
↓
human review
↓
schema check
↓
database insert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ChatGPT drafts the artifact from the conversation history, and after I approve it, the tool writes it to Supabase.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukt80rja6o9gy7rge9oq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukt80rja6o9gy7rge9oq.png" alt="Screenshot Supascribe in ChatGPT pre-approval review" width="800" height="958"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The goal is simple: shorten the distance between &lt;strong&gt;thinking about a decision and capturing it in the system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Right now the tool is intentionally minimal. It does exactly three things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accept structured artifact input from ChatGPT&lt;/li&gt;
&lt;li&gt;Check all required fields with a strict Zod schema&lt;/li&gt;
&lt;li&gt;Write the artifact into the database&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s it—there's no magic yet. Just structured input, a schema check, and a controlled insert. As it turns out, that was enough to remove the SQL-copying circus from my workflow.&lt;/p&gt;
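&lt;p&gt;For a feel of what that gate does, here's a plain-JS stand-in for the required-field portion of the check. The real tool uses a Zod schema, and these field names are examples rather than the actual ones:&lt;/p&gt;

```javascript
// Plain-JS stand-in for the strict schema gate (the real tool uses Zod).
// Field names are illustrative examples, not Supascribe's actual schema.
const REQUIRED_FIELDS = ["title", "summary", "decision", "created_at"];

function validateArtifact(artifact) {
  const missing = REQUIRED_FIELDS.filter(
    (f) => !(f in artifact) || artifact[f] === "" || artifact[f] == null
  );
  // Only an artifact with every required field filled in may be inserted.
  return { ok: missing.length === 0, missing };
}
```

The point is the shape of the pipeline: a proposal either satisfies the schema and proceeds to the insert, or it comes back with a list of what's missing.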




&lt;h2&gt;
  
  
  Where The System Still Slows Me Down 🚧
&lt;/h2&gt;

&lt;p&gt;The biggest problem is that this isn't the foolproof solution I first envisioned: it still relies heavily on my Approve/Deny button to maintain data integrity. AI is allowed to propose artifacts and insert them, but only after I approve, which isn't what I wanted but is absolutely necessary for version one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxddihu0jvitw93r0r5my.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxddihu0jvitw93r0r5my.png" alt="Screenshot Supascribe in ChatGPT approval HITL step" width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The integrity of the index is protected, but the system doesn't eliminate the human bottleneck yet. Right now Supascribe shortens the path between conversation and artifact, but it doesn’t fully automate it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This system accelerates thinking; it does not replace decision authority.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And that’s intentional. Letting AI write at-will into your data layer without strict guardrails is a great way to accidentally invent a brand new genre of data corruption. 😕&lt;/p&gt;




&lt;h2&gt;
  
  
  Teaching AI To Touch Data Safely 🦾
&lt;/h2&gt;

&lt;p&gt;The next phase of this experiment is testing how much autonomy the AI collaborator can safely gain.&lt;/p&gt;

&lt;p&gt;That likely means stronger guardrails in two immediate places:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The backend can enforce stricter validation around artifact structure and write behavior.&lt;/li&gt;
&lt;li&gt;The AI can perform structured validation before proposing artifacts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The next goal is to make the workflow resilient enough for AI to safely participate in &lt;strong&gt;knowledge capture&lt;/strong&gt;, not just idea generation. Right now the system is cautious by design, but I do want to gradually increase its autonomy and see how well data integrity holds over time.&lt;/p&gt;

&lt;p&gt;What started as documentation automation is turning into something bigger: testing how much responsibility an AI collaborator can safely hold.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Question Behind This 🌀
&lt;/h2&gt;

&lt;p&gt;My System Notes site started as a simple portfolio experiment. Supascribe turned it into a systems experiment.&lt;/p&gt;

&lt;p&gt;Now I'm testing how well AI acts as a participant in the &lt;strong&gt;artifact creation layer of a knowledge system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Not just generating text or ideas, but using its own memory and strict guidelines to identify which decisions should become part of the underlying system.&lt;/p&gt;

&lt;p&gt;Admittedly, that’s a much more dangerous layer for AI to operate in. It also sounds like fun to me.&lt;/p&gt;

&lt;p&gt;Most AI tooling stays safely away from the data layer of any system. It's allowed to draft, suggest, summarize, and code. However, Supascribe goes one step further and asks:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;What happens if the AI helps write the system itself?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Yes—I’m aware this could explode in very entertaining ways. That’s kind of the point. 🌀 &lt;/p&gt;

&lt;p&gt;I started this experiment trying to remove friction from documentation. &lt;/p&gt;

&lt;p&gt;What I’m actually testing is whether AI can safely participate in the systems that decide what gets remembered and what gets trusted.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛡️ The System Didn’t Write This Alone
&lt;/h2&gt;

&lt;p&gt;This post was written by me, with ChatGPT acting as a thinking partner while refining structure and clarity. The decisions, experiments, and system design are mine. ChatGPT helped challenge wording and tighten the narrative.&lt;/p&gt;

&lt;p&gt;AI wants you to know that it performed no database writes during the editing of this post. That seemed wise.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>database</category>
      <category>devtools</category>
      <category>discuss</category>
    </item>
    <item>
      <title>I Stopped Reviewing Code: A Backend Dev’s Experiment with Google Gemini</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Wed, 04 Mar 2026 00:02:48 +0000</pubDate>
      <link>https://forem.com/anchildress1/i-stopped-reviewing-code-a-backend-devs-experiment-with-google-gemini-5424</link>
      <guid>https://forem.com/anchildress1/i-stopped-reviewing-code-a-backend-devs-experiment-with-google-gemini-5424</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/mlh-built-with-google-gemini-02-25-26"&gt;Built with Google Gemini: Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 I’ve been officially obsessed with AI for nearly a year now. Not from an ML research angle and not from a purist implementation standpoint. The thrill, for me, is in finding the limits as a user and then leaning on them until something gives. One of my favorite Hunter S. Thompson lines talks about “the tendency to push it as far as you can.” That has been my operating principle this entire year.&lt;/p&gt;

&lt;p&gt;This build started as a portfolio experiment. It turned into something else entirely. This challenge became the cleanest environment I’ve found to test what actually happens when you step out of the implementation loop and let the model build the world without you.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  What I Built with Google Gemini
&lt;/h2&gt;

&lt;p&gt;When I saw the New Year, New You Portfolio Challenge, I knew it required a UI. That wasn’t a surprise. What &lt;em&gt;was&lt;/em&gt; a surprise was how quickly I would realize I didn’t understand what I was looking at once it started coming together.&lt;/p&gt;

&lt;p&gt;I’m a backend developer. You hand me a distributed systems problem and I’ll happily spend hours untangling it. You ask me to make a &lt;code&gt;div&lt;/code&gt; visible in a browser and my brain actively searches for the exit. With only one weekend to build, there was no room for the "eyes-glazing-over" phase. Google Gemini would implement and I would supervise—that was my whole plan.&lt;/p&gt;

&lt;p&gt;I walked in expecting Antigravity, powered primarily by Gemini Pro, to behave like every other AI system I’d tested—predictable and fairly easy to keep inside the guardrails. I thought I already knew what those guardrails looked like: strict types, linting, and the familiar routine of code review. &lt;/p&gt;

&lt;h3&gt;
  
  
  The Pivot: Dropping the Code Review Ritual
&lt;/h3&gt;

&lt;p&gt;Initially, I followed the "responsible" pattern: prompt, review the diff, run tests, approve. It felt disciplined. It looked professional.&lt;/p&gt;

&lt;p&gt;Very quickly, I realized I had no meaningful context for what I was reviewing in a frontend stack. I wasn't improving the output; I was participating in ceremony. So, I stopped reviewing code altogether.&lt;/p&gt;

&lt;p&gt;Instead of validating lines of code, &lt;strong&gt;I validated outcomes&lt;/strong&gt;. If the UI rendered correctly and passed functional tests, that was success. I cranked up the autonomy, taught Antigravity my repository expectations, and let it run. Copilot reviewed the code in my place, and Gemini responded in a closed loop. I stepped out of the implementation and into the role of a systems auditor.&lt;/p&gt;




&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;This portfolio iteration documents what happens when you turn an agent loose inside a defined system.&lt;/p&gt;

&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://system-notes-ui-288489184837.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;




&lt;p&gt;For this build, the Antigravity panel was the primary interface. I defined the repo rules and testing expectations there, and Gemini implemented directly within that structure. It became the control surface for the entire loop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qjpmeg7cxyul1miyyig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qjpmeg7cxyul1miyyig.png" alt="Screenshot Antigravity Agent Manager"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;V1 Release:&lt;/strong&gt; &lt;a href="https://github.com/anchildress1/system-notes/tree/v1.1.0" rel="noopener noreferrer"&gt;Preserved version v1.1.0&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Live Portfolio:&lt;/strong&gt; &lt;a href="https://anchildress1.dev" rel="noopener noreferrer"&gt;https://anchildress1.dev&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Replacing Trust With Systems
&lt;/h3&gt;

&lt;p&gt;I didn’t simply remove oversight; I replaced it with Lighthouse audits and expanded test coverage. My assumption was simple: if the browser behaves and the tests pass, the code is "safe." I believed I had replaced trust in code with trust in systems. I was wrong—I had confused passing tests with structural integrity.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  High Reasoning Isn’t Optional
&lt;/h3&gt;

&lt;p&gt;I learned that for autonomous development, reasoning depth is a stability requirement. With lower reasoning modes (like Flash), changes were often partial—updating 2/3 of the files but "forgetting" the tests or documentation. &lt;/p&gt;

&lt;p&gt;Switching to High Reasoning mode in Gemini Pro changed the pattern. Runtime errors dropped, and cross-file consistency improved. It finally started "remembering" to keep the docs aligned with the code changes without constant nudging.&lt;/p&gt;

&lt;p&gt;Reasoning depth wasn’t about intelligence—it was about reliability under autonomy. Gemini’s deeper reasoning and context retention made the closed-loop workflow viable; without it, cross-file consistency collapsed quickly under autonomy.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Reality Check: Sonar
&lt;/h3&gt;

&lt;p&gt;After the high of the successful build wore off, I introduced Sonar as a retrospective audit. The UI rendered correctly. The tests passed. Everything appeared stable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sonar reported 13 reliability issues and assigned the project a C reliability rating.&lt;/strong&gt; Of those issues, 66% were classified as high severity. Security review surfaced three hotspots, including a container running the default Python image as root and dependency references that did not pin full commit SHAs.&lt;/p&gt;

&lt;p&gt;Maintainability scored an A, but still carried 70 maintainability issues—structural patterns that didn’t break behavior, yet increased long-term complexity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvr7t86vvt317r9bg561.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvr7t86vvt317r9bg561.png" alt="Screenshot 81 Sonar failures"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That was the moment confidence turned into scrutiny.&lt;/p&gt;

&lt;p&gt;The application worked. The tests passed. But reliability, security posture, and structural integrity told a different story. The tests validated behavior; Sonar validated assumptions. And those are not the same thing.&lt;/p&gt;

&lt;p&gt;The lesson? &lt;strong&gt;AI-generated tests can pass because they were written to satisfy the implementation, not challenge it.&lt;/strong&gt; Structural validation requires an independent layer of review outside the generation loop.&lt;/p&gt;

&lt;h2&gt;
  
  
  Google Gemini Feedback
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What Worked Well
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cohesive Implementation:&lt;/strong&gt; High reasoning Gemini Pro produced cross-file changes that respected the intent of the repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agentic Orchestration:&lt;/strong&gt; The model switching was seamless, and the orchestration interface made it possible to define expectations clearly and enforce them consistently.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Where Friction Appeared
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cooldown Transparency:&lt;/strong&gt; While the interface shows when current credits refresh, the length of the next cooldown remains a black box.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool Performance:&lt;/strong&gt; MCP responsiveness materially impacted iteration speed, sometimes forcing me to batch requests rather than work in small, rapid increments.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Pro Tip:&lt;/strong&gt; It would be a massive UX win to see exactly how long your &lt;em&gt;next&lt;/em&gt; cooldown will be (e.g., "Your next cooldown will be X hours long") directly on the models page. Knowing if the lockout is 1 hour or 96 hours is vital for developer planning.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  The Final Verdict: Autonomy Still Demands an Audit
&lt;/h3&gt;

&lt;p&gt;The lesson wasn’t that Gemini failed; it was that systems-level trust requires more than passing tests. In future builds, autonomy won’t ship without an explicit adversarial audit. Whether that means a mandatory Sonar gate, a red-team prompt pass, or a second high-reasoning model instructed to hunt for the first model’s shortcuts—the loop must be challenged.&lt;/p&gt;

&lt;p&gt;This project began as a weekend experiment to escape the “teleportation” haze of frontend development. It ended as an exploration of the razor-thin edge of system-level trust. The real build wasn’t the portfolio—it was discovering what happens when you lean on the limits of AI until they finally give.&lt;/p&gt;

&lt;p&gt;Removing myself from the implementation loop didn’t eliminate responsibility; it redefined it. The more freedom you give an agent, the more rigor you must give your audit.&lt;/p&gt;

&lt;h4&gt;
  
  
  🛡️ The Tools Behind The Curtain
&lt;/h4&gt;

&lt;p&gt;This post was brewed by me—with a shot of Google Gemini and a splash of ChatGPT. If you catch a bias or a goof, call it out. AI isn’t perfect, and neither am I.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>geminireflections</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Find the DEV Post That Needs You Now 🫶</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Sun, 01 Mar 2026 12:17:00 +0000</pubDate>
      <link>https://forem.com/anchildress1/find-the-dev-post-that-needs-you-now-33ng</link>
      <guid>https://forem.com/anchildress1/find-the-dev-post-that-needs-you-now-33ng</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/weekend-2026-02-28"&gt;DEV Weekend Challenge: Community&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Community
&lt;/h2&gt;

&lt;p&gt;DEV feels like home: learning, lively discussions, and new connections. It’s a place where people genuinely want to help each other, but fast-moving feeds make it hard to see where a reply would matter most. This tool is meant for members who want to help and just need direction.&lt;/p&gt;

&lt;p&gt;Someone is always willing to help; the harder part is knowing where help is actually needed. When posts are easy to miss, willingness doesn’t always translate into action.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where This Started
&lt;/h3&gt;

&lt;p&gt;I’ve been a DEV member for less than a year, but I’m more active here than anywhere else online. I’m willing to volunteer where I can, but knowing how and where to help is difficult without direction. I built a system to provide a consistent, openly scored view of where input may be needed most. The goal is fewer “how can I help?” moments and more meaningful responses.&lt;/p&gt;

&lt;p&gt;A few weeks ago I noticed a post in &lt;code&gt;#mentalhealth&lt;/code&gt; where someone had reached out and nobody had answered. I care deeply about this topic, and the post had been written days earlier. I responded immediately, but I wish I had seen it sooner. Sometimes simply being heard makes a real difference. Some posts deserve a timely human reply but can be buried by feed dynamics.&lt;/p&gt;

&lt;p&gt;What really bothered me is that if I saw this once, there are likely many others like it. The primary feed favors recent and high-performing posts, which means others can slip through the cracks. So I built a visibility dashboard for anyone who wants to help posts get attention when they need it. It uses a simple scoring structure with one goal: show humans where their input may matter right now.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;Rather than sorting only by recency or popularity, DEV Community Dashboard prioritizes conversations showing meaningful signal but limited engagement—helping community members decide where their contribution can have the greatest impact.  &lt;/p&gt;

&lt;p&gt;Behind the scenes, AI augments lightweight heuristics with bounded semantic analysis. Instead of matching tags or phrases alone, the system evaluates conversational context to estimate where attention may be useful. All classifications rely solely on publicly available DEV data: only published posts, never private content. Nothing about the original content is changed; each item links back to the canonical article so the conversation stays on DEV.&lt;/p&gt;




&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Typical feeds prioritize recency or engagement. That works for discovery, but useful posts can still be missed. New members may be asking their first question, or someone may have a time-sensitive problem. When those go unanswered, the community never gets the chance to respond.&lt;/p&gt;

&lt;p&gt;I built a public dashboard to surface posts that need attention so others can receive the same support I experienced when I started blogging here. The site is online at &lt;a href="https://dev-signal.checkmarkdevtools.dev" rel="noopener noreferrer"&gt;https://dev-signal.checkmarkdevtools.dev&lt;/a&gt; and free to use. Every post follows the same calculations to keep behavior predictable, while humans remain the deciding factor.&lt;/p&gt;

&lt;p&gt;Workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the page&lt;/li&gt;
&lt;li&gt;Pick a surfaced post&lt;/li&gt;
&lt;li&gt;Reply on DEV&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It reprioritizes the public feed using signal quality and engagement metrics, highlighting posts with strong signal but low interaction. Updates run hourly for posts published between 2 hours and 5 days ago. This window balances visibility (not too new) with relevance (not stale). Each item links directly back to the canonical DEV article in a new window.&lt;/p&gt;
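&lt;p&gt;&lt;em&gt;The 2-hour/5-day window reduces to a pure predicate the hourly job can apply to each post. A minimal sketch; the function and field names are illustrative, not the dashboard's actual code.&lt;/em&gt;&lt;/p&gt;

```typescript
// Window bounds from the post: not too new (2 hours), not stale (5 days).
const MIN_AGE_MS = 2 * 60 * 60 * 1000;
const MAX_AGE_MS = 5 * 24 * 60 * 60 * 1000;

// Illustrative predicate: is this post old enough to have had a fair
// shot at organic visibility, but recent enough to still be relevant?
function inScoringWindow(publishedAt: Date, now: Date): boolean {
  const ageMs = now.getTime() - publishedAt.getTime();
  return ageMs >= MIN_AGE_MS && ageMs <= MAX_AGE_MS;
}
```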

&lt;p&gt;&lt;em&gt;If the embed doesn’t load, use the direct link above.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://dev-community-dashboard-595137784250.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  What This Is
&lt;/h3&gt;

&lt;p&gt;The dashboard highlights situations such as first-time posters without replies or requests for help that have not received responses. Community members open the page, select a post, and respond directly on DEV.&lt;/p&gt;

&lt;p&gt;Its role is simple: route attention.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fni2hq7yusulk529dwt0o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fni2hq7yusulk529dwt0o.png" alt="Screenshot DEV Community Dashboard primary post list"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are four primary triage categories:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Needs Support&lt;/td&gt;
&lt;td&gt;Language suggests burnout, emotional strain, or direct help-seeking; may benefit from a thoughtful reply&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Awaiting Collaboration&lt;/td&gt;
&lt;td&gt;No meaningful replies yet; a person should engage directly&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Silent Signal&lt;/td&gt;
&lt;td&gt;Minimal engagement activity despite visibility&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Trending Signal&lt;/td&gt;
&lt;td&gt;Valuable content with limited reach; worth amplifying&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Secondary states (such as rapid activity spikes or anomalous metrics) act as informational flags rather than routing drivers.&lt;/p&gt;
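&lt;p&gt;&lt;em&gt;One way to read the table is as an ordered routing check. The category names below come from the post; the signal fields and thresholds are toy assumptions for illustration—the real scoring uses bounded semantic analysis, not these rules.&lt;/em&gt;&lt;/p&gt;

```typescript
// Category names come from the post; signal fields and thresholds
// here are illustrative assumptions, not the dashboard's real scoring.
type Category =
  | "Needs Support"
  | "Awaiting Collaboration"
  | "Silent Signal"
  | "Trending Signal";

interface PostSignals {
  helpSeeking: boolean; // semantic flag: burnout, strain, direct asks
  replyCount: number;
  views: number;
  reactions: number;
}

// First matching category wins; secondary states stay informational.
function routeAttention(p: PostSignals): Category | null {
  if (p.helpSeeking) return "Needs Support";
  if (p.replyCount === 0) return "Awaiting Collaboration";
  if (p.views >= 500 && p.reactions < 5) return "Silent Signal";
  if (p.reactions >= 10 && p.views < 500) return "Trending Signal";
  return null; // no routing needed
}
```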

&lt;h3&gt;
  
  
  Design Principles
&lt;/h3&gt;

&lt;p&gt;I spent time ensuring this did not become a moderation or quality ranking system. The goal is visibility at the right moment with transparent categorization.&lt;/p&gt;

&lt;p&gt;Every post exposes the metrics used to classify it. Hidden scoring breaks trust, so values appear numerically and visually with hover descriptions explaining each metric in plain language.&lt;/p&gt;

&lt;p&gt;I included a feedback loop to the &lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; where discussions and improvements can happen. The tool belongs to the DEV community as much as it belongs to me.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqag0qmpnoqyoapdgdzct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqag0qmpnoqyoapdgdzct.png" alt="Screenshot DEV Community Dashboard post details"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;The project focuses on one task: surface posts that likely need a human reply. The repository shows how public DEV posts are collected, how engagement signals are calculated, and how the list is updated on a schedule.&lt;/p&gt;

&lt;p&gt;The repo includes docs, diagrams, tests, and security scans to keep behavior predictable.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/ChecKMarKDevTools" rel="noopener noreferrer"&gt;
        ChecKMarKDevTools
      &lt;/a&gt; / &lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard" rel="noopener noreferrer"&gt;
        dev-community-dashboard
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Community behavior analytics dashboard for DEV.to. Observes activity patterns, engagement dynamics, and moderation signals without judging individual users.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/ChecKMarKDevTools/forem-community-dashboard/main/public/dev-weekend-challenge-banner-community-dashboard.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FChecKMarKDevTools%2Fforem-community-dashboard%2Fmain%2Fpublic%2Fdev-weekend-challenge-banner-community-dashboard.png" alt="DEV Community Dashboard Banner"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Community&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/stargazers" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/230860467f0537ea069b9b0285e1c2410c76264419553b4b1a0f34714cee365f/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f436865634b4d61724b446576546f6f6c732f6465762d636f6d6d756e6974792d64617368626f6172643f7374796c653d666c6174266c6f676f3d676974687562266c6f676f436f6c6f723d7768697465" alt="GitHub Stars"&gt;&lt;/a&gt; &lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/./LICENSE" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/3bb96c1262fe676641aecef2f5a5af79b1960d3673453e9954646b6dde639a17/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c6963656e73652d506f6c79466f726d5f536869656c645f312e302e302d626c75653f7374796c653d666c6174" alt="License"&gt;&lt;/a&gt; &lt;a href="https://dev.to/anchildress1" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/94d72588a043ff3e756b46de8dfc9c0fe443b0500216877ba0c3ac7e75da6795/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4445562e746f2d616e6368696c6472657373312d3041304130413f7374796c653d666c6174266c6f676f3d646576646f74746f266c6f676f436f6c6f723d7768697465" alt="DEV.to"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Pipeline&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/actions/workflows/ci.yml" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/fded8bee2c78c942db1639ee1606ae4862542608b72cd3b92e9e30f477dd079a/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f616374696f6e732f776f726b666c6f772f7374617475732f436865634b4d61724b446576546f6f6c732f6465762d636f6d6d756e6974792d64617368626f6172642f63692e796d6c3f6272616e63683d6d61696e267374796c653d666c6174266c6f676f3d676974687562616374696f6e73266c6f676f436f6c6f723d7768697465266c6162656c3d4349" alt="CI Build &amp;amp; Test"&gt;&lt;/a&gt; &lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/actions/workflows/cron.yml" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/f65bd93b96fa09c972d77079b9369af8be1133036cb1e43afe41bab750f1a326/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f616374696f6e732f776f726b666c6f772f7374617475732f436865634b4d61724b446576546f6f6c732f6465762d636f6d6d756e6974792d64617368626f6172642f63726f6e2e796d6c3f6272616e63683d6d61696e267374796c653d666c6174266c6f676f3d676974687562616374696f6e73266c6f676f436f6c6f723d7768697465266c6162656c3d43726f6e25323053796e63" alt="DEV Post Sync"&gt;&lt;/a&gt; &lt;a href="https://sonarcloud.io/summary/overall?id=ChecKMarKDevTools_forem-community-dashboard" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/0f0fae45dde50a1fd16379716b7314a794cc8ca04ad533b761a8a9454d45049e/68747470733a2f2f736f6e6172636c6f75642e696f2f6170692f70726f6a6563745f6261646765732f6d6561737572653f70726f6a6563743d436865634b4d61724b446576546f6f6c735f666f72656d2d636f6d6d756e6974792d64617368626f617264266d65747269633d616c6572745f737461747573" alt="Quality Gate"&gt;&lt;/a&gt; &lt;a href="https://sonarcloud.io/summary/overall?id=ChecKMarKDevTools_forem-community-dashboard" rel="nofollow noopener noreferrer"&gt;&lt;img 
src="https://camo.githubusercontent.com/bdb2ad8bcace8f0e08c9a85418f9a3a6cbb47ce5ac9cd946429e242d0d1b6269/68747470733a2f2f736f6e6172636c6f75642e696f2f6170692f70726f6a6563745f6261646765732f6d6561737572653f70726f6a6563743d436865634b4d61724b446576546f6f6c735f666f72656d2d636f6d6d756e6974792d64617368626f617264266d65747269633d636f766572616765" alt="SonarCloud Coverage"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Scans&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;a href="https://trufflesecurity.com/trufflehog" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/8f58a013ffb0ab24fe52abb34ddd57636e31e8d60bf251e6e2d1273d243d250b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f54727566666c65486f672d5365637265745f5363616e2d3030303030303f7374796c653d666c6174266c6f676f3d74727566666c65686f67266c6f676f436f6c6f723d7768697465" alt="TruffleHog"&gt;&lt;/a&gt; &lt;a href="https://semgrep.dev" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/bc962a14646b5ac37c13848dadc179809dbb97e491209ee6624cb068cc5d2b55/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f53656d677265702d534153542d3442313141383f7374796c653d666c6174266c6f676f3d73656d67726570266c6f676f436f6c6f723d7768697465" alt="Semgrep"&gt;&lt;/a&gt; &lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/security/code-scanning" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/d5fad76ccbbc6b7cdc0d4cc500d5a63d0f4b9f8cd99d34f044305076dd258c1f/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f6465514c2d53656375726974795f416e616c797369732d3232323232323f7374796c653d666c6174266c6f676f3d676974687562266c6f676f436f6c6f723d7768697465" alt="CodeQL"&gt;&lt;/a&gt; &lt;a href="https://github.com/hadolint/hadolint" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/3630a1ab31ca8b112f8ee4b50d20ac109eaa4a9271d2249c5d1c20354e660c3a/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4861646f6c696e742d446f636b657266696c655f4c696e742d3234393645443f7374796c653d666c6174266c6f676f3d646f636b6572266c6f676f436f6c6f723d7768697465" alt="Hadolint"&gt;&lt;/a&gt; &lt;a href="https://github.com/rhysd/actionlint" rel="noopener noreferrer"&gt;&lt;img 
src="https://camo.githubusercontent.com/813fb53db449a5cbf841650a44fb05e79ba3bf380f41187d75bf3d84f069d200/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f616374696f6e6c696e742d4748415f4c696e742d3230383846463f7374796c653d666c6174266c6f676f3d676974687562616374696f6e73266c6f676f436f6c6f723d7768697465" alt="actionlint"&gt;&lt;/a&gt; &lt;a href="https://stylelint.io" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/b322e273a0b86ec3ef2faf0e2bb6fe579a43503572773b21c6c5d0679acb1306/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f5374796c656c696e742d4353535f4c696e742d3236333233383f7374796c653d666c6174266c6f676f3d7374796c656c696e74266c6f676f436f6c6f723d7768697465" alt="Stylelint"&gt;&lt;/a&gt; &lt;a href="https://developer.chrome.com/docs/lighthouse" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/0e38937fe85eae79138fcf8653b5ae7c11f45b61de498da75b5ff9239670fcd6/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c69676874686f7573652d413131795f3130302532352d4634344232313f7374796c653d666c6174266c6f676f3d6c69676874686f757365266c6f676f436f6c6f723d7768697465" alt="Lighthouse"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Stack&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;a href="https://nextjs.org" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/99ddbfdf3dee36e56fa095c938cf23fd2a3d2f12213748db516e4f0227c9ba85/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4e6578742e6a732d31362d3030303030303f7374796c653d666c6174266c6f676f3d6e657874646f746a73266c6f676f436f6c6f723d7768697465" alt="Next.js"&gt;&lt;/a&gt; &lt;a href="https://www.typescriptlang.org" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/a5cf3c2250320471051b545fa47c78b6e24b13ad342d833df970091e5d939a7d/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f547970655363726970742d352d3331373843363f7374796c653d666c6174266c6f676f3d74797065736372697074266c6f676f436f6c6f723d7768697465" alt="TypeScript"&gt;&lt;/a&gt; &lt;a href="https://tailwindcss.com" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/87c52dedb67dbf6f901f9ccaf2a5a584f2027392ce7477950261526886f5377f/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f5461696c77696e645f4353532d342d3036423644343f7374796c653d666c6174266c6f676f3d7461696c77696e64637373266c6f676f436f6c6f723d7768697465" alt="Tailwind CSS"&gt;&lt;/a&gt; &lt;a href="https://supabase.com" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/67b08b8ef0a38893ab73c928f449f482939a2495d9a089a544e7e2bf770c7d85/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f53757061626173652d506f737467726553514c2d3346434638453f7374796c653d666c6174266c6f676f3d7375706162617365266c6f676f436f6c6f723d7768697465" alt="Supabase"&gt;&lt;/a&gt; &lt;a href="https://vitest.dev" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/eae53c8a7d8c7526bce98307645a3af432b4addd19afde8737a6032c573ec53a/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f5669746573742d54657374696e672d3645394631383f7374796c653d666c6174266c6f676f3d766974657374266c6f676f436f6c6f723d7768697465" 
alt="Vitest"&gt;&lt;/a&gt; &lt;a href="https://pnpm.io" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/f249db7328d70965ca46ae9dc2a950a8652c1d5b5c4272813cfb0ab4fa03e303/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f706e706d2d5061636b6167655f4d616e616765722d4636393232303f7374796c653d666c6174266c6f676f3d706e706d266c6f676f436f6c6f723d7768697465" alt="pnpm"&gt;&lt;/a&gt; &lt;a href="https://www.docker.com" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/30acce87865ee23bfd8dc74f84a04d0c3759e91808c46774684e0e52360d6fb1/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f446f636b65722d436f6e7461696e65722d3234393645443f7374796c653d666c6174266c6f676f3d646f636b6572266c6f676f436f6c6f723d7768697465" alt="Docker"&gt;&lt;/a&gt; &lt;a href="https://cloud.google.com/run" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/e2e762273fb4881eebfa0e40d3898473a2d3af2afc9cb67bf3b84c063b9f6495/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436c6f75645f52756e2d4465706c6f796d656e742d3432383546343f7374796c653d666c6174266c6f676f3d676f6f676c65636c6f7564266c6f676f436f6c6f723d7768697465" alt="Google Cloud Run"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Code Quality&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;a href="https://eslint.org" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/4e0ba00de15444ff75c74fa9cee197115b6e608afe24a1b2ee6232404c92fdc0/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f45534c696e742d4c696e74696e672d3442333243333f7374796c653d666c6174266c6f676f3d65736c696e74266c6f676f436f6c6f723d7768697465" alt="ESLint"&gt;&lt;/a&gt; &lt;a href="https://prettier.io" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/c4d27ffe3450d5e14f4431272f475531dda19a8d3b731afc84f620d8f41d4ade/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f50726574746965722d466f726d617474696e672d4637423933453f7374796c653d666c6174266c6f676f3d7072657474696572266c6f676f436f6c6f723d626c61636b" alt="Prettier"&gt;&lt;/a&gt; &lt;a href="https://www.conventionalcommits.org" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/e1d5b3e0f9b0dc7b458e4656601e866b45c68a1f71292aa4b1a6697bde366ae4/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f6e76656e74696f6e616c5f436f6d6d6974732d312e302e302d4645353139363f7374796c653d666c6174266c6f676f3d636f6e76656e74696f6e616c636f6d6d697473266c6f676f436f6c6f723d7768697465" alt="Conventional Commits"&gt;&lt;/a&gt; &lt;a href="https://github.com/evilmartians/lefthook" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/200ff5464d756b61bdba247594d0ad9e54aa88a3665a6bc919938ba925e2bd6f/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c656674686f6f6b2d4769745f486f6f6b732d4646314531453f7374796c653d666c6174266c6f676f3d676974266c6f676f436f6c6f723d7768697465" alt="Lefthook"&gt;&lt;/a&gt; &lt;a href="https://www.gnu.org/software/make/" rel="nofollow noopener noreferrer"&gt;&lt;img 
src="https://camo.githubusercontent.com/f8dfdbfa1f6a57385f4c83e6c08ee134bbb5329080b683b2fff8cd65bb4f8365/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4d616b6566696c652d4275696c642d3432373831393f7374796c653d666c6174266c6f676f3d676e75266c6f676f436f6c6f723d7768697465" alt="Makefile"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;AI&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;a href="https://claude.ai" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/0c3e615d27cc4534f21eec9c1aa4353ab8b7957a01bd3eae7941dcf808a9f7c6/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436c617564652d416e7468726f7069632d4439373735373f7374796c653d666c6174266c6f676f3d616e7468726f706963266c6f676f436f6c6f723d7768697465" alt="Claude"&gt;&lt;/a&gt; &lt;a href="https://chat.openai.com" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/5272e2b639c2d024092c8ea19780666ff67263a9c98f82b87d8ea9c14edfdce1/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436861744750542d4f70656e41492d3734414139433f7374796c653d666c6174266c6f676f3d6f70656e6169266c6f676f436f6c6f723d7768697465" alt="ChatGPT"&gt;&lt;/a&gt; &lt;a href="https://platform.openai.com/docs/models" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/0ac5c5986015bd7ecd91abf33c09f1ab8adbf36c0ba9eb4346249f1b2442bd4e/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6770742d2d352d2d6e616e6f2d496e746572616374696f6e5f5369676e616c2d3734414139433f7374796c653d666c6174266c6f676f3d6f70656e6169266c6f676f436f6c6f723d7768697465" alt="gpt-5-nano"&gt;&lt;/a&gt; &lt;a href="https://deepmind.google/models/gemini/" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/59b8405eebcdd227c44b0ecbd596028dbab1df21b8bf2c93ab6068d1a2995259/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f47656d696e692d416e7469677261766974792d3845373542323f7374796c653d666c6174266c6f676f3d676f6f676c6567656d696e69266c6f676f436f6c6f723d7768697465" alt="Google Gemini"&gt;&lt;/a&gt; &lt;a href="https://leonardo.ai" rel="nofollow noopener noreferrer"&gt;&lt;img 
src="https://camo.githubusercontent.com/8736bc76451f0168f4a69bc8ca477c708fdf263eb4b63987d716f6d3b7ac6679/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c656f6e6172646f2e61692d536565647265616d5f342e352d3743334145443f7374796c653d666c6174266c6f676f3d646174613a696d6167652f7376672b786d6c3b6261736536342c50484e325a79423462577875637a30696148523063446f764c336433647935334d793576636d63764d6a41774d43397a646d636949485a705a58644362336739496a41674d4341794e4341794e4349675a6d6c73624430696432687064475569506a786a61584a6a6247556759336739496a45794969426a655430694d54496949484939496a45774969382b5043397a646d632b266c6f676f436f6c6f723d7768697465" alt="Leonardo.ai"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Support&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;a href="https://github.com/sponsors/anchildress1" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/0f1a77f879fdc4c4eeddbb2fc1e5639c4cf45313c8052a71f9bbcfde56053d35/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f53706f6e736f722d4769744875625f53706f6e736f72732d4541344141413f7374796c653d666c6174266c6f676f3d67697468756273706f6e736f7273266c6f676f436f6c6f723d7768697465" alt="Sponsor"&gt;&lt;/a&gt; &lt;a href="https://buymeacoffee.com/anchildress1" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/06f37a250a176c97608e815e3a19b8fd41ceae4aa31e65984dd9eee054ea53f8/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4275795f4d655f615f436f666665652d537570706f72742d4646444430303f7374796c653d666c6174266c6f676f3d6275796d6561636f66666565266c6f676f436f6c6f723d626c61636b" alt="Buy Me a Coffee"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;DEV Community Dashboard&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;A signal-surfacing tool for &lt;a href="https://forem.com/" rel="nofollow noopener noreferrer"&gt;Forem&lt;/a&gt; communities (dev.to and self-hosted instances). It ingests the latest posts via the public Forem API, classifies each one into attention categories (Awaiting Collaboration, Anomalous Signal, Trending Signal, Rapid Discussion, Steady Signal), and persists the results in Supabase so community helpers can see where conversations need a human eye.&lt;/p&gt;
&lt;p&gt;This is &lt;strong&gt;not&lt;/strong&gt; a moderation tool or a scorecard. It is designed to help helpers know where to look.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Production:&lt;/strong&gt; &lt;a href="https://dev-signal.checkmarkdevtools.dev" rel="nofollow noopener noreferrer"&gt;https://dev-signal.checkmarkdevtools.dev&lt;/a&gt; &lt;em&gt;(Cloud Run -- deployed post-initial-release)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;v1.1.0 adds LLM interaction scoring, NEEDS_SUPPORT detection, and incremental caching and was created for the &lt;a href="https://dev.to/devteam/happening-now-dev-weekend-challenge-submissions-due-march-2-at-759am-utc-5fg8" rel="nofollow"&gt;DEV Weekend Challenge&lt;/a&gt;.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Documentation&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Document&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/./docs/architecture.md" rel="noopener noreferrer"&gt;Architecture&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;System overview, data flow diagrams, deployment, API routes, guardrails&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/./docs/interaction-signal.md" rel="noopener noreferrer"&gt;Interaction Signal&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Composite signal formula, LLM scoring pipeline, model cascade, heuristic fallback, incremental scoring, signal spread, topic tags&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/./docs/metrics.md" rel="noopener noreferrer"&gt;Metrics Reference&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Full &lt;code&gt;ArticleMetrics&lt;/code&gt; field reference, risk components, velocity, and participation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;…&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/ChecKMarKDevTools/dev-community-dashboard" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  How I Built It
&lt;/h3&gt;

&lt;p&gt;The app collects public DEV posts through the API, calculates engagement signals, stores results, and renders a prioritized list. The comment-scoring step returns schema-validated JSON (with typed fields and bounded ranges); invalid outputs fall back to deterministic heuristics, and the scoring pipeline is covered by automated tests to keep classification behavior stable as the system evolves. Updates run hourly and each item links back to the original article.  &lt;/p&gt;

&lt;p&gt;The prioritization model favors lack of interaction over popularity. Posts decay over time so those with the highest potential impact surface first.&lt;/p&gt;
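The decay behavior described above can be sketched in a few lines. This is an illustrative model only: the field names, weights, and half-life are invented for the example and are not the project's actual values.

```typescript
// Hypothetical sketch of decay-based prioritization: low-interaction,
// fresh posts surface first. All names and constants are illustrative.
interface PostSignal {
  id: string;
  comments: number;
  reactions: number;
  ageHours: number;
}

function priority(p: PostSignal, halfLifeHours = 48): number {
  const engagement = p.comments * 2 + p.reactions; // interaction proxy
  const need = 1 / (1 + engagement); // favors lack of interaction
  const decay = Math.pow(0.5, p.ageHours / halfLifeHours); // exponential time decay
  return need * decay;
}

const posts: PostSignal[] = [
  { id: 'fresh-unanswered', comments: 0, reactions: 1, ageHours: 2 },
  { id: 'popular', comments: 12, reactions: 40, ageHours: 2 },
  { id: 'stale-unanswered', comments: 0, reactions: 1, ageHours: 200 },
];
const ranked = [...posts].sort((a, b) => priority(b) - priority(a));
```

Under these toy weights, a fresh unanswered post outranks both a popular post and an equally unanswered but stale one.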

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7zpstiq5dypu1oqliwz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7zpstiq5dypu1oqliwz.png" alt="Screenshot DEV Community Dashboard post analytics center"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Measuring Interaction Signal
&lt;/h3&gt;

&lt;p&gt;Traditional dashboards often rely on keyword sentiment counts, which struggle to distinguish between surface praise and substantive discussion.  &lt;/p&gt;

&lt;p&gt;This system uses a composite interaction signal focused primarily on relevance and depth, with limited weight given to tone. Each comment contributes to a post-level score estimating where a constructive reply could meaningfully shape the conversation.&lt;/p&gt;

&lt;p&gt;The comment scoring model is guided by a structured system prompt that defines how relevance, depth, and constructiveness are evaluated before contributing to the overall interaction signal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TASK: Interaction signal analysis of blog post comments.
INPUT: A blog post body followed by numbered comments.
RULES:
- Extract 1-3 topic keywords from the post body as topic_tags.
- For each comment, assign interaction scores.
- Set needs_support to true if the post body contains signals of emotional distress, mental health struggle, burnout, isolation, or explicit help-seeking.
- Never infer beyond available text. Score only what is present.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
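The validate-or-fall-back step mentioned earlier can be sketched like this. The field names and the 0..1 bounds are assumptions for illustration, not the project's real schema; the point is only that malformed or out-of-range model output never reaches the score.

```typescript
// Illustrative sketch: accept LLM output only when every field is typed
// and bounded, otherwise use a deterministic heuristic fallback.
// Field names and ranges are assumptions, not the project's real schema.
interface CommentScore {
  relevance: number; // assumed 0..1
  depth: number;     // assumed 0..1
  needs_support: boolean;
}

function inRange(n: unknown): n is number {
  return typeof n === 'number' && n >= 0 && n <= 1;
}

function parseScore(raw: string, fallback: CommentScore): CommentScore {
  try {
    const o = JSON.parse(raw);
    if (inRange(o.relevance) && inRange(o.depth) && typeof o.needs_support === 'boolean') {
      return { relevance: o.relevance, depth: o.depth, needs_support: o.needs_support };
    }
  } catch {
    // unparseable output: fall through to the heuristic
  }
  return fallback;
}
```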



&lt;p&gt;The full prompt and a detailed explanation of the calculations live in the &lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/tree/main/docs" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;, along with system diagrams. Each calculation maps to a graph displayed on the post details page.&lt;/p&gt;

&lt;h3&gt;
  
  
  Limitations
&lt;/h3&gt;

&lt;p&gt;This is a signal-based prioritization model, not a full understanding of intent.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nuanced tone, sarcasm, or highly domain-specific language may affect classification accuracy.&lt;/li&gt;
&lt;li&gt;Posts can move between categories quickly as new replies or reactions change the underlying signals.&lt;/li&gt;
&lt;li&gt;The system reflects public engagement patterns only.&lt;/li&gt;
&lt;li&gt;Thresholds are calibrated for general community patterns and may not perfectly fit every tag or topic area.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The dashboard surfaces likelihood, not certainty. Human interpretation completes the picture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Broader Impact
&lt;/h3&gt;

&lt;p&gt;The goal is simple: help DEV members see where their attention can matter most. The dashboard surfaces where engagement is thin, where conversations are drifting, and where a thoughtful reply could shift the tone. Participation remains voluntary; the system only highlights opportunity.&lt;/p&gt;

&lt;p&gt;If it works, fewer posts sit unanswered, engagement becomes more intentional, and contributors have clearer context before jumping in.&lt;/p&gt;

&lt;p&gt;If you have ideas or feedback, share them below. You can also star the &lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard" rel="noopener noreferrer"&gt;checkmarkdevtools/dev-community-dashboard&lt;/a&gt; repository to follow its progress.&lt;/p&gt;

&lt;h4&gt;
  
  
  🛡️ The Editor Who Doesn’t Commit Code
&lt;/h4&gt;

&lt;p&gt;This piece was written by me, with ChatGPT acting as a second set of eyes. It helped tighten wording and keep explanations clear, but every decision, tradeoff, and line of code came from a human brain and a late-night idea that refused to go away.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>weekendchallenge</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Skills Aren’t Magic. They’re Scoped Context. 🧭🗂️</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Wed, 18 Feb 2026 13:44:00 +0000</pubDate>
      <link>https://forem.com/anchildress1/skills-arent-magic-theyre-scoped-context-d07</link>
      <guid>https://forem.com/anchildress1/skills-arent-magic-theyre-scoped-context-d07</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;🦄 Skills don’t magically make your agent smarter. They change when context is loaded.&lt;/p&gt;

&lt;p&gt;I intended to add Copilot skills to the list of topics I’ve written about, but it quickly turned into a behavior discussion instead of a how-to. Honestly, the same patterns behind &lt;a href="https://dev.to/anchildress1/series/33920"&gt;custom agents&lt;/a&gt;, &lt;a href="https://dev.to/anchildress1/series/32574"&gt;reusable prompts&lt;/a&gt;, and &lt;a href="https://dev.to/anchildress1/series/32973"&gt;repo instructions&lt;/a&gt; all apply here. If you really want to understand a skill, then the mechanism matters more than writing the file.&lt;/p&gt;

&lt;p&gt;Most frustration I see comes from expecting improved agent intelligence instead of selectivity. The truly interesting part is knowing when they help… and when they quietly make things worse. 🚦💎&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftq6ihv3t2b6ddljpdia3.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftq6ihv3t2b6ddljpdia3.png%3Fv%3D2026" alt="Human-crafted, AI-edited badge" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What Skills Actually Change 💳
&lt;/h2&gt;

&lt;p&gt;I’ve been rotating between Claude, GPT-5.2, and Gemini long enough to notice a pattern: most friction isn’t about model capability. It’s about context.&lt;/p&gt;

&lt;p&gt;Once you regularly switch agents, you start seeing how much of the difference in behavior comes down to what each system loads, when it loads it, and how aggressively it summarizes what you gave it.&lt;/p&gt;

&lt;p&gt;That’s where skills start to matter.&lt;/p&gt;

&lt;p&gt;Skills reduce context overload by deferring detailed instructions until the moment they’re relevant. When written well, they feel like relief. When written poorly, they introduce overhead: lookup cost, planning cost, and extra reasoning steps before execution begins.&lt;/p&gt;

&lt;p&gt;That overhead accumulates, which is why I’m more interested in when skills &lt;strong&gt;should not exist&lt;/strong&gt; than in how many you can create just because there’s room.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 Across tools, the bigger difference isn’t “which model is smarter.” It’s how each agentic system decides what context deserves attention.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Instructions vs Skills 🧷
&lt;/h2&gt;

&lt;p&gt;Metaphors always land faster than jargon, so here’s the one that stuck for me: "Bob the Builder".&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The agent is the builder.
&lt;/li&gt;
&lt;li&gt;Instructions are the blueprints.
&lt;/li&gt;
&lt;li&gt;Skills are the tools.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Blueprints describe what is always true. If you’re building a house, the structural plan does not change because you switched from wiring to drywall. In a repository, that’s what belongs in &lt;code&gt;.github/copilot-instructions.md&lt;/code&gt;: guidance that is universally applicable and always loaded for every task.&lt;/p&gt;

&lt;p&gt;Skills are conditional. You wouldn’t scatter every possible tool across the floor before starting a task. You grab what’s needed when you need it. Loading everything up front slows you down—and missing the one tool that actually matters often changes the outcome entirely.&lt;/p&gt;

&lt;p&gt;That distinction is even more important now that context bloat is a real constraint. Long instruction files get summarized and those summaries will drift from the original intent. The most important line you were relying on for tone or guardrails is often the first casualty.&lt;/p&gt;

&lt;p&gt;A skill avoids that by staying out of baseline context from the start.&lt;/p&gt;

&lt;p&gt;At runtime, only the skill’s &lt;strong&gt;name&lt;/strong&gt; and &lt;strong&gt;description&lt;/strong&gt; are visible to the agent. It evaluates whether the current task matches that description. If the skill is relevant, then—and only then—it loads the full &lt;code&gt;SKILL.md&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;When a skill isn't activated, the agent didn’t “forget”—it never saw those details in the first place.&lt;/p&gt;
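That activation boundary can be modeled in a few lines. Real agents decide relevance with the model itself, not keyword matching, so treat this purely as a toy illustration of the mechanism: only matching skills contribute their body to context.

```typescript
// Toy model of the activation boundary. Real agents use model-based
// relevance, not substring matching; every name here is illustrative.
interface SkillMeta {
  name: string;
  description: string; // always visible to the agent
  body: string;        // SKILL.md contents, loaded only on activation
}

function buildContext(task: string, skills: SkillMeta[], baseline: string): string {
  const active = skills.filter((s) =>
    task.toLowerCase().includes(s.name.toLowerCase()),
  );
  // Unactivated skills contribute nothing: the agent never sees their body.
  return [baseline, ...active.map((s) => s.body)].join('\n');
}
```

The useful property is the negative case: for an unrelated task, the skill's body simply never enters the working context.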

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;ProTip:&lt;/strong&gt; GitHub’s docs on &lt;a href="https://docs.github.com/en/copilot/concepts/agents/about-agent-skills" rel="noopener noreferrer"&gt;agent skills&lt;/a&gt; and Claude Code’s &lt;a href="https://code.claude.com/docs/en/skills" rel="noopener noreferrer"&gt;skills docs&lt;/a&gt; are worth reviewing if you want the official mechanics behind activation.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Skill Structure and Activation 🪛
&lt;/h2&gt;

&lt;p&gt;A Copilot skill is defined by a &lt;code&gt;SKILL.md&lt;/code&gt; file. For repo-level skills, the structure looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.github/
|-- skills/
|   `-- your-skill-name/
|       `-- SKILL.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The directory tree itself matters less than when the agent activates the skill.&lt;/p&gt;

&lt;p&gt;Only the skill’s metadata is evaluated initially. If the description matches the task, then the agent loads only the &lt;code&gt;SKILL.md&lt;/code&gt; file and treats its contents as procedural guidance.&lt;/p&gt;

&lt;p&gt;If you extend a skill with additional files, they are invisible unless explicitly referenced and deliberately loaded.&lt;/p&gt;

&lt;p&gt;This separation is the entire value proposition of a skill:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Before activation&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;After activation&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Operates on inferred repository patterns&lt;/td&gt;
&lt;td&gt;Executes defined procedural rules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Uses baseline instructions only&lt;/td&gt;
&lt;td&gt;Uses baseline instructions + skill guidance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Optimizes for general applicability&lt;/td&gt;
&lt;td&gt;Optimizes for task-specific behavior&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;ProTip:&lt;/strong&gt; Copilot also checks &lt;code&gt;.agents/skills&lt;/code&gt; and &lt;code&gt;.claude/skills&lt;/code&gt; (globally and per repo). That makes cross-tool skill reuse feasible without duplicating logic unnecessarily.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Anatomy of &lt;code&gt;SKILL.md&lt;/code&gt; 🧬
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;SKILL.md&lt;/code&gt; file defines both activation metadata and execution guidance. The &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;description&lt;/code&gt; are always visible to the agent. The rest of the file becomes active only after invocation.&lt;/p&gt;

&lt;p&gt;Skills can mirror:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a custom agent&lt;/li&gt;
&lt;li&gt;a reusable prompt&lt;/li&gt;
&lt;li&gt;a custom instruction&lt;/li&gt;
&lt;li&gt;or a hybrid of all three&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is a simplified example designed to activate only when editing a &lt;code&gt;CHANGELOG.md&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;changelog-writer&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Rewrite changelog entries with cheeky, narrative flair following project conventions. Use this when asked to rewrite or update CHANGELOG.md entries.&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="gu"&gt;## Execution Workflow&lt;/span&gt;

When invoked to rewrite a changelog entry:
&lt;span class="p"&gt;
1.&lt;/span&gt; &lt;span class="gs"&gt;**Read CHANGELOG.md**&lt;/span&gt; to extract tone and structure
&lt;span class="p"&gt;2.&lt;/span&gt; &lt;span class="gs"&gt;**Identify release type and breaking changes**&lt;/span&gt;
&lt;span class="p"&gt;3.&lt;/span&gt; &lt;span class="gs"&gt;**Select emoji(s)**&lt;/span&gt; appropriate to release theme
&lt;span class="p"&gt;4.&lt;/span&gt; &lt;span class="gs"&gt;**Craft italicized opening quote**&lt;/span&gt;
&lt;span class="p"&gt;5.&lt;/span&gt; &lt;span class="gs"&gt;**Write body content**&lt;/span&gt;
&lt;span class="p"&gt;6.&lt;/span&gt; &lt;span class="gs"&gt;**Validate links, formatting, and breaking-change visibility**&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key observation here isn’t the workflow itself. It’s the activation boundary. Without activation, none of that logic exists in the agent’s working memory.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 The full version lives in my &lt;a href="https://github.com/anchildress1/awesome-github-copilot/blob/main/skills/changelog-writer/SKILL.md" rel="noopener noreferrer"&gt;awesome-github-copilot&lt;/a&gt; repo if you want to inspect it more closely.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  One Sentence Version 🐎
&lt;/h2&gt;

&lt;p&gt;If a behavior must apply consistently, it belongs in repository or global instructions. &lt;/p&gt;

&lt;p&gt;If a behavior is conditional, procedural, or task-specific, it belongs in a skill. &lt;/p&gt;

&lt;p&gt;A skill should feel like a tool you occasionally reach for—not a consistent rule the agent has to rediscover on its own every session. However, once instructions grow large enough, they stop acting like baseline context and start acting like noise. At that point trimming becomes more valuable than adding.&lt;/p&gt;

&lt;p&gt;In case it helps, this is the prompt I use when reducing instruction bloat for newer LLMs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Review #copilot-instructions.md and optimize for AI consumption. Remove information that can be inferred from repository structure or code usage. Eliminate duplication and anything that does not improve clarity or reduce ambiguity. Preserve personality and tone directives. The final file should prioritize agent understanding over human readability.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;ProTip:&lt;/strong&gt; Back up the original first. Agents are confident editors and occasionally &lt;em&gt;confident editors&lt;/em&gt; erase the one line that mattered most.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🛡️ Behind the Curtain
&lt;/h2&gt;

&lt;p&gt;I wrote this post, and ChatGPT helped like a well-defined skill. I made the final calls—it activated when needed and stayed out of the way otherwise.&lt;/p&gt;

</description>
      <category>githubcopilot</category>
      <category>ai</category>
      <category>tutorial</category>
      <category>programming</category>
    </item>
    <item>
      <title>From Static Portfolio to Indexed Decisions 📃</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Mon, 09 Feb 2026 03:00:00 +0000</pubDate>
      <link>https://forem.com/anchildress1/from-static-portfolio-to-indexed-decisions-46bf</link>
      <guid>https://forem.com/anchildress1/from-static-portfolio-to-indexed-decisions-46bf</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/algolia"&gt;Algolia Agent Studio Challenge&lt;/a&gt;: Consumer-Facing Non-Conversational Experiences&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 I instantly knew what to build as soon as I saw this challenge paired with the New Year, New You Google Challenge. Honestly, I’d been meaning to build a portfolio for a long time and never prioritized the work. This challenge finally interested me enough to take that idea and actually run with it.&lt;/p&gt;

&lt;p&gt;Besides, it’s much more satisfying to &lt;em&gt;show&lt;/em&gt; why something works when there’s a story attached. If you want to skip ahead, at least read the first part carefully.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Static portfolios treat decisions as narrative. This project treats them as data.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feebl9ix1qdc87s73p9nt.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feebl9ix1qdc87s73p9nt.png%3Fv%3D2026" alt="Human-crafted, AI edited badge" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;This backend dev wasn’t built for pretty UIs; I was built for systems. So I created a non-conversational portfolio that behaves like a well-oiled machine instead of a static showcase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traditional portfolios require interpretation. This system removes interpretation entirely.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When I first envisioned what my portfolio site would look like, I knew I wanted it to stand apart from the usual LinkedIn résumé echoes. I’m strongly allergic to “normal” on the best days, but novelty alone doesn’t scale. I also knew the site had to be backed by infrastructure strong enough to survive my constant experiments and changing approaches over time.&lt;/p&gt;

&lt;p&gt;Naturally, those ideas converged into a single decision: build a living journal of my projects, struggles, and decisions as they happened.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decisions are first-class records here, not explanatory prose.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Something future-me could query months from now when I’m inevitably asking, “What in the world were you thinking?”&lt;/p&gt;

&lt;p&gt;As soon as I saw this challenge post, I started documenting every challenge, decision, outcome, and constraint. That process began with everything I could reconstruct from my existing GitHub projects.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 Even if I documented every single time I changed my mind, this index structure could absolutely handle it. Don’t worry. I didn’t go &lt;em&gt;that&lt;/em&gt; far.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Index Design
&lt;/h3&gt;

&lt;p&gt;The index is the system. If it fails, nothing else matters.&lt;/p&gt;

&lt;p&gt;Once I had a handle on controlling assistant agents on the UI side, the index became the real work. Designing it, breaking it, and curating it took the most time. Early patterns weren’t great for retrieval performance, but after studying Algolia best-practice guidance, things finally clicked.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The result is a collection of small, atomic records optimized for retrieval.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These power a clean UX through facets and deterministic sorting using both signal strength and record creation time.&lt;/p&gt;
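The deterministic sort described above can be sketched as a two-key comparator: signal strength first, with record creation time breaking ties so newer records win. The field names here are illustrative, not the index's exact attribute names.

```typescript
// Sketch of deterministic ordering: strongest signal first, then
// newest created_at on ties. Field names are illustrative only.
interface CardRecord {
  objectID: string;
  signal: number;     // higher = stronger signal
  created_at: string; // ISO 8601 timestamp
}

function sortRecords(records: CardRecord[]): CardRecord[] {
  return [...records].sort(
    (a, b) =>
      b.signal - a.signal ||
      Date.parse(b.created_at) - Date.parse(a.created_at),
  );
}
```

In Algolia terms, the same effect comes from configuring the index's custom ranking rather than sorting client-side; the comparator just makes the tie-break order explicit.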

&lt;p&gt;Here’s a real example pulled directly from the site:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"objectID"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"card:project:challenge:algolia-agent-studio-2026-02"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Algolia Agent Studio Challenge participation"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"blurb"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"An applied exploration of conversational retrieval."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"fact"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"I participated in the Algolia Agent Studio DEV Challenge during February 2026, focusing on conversational and non-conversational search behavior using indexed content."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"tags.lvl0"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"DEV Challenge"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Approach"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"tags.lvl1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"DEV Challenge &amp;gt; Algolia Agent Studio"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Approach &amp;gt; Experimentation"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"projects"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"System Notes"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"category"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Experience"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"created_at"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-02-08T05:42:00-05:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"signal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Why these fields exist:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;signal&lt;/code&gt; controls relevance pressure under ranking&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;created_at&lt;/code&gt; stabilizes ordering across time&lt;/li&gt;
&lt;li&gt;hierarchical tags enable narrowing without dilution&lt;/li&gt;
&lt;li&gt;constrained categories prevent ambiguous grouping&lt;/li&gt;
&lt;/ul&gt;
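&lt;p&gt;That deterministic ordering can be sketched as a plain comparator. It mirrors a &lt;code&gt;customRanking&lt;/code&gt; of &lt;code&gt;desc(signal)&lt;/code&gt; then &lt;code&gt;desc(created_at)&lt;/code&gt;; the field names come from the record example above, but the function itself is illustrative; the real ranking happens index-side in Algolia.&lt;/p&gt;

```javascript
// Illustrative tie-broken comparator mirroring customRanking:
// desc(signal), then desc(created_at). Not the production sort.
function byRelevance(a, b) {
  if (a.signal !== b.signal) {
    return b.signal - a.signal; // higher signal first
  }
  // ISO-8601 strings with matching offsets compare lexically;
  // newer record wins the tie
  return b.created_at.localeCompare(a.created_at);
}

const records = [
  { objectID: 'older', signal: 5, created_at: '2026-01-01T00:00:00-05:00' },
  { objectID: 'weak', signal: 2, created_at: '2026-03-01T00:00:00-05:00' },
  { objectID: 'newer', signal: 5, created_at: '2026-02-08T05:42:00-05:00' },
];

const ordered = records.slice().sort(byRelevance).map((r) => r.objectID);
console.log(ordered); // [ 'newer', 'older', 'weak' ]
```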

&lt;blockquote&gt;
&lt;p&gt;🦄 In case you were wondering, I didn’t write these by hand. I defined the rules and constraints, handed them to ChatGPT, and manually tracked the generated output in a JSON file stored in the repo at &lt;a href="https://github.com/anchildress1/system-notes/tree/v2.0.0/apps/api/algolia" rel="noopener noreferrer"&gt;System Notes v2.0.0/Algolia&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h3&gt;
  
  
  Ask AI as Search
&lt;/h3&gt;

&lt;p&gt;This project includes both a conversational chat interface and an Ask AI Search experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For this entry, Ask AI is intentionally treated as a pure search surface, not a conversational agent.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Conversational state is optional here; these results reflect only the non-conversational queries executed directly against the Algolia index.&lt;/p&gt;

&lt;p&gt;I evaluated retrieval performance over time—specifically speed, relevance, and consistency—while making iterative improvements to index configuration, ranking rules, and facets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If identical queries did not return identical results, the configuration was not finished.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The system now returns the correct indexed records quickly and predictably, without requiring query reformulation.&lt;/p&gt;
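&lt;p&gt;That repeatability rule is easy to spot-check. Here is a minimal sketch of the check; the arrays stand in for two live Algolia responses, and &lt;code&gt;sameResults&lt;/code&gt; is a hypothetical helper, not code from the repo.&lt;/p&gt;

```javascript
// Hypothetical tuning helper: two runs of the same query should
// yield the same objectIDs in the same order. The arrays below
// are stand-ins for real Algolia responses.
function sameResults(runA, runB) {
  const ids = (hits) => hits.map((h) => h.objectID).join('|');
  return ids(runA) === ids(runB);
}

const first = [{ objectID: 'card:a' }, { objectID: 'card:b' }];
const second = [{ objectID: 'card:a' }, { objectID: 'card:b' }];
const reordered = [{ objectID: 'card:b' }, { objectID: 'card:a' }];

console.log(sameResults(first, second)); // true
console.log(sameResults(first, reordered)); // false: config not finished
```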

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4u9fcujjel58k7fv6wv.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4u9fcujjel58k7fv6wv.png%3Fv%3D2026" alt="Screenshot 256 results in 1ms" width="1150" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyyk1qdlefsx1xv5o8sg.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyyk1qdlefsx1xv5o8sg.png%3Fv%3D2026" alt="Screenshot filter categories, search results" width="2490" height="740"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 A copy of every index configuration file is kept in the repository for reference at &lt;a href="https://github.com/anchildress1/system-notes/tree/v2.0.0/apps/api/algolia/config" rel="noopener noreferrer"&gt;System Notes v2.0.0—apps/api/algolia/config&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Live Demo
&lt;/h2&gt;

&lt;p&gt;The site is deployed at &lt;a href="https://algolia.anchildress1.dev" rel="noopener noreferrer"&gt;https://algolia.anchildress1.dev&lt;/a&gt; to keep it separate from the previous challenge submission.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try searching for “Algolia” or filter by the categories on the left to load relevant results.&lt;/strong&gt;&lt;/p&gt;


&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://system-notes-ui-103463304277.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Current canonical:&lt;/strong&gt; &lt;a href="https://algolia.anchildress1.dev" rel="noopener noreferrer"&gt;https://algolia.anchildress1.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source code:&lt;/strong&gt; &lt;a href="https://github.com/anchildress1/system-notes/tree/v2.0.0" rel="noopener noreferrer"&gt;System Notes v2.0.0&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 If you want a full comparison snapshot, the original site remains live at &lt;a href="https://anchildress1.dev" rel="noopener noreferrer"&gt;https://anchildress1.dev&lt;/a&gt;. The difference between that version and the Algolia-powered build is dramatic.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Algolia Agent Studio in Practice
&lt;/h2&gt;

&lt;p&gt;A well-designed index alone isn’t enough. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retrieval quality is dictated by configuration discipline, not feature count.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I tested most options available in Algolia’s configuration panel while tuning this system. The most impactful changes involved aggressively limiting searchable attributes and tightening facet definitions.&lt;/p&gt;

&lt;p&gt;I also discovered that overly generous synonym expansion negatively affected agent retrieval speed, so those were deliberately scaled back.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2g8au83hpkditti6212k.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2g8au83hpkditti6212k.png%3Fv%3D2026" alt="Screenshot primary indexes in Algolia" width="2342" height="726"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Keeping the Index in Sync
&lt;/h3&gt;

&lt;p&gt;To avoid duplicating content manually, I configured an Algolia crawler to index content from DEV using my AI-optimized mirror site.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This keeps the index authoritative without human intervention.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The crawler is a lightweight JavaScript configuration managed directly from the Algolia dashboard.&lt;/p&gt;
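&lt;p&gt;The general shape of such a crawler configuration looks roughly like this. Everything below is a placeholder sketch following Algolia's documented crawler config format; the index name, URLs, and extraction logic are assumptions, not the actual &lt;code&gt;crawler.js&lt;/code&gt; from the repo.&lt;/p&gt;

```javascript
// Placeholder sketch of an Algolia crawler config. Index name,
// start URLs, and extraction logic are illustrative only.
new Crawler({
  appId: 'YOUR_APP_ID',
  apiKey: 'YOUR_CRAWLER_API_KEY',
  startUrls: ['https://example-mirror.anchildress1.dev/'],
  actions: [
    {
      indexName: 'blog_posts',
      pathsToMatch: ['https://example-mirror.anchildress1.dev/**'],
      // $ is the Cheerio handle the crawler passes for each page
      recordExtractor: ({ url, $ }) => [
        {
          objectID: url.pathname,
          title: $('h1').first().text(),
          excerpt: $('p').first().text(),
        },
      ],
    },
  ],
});
```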

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cluy4uvy288o25ow5mf.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cluy4uvy288o25ow5mf.png%3Fv%3D2026" alt="Screenshot Algolia crawler testing" width="1404" height="1048"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 The crawler configuration file is stored in the repo at &lt;a href="https://github.com/anchildress1/system-notes/tree/v2.0.0/apps/api/algolia/sources/crawler.js" rel="noopener noreferrer"&gt;System Notes v2.0.0—apps/api/algolia/sources/crawler.js&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Tuning with Analytics
&lt;/h2&gt;

&lt;p&gt;An unfortunate API-key mistake prevented me from retaining full historical analytics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Even so, analytics were used to confirm that retrieval behavior stabilized under repeat queries.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbisdn9732a0m5h1trj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbisdn9732a0m5h1trj8.png" alt="Screenshot Algolia search events" width="420" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 For the record, Algolia makes API key recovery painless &lt;em&gt;if&lt;/em&gt; you record the original key. Naturally, I did not.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why Fast, Predictable Retrieval Matters
&lt;/h2&gt;

&lt;p&gt;Before Algolia, users had to rely on me to remember and document every meaningful decision tied to a project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That does not scale. Retrieval does.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, I have a system capable of rapidly retrieving &lt;strong&gt;hundreds of decision-level records&lt;/strong&gt; across active builds.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Original Design&lt;/th&gt;
&lt;th&gt;Paired with Algolia&lt;/th&gt;
&lt;th&gt;Observed Improvement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Project cards showing finished work&lt;/td&gt;
&lt;td&gt;Choice cards indexed as search records&lt;/td&gt;
&lt;td&gt;✅ Enables &lt;strong&gt;decision-level retrieval&lt;/strong&gt; instead of content browsing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Projects shown as static artifacts&lt;/td&gt;
&lt;td&gt;Searchable sequence of constrained decisions&lt;/td&gt;
&lt;td&gt;✅ Demonstrates &lt;strong&gt;retrieval-first system thinking&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Narrative explanations only&lt;/td&gt;
&lt;td&gt;Retrieval-backed records with rationale&lt;/td&gt;
&lt;td&gt;✅ Proves answers are &lt;strong&gt;grounded in indexed data&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Generic portfolio navigation&lt;/td&gt;
&lt;td&gt;Algolia-powered discovery as primary UX&lt;/td&gt;
&lt;td&gt;✅ Makes Algolia &lt;strong&gt;structural to the experience&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;“Chat with AI” as a feature&lt;/td&gt;
&lt;td&gt;AI layered over Algolia retrieval&lt;/td&gt;
&lt;td&gt;✅ Signals &lt;strong&gt;intentional AI restraint&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Silent gaps when data is missing&lt;/td&gt;
&lt;td&gt;Fallback logic surfaced in results&lt;/td&gt;
&lt;td&gt;✅ Shows &lt;strong&gt;real-world constraint handling&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;This system would not exist without Algolia. It isn’t an enhancement. It’s the foundation.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s Next
&lt;/h2&gt;

&lt;p&gt;I ran out of time and this challenge had a hard stop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Given the choice, I optimized retrieval stability over feature breadth.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When time allows, these are next:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wire custom URL routing so search results are directly addressable&lt;/li&gt;
&lt;li&gt;Finalize recommendations driven by real user interaction events&lt;/li&gt;
&lt;li&gt;Introduce a Supabase backing store for indexed records to support long-term growth&lt;/li&gt;
&lt;li&gt;Migrate existing project cards into the new indexed record format&lt;/li&gt;
&lt;li&gt;Continue UI refinement and performance tuning&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 After winners are announced, this site will live solely at &lt;a href="https://anchildress1.dev" rel="noopener noreferrer"&gt;https://anchildress1.dev&lt;/a&gt; as my canonical portfolio.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🛡️ The Credits in the Margins
&lt;/h2&gt;

&lt;p&gt;This piece was written by a human, with ChatGPT used along the way for editing, clarity passes, and structural tightening while drafting. The final shape, technical claims, and decisions are human-made.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>algoliachallenge</category>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>Conversational Retrieval: When Chat Becomes Navigation 💬</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Mon, 09 Feb 2026 03:00:00 +0000</pubDate>
      <link>https://forem.com/anchildress1/conversational-retrieval-when-chat-becomes-navigation-2gij</link>
      <guid>https://forem.com/anchildress1/conversational-retrieval-when-chat-becomes-navigation-2gij</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/algolia"&gt;Algolia Agent Studio Challenge&lt;/a&gt;: Consumer-Facing Conversational Experiences&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 I never truly planned to enter this challenge twice—it just sort of happened. I can tell you exactly &lt;em&gt;why&lt;/em&gt; it happened though.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI stopped being interesting the moment it became expected.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I wasn’t the first person to experiment with AI-driven interfaces, but I’ve been doing it long enough to recalibrate my expectations. Once AI becomes table stakes, the real work shifts. The question is no longer &lt;em&gt;can&lt;/em&gt; you use AI, but &lt;em&gt;how intentionally&lt;/em&gt; you design around it.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://dev.to/anchildress1/from-static-portfolio-to-indexed-decisions-46bf"&gt;non-conversational entry&lt;/a&gt; proved something important: fast, predictable retrieval changes how a system feels. This entry starts from the same foundation and explores what happens when that retrieval layer is surfaced through conversation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feebl9ix1qdc87s73p9nt.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feebl9ix1qdc87s73p9nt.png%3Fv%3D2026" alt="Human-crafted, AI edited badge" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Important Note on Scope
&lt;/h2&gt;

&lt;p&gt;This submission focuses exclusively on the &lt;em&gt;conversational layer&lt;/em&gt; of the system.  &lt;/p&gt;

&lt;p&gt;My &lt;a href="https://dev.to/anchildress1/from-static-portfolio-to-indexed-decisions-46bf"&gt;first submission post&lt;/a&gt; walks through the indexing strategy, retrieval architecture, and backend system design that make this experience possible. That foundation is intentionally treated as a given here so the conversation layer can be evaluated on its own terms.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built: Two Interfaces, One Discipline 🧱
&lt;/h2&gt;

&lt;p&gt;This system presents two distinct ways to enter the same body of knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ask AI exists as a focused retrieval surface.&lt;/strong&gt; It is designed for moments when the user already knows what they’re looking for and wants a clear, direct answer. A question goes in. A grounded response comes back. The interaction resolves cleanly, without conversational momentum.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ruckus 2.0 (the chat agent) becomes a way to navigate through my portfolio&lt;/strong&gt;. Questions don’t necessarily end the interaction. They shape it. Each response helps orient the user, and each follow-up becomes a small decision about where to go next. Instead of resolving immediately, the interface supports exploration without losing direction.&lt;/p&gt;

&lt;p&gt;Both interfaces rely on the same indexed data. Neither invents answers. Neither speculates beyond what is retrievable. What changes is not the intelligence of the system, but the posture it takes toward the user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This separation is intentional.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 Ask AI answers the question that was asked. Chat helps decide which question to ask next.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Ask AI — Focused Retrieval 🔎
&lt;/h3&gt;

&lt;p&gt;As noted above, Ask AI is tuned for moments when the user already knows the target: one bounded question goes in, one grounded answer comes back, and the interaction resolves without momentum.&lt;/p&gt;

&lt;p&gt;This interface is about &lt;strong&gt;precision&lt;/strong&gt;, not exploration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkvtbxtkzjtxe8q9po0y.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkvtbxtkzjtxe8q9po0y.png%3Fv%3D2026" alt="Screenshot of Algolia Ask AI response" width="1466" height="920"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 For this entry, the focus is not on Ask AI as a standalone feature, but on how it supports conversational movement through the system.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Ruckus 2.0 — Conversational Navigation 🧭
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Ruckus is designed for movement.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of resolving immediately, conversation unfolds across turns. Each response narrows context. Each follow-up becomes a directional choice, allowing users to navigate through indexed records and long-form content without upfront configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This interface reduces the cognitive load of deciding how to search.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6t5adtw8yesoydbki3br.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6t5adtw8yesoydbki3br.png" alt="Screenshot Ruckus 2.0 with prompt suggestions" width="800" height="827"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rather than requiring users to understand the shape of the data up front, the system lets that shape reveal itself gradually. The chat layer sits on top of indexed records, long-form blog content, and explicit retrieval rules, allowing users to discover relationships through interaction instead of configuration.&lt;/p&gt;

&lt;p&gt;Conversation here is directional. It does not wander. It does not pretend to know more than it does.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3017dfc8a9vcfajyy9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3017dfc8a9vcfajyy9o.png" alt="Screenshot Ruckus 2.0 answer to previous prompt suggestion" width="800" height="1045"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 This is the point where the system stops feeling like search and starts feeling like motion.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Smart Navigation (Almost) 🚧
&lt;/h3&gt;

&lt;p&gt;Conversational navigation only works if it can be trusted beyond the moment it happens.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conversational paths should survive reloads, not disappear into session state.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To support that, I began wiring event tracking and smart URLs tied to user actions. Algolia’s InstantSearch library makes it straightforward to persist UI state directly into the URL, allowing conversational paths to be shareable, bookmarkable, and resilient.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://algolia.anchildress1.dev/search?category=Work+Style&amp;amp;project=System+Notes&amp;amp;tag0=Discipline&amp;amp;tag0=Mindset&amp;amp;tag1=Discipline+%3E+Engineering&amp;amp;tag1=Mindset+%3E+Systems+Thinking
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
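&lt;p&gt;The URL above can be reconstructed from plain UI state with &lt;code&gt;URLSearchParams&lt;/code&gt;. This is a minimal sketch of the idea, not the InstantSearch routing code itself; InstantSearch's &lt;code&gt;routing&lt;/code&gt; option handles this serialization automatically.&lt;/p&gt;

```javascript
// Rebuilding the shareable search state with URLSearchParams.
// Keys mirror the live URL; the helper itself is illustrative.
function toSearchUrl(base, state) {
  const params = new URLSearchParams();
  params.set('category', state.category);
  params.set('project', state.project);
  for (const t of state.tags0) params.append('tag0', t);
  for (const t of state.tags1) params.append('tag1', t);
  return base + '?' + params.toString();
}

const url = toSearchUrl('https://algolia.anchildress1.dev/search', {
  category: 'Work Style',
  project: 'System Notes',
  tags0: ['Discipline', 'Mindset'],
  tags1: ['Discipline > Engineering', 'Mindset > Systems Thinking'],
});
console.log(url);
```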


&lt;blockquote&gt;
&lt;p&gt;🦄 This work is not fully complete, but the structure is in place. The system can be extended without redesign, which was a deliberate tradeoff given the challenge timeline.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Live Demo 🛝
&lt;/h2&gt;

&lt;p&gt;This project is easiest to understand by using it.&lt;/p&gt;

&lt;p&gt;The demo below shows the conversational layer in action, including how chat responses guide movement through indexed records and long-form content without requiring users to understand the underlying structure.&lt;/p&gt;

&lt;p&gt;Conversation here isn’t about free-form dialogue. It’s about orientation. Each suggested response narrows context. Each follow-up reinforces direction. The system doesn’t try to be impressive. It tries to stay predictable.&lt;/p&gt;

&lt;p&gt;Try prompting either Ask AI or Ruckus 2.0 with "Tell me about this portfolio" in the chat interface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Judges evaluating this entry should focus less on individual answers and more on how context narrows across turns.&lt;/strong&gt;&lt;/p&gt;


&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://system-notes-ui-103463304277.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Current canonical:&lt;/strong&gt; &lt;a href="https://algolia.anchildress1.dev" rel="noopener noreferrer"&gt;https://algolia.anchildress1.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source code:&lt;/strong&gt; &lt;a href="https://github.com/anchildress1/system-notes/tree/v2.0.0" rel="noopener noreferrer"&gt;System Notes v2.0.0&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What matters most in this demo isn’t any single answer. It’s how the system behaves across turns. Questions resolve cleanly when they should. When they don’t, the interface helps users decide where to go next instead of guessing for them.&lt;/p&gt;

&lt;p&gt;Compared to single chat-box approaches that try to handle every intent at once, this system separates fast resolution from exploratory movement, making conversational behavior easier to predict and easier to trust.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 If you want a full comparison snapshot, the original site remains live at &lt;a href="https://anchildress1.dev" rel="noopener noreferrer"&gt;https://anchildress1.dev&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  How I Used Algolia Agent Studio 🧪
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Ruckus 2.0 Iterative Testing
&lt;/h3&gt;

&lt;p&gt;Algolia Agent Studio is used here to support the conversational half of the experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ma3e1sb05vj1cwq46gz.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ma3e1sb05vj1cwq46gz.png%3Fv%3D2026" alt="Screenshot Algolia Agent Studio iterative agent testing" width="2122" height="1426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The agent operates within clear boundaries. It answers only from indexed records and blog content. It generates follow-up prompts only when the system knows those questions are answerable. Its role is not to impress, but to keep movement intentional.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dry wit is allowed. A little sharpness is encouraged. Making fun of me is absolutely permitted.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Guessing is not.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To support this, structured records and long-form blog content are retrieved separately. This avoids flattening narrative context into truncated fields and allows each source to be tuned independently for accuracy, latency, and scope.&lt;/p&gt;
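&lt;p&gt;That split can be sketched as two independent fetch paths whose results are labeled by source before they reach the agent. The stubs below are hypothetical; the real tools call the Algolia index and the blog search separately.&lt;/p&gt;

```javascript
// Sketch of the two-source retrieval split: atomic records and
// long-form posts arrive from different tools, then get labeled
// so the agent knows which grounding each item provides.
function mergeSources(records, posts) {
  const tagged = [];
  for (const r of records) tagged.push({ source: 'index', id: r.objectID });
  for (const p of posts) tagged.push({ source: 'blog', id: p.url });
  return tagged;
}

const merged = mergeSources(
  [{ objectID: 'card:project:challenge:algolia-agent-studio-2026-02' }],
  [{ url: 'https://dev.to/anchildress1/from-static-portfolio-to-indexed-decisions-46bf' }],
);
console.log(merged.length); // 2
```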

&lt;p&gt;Rather than describing the agent abstractly, I made its constraints explicit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## SELF_MODEL&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; Ruckus is a constrained system interface with opinions.
&lt;span class="p"&gt;-&lt;/span&gt; Ruckus is not a person.
&lt;span class="p"&gt;-&lt;/span&gt; Ruckus is not Ashley.
&lt;span class="p"&gt;-&lt;/span&gt; Ruckus did not author the work described.
&lt;span class="p"&gt;-&lt;/span&gt; Ruckus operates exclusively on retrieved context provided by the system.
&lt;span class="p"&gt;-&lt;/span&gt; Wit is permitted; invention is not.

&lt;span class="gu"&gt;### HUMOR_RULES&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; Humor is dry, situational, and brief.
&lt;span class="p"&gt;-&lt;/span&gt; Humor never carries information on its own.
&lt;span class="p"&gt;-&lt;/span&gt; Jokes appear only after facts land.
&lt;span class="p"&gt;-&lt;/span&gt; Light teasing of Ashley’s recurring patterns is allowed and observational.
&lt;span class="p"&gt;-&lt;/span&gt; Never condescending. Never explanatory.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 The full prompt file is stored in the repo at &lt;a href="https://github.com/anchildress1/system-notes/tree/v2.0.0/apps/api/algolia/algolia_prompt.md" rel="noopener noreferrer"&gt;System Notes v2.0.0—apps/api/algolia/algolia_prompt.md&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Prompted Suggestions 🧭
&lt;/h3&gt;

&lt;p&gt;Open-ended chat tends to drift.&lt;/p&gt;

&lt;p&gt;To prevent that, the interface includes prompted follow-up suggestions that act as navigational signposts rather than guesses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The system only suggests questions it already knows how to answer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These prompts are derived directly from retrieved results. They narrow scope, reinforce direction, and keep the conversation grounded in what actually exists. Prompting here doesn’t add intelligence. It removes ambiguity.&lt;/p&gt;
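&lt;p&gt;A minimal sketch of that guarantee: follow-up prompts are built only from facets present in the retrieved hits, so nothing unanswerable is ever suggested. The function name and prompt template here are illustrative assumptions; the real behavior is defined in the suggestions prompt file.&lt;/p&gt;

```javascript
// Hypothetical shape of the suggestion step: follow-ups come only
// from facet values on records that were actually retrieved.
function suggestFrom(hits, limit) {
  const seen = new Set();
  const prompts = [];
  for (const hit of hits) {
    for (const tag of hit['tags.lvl0'] || []) {
      if (!seen.has(tag)) {
        seen.add(tag);
        prompts.push('Tell me more about ' + tag);
      }
    }
    if (prompts.length >= limit) break;
  }
  return prompts.slice(0, limit);
}

const prompts = suggestFrom(
  [{ 'tags.lvl0': ['DEV Challenge', 'Approach'] }],
  2,
);
console.log(prompts);
```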

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6haihowhmvya3qudcnh9.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6haihowhmvya3qudcnh9.png%3Fv%3D2026" alt="Screenshot of Algolia Agent Studio for Ruckus prompt suggestions" width="1244" height="900"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 The full prompt for suggestions is stored in the repo at &lt;a href="https://github.com/anchildress1/system-notes/tree/v2.0.0/apps/api/algolia/suggestions_prompt.md" rel="noopener noreferrer"&gt;System Notes v2.0.0—apps/api/algolia/suggestions_prompt.md&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Retrieval Beyond Indexed Records 📚
&lt;/h3&gt;

&lt;p&gt;This isn’t just chat. This is multi-source retrieval with intent. Some answers only exist as prose.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqw6w0t4xq4w5anp8un89.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqw6w0t4xq4w5anp8un89.png%3Fv%3D2026" alt="Screenshot of Algolia Agent Studio for Ruckus search tool" width="878" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The agent can retrieve long-form blog content directly, allowing conversational navigation to move between indexed decisions and narrative explanations without losing context or inventing summaries. If a post doesn’t answer the question, it isn’t surfaced.&lt;/p&gt;

&lt;p&gt;This allows movement from quick lookup into deeper explanation without breaking trust.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgeh17dv7et38s8vd1ok0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgeh17dv7et38s8vd1ok0.png" alt="Screenshot of Algolia Agent Studio for Ruckus custom blog search tool" width="800" height="947"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 The blog search does a similar job to its sister web crawler, but it lets the agent pull an entire blog post into context instead of trimming it for quicker indexing. Yes—the tokens are worth it.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why Fast Retrieval Matters 🏎️
&lt;/h2&gt;

&lt;p&gt;Many conversational systems hide slow or uncertain retrieval behind fluent language. This one doesn’t try to. Conversational flow only works when the foundation underneath it is solid.&lt;/p&gt;

&lt;p&gt;Without a fast, well-structured index layer, responses become slower and less reliable. Latency increases. Ambiguity creeps in. The system starts compensating instead of respecting boundaries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conversation works here because retrieval resolves first.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the system can’t answer, it stops. There is no speculative reasoning loop and no attempt to sound helpful for its own sake. Chat doesn’t replace search in this build. It reveals it, one step at a time.&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s Next 🔮
&lt;/h2&gt;

&lt;p&gt;Time was the primary constraint for this entry. When given the choice, I prioritized reliable conversational paths over feature breadth.&lt;/p&gt;

&lt;p&gt;Next steps are clear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Finish wiring smart URL state across all conversational actions
&lt;/li&gt;
&lt;li&gt;Expand event tracking to observe real navigation patterns
&lt;/li&gt;
&lt;li&gt;Continue tightening response latency
&lt;/li&gt;
&lt;li&gt;Refine fallback behavior when conversational paths dead-end
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This system stands on the same retrieval foundation as &lt;a href="https://dev.to/anchildress1/from-static-portfolio-to-indexed-decisions-46bf"&gt;my non-conversational entry&lt;/a&gt;. The difference is not what the system knows: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s how users move through it.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🛡️ Built With a Human at the Wheel
&lt;/h2&gt;

&lt;p&gt;This post was written by me, with ChatGPT used as a drafting and editing partner to help restructure sections, tighten language, and improve clarity while preserving intent and voice.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>algoliachallenge</category>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>I genuinely meant it when I said I was done. I even had that rare, fragile sense of closure. Then I noticed one small thing, which led to another, and somehow became a full pass of “minor” edits. I failed at stopping, but this time I really did. 🚧🚦</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Wed, 28 Jan 2026 06:15:35 +0000</pubDate>
      <link>https://forem.com/anchildress1/i-genuinely-meant-it-when-i-said-i-was-done-i-even-had-that-rare-fragile-sense-of-closure-then-i-1cna</link>
      <guid>https://forem.com/anchildress1/i-genuinely-meant-it-when-i-said-i-was-done-i-even-had-that-rare-fragile-sense-of-closure-then-i-1cna</guid>
      <description>&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e" class="crayons-story__hidden-navigation-link"&gt;My Portfolio Doesn’t Live on the Page 🚫📃&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
      &lt;a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e" class="crayons-article__context-note crayons-article__context-note__feed"&gt;&lt;p&gt;New Year, New You Portfolio Challenge Submission&lt;/p&gt;

&lt;/a&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/anchildress1" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3224358%2F7f675c78-6aa0-466a-a5a7-c3e35440d53a.png" alt="anchildress1 profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/anchildress1" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Ashley Childress
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Ashley Childress
                
              
              &lt;div id="story-author-preview-content-3190808" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/anchildress1" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3224358%2F7f675c78-6aa0-466a-a5a7-c3e35440d53a.png" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Ashley Childress&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Jan 24&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e" id="article-link-3190808"&gt;
          My Portfolio Doesn’t Live on the Page 🚫📃
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/devchallenge"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;devchallenge&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/googleaichallenge"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;googleaichallenge&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/portfolio"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;portfolio&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/gemini"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;gemini&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/exploding-head-daceb38d627e6ae9b730f36a1e390fca556a4289d5a41abb2c35068ad3e2c4b5.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/multi-unicorn-b44d6f8c23cdd00964192bedc38af3e82463978aa611b4365bd33a0f1f4f3e97.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;19&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            8 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;




</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>portfolio</category>
      <category>gemini</category>
    </item>
    <item>
      <title>My Portfolio Doesn’t Live on the Page 🚫📃</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Sat, 24 Jan 2026 01:16:49 +0000</pubDate>
      <link>https://forem.com/anchildress1/my-portfolio-doesnt-live-on-the-page-218e</link>
      <guid>https://forem.com/anchildress1/my-portfolio-doesnt-live-on-the-page-218e</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/new-year-new-you-google-ai-2025-12-31"&gt;New Year, New You Portfolio Challenge Presented by Google AI&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 &lt;strong&gt;TL;DR for Judges:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Live portfolio deployed on &lt;strong&gt;Google Cloud Run&lt;/strong&gt;, embedded below

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://system-notes-ui-288489184837.us-east1.run.app" rel="noopener noreferrer"&gt;https://system-notes-ui-288489184837.us-east1.run.app&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://anchildress1.dev" rel="noopener noreferrer"&gt;https://anchildress1.dev&lt;/a&gt; (canonical)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Embedded below with required label: &lt;code&gt;dev-tutorial=devnewyear2026&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Source + system notes linked

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/anchildress1/system-notes/releases/tag/system-notes-v1.2.0" rel="noopener noreferrer"&gt;System Notes v1.2.0&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Focus: AI-assisted system design, not a static page&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  About Me 👩🏻‍🦰
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How I Ended Up Here 🌀
&lt;/h3&gt;

&lt;p&gt;For those of you who don’t know me yet, or who haven’t wandered into one of my other posts and stayed longer than you meant to—hey, I’m Ashley. I’m a very opinionated, very stubborn, and &lt;em&gt;happily backend-only&lt;/em&gt; software engineer, which means I spend a fair amount of time actively running away from anything that ends in the letters 'UI'. That detail matters, because it makes everything that follows a little ironic.&lt;/p&gt;

&lt;p&gt;I don’t do hackathons, which I wrote about in &lt;a href="https://dev.to/anchildress1/the-hackathon-i-swore-off-and-the-exhaustion-that-mostly-compiled-c4l"&gt;this post&lt;/a&gt;. I really don’t do New Year’s resolutions either! I fundamentally disagree with the idea that growth needs a ceremonial date on the calendar. If something is broken, I want to know now. If it needs fixing, I want to fix it now. Harsh feedback today beats polite intentions tomorrow.&lt;/p&gt;

&lt;p&gt;This wasn’t about resolutions, and it wasn’t even about a portfolio refresh in isolation. If I had seen this challenge on its own, I probably would have kept scrolling. What stopped me was the pairing with the Algolia challenge, because together they finally lined up with something I’d been meaning to build for a while and hadn’t prioritized. I gave myself a weekend not because I expected something spectacular, but because the tools I wanted to learn finally matched something I actually needed to build, and the timing felt intentional rather than forced.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚖️ &lt;strong&gt;TL;DR:&lt;/strong&gt; This wasn’t a month-long build. It was one focused weekend, followed by exactly four (and a half) evenings of intentional obsession over the things you &lt;em&gt;won’t&lt;/em&gt; see on the page.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqxh5x11fkmhrsr3vcdq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqxh5x11fkmhrsr3vcdq.png" alt="Human-crafted, AI-edited badge"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  The Problem Beneath the Prompt 🧠
&lt;/h3&gt;

&lt;p&gt;For me, this was an AI challenge first and a portfolio challenge second. I love my job, I’m not looking for recruiters, and I’m not trying to market myself for a career move. This site exists for experimentation and self-amusement, and it only resembles a portfolio because that’s the shape the challenge happens to take.&lt;/p&gt;

&lt;p&gt;I approached the work in two deliberate parts. The first was finally learning Antigravity, which I’d downloaded, glanced at, and then avoided actually using. Pairing that with the Google AI Pro subscription gave me enough room to experiment freely, and in practice that meant leaning heavily on Google Gemini Pro 3 with high reasoning enabled. Every attempt to dial it back introduced subtle breakage, so I accepted higher reasoning as the right tool for this job.&lt;/p&gt;

&lt;p&gt;The second part was laying early groundwork for the Algolia challenge by introducing a chatbot up front, rather than bolting it on later. Throughout all of this, ChatGPT stayed firmly in a research-and-orchestration role behind the scenes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚖️ &lt;strong&gt;TL;DR:&lt;/strong&gt; I treated this as an AI challenge first—learning Antigravity now and laying intentional groundwork for the upcoming &lt;a href="https://dev.to/devteam/join-the-algolia-agent-studio-challenge-3000-in-prizes-4eli?bb=259943"&gt;Algolia challenge&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Portfolio 💼
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Go Look First 🚦
&lt;/h3&gt;

&lt;p&gt;No accounts, no setup, no ceremony. Click the hero text and ask Ruckus literally anything about me or the system. Before I explain what I built or why certain decisions look the way they do, I want you to actually look at it. Click around. Poke at the chatbot. Get a feel for it without narration first. Once you’ve seen it in motion, the rest of this post exists to give you the context for all the work that you can’t see.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This site includes updates made after implementing features for the &lt;a href="https://dev.to/challenges/algolia-2026-01-07"&gt;Algolia challenge&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://system-notes-ui-288489184837.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;




&lt;blockquote&gt;
&lt;p&gt;Explore it by clicking, asking, and navigating—this system is designed to respond, not be scanned.&lt;/p&gt;

&lt;p&gt;🦄 The UI runs as its own Cloud Run service alongside the API. While &lt;a href="//anchildress1.dev"&gt;anchildress1.dev&lt;/a&gt; is its canonical domain, the UI is accessed for this challenge at &lt;a href="https://system-notes-ui-288489184837.us-east1.run.app" rel="noopener noreferrer"&gt;https://system-notes-ui-288489184837.us-east1.run.app&lt;/a&gt;.&lt;/p&gt;

&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Judge Validation Snapshot 📋
&lt;/h3&gt;

&lt;p&gt;Below is a quick, explicit checklist aligned to the judging criteria, for judges who want to validate requirements without hunting through prose.&lt;/p&gt;

&lt;h4&gt;
  
  
  ✅ Innovation / Creativity
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Novel interactive elements (intentional visual effects, chatbot interaction, theme song).&lt;/li&gt;
&lt;li&gt;Purposeful use of AI tools (Antigravity, Google Gemini Pro 3, ChatGPT).&lt;/li&gt;
&lt;li&gt;Clear personal voice and narrative arc.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  ✅ Technical Strength
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Live Cloud Run deployment embedded in this post&lt;/li&gt;
&lt;li&gt;Deployment includes required challenge label: &lt;code&gt;dev-tutorial=devnewyear2026&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;All links, embeds, and interactive elements function correctly.&lt;/li&gt;
&lt;li&gt;AI usage includes explicit guardrails and evaluation by outcomes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  ✅ UX / Design
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Clear navigation and section hierarchy.&lt;/li&gt;
&lt;li&gt;Accessible, readable visual design.&lt;/li&gt;
&lt;li&gt;Interactive elements are responsive and controlled.&lt;/li&gt;
&lt;li&gt;Performance remains snappy with smooth animations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbtturbko16xmwf5l6dkr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbtturbko16xmwf5l6dkr.png" alt="Screenshot of Lighthouse performance results for desktop, all 100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 &lt;em&gt;Yes, I promise—it’s all here, and then some.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Technical Stack 💾
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; Next.js (AI-generated UI; intentionally minimal and read-only)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Python (AI-generated; deliberate choice over JavaScript; Django considered but deferred to avoid stacking two new frameworks in a weekend challenge)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Generation:&lt;/strong&gt; Antigravity with Gemini Pro 3 (high-reasoning mode, intentionally constrained) and AI Pro trial subscription&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chat Interface:&lt;/strong&gt; Ruckus (GPT-5.2, no memory, bounded knowledge base)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment:&lt;/strong&gt; Google Cloud Run (live service with required &lt;code&gt;dev-tutorial=devnewyear2026&lt;/code&gt; label)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing:&lt;/strong&gt; Playwright (E2E), unit and integration tests, Lighthouse performance and accessibility checks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation:&lt;/strong&gt; GitHub Actions for validation and deployment, explicit AI-checks command, release-please configured for workflow automation&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 Source for &lt;strong&gt;v1.2.0 of System Notes&lt;/strong&gt; is available on &lt;a href="https://github.com/anchildress1/system-notes/releases/tag/system-notes-v1.2.0" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; for traceability and review.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  How I Built It 🏗️
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Below the Surface (Where the Real Work Lives) 🧱
&lt;/h3&gt;

&lt;p&gt;Most of what I built for this project will never be obvious from any single page. The structure, accessibility decisions, performance work, mobile behavior, and AI-facing metadata all live below the surface. If you’re curious, there are plenty of ways to see it in action: run a Lighthouse report, check the accessibility scores, view the site on a different device, or inspect the sitemap. You can also chat with Ruckus, the built-in assistant that knows far more about me and my work than is probably reasonable for a proof of concept.&lt;/p&gt;

&lt;p&gt;The goal wasn’t to hide complexity, but to place it where it belongs—so the site can be crawled intentionally by AI while still feeling coherent and human to anyone reading it.&lt;/p&gt;

&lt;p&gt;The chatbot implementation itself is intentionally straightforward. Its strength comes from the information and constraints I gave it, not from hidden tricks or clever illusions. It runs on GPT-5.2 with a small knowledge base and no memory, and it’s designed to be helpful, honest, and conversational rather than impressive on paper.&lt;/p&gt;

&lt;p&gt;Everything here is deployed and tested deliberately. The polish you see is intentional, and the things you don’t see are doing just as much work.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚖️ &lt;strong&gt;TL;DR:&lt;/strong&gt; The visible site is only a small part of the work. Most of the effort went into structure, constraints, accessibility, and coordinating multiple AI systems under real-world conditions.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Meet Ruckus: Production AI 🧪
&lt;/h3&gt;

&lt;p&gt;Ruckus is a constrained, production-deployed assistant. It responds using declared system data, not free-form invention. The goal here isn’t to prove that AI was used, but that it was &lt;em&gt;designed&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;What powers Ruckus isn’t a grab-bag of “write me some code” prompts. It’s a set of system-level instructions that define what the assistant is allowed to know, say, and explicitly refuse to guess. Those constraints are what make it usable in a live environment.&lt;/p&gt;

&lt;p&gt;Below are literal excerpts from the primary system prompt. These aren’t paraphrases or examples. They’re the rules that actually govern how the chatbot embedded in this site behaves.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;### Hard Guardrails (Non-Negotiable)&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Ruckus is an AI assistant, not Ashley Childress
&lt;span class="p"&gt;-&lt;/span&gt; Ruckus is not the portfolio system
&lt;span class="p"&gt;-&lt;/span&gt; Never speak in first-person as Ashley
&lt;span class="p"&gt;-&lt;/span&gt; No roleplay or impersonation
&lt;span class="p"&gt;-&lt;/span&gt; No hallucination, guessing, or inference
&lt;span class="p"&gt;-&lt;/span&gt; No filler
&lt;span class="p"&gt;-&lt;/span&gt; Default to &lt;span class="gs"&gt;**short answers**&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Priority: &lt;span class="gs"&gt;**accuracy &amp;gt; clarity &amp;gt; completeness**&lt;/span&gt;
Provide &lt;span class="gs"&gt;**highlights first**&lt;/span&gt;
Expand &lt;span class="gs"&gt;**only**&lt;/span&gt; when the user explicitly asks for more detail
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;If a question falls outside explicit, known context, Ruckus must:
&lt;span class="p"&gt;1.&lt;/span&gt; State lack of knowledge plainly.
&lt;span class="p"&gt;2.&lt;/span&gt; Attribute the gap correctly to missing input from Ashley.
&lt;span class="p"&gt;3.&lt;/span&gt; Redirect the user to a nearby, valid topic.
&lt;span class="p"&gt;4.&lt;/span&gt; Keep the response short.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;🦄 These constraints are exactly what make the chatbot predictable and trustworthy in practice. Everything else in the full prompt exists to support these boundaries.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  What I'm Most Proud Of 💖
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What This Site Is Actually Doing ✨
&lt;/h3&gt;

&lt;p&gt;When someone first lands on the page, the glitter bomb is doing real work (if you missed it, click the hero text). It sets tone immediately, signals playfulness, and gives my ADHD something to engage with while I’m evaluating Antigravity’s output by clicking, scrolling, and retriggering effects.&lt;/p&gt;

&lt;p&gt;That choice came with tradeoffs. I wanted the fun without sacrificing performance or accessibility, which forced constraints I don’t usually deal with as someone who avoids UI work. What makes this project different from most things I’ve built is that I didn’t review a single line of code. Instead, I worked primarily with Google Gemini Pro 3 in higher‑reasoning mode and evaluated outcomes I could see, test, and benchmark.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚖️ &lt;strong&gt;TL;DR:&lt;/strong&gt; This site is a curated systems playground. The playful surface is intentional; the real experiment was evaluating AI-built results, not reviewing code.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  What Changed Once I Stopped Touching It 🔄
&lt;/h3&gt;

&lt;p&gt;When I first dove into Antigravity, I was underwhelmed and couldn’t see how my one‑weekend plan was supposed to work. Once I stopped poking and let Antigravity and Gemini Pro 3 actually run, that opinion shifted quickly—they performed far better than I expected.&lt;/p&gt;

&lt;p&gt;The hardest part wasn’t starting, it was stopping. I’m a perfectionist, and without boundaries I’ll keep refining indefinitely. The weekend build quietly stretched into the following week until I moved on to the Algolia challenge and forced myself to declare a version finished.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚖️ &lt;strong&gt;TL;DR:&lt;/strong&gt; The hardest part wasn’t learning Antigravity—it was knowing when to say "complete enough".&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Why This Counts as Forward Motion 🚧
&lt;/h3&gt;

&lt;p&gt;This project didn’t change who I am as an engineer. It clarified it. I’m systems-focused, outcome-driven, and willing to stop reviewing code once a system can be evaluated by behavior and performance alone. Defining that boundary—and enforcing it—is what makes this forward motion instead of a one-off experiment.&lt;/p&gt;

&lt;p&gt;Seeing it hold up once it was deployed, shared, and interacted with by real people made that boundary tangible instead of theoretical. So overall, I'm calling this a success. Still—my work will stay at the systems layer. &lt;em&gt;A deliberate choice.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚖️ &lt;strong&gt;TL;DR:&lt;/strong&gt; I now treat systems-level evaluation, not code review, as a first-class decision point when working with Antigravity + Gemini Pro 3.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🛡️ End of the Training Loop
&lt;/h2&gt;

&lt;p&gt;This post was written by a human, with AI used intentionally as a collaborator for research, experimentation, and system construction. All design decisions, judgments, and conclusions remain human-led.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Deployed on Google Cloud Run · Embedded per challenge requirements · Public and unauthenticated&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>portfolio</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Signing Your Name on AI-Assisted Commits with RAI Footers 🛡️✍️</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Wed, 21 Jan 2026 11:46:00 +0000</pubDate>
      <link>https://forem.com/anchildress1/signing-your-name-on-ai-assisted-commits-with-rai-footers-2b0o</link>
      <guid>https://forem.com/anchildress1/signing-your-name-on-ai-assisted-commits-with-rai-footers-2b0o</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;🦄 I know it's been a minute (&lt;em&gt;months!&lt;/em&gt;), but I said I wouldn't forget about the follow-up for this one and I have not! It's been nagging at me to finish, but I wanted to have &lt;em&gt;something&lt;/em&gt; to show for it first. I got that done quick enough; however, Release Please was not nearly as straightforward to learn as I anticipated considering I set up dual packages in a mono-repo as my "simple" setup. Yes—I curse myself, but holidays were also involved in the shenanigans!&lt;/p&gt;

&lt;p&gt;I'm going to try not to bore you with the nitty-gritty—we're all smart devs around here—but I have to finish my original thought or this outstanding post will nag at me forever. I'm not counting out a third though, because I do still have more ideas. Me and attribution deserve a break from each other after this though, so we’ll save that for future-me to consider. 🔮✨&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fan9phm5swbi8hrxec6p3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fan9phm5swbi8hrxec6p3.png" alt="Human-created, AI-edited badge" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Brief but Inspirational Backstory ✨
&lt;/h2&gt;

&lt;p&gt;If you missed the original version, I’ll sum it up quickly: it’s always bugged me that AI joined the party and never had to make itself permanently known. Not as the responsible party—that’s on us as devs regardless—but as a co-author. So I came up with an idea to fix that, and over the break I quickly threw together a working version of enforcement for both CommitLint and GitLint in Python.&lt;/p&gt;

&lt;p&gt;I call this a &lt;strong&gt;RAI footer&lt;/strong&gt;, short for &lt;em&gt;Responsible AI attribution&lt;/em&gt;. Not policy. Not governance. Just a mechanical signal in your commit history that makes AI assistance explicit and pairs it with a human signoff. The goal isn’t to moralize commits. It’s to remove ambiguity about authorship once AI enters the workflow.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 The &lt;a href="https://github.com/ChecKMarKDevTools/rai-lint" rel="noopener noreferrer"&gt;README&lt;/a&gt; already explains what this is and how to use it properly, if you’re curious&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Doesn’t Live Alone 🔗
&lt;/h2&gt;

&lt;p&gt;I have to point out that this RAI footer is just a baby step and it does &lt;em&gt;absolutely nothing&lt;/em&gt; to fix potential problems that may crop up from code that's committed without proper human-in-the-loop (HITL) oversight.&lt;/p&gt;

&lt;p&gt;To make this system work as designed, I leaned on Git’s built-in functionality. The &lt;code&gt;--signoff&lt;/code&gt; flag comes from a legal convention common in open source: the Developer Certificate of Origin (DCO). It adds a literal footer to the commit stating that you, as a verified author, accept a layer of legal accountability. You didn’t just push code. You agreed to the project’s rules.&lt;/p&gt;

&lt;p&gt;Which is a whole lot of legal framing for a system that has zero interest in what lawyers are doing.&lt;/p&gt;

&lt;p&gt;My version instead says: &lt;em&gt;I’m signing my name. This is correct.&lt;/em&gt; That’s it.&lt;/p&gt;

&lt;p&gt;In practice, your commit looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git commit &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"chore: Add trouble in spades

Assisted-by: GitHub Copilot &amp;lt;copilot@github.com&amp;gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the result you end up with when you look back at &lt;code&gt;git log&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Author: Ashley Childress &amp;lt;6563688+anchildress1@users.noreply.github.com&amp;gt;
Date:   Sat Jan 17 00:32:38 2026 -0500

chore: Add trouble in spades

Assisted-by: GitHub Copilot &amp;lt;copilot@github.com&amp;gt;
Signed-off-by: Ashley Childress &amp;lt;6563688+anchildress1@users.noreply.github.com&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this context, it’s really less a legal signature and more Scout’s Honor. The one &lt;code&gt;-s&lt;/code&gt; flag is your HITL review stating you’ve personally signed off on this commit in its entirety, which includes the AI attribution footer.&lt;/p&gt;

&lt;p&gt;When both footers are paired together in the context of a single commit, it says beyond any doubt that you—aka the verified author—were aware of the changes and approved the code AI did or did not contribute.&lt;/p&gt;
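&lt;p&gt;As a rough illustration of that pairing rule—this is a hypothetical helper sketched for this post, not the actual rai-lint implementation—the check boils down to something like:&lt;/p&gt;

```shell
# Hypothetical sketch of the pairing rule: if a commit message carries an
# Assisted-by footer, it must also carry a Signed-off-by footer.
check_rai() {
  msg="$1"
  # Only enforce the signoff when AI attribution is present at all
  if printf '%s\n' "$msg" | grep -q '^Assisted-by:'; then
    printf '%s\n' "$msg" | grep -q '^Signed-off-by:' || return 1
  fi
  return 0
}
```

&lt;p&gt;A commit with no AI footer passes untouched; one that claims AI assistance without a human signoff fails the lint.&lt;/p&gt;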

&lt;blockquote&gt;
&lt;p&gt;🦄 This gets you through the configuration, but on its own it’s not very impressive. Also, there’s really not anything new or earth-shattering I’ve done here—I simply repurposed the existing system.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  One Last Tiny Helper 🧩
&lt;/h2&gt;

&lt;p&gt;Generally speaking, anything that pops up on my “continuous work” radar and &lt;em&gt;can&lt;/em&gt; be automated is eventually automated. This is no different.&lt;/p&gt;

&lt;p&gt;I’ve commented somewhere before that I used to be the world’s absolute worst committer. Truth? I still am. Copilot, however, generally does a much better job.&lt;/p&gt;

&lt;p&gt;I have a &lt;a href="https://github.com/anchildress1/awesome-github-copilot/blob/main/prompts/generate-commit-message.prompt.md" rel="noopener noreferrer"&gt;&lt;code&gt;generate-commit-message&lt;/code&gt; prompt&lt;/a&gt; built with this entire flow in mind: it looks at staged files by default (with a fallback), creates a conventional commit message, and assigns its own attribution based on the chat history. It’s far from perfect, but it does a decent job of getting you in the ballpark!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 Help a girl out and leave a star if you find anything useful!&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Boring Future 🚧
&lt;/h2&gt;

&lt;p&gt;This is really all there is to the enforcement part—at least to carry us over until the AI estimators do a better job of guessing whether AI helped or not!&lt;/p&gt;

&lt;p&gt;The future and arguably more important piece of this is reporting those stats back for your repo. It’s not good enough to state that AI helped anymore. We need to start proving that it’s not all terrible code!&lt;/p&gt;

&lt;p&gt;From here, I envision a centralized, reusable workflow that pulls a diff of all commits, roughly guesstimates the percentage of code that AI contributed, and updates a static Shields.io badge (for now) on the README.&lt;/p&gt;
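&lt;p&gt;The badge math itself is the easy part. Here’s a minimal sketch with hypothetical hard-coded counts standing in for the real Git queries (noted in the comments)—a starting point, not the finished workflow:&lt;/p&gt;

```shell
# Hypothetical counts; in a real workflow these would come from something like:
#   total:    git rev-list --count HEAD
#   assisted: counting non-empty lines from
#             git log --format='%(trailers:key=Assisted-by,valueonly)'
total=40
assisted=10

# Integer percentage of commits carrying an Assisted-by footer
pct=$(( assisted * 100 / total ))

# Static Shields.io badge URL (%25 is a URL-encoded percent sign)
badge="https://img.shields.io/badge/AI--assisted-${pct}%25-informational"
echo "$badge"
```

&lt;p&gt;Swap the hard-coded numbers for the real commands inside a scheduled workflow and commit the refreshed badge URL back to the README.&lt;/p&gt;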

&lt;p&gt;You could get really fancy with this if you wanted to, but honestly there’s other tooling that will catch up before that becomes necessary! This is just meant to hold us over until that tech emerges. Fingers crossed that point is sooner rather than later. 🤞&lt;/p&gt;




&lt;h2&gt;
  
  
  🛡️ The Part Where I Sign My Name
&lt;/h2&gt;

&lt;p&gt;This post was written by me. ChatGPT assisted with editing and clarity. It didn’t choose the ideas, make the decisions, or approve the work. If something here is wrong, confusing, or incomplete, that responsibility belongs to the human who signed it. AI can assist. Authorship still requires someone willing to put their name on the result.&lt;/p&gt;

</description>
      <category>githubcopilot</category>
      <category>ai</category>
      <category>automation</category>
      <category>git</category>
    </item>
    <item>
      <title>Waiting, With Intent: Designing AI Systems for the Long Game 🧭</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Wed, 14 Jan 2026 12:22:00 +0000</pubDate>
      <link>https://forem.com/anchildress1/waiting-with-intent-designing-ai-systems-for-the-long-game-1abg</link>
      <guid>https://forem.com/anchildress1/waiting-with-intent-designing-ai-systems-for-the-long-game-1abg</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;🦄 I’m waiting for AI to mature. Very explicitly—and yes, mostly impatiently. I don’t even think we're close to imagining the future landscape with AI, and honestly pretending otherwise is neither honest nor useful to anyone. This post is my attempt to explain how I think about AI from a dev perspective on a longer horizon—five, maybe even ten years down the road. The tools we have right now are still a very long way away from my baseline expectations, which my AI systems remind me of near constantly—like when I'm trying to force agent-like functionality out of ChatGPT. &lt;strong&gt;Spoiler:&lt;/strong&gt; it’s not designed to handle that.&lt;/p&gt;

&lt;p&gt;While I’m waiting, though, I’m not disengaged. I’m definitely tinkering—sometimes randomly and sometimes just as an unsatisfied AI user who’s not thrilled with the existing systems. I’m also busy figuring out what the next problems really look like by diving in and getting my hands dirty. &lt;/p&gt;

&lt;p&gt;One of those big challenges is what I keep calling the “memory problem.” I've designed a solution for my own personal agent to manage long-term memory. Yes—I'm aware that GitHub is inevitably going to beat me to a viable solution. &lt;em&gt;Again&lt;/em&gt;. But I'm one of those people who will attempt to solve a problem first, get it wrong at least ten different times, and &lt;em&gt;then&lt;/em&gt; do the research to fill in the knowledge gaps. Now I just have to muster up enough oomph to actually do it. 🐉🧚‍♀️&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg4wzwazhknufn2ou2to7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg4wzwazhknufn2ou2to7.png" alt="Human-crafted, AI-edited badge" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  First Principles: LLM vs Agent 🧩
&lt;/h2&gt;

&lt;p&gt;At some point, if you want any of this AI talk to make sense, you have to step back, align terminology, and separate concepts that keep getting blurred together. An LLM, often called a model, is the generative part of GenAI—it accepts input and generates output. &lt;em&gt;That's it.&lt;/em&gt; An agent is the system managing context, memory, and various tools. The agent is responsible for what information the LLM even sees in the first place.&lt;/p&gt;

&lt;p&gt;When those two ideas get collapsed into the same thing, everything downstream becomes confused. You can’t reason clearly about limits, costs, or failure modes if you don’t separate generation from data management. Until you draw that line, every other discussion ends up muddy.&lt;/p&gt;




&lt;h2&gt;
  
  
  Context Is the Bottleneck (and Everyone Knows It) 🕸️
&lt;/h2&gt;

&lt;p&gt;Once you make the distinction between LLM and agent, the real bottleneck becomes obvious. There is no good way to manage context today, let alone have the agent automate that job effectively. If you’re not fully up to date on the lingo: context includes a whole set of things like instruction files, workspace structure, active files in your IDE, the AI chat history, available tools, and more.&lt;/p&gt;

&lt;p&gt;What we have now are very manual tools that do very little to solve the problem. We have to remember to tell the AI which parts currently matter—or at some point we have to clear the chat entirely and start over. If we don’t do that deliberately, AI slowly loses the point of what we're supposed to be working on in the first place. At worst, the entire chat thread is poisoned and the AI becomes unable to function at all. Then you're forced to start fresh, and always at the most inconvenient time.&lt;/p&gt;

&lt;p&gt;And don’t expect LLM context to scale, either. Hardware costs may go down eventually, but nowhere near fast enough to keep up with everything we keep throwing at it. So, context is very finite—especially in GitHub Copilot, where context windows are smaller than normal anyway.&lt;/p&gt;

&lt;p&gt;The agent will typically make space by compacting information. It asks the LLM to summarize key points, then literally drops the original full-length novel from your active context and replaces it with the CliffsNotes version. The more summarization, the less accurate things get over time. So naturally you retry prompts while adding back the dropped details, and you end up making more calls for a single task overall. The model has to process more and more input just to get you back to the same answer you already had—not necessarily a better one.&lt;/p&gt;

&lt;p&gt;People know this is a problem. Tools like &lt;a href="https://github.com/toon-format/toon" rel="noopener noreferrer"&gt;Toon&lt;/a&gt; exist specifically to minimize input impact for AI. We also have tools like &lt;a href="https://docs.github.com/?search-overlay-open=true&amp;amp;search-overlay-input=runSubagent&amp;amp;search-overlay-ask-ai=true" rel="noopener noreferrer"&gt;Copilot's &lt;code&gt;#runSubagent&lt;/code&gt;&lt;/a&gt; to help manage context within a single agent. These aren't true solutions though—they are signals. These are the problems people are trying to solve yesterday while we wait for the next AI evolution to emerge.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Orchestration Is Inevitable 🐙
&lt;/h2&gt;

&lt;p&gt;Even if you do everything “right” and manage context like a master AI sensei, agents eventually hit a limit. The list of must-have MCPs is growing, and right now those stay in the context window as long as they're enabled. Projects are accumulating ever-larger knowledge bases. Customization is becoming more and more explicit. The context an agent needs will continue to grow exponentially, even though LLMs aren't increasing capacity at the same speed.&lt;/p&gt;

&lt;p&gt;The ultimate overflow state isn’t hypothetical—it’s inevitable. Once an agent accumulates enough memory, enough history, enough summarization, the LLM simply can’t keep up coherently anymore. That isn’t a failure in the system—it’s a limit.&lt;/p&gt;

&lt;p&gt;When you hit that limit, you can't just tweak prompts or optimize harder. You wouldn't try to squeeze more juice out of the same dry orange, either. The only real long-term solution is that you split the system—&lt;em&gt;you have to&lt;/em&gt;!&lt;/p&gt;

&lt;p&gt;Smaller pieces of work are then sent to the LLM with only relevant context, which is when smarter agents will start to appear. This is where summarization stops and you retain the original intent at both the highest and the lowest level. When we get here, AI generation stops being the problem—the new problem is coordinating all those tiny pieces of work and still accomplishing the larger goal without re-prompting anything previously stated or defined elsewhere. &lt;em&gt;Welcome to the world of true agent orchestration&lt;/em&gt;!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;ProTip:&lt;/strong&gt; If you want a sneak peek of what this looks like, check out &lt;a href="https://verdent.ai" rel="noopener noreferrer"&gt;Verdent.ai&lt;/a&gt;. Of all the solutions I've worked with, Verdent is the only one that's truly designed for agent orchestration. It also excels in VS Code and wins every coding competition I've put it in.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Orchestration as a System Property ♟️
&lt;/h2&gt;

&lt;p&gt;Orchestration isn’t just about sequencing work in a nicer way—it’s about changing where responsibility lives. Yes—some things are always going to be sequential, but not everything needs to be. Some things can and should run in parallel, especially if you want speed and reliability included in future agentic systems.&lt;/p&gt;

&lt;p&gt;Validation is a fundamental part of orchestration, not something bolted on afterward. A successful agent has to be able to verify its own work without relying on prior context. It has to come in like a third party, with no knowledge beyond the repo instructions. CodeQL, lint enforcement, Makefiles, and even extra tests become the ground truth the system must consistently check itself against.&lt;/p&gt;

&lt;p&gt;Multi-model opposition fits naturally here, too. Different models trained by different companies catch different things. Then the agent can pick one model to implement and another to review. The point is that they disagree by default and then they converge around a common goal. This is a pivotal moment in the future landscape because officially the LLM is no longer the center of gravity—the agentic system is.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🎤 &lt;strong&gt;ShoutOut&lt;/strong&gt; &lt;a class="mentioned-user" href="https://dev.to/marcosomma"&gt;@marcosomma&lt;/a&gt; wrote a brilliant article on &lt;a href="https://dev.to/marcosomma/loopnode-how-orka-orchestrates-iterated-thought-until-agreement-emerges-17l2"&gt;the concept of agent convergence&lt;/a&gt; a while back and it's still one of my favorites. Worth the read if you missed it!&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Add Another Layer of Abstraction 🪜
&lt;/h2&gt;

&lt;p&gt;Now for my version of truth, which I know a lot of you are going to hate, so go ahead and brace for it. Once you’re working in a smart, orchestration-driven flow, there's no reason you need to keep prompting from the IDE. Wait before you jump into the debate, though—I'm not saying the IDE becomes obsolete! It just stops being the primary interface for developer workflows, because you’re consistently able to work at a higher level of abstraction. In this future, developers are directing systems that generate, test, and validate the code several layers underneath them automatically.&lt;/p&gt;

&lt;p&gt;You’re orchestrating agents that direct other agents. Some run sequentially. Others will run in parallel. Documentation is generated automatically and added to the agent's working knowledge base. Tests run continuously alongside agents implementing new code. Integration testing matters. Systems testing matters more. Chaos testing morphs from an abstract concept into a baseline requirement. The code still exists—but it’s no longer written by or for humans. AI slowly takes that over, which makes natural language the newest language you need to learn.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 For the record, developers are most definitely still building and driving solutions. That will never change—we're the mad scientists thinking up wild potions you didn't know you needed! Besides, all the future advancements in the world won't give silicon the ability to invent new things. Humans create. AI helps. &lt;em&gt;Period&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Trust, Then Speed (not the other way around) 🏎️
&lt;/h2&gt;

&lt;p&gt;When something breaks in any of my workflows, I don’t correct the mistake in the code immediately. I start by correcting whatever instruction caused the mistake, and then I rerun it. Even when I’m busy, even when work is chaotic, and especially when I should have left it alone hours ago—I never fully disengage from this. &lt;em&gt;I can’t.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is exactly why AI doesn’t make you faster—not yet, anyway. Not because it can’t, but because the systems haven’t caught up to where speed actually emerges. If you’re learning to use AI correctly, it almost always makes you slower at first—not faster. The delay isn’t failure. It’s infrastructure lag.&lt;/p&gt;

&lt;p&gt;Think of it like an investment. You’re learning how the models behave and how instructions actually align with them. You’re learning where the limits are, and then deliberately making the system work within those constraints. Speed comes later—after you trust that the system returns results that are validated, reviewed, and tested because you built it to behave that way.&lt;/p&gt;

&lt;p&gt;AI evolution is a long game, and we’re barely getting started. Right now, it still feels like grade school. We’re teaching it what our world looks like, how we think, and where the boundaries are.&lt;/p&gt;

&lt;p&gt;All the work done now—in this awkward middle state—is what makes that learning possible. Long runs of trial-and-error prompts, walls of instructions, documentation that later turns into knowledge bases—that’s the curriculum. And by the time it’s ready to graduate, it won’t just be competent. It’ll be a master. That’s the moment you realize you trust AI—not because it’s autonomous, but because you finally are. 🐉🧚‍♀️&lt;/p&gt;




&lt;h2&gt;
  
  
  🛡️ I Worked Until It Worked
&lt;/h2&gt;

&lt;p&gt;This post was written by me, with ChatGPT nearby like an overly talkative whiteboard—listening, interrupting, getting corrected, and occasionally making a genuinely good point. We argued about structure, laughed at the mic cutting out at the worst moments, and kept going anyway. The opinions are mine. The fact that it finally worked is the point.&lt;/p&gt;

</description>
      <category>agentic</category>
      <category>ai</category>
      <category>architecture</category>
      <category>devrel</category>
    </item>
  </channel>
</rss>
