<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ashley Childress</title>
    <description>The latest articles on Forem by Ashley Childress (@anchildress1).</description>
    <link>https://forem.com/anchildress1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3224358%2F7f675c78-6aa0-466a-a5a7-c3e35440d53a.png</url>
      <title>Forem: Ashley Childress</title>
      <link>https://forem.com/anchildress1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/anchildress1"/>
    <language>en</language>
    <item>
      <title>AI Isn't Stupid. Your Setup Is. 🛠️</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Sat, 02 May 2026 19:30:01 +0000</pubDate>
      <link>https://forem.com/anchildress1/ai-isnt-stupid-your-setup-is-16cn</link>
      <guid>https://forem.com/anchildress1/ai-isnt-stupid-your-setup-is-16cn</guid>
      <description>&lt;p&gt;The latest discourse I hear usually sounds something like, "I tried [insert agent flavor of the week] and it gave me garbage. AI is overrated."&lt;/p&gt;

&lt;p&gt;My response: "No. You asked your mechanic to build a house and forgot to provide blueprints." 🦄&lt;/p&gt;

&lt;p&gt;The agent isn't the problem—the setup is. Here's the workflow that actually works. None of it is clever and all of it took me longer to learn than I'd care to admit.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Pick the model that fits the task. Specs beat vibes. 🪛
&lt;/h2&gt;

&lt;p&gt;Haiku is a sprinter. It'll absolutely take a swing at your distributed system architecture—the answer just won't be one you can ship. Your job is to match the model to the work.&lt;/p&gt;

&lt;p&gt;If the problem is well-defined—clear specs, acceptance criteria, edge cases enumerated—Sonnet handles it fine. You'll spend more time in review, but you'll save real money. You'll also catch your own bad specs faster, which is its own gift.&lt;/p&gt;

&lt;p&gt;If the feature is a tangled mess and you can't (or won't) break it down, that's also fine. Hand the whole thing to Opus instead. You don't have to scope every subproblem, but you DO have to define the whole solution. "Make it work" is not a valid requirement—it's a desperate wish the agent will not understand.&lt;/p&gt;

&lt;p&gt;A cheap model with great specs beats an expensive model with vibes and feelings, every single time.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Plan in chat. Touch the codebase last. 🪞
&lt;/h2&gt;

&lt;p&gt;I spend hours—&lt;em&gt;many hours&lt;/em&gt;—talking through a problem before a single character lands in the codebase. AI is my rubber duck/research assistant with attitude—yes, I code that in, because annoying accolades distract me from the goal: a solid game plan.&lt;/p&gt;

&lt;p&gt;The language? Does not matter. I can read them all (I probably won't). Package manager? I care even less—drop a Makefile in the root and the commands stay the same regardless. Timeline? Sometimes, but the answer is usually "yesterday." What does matter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Meaningful tech stack&lt;/li&gt;
&lt;li&gt;Desired outcome&lt;/li&gt;
&lt;li&gt;Acceptance criteria&lt;/li&gt;
&lt;li&gt;Test scenarios—positive, negative, error, edge, weird, seen&lt;/li&gt;
&lt;li&gt;Explicit non-goals (the things you are NOT building, so they don't get sneakily built anyway)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Skip these and start prompting with "build me a thing"? You will indeed get &lt;em&gt;a thing&lt;/em&gt;. It just won't be &lt;em&gt;your thing&lt;/em&gt;.&lt;/p&gt;
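&lt;p&gt;The Makefile trick is easy to make concrete. A minimal sketch (target names and the pnpm commands are my own stand-ins, not a prescription):&lt;/p&gt;

```makefile
# Same verbs in every repo; swap the right-hand side per project.
install:
	pnpm install

test:
	pnpm test

lint:
	pnpm lint
```

&lt;p&gt;Now "run make test" means the same thing in a Python repo, a Node repo, or anything else.&lt;/p&gt;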




&lt;h2&gt;
  
  
  3. One source of truth. Stop copying instructions. 🪧
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;AGENTS.md&lt;/code&gt;, &lt;code&gt;copilot-instructions&lt;/code&gt;, &lt;code&gt;CLAUDE.md&lt;/code&gt;, &lt;code&gt;GEMINI.md&lt;/code&gt;—pick one. I use &lt;code&gt;AGENTS.md&lt;/code&gt; as the source of truth, then drop one-line markdown links to it from the others. That gives you one file to manage instead of four.&lt;/p&gt;
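&lt;p&gt;The redirect files can be one line each. A hypothetical &lt;code&gt;CLAUDE.md&lt;/code&gt;:&lt;/p&gt;

```markdown
All project instructions live in [AGENTS.md](./AGENTS.md). Read that file first.
```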

&lt;p&gt;If a rule is true everywhere—for you as the operator or across an entire project—it doesn't belong in a skill. Skills get called when triggered. Instructions get loaded always. Know which one you actually need and use it accordingly. I wrote &lt;a href="https://dev.to/anchildress1/skills-arent-magic-theyre-scoped-context-d07"&gt;another post&lt;/a&gt; dedicated solely to this concept, if you want a deeper dive.&lt;/p&gt;

&lt;p&gt;The model should maintain &lt;code&gt;AGENTS.md&lt;/code&gt; as it works—you do not need a separate &lt;code&gt;MEMORY.md&lt;/code&gt; to muddy the waters. When it keeps violating the same rule, don't add another to the pile. Edit instead. Your agent knows exactly where it tripped if you ask, and it already knows how to fix it.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Write for the agent. Not the audience. 🪶
&lt;/h2&gt;

&lt;p&gt;Left to its defaults, the model will write your instructions like a detailed onboarding doc. Section headers. Friendly intros. "This document outlines..." Polished prose for a human reader who is never supposed to show up.&lt;/p&gt;

&lt;p&gt;Instructions load into context every turn. Every word costs tokens and burns clarity. So optimize for the actual audience: your agent.&lt;/p&gt;

&lt;p&gt;Tell it explicitly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Edit for AI consumption only—no human-friendly framing, no narrative flow.&lt;/li&gt;
&lt;li&gt;Preserve every meaningful detail. Compress the prose, never drop the intent.&lt;/li&gt;
&lt;li&gt;Strip duplicates. If two rules say the same thing differently, merge them.&lt;/li&gt;
&lt;li&gt;Strip ambiguity. "Try to" and "consider" are noise—say what's required.&lt;/li&gt;
&lt;li&gt;Strip anything inferable from a reasonable code edit. If grep would answer it, cut it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A polished onboarding doc is a tax on every prompt you ever send. Pay it once at write time, not every turn.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;ProTip:&lt;/strong&gt; These instructions &lt;em&gt;should be&lt;/em&gt; a skill, because the agent only ever uses them when updating &lt;code&gt;AGENTS.md&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
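&lt;p&gt;A hypothetical before/after of the same rule, compressed per the list above:&lt;/p&gt;

```markdown
Before: "This document outlines our testing philosophy. Where possible, please
try to consider adding unit tests for any new functionality you introduce."

After: "New code requires unit tests. No exceptions."
```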




&lt;h2&gt;
  
  
  5. Skills aren't magical. Explicitly call them. 🪄
&lt;/h2&gt;

&lt;p&gt;Skills are designed to be auto-invoked—yes. In theory... or if the description matches the prompt closely enough and the planets align on a Tuesday. If you &lt;em&gt;NEED&lt;/em&gt; a skill used, name it explicitly in the prompt. Otherwise you're gambling.&lt;/p&gt;

&lt;p&gt;And please stop installing every skill from the marketplace just because the name sounded interesting. If you don't know the exact name of it already, delete it (with a backup). Use a skill builder to document the workflows you actually run. Leave the rest alone. You load trash in, you get trash out.&lt;/p&gt;




&lt;h2&gt;
  
  
  6. Install MCPs locally. Globals tax every prompt. 🪺
&lt;/h2&gt;

&lt;p&gt;Having 20 MCPs globally enabled is convenient for you and a context-pollution nightmare for your agent. Every connected MCP eats tokens just by existing.&lt;/p&gt;

&lt;p&gt;The question is simple: do I use this everywhere, &lt;em&gt;all the time&lt;/em&gt;? If yes, then global is accurate. If not—and the honest answer is usually not—then install it only in the five projects where it actually matters. Symlinks and absolute paths can handle the duplication. Just make sure the agent has access to the directory.&lt;/p&gt;
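&lt;p&gt;One way to scope an MCP to a single project (the filename and shape below follow Claude Code's project-level &lt;code&gt;.mcp.json&lt;/code&gt;; other clients use their own config files, so treat this as a sketch):&lt;/p&gt;

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```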




&lt;h2&gt;
  
  
  7. Don't review. Test. Then test again. 🩻
&lt;/h2&gt;

&lt;p&gt;I stopped reviewing AI-written code line by line. I was doing it badly, doing it slowly, and my eyes glazed over by the third file. The answer is to test it—extensively, often, and the moment it stops spinning. Not three days later when you open a PR.&lt;/p&gt;

&lt;p&gt;Unit. Integration. E2E. Performance. A11y (accessibility). Sonar. Semgrep. Et cetera. Then automate it all and run it with GitHub Actions. Make the model cover positive paths, negative paths, error paths, edge cases, and the acceptance criteria you defined back in the planning phase. (You did define them, right?) Explicitly add anything you uncover during testing, so it doesn't happen again.&lt;/p&gt;
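&lt;p&gt;The automation piece can be sketched as a single workflow (job layout and make targets are hypothetical; wire in your own suites):&lt;/p&gt;

```yaml
name: test-gauntlet
on: [push, pull_request]

jobs:
  gauntlet:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make install
      - run: make test   # unit + integration
      - run: make e2e    # Playwright or similar
      - run: make scan   # Sonar, Semgrep, a11y checks
```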

&lt;p&gt;Then cross-check across models. Have Codex review Claude. Have Copilot review Codex. Each model has different blind spots and different obsessions—running them against each other in controlled doses IS the review. One LLM is a single point of failure. Three are a quorum.&lt;/p&gt;




&lt;h2&gt;
  
  
  8. Ban the shortcuts. Temporary is never temporary. 🪤
&lt;/h2&gt;

&lt;p&gt;In my &lt;code&gt;AGENTS.md&lt;/code&gt; files for personal projects: backwards compatibility is strictly forbidden. Quick fixes are forbidden. Temporary solutions are not a viable path at any point. If the model wants to slap on a band-aid, it has to defend that choice. It can't, because my rule says it can't.&lt;/p&gt;

&lt;p&gt;Now keep in mind, this is a personal-project rule and is harsh for live production code. If you're running production daily with real users, then you should probably nix the "no backwards compatibility" rule. But for your own stuff? Stop letting the model leave you with technical debt it threw around your codebase like confetti.&lt;/p&gt;
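&lt;p&gt;A paraphrased sketch of what those rules look like inside an &lt;code&gt;AGENTS.md&lt;/code&gt; (wording is illustrative, not a copy of the actual file):&lt;/p&gt;

```markdown
## Non-negotiables
- Backwards compatibility is forbidden. Migrate the callers instead.
- Quick fixes are forbidden. If the change needs a TODO, it is not done.
- Temporary solutions are forbidden. Propose the real fix or stop and ask.
```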




&lt;h2&gt;
  
  
  9. Clear the context. Don't iterate on broken. 🪦
&lt;/h2&gt;

&lt;p&gt;If you've told the model the same thing three times and it's still wrong, assume your conversation is poisoned. Too much wrong direction is already baked in. Open a new chat. Start fresh with what you've learned.&lt;/p&gt;

&lt;p&gt;A clean context with a sharper prompt beats six more rounds of "NO! I already said..."&lt;/p&gt;




&lt;h2&gt;
  
  
  10. The lesson. It was never the agent. 🧭
&lt;/h2&gt;

&lt;p&gt;The agent is fine. The tooling is fine. What's &lt;em&gt;not fine&lt;/em&gt; is treating a multi-thousand-dollar reasoning system like a Magic 8-Ball—shaking it harder every time the answer comes back wrong, hoping round fifteen is the one. It won't be.&lt;/p&gt;

&lt;p&gt;Pick the right model. Plan first. One source of truth. Test ruthlessly. Cross-check across models. Forbid the shortcuts. Clean up your skill folder and your MCPs. Clear the context when things go sideways and start over.&lt;/p&gt;

&lt;p&gt;This setup? It works. Try it for yourself.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛡️ Behind the Curtain 🎭
&lt;/h2&gt;

&lt;p&gt;I wrote this post. Claude helped with the structure pass and the snark calibration so I'm not an accidental asshole. The opinions, the rules, and the &lt;code&gt;AGENTS.md&lt;/code&gt; philosophy are mine—hardened over a year of letting AI drive and ruthlessly analyzing all the crashes.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>productivity</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Unearthed—The Coal Mine Behind Every Light Switch</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Mon, 20 Apr 2026 06:26:52 +0000</pubDate>
      <link>https://forem.com/anchildress1/unearthed-the-coal-mine-behind-every-light-switch-234m</link>
      <guid>https://forem.com/anchildress1/unearthed-the-coal-mine-behind-every-light-switch-234m</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for &lt;a href="https://dev.to/challenges/weekend-2026-04-16"&gt;Weekend Challenge: Earth Day Edition&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.epa.gov/egrid/power-profiler" rel="noopener noreferrer"&gt;EPA's Power Profiler&lt;/a&gt; tells you your grid is 32% coal. &lt;a href="http://www.ilovemountains.org" rel="noopener noreferrer"&gt;iLoveMountains&lt;/a&gt; tells you mountaintop removal is destroying Appalachia. Neither one names the specific hole in the ground feeding your house, neither tells you which workers got hurt pulling the coal out of it, and neither puts that cost back on the consumer flipping the switch. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Unearthed&lt;/em&gt; does all three. It names the coal mine feeding your electric grid—the accident record, the operator, the county, the tons—and hands you a natural-language interface to the data behind it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc280exzv3q2uv1om592j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc280exzv3q2uv1om592j.png" alt="Screenshot Snowflake Cortex COMPLETE from Unearthed UI" width="800" height="1232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why This 🪨
&lt;/h3&gt;

&lt;p&gt;I miss being in the mountains, but the economy in that area is mostly nonexistent and work there is hard to come by, mostly because companies come into the area, mine everything they can, and then leave when the coal is gone. They leave behind strip jobs—where the land will quite literally never recover—black lung in the men and women who worked the mines for decades, and abandoned shafts that aren't exactly known for their structural integrity.&lt;/p&gt;

&lt;p&gt;Virginia produced 8.6 million short tons of coal in 2024. Southwest Virginia carries most of that history—roughly 100,000 acres of abandoned mine land, plus 245 legacy "GOB piles" (mining waste) leaching acid mine drainage into the creeks. Those same mines have public accident records going back to the 1980s—the Mine Safety and Health Administration (MSHA) documents the injuries, fatalities, and the narratives that go with them. Virginia Energy's hazard list for those old sites reads like a ghost tour: landslides, stream sedimentation, dangerous highwalls, subsidence, loss of water supply, open mine shafts, underground explosions, and underground fires.&lt;/p&gt;

&lt;p&gt;The men and women who work underground often work for decades, or until they can't anymore, and they rarely recover from the harsh conditions. Anyone who has spent significant time in that area will share my fatalist outlook: when work is a mile or more underground and you never really know if you're coming back up, that risk becomes a normal part of life.&lt;/p&gt;

&lt;p&gt;So, when you ask me to think about the planet, what I picture is the mostly empty coalfields where I grew up. Which got me thinking: what are we really doing with all of that coal that took &lt;strong&gt;over 300 million years&lt;/strong&gt; of pressure under the mountains of Appalachia to form? This is a follow-up to &lt;a href="https://dev.to/anchildress1/forged-between-coal-and-code-phi"&gt;&lt;em&gt;Carbon Trace&lt;/em&gt;&lt;/a&gt;—first you got the story, now you have the data to back it up. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Unearthed&lt;/em&gt; translates your specific energy grid anywhere in the US into the coal tons it takes to power it. The map shows you the closest mining facility responsible for powering your home—from small appliances to keeping the lights on. It is my hope that every person takes the time to understand the depth of love and family that goes into keeping the lights on all over the country, from the men and women who are still working underground to make it happen.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Product 🔧
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Unearthed&lt;/em&gt; is an emotional product first and a data product second. The data, managed by Snowflake, makes the emotions real, and a public-domain photograph—one of the many stripped mountaintops like the ones I grew up surrounded by—shows you the actual cost we pay to keep the lights on in our homes.&lt;/p&gt;

&lt;p&gt;You can use your current location or search by address. &lt;em&gt;Unearthed&lt;/em&gt; finds the power plant feeding your electricity and the coal mine feeding resources into that plant.&lt;/p&gt;

&lt;p&gt;Snowflake Cortex does the work. Cortex COMPLETE describes the mine in prose; the goal is to convey what this mine is actually doing to the mountain it's in, honestly—doom included. Then you can ask it the follow-up questions—is this mine still active, who else buys from this operator, how much did it produce last year. Cortex Analyst routes those through a hand-written semantic model and returns the answer (and the SQL, if you want to see it).&lt;/p&gt;

&lt;p&gt;Feel it first, then prove it with Snowflake.&lt;/p&gt;




&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Enter your address. In under a minute you'll know which mine powers your lights, who runs it, what county it's in, how many tons it shipped to your plant last year, and who got hurt pulling that coal out of the ground.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What that looks like for one real address—Carrollton, GA:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;James H Miller Jr power plant (AL) ← 5,064,233 tons ← Black Thunder mine (WY)&lt;br&gt;
Operator: Thunder Basin Coal Company LLC · Type: Surface&lt;br&gt;
MSHA accident record: 4 fatalities · 188 lost-time injuries · 8,763 days lost&lt;br&gt;
EPA emissions (since 2020, via Snowflake Marketplace): 125.9M tons CO₂ · 6K tons SO₂ · 39K tons NOₓ&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Live 🗺️
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Deployed&lt;/strong&gt;: &lt;a href="https://unearthed.anchildress1.dev" rel="noopener noreferrer"&gt;https://unearthed.anchildress1.dev&lt;/a&gt;—use this site to search by your current location&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://unearthed-288489184837.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Try it in about a minute:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Land on the Hero. Enter your address, or allow location.&lt;/li&gt;
&lt;li&gt;The page scrolls you into the results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PlantReveal&lt;/strong&gt;—the power plant actually feeding your grid.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MapSection&lt;/strong&gt;—animated SVG path traces mine → plant → your meter, with a pulse bead along the route and an EPA subregion label on your pin.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;H3Density&lt;/strong&gt;—hex grid of active vs abandoned mines feeding your plant, with a Cortex-written summary.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CortexChat&lt;/strong&gt;—ask the grid your own question. Chip or free-form.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ticker&lt;/strong&gt;—tons of coal pulled out of that mine since you started reading this page. Paced off the mine's own annual tonnage.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;The Ticker is why this app exists instead of being a spreadsheet.&lt;/strong&gt; It paces off the mine's 2024 tonnage from MSHA and counts up in real time. While you've been reading the post, the mine feeding your grid has pulled several more tons out of the ground.&lt;/p&gt;
&lt;/blockquote&gt;
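&lt;p&gt;The pacing math behind the Ticker is one line. A sketch (the function name and the leap-year-free constant are mine, not the app's):&lt;/p&gt;

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000; close enough for pacing

def tons_since(annual_tons: float, elapsed_seconds: float) -> float:
    """Spread a mine's annual MSHA tonnage evenly across the year."""
    return annual_tons / SECONDS_PER_YEAR * elapsed_seconds
```

&lt;p&gt;Black Thunder's 5,064,233 tons works out to roughly one ton every six seconds.&lt;/p&gt;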




&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Repo ⚙️
&lt;/h3&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/anchildress1" rel="noopener noreferrer"&gt;
        anchildress1
      &lt;/a&gt; / &lt;a href="https://github.com/anchildress1/unearthed" rel="noopener noreferrer"&gt;
        unearthed
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Show any US resident which coal mine supplies their local power plant. Federal data (MSHA + EIA) in Snowflake Cortex + Gemini. DEV Weekend Challenge 2026.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://repository-images.githubusercontent.com/1213154728/6ae1ef8d-dd2d-4a2b-b708-3353b783fbfa"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frepository-images.githubusercontent.com%2F1213154728%2F6ae1ef8d-dd2d-4a2b-b708-3353b783fbfa" alt="unearthed — coal, traced home"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Unearthed&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Tagline:&lt;/strong&gt; Find the coal mine under contract to your local power plant. Watch it die in real time. Ask it questions.&lt;/p&gt;
&lt;p&gt;Unearthed turns public federal data (MSHA + EIA + EPA) into a consumer-scale reveal: enter an address, see the specific coal mine feeding your grid, read memorial prose written from that mine's safety record, then ask natural-language questions about the contract. Built for the &lt;strong&gt;DEV Weekend Challenge 2026 — Earth Day Edition&lt;/strong&gt;, targeting the &lt;strong&gt;Snowflake Cortex&lt;/strong&gt; sponsor category.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cortex Analyst&lt;/strong&gt; drives natural-language Q&amp;amp;A (semantic model → SQL → real rows).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cortex Complete&lt;/strong&gt; (&lt;code&gt;llama3.3-70b&lt;/code&gt;) writes the mine-memorial prose and the 2–3 sentence summary under the national density map — both carry a degraded flag so template fallbacks never sit under a Cortex byline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;H3 hexbin geospatial&lt;/strong&gt; + &lt;strong&gt;Marketplace&lt;/strong&gt; (EPA Clean Air Markets) are used natively inside Snowflake — no extraction, no ETL away.&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;For challenge judges:&lt;/strong&gt;…&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/anchildress1/unearthed" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Worth checking out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/anchildress1/unearthed/blob/main/assets/semantic_model.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;assets/semantic_model.yaml&lt;/code&gt;&lt;/a&gt;—hand-written Analyst training with 6 tables, 5 relationships, and 8 verified natural-language→SQL queries&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/anchildress1/unearthed/blob/main/app/prose_client.py" rel="noopener noreferrer"&gt;&lt;code&gt;app/prose_client.py&lt;/code&gt;&lt;/a&gt;—the Cortex &lt;code&gt;COMPLETE&lt;/code&gt; prompt plus per-subregion caching so repeat views don't pay the LLM tax&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/anchildress1/unearthed/tree/main/assets/fallback" rel="noopener noreferrer"&gt;&lt;code&gt;assets/fallback/&lt;/code&gt;&lt;/a&gt;—19 pre-generated subregion fallbacks (one per US eGRID subregion) for when the warehouse is cold&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/anchildress1/unearthed/blob/main/frontend/src/lib/reveal.js" rel="noopener noreferrer"&gt;&lt;code&gt;frontend/src/lib/reveal.js&lt;/code&gt;&lt;/a&gt;—the scroll-driven section reveal that came out of the one-day rewrite&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;⚖️ This project is licensed under &lt;a href="https://github.com/anchildress1/unearthed?tab=License-1-ov-file" rel="noopener noreferrer"&gt;Polyform Shield 1.0.0&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Data Spine 🧬
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Six public-domain federal datasets—all from the Mine Safety and Health Administration (MSHA), Energy Information Administration (EIA), or Environmental Protection Agency (EPA):

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MSHA Mines&lt;/strong&gt;—every US mine: lat/lon, operator, county, status, type&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MSHA Quarterly Production&lt;/strong&gt;—tonnage per mine per quarter&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MSHA Accident Reports&lt;/strong&gt;—injuries, fatalities, narratives per mine&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EIA-923 Fuel Receipts (2024 annual, published 2025)&lt;/strong&gt;—the contract: source mine → destination plant → tons&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EIA-860 Plants (2024 annual, published 2025)&lt;/strong&gt;—plant locations, eGRID subregion, capacity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EPA emissions&lt;/strong&gt; (via Snowflake Marketplace)—CO₂, SO₂, NOₓ per plant since 2020&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Mine-level data joins on MSHA Mine ID; plant-level data joins through EIA plant ID.&lt;/li&gt;

&lt;li&gt;Two materialized tables sit on top of the raw joins—&lt;code&gt;MINE_PLANT_FOR_SUBREGION&lt;/code&gt; and &lt;code&gt;EMISSIONS_BY_PLANT&lt;/code&gt;—plus two views. Cortex queries hit the materialized layer, not the raw tables.&lt;/li&gt;

&lt;li&gt;H3 hex grid layered on top for active-vs-abandoned density visualization.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;EIA-923 is the one that makes this whole thing possible.&lt;/strong&gt; Every monthly coal shipment, mine-to-plant, back to the 1990s—the actual contracts that tie your power bill to a specific hole in the ground.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;MSHA Accident Reports are the other half of the story.&lt;/strong&gt; The human cost on the same mines showing up in the contracts.&lt;/li&gt;

&lt;li&gt;Both feed researchers and journalists just fine. What I didn't see was anything pointed at a regular person standing at their kitchen light switch—so I pointed you right at it.&lt;/li&gt;

&lt;/ul&gt;
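&lt;p&gt;The join path described above, as a sketch (table and column names here are illustrative; the real queries hit the materialized &lt;code&gt;MINE_PLANT_FOR_SUBREGION&lt;/code&gt; layer):&lt;/p&gt;

```sql
SELECT p.plant_name, m.mine_name, r.tons_received
FROM eia923_fuel_receipts r
JOIN msha_mines m ON m.msha_mine_id = r.msha_mine_id  -- mine side: MSHA Mine ID
JOIN eia860_plants p ON p.plant_id = r.plant_id       -- plant side: EIA plant ID
WHERE p.egrid_subregion = 'SRSO';
```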

&lt;h3&gt;
  
  
  Stack 🏗️
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Frontend: SvelteKit 2 + Svelte 5 runes + Vite, static adapter, pnpm. Scroll-driven section reveal.&lt;/li&gt;
&lt;li&gt;Map: Google Maps JavaScript API (dynamic &lt;code&gt;importLibrary&lt;/code&gt;) + Google Places API (New)&lt;/li&gt;
&lt;li&gt;Backend: Python 3.12 + FastAPI&lt;/li&gt;
&lt;li&gt;Deployment: Google Cloud Run&lt;/li&gt;
&lt;li&gt;Data platform: Snowflake—federal ingest + Snowflake Marketplace (EPA emissions); hand-written semantic model YAML for Analyst&lt;/li&gt;
&lt;li&gt;AI: Snowflake Cortex—&lt;code&gt;COMPLETE&lt;/code&gt; (&lt;code&gt;llama3.3-70b&lt;/code&gt;) for mine prose + H3-density narrative; Analyst for NL Q&amp;amp;A&lt;/li&gt;
&lt;li&gt;Auth: Snowflake key-pair; private key in GCP Secret Manager&lt;/li&gt;
&lt;li&gt;Testing: pytest (unit/integration/perf) · vitest · Playwright · Lighthouse CI with &lt;code&gt;a11y=1.0&lt;/code&gt;, &lt;code&gt;SEO=1.0&lt;/code&gt;, &lt;code&gt;BP≥0.98&lt;/code&gt;, &lt;code&gt;perf≥0.90&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Stateless. No accounts. No login.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  One Day Left 🎨
&lt;/h3&gt;

&lt;p&gt;The UI you see is a late-stage rewrite, courtesy of Claude Design dropping partway through this build. I fed it my first iteration, it came back with a much better idea than what I had, and with one day left on the clock I decided it was absolutely worth the cost to throw out the old one and build the new one.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prize Categories
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Best Use of Snowflake ❄️
&lt;/h3&gt;

&lt;p&gt;Snowflake Cortex shows up in three different places in this app, and in each one the LLM call just lives inside the warehouse as a SQL function—&lt;code&gt;llama3.3-70b&lt;/code&gt; running &lt;code&gt;COMPLETE&lt;/code&gt; next to the rest of your &lt;code&gt;SELECT&lt;/code&gt; statements. I'd seen you could hook an LLM up to SQL before, but not this specific setup, where the model is another thing you can &lt;code&gt;SELECT&lt;/code&gt; from.&lt;/p&gt;

&lt;p&gt;Verdict: without Cortex, this app is three services glued together with secrets. With it, it's three &lt;code&gt;SELECT&lt;/code&gt; statements from a warehouse I set up in a weekend.&lt;/p&gt;

&lt;p&gt;It was also my first time touching Snowflake, ever—the whole thing runs on the trial credits, and AI did a lot of the translating while I did the plugging-in. I came in with six federal datasets and the vague idea that a coal mine ought to be able to talk back to you, and Snowflake is what made that second part real instead of a pitch deck.&lt;/p&gt;

&lt;h4&gt;
  
  
  Cortex Writes the Mine
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;SNOWFLAKE.CORTEX.COMPLETE('llama3.3-70b', …)&lt;/code&gt; generates the mine prose per subregion—3-5 sentences, named operator, named county, named tonnage, and the accident history folded in. Cached per subregion; no per-request LLM cost on repeat views.&lt;/p&gt;

&lt;p&gt;Prompt (from &lt;code&gt;app/prose_client.py&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{plant_name} ({plant_operator}) received {tons} tons of coal in {tons_year} from {mine_name}, a {mine_type} mine ({mine_operator}) in {mine_county} County, {mine_state}. Safety record: {fatalities} deaths, {injuries} lost-time injuries, {days_lost} days lost.

Write one paragraph, 3-5 sentences: plant → mine → human cost → the reader's demand. Omit any zero stat. No jargon, no hedging, no markdown.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
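&lt;p&gt;The call shape itself, for the curious (a sketch; the real prompt assembly and caching live in &lt;code&gt;app/prose_client.py&lt;/code&gt;):&lt;/p&gt;

```sql
SELECT SNOWFLAKE.CORTEX.COMPLETE(
  'llama3.3-70b',
  :prompt  -- the filled-in template above, bound by the backend
) AS mine_prose;
```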

&lt;h4&gt;
  
  
  Cortex Writes the Density Narrative 🎙️
&lt;/h4&gt;

&lt;p&gt;Same &lt;code&gt;COMPLETE&lt;/code&gt; call, different prompt, on the H3 hex grid of active vs abandoned mines feeding your plant. Fires from &lt;code&gt;GET /h3-density&lt;/code&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  Cortex Analyst Handles the Follow-ups 📊
&lt;/h4&gt;

&lt;p&gt;Hand-written semantic model YAML over the federal-data schema. Backs the "Ask your grid" input. Ask about accidents, production, who else buys from this operator—Analyst writes the SQL, runs it, and returns the answer. Chip questions surface the obvious paths; the free-text input handles the rest.&lt;/p&gt;

&lt;p&gt;Every Cortex-generated SQL is validated as read-only and single-statement, then executed through &lt;code&gt;UNEARTHED_READONLY_ROLE&lt;/code&gt; with &lt;code&gt;STATEMENT_TIMEOUT_IN_SECONDS=10&lt;/code&gt; and a 500-row cap. Analyst can read the warehouse. It cannot write to it.&lt;/p&gt;
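&lt;p&gt;A sketch of what that read-only gate can look like (the deny-list and names are my illustration, not the app's actual validator):&lt;/p&gt;

```python
import re

# Statements must start with a read verb and contain no write verbs.
READ_VERBS = ("select", "with", "show", "describe")
WRITE_VERBS = re.compile(
    r"\b(insert|update|delete|merge|create|alter|drop|grant|truncate|call)\b",
    re.IGNORECASE,
)

def is_safe_readonly(sql: str) -> bool:
    """True only for a single read-only statement."""
    statements = [s for s in sql.strip().rstrip(";").split(";") if s.strip()]
    if len(statements) != 1:
        return False
    stmt = statements[0].strip()
    return stmt.lower().startswith(READ_VERBS) and WRITE_VERBS.search(stmt) is None
```

&lt;p&gt;The timeout and row cap still belong on the Snowflake role itself; a regex alone is a seatbelt, not a wall.&lt;/p&gt;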

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foo1kqbubudozz4hhq19b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foo1kqbubudozz4hhq19b.png" alt="Cortex Analyst—free-text question returns an MSHA table naming Black Thunder's 2024 ignition event" width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosj41cwk924sjo3pkg72.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosj41cwk924sjo3pkg72.png" alt="Cortex Analyst " width="800" height="1118"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtt2js3mgw3vwlkhi4is.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtt2js3mgw3vwlkhi4is.png" alt="Cortex Analyst " width="800" height="1226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The semantic model is hand-written—every dimension, synonym, filter, and verified query. An excerpt from &lt;code&gt;assets/semantic_model.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unearthed_coal_mines&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="s"&gt;Federal coal mine and power plant data from MSHA and EIA.&lt;/span&gt;

&lt;span class="na"&gt;tables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MSHA_MINES&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Registry of all US coal mines from MSHA.&lt;/span&gt;
    &lt;span class="na"&gt;dimensions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mine_operator&lt;/span&gt;
        &lt;span class="na"&gt;synonyms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;company&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;mining company&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Current operator of the mine&lt;/span&gt;
        &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TRIM(CURRENT_OPERATOR_NAME)&lt;/span&gt;
        &lt;span class="na"&gt;sample_values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Peabody Powder River Mining LLC&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Arch Resources WY LLC&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Murray American Energy Inc&lt;/span&gt;
    &lt;span class="c1"&gt;# ... full schema in repo&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MSHA_ACCIDENTS&lt;/span&gt;
    &lt;span class="na"&gt;measures&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fatality_count&lt;/span&gt;
        &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SUM(CASE WHEN TRIM(DEGREE_INJURY) = 'FATALITY' THEN 1 ELSE 0 END)&lt;/span&gt;

&lt;span class="na"&gt;verified_queries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fatalities_at_mine&lt;/span&gt;
    &lt;span class="na"&gt;question&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;How&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;many&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;fatalities&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;have&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;occurred&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;at&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Upper&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Big&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Branch&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Mine?"&lt;/span&gt;
    &lt;span class="na"&gt;sql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="s"&gt;SELECT SUM(CASE WHEN TRIM(a.DEGREE_INJURY) = 'FATALITY' THEN 1 ELSE 0 END)&lt;/span&gt;
      &lt;span class="s"&gt;FROM UNEARTHED_DB.RAW.MSHA_ACCIDENTS a&lt;/span&gt;
      &lt;span class="s"&gt;JOIN UNEARTHED_DB.RAW.MSHA_MINES m ON a.MINE_ID = m.MINE_ID&lt;/span&gt;
      &lt;span class="s"&gt;WHERE TRIM(m.CURRENT_MINE_NAME) ILIKE 'Upper Big Branch%'&lt;/span&gt;
&lt;span class="c1"&gt;# 7 more verified_queries in the full file&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;💡 The full &lt;code&gt;semantic_model.yaml&lt;/code&gt; can be found in &lt;a href="https://github.com/anchildress1/unearthed/blob/main/assets/semantic_model.yaml" rel="noopener noreferrer"&gt;the repo&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
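&lt;p&gt;Wiring a question to that semantic model is one small request. Here's a sketch of the payload shape I believe Cortex Analyst's REST endpoint expects (verify against Snowflake's current docs before copying; the stage path below is a made-up example, not the project's real one):&lt;/p&gt;

```python
# Sketch of a Cortex Analyst request body, as I understand the REST API.
# Check Snowflake's docs for the current endpoint shape.
def analyst_payload(question: str, model_stage_path: str) -> dict:
    return {
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": question}]}
        ],
        "semantic_model_file": model_stage_path,
    }

payload = analyst_payload(
    "How many fatalities have occurred at Upper Big Branch Mine?",
    "@UNEARTHED_DB.PUBLIC.ASSETS/semantic_model.yaml",  # hypothetical stage path
)
# POST this to /api/v2/cortex/analyst/message with an auth header
```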
&lt;h4&gt;
  
  
  Snowflake Marketplace
&lt;/h4&gt;

&lt;p&gt;The Marketplace is the one I'd put on a billboard. MSHA and EIA I loaded myself, which was a weekend of writing scripts and swearing at CSV encodings. EPA emissions—CO₂, SO₂, NOx per plant since 2020—I clicked a button on the Marketplace and the data was just there, ready to join on plant ID. Cortex plus Marketplace is what moves this from &lt;em&gt;data storage&lt;/em&gt; to &lt;em&gt;data product&lt;/em&gt;—don't do what I did for the other datasets, click this instead.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2608pejbbqirtckg35sf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2608pejbbqirtckg35sf.png" alt="Screenshot Snowflake Marketplace" width="800" height="127"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Cost Dashboards and Cortex Code
&lt;/h4&gt;

&lt;p&gt;This view is optimized for cost over performance, but I used it to troubleshoot slow queries and figure out where to spend my time to actually improve the experience for the user. I'm far from an expert on Snowflake's monitoring surface, but this dashboard and the ones next to it were the difference between the 40+ second queries I started with and something that finishes in time for the scroll to matter.&lt;/p&gt;
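&lt;p&gt;For anyone chasing their own 40-second queries, the &lt;code&gt;ACCOUNT_USAGE&lt;/code&gt; views are also queryable directly. Something like this (a sketch, not the dashboard's actual query) surfaces the worst offenders from the last week:&lt;/p&gt;

```python
# Slowest statements over the past week. Column names are from Snowflake's
# documented QUERY_HISTORY view; adjust the filters to taste.
SLOWEST_QUERIES_SQL = """
SELECT query_text,
       warehouse_name,
       total_elapsed_time / 1000 AS elapsed_seconds
FROM snowflake.account_usage.query_history
WHERE start_time > DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY total_elapsed_time DESC
LIMIT 20
"""
```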

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8oxbq8vz7ltmy6chyr3j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8oxbq8vz7ltmy6chyr3j.png" alt="Screenshot Snowflake Cortex Management Dashboard" width="800" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cortex Code picked up the last of the excessive queries I had sitting around that Claude hadn't already caught. It behaves noticeably better than the MCP version I leaned on as my main driver for this build, but I was scared to hand my UI to an unfamiliar Streamlit-in-Snowflake AI on a weekend deadline. Definitely something I want to experiment with next time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxh0bojqui28xj7imhlt8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxh0bojqui28xj7imhlt8.png" alt="Screenshot Snowflake Cortex Code Assistant identifying problems" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ad2ftts1o9ulgq41imi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ad2ftts1o9ulgq41imi.png" alt="Screenshot Snowflake Cortex Code Assistant fixing problems" width="800" height="830"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Closing 💜
&lt;/h2&gt;

&lt;p&gt;The cost of mining coal from miles underground has always been paid by the miners and the mountains—rarely by the companies that come in, take what they can, and leave with the profit. &lt;em&gt;Unearthed&lt;/em&gt; exists to put that cost in front of the person flipping the light switch, in a form they can interrogate without needing a degree in energy policy. Enter your address and within a minute you'll have the name of the mine, the operator, the county, and the people who got hurt keeping your lights on. Ask the grid a follow-up question in plain English and Cortex writes the SQL for you. Snowflake backs up every claim with data seeded from public sources. This Earth Day, remember the thousands of miners who went underground so your lights could come on, and the mountains that gave life to make it happen.&lt;/p&gt;




&lt;div class="ltag__user ltag__user__id__3224358"&gt;
    &lt;a href="/anchildress1" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=150,height=150,fit=cover,gravity=auto,format=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3224358%2F7f675c78-6aa0-466a-a5a7-c3e35440d53a.png" alt="anchildress1 image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/anchildress1"&gt;Ashley Childress&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/anchildress1"&gt;Distributed backend specialist. Perfectly happy playing second fiddle—it means I get to chase fun ideas, dodge meetings, and break things no one told me to touch, all without anyone questioning it. 😇&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;






&lt;h2&gt;
  
  
  Sources 📚
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://energy.virginia.gov/coal/mined-land-repurposing/abandoned-mine-land.shtml" rel="noopener noreferrer"&gt;Virginia Department of Energy—Abandoned Mine Land Program&lt;/a&gt;—100,000 acres of abandoned mine land + hazard list (landslides, highwalls, subsidence, shafts, fires, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.eia.gov/coal/annual/pdf/acr.pdf" rel="noopener noreferrer"&gt;EIA—Annual Coal Report 2024 (published Nov 2025)&lt;/a&gt;—Virginia 2024 production: 8.6 million short tons&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://appalachian.scholasticahq.com/article/73814" rel="noopener noreferrer"&gt;Appalachian Journal of Law—Addressing Virginia's Legacy GOB Piles&lt;/a&gt;—245 legacy GOB piles in Southwest Virginia&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.nps.gov/articles/000/pennsylvanian-period.htm" rel="noopener noreferrer"&gt;NPS—Pennsylvanian Period (323.2 to 298.9 MYA)&lt;/a&gt;—"over 300 million years" coal-formation window&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.epa.gov/egrid/power-profiler" rel="noopener noreferrer"&gt;EPA Power Profiler&lt;/a&gt;—closest analogue I found: enter zip, see fuel mix. Stops at percentages.&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://www.ilovemountains.org" rel="noopener noreferrer"&gt;iLoveMountains.org&lt;/a&gt;—closest emotional analogue: zip-to-mountaintop-removal health correlation. Qualitative, Appalachia-specific, no mine-to-plant data.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  🛡️ Unearthed One Draft at a Time
&lt;/h3&gt;

&lt;p&gt;This post was written by me with collaborative editing from Claude—who typed most of it, got told it was wrong roughly every three paragraphs, and had every TED-talk rewrite cut before it hit the page. I gave it my voice; it tried to give me something polished; we settled on mine. No AI was harmed in the making of this post, but Claude has now been told to stop editing out my voice enough times to consider filing a formal grievance.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>weekendchallenge</category>
    </item>
    <item>
      <title>Meet Hotfix—The Dragon Your Legacy Code Deserves</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Mon, 13 Apr 2026 04:44:55 +0000</pubDate>
      <link>https://forem.com/anchildress1/meet-hotfix-the-dragon-your-legacy-code-deserves-4141</link>
      <guid>https://forem.com/anchildress1/meet-hotfix-the-dragon-your-legacy-code-deserves-4141</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aprilfools-2026"&gt;DEV April Fools Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt; &lt;br&gt;
The permanent solution to every developer headache: thermal decommissioning.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upload a screenshot → &lt;em&gt;Hotfix&lt;/em&gt; roasts it&lt;/li&gt;
&lt;li&gt;Gemini generates structured incident reports&lt;/li&gt;
&lt;li&gt;Community votes via escalation system + shares&lt;/li&gt;
&lt;li&gt;Top incidents become global P0 disasters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Hotfix&lt;/em&gt; files serious incident reports. It does not understand that it is completely unhinged. That's what makes it so funny.&lt;/p&gt;

&lt;p&gt;Here’s a real incident report generated via live capture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F990bo4u6qqy7h8ppwc6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F990bo4u6qqy7h8ppwc6z.png" alt="Screenshot Legacy Smelter P0 example" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  I Am the Problem 🏚️
&lt;/h3&gt;

&lt;p&gt;I am the subject matter expert (SME) for several legacy applications at work, and every single time somebody stirs dust in the server room—since I can't come up with any other viable explanation—something breaks. After dealing with this nonsense in one form or another for well over a solid year, I announced &lt;strong&gt;the permanent fix: smelting.&lt;/strong&gt; I am fully confident that smelting those legacy servers will resolve my ongoing issues instantaneously.&lt;/p&gt;

&lt;p&gt;The one thing I've been lacking in my fantastical smelting solution is a dragon. Nobody seemed particularly invested in how serious I am about problem solving, because so far not one person has offered me a dragon to get the job done. So I built my own—and I'm sharing it, because legacy code suffering is not a solo experience. Take a screenshot and let the Legacy Smelter handle the problem for you.&lt;/p&gt;
&lt;h3&gt;
  
  
  Asset Designation: &lt;em&gt;Hotfix&lt;/em&gt; 🪧
&lt;/h3&gt;

&lt;p&gt;Meet &lt;em&gt;Hotfix&lt;/em&gt;—and yes, I named the dragon &lt;em&gt;Hotfix&lt;/em&gt; because that is hilarious. Anything else would have been a giant missed opportunity for dragon naming. This app is more than a dragon, though—it's a whole incident management system. You can upload any screenshot—problematic code, poor UI designs, bugs that make you want to scream, or a selfie (if you can handle a little roasting)—and &lt;em&gt;Hotfix&lt;/em&gt; will smelt the problem and give you a detailed incident report memorializing the true fix, which is melting it into oblivion.&lt;/p&gt;

&lt;p&gt;The incident reports are added to a global manifest where you can share with friends who would appreciate your solution to the problem. Links are configured to unfurl properly on most platforms, including Slack and Discord. Sharing an incident is considered a containment breach by the system, which increases the overall Impact for that incident (wait seven seconds between shares to avoid rate limits). You can also escalate your favorite incidents, which carries even more weight. The top three global incidents with the highest impact rating are displayed on the main page as P0 priority.&lt;/p&gt;
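&lt;p&gt;The impact math is easy to model. A hypothetical scoring function consistent with the description above (the real weights live in the repo; these numbers are mine):&lt;/p&gt;

```python
# Hypothetical impact scoring: escalations outweigh shares.
SHARE_WEIGHT = 1
ESCALATION_WEIGHT = 3  # "carries even more weight" -- exact value assumed

def impact(shares: int, escalations: int) -> int:
    return shares * SHARE_WEIGHT + escalations * ESCALATION_WEIGHT

def top_p0(incidents: list[dict], n: int = 3) -> list[dict]:
    """Top-N incidents by impact, shown on the main page as P0."""
    return sorted(
        incidents,
        key=lambda i: impact(i["shares"], i["escalations"]),
        reverse=True,
    )[:n]
```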

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Operational Notice:&lt;/strong&gt; Submitted images are processed by Gemini's paid API. Google is not using your uploaded images for training—they're only retained 55 days for abuse monitoring. Do not submit assets you do not own. Do not submit from a company device.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Live at &lt;strong&gt;&lt;a href="https://hotfix.anchildress1.dev" rel="noopener noreferrer"&gt;hotfix.anchildress1.dev&lt;/a&gt;&lt;/strong&gt;—head to the live site for camera uploads, since iframes don't have camera permissions.&lt;/p&gt;


&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://legacy-smelter-288489184837.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Try to Break It ⛓️‍💥
&lt;/h3&gt;

&lt;p&gt;Upload:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The worst UI you've ever seen&lt;/li&gt;
&lt;li&gt;Your most cursed code snippet&lt;/li&gt;
&lt;li&gt;A selfie (if you think you're emotionally prepared)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Share it&lt;/li&gt;
&lt;li&gt;Escalate it&lt;/li&gt;
&lt;li&gt;Win a sanction&lt;/li&gt;
&lt;li&gt;Try to get into the global P0 leaderboard&lt;/li&gt;
&lt;li&gt;Paste your output in the comments—it counts as a containment breach!&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;The repo includes the full React frontend, Express server, Cloud Functions for sanction judging, Firestore rules, and a docs/ folder with the design decisions and prompt files referenced in this post.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/anchildress1" rel="noopener noreferrer"&gt;
        anchildress1
      &lt;/a&gt; / &lt;a href="https://github.com/anchildress1/legacy-smelter" rel="noopener noreferrer"&gt;
        legacy-smelter
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A hardware-accelerated mobile web app that visually melts user-uploaded legacy tech into a puddle of slag. Built for the DEV April Fools Challenge.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div&gt;
&lt;a rel="noopener noreferrer nofollow" href="https://repository-images.githubusercontent.com/1201373945/f2802097-2afe-4c31-848f-a94cc13ca0b1"&gt;&lt;img width="1200" height="475" alt="Legacy Smelter" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Frepository-images.githubusercontent.com%2F1201373945%2Ff2802097-2afe-4c31-848f-a94cc13ca0b1"&gt;&lt;/a&gt;
&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Legacy Smelter&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;A satirical incident reporting system for condemned digital artifacts. Upload an image. Hotfix processes it. Output: molten slag.&lt;/p&gt;
&lt;p&gt;The system analyzes uploaded images using Gemini Vision and files a formal postmortem — classification, severity, failure origin, disposition, archive note — before thermally decommissioning the artifact via dragon-based remediation.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Features&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gemini Vision analysis&lt;/strong&gt; — 16-field structured incident schema delivered via Gemini's constrained JSON mode&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hotfix animation&lt;/strong&gt; — PixiJS dragon idle, fly-in, and smelt sequence with audio&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Incident postmortem&lt;/strong&gt; — full structured report overlay with social share (X, Bluesky, Reddit, LinkedIn) plus copy-link&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global incident manifest&lt;/strong&gt; — real-time Firestore feed of all thermally decommissioned artifacts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Decommission index&lt;/strong&gt; — live cumulative pixel count across all incidents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Camera support&lt;/strong&gt; — deploy field scanner via device camera or file upload&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Stack&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Framework&lt;/td&gt;
&lt;td&gt;React 19 + TypeScript + Vite&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Animation&lt;/td&gt;
&lt;td&gt;PixiJS 8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI&lt;/td&gt;
&lt;td&gt;Gemini (&lt;code&gt;gemini-3.1-flash-lite-preview&lt;/code&gt;) via &lt;code&gt;@google/genai&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Database&lt;/td&gt;
&lt;td&gt;Firebase Firestore&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;…&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/anchildress1/legacy-smelter" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;⚖️ This project is licensed under &lt;a href="https://github.com/anchildress1/legacy-smelter/tree/v2.0.0?tab=License-1-ov-file" rel="noopener noreferrer"&gt;Polyform Shield 1.0.0&lt;/a&gt; and is released for this challenge as &lt;a href="https://github.com/anchildress1/legacy-smelter/tree/v2.0.0?tab=readme-ov-file" rel="noopener noreferrer"&gt;v2.0.0&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Dragon 🥚
&lt;/h3&gt;

&lt;p&gt;Getting the animation right was the hardest part of the entire build, and I went into it knowing almost nothing about sprite animation beyond whether something looked right or not. I found the dragon sprites on &lt;a href="https://gamedevmarket.net" rel="noopener noreferrer"&gt;GameDevMarket.net&lt;/a&gt; and figured AI could handle the rest—which was optimistic of me, because AI is decidedly rough at producing smooth animation on the first try or the fifth. I picked up bits and pieces along the way, spent a humbling amount of time on what probably should have been a simpler problem, and I am still nowhere near an expert—but I am rather pleased with how &lt;em&gt;Hotfix&lt;/em&gt; turned out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcfql1crxhfsgfvnru5m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcfql1crxhfsgfvnru5m.png" alt="Screenshot of Hotfix—the Legacy Smelter dragon" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Stack 🧰
&lt;/h3&gt;

&lt;p&gt;The front end is React 19 and TypeScript on Vite, Tailwind v4 for styling, PixiJS 8 for the dragon animation because Canvas 2D was never going to give me the smoothness I needed, and Howler.js so the smelt actually feels like something is happening. On the backend, Firestore handles everything community-facing, Firebase Auth gates the upload endpoint, and a small Express server keeps my Gemini API key off the client.&lt;/p&gt;

&lt;p&gt;Gemini runs through the &lt;code&gt;@google/genai&lt;/code&gt; SDK with two models doing two different jobs. Sanction judging fires as a Cloud Functions v2 &lt;code&gt;onDocumentCreated&lt;/code&gt; trigger, claimed inside a Firestore transaction so concurrent invocations can't overlap.&lt;/p&gt;
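&lt;p&gt;The claim-inside-a-transaction bit is the part worth stealing. Firestore provides the atomicity server-side; this in-memory Python sketch (a stand-in, not the actual Cloud Function) shows the compare-and-set semantics that keep two invocations from judging the same batch:&lt;/p&gt;

```python
import threading

# In-memory illustration of the "claim inside a transaction" pattern.
# In the real app a Firestore transaction plays the role of this lock.
_lock = threading.Lock()
_docs: dict[str, dict] = {}

def try_claim(doc_id: str) -> bool:
    """Atomically claim a document; only one concurrent caller wins."""
    with _lock:
        doc = _docs.setdefault(doc_id, {"claimed": False})
        if doc["claimed"]:
            return False   # another invocation already owns it
        doc["claimed"] = True
        return True
```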

&lt;p&gt;Deployment is Cloud Run, primarily because I like having the embeds available in these posts. My usual deployment pipeline runs locally for this build instead of inside GHA—I already have the setup wired into Claude to build this flow for every app I create, so input from me is minimal.&lt;/p&gt;

&lt;p&gt;The downside is that Cloud Run is not the stack I would have picked for this application had AI Studio not wired it that way from the beginning. Cloud Run is expensive, cold starts can be problematic for performance, and I didn't want it always-on just to run background functions that I never ended up scheduling anyway. But that's how Cloud Functions got involved and turned this toy project into a three-server special in GCP.&lt;/p&gt;

&lt;h3&gt;
  
  
  Global Smelt Accumulation 🌋
&lt;/h3&gt;

&lt;p&gt;Every image uploaded is converted into a total pixel count and added to a running Firestore counter. It's displayed at the top of every page and is a completely useless metric that I enjoy seeing—which makes it a perfectly valid use case.&lt;/p&gt;
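&lt;p&gt;Mechanically it's one atomic increment per upload. A hedged sketch (the collection and field names are assumptions, not the repo's):&lt;/p&gt;

```python
# Each upload's pixel count is added to one global running counter.
# An atomic server-side Increment keeps concurrent uploads from racing.
def pixel_count(width: int, height: int) -> int:
    return width * height

# In the real app, roughly (names assumed):
#   from google.cloud import firestore
#   db.collection("stats").document("global").update(
#       {"total_pixels": firestore.Increment(pixel_count(w, h))}
#   )
```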

&lt;h3&gt;
  
  
  Vibing a Solution 🫠
&lt;/h3&gt;

&lt;p&gt;I was convinced I didn't need to write tests for a toy project I didn't expect to last, and I failed miserably at that conviction. I ended up using Vitest with Testing Library and the Firebase emulator, because fighting AI to stop making the same mistakes gets expensive much faster than just writing a test suite. The majority of my time was spent validating output and complaining, across Claude, ChatGPT, and Gemini, that the UI was still not finished. I think the four of us together somehow managed to not embarrass me, which I have categorized as a win.&lt;/p&gt;

&lt;h3&gt;
  
  
  Credits 🪙
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Hotfix&lt;/em&gt; owes his entire existence to the artists whose work makes up the core of the experience. All assets sourced from &lt;a href="https://gamedevmarket.net" rel="noopener noreferrer"&gt;GameDevMarket.net&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dragon animation sprites&lt;/strong&gt; — &lt;a href="https://www.gamedevmarket.net/asset/animated-dragon" rel="noopener noreferrer"&gt;Animated Dragon&lt;/a&gt; by RobertBrooks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slag/liquid effects&lt;/strong&gt; — &lt;a href="https://www.gamedevmarket.net/asset/flowing-gooliquid-5653" rel="noopener noreferrer"&gt;Flowing Goo-Liquid&lt;/a&gt; by RobertBrooks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sound effects&lt;/strong&gt; — &lt;a href="https://www.gamedevmarket.net/asset/dark-fantasy-studio-dragon" rel="noopener noreferrer"&gt;Dark Fantasy Studio – Dragon&lt;/a&gt; by DFS (Nicolas Jeudy)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Prize Category
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Best Google AI Usage 🏅
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What Gemini Powers ⚙️
&lt;/h4&gt;

&lt;p&gt;Two Gemini models power the live experience. Every upload is processed by &lt;code&gt;gemini-3.1-flash-lite-preview&lt;/code&gt;, which:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;identifies the subject and draws a bounding box around the primary artifact&lt;/li&gt;
&lt;li&gt;extracts five hex colors as a chromatic profile&lt;/li&gt;
&lt;li&gt;generates a 15-field structured incident report under strict voice and word-count constraints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Hotfix&lt;/em&gt; then uses that bounding box to smelt the portion of the image Gemini actually flagged.&lt;/p&gt;
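&lt;p&gt;The constrained JSON mode is what keeps the report parseable every time. A Python-flavored sketch of the same idea (the server is actually Node via &lt;code&gt;@google/genai&lt;/code&gt;, and these field names are illustrative, not the real schema):&lt;/p&gt;

```python
# A response schema forces the model to return structured JSON instead of
# free text. Field names here are illustrative placeholders.
def incident_schema() -> dict:
    return {
        "type": "object",
        "properties": {
            "classification": {"type": "string"},
            "severity": {"type": "string", "enum": ["P0", "P1", "P2"]},
            "archive_note": {"type": "string"},
        },
        "required": ["classification", "severity", "archive_note"],
    }

# Rough Python equivalent of the server call (verify SDK signatures):
#   from google import genai
#   from google.genai import types
#   client = genai.Client()
#   resp = client.models.generate_content(
#       model="gemini-3.1-flash-lite-preview",
#       contents=[image_part, prompt_text],
#       config=types.GenerateContentConfig(
#           response_mime_type="application/json",
#           response_schema=incident_schema(),
#       ),
#   )
```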

&lt;p&gt;&lt;code&gt;gemini-3-flash-preview&lt;/code&gt; handles sanction selection on a separate path, grading batches of five incidents based on comedic scoring rules—more on that below.&lt;/p&gt;

&lt;p&gt;The voice was a complete accident. The first pass at the prompt was a plain "read the image and return a structured report" instruction, which worked fine right up until I tried to trick the system with a selfie just to see what would happen. It roasted me. Thoroughly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1y4gng596hti77k6ppu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1y4gng596hti77k6ppu.png" alt="Screenshot of Legacy Smelter Postmortem Incident Report—Archive Note" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I spent the rest of the build optimizing for that exact energy—an enterprise postmortem entirely convinced of its own importance. The voice rules at the top of the prompt file are the load-bearing ones:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Voice&lt;/span&gt;

Enterprise incident report. Postmortem tone: dry, precise, operational, concise. Accusatory toward the artifact and its history.

The system treats absurd subjects as routine incidents. It is filing an incident report. It does not know it is funny.

&lt;span class="gu"&gt;## Comedy mechanics&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; Specificity over generality. "Also, the green paint" is funny. Find the one weird concrete thing in the image and call it out.
&lt;span class="p"&gt;-&lt;/span&gt; The deadpan afterthought. End a technical assessment with a flat, too-honest trailing observation.
&lt;span class="p"&gt;-&lt;/span&gt; Commit beyond the point of reason. Start institutional, then dramatically escalate without changing tone.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;"The system does not know it is funny" is the whole design philosophy in one sentence. That's the entire premise in a nutshell.&lt;/p&gt;

&lt;p&gt;Every one of the 15 returned fields has its own word-count cap and voice constraint baked into the prompt—without them, Gemini defaults to generic corporate language and the bit falls apart. The full prompt file is in the repo in &lt;a href="https://github.com/anchildress1/legacy-smelter/blob/v2.0.0/server.js#L144" rel="noopener noreferrer"&gt;&lt;code&gt;server.js&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  The Sanction Logic 📛
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;gemini-3-flash-preview&lt;/code&gt; handles the sanction path—Flash Lite falls apart on comparison judging across a batch, and Pro is overkill that actually loses some of the unhinged quality Flash is known for.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftoolo42suf5v5lrb23w6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftoolo42suf5v5lrb23w6.png" alt="Screenshot of a Gemini sanction" width="800" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The original image is never stored, so Gemini can't grade accuracy against the source—it can only judge the writing. The first draft used strict grading criteria and kept picking the most technically accurate report instead of the funniest. Version two mostly lets Gemini run wild, and it picks the funny one now. The guidelines that survived:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Signals that a record may deserve sanction:
&lt;span class="p"&gt;
-&lt;/span&gt; disproportionate institutional seriousness applied to an ordinary software or workplace failure
&lt;span class="p"&gt;-&lt;/span&gt; precise, concrete details that make the situation feel embarrassingly real
&lt;span class="p"&gt;-&lt;/span&gt; escalation from a small defect, design choice, or human workaround into procedural absurdity
&lt;span class="p"&gt;-&lt;/span&gt; wording that implies everyone involved has accepted something obviously unreasonable as normal
&lt;span class="p"&gt;-&lt;/span&gt; dry phrasing that lands harder the straighter it is read

Do not reward a record merely for being:
&lt;span class="p"&gt;
-&lt;/span&gt; wordy
&lt;span class="p"&gt;-&lt;/span&gt; random
&lt;span class="p"&gt;-&lt;/span&gt; technically dense
&lt;span class="p"&gt;-&lt;/span&gt; surreal without a clear comedic turn
&lt;span class="p"&gt;-&lt;/span&gt; mildly clever but interchangeable with the others
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The full sanction prompt file is in the repo in &lt;a href="https://github.com/anchildress1/legacy-smelter/blob/v2.0.0/functions/sanction.js#L76" rel="noopener noreferrer"&gt;&lt;code&gt;functions/sanction.js&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  Building with Google AI 🧪
&lt;/h4&gt;

&lt;p&gt;I touched nearly every Google AI tool during this build. Gemini Chat for brainstorming and prompt iteration, but it couldn't hold context long enough to be useful past the first few rounds. AI Studio for the initial scaffold—which checked my live API key into the repo on init, so that was fun until GitHub's secret detection caught it before I did. The CLI for animation work, though the accessibility skill was broken and I ended up routing around it. Antigravity until the free tier ran out mid-animation pass. Gemini Pro for the social banner, though it couldn't iterate accurately on edits. Each one ran out of steam before I was done, which is how I ended up reaching for all of them.&lt;/p&gt;

&lt;p&gt;What actually shipped runs on Gemini. Every postmortem is &lt;code&gt;gemini-3.1-flash-lite-preview&lt;/code&gt; doing exactly what it's good at, live, in production. Every sanction is &lt;code&gt;gemini-3-flash-preview&lt;/code&gt; reading a batch of five and picking the one a dev would quote to a coworker. Two models, two jobs, both in constrained JSON mode, both doing real work on every request.&lt;/p&gt;

&lt;p&gt;Gemini's version of this project is released as &lt;a href="https://github.com/anchildress1/legacy-smelter/tree/v0.0.1" rel="noopener noreferrer"&gt;v0.0.1&lt;/a&gt; and produced this rather useless but very funny animation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8s3yu5kqzy5j23spsrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8s3yu5kqzy5j23spsrv.png" alt="Screenshot of Gemini's version of the app for v0.0.1" width="792" height="1374"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  Community Favorite 🪩
&lt;/h3&gt;

&lt;p&gt;Legacy Smelter is a system designed to be shared, escalated, and collectively abused. Every incident lands on a global manifest, links unfurl on Slack and Discord, shares rack up breach points, escalations carry real weight, and the top three P0 incidents are permanent shrines to whatever the community found most absurd. If that sounds like something you'd enjoy, you're exactly who I built it for.&lt;/p&gt;


&lt;h3&gt;
  
  
  The Permanent Fix
&lt;/h3&gt;

&lt;p&gt;All in all, I'm more than thrilled to finally have my dragon accessible whenever I'm fed up with something. It's a nice way to relieve some stress, and the output can be genuinely hilarious overkill.&lt;/p&gt;

&lt;p&gt;Some problems just aren’t meant to be fixed...&lt;/p&gt;

&lt;p&gt;They’re meant to be smelted.&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__3224358"&gt;
    &lt;a href="/anchildress1" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=150,height=150,fit=cover,gravity=auto,format=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3224358%2F7f675c78-6aa0-466a-a5a7-c3e35440d53a.png" alt="anchildress1 image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/anchildress1"&gt;Ashley Childress&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/anchildress1"&gt;Distributed backend specialist. Perfectly happy playing second fiddle—it means I get to chase fun ideas, dodge meetings, and break things no one told me to touch, all without anyone questioning it. 😇&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;






&lt;h4&gt;
  
  
  🛡️ Thermally Decommissioned with Assistance
&lt;/h4&gt;

&lt;p&gt;This post was written by me with collaborative editing from Claude, ChatGPT, and Gemini. The code for &lt;em&gt;Legacy Smelter&lt;/em&gt; was built using Claude Code—who also wrote the tests, the deployment pipeline, the Cloud Functions, and then got put to work on this submission post because I don't believe in downtime. &lt;/p&gt;

&lt;p&gt;ChatGPT and Gemini were consulted at various stages, though "consulted" is generous for how often they were told they were wrong. No AI was harmed in the making of this project, but one of them has now been through every phase of the software development lifecycle in a single sprint and may need to file its own incident report.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>418challenge</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Forged Between Coal and Code</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Fri, 03 Apr 2026 05:49:31 +0000</pubDate>
      <link>https://forem.com/anchildress1/forged-between-coal-and-code-phi</link>
      <guid>https://forem.com/anchildress1/forged-between-coal-and-code-phi</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/wecoded-2026"&gt;2026 WeCoded Challenge&lt;/a&gt;: Frontend Art&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Show us your Art
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Carbon Trace&lt;/em&gt; is an immersive memoir that I designed, wrote, narrated, and produced. I used my native Appalachian accent throughout since the origin story starts at home in a small coal town in Southwest Virginia.&lt;/p&gt;

&lt;p&gt;For the full experience, visit my website at &lt;a href="https://carbon-trace.anchildress1.dev" rel="noopener noreferrer"&gt;https://carbon-trace.anchildress1.dev&lt;/a&gt; and be sure to turn on your sound. &lt;/p&gt;

&lt;p&gt;Pay attention to the ambient audio shifting between scenes. Watch the circuit traces grow from barely visible to full coverage. The ghost-drift text is intentionally out of sync with the narration—it's not a subtitle, it's a feeling.&lt;/p&gt;

&lt;p&gt;Built with Canvas 2D, WebGL displacement effects, GSAP timelines, layered Howler.js audio, and accessibility-first interaction design—no frameworks, no shortcuts.&lt;/p&gt;


&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://carbon-trace-288489184837.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;💡 This submission reflects the &lt;a href="https://github.com/anchildress1/carbon-trace/tree/v1.0.1" rel="noopener noreferrer"&gt;v1.0.1&lt;/a&gt; release used for the competition build.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Inspiration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Origins of &lt;em&gt;Carbon Trace&lt;/em&gt; 🪨
&lt;/h3&gt;

&lt;p&gt;When I first saw this challenge, I felt what I wanted to draw almost immediately. The first obstacle was figuring out how to translate that feeling into code.&lt;/p&gt;

&lt;p&gt;I wasn't inspired by any one thing. I was inspired by &lt;em&gt;everything&lt;/em&gt;. To accurately convey the depth of gender roles in my life, I had to start at the beginning in the small Appalachian coal town where I grew up. Life there has clear binary boundaries: men work in the mines and women take care of the home. I've been pushing back on this ideal for as long as I can remember—starting with the toy kitchen gift I had zero interest in as a toddler.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Carbon Trace&lt;/em&gt; isn't another generic idea of equality. It's the fight I went through to be treated as an equal in a male-dominated world. The diamond is a metaphor for my life and moves through its own journey in pictures as I tell you mine. I wrote and narrated the script in the exact same dialect I grew up speaking. Each scene has independent ambient audio designed to embody a specific emotion. There are small animations throughout that help bring the static images to life. Each individual component adds a layer of depth to the overall story.&lt;/p&gt;

&lt;p&gt;Every image builds off of the previous version as the narrative progresses and follows the same constraints: circuitry begins barely perceptible and grows in every frame until it covers the whole image. The diamond starts black, covered in coal, and shines brighter in each scene until it reaches full brilliance at the end. Strategic lighting throughout obscures faces to keep the focus on the diamond as the primary character and to prevent this from being about any one person—it's designed to be about women in the industry as a whole, because my experience is not unique. It's one of many.&lt;/p&gt;

&lt;p&gt;So the diamond is my story zoomed out and abstracted so every individual can feel themselves inside the experience while I tell you about mine.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Numbers Haven't Changed 📉
&lt;/h3&gt;

&lt;p&gt;Even though we have grown from ideas like the ones I grew up with—where women belong in the kitchen, not in the coal mines—the inequality is still glaringly apparent in the tech space.&lt;/p&gt;

&lt;p&gt;In college I served as president of the &lt;a href="https://www.westga.edu/news/student-success/cs-wow.php" rel="noopener noreferrer"&gt;CS WoW&lt;/a&gt; club that aims to improve visibility of tech-related careers for young grade school girls through community outreach. Even with programs like this throughout the US, only one woman earns a CS degree for every 4 men (&lt;a href="https://nces.ed.gov/programs/digest/d23/tables/dt23_325.35.asp" rel="noopener noreferrer"&gt;NCES, 2021–22&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;In tech, and more specifically engineering, men currently outnumber women 4 to 1 (&lt;a href="https://www.bls.gov/cps/cpsaat39.htm" rel="noopener noreferrer"&gt;BLS, CPS Table 39&lt;/a&gt;). The women who do work these jobs earn approximately 12% less on average than their male counterparts (&lt;a href="https://www.bls.gov/opub/reports/womens-earnings/2023/" rel="noopener noreferrer"&gt;BLS, Highlights of Women's Earnings 2023&lt;/a&gt;). I built &lt;em&gt;Carbon Trace&lt;/em&gt; as a long-lasting impact piece. It preserves the truth of 2026 the same way these statistics do. I didn't build it to raise awareness. I built &lt;em&gt;Carbon Trace&lt;/em&gt; to make you feel what these numbers can't.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why It Had to Be Immersive 🌊
&lt;/h3&gt;

&lt;p&gt;I imagine there's at least one person reading this and wondering why I needed a full scale production to tell a story that I just as easily could have written about. My answer is because &lt;strong&gt;I needed you to feel it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every technical layer in &lt;em&gt;Carbon Trace&lt;/em&gt; exists to carry a piece of that feeling. The ambient audio shifts between scenes to set an emotional tone that words alone can't establish—mine dust settling, water running, wind through an empty room. The ghost-drift text floats fragments of thought across the screen like the things you almost say out loud but don't. The circuit trace shimmer starts nearly invisible and grows brighter every scene because the potential was always there—it just needed the right conditions to be seen. The PixiJS displacement effects make the world around the diamond physically respond: water flows, heat rises, and the diamond glows with increasing intensity. None of these layers are decorative. Each one is a narrative instrument, and &lt;em&gt;Carbon Trace&lt;/em&gt; is what happens when they all play at once.&lt;/p&gt;




&lt;h2&gt;
  
  
  My Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/anchildress1" rel="noopener noreferrer"&gt;
        anchildress1
      &lt;/a&gt; / &lt;a href="https://github.com/anchildress1/carbon-trace" rel="noopener noreferrer"&gt;
        carbon-trace
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Deterministic scene engine for an interactive narrative experience using GSAP, Howler, Canvas 2D and PixiJS. Built for WeCoded 2026 Frontend Art.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/anchildress1/carbon-trace/public/assets/images/carbon-trace-banner-gh-e897ebe7.webp"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fanchildress1%2Fcarbon-trace%2FHEAD%2Fpublic%2Fassets%2Fimages%2Fcarbon-trace-banner-gh-e897ebe7.webp" alt="Banner"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Carbon Trace: An Immersive Art Experience&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;a href="https://github.com/anchildress1/carbon-trace/actions/workflows/ci.yml" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/anchildress1/carbon-trace/actions/workflows/ci.yml/badge.svg" alt="CI"&gt;&lt;/a&gt; &lt;a href="https://github.com/anchildress1/carbon-trace/LICENSE" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/2cb6e0aa35fa3e38e0e2b58f8f6f5e63b4a57f080945f2f6d61b1966fb7542d5/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6c6963656e73652d506f6c79666f726d253230536869656c642d626c7565" alt="License: Polyform Shield"&gt;&lt;/a&gt; &lt;a href="https://sonarcloud.io/project/overview?id=anchildress1_carbon-trace" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/b073cc108d821fb439d8fe79837a0d5b16732f256c17c31df7c6e148c226ad7b/68747470733a2f2f736f6e6172636c6f75642e696f2f6170692f70726f6a6563745f6261646765732f6d6561737572653f70726f6a6563743d616e6368696c6472657373315f636172626f6e2d7472616365266d65747269633d616c6572745f737461747573" alt="Quality Gate"&gt;&lt;/a&gt; &lt;a href="https://sonarcloud.io/project/overview?id=anchildress1_carbon-trace" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/d81ceb39c2bc2bcffce9819b98bbd21ce6956fd6f1200ed2d91989525e319130/68747470733a2f2f736f6e6172636c6f75642e696f2f6170692f70726f6a6563745f6261646765732f6d6561737572653f70726f6a6563743d616e6368696c6472657373315f636172626f6e2d7472616365266d65747269633d636f766572616765" alt="Coverage"&gt;&lt;/a&gt; &lt;a href="https://developer.chrome.com/docs/lighthouse/accessibility" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/d4a75227d1a6ef2a77b7d4fdeb80a300ce48b56b66dac82bbc204a8164b25980/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6163636573736962696c6974792d39352532352532422532304c69676874686f7573652d627269676874677265656e" alt="Accessibility"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;An immersive visual narrative told from the awareness of a diamond trapped in a coal seam—12 painted scenes with ghost-drift text, narrated audio, and pixel-level visual effects. Built for &lt;a href="https://dev.to/devteam/join-the-2026-wecoded-challenge-and-celebrate-underrepresented-voices-in-tech-through-writing--4828" rel="nofollow"&gt;WeCoded 2026 DEV Challenge&lt;/a&gt; Frontend Art.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Experience it live: &lt;a href="https://carbon-trace.anchildress1.dev" rel="nofollow noopener noreferrer"&gt;https://carbon-trace.anchildress1.dev&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;The Story 💎&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;A diamond wakes up inside a coal seam. It doesn't know what it is yet—just pressure, darkness, and the sense that something isn't right. Over 12 scenes it moves through tunnels, furnaces, pockets, sinks, and silence. It gets carried, stored, forgotten, and found again. By the end, it isn't just a diamond anymore. It's a circuit. It's music. It's light.&lt;/p&gt;
&lt;p&gt;The narrative follows a carbon cycle that isn't chemistry—it's personal. Coal to diamond to circuit to light. Each scene is a painted image (Leonardo AI, Flux 2 Pro) with narration I recorded, ambient textures, ghost-drift text that pours in and blows out…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/anchildress1/carbon-trace" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;⚖️ This project is licensed under &lt;a href="https://github.com/anchildress1/carbon-trace/blob/main/LICENSE" rel="noopener noreferrer"&gt;Polyform Shield 1.0.0&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  What I Am Not 🔧
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;I am not a frontend developer.&lt;/strong&gt; I'm a backend-focused engineer who had never heard of Canvas 2D, Howler.js, PixiJS, or GSAP before this project. I spent just as much time learning what these tools do as I did designing the system around them. AI helped me learn what each tool did and gave me alternatives. I decided what to do with it from there.&lt;/p&gt;

&lt;p&gt;I'm also not an artist. I used &lt;a href="https://leonardo.ai" rel="noopener noreferrer"&gt;Leonardo.ai&lt;/a&gt; to generate all images and dusted off some old GIMP skills to build the image layer masks by hand. Everything you see in &lt;em&gt;Carbon Trace&lt;/em&gt; was built by someone who doesn't do this every day—which is exactly why it took a full production pipeline, 13 ADRs, and four competing AI reviewers to ship it.&lt;/p&gt;

&lt;p&gt;What I am is a backend engineer who brought backend discipline to a frontend art project. The ADR process, the adversarial review gauntlet, the CI/CD pipeline, 685 unit tests, 220 E2E tests—that's what happens when someone who builds production systems decides to build something meaningful instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Voice You Can't Generate 🎙️
&lt;/h3&gt;

&lt;p&gt;I wrote and narrated the script myself because authentic Appalachian dialect is something AI can't reproduce—even when given a list of words it's allowed to use in reference to the area I still call home. Words like "holler" (hollow) or "sangle" (single) and phrases like "ain't got a pot to piss in" (little financial means) are all authentic to the Southwestern Virginia and Eastern Kentucky regions.&lt;/p&gt;

&lt;p&gt;I know enough about recording to know I never wanted to learn it myself. However, &lt;em&gt;Carbon Trace&lt;/em&gt; could not exist without quality recordings that you don't get from QuickTime. So I taught myself enough GarageBand to record all the tracks, and no, that wasn't much fun. I got in and out with the basics, then used &lt;code&gt;ffmpeg&lt;/code&gt; to help slice the ambient sounds from &lt;a href="http://freesound.org" rel="noopener noreferrer"&gt;FreeSound.org&lt;/a&gt;. AI was in the background helping me iterate ideas until I was happy with the end result.&lt;/p&gt;

&lt;h3&gt;
  
  
  How I Wrangled the Robots 🦾
&lt;/h3&gt;

&lt;p&gt;Since this project lives entirely outside my usual stack, I leaned heavily on my AI friends to get the job done, but this was not a prompt-and-go solution.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Review Gauntlet ⚔️
&lt;/h4&gt;

&lt;p&gt;My primary workflow used Claude Code as the implementer, with Codex and Antigravity performing adversarial code reviews on every branch to identify inconsistencies and bugs, and Copilot giving the final review sign-off on all changes. Sonar and Trivy ran on every PR along with a suite of tests, including Playwright and Lighthouse.&lt;/p&gt;

&lt;p&gt;This is just one of many examples of why the approach works—no single LLM is good at everything, and putting them in competition with each other raises code quality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flejvtljh477qu3sv5lft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flejvtljh477qu3sv5lft.png" alt="Screenshot showing adversarial review findings (P1/P2 bugs, tests passing) in Codex" width="800" height="785"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The adversarial reviews were critical to the final build, because no single AI was allowed to operate unchecked in a repo where I didn't plan to personally review the code. Beyond architecture bugs, the gauntlet caught frontend-specific issues—a mask processing loop that was freezing the page during scene loads, and repeated layout calculations that caused animation stutter. It also proved to be a pain because every time I thought I was done with a feature, there would be another hour or more of AI wrangling I had to do. The back and forth continued until all of my helper reviewers agreed on the ultimate solution and only then was the branch merged.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgapdlukampka1489v2g0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgapdlukampka1489v2g0.png" alt="Screenshot Antigravity catching Claude's PausableTimer hallucination with mathematical proof" width="800" height="771"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  From Design Doc to Decision Records 🗂️
&lt;/h4&gt;

&lt;p&gt;I started with a simple design document in markdown that was converted into an &lt;code&gt;AGENTS.md&lt;/code&gt; file and wired to each AI individually. By the time I made it to version 5 of the "simple" design, I decided I needed something with a bit more structure. That's when I started writing architecture decision records instead and I added them to the repo for tracking. I ended up with 13 ADRs, most of which were updated after one or more decisions I made proved impossible given the constraints I defined. This forced every major technical decision to be intentional instead of experimental.&lt;/p&gt;

&lt;p&gt;Alongside the repo work, ChatGPT and Claude Cowork helped me with image generation prompts and gave me all the info I needed about GSAP, Howler.js, PixiJS, and Canvas 2D to be able to make design decisions. They had competing reviews between them, as well, just to make sure all the pertinent information was available to me when I needed it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 For a full breakdown of every architectural decision made during the build, &lt;a href="https://github.com/anchildress1/carbon-trace/tree/v1.0.1/docs/ADRs" rel="noopener noreferrer"&gt;the ADRs&lt;/a&gt; are available in the repo.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Hundreds of Wrong Diamonds 🔮
&lt;/h3&gt;

&lt;p&gt;Leonardo wasn't very easy to wrangle either, as I generated literally hundreds of images to perfect each scene. ChatGPT and Claude often helped with wording, so both had their own best-practice instructions generated from research, covering several different image flows across models including Flux Pro 2.0, Nano Banana, and GPT Image 1.5.&lt;/p&gt;

&lt;p&gt;I had several hilarious outtakes during image generation, too. I learned that specific words like "rough faceted" or "silhouette" did not mix well with some models. I ended up with a somewhat extensive set of rules for prompt generation to ensure the diamond's story was properly told through each picture.&lt;/p&gt;

&lt;p&gt;Here are a couple of my favorite outtakes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39fmxh13fuqmhmh9glr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39fmxh13fuqmhmh9glr5.png" alt="Diamond in jeans pocket with coal scrip coins—wrong context/scale, funny failure" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgni9og02f0oj5ounrdbz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgni9og02f0oj5ounrdbz.png" alt="Man reaching for tiny diamond by firelight—face visible, directly violates the " width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Under the Hood ⚙️
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Application architecture:&lt;/strong&gt; Vanilla JS (no framework), 14 ES modules orchestrated by a 5-state machine (Loading → Paused → Scene Active → Transitioning → Credits). My goal was to make this a production-level application without over-engineering or introducing abstraction where it doesn't belong. This is a static, single-page, one-flow-only application, and the entire flow for each scene is controlled by &lt;code&gt;scenes.json&lt;/code&gt;—every frame's image, narration lines, ambient audio, audio cues, effects, and transition config lives in one file for easy edits that don't interfere with code structure. Every scene difference is expressed as configuration, not logic, which means adding a new per-scene behavior is adding a config key, not an if-block.&lt;/p&gt;
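A minimal sketch of what a state machine like that might look like (the state names are from the description above; the exact transition set is my assumption, not the repo's code):

```javascript
// Minimal sketch of a 5-state scene machine. The states come from the post;
// the allowed transitions here are an illustrative assumption.
const TRANSITIONS = {
  loading: ["paused", "sceneActive"],
  paused: ["sceneActive"],
  sceneActive: ["paused", "transitioning"],
  transitioning: ["sceneActive", "credits"],
  credits: [], // terminal state
};

// Reject any transition the table doesn't explicitly allow.
function transition(current, next) {
  if (!(TRANSITIONS[current] || []).includes(next)) {
    throw new Error(`Illegal transition: ${current} -> ${next}`);
  }
  return next;
}
```

Centralizing the table this way means every subsystem can ask "is this move legal?" instead of each one guessing.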

&lt;p&gt;The state machine isn't just a label—it controls how every subsystem behaves at any given moment. When a user pauses, audio, canvas transitions, PixiJS effects, shimmer dots, GSAP timelines, and auto-advance timers all freeze in sync. When they resume, everything restarts from exactly where it left off. An unconditional auto-advance timer fires regardless of whether the narration &lt;code&gt;end&lt;/code&gt; event arrives, eliminating a race condition where scenes could stall if the browser swallowed the event. Every timer in the system is pause-aware through a shared &lt;code&gt;PausableTimer&lt;/code&gt; utility so nothing leaks across scene boundaries.&lt;/p&gt;
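The actual PausableTimer lives in the repo; a minimal reconstruction of the idea, with an injectable clock added purely for testability, might look like:

```javascript
// Sketch of a pause-aware timer: pausing banks the unelapsed time so
// resume() picks up exactly where it left off. The injectable `now`
// function is my addition for testing, not part of the original.
class PausableTimer {
  constructor(callback, delayMs, now = Date.now) {
    this.callback = callback;
    this.remaining = delayMs;
    this.now = now;
    this.startedAt = null;
    this.id = null;
  }
  start() {
    this.startedAt = this.now();
    this.id = setTimeout(this.callback, this.remaining);
  }
  pause() {
    if (this.id === null) return;
    clearTimeout(this.id);
    this.id = null;
    // Subtract the elapsed portion; what's left is what resume() schedules.
    this.remaining -= this.now() - this.startedAt;
  }
  resume() {
    this.start();
  }
}
```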

&lt;p&gt;&lt;strong&gt;The audio system is the most complex piece.&lt;/strong&gt; I wanted to include different emotional ambient tracks for each scene designed to play just under the narration layer. I sourced all tracks from &lt;a href="https://freesound.org" rel="noopener noreferrer"&gt;FreeSound.org&lt;/a&gt;, but no two sound effects have the same volume, which meant I needed the ability to mix on demand from the backend in addition to fade controls and delayed timing. Two independent Howler.js channels are responsible for running each track concurrently:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Channel&lt;/th&gt;
&lt;th&gt;Format&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Ambient&lt;/td&gt;
&lt;td&gt;m4a, looped&lt;/td&gt;
&lt;td&gt;Crossfades between scenes (800ms), pauses with all channels during nav&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Narration&lt;/td&gt;
&lt;td&gt;m4a, one-shot&lt;/td&gt;
&lt;td&gt;Per-scene voiceover with configurable delay, pre-buffers next scene's audio during current playback&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I also implemented buffer recovery escalation through three distinct stages: nudge (no-op seek to force browser re-eval), reload (preserve position → reset source → restore), exhaustion (log warning, clear state, prevent UI lockup). All timers use unified pause/resume logic to prevent cross-scene leakage. The final scene layers in a licensed track from Bridge City Sinners that fades in before the narration ends, then boosts in volume with a 3-second fade once the voiceover completes—a cinematic handoff from story to music.&lt;/p&gt;
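That escalation can be sketched as a three-stage handler (stage names from above; the channel interface here is hypothetical, not Howler's actual API):

```javascript
// Sketch of three-stage buffer recovery escalation. The `channel` object
// (position/seek/resetSource/clearState) is an illustrative interface,
// not the real Howler.js surface.
function recoverBuffer(attempt, channel) {
  if (attempt === 0) {
    // Stage 1 — nudge: a no-op seek forces the browser to re-evaluate the buffer.
    channel.seek(channel.position());
    return "nudge";
  }
  if (attempt === 1) {
    // Stage 2 — reload: preserve position, reset the source, restore position.
    const pos = channel.position();
    channel.resetSource();
    channel.seek(pos);
    return "reload";
  }
  // Stage 3 — exhaustion: clear state and keep the UI responsive.
  channel.clearState();
  return "exhausted";
}
```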

&lt;p&gt;&lt;strong&gt;Rendering architecture:&lt;/strong&gt; The visual stack is four layers composited on top of each other—a Canvas 2D scene layer for images, a PixiJS/WebGL canvas for displacement effects, a separate Canvas 2D overlay for the shimmer trace dots, and a DOM layer on top for text, captions, and controls. Each layer has its own render loop and pauses independently with the state machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PixiJS visual effects:&lt;/strong&gt; A separate WebGL-powered canvas handles pixel-level scene animations—water displacement, heat distortion, glow, and shockwave—each confined to mask-based regions so only targeted areas of the image animate. Effect parameters modulate in real time from audio frequency data via a Web Audio AnalyserNode. The entire PixiJS bundle (~330 KB) is lazy-loaded after the initial paint so it never blocks the first screen the user sees, and if WebGL fails entirely, the experience degrades gracefully to static images.&lt;/p&gt;
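The WebGL check behind that graceful degradation could be sketched like this (the injected canvas factory is my testability assumption, not the repo's code):

```javascript
// Sketch of the graceful-degradation decision: if a WebGL context can't be
// created, skip the effects layer and fall back to static images. The
// canvas factory is injected so the check runs outside a browser too.
function pickRenderMode(createCanvas) {
  try {
    const canvas = createCanvas();
    const gl =
      canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
    return gl ? "webgl-effects" : "static-images";
  } catch (e) {
    // Any failure (no canvas, blocked context) degrades to static images.
    return "static-images";
  }
}
```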

&lt;p&gt;&lt;strong&gt;Circuit trace overlays:&lt;/strong&gt; The circuit traces aren't just static images—they're a live shimmer overlay rendered on a dedicated canvas. Each scene loads a hand-authored PNG mask that I drew in GIMP, where dark pixels define walkable paths. &lt;code&gt;shimmer.js&lt;/code&gt; spawns glowing dots that navigate those paths using 8-directional pathfinding, pulsing in warm amber tones that shift per scene. The opacity ramps from 5% in the opening to full coverage by the finale—the circuitry was always there, it just needed the right conditions to be seen. &lt;/p&gt;
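The pathfinding step can be sketched as a walkable-neighbor lookup (representing the decoded PNG mask as a 2D boolean grid is my assumption for illustration):

```javascript
// Sketch of the mask-based 8-directional step: dark pixels mark walkable
// paths, and a shimmer dot may move to any of its 8 neighbors that is
// walkable. `mask` is a 2D array of booleans (true = walkable), which is
// an assumption about how the decoded PNG mask might be represented.
const DIRS = [
  [-1, -1], [0, -1], [1, -1],
  [-1, 0],           [1, 0],
  [-1, 1],  [0, 1],  [1, 1],
];

function walkableNeighbors(mask, x, y) {
  const out = [];
  for (const [dx, dy] of DIRS) {
    const nx = x + dx;
    const ny = y + dy;
    // Row lookup first guards against stepping off the grid edges.
    if (mask[ny] && mask[ny][nx]) out.push([nx, ny]);
  }
  return out;
}
```

A dot would then pick one of these candidates each tick, which keeps the glow strictly on the hand-drawn trace paths.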

&lt;p&gt;The first design couldn't produce what I had in mind, so I deferred it, rewrote the ADR, and came back with a completely different approach that would get the job done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GSAP timeline orchestration:&lt;/strong&gt; The ghost-drift text was designed to keep the audience engaged with the narration in real time. I set up positioning as a percentage value relative to the container and originally allowed for alignment options. Later, I decided that was unnecessary and removed the extra noise from the codebase.&lt;/p&gt;

&lt;p&gt;All captions sync directly into the GSAP timeline via callbacks instead of independent timers. That way when the user pauses or resumes any scene, the captions are automatically included.&lt;/p&gt;
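
&lt;p&gt;GSAP's actual API for this is &lt;code&gt;timeline.call(fn, null, time)&lt;/code&gt;; this toy timeline just illustrates why callback-based cues pause for free while independent timers would not:&lt;/p&gt;

```javascript
// Sketch: caption cues registered as timeline callbacks instead of setTimeout.
// Pausing the timeline stops its clock, so cues pause automatically.
function makeTimeline() {
  const cues = [];
  let time = 0, paused = false;
  return {
    call(at, fn) { cues.push({ at, fn, fired: false }); },
    pause() { paused = true; },
    resume() { paused = false; },
    tick(dt) {
      if (paused) return;          // paused: the clock does not advance
      time += dt;
      for (const c of cues) {
        if (!c.fired && c.at <= time) { c.fired = true; c.fn(); }
      }
    },
  };
}
```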

&lt;p&gt;The credits overlay has its own ADR and runs a GSAP-driven scroll with touch, wheel, and keyboard input, focus management for links, and full reduced-motion support.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessibility (WCAG AA):&lt;/strong&gt; Any time I do front-end work, accessibility is top of mind, and this project was no different. After AI helped me research what best practice looks like in 2026, I made sure all of the standards were followed, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;aria-live="polite"&lt;/code&gt; region to announce full narration text on scene change&lt;/li&gt;
&lt;li&gt;Roving tabindex for the scene progress bar&lt;/li&gt;
&lt;li&gt;Standard media keyboard nav: Space (play/pause), Enter/Arrow (advance), Escape (pause)&lt;/li&gt;
&lt;li&gt;Screen reader narration separate from visual ghost-drift text (&lt;code&gt;aria-hidden="true"&lt;/code&gt; on visual elements to prevent duplication)&lt;/li&gt;
&lt;li&gt;A persistent caption toggle via localStorage&lt;/li&gt;
&lt;li&gt;Reduced motion is fully supported—&lt;code&gt;prefers-reduced-motion&lt;/code&gt; disables all canvas effects, freezes shimmer dots, cuts transitions instantly, and responds to live preference changes mid-session&lt;/li&gt;
&lt;/ul&gt;
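
&lt;p&gt;The reduced-motion handling boils down to mapping one boolean onto effect settings, so a live &lt;code&gt;matchMedia('(prefers-reduced-motion: reduce)')&lt;/code&gt; change listener only has to re-apply the result (the setting names here are hypothetical):&lt;/p&gt;

```javascript
// Sketch: map the reduced-motion preference to one effect configuration,
// so live preference changes mid-session just re-run this function.
function effectSettings(prefersReducedMotion) {
  return prefersReducedMotion
    ? { canvasEffects: false, shimmer: 'frozen',   transitionMs: 0 }
    : { canvasEffects: true,  shimmer: 'animated', transitionMs: 400 };
}
```

In the browser this would be wired up roughly as &lt;code&gt;matchMedia('(prefers-reduced-motion: reduce)').addEventListener('change', e =&gt; apply(effectSettings(e.matches)))&lt;/code&gt;, with &lt;code&gt;apply&lt;/code&gt; being whatever pushes the settings into the render layers.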

&lt;p&gt;I used AI to research and implement accessibility standards, then tested the final result and orchestrated changes to prevent repo chaos.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shipping It 🚢
&lt;/h3&gt;

&lt;p&gt;Underneath the story is a production-grade engineering process. Since I already have a pretty solid workflow with Release Please and Cloud Run, I provided those examples to AI and had the full CI/CD pipeline configured early on. That let me track each shippable feature as a newly deployed version for the final round of testing.&lt;/p&gt;

&lt;p&gt;The setup was minimal on my end, but it was the last piece of turning this fancy art project into a small-scale production build. The final build is ~5,500 lines of code (LOC) backed by ~14,500 LOC of tests across 685 unit tests and 220 E2E tests. Five CI workflows cover linting, automated tests, Lighthouse CI for both mobile and desktop performance, security scanning via Trivy and CodeQL, static analysis through SonarCloud, and release automation.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the Diamond Knows Now 💎
&lt;/h2&gt;

&lt;p&gt;I took an unconventional path to get here, but looking back, I was always going to end up exactly where I am. The circuit traces in every scene of &lt;em&gt;Carbon Trace&lt;/em&gt; didn't appear out of nowhere—they were there from the start, just waiting to be seen. That's my story too. I was made to solve problems, even when nobody around me expected that from a girl growing up in a poor coal town.&lt;/p&gt;

&lt;p&gt;It's not always easy. But I've never been afraid of hard work to get the job done. The end result is a full circuit—built from pressure, time, and a refusal to stay small.&lt;/p&gt;

&lt;p&gt;The fact that I'm a female engineer shouldn't matter. It only matters that I'm a good one.&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__3224358"&gt;
    &lt;a href="/anchildress1" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=150,height=150,fit=cover,gravity=auto,format=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3224358%2F7f675c78-6aa0-466a-a5a7-c3e35440d53a.png" alt="anchildress1 image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/anchildress1"&gt;Ashley Childress&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/anchildress1"&gt;Distributed backend specialist. Perfectly happy playing second fiddle—it means I get to chase fun ideas, dodge meetings, and break things no one told me to touch, all without anyone questioning it. 😇&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;





&lt;h3&gt;
  
  
  🛡️ Pressure-Tested by More Than One Brain
&lt;/h3&gt;

&lt;p&gt;This post was written by me with collaborative editing from Claude, ChatGPT, and Gemini. The code for &lt;em&gt;Carbon Trace&lt;/em&gt; was built using Claude Code, Codex, Antigravity, and Copilot, and it was directed by a human who refused to let any of them off easy. All images were generated with Leonardo.ai under my art direction. All narration is my actual voice. No AI was harmed in the making of this post, but all were argued with repeatedly and extensively.&lt;/p&gt;

</description>
      <category>wecoded</category>
      <category>devchallenge</category>
      <category>frontend</category>
      <category>css</category>
    </item>
    <item>
      <title>I Let AI Write to My Database (With Guardrails)🔬</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Fri, 13 Mar 2026 02:20:52 +0000</pubDate>
      <link>https://forem.com/anchildress1/i-let-ai-write-to-my-database-with-guardrails-473o</link>
      <guid>https://forem.com/anchildress1/i-let-ai-write-to-my-database-with-guardrails-473o</guid>
      <description>&lt;p&gt;My System Notes project started as a DEV Challenge and turned into a three-part systems experiment. Like most of my projects, it didn’t stay small.&lt;/p&gt;

&lt;p&gt;It started as a simple idea: let the system capture engineering decisions as they happen and make them easy to reference later. Mostly as a future-me record of “what was I thinking?” for any given build.&lt;/p&gt;

&lt;p&gt;You can read through my progression of thoughts across these challenge submissions, if you're curious:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e"&gt;My Portfolio Doesn’t Live on the Page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/anchildress1/from-static-portfolio-to-indexed-decisions-46bf"&gt;From Static Portfolio to Indexed Decisions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/anchildress1/conversational-retrieval-when-chat-becomes-navigation-2gij"&gt;Conversational Retrieval: When Chat Becomes Navigation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The portfolio site does more than just display indexed decisions. It serves as my AI playground for pushing systems behind the scenes, just to see what happens. Over the last few weeks, that playground exposed a very boring problem. The exact kind that quietly slows everything down:&lt;/p&gt;

&lt;p&gt;✋ Someone still has to &lt;strong&gt;write artifacts into the system&lt;/strong&gt;—a less than thrilling, highly repetitive job I never actually wanted.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bottleneck I Accidentally Built ⚙️
&lt;/h2&gt;

&lt;p&gt;The thinking process for the System Notes index already looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;idea  
↓  
conversation with AI  
↓  
decision
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of the reasoning happens in that conversation. ChatGPT helps challenge ideas, organize the thinking, and refine the direction. Turning those decisions into indexed artifacts required an extra step, and it got worse after I migrated from JSON to Supabase.&lt;/p&gt;

&lt;p&gt;Originally, I handled it all manually, but that got tiresome quickly. So, I let AI identify and summarize decisions that were made at the end of a session. From there I’d copy, paste, edit, and insert the record.&lt;/p&gt;

&lt;p&gt;Later, I gave ChatGPT strict artifact instructions to format the output as a SQL insert. That removed one step and technically worked. In practice, not so much.&lt;/p&gt;

&lt;p&gt;It was far from a perfect system and was often buggy. Even worse—it still required me to context switch, copy the output, paste it into a query, and fix whatever the AI inevitably messed up along the way.&lt;/p&gt;

&lt;p&gt;So before tinkering too much, the second half of my workflow looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;decision
↓
AI generates SQL
↓
copy
↓
paste
↓
I fix SQL
↓
insert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which is not exactly the frictionless system I had in mind…&lt;/p&gt;




&lt;h2&gt;
  
  
  Supascribe: Letting AI Write Data Artifacts 🏗️
&lt;/h2&gt;

&lt;p&gt;Since AI was already doing most of the heavy lifting, I saw no reason not to remove several of those steps with a little upfront structure. So, I wrote Supascribe—a small devtool designed to remove the manual translation layer eating into my build time.&lt;/p&gt;

&lt;p&gt;Supascribe does one unconventional thing: it lets the AI collaborator &lt;strong&gt;write directly to the database&lt;/strong&gt;, with a human-in-the-loop review step.&lt;/p&gt;

&lt;p&gt;Risky? &lt;em&gt;Probably.&lt;/em&gt; Uncontrolled? &lt;em&gt;No.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;The pipeline now looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AI collaboration
↓
artifact proposal
↓
human review
↓
schema check
↓
database insert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ChatGPT drafts the artifact from the conversation history, and after I approve it, the tool writes it to Supabase.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukt80rja6o9gy7rge9oq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukt80rja6o9gy7rge9oq.png" alt="Screenshot Supascribe in ChatGPT pre-approval review" width="800" height="958"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The goal is simple: shorten the distance between &lt;strong&gt;thinking about a decision and capturing it in the system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Right now the tool is intentionally minimal. It does exactly three things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accept structured artifact input from ChatGPT&lt;/li&gt;
&lt;li&gt;Check all required fields with a strict Zod schema&lt;/li&gt;
&lt;li&gt;Write the artifact into the database&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s it—there's no magic yet. Just structured input, a schema check, and a controlled insert. As it turns out, that was enough to remove the SQL-copying circus from my workflow.&lt;/p&gt;
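
&lt;p&gt;In sketch form, with a hand-rolled check standing in for the real Zod schema, a fake writer standing in for the Supabase client, and hypothetical field names:&lt;/p&gt;

```javascript
// Sketch of the Supascribe gate ordering: human approval first, then a
// schema check, then the controlled insert. Field names are illustrative.
const REQUIRED = ['title', 'decision', 'context'];

function validateArtifact(input) {
  const missing = REQUIRED.filter((k) => typeof input[k] !== 'string' || !input[k].trim());
  return missing.length ? { ok: false, missing } : { ok: true, artifact: input };
}

function submitArtifact(input, { approved, write }) {
  if (!approved) return { status: 'denied' };            // human-in-the-loop gate
  const result = validateArtifact(input);
  if (!result.ok) return { status: 'rejected', missing: result.missing };
  write(result.artifact);                                // controlled database insert
  return { status: 'inserted' };
}
```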




&lt;h2&gt;
  
  
  Where The System Still Slows Me Down 🚧
&lt;/h2&gt;

&lt;p&gt;The biggest problem is that this isn't the foolproof solution I first envisioned; it still relies heavily on my Approve/Deny button to maintain data integrity. AI is allowed to propose artifacts and insert them, but only after I allow it—which isn't what I wanted, but it was absolutely necessary for version one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxddihu0jvitw93r0r5my.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxddihu0jvitw93r0r5my.png" alt="Screenshot Supascribe in ChatGPT approval HITL step" width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The integrity of the index is protected, but the system doesn't eliminate the human bottleneck yet. Right now Supascribe shortens the path between conversation and artifact, but it doesn’t fully automate it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This system accelerates thinking, not decision authority.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And that’s intentional. Letting AI write at-will into your data layer without strict guardrails is a great way to accidentally invent a brand new genre of data corruption. 😕&lt;/p&gt;




&lt;h2&gt;
  
  
  Teaching AI To Touch Data Safely 🦾
&lt;/h2&gt;

&lt;p&gt;The next phase of this experiment is testing how much autonomy the AI collaborator can safely gain.&lt;/p&gt;

&lt;p&gt;That likely means stronger guardrails in two immediate places:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The backend can enforce stricter validation around artifact structure and write behavior.&lt;/li&gt;
&lt;li&gt;The AI can perform structured validation before proposing artifacts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The next goal is to make the workflow resilient enough for AI to safely participate in &lt;strong&gt;knowledge capture&lt;/strong&gt;, not just idea generation. Right now the system is cautious by design, but I do want to gradually increase its autonomy and see how well data integrity holds over time.&lt;/p&gt;

&lt;p&gt;What started as documentation automation is turning into something bigger: testing how much responsibility an AI collaborator can safely hold.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Question Behind This 🌀
&lt;/h2&gt;

&lt;p&gt;System Notes started as a simple portfolio experiment. Supascribe turned it into a systems experiment.&lt;/p&gt;

&lt;p&gt;Now I'm testing how well AI acts as a participant in the &lt;strong&gt;artifact creation layer of a knowledge system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Not just generating text or ideas, but using its own memory and strict guidelines to identify which decisions should become part of the underlying system.&lt;/p&gt;

&lt;p&gt;Admittedly, that’s a much more dangerous layer for AI to operate in. And it sounds like fun to me.&lt;/p&gt;

&lt;p&gt;Most AI tooling stays safely away from the data layer of any system. It's allowed to draft, suggest, summarize, and code. However, Supascribe goes one step further and asks:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;What happens if the AI helps write the system itself?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Yes—I’m aware this could explode in very entertaining ways. That’s kind of the point. 🌀 &lt;/p&gt;

&lt;p&gt;I started this experiment trying to remove friction from documentation. &lt;/p&gt;

&lt;p&gt;What I’m actually testing is whether AI can safely participate in the systems that decide what gets remembered and what gets trusted.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛡️ The System Didn’t Write This Alone
&lt;/h2&gt;

&lt;p&gt;This post was written by me, with ChatGPT acting as a thinking partner while refining structure and clarity. The decisions, experiments, and system design are mine. ChatGPT helped challenge wording and tighten the narrative.&lt;/p&gt;

&lt;p&gt;AI wants you to know that it performed no database writes during the editing of this post. That seemed wise.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>database</category>
      <category>devtools</category>
      <category>discuss</category>
    </item>
    <item>
      <title>I Stopped Reviewing Code: A Backend Dev’s Experiment with Google Gemini</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Wed, 04 Mar 2026 00:02:48 +0000</pubDate>
      <link>https://forem.com/anchildress1/i-stopped-reviewing-code-a-backend-devs-experiment-with-google-gemini-5424</link>
      <guid>https://forem.com/anchildress1/i-stopped-reviewing-code-a-backend-devs-experiment-with-google-gemini-5424</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/mlh-built-with-google-gemini-02-25-26"&gt;Built with Google Gemini: Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 I’ve been officially obsessed with AI for nearly a year now. Not from an ML research angle and not from a purist implementation standpoint. The thrill, for me, is in finding the limits as a user and then leaning on them until something gives. One of my favorite Hunter S. Thompson lines talks about “the tendency to push it as far as you can.” That has been my operating principle this entire year.&lt;/p&gt;

&lt;p&gt;This build started as a portfolio experiment. It turned into something else entirely. This challenge became the cleanest environment I’ve found to test what actually happens when you step out of the implementation loop and let the model build the world without you.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  What I Built with Google Gemini
&lt;/h2&gt;

&lt;p&gt;When I saw the New Year, New You Portfolio Challenge, I knew it required a UI. That wasn’t a surprise. What &lt;em&gt;was&lt;/em&gt; a surprise was how quickly I would realize I didn’t understand what I was looking at once it started coming together.&lt;/p&gt;

&lt;p&gt;I’m a backend developer. You hand me a distributed systems problem and I’ll happily spend hours untangling it. You ask me to make a &lt;code&gt;div&lt;/code&gt; visible in a browser and my brain actively searches for the exit. With only one weekend to build, there was no room for the "eyes-glazing-over" phase. Google Gemini would implement and I would supervise—that was my whole plan.&lt;/p&gt;

&lt;p&gt;I walked in expecting Antigravity, powered primarily by Gemini Pro, to behave like every other AI system I’d tested—predictable and fairly easy to keep inside the guardrails. I thought I already knew what those guardrails looked like: strict types, linting, and the familiar routine of code review. &lt;/p&gt;

&lt;h3&gt;
  
  
  The Pivot: Dropping the Code Review Ritual
&lt;/h3&gt;

&lt;p&gt;Initially, I followed the "responsible" pattern: prompt, review the diff, run tests, approve. It felt disciplined. It looked professional.&lt;/p&gt;

&lt;p&gt;Very quickly, I realized I had no meaningful context for what I was reviewing in a frontend stack. I wasn't improving the output; I was participating in ceremony. So, I stopped reviewing code altogether.&lt;/p&gt;

&lt;p&gt;Instead of validating lines of code, &lt;strong&gt;I validated outcomes&lt;/strong&gt;. If the UI rendered correctly and passed functional tests, that was success. I cranked up the autonomy, taught Antigravity my repository expectations, and let it run. Copilot reviewed the code in my place, and Gemini responded in a closed loop. I stepped out of the implementation and into the role of a systems auditor.&lt;/p&gt;




&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;This portfolio iteration documents what happens when you turn an agent loose inside a defined system.&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://system-notes-ui-288489184837.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;




&lt;p&gt;For this build, the Antigravity panel was the primary interface. I defined the repo rules and testing expectations there, and Gemini implemented directly within that structure. It became the control surface for the entire loop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qjpmeg7cxyul1miyyig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qjpmeg7cxyul1miyyig.png" alt="Screenshot Antigravity Agent Manager"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;V1 Release:&lt;/strong&gt; &lt;a href="https://github.com/anchildress1/system-notes/tree/v1.1.0" rel="noopener noreferrer"&gt;Preserved version v1.1.0&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Live Portfolio:&lt;/strong&gt; &lt;a href="https://anchildress1.dev" rel="noopener noreferrer"&gt;https://anchildress1.dev&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Replacing Trust With Systems
&lt;/h3&gt;

&lt;p&gt;I didn’t simply remove oversight; I replaced it with Lighthouse audits and expanded test coverage. My assumption was simple: if the browser behaves and the tests pass, the code is "safe." I believed I had replaced trust in code with trust in systems. I was wrong—I had confused passing tests with structural integrity.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  High Reasoning Isn’t Optional
&lt;/h3&gt;

&lt;p&gt;I learned that for autonomous development, reasoning depth is a stability requirement. With lower reasoning modes (like Flash), changes were often partial—updating 2/3 of the files but "forgetting" the tests or documentation. &lt;/p&gt;

&lt;p&gt;Switching to High Reasoning mode in Gemini Pro changed the pattern. Runtime errors dropped, and cross-file consistency improved. It finally started "remembering" to keep the docs aligned with the code changes without constant nudging.&lt;/p&gt;

&lt;p&gt;Reasoning depth wasn’t about intelligence—it was about reliability under autonomy. Gemini’s deeper reasoning and context retention made the closed-loop workflow viable; without it, cross-file consistency collapsed quickly under autonomy.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Reality Check: Sonar
&lt;/h3&gt;

&lt;p&gt;After the high of the successful build wore off, I introduced Sonar as a retrospective audit. The UI rendered correctly. The tests passed. Everything appeared stable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sonar reported 13 reliability issues and assigned the project a C reliability rating.&lt;/strong&gt; Of those issues, 66% were classified as high severity. Security review surfaced three hotspots, including a container running the default Python image as root and dependency references that did not pin full commit SHAs.&lt;/p&gt;

&lt;p&gt;Maintainability scored an A, but still carried 70 maintainability issues—structural patterns that didn’t break behavior, yet increased long-term complexity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvr7t86vvt317r9bg561.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvr7t86vvt317r9bg561.png" alt="Screenshot 81 Sonar failures"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That was the moment confidence turned into scrutiny.&lt;/p&gt;

&lt;p&gt;The application worked. The tests passed. But reliability, security posture, and structural integrity told a different story. The tests validated behavior; Sonar validated assumptions. And those are not the same thing.&lt;/p&gt;

&lt;p&gt;The lesson? &lt;strong&gt;AI-generated tests can pass because they were written to satisfy the implementation, not challenge it.&lt;/strong&gt; Structural validation requires an independent layer of review outside the generation loop.&lt;/p&gt;

&lt;h2&gt;
  
  
  Google Gemini Feedback
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What Worked Well
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cohesive Implementation:&lt;/strong&gt; High reasoning Gemini Pro produced cross-file changes that respected the intent of the repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agentic Orchestration:&lt;/strong&gt; The model switching was seamless, and the orchestration interface made it possible to define expectations clearly and enforce them consistently.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Where Friction Appeared
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cooldown Transparency:&lt;/strong&gt; While the interface shows when current credits refresh, the length of the next cooldown remains a black box.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool Performance:&lt;/strong&gt; MCP responsiveness materially impacted iteration speed, sometimes forcing me to batch requests rather than work in small, rapid increments.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Pro Tip:&lt;/strong&gt; It would be a massive UX win to see exactly how long your &lt;em&gt;next&lt;/em&gt; cooldown will be (e.g., "Your next cooldown will be X hours long") directly on the models page. Knowing if the lockout is 1 hour or 96 hours is vital for developer planning.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  The Final Verdict: Autonomy Still Demands an Audit
&lt;/h3&gt;

&lt;p&gt;The lesson wasn’t that Gemini failed; it was that systems-level trust requires more than passing tests. In future builds, autonomy won’t ship without an explicit adversarial audit. Whether that means a mandatory Sonar gate, a red-team prompt pass, or a second high-reasoning model instructed to hunt for the first model’s shortcuts—the loop must be challenged.&lt;/p&gt;

&lt;p&gt;This project began as a weekend experiment to escape the “teleportation” haze of frontend development. It ended as an exploration of the razor-thin edge of system-level trust. The real build wasn’t the portfolio—it was discovering what happens when you lean on the limits of AI until they finally give.&lt;/p&gt;

&lt;p&gt;Removing myself from the implementation loop didn’t eliminate responsibility; it redefined it. The more freedom you give an agent, the more rigor you must give your audit.&lt;/p&gt;

&lt;h4&gt;
  
  
  🛡️ The Tools Behind The Curtain
&lt;/h4&gt;

&lt;p&gt;This post was brewed by me—with a shot of Google Gemini and a splash of ChatGPT. If you catch a bias or a goof, call it out. AI isn’t perfect, and neither am I.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>geminireflections</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Find the DEV Post That Needs You Now 🫶</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Sun, 01 Mar 2026 12:17:00 +0000</pubDate>
      <link>https://forem.com/anchildress1/find-the-dev-post-that-needs-you-now-33ng</link>
      <guid>https://forem.com/anchildress1/find-the-dev-post-that-needs-you-now-33ng</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/weekend-2026-02-28"&gt;DEV Weekend Challenge: Community&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Community
&lt;/h2&gt;

&lt;p&gt;DEV feels like home: learning, lively discussions, and new connections. It’s a place where people genuinely want to help each other, but fast-moving feeds make it hard to see where a reply would matter most. This tool is meant for members who want to help and just need direction.&lt;/p&gt;

&lt;p&gt;That support matters because someone is always willing to help. The harder part is knowing where help is actually needed. When posts are easy to miss, willingness doesn’t always translate into action.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where This Started
&lt;/h3&gt;

&lt;p&gt;I’ve been a DEV member for less than a year, but I’m more active here than anywhere else online. I’m willing to volunteer where I can, but knowing how and where to help is difficult without direction. I built a system to provide a consistent, openly scored view of where input may be needed most. The goal is fewer “how can I help?” moments and more meaningful responses.&lt;/p&gt;

&lt;p&gt;A few weeks ago I noticed a post in &lt;code&gt;#mentalhealth&lt;/code&gt; where someone had reached out and nobody had answered. I care deeply about this topic, and the post had been written days earlier. I responded immediately, but I wish I had seen it sooner. Sometimes simply being heard makes a real difference. Some posts deserve a timely human reply but can be buried by feed dynamics.&lt;/p&gt;

&lt;p&gt;What really bothered me is that if I saw this once, there are likely many others like it. The primary feed favors recent and high-performing posts, which means others can slip through the cracks. So I built a visibility dashboard for anyone who wants to help posts get attention when they need it. It uses a simple scoring structure with one goal: show humans where their input may matter right now.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;Rather than sorting only by recency or popularity, DEV Community Dashboard prioritizes conversations showing meaningful signal but limited engagement—helping community members decide where their contribution can have the greatest impact.  &lt;/p&gt;

&lt;p&gt;Behind the scenes, AI augments lightweight heuristics with bounded semantic analysis. Instead of matching tags or phrases alone, the system evaluates conversational context to estimate where attention may be useful. All classifications rely solely on publicly available DEV data. It uses only published posts and never touches private content. Nothing about the original content is changed; each item links back to the canonical article so the conversation stays on DEV.&lt;/p&gt;




&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Typical feeds prioritize recency or engagement. That works for discovery, but useful posts can still be missed. New members may be asking their first question, or someone may have a time-sensitive problem. When those go unanswered, the community never gets the chance to respond.&lt;/p&gt;

&lt;p&gt;I built a public dashboard to surface posts that need attention so others can receive the same support I experienced when I started blogging here. The site is online at &lt;a href="https://dev-signal.checkmarkdevtools.dev" rel="noopener noreferrer"&gt;https://dev-signal.checkmarkdevtools.dev&lt;/a&gt; and free to use. Every post follows the same calculations to keep behavior predictable, while humans remain the deciding factor.&lt;/p&gt;

&lt;p&gt;Workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the page&lt;/li&gt;
&lt;li&gt;Pick a surfaced post&lt;/li&gt;
&lt;li&gt;Reply on DEV&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It reprioritizes the public feed using signal quality and engagement metrics, highlighting posts with strong signal but low interaction. Updates run hourly for posts published between 2 hours and 5 days ago. This window balances visibility (not too new) with relevance (not stale). Each item links directly back to the canonical DEV article in a new window.&lt;/p&gt;
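
&lt;p&gt;The eligibility window is a one-line age check (cutoffs from above; the function name and defaults are my own):&lt;/p&gt;

```javascript
// Sketch of the triage window: posts qualify when they are at least
// 2 hours old (visibility) and at most 5 days old (relevance).
const HOUR = 3600 * 1000;

function inTriageWindow(publishedAt, now = Date.now()) {
  const age = now - new Date(publishedAt).getTime();
  return age >= 2 * HOUR && age <= 5 * 24 * HOUR;
}
```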

&lt;p&gt;&lt;em&gt;If the embed doesn’t load, use the direct link above.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://dev-community-dashboard-595137784250.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  What This Is
&lt;/h3&gt;

&lt;p&gt;The dashboard highlights situations such as first-time posters without replies or requests for help that have not received responses. Community members open the page, select a post, and respond directly on DEV.&lt;/p&gt;

&lt;p&gt;Its role is simple: route attention.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fni2hq7yusulk529dwt0o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fni2hq7yusulk529dwt0o.png" alt="Screenshot DEV Community Dashboard primary post list"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are four primary triage categories:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Definition&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Needs Support&lt;/td&gt;
&lt;td&gt;Language suggests burnout, emotional strain, or direct help-seeking; may benefit from a thoughtful reply&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Awaiting Collaboration&lt;/td&gt;
&lt;td&gt;No meaningful replies yet; a person should engage directly&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Silent Signal&lt;/td&gt;
&lt;td&gt;Minimal engagement activity despite visibility&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Trending Signal&lt;/td&gt;
&lt;td&gt;Valuable content with limited reach; worth amplifying&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Secondary states (such as rapid activity spikes or anomalous metrics) act as informational flags rather than routing drivers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Design Principles
&lt;/h3&gt;

&lt;p&gt;I spent time ensuring this did not become a moderation or quality ranking system. The goal is visibility at the right moment with transparent categorization.&lt;/p&gt;

&lt;p&gt;Every post exposes the metrics used to classify it. Hidden scoring breaks trust, so values appear numerically and visually with hover descriptions explaining each metric in plain language.&lt;/p&gt;

&lt;p&gt;I included a feedback loop to the &lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; where discussions and improvements can happen. The tool belongs to the DEV community as much as it belongs to me.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqag0qmpnoqyoapdgdzct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqag0qmpnoqyoapdgdzct.png" alt="Screenshot DEV Community Dashboard post details"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;The project focuses on one task: surface posts that likely need a human reply. The repository shows how public DEV posts are collected, how engagement signals are calculated, and how the list is updated on a schedule.&lt;/p&gt;

&lt;p&gt;The repo includes docs, diagrams, tests, and security scans to keep behavior predictable.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/ChecKMarKDevTools" rel="noopener noreferrer"&gt;
        ChecKMarKDevTools
      &lt;/a&gt; / &lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard" rel="noopener noreferrer"&gt;
        dev-community-dashboard
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Community behavior analytics dashboard for DEV.to. Observes activity patterns, engagement dynamics, and moderation signals without judging individual users.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://raw.githubusercontent.com/ChecKMarKDevTools/forem-community-dashboard/main/public/dev-weekend-challenge-banner-community-dashboard.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2FChecKMarKDevTools%2Fforem-community-dashboard%2Fmain%2Fpublic%2Fdev-weekend-challenge-banner-community-dashboard.png" alt="DEV Community Dashboard Banner"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Community&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/stargazers" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/230860467f0537ea069b9b0285e1c2410c76264419553b4b1a0f34714cee365f/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f436865634b4d61724b446576546f6f6c732f6465762d636f6d6d756e6974792d64617368626f6172643f7374796c653d666c6174266c6f676f3d676974687562266c6f676f436f6c6f723d7768697465" alt="GitHub Stars"&gt;&lt;/a&gt; &lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/./LICENSE" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/3bb96c1262fe676641aecef2f5a5af79b1960d3673453e9954646b6dde639a17/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c6963656e73652d506f6c79466f726d5f536869656c645f312e302e302d626c75653f7374796c653d666c6174" alt="License"&gt;&lt;/a&gt; &lt;a href="https://dev.to/anchildress1" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/94d72588a043ff3e756b46de8dfc9c0fe443b0500216877ba0c3ac7e75da6795/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4445562e746f2d616e6368696c6472657373312d3041304130413f7374796c653d666c6174266c6f676f3d646576646f74746f266c6f676f436f6c6f723d7768697465" alt="DEV.to"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Pipeline&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/actions/workflows/ci.yml" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/fded8bee2c78c942db1639ee1606ae4862542608b72cd3b92e9e30f477dd079a/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f616374696f6e732f776f726b666c6f772f7374617475732f436865634b4d61724b446576546f6f6c732f6465762d636f6d6d756e6974792d64617368626f6172642f63692e796d6c3f6272616e63683d6d61696e267374796c653d666c6174266c6f676f3d676974687562616374696f6e73266c6f676f436f6c6f723d7768697465266c6162656c3d4349" alt="CI Build &amp;amp; Test"&gt;&lt;/a&gt; &lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/actions/workflows/cron.yml" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/f65bd93b96fa09c972d77079b9369af8be1133036cb1e43afe41bab750f1a326/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f616374696f6e732f776f726b666c6f772f7374617475732f436865634b4d61724b446576546f6f6c732f6465762d636f6d6d756e6974792d64617368626f6172642f63726f6e2e796d6c3f6272616e63683d6d61696e267374796c653d666c6174266c6f676f3d676974687562616374696f6e73266c6f676f436f6c6f723d7768697465266c6162656c3d43726f6e25323053796e63" alt="DEV Post Sync"&gt;&lt;/a&gt; &lt;a href="https://sonarcloud.io/summary/overall?id=ChecKMarKDevTools_forem-community-dashboard" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/0f0fae45dde50a1fd16379716b7314a794cc8ca04ad533b761a8a9454d45049e/68747470733a2f2f736f6e6172636c6f75642e696f2f6170692f70726f6a6563745f6261646765732f6d6561737572653f70726f6a6563743d436865634b4d61724b446576546f6f6c735f666f72656d2d636f6d6d756e6974792d64617368626f617264266d65747269633d616c6572745f737461747573" alt="Quality Gate"&gt;&lt;/a&gt; &lt;a href="https://sonarcloud.io/summary/overall?id=ChecKMarKDevTools_forem-community-dashboard" rel="nofollow noopener noreferrer"&gt;&lt;img 
src="https://camo.githubusercontent.com/bdb2ad8bcace8f0e08c9a85418f9a3a6cbb47ce5ac9cd946429e242d0d1b6269/68747470733a2f2f736f6e6172636c6f75642e696f2f6170692f70726f6a6563745f6261646765732f6d6561737572653f70726f6a6563743d436865634b4d61724b446576546f6f6c735f666f72656d2d636f6d6d756e6974792d64617368626f617264266d65747269633d636f766572616765" alt="SonarCloud Coverage"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Scans&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;a href="https://trufflesecurity.com/trufflehog" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/8f58a013ffb0ab24fe52abb34ddd57636e31e8d60bf251e6e2d1273d243d250b/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f54727566666c65486f672d5365637265745f5363616e2d3030303030303f7374796c653d666c6174266c6f676f3d74727566666c65686f67266c6f676f436f6c6f723d7768697465" alt="TruffleHog"&gt;&lt;/a&gt; &lt;a href="https://semgrep.dev" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/bc962a14646b5ac37c13848dadc179809dbb97e491209ee6624cb068cc5d2b55/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f53656d677265702d534153542d3442313141383f7374796c653d666c6174266c6f676f3d73656d67726570266c6f676f436f6c6f723d7768697465" alt="Semgrep"&gt;&lt;/a&gt; &lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/security/code-scanning" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/d5fad76ccbbc6b7cdc0d4cc500d5a63d0f4b9f8cd99d34f044305076dd258c1f/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f6465514c2d53656375726974795f416e616c797369732d3232323232323f7374796c653d666c6174266c6f676f3d676974687562266c6f676f436f6c6f723d7768697465" alt="CodeQL"&gt;&lt;/a&gt; &lt;a href="https://github.com/hadolint/hadolint" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/3630a1ab31ca8b112f8ee4b50d20ac109eaa4a9271d2249c5d1c20354e660c3a/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4861646f6c696e742d446f636b657266696c655f4c696e742d3234393645443f7374796c653d666c6174266c6f676f3d646f636b6572266c6f676f436f6c6f723d7768697465" alt="Hadolint"&gt;&lt;/a&gt; &lt;a href="https://github.com/rhysd/actionlint" rel="noopener noreferrer"&gt;&lt;img 
src="https://camo.githubusercontent.com/813fb53db449a5cbf841650a44fb05e79ba3bf380f41187d75bf3d84f069d200/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f616374696f6e6c696e742d4748415f4c696e742d3230383846463f7374796c653d666c6174266c6f676f3d676974687562616374696f6e73266c6f676f436f6c6f723d7768697465" alt="actionlint"&gt;&lt;/a&gt; &lt;a href="https://stylelint.io" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/b322e273a0b86ec3ef2faf0e2bb6fe579a43503572773b21c6c5d0679acb1306/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f5374796c656c696e742d4353535f4c696e742d3236333233383f7374796c653d666c6174266c6f676f3d7374796c656c696e74266c6f676f436f6c6f723d7768697465" alt="Stylelint"&gt;&lt;/a&gt; &lt;a href="https://developer.chrome.com/docs/lighthouse" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/0e38937fe85eae79138fcf8653b5ae7c11f45b61de498da75b5ff9239670fcd6/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c69676874686f7573652d413131795f3130302532352d4634344232313f7374796c653d666c6174266c6f676f3d6c69676874686f757365266c6f676f436f6c6f723d7768697465" alt="Lighthouse"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Stack&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;a href="https://nextjs.org" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/99ddbfdf3dee36e56fa095c938cf23fd2a3d2f12213748db516e4f0227c9ba85/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4e6578742e6a732d31362d3030303030303f7374796c653d666c6174266c6f676f3d6e657874646f746a73266c6f676f436f6c6f723d7768697465" alt="Next.js"&gt;&lt;/a&gt; &lt;a href="https://www.typescriptlang.org" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/a5cf3c2250320471051b545fa47c78b6e24b13ad342d833df970091e5d939a7d/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f547970655363726970742d352d3331373843363f7374796c653d666c6174266c6f676f3d74797065736372697074266c6f676f436f6c6f723d7768697465" alt="TypeScript"&gt;&lt;/a&gt; &lt;a href="https://tailwindcss.com" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/87c52dedb67dbf6f901f9ccaf2a5a584f2027392ce7477950261526886f5377f/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f5461696c77696e645f4353532d342d3036423644343f7374796c653d666c6174266c6f676f3d7461696c77696e64637373266c6f676f436f6c6f723d7768697465" alt="Tailwind CSS"&gt;&lt;/a&gt; &lt;a href="https://supabase.com" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/67b08b8ef0a38893ab73c928f449f482939a2495d9a089a544e7e2bf770c7d85/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f53757061626173652d506f737467726553514c2d3346434638453f7374796c653d666c6174266c6f676f3d7375706162617365266c6f676f436f6c6f723d7768697465" alt="Supabase"&gt;&lt;/a&gt; &lt;a href="https://vitest.dev" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/eae53c8a7d8c7526bce98307645a3af432b4addd19afde8737a6032c573ec53a/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f5669746573742d54657374696e672d3645394631383f7374796c653d666c6174266c6f676f3d766974657374266c6f676f436f6c6f723d7768697465" 
alt="Vitest"&gt;&lt;/a&gt; &lt;a href="https://pnpm.io" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/f249db7328d70965ca46ae9dc2a950a8652c1d5b5c4272813cfb0ab4fa03e303/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f706e706d2d5061636b6167655f4d616e616765722d4636393232303f7374796c653d666c6174266c6f676f3d706e706d266c6f676f436f6c6f723d7768697465" alt="pnpm"&gt;&lt;/a&gt; &lt;a href="https://www.docker.com" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/30acce87865ee23bfd8dc74f84a04d0c3759e91808c46774684e0e52360d6fb1/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f446f636b65722d436f6e7461696e65722d3234393645443f7374796c653d666c6174266c6f676f3d646f636b6572266c6f676f436f6c6f723d7768697465" alt="Docker"&gt;&lt;/a&gt; &lt;a href="https://cloud.google.com/run" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/e2e762273fb4881eebfa0e40d3898473a2d3af2afc9cb67bf3b84c063b9f6495/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436c6f75645f52756e2d4465706c6f796d656e742d3432383546343f7374796c653d666c6174266c6f676f3d676f6f676c65636c6f7564266c6f676f436f6c6f723d7768697465" alt="Google Cloud Run"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Code Quality&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;a href="https://eslint.org" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/4e0ba00de15444ff75c74fa9cee197115b6e608afe24a1b2ee6232404c92fdc0/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f45534c696e742d4c696e74696e672d3442333243333f7374796c653d666c6174266c6f676f3d65736c696e74266c6f676f436f6c6f723d7768697465" alt="ESLint"&gt;&lt;/a&gt; &lt;a href="https://prettier.io" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/c4d27ffe3450d5e14f4431272f475531dda19a8d3b731afc84f620d8f41d4ade/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f50726574746965722d466f726d617474696e672d4637423933453f7374796c653d666c6174266c6f676f3d7072657474696572266c6f676f436f6c6f723d626c61636b" alt="Prettier"&gt;&lt;/a&gt; &lt;a href="https://www.conventionalcommits.org" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/e1d5b3e0f9b0dc7b458e4656601e866b45c68a1f71292aa4b1a6697bde366ae4/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f6e76656e74696f6e616c5f436f6d6d6974732d312e302e302d4645353139363f7374796c653d666c6174266c6f676f3d636f6e76656e74696f6e616c636f6d6d697473266c6f676f436f6c6f723d7768697465" alt="Conventional Commits"&gt;&lt;/a&gt; &lt;a href="https://github.com/evilmartians/lefthook" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/200ff5464d756b61bdba247594d0ad9e54aa88a3665a6bc919938ba925e2bd6f/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c656674686f6f6b2d4769745f486f6f6b732d4646314531453f7374796c653d666c6174266c6f676f3d676974266c6f676f436f6c6f723d7768697465" alt="Lefthook"&gt;&lt;/a&gt; &lt;a href="https://www.gnu.org/software/make/" rel="nofollow noopener noreferrer"&gt;&lt;img 
src="https://camo.githubusercontent.com/f8dfdbfa1f6a57385f4c83e6c08ee134bbb5329080b683b2fff8cd65bb4f8365/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4d616b6566696c652d4275696c642d3432373831393f7374796c653d666c6174266c6f676f3d676e75266c6f676f436f6c6f723d7768697465" alt="Makefile"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;AI&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;a href="https://claude.ai" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/0c3e615d27cc4534f21eec9c1aa4353ab8b7957a01bd3eae7941dcf808a9f7c6/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436c617564652d416e7468726f7069632d4439373735373f7374796c653d666c6174266c6f676f3d616e7468726f706963266c6f676f436f6c6f723d7768697465" alt="Claude"&gt;&lt;/a&gt; &lt;a href="https://chat.openai.com" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/5272e2b639c2d024092c8ea19780666ff67263a9c98f82b87d8ea9c14edfdce1/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436861744750542d4f70656e41492d3734414139433f7374796c653d666c6174266c6f676f3d6f70656e6169266c6f676f436f6c6f723d7768697465" alt="ChatGPT"&gt;&lt;/a&gt; &lt;a href="https://platform.openai.com/docs/models" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/0ac5c5986015bd7ecd91abf33c09f1ab8adbf36c0ba9eb4346249f1b2442bd4e/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6770742d2d352d2d6e616e6f2d496e746572616374696f6e5f5369676e616c2d3734414139433f7374796c653d666c6174266c6f676f3d6f70656e6169266c6f676f436f6c6f723d7768697465" alt="gpt-5-nano"&gt;&lt;/a&gt; &lt;a href="https://deepmind.google/models/gemini/" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/59b8405eebcdd227c44b0ecbd596028dbab1df21b8bf2c93ab6068d1a2995259/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f47656d696e692d416e7469677261766974792d3845373542323f7374796c653d666c6174266c6f676f3d676f6f676c6567656d696e69266c6f676f436f6c6f723d7768697465" alt="Google Gemini"&gt;&lt;/a&gt; &lt;a href="https://leonardo.ai" rel="nofollow noopener noreferrer"&gt;&lt;img 
src="https://camo.githubusercontent.com/8736bc76451f0168f4a69bc8ca477c708fdf263eb4b63987d716f6d3b7ac6679/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4c656f6e6172646f2e61692d536565647265616d5f342e352d3743334145443f7374796c653d666c6174266c6f676f3d646174613a696d6167652f7376672b786d6c3b6261736536342c50484e325a79423462577875637a30696148523063446f764c336433647935334d793576636d63764d6a41774d43397a646d636949485a705a58644362336739496a41674d4341794e4341794e4349675a6d6c73624430696432687064475569506a786a61584a6a6247556759336739496a45794969426a655430694d54496949484939496a45774969382b5043397a646d632b266c6f676f436f6c6f723d7768697465" alt="Leonardo.ai"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Support&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;
&lt;p&gt;&lt;a href="https://github.com/sponsors/anchildress1" rel="noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/0f1a77f879fdc4c4eeddbb2fc1e5639c4cf45313c8052a71f9bbcfde56053d35/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f53706f6e736f722d4769744875625f53706f6e736f72732d4541344141413f7374796c653d666c6174266c6f676f3d67697468756273706f6e736f7273266c6f676f436f6c6f723d7768697465" alt="Sponsor"&gt;&lt;/a&gt; &lt;a href="https://buymeacoffee.com/anchildress1" rel="nofollow noopener noreferrer"&gt;&lt;img src="https://camo.githubusercontent.com/06f37a250a176c97608e815e3a19b8fd41ceae4aa31e65984dd9eee054ea53f8/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f4275795f4d655f615f436f666665652d537570706f72742d4646444430303f7374796c653d666c6174266c6f676f3d6275796d6561636f66666565266c6f676f436f6c6f723d626c61636b" alt="Buy Me a Coffee"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;DEV Community Dashboard&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;A signal-surfacing tool for &lt;a href="https://forem.com/" rel="nofollow noopener noreferrer"&gt;Forem&lt;/a&gt; communities (dev.to and self-hosted instances). It ingests the latest posts via the public Forem API, classifies each one into attention categories (Awaiting Collaboration, Anomalous Signal, Trending Signal, Rapid Discussion, Steady Signal), and persists the results in Supabase so community helpers can see where conversations need a human eye.&lt;/p&gt;
&lt;p&gt;This is &lt;strong&gt;not&lt;/strong&gt; a moderation tool or a scorecard. It is designed to help helpers know where to look.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Production:&lt;/strong&gt; &lt;a href="https://dev-signal.checkmarkdevtools.dev" rel="nofollow noopener noreferrer"&gt;https://dev-signal.checkmarkdevtools.dev&lt;/a&gt; &lt;em&gt;(Cloud Run -- deployed post-initial-release)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;v1.1.0 adds LLM interaction scoring, NEEDS_SUPPORT detection, and incremental caching and was created for the &lt;a href="https://dev.to/devteam/happening-now-dev-weekend-challenge-submissions-due-march-2-at-759am-utc-5fg8" rel="nofollow"&gt;DEV Weekend Challenge&lt;/a&gt;.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Documentation&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Document&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/./docs/architecture.md" rel="noopener noreferrer"&gt;Architecture&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;System overview, data flow diagrams, deployment, API routes, guardrails&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/./docs/interaction-signal.md" rel="noopener noreferrer"&gt;Interaction Signal&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Composite signal formula, LLM scoring pipeline, model cascade, heuristic fallback, incremental scoring, signal spread, topic tags&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/./docs/metrics.md" rel="noopener noreferrer"&gt;Metrics Reference&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Full &lt;code&gt;ArticleMetrics&lt;/code&gt; field reference, risk components, velocity, participation,&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;…&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/ChecKMarKDevTools/dev-community-dashboard" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  How I Built It
&lt;/h3&gt;

&lt;p&gt;The app collects public DEV posts through the API, calculates engagement signals, stores results, and renders a prioritized list. The comment-scoring step returns schema-validated JSON with typed fields and bounded ranges; invalid outputs fall back to deterministic heuristics. Automated tests cover the scoring pipeline to keep classification behavior stable as the system evolves. Updates run hourly, and each item links back to the original article.&lt;/p&gt;
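
&lt;p&gt;The validate-or-fall-back step can be sketched like this. The field names and the 0-to-1 bounds here are assumptions for illustration; the real schema lives in the repo:&lt;/p&gt;

```typescript
// Minimal sketch of schema validation with a deterministic fallback.
// Field names and bounds are assumptions, not the project's actual schema.
interface CommentScore {
  relevance: number; // expected in 0..1
  depth: number;     // expected in 0..1
}

function inRange(n: unknown): n is number {
  if (typeof n !== "number") return false;
  if (0 > n) return false;
  return 1 >= n;
}

function parseScore(raw: string, heuristic: CommentScore): CommentScore {
  try {
    const data = JSON.parse(raw);
    if (inRange(data.relevance)) {
      if (inRange(data.depth)) {
        return { relevance: data.relevance, depth: data.depth };
      }
    }
  } catch {
    // Unparseable model output: fall through to the heuristic below.
  }
  return heuristic;
}
```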

&lt;p&gt;The prioritization model favors lack of interaction over popularity, and scores decay over time, so the posts where a thoughtful reply could still matter most surface first.&lt;/p&gt;
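
&lt;p&gt;In spirit, that prioritization looks something like the sketch below. The function name and constants are invented; the actual formula is documented in the repo:&lt;/p&gt;

```typescript
// Hypothetical priority score: strong signal, thin interaction, time decay.
// The names and constants are illustrative only.
function priority(signal: number, replies: number, ageHours: number): number {
  const decay = Math.exp(-ageHours / 48); // older posts fade out gradually
  const thin = 1 / (1 + replies);         // fewer replies raise the need
  return signal * thin * decay;
}
```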

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7zpstiq5dypu1oqliwz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7zpstiq5dypu1oqliwz.png" alt="Screenshot DEV Community Dashboard post analytics center"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Measuring Interaction Signal
&lt;/h3&gt;

&lt;p&gt;Traditional dashboards often rely on keyword sentiment counts, which struggle to distinguish between surface praise and substantive discussion.  &lt;/p&gt;

&lt;p&gt;This system uses a composite interaction signal focused primarily on relevance and depth, with limited weight given to tone. Each comment contributes to a post-level score estimating where a constructive reply could meaningfully shape the conversation.&lt;/p&gt;
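
&lt;p&gt;As a rough sketch, a composite score weighted that way might look like the following. These weights are assumptions for illustration, not the published formula:&lt;/p&gt;

```typescript
// Illustrative weighting: relevance and depth dominate, tone carries little weight.
// The interface and weights are assumptions, not the project's real values.
interface CommentSignal { relevance: number; depth: number; tone: number; } // each 0..1

function compositeSignal(c: CommentSignal): number {
  return 0.45 * c.relevance + 0.45 * c.depth + 0.10 * c.tone;
}
```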

&lt;p&gt;The comment scoring model is guided by a structured system prompt that defines how relevance, depth, and constructiveness are evaluated before contributing to the overall interaction signal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TASK: Interaction signal analysis of blog post comments.
INPUT: A blog post body followed by numbered comments.
RULES:
- Extract 1-3 topic keywords from the post body as topic_tags.
- For each comment, assign interaction scores.
- Set needs_support to true if the post body contains signals of emotional distress, mental health struggle, burnout, isolation, or explicit help-seeking.
- Never infer beyond available text. Score only what is present.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full prompt and a detailed explanation of the calculations live in the &lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard/docs" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; along with system diagrams. Each calculation maps to a graph displayed on the post details page.&lt;/p&gt;

&lt;h3&gt;
  
  
  Limitations
&lt;/h3&gt;

&lt;p&gt;This is a signal-based prioritization model, not a full understanding of intent.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nuanced tone, sarcasm, or highly domain-specific language may affect classification accuracy.&lt;/li&gt;
&lt;li&gt;Posts can move between categories quickly as new replies or reactions change the underlying signals.&lt;/li&gt;
&lt;li&gt;The system reflects public engagement patterns only.&lt;/li&gt;
&lt;li&gt;Thresholds are calibrated for general community patterns and may not perfectly fit every tag or topic area.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The dashboard surfaces likelihood, not certainty. Human interpretation completes the picture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Broader Impact
&lt;/h3&gt;

&lt;p&gt;The goal is simple: help DEV members see where their attention can matter most. The dashboard surfaces where engagement is thin, where conversations are drifting, and where a thoughtful reply could shift the tone. Participation remains voluntary; the system only highlights opportunity.&lt;/p&gt;

&lt;p&gt;If it works, fewer posts sit unanswered, engagement becomes more intentional, and contributors have clearer context before jumping in.&lt;/p&gt;

&lt;p&gt;If you have ideas or feedback, share them below. You can also star the &lt;a href="https://github.com/ChecKMarKDevTools/dev-community-dashboard" rel="noopener noreferrer"&gt;checkmarkdevtools/dev-community-dashboard&lt;/a&gt; repository to follow its progress.&lt;/p&gt;

&lt;h4&gt;
  
  
  🛡️ The Editor Who Doesn’t Commit Code
&lt;/h4&gt;

&lt;p&gt;This piece was written by me, with ChatGPT acting as a second set of eyes. It helped tighten wording and keep explanations clear, but every decision, tradeoff, and line of code came from a human brain and a late-night idea that refused to go away.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>weekendchallenge</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Skills Aren’t Magic. They’re Scoped Context. 🧭🗂️</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Wed, 18 Feb 2026 13:44:00 +0000</pubDate>
      <link>https://forem.com/anchildress1/skills-arent-magic-theyre-scoped-context-d07</link>
      <guid>https://forem.com/anchildress1/skills-arent-magic-theyre-scoped-context-d07</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;🦄 Skills don’t magically make your agent smarter. They change when context is loaded.&lt;/p&gt;

&lt;p&gt;I intended to add Copilot skills to the list of topics I’ve written about, but it quickly turned into a behavior discussion instead of a how-to. Honestly, the same patterns behind &lt;a href="https://dev.to/anchildress1/series/33920"&gt;custom agents&lt;/a&gt;, &lt;a href="https://dev.to/anchildress1/series/32574"&gt;reusable prompts&lt;/a&gt;, and &lt;a href="https://dev.to/anchildress1/series/32973"&gt;repo instructions&lt;/a&gt; all apply here. If you really want to understand a skill, then the mechanism matters more than writing the file.&lt;/p&gt;

&lt;p&gt;Most frustration I see comes from expecting skills to improve agent intelligence, when what they actually add is selectivity. The truly interesting part is knowing when they help… and when they quietly make things worse. 🚦💎&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftq6ihv3t2b6ddljpdia3.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftq6ihv3t2b6ddljpdia3.png%3Fv%3D2026" alt="Human-crafted, AI-edited badge" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What Skills Actually Change 💳
&lt;/h2&gt;

&lt;p&gt;I’ve been rotating between Claude, GPT-5.2, and Gemini long enough to notice a pattern: most friction isn’t about model capability. It’s about context.&lt;/p&gt;

&lt;p&gt;Once you regularly switch agents, you start seeing how much of the difference in behavior comes down to what each system loads, when it loads it, and how aggressively it summarizes what you gave it.&lt;/p&gt;

&lt;p&gt;That’s where skills start to matter.&lt;/p&gt;

&lt;p&gt;Skills reduce context overload by deferring detailed instructions until the moment they’re relevant. When written well, they feel like relief. When written poorly, they introduce overhead: lookup cost, planning cost, and extra reasoning steps before execution begins.&lt;/p&gt;

&lt;p&gt;That overhead accumulates. Which is why I’m more interested in when skills &lt;strong&gt;should not exist&lt;/strong&gt; than in how many you can create with the free space.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 Across tools, the bigger difference isn’t “which model is smarter.” It’s how each agentic system decides what context deserves attention.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Instructions vs Skills 🧷
&lt;/h2&gt;

&lt;p&gt;Metaphors always land faster than jargon, so here’s the one that stuck for me: "Bob the Builder".&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The agent is the builder.
&lt;/li&gt;
&lt;li&gt;Instructions are the blueprints.
&lt;/li&gt;
&lt;li&gt;Skills are the tools.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Blueprints describe what is always true. If you’re building a house, the structural plan does not change because you switched from wiring to drywall. In a repository, that’s what belongs in &lt;code&gt;.github/copilot-instructions.md&lt;/code&gt;: guidance that is universally applicable and always loaded for every task.&lt;/p&gt;

&lt;p&gt;Skills are conditional. You wouldn’t scatter every possible tool across the floor before starting a task. You grab what’s needed when you need it. Loading everything up front slows you down—and missing the one tool that actually matters often changes the outcome entirely.&lt;/p&gt;

&lt;p&gt;That distinction is even more important now that context bloat is a real constraint. Long instruction files get summarized and those summaries will drift from the original intent. The most important line you were relying on for tone or guardrails is often the first casualty.&lt;/p&gt;

&lt;p&gt;A skill avoids that by staying out of baseline context from the start.&lt;/p&gt;

&lt;p&gt;At runtime, only the skill’s &lt;strong&gt;name&lt;/strong&gt; and &lt;strong&gt;description&lt;/strong&gt; are visible to the agent. It evaluates whether the current task matches that description. If the skill is relevant, then—and only then—it loads the full &lt;code&gt;SKILL.md&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;When a skill isn't activated, the agent hasn’t “forgotten” anything—it never saw those details in the first place.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;ProTip:&lt;/strong&gt; GitHub’s docs on &lt;a href="https://docs.github.com/en/copilot/concepts/agents/about-agent-skills" rel="noopener noreferrer"&gt;agent skills&lt;/a&gt; and Claude Code’s &lt;a href="https://code.claude.com/docs/en/skills" rel="noopener noreferrer"&gt;skills docs&lt;/a&gt; are worth reviewing if you want the official mechanics behind activation.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Skill Structure and Activation 🪛
&lt;/h2&gt;

&lt;p&gt;A Copilot skill is defined by a &lt;code&gt;SKILL.md&lt;/code&gt; file. For repo-level skills, the structure looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.github/
|-- skills/
|   `-- your-skill-name/
|       `-- SKILL.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The directory tree itself matters less than when the agent activates the skill.&lt;/p&gt;

&lt;p&gt;Only the skill’s metadata is evaluated initially. If the description matches the task, then the agent loads only the &lt;code&gt;SKILL.md&lt;/code&gt; file and treats its contents as procedural guidance.&lt;/p&gt;

&lt;p&gt;If you extend a skill with additional files, they are invisible unless explicitly referenced and deliberately loaded.&lt;/p&gt;
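&lt;p&gt;For instance, a skill that ships supplementary material might look like this (the &lt;code&gt;examples.md&lt;/code&gt; name is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.github/
|-- skills/
|   `-- changelog-writer/
|       |-- SKILL.md
|       `-- examples.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here, &lt;code&gt;examples.md&lt;/code&gt; contributes nothing to any task until &lt;code&gt;SKILL.md&lt;/code&gt; explicitly points the agent at it.&lt;/p&gt;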

&lt;p&gt;This separation is the entire value proposition of a skill:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Before activation&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;After activation&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Operates on inferred repository patterns&lt;/td&gt;
&lt;td&gt;Executes defined procedural rules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Uses baseline instructions only&lt;/td&gt;
&lt;td&gt;Uses baseline instructions + skill guidance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Optimizes for general applicability&lt;/td&gt;
&lt;td&gt;Optimizes for task-specific behavior&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;ProTip:&lt;/strong&gt; Copilot also checks &lt;code&gt;.agents/skills&lt;/code&gt; and &lt;code&gt;.claude/skills&lt;/code&gt; (globally and per repo). That makes cross-tool skill reuse feasible without duplicating logic unnecessarily.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Anatomy of &lt;code&gt;SKILL.md&lt;/code&gt; 🧬
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;SKILL.md&lt;/code&gt; file defines both activation metadata and execution guidance. The &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;description&lt;/code&gt; are always visible to the agent. The rest of the file becomes active only after invocation.&lt;/p&gt;

&lt;p&gt;Skills can mirror:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a custom agent&lt;/li&gt;
&lt;li&gt;a reusable prompt&lt;/li&gt;
&lt;li&gt;a custom instruction&lt;/li&gt;
&lt;li&gt;or a hybrid of all three&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is a simplified example designed to activate only when editing a &lt;code&gt;CHANGELOG.md&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;changelog-writer&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Rewrite changelog entries with cheeky, narrative flair following project conventions. Use this when asked to rewrite or update CHANGELOG.md entries.&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="gu"&gt;## Execution Workflow&lt;/span&gt;

When invoked to rewrite a changelog entry:
&lt;span class="p"&gt;
1.&lt;/span&gt; &lt;span class="gs"&gt;**Read CHANGELOG.md**&lt;/span&gt; to extract tone and structure
&lt;span class="p"&gt;2.&lt;/span&gt; &lt;span class="gs"&gt;**Identify release type and breaking changes**&lt;/span&gt;
&lt;span class="p"&gt;3.&lt;/span&gt; &lt;span class="gs"&gt;**Select emoji(s)**&lt;/span&gt; appropriate to release theme
&lt;span class="p"&gt;4.&lt;/span&gt; &lt;span class="gs"&gt;**Craft italicized opening quote**&lt;/span&gt;
&lt;span class="p"&gt;5.&lt;/span&gt; &lt;span class="gs"&gt;**Write body content**&lt;/span&gt;
&lt;span class="p"&gt;6.&lt;/span&gt; &lt;span class="gs"&gt;**Validate links, formatting, and breaking-change visibility**&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key observation here isn’t the workflow itself. It’s the activation boundary. Without activation, none of that logic exists in the agent’s working memory.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 The full version lives in my &lt;a href="https://github.com/anchildress1/awesome-github-copilot/blob/main/skills/changelog-writer/SKILL.md" rel="noopener noreferrer"&gt;awesome-github-copilot&lt;/a&gt; repo if you want to inspect it more closely.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  One Sentence Version 🐎
&lt;/h2&gt;

&lt;p&gt;If a behavior must apply consistently, it belongs in repository or global instructions. &lt;/p&gt;

&lt;p&gt;If a behavior is conditional, procedural, or task-specific, it belongs in a skill. &lt;/p&gt;

&lt;p&gt;A skill should feel like a tool you occasionally reach for—not a consistent rule the agent has to rediscover on its own every session. However, once instructions grow large enough, they stop acting like baseline context and start acting like noise. At that point trimming becomes more valuable than adding.&lt;/p&gt;

&lt;p&gt;In case it helps, this is the prompt I use when reducing instruction bloat for newer LLMs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Review #copilot-instructions.md and optimize for AI consumption. Remove information that can be inferred from repository structure or code usage. Eliminate duplication and anything that does not improve clarity or reduce ambiguity. Preserve personality and tone directives. The final file should prioritize agent understanding over human readability.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;ProTip:&lt;/strong&gt; Back up the original first. Agents are confident editors and occasionally &lt;em&gt;confident editors&lt;/em&gt; erase the one line that mattered most.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🛡️ Behind the Curtain
&lt;/h2&gt;

&lt;p&gt;I wrote this post, and ChatGPT helped like a well-defined skill. I made the final calls—it activated when needed and stayed out of the way otherwise.&lt;/p&gt;

</description>
      <category>githubcopilot</category>
      <category>ai</category>
      <category>tutorial</category>
      <category>programming</category>
    </item>
    <item>
      <title>From Static Portfolio to Indexed Decisions 📃</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Mon, 09 Feb 2026 03:00:00 +0000</pubDate>
      <link>https://forem.com/anchildress1/from-static-portfolio-to-indexed-decisions-46bf</link>
      <guid>https://forem.com/anchildress1/from-static-portfolio-to-indexed-decisions-46bf</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/algolia"&gt;Algolia Agent Studio Challenge&lt;/a&gt;: Consumer-Facing Non-Conversational Experiences&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 I instantly knew what to build as soon as I saw this challenge paired with the New Year, New You Google Challenge. Honestly, I’d been meaning to build a portfolio for a long time and never prioritized the work. This challenge finally interested me enough to take that idea and actually run with it.&lt;/p&gt;

&lt;p&gt;Besides, it’s much more satisfying to &lt;em&gt;show&lt;/em&gt; why something works when there’s a story attached. If you want to skip ahead, at least read the first part carefully.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Static portfolios treat decisions as narrative. This project treats them as data.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feebl9ix1qdc87s73p9nt.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feebl9ix1qdc87s73p9nt.png%3Fv%3D2026" alt="Human-crafted, AI edited badge" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;This backend dev wasn’t built for pretty UIs. I was built for systems. So I created a non-conversational portfolio that behaves like a well-oiled machine instead of a static showcase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traditional portfolios require interpretation. This system removes interpretation entirely.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When I first envisioned what my portfolio site would look like, I knew I wanted it to stand apart from the usual LinkedIn résumé echoes. I’m strongly allergic to “normal” on the best days, but novelty alone doesn’t scale. I also knew the site had to be backed by infrastructure strong enough to survive my constant experiments and changing approaches over time.&lt;/p&gt;

&lt;p&gt;Naturally, those ideas converged into a single decision: build a living journal of my projects, struggles, and decisions as they happened.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decisions are first-class records here, not explanatory prose.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Something future-me could query months from now when I’m inevitably asking, “What in the world were you thinking?”&lt;/p&gt;

&lt;p&gt;As soon as I saw this challenge post, I started documenting every challenge, decision, outcome, and constraint. That process began with everything I could reconstruct from my existing GitHub projects.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 Even if I documented every single time I changed my mind, this index structure could absolutely handle it. Don’t worry. I didn’t go &lt;em&gt;that&lt;/em&gt; far.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Index Design
&lt;/h3&gt;

&lt;p&gt;The index is the system. If it fails, nothing else matters.&lt;/p&gt;

&lt;p&gt;Once I had a handle on controlling assistant agents on the UI side, the index became the real work. Designing it, breaking it, and curating it took the most time. Early patterns weren’t great for retrieval performance, but after studying Algolia best-practice guidance, things finally clicked.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The result is a collection of small, atomic records optimized for retrieval.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These power a clean UX through facets and deterministic sorting using both signal strength and record creation time.&lt;/p&gt;

&lt;p&gt;Here’s a real example pulled directly from the site:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"objectID"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"card:project:challenge:algolia-agent-studio-2026-02"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Algolia Agent Studio Challenge participation"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"blurb"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"An applied exploration of conversational retrieval."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"fact"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"I participated in the Algolia Agent Studio DEV Challenge during February 2026, focusing on conversational and non-conversational search behavior using indexed content."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"tags.lvl0"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"DEV Challenge"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Approach"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"tags.lvl1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"DEV Challenge &amp;gt; Algolia Agent Studio"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Approach &amp;gt; Experimentation"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"projects"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"System Notes"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"category"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Experience"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"created_at"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-02-08T05:42:00-05:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"signal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Why these fields exist:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;signal&lt;/code&gt; controls relevance pressure under ranking&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;created_at&lt;/code&gt; stabilizes ordering across time&lt;/li&gt;
&lt;li&gt;hierarchical tags enable narrowing without dilution&lt;/li&gt;
&lt;li&gt;constrained categories prevent ambiguous grouping&lt;/li&gt;
&lt;/ul&gt;
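&lt;p&gt;As a rough sketch, the index settings that pair with those fields might look like this. Attribute names match the record above, but the values here are illustrative; note that Algolia recommends numeric values for custom ranking attributes, so a numeric timestamp companion to &lt;code&gt;created_at&lt;/code&gt; may be warranted. The real configuration is kept in the repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "searchableAttributes": ["unordered(title)", "blurb", "fact"],
  "attributesForFaceting": ["searchable(tags.lvl0)", "tags.lvl1", "category"],
  "customRanking": ["desc(signal)", "desc(created_at)"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;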

&lt;blockquote&gt;
&lt;p&gt;🦄 In case you were wondering, I didn’t write these by hand. I defined the rules and constraints, handed them to ChatGPT, and manually tracked the generated output in a JSON file stored in the repo at &lt;a href="https://github.com/anchildress1/system-notes/tree/v2.0.0/apps/api/algolia" rel="noopener noreferrer"&gt;System Notes v2.0.0/Algolia&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h3&gt;
  
  
  Ask AI as Search
&lt;/h3&gt;

&lt;p&gt;This project includes both a conversational chat interface and an Ask AI Search experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For this entry, Ask AI is intentionally treated as a pure search surface, not a conversational agent.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The conversational state is optional; these results consider only the non-conversational queries executed against the Algolia index.&lt;/p&gt;

&lt;p&gt;I evaluated retrieval performance over time—specifically speed, relevance, and consistency—while making iterative improvements to index configuration, ranking rules, and facets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If identical queries did not return identical results, the configuration was not finished.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The system now returns the correct indexed records quickly and predictably, without requiring query reformulation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4u9fcujjel58k7fv6wv.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4u9fcujjel58k7fv6wv.png%3Fv%3D2026" alt="Screenshot 256 results in 1ms" width="1150" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyyk1qdlefsx1xv5o8sg.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyyk1qdlefsx1xv5o8sg.png%3Fv%3D2026" alt="Screenshot filter categories, search results" width="2490" height="740"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 A copy of all index configuration files is kept in the repository as a reference at &lt;a href="https://github.com/anchildress1/system-notes/tree/v2.0.0/apps/api/algolia/config" rel="noopener noreferrer"&gt;System Notes v2.0.0—apps/api/algolia/config&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Live Demo
&lt;/h2&gt;

&lt;p&gt;The site is deployed at &lt;a href="https://algolia.anchildress1.dev" rel="noopener noreferrer"&gt;https://algolia.anchildress1.dev&lt;/a&gt; to keep it separate from the previous challenge submission.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try searching for “Algolia” or filter by the categories on the left to load relevant results.&lt;/strong&gt;&lt;/p&gt;


&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://system-notes-ui-103463304277.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Current canonical:&lt;/strong&gt; &lt;a href="https://algolia.anchildress1.dev" rel="noopener noreferrer"&gt;https://algolia.anchildress1.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source code:&lt;/strong&gt; &lt;a href="https://github.com/anchildress1/system-notes/tree/v2.0.0" rel="noopener noreferrer"&gt;System Notes v2.0.0&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 If you want a full comparison snapshot, the original site remains live at &lt;a href="https://anchildress1.dev" rel="noopener noreferrer"&gt;https://anchildress1.dev&lt;/a&gt;. The difference between that version and the Algolia-powered build is dramatic.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Algolia Agent Studio in Practice
&lt;/h2&gt;

&lt;p&gt;A well-designed index alone isn’t enough. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retrieval quality is dictated by configuration discipline, not feature count.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I tested most options available in Algolia’s configuration panel while tuning this system. The most impactful changes involved aggressively limiting searchable attributes and tightening facet definitions.&lt;/p&gt;

&lt;p&gt;I also discovered that overly generous synonym expansion negatively affected agent retrieval speed, so those were deliberately scaled back.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2g8au83hpkditti6212k.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2g8au83hpkditti6212k.png%3Fv%3D2026" alt="Screenshot primary indexes in Algolia" width="2342" height="726"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Keeping the Index in Sync
&lt;/h3&gt;

&lt;p&gt;To avoid duplicating content manually, I configured an Algolia crawler to index content from DEV using my AI-optimized mirror site.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This keeps the index authoritative without human intervention.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The crawler is a lightweight JavaScript configuration managed directly from the Algolia dashboard.&lt;/p&gt;
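&lt;p&gt;The general shape of that configuration is roughly the following (the app ID, URLs, index name, and selectors here are placeholders; the real &lt;code&gt;crawler.js&lt;/code&gt; is linked below):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;new Crawler({
  appId: "YOUR_APP_ID",
  apiKey: "YOUR_CRAWLER_API_KEY",
  startUrls: ["https://mirror.example.dev/"],
  actions: [
    {
      indexName: "system_notes_posts",
      pathsToMatch: ["https://mirror.example.dev/**"],
      // Runs once per crawled page and returns the records to index
      recordExtractor: ({ url, $ }) =&gt; [
        {
          objectID: url.pathname,
          title: $("title").text(),
          content: $("article").text(),
        },
      ],
    },
  ],
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;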

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cluy4uvy288o25ow5mf.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3cluy4uvy288o25ow5mf.png%3Fv%3D2026" alt="Screenshot Algolia crawler testing" width="1404" height="1048"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 The crawler configuration file is stored in the repo at &lt;a href="https://github.com/anchildress1/system-notes/tree/v2.0.0/apps/api/algolia/sources/crawler.js" rel="noopener noreferrer"&gt;System Notes v2.0.0—apps/api/algolia/sources/crawler.js&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Tuning with Analytics
&lt;/h2&gt;

&lt;p&gt;An unfortunate API-key mistake prevented me from retaining full historical analytics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Even so, analytics were used to confirm that retrieval behavior stabilized under repeat queries.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbisdn9732a0m5h1trj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbisdn9732a0m5h1trj8.png" alt="Screenshot Algolia search events" width="420" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 For the record, Algolia makes API key recovery painless &lt;em&gt;if&lt;/em&gt; you record the original key. Naturally, I did not.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why Fast, Predictable Retrieval Matters
&lt;/h2&gt;

&lt;p&gt;Before Algolia, users had to rely on me to remember and document every meaningful decision tied to a project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That does not scale. Retrieval does.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, I have a system capable of rapidly retrieving &lt;strong&gt;hundreds of decision-level records&lt;/strong&gt; across active builds.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Original Design&lt;/th&gt;
&lt;th&gt;Paired with Algolia&lt;/th&gt;
&lt;th&gt;Observed Improvement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Project cards showing finished work&lt;/td&gt;
&lt;td&gt;Choice cards indexed as search records&lt;/td&gt;
&lt;td&gt;✅ Enables &lt;strong&gt;decision-level retrieval&lt;/strong&gt; instead of content browsing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Projects shown as static artifacts&lt;/td&gt;
&lt;td&gt;Searchable sequence of constrained decisions&lt;/td&gt;
&lt;td&gt;✅ Demonstrates &lt;strong&gt;retrieval-first system thinking&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Narrative explanations only&lt;/td&gt;
&lt;td&gt;Retrieval-backed records with rationale&lt;/td&gt;
&lt;td&gt;✅ Proves answers are &lt;strong&gt;grounded in indexed data&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Generic portfolio navigation&lt;/td&gt;
&lt;td&gt;Algolia-powered discovery as primary UX&lt;/td&gt;
&lt;td&gt;✅ Makes Algolia &lt;strong&gt;structural to the experience&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;“Chat with AI” as a feature&lt;/td&gt;
&lt;td&gt;AI layered over Algolia retrieval&lt;/td&gt;
&lt;td&gt;✅ Signals &lt;strong&gt;intentional AI restraint&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Silent gaps when data is missing&lt;/td&gt;
&lt;td&gt;Fallback logic surfaced in results&lt;/td&gt;
&lt;td&gt;✅ Shows &lt;strong&gt;real-world constraint handling&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;This system would not exist without Algolia. It isn’t an enhancement. It’s the foundation.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s Next
&lt;/h2&gt;

&lt;p&gt;I ran out of time and this challenge had a hard stop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Given the choice, I optimized retrieval stability over feature breadth.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When time allows, these are next:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wire custom URL routing so search results are directly addressable&lt;/li&gt;
&lt;li&gt;Finalize recommendations driven by real user interaction events&lt;/li&gt;
&lt;li&gt;Introduce a Supabase backing store for indexed records to support long-term growth&lt;/li&gt;
&lt;li&gt;Migrate existing project cards into the new indexed record format&lt;/li&gt;
&lt;li&gt;Continue UI refinement and performance tuning&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 After winners are announced, this site will live solely at &lt;a href="https://anchildress1.dev" rel="noopener noreferrer"&gt;https://anchildress1.dev&lt;/a&gt; as my canonical portfolio.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🛡️ The Credits in the Margins
&lt;/h2&gt;

&lt;p&gt;This piece was written by a human, with ChatGPT used along the way for editing, clarity passes, and structural tightening while drafting. The final shape, technical claims, and decisions are human-made.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>algoliachallenge</category>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>Conversational Retrieval: When Chat Becomes Navigation 💬</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Mon, 09 Feb 2026 03:00:00 +0000</pubDate>
      <link>https://forem.com/anchildress1/conversational-retrieval-when-chat-becomes-navigation-2gij</link>
      <guid>https://forem.com/anchildress1/conversational-retrieval-when-chat-becomes-navigation-2gij</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/algolia"&gt;Algolia Agent Studio Challenge&lt;/a&gt;: Consumer-Facing Conversational Experiences&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 I never truly planned to enter this challenge twice—it just sort of happened. I can tell you exactly &lt;em&gt;why&lt;/em&gt; it happened though.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI stopped being interesting the moment it became expected.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I wasn’t the first person to experiment with AI-driven interfaces, but I’ve been doing it long enough to recalibrate my expectations. Once AI becomes table stakes, the real work shifts. The question is no longer &lt;em&gt;can&lt;/em&gt; you use AI, but &lt;em&gt;how intentionally&lt;/em&gt; you design around it.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://dev.to/anchildress1/from-static-portfolio-to-indexed-decisions-46bf"&gt;non-conversational entry&lt;/a&gt; proved something important: fast, predictable retrieval changes how a system feels. This entry starts from the same foundation and explores what happens when that retrieval layer is surfaced through conversation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feebl9ix1qdc87s73p9nt.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feebl9ix1qdc87s73p9nt.png%3Fv%3D2026" alt="Human-crafted, AI edited badge" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Important Note on Scope
&lt;/h2&gt;

&lt;p&gt;This submission focuses exclusively on the &lt;em&gt;conversational layer&lt;/em&gt; of the system.  &lt;/p&gt;

&lt;p&gt;My &lt;a href="https://dev.to/anchildress1/from-static-portfolio-to-indexed-decisions-46bf"&gt;first submission post&lt;/a&gt; walks through the indexing strategy, retrieval architecture, and backend system design that make this experience possible. That foundation is intentionally treated as a given here so the conversation layer can be evaluated on its own terms.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built: Two Interfaces, One Discipline 🧱
&lt;/h2&gt;

&lt;p&gt;This system presents two distinct ways to enter the same body of knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ask AI exists as a focused retrieval surface.&lt;/strong&gt; It is designed for moments when the user already knows what they’re looking for and wants a clear, direct answer. A question goes in. A grounded response comes back. The interaction resolves cleanly, without conversational momentum.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ruckus 2.0 (the chat agent) becomes a way to navigate through my portfolio&lt;/strong&gt;. Questions don’t necessarily end the interaction. They shape it. Each response helps orient the user, and each follow-up becomes a small decision about where to go next. Instead of resolving immediately, the interface supports exploration without losing direction.&lt;/p&gt;

&lt;p&gt;Both interfaces rely on the same indexed data. Neither invents answers. Neither speculates beyond what is retrievable. What changes is not the intelligence of the system, but the posture it takes toward the user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This separation is intentional.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 Ask AI answers the question that was asked. Chat helps decide which question to ask next.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Ask AI — Focused Retrieval 🔎
&lt;/h3&gt;

&lt;p&gt;As covered above, Ask AI is built for the moment when the user already knows what they’re looking for: a question goes in, a clean, bounded answer comes back, and the interaction resolves without momentum.&lt;/p&gt;

&lt;p&gt;This interface is about &lt;strong&gt;precision&lt;/strong&gt;, not exploration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkvtbxtkzjtxe8q9po0y.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkvtbxtkzjtxe8q9po0y.png%3Fv%3D2026" alt="Screenshot of Algolia Ask AI response" width="1466" height="920"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 For this entry, the focus is not on Ask AI as a standalone feature, but on how it supports conversational movement through the system.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Ruckus 2.0 — Conversational Navigation 🧭
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Ruckus is designed for movement.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of resolving immediately, conversation unfolds across turns. Each response narrows context. Each follow-up becomes a directional choice, allowing users to navigate through indexed records and long-form content without upfront configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This interface reduces the cognitive load of deciding how to search.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6t5adtw8yesoydbki3br.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6t5adtw8yesoydbki3br.png" alt="Screenshot Ruckus 2.0 with prompt suggestions" width="800" height="827"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rather than requiring users to understand the shape of the data up front, the system lets that shape reveal itself gradually. The chat layer sits on top of indexed records, long-form blog content, and explicit retrieval rules, allowing users to discover relationships through interaction instead of configuration.&lt;/p&gt;

&lt;p&gt;Conversation here is directional. It does not wander. It does not pretend to know more than it does.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3017dfc8a9vcfajyy9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3017dfc8a9vcfajyy9o.png" alt="Screenshot Ruckus 2.0 answer to previous prompt suggestion" width="800" height="1045"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 This is the point where the system stops feeling like search and starts feeling like motion.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Smart Navigation (Almost) 🚧
&lt;/h3&gt;

&lt;p&gt;Conversational navigation only works if it can be trusted beyond the moment it happens.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conversational paths should survive reloads, not disappear into session state.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To support that, I began wiring event tracking and smart URLs tied to user actions. Algolia’s InstantSearch library makes it straightforward to persist UI state directly into the URL, allowing conversational paths to be shareable, bookmarkable, and resilient.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://algolia.anchildress1.dev/search?category=Work+Style&amp;amp;project=System+Notes&amp;amp;tag0=Discipline&amp;amp;tag0=Mindset&amp;amp;tag1=Discipline+%3E+Engineering&amp;amp;tag1=Mindset+%3E+Systems+Thinking
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
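&lt;p&gt;The round-trip behind a URL like that can be sketched in a few lines: serialize filter state into repeatable query parameters, then rebuild the same state on load. This is a hedged illustration of the idea only; in the real build, Algolia InstantSearch handles the routing on the frontend:&lt;/p&gt;

```python
# Illustrative sketch of persisting UI state in the URL. Parameter names
# mirror the example URL above; the functions are hypothetical.
from urllib.parse import urlencode, parse_qsl

def state_to_query(state):
    """Flatten filter state into repeatable query parameters."""
    pairs = []
    for key, values in state.items():
        for value in values:
            pairs.append((key, value))
    return urlencode(pairs)

def query_to_state(query):
    """Rebuild filter state from a shared or bookmarked URL."""
    state = {}
    for key, value in parse_qsl(query):
        state.setdefault(key, []).append(value)
    return state
```

&lt;p&gt;Because every filter lives in the query string, a conversational path survives a reload, a bookmark, or a pasted link.&lt;/p&gt;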


&lt;blockquote&gt;
&lt;p&gt;🦄 This work is not fully complete, but the structure is in place. The system can be extended without redesign, which was a deliberate tradeoff given the challenge timeline.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Live Demo 🛝
&lt;/h2&gt;

&lt;p&gt;This project is easiest to understand by using it.&lt;/p&gt;

&lt;p&gt;The demo below shows the conversational layer in action, including how chat responses guide movement through indexed records and long-form content without requiring users to understand the underlying structure.&lt;/p&gt;

&lt;p&gt;Conversation here isn’t about free-form dialogue. It’s about orientation. Each suggested response narrows context. Each follow-up reinforces direction. The system doesn’t try to be impressive. It tries to stay predictable.&lt;/p&gt;

&lt;p&gt;Try prompting either Ask AI or Ruckus 2.0 with "Tell me about this portfolio" and compare how each responds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Judges evaluating this entry should focus less on individual answers and more on how context narrows across turns.&lt;/strong&gt;&lt;/p&gt;


&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://system-notes-ui-103463304277.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Current canonical:&lt;/strong&gt; &lt;a href="https://algolia.anchildress1.dev" rel="noopener noreferrer"&gt;https://algolia.anchildress1.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source code:&lt;/strong&gt; &lt;a href="https://github.com/anchildress1/system-notes/tree/v2.0.0" rel="noopener noreferrer"&gt;System Notes v2.0.0&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What matters most in this demo isn’t any single answer. It’s how the system behaves across turns. Questions resolve cleanly when they should. When they don’t, the interface helps users decide where to go next instead of guessing for them.&lt;/p&gt;

&lt;p&gt;Compared to single chat-box approaches that try to handle every intent at once, this system separates fast resolution from exploratory movement, making conversational behavior easier to predict and easier to trust.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 If you want a full comparison snapshot, the original site remains live at &lt;a href="https://anchildress1.dev" rel="noopener noreferrer"&gt;https://anchildress1.dev&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  How I Used Algolia Agent Studio 🧪
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Ruckus 2.0 Iterative Testing
&lt;/h3&gt;

&lt;p&gt;Algolia Agent Studio is used here to support the conversational half of the experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ma3e1sb05vj1cwq46gz.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ma3e1sb05vj1cwq46gz.png%3Fv%3D2026" alt="Screenshot Algolia Agent Studio iterative agent testing" width="2122" height="1426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The agent operates within clear boundaries. It answers only from indexed records and blog content. It generates follow-up prompts only when the system knows those questions are answerable. Its role is not to impress, but to keep movement intentional.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dry wit is allowed. A little sharpness is encouraged. Making fun of me is absolutely permitted.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Guessing is not.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To support this, structured records and long-form blog content are retrieved separately. This avoids flattening narrative context into truncated fields and allows each source to be tuned independently for accuracy, latency, and scope.&lt;/p&gt;
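&lt;p&gt;A minimal sketch of that split, assuming two hypothetical sources with independently tuned limits (the names and numbers here are illustrative, not the production configuration):&lt;/p&gt;

```python
# Hedged sketch of retrieving structured records and long-form posts as
# separate sources so each can be tuned independently. Limits and field
# names are illustrative only.

def search_source(docs, query, limit, max_chars=None):
    hits = [d for d in docs if query.lower() in d["text"].lower()][:limit]
    if max_chars is not None:
        # Structured records stay short; long-form posts keep full prose.
        hits = [dict(d, text=d["text"][:max_chars]) for d in hits]
    return hits

def retrieve_context(records, posts, query):
    return {
        # Tight, truncated facts from the structured index.
        "records": search_source(records, query, limit=5, max_chars=200),
        # Full narrative context from blog posts: fewer hits, no trimming.
        "posts": search_source(posts, query, limit=2),
    }
```

&lt;p&gt;Keeping the two sources separate is what avoids flattening narrative into truncated fields: each one gets its own accuracy, latency, and scope dials.&lt;/p&gt;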

&lt;p&gt;Rather than describing the agent abstractly, I made its constraints explicit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## SELF_MODEL&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; Ruckus is a constrained system interface with opinions.
&lt;span class="p"&gt;-&lt;/span&gt; Ruckus is not a person.
&lt;span class="p"&gt;-&lt;/span&gt; Ruckus is not Ashley.
&lt;span class="p"&gt;-&lt;/span&gt; Ruckus did not author the work described.
&lt;span class="p"&gt;-&lt;/span&gt; Ruckus operates exclusively on retrieved context provided by the system.
&lt;span class="p"&gt;-&lt;/span&gt; Wit is permitted; invention is not.

&lt;span class="gu"&gt;### HUMOR_RULES&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; Humor is dry, situational, and brief.
&lt;span class="p"&gt;-&lt;/span&gt; Humor never carries information on its own.
&lt;span class="p"&gt;-&lt;/span&gt; Jokes appear only after facts land.
&lt;span class="p"&gt;-&lt;/span&gt; Light teasing of Ashley’s recurring patterns is allowed and observational.
&lt;span class="p"&gt;-&lt;/span&gt; Never condescending. Never explanatory.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 The full prompt file is stored in the repo at &lt;a href="https://github.com/anchildress1/system-notes/tree/v2.0.0/apps/api/algolia/algolia_prompt.md" rel="noopener noreferrer"&gt;System Notes v2.0.0—apps/api/algolia/algolia_prompt.md&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Prompted Suggestions 🧭
&lt;/h3&gt;

&lt;p&gt;Open-ended chat tends to drift.&lt;/p&gt;

&lt;p&gt;To prevent that, the interface includes prompted follow-up suggestions that act as navigational signposts rather than guesses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The system only suggests questions it already knows how to answer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These prompts are derived directly from retrieved results. They narrow scope, reinforce direction, and keep the conversation grounded in what actually exists. Prompting here doesn’t add intelligence. It removes ambiguity.&lt;/p&gt;
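&lt;p&gt;The rule is simple enough to sketch: derive candidate prompts from the current hits, then keep only the ones a retrieval pass can actually answer. Function names here are hypothetical, not the real suggestion pipeline:&lt;/p&gt;

```python
# Sketch of "only suggest questions the system can already answer".
# Candidate prompts come from retrieved hits and survive only if a
# retrieval pass for them returns results. Names are illustrative.

def retrieve(docs, query):
    return [d for d in docs if query.lower() in d["text"].lower()]

def suggest_followups(docs, hits, max_suggestions=3):
    suggestions = []
    for hit in hits:
        candidate = hit["topic"]
        # Keep the prompt only if it is answerable from the same corpus.
        if retrieve(docs, candidate):
            suggestions.append(f"What about {candidate}?")
        if len(suggestions) == max_suggestions:
            break
    return suggestions
```

&lt;p&gt;A prompt that would dead-end is never shown, which is what keeps the signposts honest.&lt;/p&gt;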

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6haihowhmvya3qudcnh9.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6haihowhmvya3qudcnh9.png%3Fv%3D2026" alt="Screenshot of Algolia Agent Studio for Ruckus prompt suggestions" width="1244" height="900"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 The full prompt for suggestions is stored in the repo at &lt;a href="https://github.com/anchildress1/system-notes/tree/v2.0.0/apps/api/algolia/suggestions_prompt.md" rel="noopener noreferrer"&gt;System Notes v2.0.0—apps/api/algolia/suggestions_prompt.md&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Retrieval Beyond Indexed Records 📚
&lt;/h3&gt;

&lt;p&gt;This isn’t just chat. This is multi-source retrieval with intent. Some answers only exist as prose.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqw6w0t4xq4w5anp8un89.png%3Fv%3D2026" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqw6w0t4xq4w5anp8un89.png%3Fv%3D2026" alt="Screenshot of Algolia Agent Studio for Ruckus search tool" width="878" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The agent can retrieve long-form blog content directly, allowing conversational navigation to move between indexed decisions and narrative explanations without losing context or inventing summaries. If a post doesn’t answer the question, it isn’t surfaced.&lt;/p&gt;

&lt;p&gt;This allows movement from quick lookup into deeper explanation without breaking trust.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgeh17dv7et38s8vd1ok0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgeh17dv7et38s8vd1ok0.png" alt="Screenshot of Algolia Agent Studio for Ruckus custom blog search tool" width="800" height="947"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 The blog search performs a similar job to its sister web crawler, but allows the agent to pull the entire blog post as context instead of trimming it for quicker indexing. Yes—the tokens are worth it.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why Fast Retrieval Matters 🏎️
&lt;/h2&gt;

&lt;p&gt;Many conversational systems hide slow or uncertain retrieval behind fluent language. This one doesn’t try to. Conversational flow only works when the foundation underneath it is solid.&lt;/p&gt;

&lt;p&gt;Without a fast, well-structured index layer, responses become slower and less reliable. Latency increases. Ambiguity creeps in. The system starts compensating instead of respecting boundaries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conversation works here because retrieval resolves first.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the system can’t answer, it stops. There is no speculative reasoning loop and no attempt to sound helpful for its own sake. Chat doesn’t replace search in this build. It reveals it, one step at a time.&lt;/p&gt;
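&lt;p&gt;That gate is the whole trick, sketched below under the assumption that &lt;code&gt;answer_from&lt;/code&gt; stands in for whatever model call the real system makes:&lt;/p&gt;

```python
# Minimal sketch of "retrieval resolves first": generation only runs when
# retrieval produced grounding; otherwise the system declines outright.
# answer_from() is a hypothetical stand-in for the actual model call.

def answer_from(context, query):
    return f"Based on {len(context)} sources: answer to {query!r}."

def grounded_answer(docs, query):
    context = [d for d in docs if query.lower() in d["text"].lower()]
    if not context:
        # No speculative fallback loop; stopping is the designed behavior.
        return "I can't answer that from the indexed content."
    return answer_from(context, query)
```

&lt;p&gt;The decline branch runs before any generation happens, so there is nothing fluent to hide behind when retrieval comes back empty.&lt;/p&gt;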




&lt;h2&gt;
  
  
  What’s Next 🔮
&lt;/h2&gt;

&lt;p&gt;Time was the primary constraint for this entry. When given the choice, I prioritized reliable conversational paths over feature breadth.&lt;/p&gt;

&lt;p&gt;Next steps are clear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Finish wiring smart URL state across all conversational actions
&lt;/li&gt;
&lt;li&gt;Expand event tracking to observe real navigation patterns
&lt;/li&gt;
&lt;li&gt;Continue tightening response latency
&lt;/li&gt;
&lt;li&gt;Refine fallback behavior when conversational paths dead-end
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This system stands on the same retrieval foundation as &lt;a href="https://dev.to/anchildress1/from-static-portfolio-to-indexed-decisions-46bf"&gt;my non-conversational entry&lt;/a&gt;. The difference is not what the system knows: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s how users move through it.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🛡️ Built With a Human at the Wheel
&lt;/h2&gt;

&lt;p&gt;This post was written by me, with ChatGPT used as a drafting and editing partner to help restructure sections, tighten language, and improve clarity while preserving intent and voice.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>algoliachallenge</category>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>I genuinely meant it when I said I was done. I even had that rare, fragile sense of closure. Then I noticed one small thing, which led to another, and somehow became a full pass of “minor” edits. I failed at stopping, but this time I really did. 🚧🚦</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Wed, 28 Jan 2026 06:15:35 +0000</pubDate>
      <link>https://forem.com/anchildress1/i-genuinely-meant-it-when-i-said-i-was-done-i-even-had-that-rare-fragile-sense-of-closure-then-i-1cna</link>
      <guid>https://forem.com/anchildress1/i-genuinely-meant-it-when-i-said-i-was-done-i-even-had-that-rare-fragile-sense-of-closure-then-i-1cna</guid>
      <description>&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e" class="crayons-story__hidden-navigation-link"&gt;My Portfolio Doesn’t Live on the Page 🚫📃&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
      &lt;a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e" class="crayons-article__context-note crayons-article__context-note__feed"&gt;&lt;p&gt;New Year, New You Portfolio Challenge Submission&lt;/p&gt;

&lt;/a&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/anchildress1" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3224358%2F7f675c78-6aa0-466a-a5a7-c3e35440d53a.png" alt="anchildress1 profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/anchildress1" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Ashley Childress
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Ashley Childress
                
              
              &lt;div id="story-author-preview-content-3190808" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/anchildress1" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3224358%2F7f675c78-6aa0-466a-a5a7-c3e35440d53a.png" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Ashley Childress&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Jan 24&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e" id="article-link-3190808"&gt;
          My Portfolio Doesn’t Live on the Page 🚫📃
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/devchallenge"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;devchallenge&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/googleaichallenge"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;googleaichallenge&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/portfolio"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;portfolio&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/gemini"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;gemini&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/exploding-head-daceb38d627e6ae9b730f36a1e390fca556a4289d5a41abb2c35068ad3e2c4b5.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/multi-unicorn-b44d6f8c23cdd00964192bedc38af3e82463978aa611b4365bd33a0f1f4f3e97.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;19&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/anchildress1/my-portfolio-doesnt-live-on-the-page-218e#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            8 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;




</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>portfolio</category>
      <category>gemini</category>
    </item>
    <item>
      <title>My Portfolio Doesn’t Live on the Page 🚫📃</title>
      <dc:creator>Ashley Childress</dc:creator>
      <pubDate>Sat, 24 Jan 2026 01:16:49 +0000</pubDate>
      <link>https://forem.com/anchildress1/my-portfolio-doesnt-live-on-the-page-218e</link>
      <guid>https://forem.com/anchildress1/my-portfolio-doesnt-live-on-the-page-218e</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/new-year-new-you-google-ai-2025-12-31"&gt;New Year, New You Portfolio Challenge Presented by Google AI&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 &lt;strong&gt;TL;DR for Judges:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Live portfolio deployed on &lt;strong&gt;Google Cloud Run&lt;/strong&gt;, embedded below

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://system-notes-ui-288489184837.us-east1.run.app" rel="noopener noreferrer"&gt;https://system-notes-ui-288489184837.us-east1.run.app&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://anchildress1.dev" rel="noopener noreferrer"&gt;https://anchildress1.dev&lt;/a&gt; (canonical)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Embedded below with required label: &lt;code&gt;dev-tutorial=devnewyear2026&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Source + system notes linked

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/anchildress1/system-notes/releases/tag/system-notes-v1.2.0" rel="noopener noreferrer"&gt;System Notes v1.2.0&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Focus: AI-assisted system design, not a static page&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  About Me 👩🏻‍🦰
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How I Ended Up Here 🌀
&lt;/h3&gt;

&lt;p&gt;For those of you who don’t know me yet, or who haven’t wandered into one of my other posts and stayed longer than you meant to—hey, I’m Ashley. I’m a very opinionated, very stubborn, and &lt;em&gt;happily backend-only&lt;/em&gt; software engineer, which means I spend a fair amount of time actively running away from anything that ends in the letters 'UI'. That detail matters, because it makes everything that follows a little ironic.&lt;/p&gt;

&lt;p&gt;I don’t do hackathons, which I wrote about in &lt;a href="https://dev.to/anchildress1/the-hackathon-i-swore-off-and-the-exhaustion-that-mostly-compiled-c4l"&gt;this post&lt;/a&gt;. I really don’t do New Year’s resolutions either! I fundamentally disagree with the idea that growth needs a ceremonial date on the calendar. If something is broken, I want to know now. If it needs fixing, I want to fix it now. Harsh feedback today beats polite intentions tomorrow.&lt;/p&gt;

&lt;p&gt;This wasn’t about resolutions, and it wasn’t even about a portfolio refresh in isolation. If I had seen this challenge on its own, I probably would have kept scrolling. What stopped me was the pairing with the Algolia challenge, because together they finally lined up with something I’d been meaning to build for a while and hadn’t prioritized. I gave myself a weekend not because I expected something spectacular, but because the tools I wanted to learn finally matched something I actually needed to build, and the timing felt intentional rather than forced.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚖️ &lt;strong&gt;TL;DR:&lt;/strong&gt; This wasn’t a month-long build. It was one focused weekend, followed by exactly four (and a half) evenings of intentional obsession over the things you &lt;em&gt;won’t&lt;/em&gt; see on the page.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqxh5x11fkmhrsr3vcdq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqxh5x11fkmhrsr3vcdq.png" alt="Human-crafted, AI-edited badge"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  The Problem Beneath the Prompt 🧠
&lt;/h3&gt;

&lt;p&gt;For me, this was an AI challenge first and a portfolio challenge second. I love my job, I’m not looking for recruiters, and I’m not trying to market myself for a career move. This site exists for experimentation and self-amusement, and it only resembles a portfolio because that’s the shape the challenge happens to take.&lt;/p&gt;

&lt;p&gt;I approached the work in two deliberate parts. The first was finally learning Antigravity, which I’d downloaded, glanced at, and then avoided actually using. Pairing that with the Google AI Pro subscription gave me enough room to experiment freely, and in practice that meant leaning heavily on Google Gemini Pro 3 with high reasoning enabled. Every attempt to dial it back introduced subtle breakage, so I accepted higher reasoning as the right tool for this job.&lt;/p&gt;

&lt;p&gt;The second part was laying early groundwork for the Algolia challenge by introducing a chatbot up front, rather than bolting it on later. Throughout all of this, ChatGPT stayed firmly in a research-and-orchestration role behind the scenes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚖️ &lt;strong&gt;TL;DR:&lt;/strong&gt; I treated this as an AI challenge first—learning Antigravity now and laying intentional groundwork for the upcoming &lt;a href="https://dev.to/devteam/join-the-algolia-agent-studio-challenge-3000-in-prizes-4eli?bb=259943"&gt;Algolia challenge&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Portfolio 💼
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Go Look First 🚦
&lt;/h3&gt;

&lt;p&gt;No accounts, no setup, no ceremony. Click the hero text and ask Ruckus literally anything about me or the system. Before I explain what I built or why certain decisions look the way they do, I want you to actually look at it. Click around. Poke at the chatbot. Get a feel for it without narration first. Once you’ve seen it in motion, the rest of this post exists to give you the context for all the work that you can’t see.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This site has been updated with changes made since the implementation of features for the &lt;a href="https://dev.to/challenges/algolia-2026-01-07"&gt;Algolia challenge&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://system-notes-ui-288489184837.us-east1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;




&lt;blockquote&gt;
&lt;p&gt;Explore it by clicking, asking, and navigating—this system is designed to respond, not be scanned.&lt;/p&gt;

&lt;p&gt;🦄 The UI runs as its own Cloud Run service alongside the API. While &lt;a href="//anchildress1.dev"&gt;anchildress1.dev&lt;/a&gt; is its canonical domain, the UI is accessed for this challenge at &lt;a href="https://system-notes-ui-288489184837.us-east1.run.app" rel="noopener noreferrer"&gt;https://system-notes-ui-288489184837.us-east1.run.app&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Judge Validation Snapshot 📋
&lt;/h3&gt;

&lt;p&gt;Below is a quick, explicit checklist aligned to the judging criteria, for judges who want to validate requirements without hunting through prose.&lt;/p&gt;

&lt;h4&gt;
  
  
  ✅ Innovation / Creativity
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Novel interactive elements (intentional visual effects, chatbot interaction, theme song).&lt;/li&gt;
&lt;li&gt;Purposeful use of AI tools (Antigravity, Google Gemini Pro 3, ChatGPT).&lt;/li&gt;
&lt;li&gt;Clear personal voice and narrative arc.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  ✅ Technical Strength
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Live Cloud Run deployment embedded in this post&lt;/li&gt;
&lt;li&gt;Deployment includes required challenge label: &lt;code&gt;dev-tutorial=devnewyear2026&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;All links, embeds, and interactive elements function correctly.&lt;/li&gt;
&lt;li&gt;AI usage includes explicit guardrails and evaluation by outcomes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  ✅ UX / Design
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Clear navigation and section hierarchy.&lt;/li&gt;
&lt;li&gt;Accessible, readable visual design.&lt;/li&gt;
&lt;li&gt;Interactive elements are responsive and controlled.&lt;/li&gt;
&lt;li&gt;Performance remains snappy with smooth animations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbtturbko16xmwf5l6dkr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbtturbko16xmwf5l6dkr.png" alt="Screenshot of Lighthouse performance results for desktop, all 100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🦄 &lt;em&gt;Yes, I promise—it’s all here, and then some.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Technical Stack 💾
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; Next.js (AI-generated UI; intentionally minimal and read-only)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Python (AI-generated; chosen deliberately over JavaScript; Django was considered but deferred to avoid stacking two new frameworks in a weekend challenge)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Generation:&lt;/strong&gt; Antigravity with Gemini Pro 3 (high-reasoning mode, intentionally constrained) and AI Pro trial subscription&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chat Interface:&lt;/strong&gt; Ruckus (GPT-5.2, no memory, bounded knowledge base)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment:&lt;/strong&gt; Google Cloud Run (live service with required &lt;code&gt;dev&lt;/code&gt; label)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing:&lt;/strong&gt; Playwright (E2E), unit and integration tests, Lighthouse performance and accessibility checks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation:&lt;/strong&gt; GitHub Actions for validation and deployment, explicit AI-checks command, release-please configured for workflow automation&lt;/li&gt;
&lt;/ul&gt;
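&lt;p&gt;For context, the challenge label gets attached at deploy time. This is a hedged sketch rather than my actual deploy command: the service name, image path, and region are placeholders.&lt;/p&gt;

```shell
# Deploy a container to Cloud Run with the challenge label attached.
# Service name, image, and region are placeholders, not the real values.
gcloud run deploy system-notes \
  --image=gcr.io/PROJECT_ID/system-notes:latest \
  --region=us-east1 \
  --allow-unauthenticated \
  --labels=dev-tutorial=devnewyear2026
```

&lt;p&gt;Labels set with &lt;code&gt;--labels&lt;/code&gt; live on the Cloud Run service resource itself, so they survive revisions and can be inspected with &lt;code&gt;gcloud run services describe&lt;/code&gt;.&lt;/p&gt;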

&lt;blockquote&gt;
&lt;p&gt;🦄 Source for &lt;strong&gt;v1.2.0 of System Notes&lt;/strong&gt; is available on &lt;a href="https://github.com/anchildress1/system-notes/releases/tag/system-notes-v1.2.0" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; for traceability and review.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  How I Build It 🏗️
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Below the Surface (Where the Real Work Lives) 🧱
&lt;/h3&gt;

&lt;p&gt;Most of what I built for this project will never be obvious from any single page. The structure, accessibility decisions, performance work, mobile behavior, and AI-facing metadata all live below the surface. If you’re curious, there are plenty of ways to see it in action: run a Lighthouse report, check the accessibility scores, view the site on a different device, or inspect the sitemap. You can also chat with Ruckus, the built-in assistant that knows far more about me and my work than is probably reasonable for a proof of concept.&lt;/p&gt;

&lt;p&gt;The goal wasn’t to hide complexity, but to place it where it belongs—so the site can be crawled intentionally by AI while still feeling coherent and human to anyone reading it.&lt;/p&gt;

&lt;p&gt;The chatbot implementation itself is intentionally straightforward. Its strength comes from the information and constraints I gave it, not from hidden tricks or clever illusions. It runs on GPT-5.2 with a small knowledge base and no memory, and it’s designed to be helpful, honest, and conversational rather than impressive on paper.&lt;/p&gt;
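&lt;p&gt;As an illustration (not the actual Ruckus code, which isn't reproduced here), a stateless, knowledge-bounded request can be built by sending only the system prompt, the retrieved facts, and the current message, with no conversation history. The model name and message shape below follow common chat-completion APIs and are assumptions for the sketch:&lt;/p&gt;

```python
# Hypothetical sketch of a stateless, knowledge-bounded chat request.
# The model name and message format are illustrative placeholders,
# not the site's actual implementation.

SYSTEM_PROMPT = (
    "You are Ruckus, an AI assistant. Answer only from the provided "
    "facts. If the answer is not in the facts, say so plainly."
)

def build_request(knowledge: list[str], user_message: str) -> dict:
    """Build a single-turn request: no prior turns are ever included."""
    facts = "\n".join(f"- {fact}" for fact in knowledge)
    return {
        "model": "gpt-5.2",  # placeholder model name
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "system", "content": f"Known facts:\n{facts}"},
            {"role": "user", "content": user_message},  # current turn only
        ],
    }

req = build_request(["Ashley is systems-focused."], "What is Ashley's focus?")
```

&lt;p&gt;Because the request is rebuilt from scratch every turn, "no memory" isn't a behavior the model has to obey: it's a property the caller enforces.&lt;/p&gt;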

&lt;p&gt;Everything here is deployed and tested deliberately. The polish you see is intentional, and the things you don’t see are doing just as much work.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚖️ &lt;strong&gt;TL;DR:&lt;/strong&gt; The visible site is only a small part of the work. Most of the effort went into structure, constraints, accessibility, and coordinating multiple AI systems under real-world conditions.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Meet Ruckus: Production AI 🧪
&lt;/h3&gt;

&lt;p&gt;Ruckus is a constrained, production-deployed assistant. It responds using declared system data, not free-form invention. The goal here isn’t to prove that AI was used, but that it was &lt;em&gt;designed&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;What powers Ruckus isn’t a grab-bag of “write me some code” prompts. It’s a set of system-level instructions that define what the assistant is allowed to know, say, and explicitly refuse to guess. Those constraints are what make it usable in a live environment.&lt;/p&gt;

&lt;p&gt;Below are literal excerpts from the primary system prompt. These aren’t paraphrases or examples. They’re the rules that actually govern how the chatbot embedded in this site behaves.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;### Hard Guardrails (Non-Negotiable)&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Ruckus is an AI assistant, not Ashley Childress
&lt;span class="p"&gt;-&lt;/span&gt; Ruckus is not the portfolio system
&lt;span class="p"&gt;-&lt;/span&gt; Never speak in first-person as Ashley
&lt;span class="p"&gt;-&lt;/span&gt; No roleplay or impersonation
&lt;span class="p"&gt;-&lt;/span&gt; No hallucination, guessing, or inference
&lt;span class="p"&gt;-&lt;/span&gt; No filler
&lt;span class="p"&gt;-&lt;/span&gt; Default to &lt;span class="gs"&gt;**short answers**&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Priority: &lt;span class="gs"&gt;**accuracy &amp;gt; clarity &amp;gt; completeness**&lt;/span&gt;
Provide &lt;span class="gs"&gt;**highlights first**&lt;/span&gt;
Expand &lt;span class="gs"&gt;**only**&lt;/span&gt; when the user explicitly asks for more detail
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;If a question falls outside explicit, known context, Ruckus must:
&lt;span class="p"&gt;1.&lt;/span&gt; State lack of knowledge plainly.
&lt;span class="p"&gt;2.&lt;/span&gt; Attribute the gap correctly to missing input from Ashley.
&lt;span class="p"&gt;3.&lt;/span&gt; Redirect the user to a nearby, valid topic.
&lt;span class="p"&gt;4.&lt;/span&gt; Keep the response short.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
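&lt;p&gt;Those four steps map naturally to a small guard in code. The real prompt enforces this behavior in natural language, not Python, so the sketch below is purely hypothetical: the topic set and wording are made up for illustration.&lt;/p&gt;

```python
# Hypothetical guard mirroring the out-of-scope rule above.
# KNOWN_TOPICS and the response wording are illustrative only.

KNOWN_TOPICS = {"portfolio", "projects", "experience"}

def answer_or_redirect(topic: str) -> str:
    """Return an answer for known topics, or a short, honest redirect."""
    if topic in KNOWN_TOPICS:
        return f"Here are the highlights on {topic}."
    # 1. State lack of knowledge plainly.
    # 2. Attribute the gap to missing input from Ashley.
    # 3. Redirect to a nearby, valid topic.
    # 4. Keep the response short.
    nearby = sorted(KNOWN_TOPICS)[0]
    return (
        f"I don't have information about {topic}; Ashley hasn't "
        f"given me that context. Want to hear about {nearby} instead?"
    )
```

&lt;p&gt;The point is that refusal is a designed path with its own shape, not a failure mode the model falls into.&lt;/p&gt;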



&lt;blockquote&gt;
&lt;p&gt;🦄 These constraints are exactly what make the chatbot predictable and trustworthy in practice. Everything else in the full prompt exists to support these boundaries.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  What I'm Most Proud Of 💖
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What This Site Is Actually Doing ✨
&lt;/h3&gt;

&lt;p&gt;When someone first lands on the page, the glitter bomb is doing real work (if you missed it, click the hero text). It sets tone immediately, signals playfulness, and gives my ADHD something to engage with while I’m evaluating Antigravity’s output by clicking, scrolling, and retriggering effects.&lt;/p&gt;

&lt;p&gt;That choice came with tradeoffs. I wanted the fun without sacrificing performance or accessibility, which forced constraints I don’t usually deal with as someone who avoids UI work. What makes this project different from most things I’ve built is that I didn’t review a single line of code. Instead, I worked primarily with Google Gemini Pro 3 in high-reasoning mode and evaluated outcomes I could see, test, and benchmark.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚖️ &lt;strong&gt;TL;DR:&lt;/strong&gt; This site is a curated systems playground. The playful surface is intentional; the real experiment was evaluating AI-built results, not reviewing code.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  What Changed Once I Stopped Touching It 🔄
&lt;/h3&gt;

&lt;p&gt;When I first dove into Antigravity, I was underwhelmed and couldn’t see how my one‑weekend plan was supposed to work. Once I stopped poking and let Antigravity and Gemini Pro 3 actually run, that opinion shifted quickly—they performed far better than I expected.&lt;/p&gt;

&lt;p&gt;The hardest part wasn’t starting, it was stopping. I’m a perfectionist, and without boundaries I’ll keep refining indefinitely. The weekend build quietly stretched into the following week until I moved on to the Algolia challenge and forced myself to declare a version finished.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚖️ &lt;strong&gt;TL;DR:&lt;/strong&gt; The hardest part wasn’t learning Antigravity—it was knowing when to say "complete enough".&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Why This Counts as Forward Motion 🚧
&lt;/h3&gt;

&lt;p&gt;This project didn’t change who I am as an engineer. It clarified it. I’m systems-focused, outcome-driven, and willing to stop reviewing code once a system can be evaluated by behavior and performance alone. Defining that boundary—and enforcing it—is what makes this forward motion instead of a one-off experiment.&lt;/p&gt;

&lt;p&gt;Seeing it hold up once it was deployed, shared, and interacted with by real people made that boundary tangible instead of theoretical. So overall, I'm calling this a success. Still—my work will stay at the systems layer. &lt;em&gt;A deliberate choice.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚖️ &lt;strong&gt;TL;DR:&lt;/strong&gt; I now treat systems-level evaluation, not code review, as a first-class decision point when working with Antigravity + Gemini Pro 3.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🛡️ End of the Training Loop
&lt;/h2&gt;

&lt;p&gt;This post was written by a human, with AI used intentionally as a collaborator for research, experimentation, and system construction. All design decisions, judgments, and conclusions remain human-led.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Deployed on Google Cloud Run · Embedded per challenge requirements · Public and unauthenticated&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>portfolio</category>
      <category>gemini</category>
    </item>
  </channel>
</rss>
