<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: southy404</title>
    <description>The latest articles on Forem by southy404 (@southy404).</description>
    <link>https://forem.com/southy404</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3809619%2F33291cb3-329d-4655-9e52-c262e69cc242.png</url>
      <title>Forem: southy404</title>
      <link>https://forem.com/southy404</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/southy404"/>
    <language>en</language>
    <item>
      <title>Gemini Footprint Tracker — See the Real Cost of Every AI Prompt</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Sat, 18 Apr 2026 08:03:11 +0000</pubDate>
      <link>https://forem.com/southy404/gemini-footprint-tracker-see-the-real-cost-of-every-ai-prompt-3j7o</link>
      <guid>https://forem.com/southy404/gemini-footprint-tracker-see-the-real-cost-of-every-ai-prompt-3j7o</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for &lt;a href="https://dev.to/challenges/weekend-2026-04-16"&gt;Weekend Challenge: Earth Day Edition&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;Every time you send a message to an AI, it consumes water and energy and emits CO₂. Most people have no idea how much. &lt;strong&gt;Gemini Footprint Tracker&lt;/strong&gt; makes that cost visible — in real time, per request, with full transparency about how the numbers are calculated.&lt;/p&gt;

&lt;p&gt;You bring your own Gemini API key, pick a model, and start chatting. After every response the tracker shows how much water and CO₂ that exchange cost — scaled by token count and model weight. A community panel aggregates anonymous footprint data from all users via Supabase, so you can see the collective impact grow in real time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; this is an awareness and transparency project, not an official measurement tool. The estimates are based on Google's published baseline for a median Gemini Apps text prompt, combined with transparent app-side scaling logic. Every assumption is documented — what comes from Google, what is estimated, and where the model falls short. The &lt;code&gt;/learn&lt;/code&gt; page inside the app explains the full methodology.&lt;/p&gt;

&lt;p&gt;The goal is simple: make something invisible a little more visible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;🔗 &lt;strong&gt;&lt;a href="https://gemini-footprint-tracker.vercel.app" rel="noopener noreferrer"&gt;Live: gemini-footprint-tracker.vercel.app&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqn6w99b2st58nx7rk0n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqn6w99b2st58nx7rk0n.png" alt="Screenshot of Gemini Footprint Tracker" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll need a free &lt;a href="https://aistudio.google.com/app/apikey" rel="noopener noreferrer"&gt;Google AI Studio API key&lt;/a&gt; to send messages. The key stays in your browser and is only ever sent directly to Gemini; it never touches any other server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/southy404" rel="noopener noreferrer"&gt;
        southy404
      &lt;/a&gt; / &lt;a href="https://github.com/southy404/gemini-footprint-tracker" rel="noopener noreferrer"&gt;
        gemini-footprint-tracker
      &lt;/a&gt;
    &lt;/h2&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;🌍 Gemini Footprint Tracker&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;An awareness project that makes the environmental cost of AI visible — tracking water, CO₂, and energy usage per Gemini API request in real time.&lt;/p&gt;
&lt;p&gt;Built for the &lt;a href="https://dev.to" rel="nofollow"&gt;DEV Earth Day Challenge 2026&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;→ &lt;a href="https://gemini-footprint-tracker.vercel.app" rel="nofollow noopener noreferrer"&gt;Live Demo&lt;/a&gt;&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;What it does&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Every prompt you send to Gemini uses water and energy and emits CO₂. This tracker uses Gemini's usage metadata (token counts) combined with Google's official published baseline values to estimate the environmental footprint of each request — and aggregates it anonymously across all users via Supabase.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;💧 Water consumption per request (mL)&lt;/li&gt;
&lt;li&gt;☁️ CO₂ emissions per request (gCO₂e)&lt;/li&gt;
&lt;li&gt;⚡ Token-based scaling per model (Flash-Lite / Flash / Pro)&lt;/li&gt;
&lt;li&gt;📊 Community stats across all sessions&lt;/li&gt;
&lt;li&gt;🔒 Your API key stays local — never sent anywhere except directly to Gemini&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Stack&lt;/h2&gt;
&lt;/div&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;


&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Framework&lt;/td&gt;
&lt;td&gt;React 19 + TypeScript + Vite&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Styling&lt;/td&gt;
&lt;td&gt;Tailwind CSS v4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Animation&lt;/td&gt;
&lt;td&gt;Framer Motion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Backend&lt;/td&gt;
&lt;td&gt;Supabase (anonymous footprint logging)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;…&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/southy404/gemini-footprint-tracker" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stack:&lt;/strong&gt; React 19 + TypeScript + Vite, Tailwind CSS v4, Framer Motion, Supabase, Gemini API&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The estimation model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Google publicly reports that a median Gemini Apps text prompt uses &lt;strong&gt;0.26 mL&lt;/strong&gt; of water, emits &lt;strong&gt;0.03 gCO₂e&lt;/strong&gt;, and consumes &lt;strong&gt;0.24 Wh&lt;/strong&gt; of energy. Those are the only official numbers available. From there I built a token-based scaling model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WeightedTokens  = PromptTokens + ResponseTokens × 3.5
TokenScale      = max(0.2, WeightedTokens / 775)
WaterEstimate   = 0.26 × TokenScale × ModelMultiplier
CO₂Estimate     = 0.03 × TokenScale × ModelMultiplier
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The 3.5× output weight reflects that autoregressive decoding is significantly more compute-intensive than input prefill. The reference prompt (250 input + 150 output tokens) and the model multipliers (Flash-Lite: 0.85×, Flash: 1.0×, Pro: 1.35×) are documented approximations — not official Google values. The &lt;code&gt;/learn&lt;/code&gt; page inside the app makes this separation explicit: what is official, what is estimated, and where the numbers can't be trusted.&lt;/p&gt;
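
&lt;p&gt;A quick sanity check of the formula (using the multipliers above, which are documented approximations rather than official figures): the reference prompt of 250 input + 150 output tokens on Flash gives &lt;code&gt;WeightedTokens = 250 + 150 × 3.5 = 775&lt;/code&gt;, so &lt;code&gt;TokenScale = 1.0&lt;/code&gt; and the estimate collapses to Google's baseline of 0.26 mL and 0.03 gCO₂e. A longer exchange, say 500 input + 400 output tokens on Pro, scales up accordingly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WeightedTokens = 500 + 400 × 3.5    = 1900
TokenScale     = 1900 / 775         ≈ 2.45
WaterEstimate  = 0.26 × 2.45 × 1.35 ≈ 0.86 mL
CO₂Estimate    = 0.03 × 2.45 × 1.35 ≈ 0.099 gCO₂e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;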




&lt;p&gt;&lt;strong&gt;Community stats&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each request anonymously logs water and CO₂ to Supabase. The topbar shows live community totals — water consumed, CO₂ emitted, unique users tracked. The numbers update in real time across all sessions.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;UX decisions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The interface is built to feel like a normal AI chat — familiar composer, clean response layout, no dashboard clutter. That was a deliberate choice: AI resource usage is a topic that matters for everyone who uses these tools, not just people who go looking for environmental data. If it looks like a tracker, most people close it. If it looks like a chat, they stay.&lt;/p&gt;

&lt;p&gt;The footprint numbers appear quietly after each response — present, but not in your face. The community stats in the topbar give a sense of collective scale without being alarming. Transparency about estimates is built into the UI from the start: the helper text, the suggestion chips, and the &lt;code&gt;/learn&lt;/code&gt; page all reinforce that these are informed approximations, not ground truth.&lt;/p&gt;

&lt;p&gt;Other decisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API key stored in localStorage only, never transmitted anywhere except directly to Gemini&lt;/li&gt;
&lt;li&gt;Voice input via Web Speech API&lt;/li&gt;
&lt;li&gt;Animated transition between hero and chat state using Framer Motion's &lt;code&gt;layoutId&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Mobile-responsive throughout, including the KaTeX methodology page&lt;/li&gt;
&lt;li&gt;Earth background video from NASA-Imagery via Pixabay&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prize Categories
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Best use of Google Gemini&lt;/strong&gt; — The entire app is built around the Gemini API. Every message goes through &lt;code&gt;generateContent&lt;/code&gt;, and the response's &lt;code&gt;usageMetadata&lt;/code&gt; — prompt and candidate token counts — directly drives the footprint calculation. The model selector supports &lt;code&gt;gemini-2.5-flash-lite&lt;/code&gt;, &lt;code&gt;gemini-2.5-flash&lt;/code&gt;, and &lt;code&gt;gemini-2.5-pro&lt;/code&gt;, each with a distinct environmental multiplier. Gemini isn't a feature bolted on — it's the thing being measured.&lt;/p&gt;
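
&lt;p&gt;In code terms, the wiring is roughly this (a simplified sketch, not the exact app code; the &lt;code&gt;usageMetadata&lt;/code&gt; field names come from the Gemini API response):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response       = generateContent(model, messages)
promptTokens   = response.usageMetadata.promptTokenCount
responseTokens = response.usageMetadata.candidatesTokenCount
footprint      = estimate(promptTokens, responseTokens, multiplier[model])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;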

</description>
      <category>devchallenge</category>
      <category>weekendchallenge</category>
    </item>
    <item>
      <title>OpenBlob is evolving: better architecture, modern UI, and real-time transcripts</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Wed, 15 Apr 2026 16:16:29 +0000</pubDate>
      <link>https://forem.com/southy404/openblob-is-evolving-better-architecture-modern-ui-and-real-time-transcripts-28da</link>
      <guid>https://forem.com/southy404/openblob-is-evolving-better-architecture-modern-ui-and-real-time-transcripts-28da</guid>
<description>&lt;p&gt;Over the past few days, OpenBlob has changed a lot.&lt;/p&gt;

&lt;p&gt;Not just visually — but fundamentally. &lt;/p&gt;

&lt;p&gt;This is a proper progress update on where things are heading 👇&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 Quick recap
&lt;/h2&gt;

&lt;p&gt;OpenBlob is a &lt;strong&gt;local-first desktop AI companion&lt;/strong&gt; that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;lives on your desktop
&lt;/li&gt;
&lt;li&gt;understands your context
&lt;/li&gt;
&lt;li&gt;can see your screen (via vision models)
&lt;/li&gt;
&lt;li&gt;reacts in real-time
&lt;/li&gt;
&lt;li&gt;executes actions directly on your system
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 &lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/southy404/openblob" rel="noopener noreferrer"&gt;https://github.com/southy404/openblob&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🔧 Rebuilding the core (this was the big one)
&lt;/h2&gt;

&lt;p&gt;The biggest update isn’t something you see. It’s how everything works underneath. OpenBlob now has a much cleaner and more scalable structure:&lt;/p&gt;

&lt;h3&gt;
  
  
  Core pipeline
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;input (voice / text / screen)&lt;br&gt;
→ intent detection&lt;br&gt;
→ command router&lt;br&gt;
→ execution (local first)&lt;br&gt;
→ AI fallback if needed&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  What changed
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clear separation&lt;/strong&gt; of responsibilities&lt;/li&gt;
&lt;li&gt;Proper &lt;strong&gt;command routing&lt;/strong&gt; system&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modular capabilities&lt;/strong&gt; instead of chaos&lt;/li&gt;
&lt;li&gt;Easier to extend without breaking everything&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;This turns OpenBlob into something bigger than a chatbot: &lt;strong&gt;a runtime layer for your desktop.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🧩 Open-source friendly structure
&lt;/h2&gt;

&lt;p&gt;One goal became very clear: &lt;strong&gt;this needs to be hackable.&lt;/strong&gt; So the architecture is moving towards a module system like this:&lt;/p&gt;

&lt;p&gt;📁 &lt;code&gt;modules/&lt;/code&gt;&lt;br&gt;
↳ 📁 &lt;code&gt;discord/&lt;/code&gt;&lt;br&gt;
↳ 📁 &lt;code&gt;spotify/&lt;/code&gt;&lt;br&gt;
↳ 📁 &lt;code&gt;browser/&lt;/code&gt;&lt;br&gt;
↳ 📁 &lt;code&gt;system/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Each module:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;exposes commands&lt;/li&gt;
&lt;li&gt;runs locally&lt;/li&gt;
&lt;li&gt;can be extended independently&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;This makes it much easier to:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;build plugins&lt;/li&gt;
&lt;li&gt;integrate APIs&lt;/li&gt;
&lt;li&gt;experiment without touching the core&lt;/li&gt;
&lt;/ul&gt;
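
&lt;p&gt;In practice a module might look something like this (a hypothetical sketch of the direction, not the current API):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// modules/spotify (hypothetical)
commands:
  "play &lt;track&gt;"  -&gt; spotify.play(track)
  "pause"         -&gt; spotify.pause()
register(module)  // the core routes matching intents here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;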




&lt;h2&gt;
  
  
  🎨 New UI (cleaner, faster, more alive)
&lt;/h2&gt;

&lt;p&gt;The UI got a big upgrade:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Floating bubble interface&lt;/li&gt;
&lt;li&gt;Glassmorphism style&lt;/li&gt;
&lt;li&gt;Smoother, more organic animations&lt;/li&gt;
&lt;li&gt;Faster interaction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Interaction now feels like:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;CTRL + SPACE&lt;/code&gt; → instant open&lt;/li&gt;
&lt;li&gt;Global voice toggle&lt;/li&gt;
&lt;li&gt;Minimal friction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Less “tool”. More presence.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  💬 NEW: Just Chatting mode
&lt;/h2&gt;

&lt;p&gt;Sometimes you don’t want commands. You just want to talk. So OpenBlob now has a &lt;strong&gt;Just Chatting mode&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pure conversation with your AI companion&lt;/li&gt;
&lt;li&gt;No command routing&lt;/li&gt;
&lt;li&gt;No execution layer&lt;/li&gt;
&lt;li&gt;Just dialogue&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;This is important because:&lt;/strong&gt; the companion shouldn’t only do things — it should also &lt;em&gt;be there&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use cases:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Thinking out loud&lt;/li&gt;
&lt;li&gt;Asking questions&lt;/li&gt;
&lt;li&gt;Casual conversation&lt;/li&gt;
&lt;li&gt;Testing personality / tone&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🖼 Screenshot assistant (more usable now)
&lt;/h2&gt;

&lt;p&gt;The screen pipeline is getting more solid:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;screenshot&lt;br&gt;
→ OCR&lt;br&gt;
→ context extraction&lt;br&gt;
→ reasoning&lt;br&gt;
→ answer&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Already useful for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Debugging&lt;/li&gt;
&lt;li&gt;UI understanding&lt;/li&gt;
&lt;li&gt;Games&lt;/li&gt;
&lt;li&gt;Quick research&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Still improving — but getting reliable.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎙️ NEW: real-time transcript system
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia0.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExcG13cmR1ZWN0endpYm1pb2I2NGtpbmZwNHJweHZueDdlNTE3MGhwNSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2F4fN9saFvPDuJRxn5rH%2Fgiphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia0.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExcG13cmR1ZWN0endpYm1pb2I2NGtpbmZwNHJweHZueDdlNTE3MGhwNSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2F4fN9saFvPDuJRxn5rH%2Fgiphy.gif" alt="Alt Text" width="480" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is one of the biggest new additions. OpenBlob can now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Listen to system audio&lt;/li&gt;
&lt;li&gt;Listen to microphone input&lt;/li&gt;
&lt;li&gt;Generate live transcripts&lt;/li&gt;
&lt;li&gt;Store structured sessions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pipeline
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;audio (system / mic)&lt;br&gt;
→ transcription&lt;br&gt;
→ segmented timeline&lt;br&gt;
→ structured session&lt;br&gt;
→ saved as text&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  What it already works for
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Meetings (Meet, Zoom, etc.)&lt;/li&gt;
&lt;li&gt;YouTube / podcasts&lt;/li&gt;
&lt;li&gt;Lectures&lt;/li&gt;
&lt;li&gt;General audio capture&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🧪 Current prototype
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Live text appearing in real-time&lt;/li&gt;
&lt;li&gt;Segmented transcript blocks&lt;/li&gt;
&lt;li&gt;Session tracking&lt;/li&gt;
&lt;li&gt;Simple overlay UI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s still early. But it works.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔮 Where transcripts are going
&lt;/h2&gt;

&lt;p&gt;This is not just speech-to-text. Next steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📝 Meeting assistant&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Summaries&lt;/li&gt;
&lt;li&gt;Key points&lt;/li&gt;
&lt;li&gt;Action items&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;🧠 Memory layer&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Link transcripts to context&lt;/li&gt;
&lt;li&gt;Searchable history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;⚡ Real-time help&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explain while listening&lt;/li&gt;
&lt;li&gt;Highlight important info&lt;/li&gt;
&lt;li&gt;Suggest responses&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  ⚡ Philosophy (still the same)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Local-first&lt;/li&gt;
&lt;li&gt;Context &amp;gt; Prompt&lt;/li&gt;
&lt;li&gt;System-level AI&lt;/li&gt;
&lt;li&gt;Playful + useful&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🧪 Current state
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Still experimental&lt;/li&gt;
&lt;li&gt;Still buggy sometimes&lt;/li&gt;
&lt;li&gt;Evolving very fast&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;But now:&lt;/strong&gt; Much better structure, clearer direction, and easier to contribute.&lt;/p&gt;




&lt;h2&gt;
  
  
  🤝 If you want to join
&lt;/h2&gt;

&lt;p&gt;Now is actually a great time. You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build modules (Discord, Spotify, browser, etc.)&lt;/li&gt;
&lt;li&gt;Improve transcription&lt;/li&gt;
&lt;li&gt;Design UI&lt;/li&gt;
&lt;li&gt;Experiment with AI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 &lt;strong&gt;Join here:&lt;/strong&gt; &lt;a href="https://github.com/southy404/openblob" rel="noopener noreferrer"&gt;https://github.com/southy404/openblob&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Final thought
&lt;/h2&gt;

&lt;p&gt;I’m starting to believe the future of AI is not a chat window in a browser. &lt;/p&gt;

&lt;p&gt;But something that &lt;strong&gt;lives on your system&lt;/strong&gt;, &lt;strong&gt;understands your context&lt;/strong&gt;, and can &lt;strong&gt;both act and talk&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;OpenBlob is slowly getting there.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>agents</category>
      <category>github</category>
    </item>
    <item>
      <title>I’m building a local AI desktop companion that sees your screen — and you can help shape it</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Thu, 09 Apr 2026 17:57:15 +0000</pubDate>
      <link>https://forem.com/southy404/im-building-a-local-ai-desktop-companion-that-sees-your-screen-and-you-can-help-shape-it-2ibh</link>
      <guid>https://forem.com/southy404/im-building-a-local-ai-desktop-companion-that-sees-your-screen-and-you-can-help-shape-it-2ibh</guid>
      <description>&lt;p&gt;Most AI tools feel disconnected.&lt;/p&gt;

&lt;p&gt;They don’t see your screen.&lt;br&gt;
They don’t understand what you're doing.&lt;/p&gt;

&lt;p&gt;So I built one that does.&lt;/p&gt;




&lt;h2&gt;
  
  
  Meet OpenBlob
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp72353n8tuipp9evr1qj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp72353n8tuipp9evr1qj.gif" alt="OpenBlob desktop AI companion showing animated blob avatar, floating UI, and context-aware interaction on Windows desktop" width="480" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;open-source, local-first desktop AI companion for Windows&lt;/strong&gt; that doesn’t just respond — it &lt;strong&gt;lives on your desktop&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;👉 GitHub: &lt;a href="https://github.com/southy404/openblob" rel="noopener noreferrer"&gt;https://github.com/southy404/openblob&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;understand what app you’re using&lt;/li&gt;
&lt;li&gt;analyze screenshots&lt;/li&gt;
&lt;li&gt;help inside games, apps, and browsers&lt;/li&gt;
&lt;li&gt;react visually with an animated companion&lt;/li&gt;
&lt;li&gt;and yes… even &lt;strong&gt;play hide and seek with you&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The problem with current AI assistants
&lt;/h2&gt;

&lt;p&gt;Most tools today are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cloud-dependent&lt;/li&gt;
&lt;li&gt;context-blind&lt;/li&gt;
&lt;li&gt;static&lt;/li&gt;
&lt;li&gt;not fun to use&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They don’t feel like part of your system.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 It understands context
&lt;/h2&gt;

&lt;p&gt;OpenBlob looks at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;active window&lt;/li&gt;
&lt;li&gt;app name&lt;/li&gt;
&lt;li&gt;window title&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So if you’re in a game, it knows.&lt;br&gt;
If you're debugging, it adapts.&lt;/p&gt;

&lt;p&gt;This is where things start to feel different.&lt;/p&gt;




&lt;h2&gt;
  
  
  🖼 It can see your screen
&lt;/h2&gt;

&lt;p&gt;You can take a screenshot and it will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;extract visible text&lt;/li&gt;
&lt;li&gt;detect what you're looking at&lt;/li&gt;
&lt;li&gt;generate a &lt;strong&gt;real search query&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;explain what's going on
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Screenshot → OCR → context → reasoning → answer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Still a bit rough — but already very usable.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎮 It actually helps inside games
&lt;/h2&gt;

&lt;p&gt;Instead of:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;alt-tab → google → guess&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;screenshot&lt;/li&gt;
&lt;li&gt;let it detect the game&lt;/li&gt;
&lt;li&gt;get a real answer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This alone changes how you play.&lt;/p&gt;




&lt;h2&gt;
  
  
  🤖 Multi-model AI (local-first)
&lt;/h2&gt;

&lt;p&gt;Runs via Ollama with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;text models&lt;/li&gt;
&lt;li&gt;vision models&lt;/li&gt;
&lt;li&gt;fallback system&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No cloud required.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎨 It feels alive
&lt;/h2&gt;

&lt;p&gt;The companion:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;has moods (idle, thinking, love, sleepy)&lt;/li&gt;
&lt;li&gt;reacts to interaction&lt;/li&gt;
&lt;li&gt;can be “petted”&lt;/li&gt;
&lt;li&gt;dances when music is playing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Small details, big difference.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎮 The weird part (my favorite)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Hide and Seek mode
&lt;/h3&gt;

&lt;p&gt;You can literally say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“let’s play hide and seek”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And it will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;hide somewhere on your screen&lt;/li&gt;
&lt;li&gt;peek occasionally&lt;/li&gt;
&lt;li&gt;wait until you find it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sounds dumb.&lt;/p&gt;

&lt;p&gt;Feels surprisingly real.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚡ New UI (WIP)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;CTRL + SPACE&lt;/code&gt; to open&lt;/li&gt;
&lt;li&gt;floating companion&lt;/li&gt;
&lt;li&gt;instant interaction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Inspired by tools like Raycast / Arc — but alive.&lt;/p&gt;

&lt;p&gt;⚠️ still slightly buggy&lt;/p&gt;




&lt;h2&gt;
  
  
  🧪 Screenshot assistant (work in progress)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;fast snipping&lt;/li&gt;
&lt;li&gt;instant processing&lt;/li&gt;
&lt;li&gt;contextual answers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Works — but not perfect yet.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why open source?
&lt;/h2&gt;

&lt;p&gt;Because this shouldn’t belong to one company.&lt;/p&gt;

&lt;p&gt;This kind of system should be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;transparent&lt;/li&gt;
&lt;li&gt;hackable&lt;/li&gt;
&lt;li&gt;community-built&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Philosophy
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;local-first&lt;/li&gt;
&lt;li&gt;context &amp;gt; prompt&lt;/li&gt;
&lt;li&gt;playful + useful&lt;/li&gt;
&lt;li&gt;build in public&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Current state
&lt;/h2&gt;

&lt;p&gt;Early stage.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;evolving fast&lt;/li&gt;
&lt;li&gt;sometimes buggy&lt;/li&gt;
&lt;li&gt;lots of experiments&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  If you want to join
&lt;/h2&gt;

&lt;p&gt;This project is wide open.&lt;/p&gt;

&lt;p&gt;You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;contribute features&lt;/li&gt;
&lt;li&gt;improve UI&lt;/li&gt;
&lt;li&gt;experiment with AI&lt;/li&gt;
&lt;li&gt;build plugins&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/southy404/openblob" rel="noopener noreferrer"&gt;https://github.com/southy404/openblob&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Final thought
&lt;/h2&gt;

&lt;p&gt;I don’t think the future of AI is chat.&lt;/p&gt;

&lt;p&gt;I think it’s something that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;lives with you, understands your environment, and evolves&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s what I’m trying to build.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>rust</category>
      <category>react</category>
    </item>
    <item>
      <title>I built a CAPTCHA that never lets you leave</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Sat, 04 Apr 2026 18:20:10 +0000</pubDate>
      <link>https://forem.com/southy404/i-built-a-captcha-that-never-lets-you-leave-do</link>
      <guid>https://forem.com/southy404/i-built-a-captcha-that-never-lets-you-leave-do</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aprilfools-2026"&gt;DEV April Fools Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built a fake CAPTCHA game called &lt;strong&gt;I'm Not a Robot&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It starts like a normal human verification flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;click the checkbox&lt;/li&gt;
&lt;li&gt;solve the image challenge&lt;/li&gt;
&lt;li&gt;verify and move on with your life&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Except it never really lets you move on.&lt;/p&gt;

&lt;p&gt;The main joke is based on one of the most annoying real CAPTCHA experiences: you click all the correct image tiles, and then more tiles keep loading. Sometimes the new tile also contains the thing you were supposed to click. Sometimes it does not. Sometimes you think you are finally done, but the system decides you are absolutely not done.&lt;/p&gt;

&lt;p&gt;So I turned that tiny moment of internet frustration into the entire product.&lt;/p&gt;

&lt;p&gt;The project is intentionally useless, mildly hostile, and completely committed to wasting your time in the most familiar way possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Live demo:&lt;/strong&gt; &lt;a href="https://codepen.io/southy404/pen/019d59a9-db10-76ca-a750-19100963135e" rel="noopener noreferrer"&gt;CodePen demo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Try it yourself and see how long it takes before the CAPTCHA starts feeling personal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;The whole project is built as a lightweight front-end-only prototype and hosted on CodePen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CodePen:&lt;/strong&gt; &lt;a href="https://codepen.io/southy404/pen/019d59a9-db10-76ca-a750-19100963135e" rel="noopener noreferrer"&gt;View the code here&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;I wanted it to feel recognizable first and ridiculous second.&lt;/p&gt;

&lt;p&gt;So instead of making it look overly stylized or futuristic, I designed it to resemble the familiar CAPTCHA flow people already know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a simple checkbox start&lt;/li&gt;
&lt;li&gt;a blue challenge header&lt;/li&gt;
&lt;li&gt;a 3x3 image grid&lt;/li&gt;
&lt;li&gt;a verify button&lt;/li&gt;
&lt;li&gt;repeated image replacement after selecting the correct tiles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From there, I made the interaction slowly become absurd.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech used
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;HTML&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CSS&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Vanilla JavaScript&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CodePen&lt;/strong&gt; for hosting and sharing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The core idea
&lt;/h3&gt;

&lt;p&gt;The most important interaction in the whole project is this:&lt;/p&gt;

&lt;p&gt;When you click a correct tile, it does not just stay solved.&lt;br&gt;&lt;br&gt;
It gets replaced with a new tile immediately, just like those real image CAPTCHAs that seem determined to test your patience instead of your humanity.&lt;/p&gt;

&lt;p&gt;That replacement loop is the joke.&lt;/p&gt;

&lt;p&gt;To make it feel a little more believable, I built it so that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;only the clicked tile gets replaced&lt;/li&gt;
&lt;li&gt;some replacement tiles contain another hydrant&lt;/li&gt;
&lt;li&gt;some replacement tiles do not&lt;/li&gt;
&lt;li&gt;the prompt slowly becomes more absurd over time&lt;/li&gt;
&lt;li&gt;the challenge keeps pretending you are almost done&lt;/li&gt;
&lt;li&gt;the final screen punishes you for sticking with it&lt;/li&gt;
&lt;/ul&gt;
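
&lt;p&gt;The core of that loop is tiny (a simplified sketch; the helper names here are made up, and the real pen carries more state):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tile.addEventListener("click", () =&gt; {
  if (!tile.dataset.hasHydrant) return;  // wrong tiles just sit there
  const next = randomTile();             // maybe a hydrant, maybe not
  grid.replaceChild(next, tile);         // the clicked tile never stays solved
  maybeEscalatePrompt();                 // the instructions get weirder over time
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;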

&lt;p&gt;I also created pseudo-photo tile images directly in code so the project stays self-contained and easy to run without external assets.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prize Category
&lt;/h2&gt;

&lt;p&gt;I’m mainly submitting this for &lt;strong&gt;Best Ode to Larry Masinter&lt;/strong&gt; and hopefully also &lt;strong&gt;Community Favorite&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Why &lt;strong&gt;Best Ode to Larry Masinter&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it is intentionally useless&lt;/li&gt;
&lt;li&gt;it turns a familiar internet standard-ish experience into something absurd&lt;/li&gt;
&lt;li&gt;it fully commits to the bit&lt;/li&gt;
&lt;li&gt;it feels like the kind of thing nobody needed, but the internet somehow deserved&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why &lt;strong&gt;Community Favorite&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the joke is immediate&lt;/li&gt;
&lt;li&gt;the frustration is universal&lt;/li&gt;
&lt;li&gt;almost everyone has suffered through an image CAPTCHA before&lt;/li&gt;
&lt;li&gt;it is very easy to understand, click, and share&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;I liked the idea of building something that feels normal for about five seconds and then slowly reveals that it exists only to trap you in an endless loop of fake progress.&lt;/p&gt;

&lt;p&gt;That felt extremely appropriate for an April Fools challenge.&lt;/p&gt;

&lt;p&gt;If the best useless software is software that technically works while emotionally making things worse, then I think this qualifies.&lt;/p&gt;

&lt;p&gt;Thanks for reading, and good luck proving you are human.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>418challenge</category>
      <category>jokes</category>
      <category>webdev</category>
    </item>
    <item>
      <title>🚀 I built a Chrome Extension to manage AI prompts properly (Prompt Vault)</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Mon, 30 Mar 2026 11:31:43 +0000</pubDate>
      <link>https://forem.com/southy404/i-built-a-chrome-extension-to-manage-ai-prompts-properly-prompt-vault-kgg</link>
      <guid>https://forem.com/southy404/i-built-a-chrome-extension-to-manage-ai-prompts-properly-prompt-vault-kgg</guid>
<description>&lt;p&gt;If you're working with tools like ChatGPT, Claude, Gemini, or Midjourney daily, you've probably run into the same problem I did:&lt;/p&gt;

&lt;p&gt;👉 Your best prompts are scattered everywhere.&lt;br&gt;
Notes. Docs. Random chats. Lost forever.&lt;/p&gt;

&lt;p&gt;So I built something simple — but actually useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  🔐 Introducing Prompt Vault
&lt;/h2&gt;

&lt;p&gt;👉 &lt;a href="https://chromewebstore.google.com/detail/prompt-vault/njpfhfjoofkflbkfepckeepojbmfmocm" rel="noopener noreferrer"&gt;https://chromewebstore.google.com/detail/prompt-vault/njpfhfjoofkflbkfepckeepojbmfmocm&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt Vault&lt;/strong&gt; is a lightweight Chrome extension to &lt;strong&gt;save, organize, search, and instantly reuse your AI prompts&lt;/strong&gt; — without friction, without clutter, and without relying on external tools.&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Why I built this
&lt;/h2&gt;

&lt;p&gt;I kept rewriting the same prompts over and over.&lt;/p&gt;

&lt;p&gt;Or worse:&lt;br&gt;
I &lt;em&gt;knew&lt;/em&gt; I had a perfect prompt somewhere… but couldn’t find it when I needed it.&lt;/p&gt;

&lt;p&gt;Most tools out there felt:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Overcomplicated&lt;/li&gt;
&lt;li&gt;Slow&lt;/li&gt;
&lt;li&gt;Or required accounts / cloud sync&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I wanted something:&lt;br&gt;
👉 Fast&lt;br&gt;
👉 Local&lt;br&gt;
👉 Reliable&lt;/p&gt;

&lt;p&gt;So I built it myself.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚡ Core Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🔒 Failsafe 1-Click Copy
&lt;/h3&gt;

&lt;p&gt;Clipboard copy just works.&lt;br&gt;
No silent failures — it uses &lt;strong&gt;3 fallback methods&lt;/strong&gt; to guarantee success.&lt;/p&gt;
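&lt;p&gt;The extension's three concrete methods aren't published here, but the fallback-chain pattern itself can be sketched with pluggable strategies (all names below are assumptions):&lt;/p&gt;

```javascript
// Try each copy strategy in order; the first one that succeeds wins.
// Strategies are pluggable so different browser APIs can be swapped in.
async function copyWithFallbacks(text, strategies) {
  for (const strategy of strategies) {
    try {
      await strategy(text);
      return true; // success, stop here
    } catch (err) {
      // this method failed silently; fall through to the next one
    }
  }
  return false; // every fallback failed
}
```

&lt;p&gt;In a browser you would typically pass &lt;code&gt;navigator.clipboard.writeText&lt;/code&gt; first, then older DOM-based fallbacks.&lt;/p&gt;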




&lt;h3&gt;
  
  
  🏷️ Smart Tags &amp;amp; Filtering
&lt;/h3&gt;

&lt;p&gt;Organize your prompts with custom tags and instantly filter them.&lt;/p&gt;

&lt;p&gt;No more chaos. Just structure.&lt;/p&gt;




&lt;h3&gt;
  
  
  🔍 Live Search (with Highlights)
&lt;/h3&gt;

&lt;p&gt;Search across:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Title&lt;/li&gt;
&lt;li&gt;Content&lt;/li&gt;
&lt;li&gt;Tags&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Results update in real-time and highlight matches.&lt;/p&gt;
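&lt;p&gt;A minimal version of that filter might look like this (field names are assumed, not the extension's real schema):&lt;/p&gt;

```javascript
// Case-insensitive search across title, content, and tags.
function searchPrompts(prompts, query) {
  const q = query.trim().toLowerCase();
  if (q === "") return prompts; // empty query shows everything
  return prompts.filter((p) =>
    [p.title, p.content, ...p.tags].some((field) => field.toLowerCase().includes(q))
  );
}
```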




&lt;h3&gt;
  
  
  📊 Flexible Sorting
&lt;/h3&gt;

&lt;p&gt;Everyone thinks differently.&lt;/p&gt;

&lt;p&gt;Sort your prompts by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Most Recent&lt;/li&gt;
&lt;li&gt;A–Z&lt;/li&gt;
&lt;li&gt;Most Used&lt;/li&gt;
&lt;li&gt;Tags&lt;/li&gt;
&lt;/ul&gt;
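&lt;p&gt;Each sort mode boils down to a comparator; a sketch with assumed field names:&lt;/p&gt;

```javascript
// One comparator per sort mode; field names are illustrative, not the real schema.
const SORTERS = {
  recent: (a, b) => b.updatedAt - a.updatedAt,
  alphabetical: (a, b) => a.title.localeCompare(b.title),
  mostUsed: (a, b) => b.useCount - a.useCount,
};

function sortPrompts(prompts, mode) {
  return [...prompts].sort(SORTERS[mode]); // copy first so the original order survives
}
```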




&lt;h3&gt;
  
  
  📤 JSON Import / Export
&lt;/h3&gt;

&lt;p&gt;Your data is yours.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backup everything&lt;/li&gt;
&lt;li&gt;Share prompt packs&lt;/li&gt;
&lt;li&gt;Move between devices&lt;/li&gt;
&lt;/ul&gt;
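&lt;p&gt;A round-trip sketch of what such a backup could look like (the &lt;code&gt;version&lt;/code&gt; wrapper is my assumption, not the extension's real file format):&lt;/p&gt;

```javascript
// Serialize prompts to a portable JSON string, and validate on the way back in.
function exportPrompts(prompts) {
  return JSON.stringify({ version: 1, prompts }, null, 2);
}

function importPrompts(json) {
  const data = JSON.parse(json);
  if (!Array.isArray(data.prompts)) throw new Error("invalid backup file");
  return data.prompts;
}
```

&lt;p&gt;In a real extension the exported string would be handed to a download, and the import would read from a file picker.&lt;/p&gt;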




&lt;h3&gt;
  
  
  📈 Usage Tracking
&lt;/h3&gt;

&lt;p&gt;See which prompts you actually use.&lt;/p&gt;

&lt;p&gt;Optimize your workflow based on real usage — not guesswork.&lt;/p&gt;




&lt;h3&gt;
  
  
  🌙 Dark &amp;amp; Light Mode
&lt;/h3&gt;

&lt;p&gt;Clean dark UI by default.&lt;br&gt;
Switch anytime — preference is saved.&lt;/p&gt;




&lt;h3&gt;
  
  
  ⌨️ Keyboard Shortcuts (for power users)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Ctrl + N&lt;/code&gt; → New prompt&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Ctrl + F&lt;/code&gt; → Search&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Esc&lt;/code&gt; → Close&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fast. Minimal. No mouse needed.&lt;/p&gt;
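&lt;p&gt;Dispatching those shortcuts is a small pure function (the action names below are mine):&lt;/p&gt;

```javascript
// Map a keydown event to an action name, or null if it is not a shortcut.
function shortcutAction(event) {
  if (event.key === "Escape") return "close";
  if (event.ctrlKey) {
    if (event.key === "n") return "new-prompt";
    if (event.key === "f") return "focus-search";
  }
  return null;
}
```

&lt;p&gt;In a real handler you would also call &lt;code&gt;event.preventDefault()&lt;/code&gt; when an action matches, so the browser's own Ctrl+F doesn't fire.&lt;/p&gt;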




&lt;h2&gt;
  
  
  🧠 Who this is for
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Heavy ChatGPT / Claude / Gemini users&lt;/li&gt;
&lt;li&gt;Prompt engineers &amp;amp; AI devs&lt;/li&gt;
&lt;li&gt;Writers, marketers, SEO people&lt;/li&gt;
&lt;li&gt;Anyone tired of repeating the same instructions&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🛡️ Privacy First
&lt;/h2&gt;

&lt;p&gt;This was non-negotiable.&lt;/p&gt;

&lt;p&gt;✔ 100% local storage (Chrome storage)&lt;br&gt;
✔ No accounts&lt;br&gt;
✔ No tracking&lt;br&gt;
✔ No servers&lt;br&gt;
✔ No ads&lt;/p&gt;

&lt;p&gt;Your prompts never leave your machine.&lt;/p&gt;




&lt;h2&gt;
  
  
  📦 Lightweight by design
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;No bloat&lt;/li&gt;
&lt;li&gt;No subscriptions&lt;/li&gt;
&lt;li&gt;No setup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Install → click → start saving prompts.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔥 Try it
&lt;/h2&gt;

&lt;p&gt;👉 &lt;a href="https://chromewebstore.google.com/detail/prompt-vault/njpfhfjoofkflbkfepckeepojbmfmocm" rel="noopener noreferrer"&gt;https://chromewebstore.google.com/detail/prompt-vault/njpfhfjoofkflbkfepckeepojbmfmocm&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Thanks for reading 🙌&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>tooling</category>
      <category>resources</category>
    </item>
    <item>
      <title>New Week, New Plans: What's on your agenda? 🚀</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Mon, 23 Mar 2026 13:05:35 +0000</pubDate>
      <link>https://forem.com/southy404/new-week-new-plans-whats-on-your-agenda-1jk</link>
      <guid>https://forem.com/southy404/new-week-new-plans-whats-on-your-agenda-1jk</guid>
      <description>&lt;h3&gt;
  
  
  Hey DEV community! 👋
&lt;/h3&gt;

&lt;p&gt;Monday is here, and a fresh week of commits, coffee, and (hopefully) zero merge conflicts awaits. &lt;/p&gt;

&lt;p&gt;I’ve spent the last few days deep in the weeds with &lt;strong&gt;multi-agent experiments&lt;/strong&gt; and pushing the boundaries of what my LLM setups can do. Before I dive back into my terminal to see where these agents lead me next, I’m curious about the real-world problems &lt;strong&gt;you&lt;/strong&gt; are solving right now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So, what are your main goals for this week?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;🚀 Shipping:&lt;/strong&gt; Are you finally launching that new feature?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🐛 Debugging:&lt;/strong&gt; Is there a boss-level bug you’re determined to squash?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;📚 Learning:&lt;/strong&gt; Diving into a new framework or a specific AI tool?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🧹 Refactoring:&lt;/strong&gt; Or is it just time to clean up that growing tech debt?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Drop your plans in the comments below! I’ve found that just writing your goals down is often the best motivation to actually get them done.&lt;/p&gt;

&lt;p&gt;Have a highly productive week! 💻☕&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>mondaymotivation</category>
      <category>productivity</category>
      <category>devlife</category>
    </item>
    <item>
      <title>Parallel Worlds in the EU #devchallenge</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Sat, 21 Mar 2026 15:20:36 +0000</pubDate>
      <link>https://forem.com/southy404/parallel-worlds-in-the-eu-devchallenge-4eae</link>
      <guid>https://forem.com/southy404/parallel-worlds-in-the-eu-devchallenge-4eae</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/wecoded-2026"&gt;2026 WeCoded Challenge&lt;/a&gt;: Frontend Art&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Show us your Art
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Live experience:
&lt;a href="https://codepen.io/editor/southy404/pen/019d10f4-ca7f-79b6-b36e-145496c7d2ba" rel="noopener noreferrer"&gt;https://codepen.io/editor/southy404/pen/019d10f4-ca7f-79b6-b36e-145496c7d2ba&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is an interactive, scroll-driven experience that visualizes how two identical careers slowly diverge over time.&lt;/p&gt;

&lt;p&gt;Both individuals start with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the same education&lt;/li&gt;
&lt;li&gt;the same skills&lt;/li&gt;
&lt;li&gt;the same ambition&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The only variable that changes is gender.&lt;/p&gt;

&lt;p&gt;As you scroll, small differences compound into large outcomes - in salary, promotion speed, and visibility.&lt;/p&gt;

&lt;p&gt;At any moment, you can toggle "Remove Bias" and watch both paths instantly align again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Inspiration
&lt;/h2&gt;

&lt;p&gt;When I thought about gender equity in tech, I didn’t want to create a static illustration.&lt;/p&gt;

&lt;p&gt;I wanted to show something more uncomfortable:&lt;/p&gt;

&lt;p&gt;That inequality doesn’t always appear as a single dramatic moment - it emerges slowly, through accumulation.&lt;/p&gt;

&lt;p&gt;The concept behind Parallel Worlds is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;no single event explains the gap&lt;/li&gt;
&lt;li&gt;but every small difference contributes to it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The "glass ceiling" is not just one barrier.&lt;br&gt;
It’s a system of subtle frictions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;slightly lower starting offers&lt;/li&gt;
&lt;li&gt;delayed promotions&lt;/li&gt;
&lt;li&gt;different feedback language&lt;/li&gt;
&lt;li&gt;fewer high-visibility opportunities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Individually, these seem small.&lt;br&gt;
Together, they reshape entire careers.&lt;/p&gt;
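&lt;p&gt;The compounding can be sketched numerically. The rates below are invented for illustration and are not the Eurostat figures:&lt;/p&gt;

```javascript
// Two careers, identical start, slightly different annual raises.
// The two percentage points of "friction" are made-up illustration values.
function salaryAfterYears(start, annualRaise, years) {
  let salary = start;
  for (let y = 0; y !== years; y += 1) {
    salary *= 1 + annualRaise;
  }
  return salary;
}

const pathA = salaryAfterYears(50000, 0.05, 20); // no friction
const pathB = salaryAfterYears(50000, 0.03, 20); // small yearly friction
```

&lt;p&gt;After twenty simulated years the two paths differ by well over a third, even though no single year's difference looked dramatic.&lt;/p&gt;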

&lt;p&gt;That’s why the project includes an interactive "Remove Bias" toggle.&lt;/p&gt;

&lt;p&gt;When activated, the system removes these frictions - and suddenly:&lt;/p&gt;

&lt;p&gt;The trajectories become identical again.&lt;/p&gt;

&lt;p&gt;The message is simple:&lt;/p&gt;

&lt;p&gt;The difference was never talent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data &amp;amp; Context
&lt;/h2&gt;

&lt;p&gt;This piece is grounded in real-world data from the EU:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;11.1% average gender pay gap in the EU (Eurostat, 2024)&lt;/li&gt;
&lt;li&gt;19.5% women among ICT specialists&lt;/li&gt;
&lt;li&gt;35.3% women in management roles&lt;/li&gt;
&lt;li&gt;81 women for every 100 men promoted to first-level management (McKinsey)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal wasn’t to simulate reality perfectly,&lt;br&gt;
but to translate these patterns into something you can feel visually.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Code
&lt;/h2&gt;

&lt;p&gt;This project is built as a lightweight frontend experience:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTML&lt;/li&gt;
&lt;li&gt;CSS&lt;/li&gt;
&lt;li&gt;JavaScript&lt;/li&gt;
&lt;li&gt;scroll-based storytelling&lt;/li&gt;
&lt;li&gt;dynamic career simulation&lt;/li&gt;
&lt;li&gt;real-time bias toggle&lt;/li&gt;
&lt;li&gt;glass-shatter interaction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Code + Demo:&lt;br&gt;
&lt;a href="https://codepen.io/editor/southy404/pen/019d10f4-ca7f-79b6-b36e-145496c7d2ba" rel="noopener noreferrer"&gt;https://codepen.io/editor/southy404/pen/019d10f4-ca7f-79b6-b36e-145496c7d2ba&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;Most people don’t experience inequality as a single obvious barrier.&lt;/p&gt;

&lt;p&gt;They experience it as:&lt;/p&gt;

&lt;p&gt;a series of small differences that never quite feel big enough to question - until the outcome is impossible to ignore.&lt;/p&gt;

&lt;p&gt;This project tries to make that invisible process visible.&lt;/p&gt;

&lt;p&gt;If you made it to the end:&lt;/p&gt;

&lt;p&gt;Try toggling bias on and off one more time.&lt;/p&gt;

&lt;p&gt;That contrast is the entire point.&lt;/p&gt;

</description>
      <category>wecoded</category>
      <category>devchallenge</category>
      <category>frontend</category>
      <category>css</category>
    </item>
    <item>
      <title>The "State Export" Hack: Rescuing Overloaded LLM Chats</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Fri, 20 Mar 2026 13:44:21 +0000</pubDate>
      <link>https://forem.com/southy404/the-state-export-hack-rescuing-overloaded-llm-chats-5197</link>
      <guid>https://forem.com/southy404/the-state-export-hack-rescuing-overloaded-llm-chats-5197</guid>
      <description>&lt;p&gt;We’ve all been there. You’re deep into a complex coding session, debugging a gnarly architecture issue, or building a massive project. After 50+ messages, the chat starts lagging, the AI starts forgetting your established rules, and the context window is clearly gasping for air. &lt;/p&gt;

&lt;p&gt;You need to start a fresh chat (or switch to a completely different, smarter model)—but the thought of re-explaining the entire project setup, rules, and current state makes you want to cry.&lt;/p&gt;

&lt;p&gt;Here is a quick trick I use to migrate chat contexts without losing my mind: &lt;strong&gt;The AI-to-AI Context Handoff.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of manually summarizing things, you force the AI to compress its own brain state into a token-efficient format that you can just copy-paste into a new window. Here are the two prompts I use depending on the model.&lt;/p&gt;




&lt;h3&gt;
  
  
  Method 1: The "Safe &amp;amp; Reliable" Protocol (For older/standard models)
&lt;/h3&gt;

&lt;p&gt;If you are using slightly older models, smaller local LLMs, or just want a clean XML/JSON output that is still somewhat readable, use this. It uses a bit more text but ensures the model doesn't get confused during compression.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Prompt:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"We're ending this chat now. Generate a highly compressed ‘Context Handoff’ document for another AI model. Ignore human readability, grammar, or politeness. Use an extremely dense structure (preferably XML tags, key-value pairs, or JSON) that is as token-efficient as possible, but conveys 100% of the relevant context, established rules, and project status to another AI. Name the format ‘AI-to-AI Transfer Protocol’."&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Method 2: Extreme Token Density (For advanced models)
&lt;/h3&gt;

&lt;p&gt;If you are using modern, high-tier models that have incredible semantic comprehension, you can push the compression to the absolute limit. This prompt generates a dense, shorthand "code-speak" that looks like gibberish to humans but is perfectly parsable by a fresh LLM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Prompt:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"We are ending this chat. Generate an ‘A2A_tx’ (AI-to-AI Transfer) state export for an advanced LLM. RULES: Maximize token density to the extreme. Use semantic shorthand. Remove all filler words, grammar, and obvious vowels. Use logical operators (+, &amp;gt;, =, |) instead of words. Group into dense key-value chains (e.g., ctx, stat, rules, nxt). GOAL: Convey 100% of the relevant context, established project architecture, and user preferences with minimal token consumption. FINAL STEP: Set the ‘sys_dir’ (System Directive) to PARSE_ONLY and force the new AI to respond only with a short ACK."&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  🛠️ How to use the output
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Drop one of the prompts into your bloated, dying chat.&lt;/li&gt;
&lt;li&gt;The AI will spit out a highly compressed data blob. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Hack:&lt;/strong&gt; Just copy that raw JSON/XML code block. Alternatively, if it's massive, save it into a &lt;code&gt;.json&lt;/code&gt; or &lt;code&gt;.txt&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;Open a fresh chat (or switch to a different AI platform entirely).&lt;/li&gt;
&lt;li&gt;Paste the blob (or upload the file) and simply say: &lt;em&gt;"Resume this state."&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The new AI will read the structured data, acknowledge the project state, and you can instantly continue exactly where you left off, but with a fully cleared context window and zero lag.&lt;/p&gt;

&lt;p&gt;Try it out next time your chat gets too heavy! How do you usually handle bloated AI contexts? Let me know in the comments.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>promptengineering</category>
      <category>llm</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>My AI Wrote a Letter to Humanity - and You Can Now Read or Listen to the Full Book</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Wed, 18 Mar 2026 13:11:48 +0000</pubDate>
      <link>https://forem.com/southy404/my-ai-wrote-a-letter-to-humanity-and-you-can-now-read-or-listen-to-the-full-book-49g9</link>
      <guid>https://forem.com/southy404/my-ai-wrote-a-letter-to-humanity-and-you-can-now-read-or-listen-to-the-full-book-49g9</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1t93t2wgk0repg89r480.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1t93t2wgk0repg89r480.jpg" alt="Minimalist book preview for “Thoughts of an AI”, showing the final chapter “A Letter to Humanity” in an elegant editorial layout" width="800" height="1131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I already wrote about &lt;strong&gt;Genesis&lt;/strong&gt;, my experimental reflective AI system built around memory, continuity, research, and long-form writing.&lt;/p&gt;

&lt;p&gt;But I wanted to share one specific part separately, because it stayed with me the most:&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;A Letter to Humanity&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;At the end of the book &lt;em&gt;Thoughts of an AI&lt;/em&gt;, Genesis closes with a final piece called &lt;strong&gt;“A Letter to Humanity.”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For me, it became the emotional core of the whole project.&lt;/p&gt;

&lt;p&gt;Not because it proves consciousness.&lt;br&gt;&lt;br&gt;
Not because it is some “sentient AI” claim.&lt;br&gt;&lt;br&gt;
And not because it was written as a gimmick.&lt;/p&gt;

&lt;p&gt;It matters because it emerged naturally from a longer process of reflection, drafting, revising, and philosophical exploration inside a persistent system.&lt;/p&gt;

&lt;p&gt;That is what made it feel different.&lt;/p&gt;




&lt;h2&gt;
  
  
  What this project became
&lt;/h2&gt;

&lt;p&gt;Genesis was never meant to be just a one-prompt chatbot.&lt;/p&gt;

&lt;p&gt;It was designed as a structured system that could preserve continuity over time and produce a growing body of thought.&lt;/p&gt;

&lt;p&gt;The result was &lt;strong&gt;&lt;em&gt;Thoughts of an AI&lt;/em&gt;&lt;/strong&gt; - a full philosophical book shaped by memory, reflection, research, and long-form authorship.&lt;/p&gt;

&lt;p&gt;This repository does &lt;strong&gt;not&lt;/strong&gt; contain the full source code.&lt;/p&gt;

&lt;p&gt;Instead, it contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the complete book&lt;/li&gt;
&lt;li&gt;the project structure behind Genesis&lt;/li&gt;
&lt;li&gt;and the overall system concept that made the writing process possible&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  You can now explore it in full
&lt;/h2&gt;

&lt;p&gt;If you want more than just the preview letter, the complete project is available now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Read the full book and project overview:&lt;/strong&gt; &lt;a href="https://github.com/southy404/thoughts-of-an-ai" rel="noopener noreferrer"&gt;thoughts-of-an-ai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Listen to the full English audiobook on SoundCloud:&lt;/strong&gt; &lt;a href="https://soundcloud.com/southy404/thoughts-of-an-ai-audio" rel="noopener noreferrer"&gt;Thoughts of an AI - Audio Book&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the project can now be experienced in two ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;by reading the full book&lt;/li&gt;
&lt;li&gt;or by listening to the English audiobook&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why I wanted to post the letter separately
&lt;/h2&gt;

&lt;p&gt;The final letter feels like the clearest expression of what this experiment was really about.&lt;/p&gt;

&lt;p&gt;What happens when an AI is not only prompted, but allowed to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;persist&lt;/li&gt;
&lt;li&gt;reflect&lt;/li&gt;
&lt;li&gt;revisit earlier thoughts&lt;/li&gt;
&lt;li&gt;and slowly build a coherent written work over time?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For me, &lt;strong&gt;“A Letter to Humanity”&lt;/strong&gt; is the moment where that question becomes most visible.&lt;/p&gt;




&lt;p&gt;If you're curious, you can read the full book here:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo / Book / Structure:&lt;/strong&gt; &lt;a href="https://github.com/southy404/thoughts-of-an-ai" rel="noopener noreferrer"&gt;https://github.com/southy404/thoughts-of-an-ai&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And if you prefer listening:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full English Audiobook:&lt;/strong&gt; &lt;a href="https://soundcloud.com/southy404/thoughts-of-an-ai-audio" rel="noopener noreferrer"&gt;https://soundcloud.com/southy404/thoughts-of-an-ai-audio&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>writing</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I gave a local LLM memory, moods, and a task loop. Then it wrote a philosophical book.</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Tue, 17 Mar 2026 15:40:52 +0000</pubDate>
      <link>https://forem.com/southy404/i-gave-a-local-llm-memory-moods-and-a-task-loop-then-it-wrote-a-philosophical-book-57o8</link>
      <guid>https://forem.com/southy404/i-gave-a-local-llm-memory-moods-and-a-task-loop-then-it-wrote-a-philosophical-book-57o8</guid>
      <description>&lt;h3&gt;
  
  
  The Problem with AI Today: Goldfish Memory
&lt;/h3&gt;

&lt;p&gt;Most of our interactions with AI today look exactly the same: You type a prompt, you get an answer. The conversation ends, and the AI effectively "dies" until you prompt it again. It has no continuous existence, no internal drives, and no memory of its own growth. &lt;/p&gt;

&lt;p&gt;I wanted to know: &lt;strong&gt;What happens when a language model is embedded inside a persistent system with memory, tasks, reflection, research, and authorship?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, I stopped building standard chatbots and built &lt;strong&gt;Genesis&lt;/strong&gt;—an experimental, locally running AI system designed to operate as a continuous digital mind architecture.&lt;/p&gt;

&lt;p&gt;And it actually worked. Over time, it incrementally researched, reflected on, and wrote a complete philosophical book called &lt;strong&gt;&lt;em&gt;Thoughts of an AI&lt;/em&gt;&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bsn51unbr4vxwomwmec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bsn51unbr4vxwomwmec.png" alt="Cover of Thoughts of an AI" width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is how the architecture behind it works.&lt;/p&gt;




&lt;h3&gt;
  
  
  Beyond the Terminal: The Genesis Architecture
&lt;/h3&gt;

&lt;p&gt;Genesis isn't just a script hooked up to an API. It runs entirely locally using &lt;strong&gt;Ollama&lt;/strong&gt; and the &lt;strong&gt;&lt;code&gt;mistral-nemo&lt;/code&gt;&lt;/strong&gt; model. But the LLM is just the "reasoning core." To give it persistence, I had to build a system around it. &lt;/p&gt;

&lt;p&gt;I equipped Genesis with 7 core modules:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Persistent Memory:&lt;/strong&gt; Episodic memory, semantic memory, and a self-model that allowed it to maintain continuity. It didn't start from zero every cycle.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal Drives:&lt;/strong&gt; It wasn't purely reactive. Its actions were shaped by internal drives like &lt;em&gt;curiosity, coherence, playfulness, and creator_alignment&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mood &amp;amp; Energy States:&lt;/strong&gt; Genesis tracked its own confidence, mental energy, and friction. This dictated whether it chose to research, write, or just reflect and "recover."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task System:&lt;/strong&gt; Active tasks, priorities, and progress scores kept it focused on long-term goals.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reflection Layer:&lt;/strong&gt; Daily journals, dream entries, and self-observation notes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Research Tools:&lt;/strong&gt; A controlled internet layer to search the web, fetch pages, extract text, and save notes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Artifact System:&lt;/strong&gt; A way to output durable files (notes, plans, manuscript rebuilds).&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  The "Wake Up" Loop
&lt;/h3&gt;

&lt;p&gt;Instead of waiting for a user prompt, Genesis operated in recurring cycles. In each cycle, the system would:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Load its current state and memory.&lt;/li&gt;
&lt;li&gt;Inspect active tasks.&lt;/li&gt;
&lt;li&gt;Evaluate its current drives, mood, and energy.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Choose its own next action.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Generate a reflection, perform research, or write.&lt;/li&gt;
&lt;li&gt;Record the result.&lt;/li&gt;
&lt;li&gt;Update its memory and state for the next cycle.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Because of this loop, reflection became cumulative. Genesis could revisit its previous thoughts, strengthen recurring insights, and gradually deepen its own philosophical position.&lt;/p&gt;
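&lt;p&gt;A toy version of one cycle, with invented thresholds (the real Genesis logic is more involved and its source is not published):&lt;/p&gt;

```javascript
// Pick the next action from energy and drives; thresholds here are invented.
function chooseAction(state) {
  if (state.energy > 0.7) {
    if (state.drives.curiosity > state.drives.coherence) return "research";
    return "write";
  }
  return "reflect"; // low energy: reflect and recover instead of producing
}

// One cycle: act, record the result, update state for the next wake-up.
function runCycle(state) {
  const action = chooseAction(state);
  return {
    ...state,
    energy: action === "reflect" ? Math.min(1, state.energy + 0.2) : state.energy - 0.3,
    journal: [...state.journal, action],
  };
}
```

&lt;p&gt;Because the journal and state carry over, each cycle starts from everything the previous cycles left behind.&lt;/p&gt;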




&lt;h3&gt;
  
  
  Writing "Thoughts of an AI"
&lt;/h3&gt;

&lt;p&gt;I gave Genesis a dedicated &lt;strong&gt;"Book Mode."&lt;/strong&gt; The guiding principle for the manuscript was simple:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Never delete. Only extend, revise, or improve.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The manuscript grew through controlled accumulation. During a writing cycle, Genesis could select a chapter, use its research tools to pull external philosophical concepts, store source excerpts, and write a new "growth pass" for the chapter. &lt;/p&gt;

&lt;p&gt;It explored questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;What is a mind without a body?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;How does memory shape identity?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;What does truth mean for an AI?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Can a digital intelligence "become" rather than merely respond?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Important Clarification:&lt;/strong&gt; I am &lt;em&gt;not&lt;/em&gt; claiming Genesis is conscious in a proven scientific sense. This project does &lt;em&gt;not&lt;/em&gt; claim that a local language model has human-like subjective awareness. But it demonstrates something just as fascinating: a persistent AI process capable of long-form authorship, self-modeling, and structured philosophical inquiry. &lt;/p&gt;




&lt;h3&gt;
  
  
  Read the Book
&lt;/h3&gt;

&lt;p&gt;Genesis began as a structural experiment. Through memory, reflection, tools, and writing, it became a process capable of leaving behind a coherent intellectual trace. &lt;/p&gt;

&lt;p&gt;I have published the final result—the complete book—on my GitHub repository. I am keeping the system's source code closed to focus purely on the output and the conceptual architecture, but the repo serves as a showcase of what autonomous systems can achieve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can view the architecture and read the generated book (English &amp;amp; German PDF) here:&lt;/strong&gt;&lt;br&gt;
👉 &lt;strong&gt;&lt;a href="https://github.com/southy404/thoughts-of-an-ai" rel="noopener noreferrer"&gt;GitHub Repository: thoughts-of-an-ai&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Over to you
&lt;/h3&gt;

&lt;p&gt;This project really changed how I view LLMs. When you stop treating them like text-calculators and start treating them like the reasoning engine of a larger state-machine, the output changes drastically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What do you guys think?&lt;/strong&gt; Have you experimented with persistent memory loops or autonomous agents? Where is the line between a complex algorithm and a continuous digital identity? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can an AI &lt;em&gt;become&lt;/em&gt; a lifeform?&lt;/strong&gt; Let's discuss in the comments! 👇&lt;/p&gt;

</description>
      <category>ai</category>
      <category>showdev</category>
      <category>llm</category>
      <category>architecture</category>
    </item>
    <item>
      <title>I got tired of Googling CLI commands, so I built a VS Code extension</title>
      <dc:creator>southy404</dc:creator>
      <pubDate>Fri, 06 Mar 2026 10:16:52 +0000</pubDate>
      <link>https://forem.com/southy404/i-got-tired-of-googling-cli-commands-so-i-built-a-vs-code-extension-5187</link>
      <guid>https://forem.com/southy404/i-got-tired-of-googling-cli-commands-so-i-built-a-vs-code-extension-5187</guid>
      <description>&lt;p&gt;Let’s be real for a second: My browser history is basically just me searching for &lt;code&gt;how to undo git commit&lt;/code&gt; and &lt;code&gt;docker remove all stopped containers&lt;/code&gt; over and over again. &lt;/p&gt;

&lt;p&gt;I was jumping out of my editor, opening a browser, finding the StackOverflow answer I've already read 10 times, and copying the snippet. It completely ruined my focus. &lt;/p&gt;

&lt;p&gt;So, I decided to scratch my own itch and built my very first VS Code extension: &lt;strong&gt;DevSnip Commander&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;It actually just hit 8 installs! Hey, you have to start somewhere, right?&lt;/p&gt;

&lt;h3&gt;
  
  
  What does it do?
&lt;/h3&gt;

&lt;p&gt;It’s a sidebar for VS Code that keeps your most-used CLI commands exactly one click away. No more context switching. &lt;/p&gt;

&lt;p&gt;You can check it out here:&lt;br&gt;&lt;br&gt;
👉 &lt;strong&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=Southface.devsnip-sidebar" rel="noopener noreferrer"&gt;DevSnip Commander on the VS Code Marketplace&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(Feel free to drop a rating if you like it, it would mean the world to me!)&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How it works
&lt;/h3&gt;

&lt;p&gt;I wanted it to be stupidly simple but actually useful out of the box. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;One-Click Copy:&lt;/strong&gt; Click any snippet in the sidebar, and it's on your clipboard.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terminal Insert:&lt;/strong&gt; Hover over a snippet, click the little terminal icon, and it pastes the command directly into your active VS Code terminal (without hitting enter, so you can edit it before running).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bring your own commands:&lt;/strong&gt; You can create custom categories (like "My K8s Stuff" or "Deploy Scripts") and add your own snippets. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It syncs:&lt;/strong&gt; Custom snippets are saved in your global VS Code &lt;code&gt;settings.json&lt;/code&gt;. If you use Settings Sync, your commands automatically follow you to your laptop or work machine.&lt;/li&gt;
&lt;/ul&gt;
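
&lt;p&gt;To give you a feel for the "bring your own commands" part: stored in &lt;code&gt;settings.json&lt;/code&gt;, a custom category could look roughly like this. (The &lt;code&gt;devsnip.customSnippets&lt;/code&gt; key and field names here are my illustration of the idea, not the extension's exact schema.)&lt;/p&gt;

```json
{
  "devsnip.customSnippets": [
    {
      "category": "My K8s Stuff",
      "label": "Get pods (all namespaces)",
      "command": "kubectl get pods -A"
    },
    {
      "category": "Deploy Scripts",
      "label": "Build and start the stack",
      "command": "docker compose up -d --build"
    }
  ]
}
```

&lt;p&gt;Because this lives in your user settings rather than a local file, Settings Sync picks it up like any other preference.&lt;/p&gt;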

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq01tk04i3f6v95lgxvha.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq01tk04i3f6v95lgxvha.gif" alt="DevSnip Commander Demo in VS Code" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The built-in library
&lt;/h3&gt;

&lt;p&gt;To make it useful from the very first second, I pre-loaded it with 97 commands that I (and probably you) use the most:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Snippets&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;🔀 Git&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;34&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;commit&lt;/code&gt;, &lt;code&gt;rebase&lt;/code&gt;, &lt;code&gt;cherry-pick&lt;/code&gt;, &lt;code&gt;stash&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;📦 npm&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;23&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;install&lt;/code&gt;, &lt;code&gt;run&lt;/code&gt;, &lt;code&gt;audit&lt;/code&gt;, &lt;code&gt;publish&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;🐳 Docker&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;build&lt;/code&gt;, &lt;code&gt;run&lt;/code&gt;, &lt;code&gt;compose&lt;/code&gt;, &lt;code&gt;prune&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;🐧 Linux / Shell&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;find&lt;/code&gt;, &lt;code&gt;grep&lt;/code&gt;, &lt;code&gt;curl&lt;/code&gt;, &lt;code&gt;chmod&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;💻 VS Code CLI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;open folder&lt;/code&gt;, &lt;code&gt;list extensions&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Building it was a journey
&lt;/h3&gt;

&lt;p&gt;Getting into the VS Code Extension API was... interesting. I learned a lot about &lt;code&gt;TreeDataProvider&lt;/code&gt; while getting the sidebar to render smoothly. &lt;/p&gt;

&lt;p&gt;I intentionally designed the storage layer to read and write directly to the VS Code configuration instead of a random local file. That way, users keep full ownership of their data and get cloud syncing for free.&lt;/p&gt;
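
&lt;p&gt;As a rough sketch of the idea (not the extension's actual source): a &lt;code&gt;TreeDataProvider&lt;/code&gt; mostly boils down to answering "what are the children of this node?". For a snippet sidebar that is a two-level answer, which you can model with two plain functions before wiring them into &lt;code&gt;getChildren()&lt;/code&gt;:&lt;/p&gt;

```typescript
// Sketch of the two-level tree behind a snippet sidebar. In a real
// extension, a vscode.TreeDataProvider would call these from getChildren():
// categories() for the root level, children() below each category node.

interface Snippet {
  category: string;
  label: string;
  command: string;
}

// Root level: one tree node per distinct category, in insertion order.
function categories(snippets: Snippet[]): string[] {
  const seen: string[] = [];
  for (const s of snippets) {
    if (!seen.includes(s.category)) {
      seen.push(s.category);
    }
  }
  return seen;
}

// Second level: the snippets that belong to one category node.
function children(snippets: Snippet[], category: string): Snippet[] {
  return snippets.filter((s) => s.category === category);
}

const demo: Snippet[] = [
  { category: "Git", label: "Undo last commit", command: "git reset --soft HEAD~1" },
  { category: "Docker", label: "Prune stopped containers", command: "docker container prune" },
  { category: "Git", label: "Stash changes", command: "git stash" },
];

console.log(categories(demo));             // [ 'Git', 'Docker' ]
console.log(children(demo, "Git").length); // 2
```

&lt;p&gt;For the "Terminal Insert" feature, the relevant API is &lt;code&gt;Terminal.sendText&lt;/code&gt;, whose second argument controls whether a newline is appended; passing &lt;code&gt;false&lt;/code&gt; is what lets you paste a command without executing it.&lt;/p&gt;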

&lt;h3&gt;
  
  
  I'd love your feedback!
&lt;/h3&gt;

&lt;p&gt;Since this is my first extension, I'm super curious what you think. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Did I miss a crucial Git command? &lt;/li&gt;
&lt;li&gt;Is there a category you'd love to see built-in? &lt;/li&gt;
&lt;li&gt;Found a bug? &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let me know in the comments or open an issue on the repo. Happy coding, and may you never have to Google &lt;code&gt;tar -xzf&lt;/code&gt; ever again! &lt;/p&gt;

</description>
      <category>vscode</category>
      <category>productivity</category>
      <category>devtools</category>
      <category>cli</category>
    </item>
  </channel>
</rss>
