<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kourtney Meiss</title>
    <description>The latest articles on Forem by Kourtney Meiss (@knmeiss).</description>
    <link>https://forem.com/knmeiss</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3400443%2Fd18e9fd4-01e0-4b8c-8d73-61dd1bf7182a.png</url>
      <title>Forem: Kourtney Meiss</title>
      <link>https://forem.com/knmeiss</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/knmeiss"/>
    <language>en</language>
    <item>
      <title>Gotta Automate 'Em All: Pokémon help me do my work</title>
      <dc:creator>Kourtney Meiss</dc:creator>
      <pubDate>Thu, 16 Apr 2026 16:15:16 +0000</pubDate>
      <link>https://forem.com/amazonappdev/gotta-automate-em-all-pokemon-help-me-do-my-work-hal</link>
      <guid>https://forem.com/amazonappdev/gotta-automate-em-all-pokemon-help-me-do-my-work-hal</guid>
      <description>&lt;p&gt;I finally caved and bought a Switch 2. If I'm being honest with myself, it was only a matter of time before I fell victim to the fomo as videos of Pokopia started taking over my feeds. It's very quickly become one of my favorite games and most of my nights are now spent trying to make some progress on the storyline. So how does that relate to my work at all? &lt;/p&gt;

&lt;p&gt;I wear a lot of hats: creating content, doing research, giving product feedback, fostering community, analyzing data, writing sample apps, and so many other things that don't always fit neatly into a category. Over the last year I've been integrating AI into my workflow to help me do all of these things. If you're in a similar spot, you may have a handful of specialized agents or prompts you call on manually. It works, but I knew I needed to unlock the next level so that instead of manually selecting which agent to use, an orchestrator would automatically route the request for me.&lt;/p&gt;

&lt;p&gt;Building out my personal orchestration system meant rethinking agent delegation: what role each agent plays, when to call it, and what tools they need. I could've named them "researcher" and "blog writer" and called it a day. Instead, in an effort to bring some whimsy into everyday life I had some fun assigning a Pokémon to each of my agents based on what tasks they do. &lt;/p&gt;

&lt;h2&gt;
  
  
  The System: 1 Orchestrator, 8 agents
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🧠 Orchestrator → Metagross
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7si0q7nolwdcuu1ml8ka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7si0q7nolwdcuu1ml8ka.png" width="800" alt="Image description" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role:&lt;/strong&gt; Routes incoming tasks to the right specialized agent.&lt;/p&gt;

&lt;p&gt;Metagross has four brains joined by a complex neural network. No single brain does the work alone; instead, the four coordinate. As a Psychic type, Metagross doesn't wait for instructions. It reads the request and knows which agent is the right one for the job. &lt;/p&gt;
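&lt;p&gt;As a rough sketch of what this kind of routing looks like, here is a hypothetical keyword-based router in Python. The agent names come from this post, but the trigger phrases, function names, and fallback logic are illustrative, not the actual Kiro CLI mechanism:&lt;/p&gt;

```python
# Hypothetical sketch of orchestrator-style routing: match a request
# against trigger phrases and return the agent that should handle it.
AGENTS = {
    "alakazam":  ["research", "look up", "find sources"],
    "slowking":  ["fact-check", "validate", "verify"],
    "indeedee":  ["calendar", "email", "schedule", "priorities"],
    "porygon":   ["meeting prep", "briefing"],
    "ditto":     ["repurpose", "rewrite for"],
    "magnezone": ["engagement", "analytics", "metrics"],
}

def route(request: str) -> str:
    """Return the first agent whose trigger phrase appears in the request."""
    text = request.lower()
    for agent, triggers in AGENTS.items():
        if any(phrase in text for phrase in triggers):
            return agent
    return "mew"  # generalist fallback for everything else

print(route("What should my top 3 priorities be today?"))  # indeedee
```

&lt;p&gt;In practice the orchestrator routes on intent rather than exact phrases, but the shape is the same: classify the request, then delegate.&lt;/p&gt;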

&lt;h3&gt;
  
  
  🔬 Researcher → Alakazam
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vmdv0x6j7fdj8pgr4b4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vmdv0x6j7fdj8pgr4b4.png" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role:&lt;/strong&gt; Web search, technical deep-dives, knowledge gathering.&lt;/p&gt;

&lt;p&gt;My researcher agent is Alakazam, which felt like an obvious fit since they have an IQ of 5,000 and continuously multiplying brain cells.  &lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Validator → Slowking
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30zp2gyl5wzdsb9y9rxx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30zp2gyl5wzdsb9y9rxx.png" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role:&lt;/strong&gt; Fact-checks claims, solves problems methodically.&lt;/p&gt;

&lt;p&gt;Slowking fact-checks claims, challenges assumptions, and flags anything that doesn't hold up. The Pokédex describes him as: "Incredible intellect and intuition. Whatever the situation, he remains calm and collected." He doesn't rush. He reads the output, checks the claims, and stays methodical. &lt;/p&gt;

&lt;h3&gt;
  
  
  📅 Personal Assistant → Indeedee
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jltli9nq64hc3xkzw9h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jltli9nq64hc3xkzw9h.png" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role:&lt;/strong&gt; Manages calendar, email, scheduling, work priorities.&lt;/p&gt;

&lt;p&gt;Indeedee can sense emotions through their horns and act as a valet, looking after their trainer's every need. It only felt right that they handle my calendar, email, scheduling, and prioritization.&lt;/p&gt;

&lt;h3&gt;
  
  
  📝 Friction Log → Rotom-Dex
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5svn8clbzdscguec6jfq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5svn8clbzdscguec6jfq.png" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role:&lt;/strong&gt; Documents developer experience observations.&lt;/p&gt;

&lt;p&gt;Rotom with a Pokédex becomes a self-learning cataloger, documenting everything it encounters. Therefore, Rotom-Dex helps me write friction logs, which are structured walkthroughs of a developer experience: every rough edge, every confusing error, every moment of delight, documented as it happens. &lt;/p&gt;

&lt;h3&gt;
  
  
  💼 Meeting Prep → Porygon
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsofreyd8kg3o6v2m0jo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsofreyd8kg3o6v2m0jo.png" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role:&lt;/strong&gt; Synthesizes data into briefing packages.&lt;/p&gt;

&lt;p&gt;Porygon helps me prep before I attend meetings. They can check my calendar, look up attendees, pull recent conversation history, and produce a clean briefing: who's in the room, what we've talked about before, and suggested talking points. Porygon is the first Pokémon created using modern technology, made entirely of programming code, built to move through cyberspace and process data. They were literally built to synthesize information and package it.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔄 Repurposer → Ditto
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffah82pv33alzyjoh2505.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffah82pv33alzyjoh2505.png" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role:&lt;/strong&gt; Transforms content across platforms.&lt;/p&gt;

&lt;p&gt;Ditto rearranges their cellular structure to transform into any Pokémon they see. They don't create from scratch, but rather transform what already exists. Therefore, Ditto helps me repurpose content. I might give them a blog post and they help me transform it into a LinkedIn post or vice versa. &lt;/p&gt;

&lt;h3&gt;
  
  
  📊 Analytics → Magnezone
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frl8bpl48rjcy0l3vi4x4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frl8bpl48rjcy0l3vi4x4.png" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role:&lt;/strong&gt; Connects to and helps analyze data on community engagement.&lt;/p&gt;

&lt;p&gt;Magnezone has access to community engagement data from multiple platforms and helps me extract the insights I need to make decisions. Magnezone floats through the sky receiving and transmitting signals through their antenna, constantly monitoring their territory. They're always collecting.&lt;/p&gt;

&lt;h3&gt;
  
  
  🌟 Generalist → Mew
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8bvn81or14bao40ud7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8bvn81or14bao40ud7f.png" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role:&lt;/strong&gt; Fallback agent, handles edge cases.&lt;/p&gt;

&lt;p&gt;My generalist agent is Mew, the fallback for everything that doesn't fit neatly into another category. Mew contains the genetic composition of all Pokémon and can learn nearly any move in the game. When a task is too niche for a specialist, or I just need a second brain on something, Mew handles it. Every system needs a catch-all.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Me: "What should my top 3 priorities be today?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Metagross routes to Indeedee.&lt;/li&gt;
&lt;li&gt;Indeedee checks my calendar, scans recent emails, checks tickets assigned to me. &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Metagross: You have a GTM release meeting at 2pm. Want me to prep you for it?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;a. GTM release meeting at 2pm — you're presenting, need to prep
b. Developer feedback doc due EOD for the SDK team
c. Conference CFP deadline tomorrow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;Me: Yes&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Metagross routes to Porygon.&lt;/li&gt;
&lt;li&gt;Porygon pulls the meeting details, looks up attendees, checks recent conversation history, and produces a briefing:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Metagross: Briefing ready. Want Alakazam to research any of the topics, or Slowking to validate the numbers? &lt;br&gt;
    a. 6 attendees: product, engineering, marketing leads&lt;br&gt;
    b. Last sync was 2 weeks ago &amp;amp; open action item was finalizing the developer messaging&lt;br&gt;
    c. Suggested talking points: SDK adoption numbers, developer onboarding flow, launch timeline&lt;/p&gt;

&lt;p&gt;Me: I need to research what our community sentiment is about the SDK and compile into a report I can present at the meeting. Validate before compiling report. Open a ticket for tracking this. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Metagross routes to Magnezone who pulls community signal data. &lt;/li&gt;
&lt;li&gt;Metagross routes output to Slowking who validates Magnezone's data. &lt;/li&gt;
&lt;li&gt;Metagross routes to Mew to compile validated data into a report. &lt;/li&gt;
&lt;li&gt;Metagross routes to Indeedee to create the ticket. &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Metagross: Report ready and ticket created. Slowking flagged the LinkedIn count — want to remove it or add a note about it?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;I use &lt;a href="https://kiro.dev/cli/" rel="noopener noreferrer"&gt;Kiro CLI&lt;/a&gt;, Amazon's agentic AI tool, for my daily work, but you can definitely take this and adapt it to your tool of choice. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Each agent has a focused system prompt.&lt;/strong&gt; Not a generic "you are a helpful assistant" prompt. A scoped, specific prompt that defines exactly what the agent does, how it responds, and what it ignores. &lt;/p&gt;

&lt;p&gt;For example, Slowking's prompt doesn't say "check for accuracy". It says: "independently verify every factual claim, search for counter-evidence, flag sycophancy, check that links actually exist, and challenge my assumptions directly - not just the content." &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agents only get the context and tools they need.&lt;/strong&gt; Any agents that produce writing have access to my style guide and bio. Only Indeedee has access to my calendar and emails. Only Magnezone has access to Common Room data. Scoped access keeps each agent focused and reduces the risk of something going sideways.&lt;/p&gt;
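&lt;p&gt;A scoped agent definition might look something like this. The field names and tool names below are illustrative, not the exact Kiro CLI schema:&lt;/p&gt;

```json
{
  "name": "indeedee",
  "description": "Personal assistant: calendar, email, scheduling, priorities",
  "prompt": "You manage my calendar, email, and work priorities. Ignore requests outside scheduling and prioritization.",
  "tools": ["calendar_read", "email_read", "ticket_search"]
}
```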

&lt;p&gt;&lt;strong&gt;Each agent has its own model.&lt;/strong&gt; I can assign different models based on what the agent does. A deep research task might warrant a more capable model, whereas a quick scheduling task doesn't need it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The orchestrator can chain agents.&lt;/strong&gt; Metagross doesn't just route to one agent and stop. It can pass output from one agent into another, running them in sequence when the task requires it.&lt;/p&gt;
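&lt;p&gt;Conceptually, chaining is just feeding each agent the previous agent's output. Here is a minimal Python sketch; &lt;code&gt;call_agent&lt;/code&gt; is a hypothetical stand-in for a real agent invocation:&lt;/p&gt;

```python
# Minimal sketch of agent chaining: the orchestrator passes one agent's
# output into the next agent in the sequence.
def call_agent(name: str, task: str) -> str:
    # Placeholder for an actual model/agent call.
    return f"[{name} output for: {task}]"

def chain(steps, task: str) -> str:
    """Run agents in sequence, feeding each one the previous output."""
    result = task
    for agent in steps:
        result = call_agent(agent, result)
    return result

# The sentiment-report request from the example above:
report = chain(["magnezone", "slowking", "mew"], "SDK community sentiment")
```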

&lt;p&gt;&lt;strong&gt;Agents are set to auto-approve.&lt;/strong&gt; By default, every tool call requires manual approval. I configured each agent's allowedTools so that trusted tools run automatically without interrupting the flow. Alakazam can search the web without asking. Indeedee can read my calendar without pausing to wait for my confirmation. The system runs end-to-end without me clicking approve on every step.&lt;/p&gt;
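&lt;p&gt;An allowedTools entry could look roughly like this (the tool names are hypothetical; check your tool's docs for the exact config shape):&lt;/p&gt;

```json
{
  "name": "alakazam",
  "allowedTools": ["web_search", "fetch_url"]
}
```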

&lt;p&gt;&lt;strong&gt;The system learns.&lt;/strong&gt; Each agent has its own lessons file. When I correct an agent, it logs the lesson and applies it automatically at the start of every future session. I don't have to repeat myself.&lt;/p&gt;
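&lt;p&gt;The lessons loop can be sketched in a few lines. The file layout and function names here are illustrative, assuming one markdown file of lessons per agent:&lt;/p&gt;

```python
# Sketch of the lessons-file loop: corrections are appended to a per-agent
# file and read back at the start of every session.
from pathlib import Path

def log_lesson(agent: str, lesson: str) -> None:
    """Append a correction so future sessions pick it up automatically."""
    with Path(f"{agent}_lessons.md").open("a") as f:
        f.write(f"- {lesson}\n")

def load_lessons(agent: str) -> str:
    """Read all logged lessons to prepend to the agent's system prompt."""
    path = Path(f"{agent}_lessons.md")
    return path.read_text() if path.exists() else ""

log_lesson("slowking", "Always check that cited links resolve.")
print(load_lessons("slowking"))
```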

&lt;h2&gt;
  
  
  Build Your Own
&lt;/h2&gt;

&lt;p&gt;Try this prompt in whatever AI tool you use (Claude, ChatGPT, Kiro, etc.):&lt;/p&gt;

&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;Review the list of tasks I do regularly and group them into distinct roles (e.g. researcher, validator, scheduler, content creator)&lt;/li&gt;
&lt;li&gt;For each role, write a focused system prompt that defines exactly what that agent does, how it responds, and what it ignores&lt;/li&gt;
&lt;li&gt;Write an orchestrator prompt that routes incoming requests to the right agent based on intent — include a delegation rules list with trigger phrases for each agent&lt;/li&gt;
&lt;li&gt;Suggest which tools or integrations each agent should have access to&lt;/li&gt;
&lt;li&gt;Suggest a model for each agent based on the complexity of its tasks&lt;/li&gt;
&lt;li&gt;Add a self-improvement loop to each agent: when corrected, log a concrete lesson to a lessons file; at the start of every session, read that file and apply every lesson immediately&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;As always with AI, you should review the output and make any changes you see fit. &lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;I would love to hear your thoughts in the comments.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Do you have a personal orchestration system? &lt;br&gt;
If so, is there anything you think I am missing? &lt;br&gt;
What would help me level up this workflow? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; Pokémon and all related characters are trademarks of The Pokémon Company / Nintendo / Game Freak. This post is not affiliated with or endorsed by The Pokémon Company. Fan art is subject to &lt;a href="https://www.pokemon.com/us/legal/information" rel="noopener noreferrer"&gt;The Pokémon Company's legal terms&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>tutorial</category>
      <category>learning</category>
    </item>
    <item>
      <title>The Hidden Geometry Inside Claude That Nobody Talks About</title>
      <dc:creator>Kourtney Meiss</dc:creator>
      <pubDate>Tue, 30 Dec 2025 14:17:51 +0000</pubDate>
      <link>https://forem.com/knmeiss/the-hidden-geometry-inside-claude-that-nobody-talks-about-2877</link>
      <guid>https://forem.com/knmeiss/the-hidden-geometry-inside-claude-that-nobody-talks-about-2877</guid>
      <description>&lt;p&gt;Ever wondered how LLMs format code so perfectly? How do they know when to break to the next line? If you've followed my series, you know that AI can't visually see text like we do. It can't say "this word is too long for the line, put it on a new one.", since it operates in tokens. So what does it do? Spoiler, it's way more sophisticated than you'd expect.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Process
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Counts characters&lt;/strong&gt; in the current line&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;References the maximum line width&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Calculates remaining space&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Compares with next word's length&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Decides whether to break the line&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Simple enough. But &lt;em&gt;how&lt;/em&gt; it does step 1, counting characters, is where things get interesting.&lt;/p&gt;
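&lt;p&gt;The five steps above can be written as plain Python. To be clear, the model doesn't run explicit arithmetic like this; it encodes the same computation implicitly in its internal representations:&lt;/p&gt;

```python
# The five-step line-break decision, as an explicit sketch.
def should_break(current_line: str, next_word: str, max_width: int = 80) -> bool:
    used = len(current_line)        # 1. count characters in the current line
    remaining = max_width - used    # 2-3. reference width, compute remaining space
    needed = len(next_word) + 1     # 4. next word's length plus a separating space
    return needed > remaining       # 5. break if the word won't fit

print(should_break("x" * 75, "aluminum"))  # True: needs 9, only 5 remain
```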

&lt;h2&gt;
  
  
  The Geometry
&lt;/h2&gt;

&lt;p&gt;Researchers found that LLMs build geometric structures called &lt;strong&gt;manifolds&lt;/strong&gt; to handle character counting. Here, the manifold is a curved surface in six-dimensional space with reference points representing different character counts. But six dimensions is hard to wrap your brain around, so instead, picture an old spiral staircase. Each step represents a different character count: step 1 = one character, step 2 = two characters, and so on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjs3wlvajsh70jw9ooy1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjs3wlvajsh70jw9ooy1.jpg" alt=" " width="800" height="1067"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The staircase is made of creaky wood. When you step on step 5, steps 4 and 6 might creak too. This happens when LLMs aren't 100% certain—instead of "definitely 5 characters," it says "somewhere between 4 and 6 characters."&lt;/p&gt;
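&lt;p&gt;You can build a toy three-dimensional version of the staircase yourself. This is only an analogy of the geometry described in the paper (the real manifold lives in roughly six dimensions), mapping each count to a point on a helix so that nearby counts land at nearby points:&lt;/p&gt;

```python
import math

# Toy 3-D spiral staircase: the angle encodes position within a turn,
# the height encodes the running count.
def helix_point(count: int, counts_per_turn: int = 10):
    angle = 2 * math.pi * count / counts_per_turn
    return (math.cos(angle), math.sin(angle), count / counts_per_turn)

# Nearby counts sit close together on the helix, which is why "step 5"
# can bleed into steps 4 and 6 when the model is uncertain.
p4, p5, p6 = helix_point(4), helix_point(5), helix_point(6)
```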

&lt;h2&gt;
  
  
  Why Build a Complex Structure?
&lt;/h2&gt;

&lt;p&gt;Why not just count 1, 2, 3, 4 like humans?&lt;/p&gt;

&lt;p&gt;The spiral structure is incredibly space-efficient and robust. Instead of separate storage for every character count, it uses a few dimensions to store hundreds of different counts. The overlapping "creaks" provide flexibility to operate with uncertainty.&lt;/p&gt;

&lt;p&gt;Just like human perception, precision decreases with larger numbers. You'd struggle to distinguish between 1,000 and 1,001 items, but easily tell apart 1 and 2.&lt;/p&gt;

&lt;h2&gt;
  
  
  Boundary Detection
&lt;/h2&gt;

&lt;p&gt;Once the model has the character count, it needs to detect if it's approaching the line boundary. It uses specialized components called &lt;strong&gt;boundary heads&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Picture two spiral staircases moving together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each step of the first staircase represents current character count&lt;/li&gt;
&lt;li&gt;Each step of the second staircase represents a line width&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you're on step 35 of the current count and approaching a line width of 40 characters, a boundary head activates: "Hey, we're getting close to the line width. Pay attention."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Final Decision
&lt;/h2&gt;

&lt;p&gt;For the last step, the model needs the next word's length. Say it's planning to add "aluminum" (8 characters).&lt;/p&gt;

&lt;p&gt;Instead of simple subtraction, it plots both values on a two-dimensional graph to decide whether the word fits or needs a line break.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6fjbj7k04frwnehhr1t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6fjbj7k04frwnehhr1t.png" alt=" " width="236" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Biology Connection
&lt;/h2&gt;

&lt;p&gt;Here's the most fascinating part to me: LLMs independently discovered solutions remarkably similar to biology. The boundary-detecting features work like boundary cells in animal brains, which are the same neural mechanisms that help us navigate physical space. Pretty cool, huh? &lt;/p&gt;

&lt;p&gt;You can read the complete research paper &lt;a href="https://transformer-circuits.pub/2025/linebreaks/index.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;




&lt;p&gt;&lt;em&gt;The content in this post is part of my "Learning Out Loud" LinkedIn series, where I share things I've learned recently. &lt;a href="https://dev.toyour-linkedin-url"&gt;Watch the video version on LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>tooling</category>
      <category>learning</category>
    </item>
    <item>
      <title>5 Tips to Stop LLMs from Losing the Plot</title>
      <dc:creator>Kourtney Meiss</dc:creator>
      <pubDate>Tue, 23 Dec 2025 18:36:39 +0000</pubDate>
      <link>https://forem.com/knmeiss/5-tips-to-stop-llms-from-losing-the-plot-1mon</link>
      <guid>https://forem.com/knmeiss/5-tips-to-stop-llms-from-losing-the-plot-1mon</guid>
      <description>&lt;p&gt;&lt;em&gt;This post is adapted from &lt;a href="https://www.linkedin.com/posts/kourtney-meiss_learningoutloud-ai-productivitytips-activity-7392267691681779713-jmj2?utm_source=share&amp;amp;utm_medium=member_desktop&amp;amp;rcm=ACoAABaKYGUB-44RfeIdABz3A_E8OnHlELp-n9I" rel="noopener noreferrer"&gt;episode 2&lt;/a&gt; of my Learning Out Loud video series. If you missed my first post on &lt;a href="https://dev.to/knmeiss/context-rot-why-ai-forgets-your-perfect-prompts-41hn"&gt;why LLM responses degrade over time&lt;/a&gt;, check it out first to understand tokens, context windows, and context limits.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;We've all been there—you start a conversation with an LLM and it's giving you great responses. Then 30 minutes in, it's like talking to a goldfish. Here are five strategies I've learned to keep conversations productive from start to finish.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia4.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExeW1oNHB2azh0eDNiZXNzNTE0enBla2NybG1mY2lpdHpsenB2ZWliaiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2F409RSwD6YdCFy%2Fgiphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia4.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExeW1oNHB2azh0eDNiZXNzNTE0enBla2NybG1mY2lpdHpsenB2ZWliaiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2F409RSwD6YdCFy%2Fgiphy.gif" width="240" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Plan Before You Prompt
&lt;/h2&gt;

&lt;p&gt;I know, I know -- nobody wants to spend time planning and writing docs before taking action. But hear me out! Creating a quick requirements document before you start actually saves time.&lt;/p&gt;

&lt;p&gt;Think of it like a project kickoff:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What are you trying to build or solve?&lt;/li&gt;
&lt;li&gt;Any specific requirements or constraints?&lt;/li&gt;
&lt;li&gt;Any tools you want or need to use?&lt;/li&gt;
&lt;li&gt;What does success look like?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It doesn't have to be fancy. Even a few bullet points help keep both you and the LLM on track.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Structure Your Prompts
&lt;/h2&gt;

&lt;p&gt;LLMs can parse your intent better when it's not buried in a wall of text. Use: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Headers for different sections&lt;/li&gt;
&lt;li&gt;Bullet points for lists&lt;/li&gt;
&lt;li&gt;HTML elements when you need them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this reason, I write all my prompts in markdown files. Check out &lt;a href="https://obsidian.md/" rel="noopener noreferrer"&gt;Obsidian&lt;/a&gt; as a great tool for this. &lt;/p&gt;
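&lt;p&gt;For example, a structured prompt might look like this (the project details are just placeholders):&lt;/p&gt;

```markdown
## Goal
Build a CLI tool that summarizes my weekly analytics.

## Requirements
- Output as a markdown table
- Use only the data I provide below

## Success criteria
A summary I can paste directly into my team update.
```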

&lt;h2&gt;
  
  
  3. Start Fresh Conversations for New Topics
&lt;/h2&gt;

&lt;p&gt;Don't try to cram everything into one endless chat thread. It's like trying to cover your entire project roadmap in a single meeting; it doesn't work. &lt;/p&gt;

&lt;p&gt;Break conversations into focused sessions. New feature? New chat. Different problem? New chat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pro tip&lt;/strong&gt;: Copy the requirements document from strategy #1 into each new conversation or save it to your AI assistant's saved context. This way every session starts with the same context about your project.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Keep an Eye on Context Usage
&lt;/h2&gt;

&lt;p&gt;Here's something cool you might not know: many LLMs can actually show you how much of your context window you're using.&lt;/p&gt;

&lt;p&gt;I use Amazon Kiro CLI daily, and there's an experimental feature that displays your context percentage right in the terminal. It's not on by default, but once you enable it, you'll never go back.&lt;/p&gt;

&lt;p&gt;When you're getting close to the limit:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ask the LLM to summarize what you've covered&lt;/li&gt;
&lt;li&gt;Save the key points&lt;/li&gt;
&lt;li&gt;Start a fresh session with that summary&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia4.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExcDFheHl2OGluZms1bmF0czh1eG92MXhoMWk0anFjbmg5cXFqemNzdCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2FIkOVME28z4nXI1uSjw%2Fgiphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia4.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExcDFheHl2OGluZms1bmF0czh1eG92MXhoMWk0anFjbmg5cXFqemNzdCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2FIkOVME28z4nXI1uSjw%2Fgiphy.gif" width="336" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Use Conversation Checkpoints
&lt;/h2&gt;

&lt;p&gt;Every so often, just ask your LLM: "Are we still on track with what I'm trying to accomplish?"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/v1.Y2lkPWVjZjA1ZTQ3Y2N4a3E5ajhrNmFkODE5ZzNoM3Bmam90OTRraG5nM3ozMWMyM2IzbSZlcD12MV9naWZzX3NlYXJjaCZjdD1n/63HaEw9fOleVjC27pf/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/v1.Y2lkPWVjZjA1ZTQ3Y2N4a3E5ajhrNmFkODE5ZzNoM3Bmam90OTRraG5nM3ozMWMyM2IzbSZlcD12MV9naWZzX3NlYXJjaCZjdD1n/63HaEw9fOleVjC27pf/giphy.gif" width="425" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If responses start getting weird or off-topic, that's your cue to start a new session. Think of it like a quick standup check-in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bonus feature&lt;/strong&gt;: Amazon Kiro CLI recently &lt;a href="https://kiro.dev/docs/chat/checkpoints/" rel="noopener noreferrer"&gt;added&lt;/a&gt; something like Git version control for your entire conversation history.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Working for You?
&lt;/h2&gt;

&lt;p&gt;These five strategies have made a huge difference in my day-to-day work with LLMs. But I'm always learning and I'd love to know if there are any techniques you use that I didn't mention. Drop a comment below and let me know!&lt;/p&gt;




&lt;p&gt;This post is part of my "Learning Out Loud" series where I share things I've learned recently. &lt;a href="https://www.linkedin.com/in/kourtney-meiss/" rel="noopener noreferrer"&gt;Follow me on LinkedIn&lt;/a&gt; to watch the video versions.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>learning</category>
      <category>tooling</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Context Rot: Why AI Forgets Your Perfect Prompts</title>
      <dc:creator>Kourtney Meiss</dc:creator>
      <pubDate>Mon, 22 Dec 2025 18:55:26 +0000</pubDate>
      <link>https://forem.com/knmeiss/context-rot-why-ai-forgets-your-perfect-prompts-41hn</link>
      <guid>https://forem.com/knmeiss/context-rot-why-ai-forgets-your-perfect-prompts-41hn</guid>
      <description>&lt;p&gt;You're deep in a coding session. Your AI assistant was crushing it for the first hour, understanding your requirements, following your coding style, and implementing features cleanly. Then suddenly, it's like talking to a goldfish.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExcXUxdmZ4M2xxamZ4cnFqcWR3cHFwbXA5eHFuZGJpNGo0NjIweTYwbCZlcD12MV9naWZzX3NlYXJjaCZjdD1n/N3Va4Lc0UAXV9g31Iz/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExcXUxdmZ4M2xxamZ4cnFqcWR3cHFwbXA5eHFuZGJpNGo0NjIweTYwbCZlcD12MV9naWZzX3NlYXJjaCZjdD1n/N3Va4Lc0UAXV9g31Iz/giphy.gif" width="330" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every new request introduces bugs. It ignores the constraints you set at the beginning. You find yourself repeating the same instructions over and over, wondering: "Are you even listening to me?"&lt;/p&gt;

&lt;p&gt;If this sounds familiar, you're not alone. And more importantly, you're not going crazy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tokenization
&lt;/h2&gt;

&lt;p&gt;Here's what's actually happening behind the scenes. AI doesn't process text like humans do. Before it can understand your words, everything gets converted into &lt;strong&gt;tokens&lt;/strong&gt;. Think of it like feeding dollar bills into an arcade token machine, except you're feeding in words instead of money.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia3.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExMG8zZWtxeHFrNWg2Zmx6ejg5aDlvY3ZlangyMm5xZjJ0MHprZmdmdyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2FEJZj6zx1PfFVG8bAva%2Fgiphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia3.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExMG8zZWtxeHFrNWg2Zmx6ejg5aDlvY3ZlangyMm5xZjJ0MHprZmdmdyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2FEJZj6zx1PfFVG8bAva%2Fgiphy.gif" width="480" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Input: "hello world"
&lt;/li&gt;
&lt;li&gt;Output: 2–3 tokens, depending on the tokenizer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Generally, one token equals about 3/4 of a word or 4 characters. Different models use different tokenization algorithms, which is why the same text might produce different token counts across providers.&lt;/p&gt;
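&lt;p&gt;You can turn that rule of thumb into a quick back-of-envelope estimator. This is just the 4-characters-per-token approximation, not a real tokenizer (tools like OpenAI's tiktoken give exact counts for a specific model):&lt;/p&gt;

```python
# Rough token-count estimate using the "1 token is about 4 characters" rule
# of thumb. Real tokenizers give exact, model-specific counts; this is only
# for quick mental math about prompt sizes and costs.

def estimate_tokens(text: str) -> int:
    """Approximate token count: about 4 characters per token."""
    return max(1, round(len(text) / 4))

print(estimate_tokens("hello world"))  # ~3 for an 11-character string
print(estimate_tokens("The quick brown fox jumps over the lazy dog."))
```

&lt;p&gt;For anything where the count actually matters (billing, context limits), run the model's own tokenizer instead of estimating.&lt;/p&gt;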

&lt;p&gt;If you're using API or CLI versions of LLMs, you're paying per token:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GPT-4o&lt;/strong&gt;: $5/million input tokens, $15/million output tokens&lt;br&gt;
&lt;strong&gt;GPT-4&lt;/strong&gt;: $30/million input tokens, $60/million output tokens&lt;/p&gt;

&lt;p&gt;(Rates change frequently, so always check your provider's current pricing page.)&lt;/p&gt;
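&lt;p&gt;Those per-million-token rates make session costs easy to estimate. A sketch, using the GPT-4-class rates above purely as example numbers:&lt;/p&gt;

```python
# Back-of-envelope API cost for a chat session, given per-million-token rates.
# The rates below are illustrative; real pricing varies by model and changes
# over time, so plug in your provider's current numbers.

def session_cost(input_tokens: int, output_tokens: int,
                 input_rate: float, output_rate: float) -> float:
    """Cost in dollars, where rates are expressed per million tokens."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A long coding session: 500k input tokens, 100k output tokens at $30/$60 rates.
print(f"${session_cost(500_000, 100_000, 30.0, 60.0):.2f}")  # $21.00
```

&lt;p&gt;Note that input tokens usually dominate in long sessions, because the whole conversation history gets re-sent with every request.&lt;/p&gt;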

&lt;p&gt;But cost isn't the only concern.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context Rot
&lt;/h2&gt;

&lt;p&gt;Picture those tokens dropping onto a conveyor belt with fixed capacity. As you feed more words in, older tokens get pushed forward. When the belt fills up, tokens at the front fall off—and get forgotten.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia0.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExdm1xY2Z1NDRocGJkeXdja2dvZmN2OHl6cDZqazZwa3NuaWxqbDB0OSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2F7yF8l31cy8hvDu4r7E%2Fgiphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia0.giphy.com%2Fmedia%2Fv1.Y2lkPTc5MGI3NjExdm1xY2Z1NDRocGJkeXdja2dvZmN2OHl6cDZqazZwa3NuaWxqbDB0OSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw%2F7yF8l31cy8hvDu4r7E%2Fgiphy.gif" width="480" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That perfect prompt you crafted 20 minutes ago? Those crucial error messages you shared? If they've been pushed off the conveyor belt, they're gone from the AI's memory.&lt;/p&gt;

&lt;p&gt;This is &lt;strong&gt;context rot&lt;/strong&gt;, and it explains why your coding assistant seems to develop amnesia mid-conversation.&lt;/p&gt;
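&lt;p&gt;You can model the conveyor belt with a fixed-capacity buffer. This is a simplified sketch (real systems truncate or summarize whole messages rather than individual tokens), but the effect on your early instructions is the same:&lt;/p&gt;

```python
# Minimal sketch of the "conveyor belt": a fixed-capacity context window that
# silently drops the oldest tokens as new ones arrive. The ContextWindow class
# and tiny 5-token limit here are illustrative, not how any real model works.
from collections import deque

class ContextWindow:
    def __init__(self, max_tokens: int):
        # deque with maxlen drops items from the front once capacity is reached
        self.window = deque(maxlen=max_tokens)

    def add(self, tokens: list[str]) -> None:
        self.window.extend(tokens)

    def remembers(self, token: str) -> bool:
        return token in self.window

ctx = ContextWindow(max_tokens=5)
ctx.add(["always", "use", "tabs"])      # your crucial instruction
ctx.add(["fix", "this", "bug", "now"])  # later conversation pushes it out
print(ctx.remembers("always"))  # False: the instruction fell off the belt
```

&lt;p&gt;Notice there's no error when the instruction falls off: the model doesn't know it forgot anything, which is exactly why the amnesia feels so sudden.&lt;/p&gt;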

&lt;h2&gt;
  
  
  What You Can Do About It
&lt;/h2&gt;

&lt;p&gt;The good news? Once you understand what's happening, you can work with it instead of against it. I've compiled strategies that have saved my sanity during long coding sessions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Coming up next&lt;/strong&gt;: Practical techniques to manage context rot and keep your AI assistant focused throughout your entire development workflow.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post is part of my "Learning Out Loud" series where I share developer insights from real coding experiences. You can also &lt;a href="https://www.linkedin.com/posts/kourtney-meiss_ai-aiexplained-context-activity-7388998000464822272-G-YN?utm_source=share&amp;amp;utm_medium=member_desktop&amp;amp;rcm=ACoAABaKYGUB-44RfeIdABz3A_E8OnHlELp-n9I" rel="noopener noreferrer"&gt;watch the video version on LinkedIn&lt;/a&gt;. Follow for more practical AI development tips.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>learning</category>
      <category>tooling</category>
    </item>
  </channel>
</rss>
