<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Gjergji Make</title>
    <description>The latest articles on Forem by Gjergji Make (@gjergji_make).</description>
    <link>https://forem.com/gjergji_make</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3772422%2F1483f07f-6864-440a-a1c3-d819f899681e.jpg</url>
      <title>Forem: Gjergji Make</title>
      <link>https://forem.com/gjergji_make</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gjergji_make"/>
    <language>en</language>
    <item>
      <title>Running Retrospectives with AI: Treating Your Model Like a Teammate</title>
      <dc:creator>Gjergji Make</dc:creator>
      <pubDate>Sat, 14 Feb 2026 10:09:02 +0000</pubDate>
      <link>https://forem.com/gjergji_make/running-retrospectives-with-ai-treating-your-model-like-a-teammate-3e09</link>
      <guid>https://forem.com/gjergji_make/running-retrospectives-with-ai-treating-your-model-like-a-teammate-3e09</guid>
      <description>&lt;p&gt;AI tools are no longer just autocomplete assistants.&lt;/p&gt;

&lt;p&gt;They are drafting product requirements, shaping architecture decisions, generating UX prompts, estimating effort, and even advising on strategy. In many teams, AI is quietly doing work that used to belong to a product manager, solution architect, or analyst.&lt;/p&gt;

&lt;p&gt;And yet, we rarely ask:&lt;/p&gt;

&lt;p&gt;How well are we collaborating with it?&lt;/p&gt;

&lt;p&gt;If AI is taking on human-like work, shouldn’t we run retrospectives on that collaboration the same way we would with a teammate?&lt;/p&gt;

&lt;p&gt;Recently, I ran an experiment.&lt;/p&gt;

&lt;p&gt;⸻&lt;/p&gt;

&lt;h2&gt;
  
  
  The Experiment: A Human–AI Product Collaboration
&lt;/h2&gt;

&lt;p&gt;Over the course of a long working session, I collaborated with an AI tool to define a series of interconnected product features.&lt;/p&gt;

&lt;p&gt;The project involved:&lt;br&gt;
• Multi-step user workflows&lt;br&gt;
• Status-based behavior logic&lt;br&gt;
• Dynamic form rendering&lt;br&gt;
• Role-based restrictions&lt;br&gt;
• UI redesign iterations&lt;br&gt;
• Effort estimation&lt;br&gt;
• Customer-facing documentation&lt;/p&gt;

&lt;p&gt;In other words: not a trivial task.&lt;/p&gt;

&lt;p&gt;Instead of using AI for isolated answers, I treated it like a product partner. For each feature, I asked it to provide:&lt;/p&gt;

&lt;p&gt;• A short feature briefing&lt;br&gt;
• A suggested solution approach&lt;br&gt;
• A ballpark effort estimate&lt;br&gt;
• Customer-facing user flow tables&lt;br&gt;
• Business objectives&lt;br&gt;
• Important open questions&lt;br&gt;
• UX mockup prompts&lt;/p&gt;

&lt;p&gt;The structure looked like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Feature name:
Context:
Target users:
Constraints:
Related features:

Please deliver:
1. Briefing
2. Suggested solution
3. Estimate
4. Customer-facing flow tables
5. Business objectives
6. Important inputs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
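&lt;p&gt;A minimal sketch of how that structure can be enforced in practice (the template text mirrors my prompt above; the function and field names are hypothetical, not from any library):&lt;/p&gt;

```python
# Hypothetical sketch: render the shared feature-brief template from a dict,
# so every feature request sent to the model follows the same structure.
BRIEF_TEMPLATE = """Feature name: {name}
Context: {context}
Target users: {users}
Constraints: {constraints}
Related features: {related}

Please deliver:
1. Briefing
2. Suggested solution
3. Estimate
4. Customer-facing flow tables
5. Business objectives
6. Important inputs
"""

def build_feature_prompt(feature):
    """Fill the template; a missing key fails loudly instead of silently."""
    return BRIEF_TEMPLATE.format(**feature)

prompt = build_feature_prompt({
    "name": "Status-based form rendering",
    "context": "Forms change fields depending on record status",
    "users": "Internal operators",
    "constraints": "Must reuse the existing role model",
    "related": "Role-based restrictions",
})
```

Because every feature goes through the same function, the model sees an identical shape each time, which is what lets it reuse patterns across features.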



&lt;p&gt;As the session progressed, the AI began reusing patterns across features — status modes, user roles, dynamic rendering approaches. It maintained continuity.&lt;/p&gt;

&lt;p&gt;But like any real collaboration, there were frictions.&lt;/p&gt;

&lt;p&gt;So we ran a retrospective.&lt;/p&gt;

&lt;p&gt;⸻&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI Retrospective
&lt;/h2&gt;

&lt;p&gt;At the end of the session, I asked the AI to reflect on:&lt;/p&gt;

&lt;p&gt;• What went well&lt;br&gt;
• What caused friction&lt;br&gt;
• How it could improve&lt;br&gt;
• How I could improve&lt;/p&gt;

&lt;p&gt;The prompt was simple:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I want to run a retrospective on this entire conversation.
Please review all messages in this chat and act as a retrospective participant.

Provide:
    1.  What went well in how I asked questions and how you responded
    2.  What went wrong or caused friction (misunderstandings, rework, unclear prompts, over/under-detail)
    3.  Improvements for you (the AI) to communicate better with me
    4.  Improvements for me (the human) to get better results from you

Be honest, specific, and practical. Avoid generic advice.
End with a short, actionable to-do list for both of us.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
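&lt;p&gt;If you keep a transcript of the session, the retro can simply be appended as the final turn. A small sketch under that assumption (the transcript format and function name are mine, not any particular SDK's):&lt;/p&gt;

```python
# Hypothetical sketch: append the retrospective prompt as the last turn of a
# recorded chat transcript before sending the whole thing back to the model.
RETRO_PROMPT = (
    "I want to run a retrospective on this entire conversation. "
    "Review all messages and act as a retrospective participant. "
    "Cover: what went well, what caused friction, how you can improve, "
    "and how I can improve. Be honest, specific, and practical."
)

def with_retro_turn(transcript):
    """transcript: list of {'role': ..., 'content': ...} dicts, oldest first."""
    return transcript + [{"role": "user", "content": RETRO_PROMPT}]
```

The point is that the retro runs over the full history, not a summary, so the model can cite concrete moments of friction.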



&lt;p&gt;What came back wasn’t defensive or generic. It was surprisingly useful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What worked well&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Clear structure in prompts&lt;br&gt;
• Consistent output expectations&lt;br&gt;
• Iterative refinement instead of trying to get everything perfect upfront&lt;br&gt;
• Reusing patterns across features&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What caused friction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Late-stage constraints requiring rework&lt;br&gt;
• Ambiguity about global rules (e.g., status behavior across components)&lt;br&gt;
• Over-detail in some areas where high-level thinking was enough&lt;/p&gt;

&lt;p&gt;This wasn’t about blaming the tool. It surfaced something important:&lt;/p&gt;

&lt;p&gt;Human–AI collaboration fails for the same reasons human–human collaboration fails: unclear assumptions and evolving constraints.&lt;/p&gt;

&lt;p&gt;⸻&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI’s Self-Reflection
&lt;/h2&gt;

&lt;p&gt;What surprised me most was the AI’s willingness to critique itself.&lt;/p&gt;

&lt;p&gt;It acknowledged things like:&lt;/p&gt;

&lt;p&gt;• It should lock assumptions more explicitly instead of silently proceeding.&lt;br&gt;
• It should ask sharper clarification questions when a decision affects multiple future components.&lt;br&gt;
• It sometimes over-explained architecture when the human already had senior-level context.&lt;br&gt;
• It could reference earlier artifacts instead of rewriting similar sections.&lt;/p&gt;

&lt;p&gt;In other words, it identified behavior-level improvements — not just content-level ones.&lt;/p&gt;

&lt;p&gt;That’s when it stopped feeling like a tool and started feeling like a teammate.&lt;/p&gt;

&lt;p&gt;⸻&lt;/p&gt;

&lt;h2&gt;
  
  
  Improving the Human–AI Relationship
&lt;/h2&gt;

&lt;p&gt;One of the most useful outcomes was a set of reusable prompt templates for different collaboration modes.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring New Ideas
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I want to explore an idea at a high level.

Context:
- Product:
- Target users:
- Problem:
- Rough idea:

Please:
- Clarify the problem
- Suggest 2–3 solution approaches
- Identify major risks
- Ask up to 3 critical clarifying questions

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Defining Product Requirements
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Feature name:
Context:
Target users:
Constraints:

Deliver:
1. Briefing
2. Suggested solution
3. Estimate
4. Customer-facing flow
5. Business objectives
6. Open decisions

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Stress-Testing Before Commitment
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Here’s the idea:
Key assumptions:
What I’m worried about:

Please:
- Challenge assumptions
- Identify failure modes
- Suggest what to validate first
- Tell me what to cut if time is tight

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
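&lt;p&gt;The three modes above can live in one small registry, so picking a collaboration mode is an explicit step rather than an ad-hoc rewrite. A sketch, with abbreviated template text and names I made up for illustration:&lt;/p&gt;

```python
# Hypothetical sketch: one registry of collaboration-mode prompts, keyed by
# mode, so the right template is chosen explicitly before talking to the model.
MODE_TEMPLATES = {
    "explore": (
        "I want to explore an idea at a high level.\n"
        "Product: {product}\nTarget users: {users}\n"
        "Problem: {problem}\nRough idea: {idea}\n"
        "Clarify the problem, suggest 2-3 approaches, identify major risks, "
        "and ask up to 3 critical clarifying questions."
    ),
    "define": (
        "Feature name: {name}\nContext: {context}\n"
        "Target users: {users}\nConstraints: {constraints}\n"
        "Deliver: briefing, suggested solution, estimate, "
        "customer-facing flow, business objectives, open decisions."
    ),
    "stress_test": (
        "Here is the idea: {idea}\nKey assumptions: {assumptions}\n"
        "What I am worried about: {worries}\n"
        "Challenge assumptions, identify failure modes, suggest what to "
        "validate first, and tell me what to cut if time is tight."
    ),
}

def prompt_for(mode, **fields):
    """Build the prompt for a named collaboration mode."""
    if mode not in MODE_TEMPLATES:
        raise ValueError(f"Unknown collaboration mode: {mode}")
    return MODE_TEMPLATES[mode].format(**fields)
```

Naming the mode up front is what distinguishes exploratory conversation from final decisions, which is the first item in the learnings below.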



&lt;h2&gt;
  
  
  The key learning?
&lt;/h2&gt;

&lt;p&gt;AI collaboration improves dramatically when:&lt;br&gt;
• You distinguish exploratory from final decisions&lt;br&gt;
• You declare global rules early&lt;br&gt;
• You ask for structured outputs&lt;br&gt;
• You explicitly request critique&lt;/p&gt;

&lt;p&gt;This isn’t prompt engineering. It’s collaboration design.&lt;/p&gt;

&lt;p&gt;⸻&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Retrospectives May Become Standard Practice
&lt;/h2&gt;

&lt;p&gt;We’re entering a phase where AI systems:&lt;/p&gt;

&lt;p&gt;• Draft architecture&lt;br&gt;
• Shape requirements&lt;br&gt;
• Influence roadmap thinking&lt;br&gt;
• Write customer-facing documentation&lt;/p&gt;

&lt;p&gt;If that’s the case, running retrospectives with AI isn’t experimental — it’s responsible.&lt;/p&gt;

&lt;p&gt;The goal isn’t to “train” the AI.&lt;/p&gt;

&lt;p&gt;The goal is to improve the working relationship.&lt;/p&gt;

&lt;p&gt;Just like with human teammates, the quality of outcomes depends on:&lt;/p&gt;

&lt;p&gt;• Clear expectations&lt;br&gt;
• Shared structure&lt;br&gt;
• Honest feedback&lt;br&gt;
• Iteration&lt;/p&gt;

&lt;p&gt;AI won’t replace teams.&lt;/p&gt;

&lt;p&gt;But teams that learn how to collaborate with AI — and reflect on that collaboration — will likely outperform those that don’t.&lt;/p&gt;

&lt;p&gt;Maybe the future of agile includes one more participant in the retro.&lt;/p&gt;

&lt;p&gt;And maybe that participant learns faster than the rest of us.&lt;/p&gt;

&lt;p&gt;⸻&lt;/p&gt;

&lt;p&gt;If you’re using AI in product or engineering work, have you ever run a retrospective on the collaboration itself?&lt;/p&gt;

</description>
      <category>retrospective</category>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
