<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kelly Lewandowski</title>
    <description>The latest articles on Forem by Kelly Lewandowski (@kelly_lewandowski_845215e).</description>
    <link>https://forem.com/kelly_lewandowski_845215e</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3780325%2F812992d9-829e-4a15-8729-77f0169854b8.png</url>
      <title>Forem: Kelly Lewandowski</title>
      <link>https://forem.com/kelly_lewandowski_845215e</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kelly_lewandowski_845215e"/>
    <language>en</language>
    <item>
      <title>Where do retrospective action items belong? (Probably not in Jira)</title>
      <dc:creator>Kelly Lewandowski</dc:creator>
      <pubDate>Sun, 03 May 2026 21:36:30 +0000</pubDate>
      <link>https://forem.com/kelly_lewandowski_845215e/where-do-retrospective-action-items-belong-probably-not-in-jira-239i</link>
      <guid>https://forem.com/kelly_lewandowski_845215e/where-do-retrospective-action-items-belong-probably-not-in-jira-239i</guid>
      <description>&lt;p&gt;You know the cycle. Your team runs a great retro. People are honest. Three or four genuinely good action items go up on the board. Someone says "I'll put these in Jira." Everyone nods. Two weeks later you're sitting in the next retro and someone raises the same problem.&lt;/p&gt;

&lt;p&gt;That moment, multiplied across thousands of teams, is why a 2023 Scrum Alliance survey found only 35% of teams consistently complete their retro action items. The other 65% are running the conversation, generating the insight, and then losing it within about a week.&lt;/p&gt;

&lt;p&gt;I've watched this play out enough times that I have an opinion now, and it's not the one I started with. Most retro action items don't belong in Jira. Some do. The ones that don't belong there are the ones that quietly disappear.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft78a96b1bchqnegwrwhw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft78a96b1bchqnegwrwhw.jpg" alt="The action item void" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The two camps every team falls into
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Camp 1: dump everything into Jira.&lt;/strong&gt; It feels rigorous. Jira is the system of record for the rest of the work, so if a thing is real it should have a ticket. "Improve our PR review process" becomes JIRA-4471, gets assigned to Sarah, and joins the queue. Sarah remembers it for about six days.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Camp 2: leave them in the meeting notes.&lt;/strong&gt; Someone pastes the action items into a Confluence page or a shared doc. The expectation is that people will check back. Nobody does. By Wednesday they couldn't find the doc if you paid them.&lt;/p&gt;

&lt;p&gt;Both camps fail for the same reason. The place you store an action item should match the cadence and shape of the work it represents. Jira is built for prioritised, customer-facing work that flows through a board. Retro action items usually aren't that. The shared doc has the opposite problem: it tracks nothing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Jira feels right and usually isn't
&lt;/h2&gt;

&lt;p&gt;Putting an action item in Jira looks responsible. You're treating it like real work. It tends to fall flat anyway, for four reasons.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No stakeholders.&lt;/strong&gt; A Jira ticket usually exists because a customer or a PM wants something shipped. A retro action item exists because the team wants to fix itself. Without external pressure on the assignee, it sinks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grooming kills it.&lt;/strong&gt; Backlogs get prioritised by impact, urgency, and customer pain. "Add a 5-item PR checklist" loses to every customer-facing ticket. Forever.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invisible to the next retro.&lt;/strong&gt; When the team brainstorms again, nobody opens Jira to check what's still open from last sprint. The information is technically there. The flow doesn't bring it back into the room.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Process improvements don't scope down.&lt;/strong&gt; "Communicate better about blockers" or "do refinement before planning" aren't tickets. They're rituals. Forcing them into a story-points-and-acceptance-criteria shape distorts them and makes them harder to do, not easier.&lt;/p&gt;

&lt;h2&gt;
  
  
  The action items that DO belong in Jira
&lt;/h2&gt;

&lt;p&gt;I want to be fair. Some retro action items absolutely belong in your tracker. The test is whether the action is real shippable work.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Add integration tests for the checkout flow." Engineering work, has an outcome a PM cares about, ships.&lt;/li&gt;
&lt;li&gt;"Fix the three flakiest tests in CI." Real work, scoped, owned, ships.&lt;/li&gt;
&lt;li&gt;"Document the on-call runbook for the auth service." Borderline. Real work but it'll never win against customer features in grooming. Probably better in your retro tool with a Jira ticket linked when someone actually picks it up.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The clean rule: if the action item would survive a sprint planning conversation as a regular ticket on its own merits, it belongs in your tracker. Push it there with a one-click export, link the ticket back to the retro item, and move on.&lt;/p&gt;

&lt;h2&gt;
  
  
  The action items that DON'T belong in Jira
&lt;/h2&gt;

&lt;p&gt;Almost everything else.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Stop scheduling refinement on Fridays."&lt;/li&gt;
&lt;li&gt;"Run a 5-minute kudos round at the end of every retro."&lt;/li&gt;
&lt;li&gt;"Try a 4-day sprint for the next two cycles as an experiment."&lt;/li&gt;
&lt;li&gt;"Stop merging on Fridays unless a release manager approves."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are commitments and rituals. They matter, but none of them have a definition of done that fits a sprint board. They need somewhere they can be revisited weekly without competing for priority against feature work.&lt;/p&gt;

&lt;p&gt;That somewhere is your retro tool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0f0kv9g2z246wk78e86.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0f0kv9g2z246wk78e86.jpg" alt="Two paths" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the retro tool is the right default
&lt;/h2&gt;

&lt;p&gt;Four things engineering trackers don't do well:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuity.&lt;/strong&gt; When the next retro opens, last sprint's open action items are right there. The team sees them before brainstorming new ones. This single behaviour does more for follow-through than any other intervention I've tried.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context.&lt;/strong&gt; The action item lives next to the retro item that produced it. Click through and you see the original "we keep merging on Fridays and breaking things" thread, the discussion, the votes. Tickets in Jira get stripped of all that and become a one-line subject nobody can decode three weeks later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cadence.&lt;/strong&gt; Retro action items want a weekly heartbeat. A ticket on a sprint board wants daily standup attention or it gets buried.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pattern detection.&lt;/strong&gt; When the same problem comes up across three retros over six weeks, that's a different problem than a one-off complaint. Jira can't surface that, because it doesn't know the items are related.&lt;/p&gt;

&lt;p&gt;That last one is what tipped me. The first time I saw a tool flag "this team has raised CI flakiness in three retros over six weeks and only one action item from those retros has been completed", I realised I'd been undercounting how often we punted on the same thing. It was uncomfortable, and it was the most useful single data point I'd gotten about my team in a year.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this looks like in practice (Kollabe)
&lt;/h2&gt;

&lt;p&gt;I work on &lt;a href="https://kollabe.com?utm_source=devto" rel="noopener noreferrer"&gt;Kollabe&lt;/a&gt;, so take this with the appropriate grain of salt. The patterns generalise; if your tool of choice does these differently, swap in the equivalent.&lt;/p&gt;

&lt;p&gt;Action items live with the retro that produced them. Each has an owner, a due date, and a status. They show up in the next retro before the team starts on new ones.&lt;/p&gt;

&lt;p&gt;Every Monday morning, anyone with open action items assigned to them gets a single email summarising what's still open. Not a Slack ping in a channel where it scrolls past lunch. A direct email at the start of the week, addressed to one person.&lt;/p&gt;

&lt;p&gt;If you're on a paid plan, the team also gets a weekly insights report every Monday. Three things in it actually matter for action items: completion rate (with the top overdue items by name), recurring themes flagged across multiple retros, and a team health score that includes follow-through and blocker persistence. When the score drops, it's almost always because the team is generating action items faster than it's closing them. That's a leadership signal, not a process complaint.&lt;/p&gt;

&lt;p&gt;For the action items that are real shippable work, one click pushes them into Jira, GitHub Projects, or Linear, with a link back to the retro item. The point isn't the specific product. The point is the loop: action item, owner, weekly nudge, next retro shows the open ones, AI catches when the same theme keeps reappearing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvt1v8btcexfv6q2hsi8d.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvt1v8btcexfv6q2hsi8d.jpg" alt="The weekly loops" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  When the "everything in Jira" approach actually works
&lt;/h2&gt;

&lt;p&gt;There's one case where Jira-only is fine: very small teams (3-5 engineers, no PMs, strong self-driving culture). They don't need a separate place for retro action items because they have so few moving pieces that the items don't get lost. The whole team holds the list in their heads.&lt;/p&gt;

&lt;p&gt;If that's you, the rest of this post doesn't apply. Once your team gets above seven people, or you add a layer of PMs and stakeholders, the Jira-only pattern starts breaking. You'll feel it within a quarter, usually as the same complaint coming up in three retros in a row while everyone insists "we're working on it".&lt;/p&gt;

&lt;h2&gt;
  
  
  A practical rule of thumb
&lt;/h2&gt;

&lt;p&gt;When the team agrees on an action item, ask one question before deciding where it lives:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If we don't do this, who's disappointed?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If the answer is "a customer" or "the PM" or "the company", it's a Jira ticket. If the answer is "us" or "future-us" or "the next retro", it stays in the retro tool with an owner, a due date, and a Monday reminder.&lt;/p&gt;

&lt;p&gt;One question, no debate, routes about 90% of action items correctly. The other 10% is borderline and you'll argue about it for thirty seconds, which is the right amount of time to argue about anything in a retro.&lt;/p&gt;
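&lt;p&gt;If you like your heuristics executable, the rule is literally one conditional. A purely illustrative sketch (the function and labels are mine, not any tool's API):&lt;/p&gt;

```python
def route(disappointed: str) -> str:
    """Route a retro action item by who is disappointed if it never happens."""
    if disappointed in {"customer", "PM", "company"}:
        return "tracker"      # real shippable work: push to Jira/Linear/etc.
    return "retro tool"       # team commitment: owner, due date, Monday nudge

print(route("customer"))   # → tracker
print(route("future-us"))  # → retro tool
```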

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn93grfvsf9tc0qg9yuk8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn93grfvsf9tc0qg9yuk8.jpg" alt="Reaccuring themes moment" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing
&lt;/h2&gt;

&lt;p&gt;Retros are cheap to run and expensive to ignore. The format you pick for the conversation matters less than where the action items end up afterward. If they don't have a home that nudges the assignee, persists across sprints, and surfaces patterns when the same thing keeps coming up, you're paying for the meeting and not collecting the dividend.&lt;/p&gt;

&lt;p&gt;If you want a setup that does the nudging and the pattern detection out of the box, &lt;a href="https://kollabe.com/retrospectives?utm_source=devto" rel="noopener noreferrer"&gt;Kollabe's retrospectives tool&lt;/a&gt; handles it on the free plan, with weekly insights reports on Premium. Or steal the workflow and apply it to whatever you're already using. The "who's disappointed" rule is free.&lt;/p&gt;

</description>
      <category>agile</category>
      <category>productivity</category>
      <category>scrum</category>
      <category>devops</category>
    </item>
    <item>
      <title>How my team killed manual standups with Claude + Kollabe MCP</title>
      <dc:creator>Kelly Lewandowski</dc:creator>
      <pubDate>Wed, 29 Apr 2026 10:21:21 +0000</pubDate>
      <link>https://forem.com/kelly_lewandowski_845215e/how-my-team-killed-manual-standups-with-claude-kollabe-mcp-99</link>
      <guid>https://forem.com/kelly_lewandowski_845215e/how-my-team-killed-manual-standups-with-claude-kollabe-mcp-99</guid>
      <description>&lt;p&gt;The standup didn't die because we hated the meeting. It died because the update part (yesterday, today, blockers) turned into a five-minute morning tax I was already paying in PRs and Jira tickets. I'd close my IDE, open the standup tool, and re-type the same information into a shorter form.&lt;/p&gt;

&lt;p&gt;Three weeks ago I stopped. Now Claude reads my activity, drafts the update, I edit it for thirty seconds, and submit. Same content. None of the typing tax. The unexpected part? My EM said the team's standups got &lt;em&gt;better&lt;/em&gt;, not worse.&lt;/p&gt;

&lt;p&gt;Here's what we tried, what stuck, and what surprised me.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we replaced
&lt;/h2&gt;

&lt;p&gt;Our old standup was async-by-policy and screenshot-by-reality. Eight engineers, four timezones, a Slack channel pinned to the top of the workspace, and a 9am Sydney "soft deadline" that meant nothing to the engineer in Berlin who'd just woken up. People wrote the update they remembered, not the update that was true. The 4pm thing where you helped a teammate debug a deploy? Forgotten. The PR you shipped before lunch? Forgotten. The blocker you mentioned in standup three days ago? Still there, mentioned for the third time, with no thread connecting the three appearances.&lt;/p&gt;

&lt;p&gt;A 2023 Atlassian survey put manager time on status collection at around 17% of the working week. That number tracks for me. The expensive bit was never the meeting itself, which was already 60 seconds of skimming. The expensive bit was the writing and the chasing and the synthesising into something a stakeholder could read on a Friday.&lt;/p&gt;

&lt;p&gt;We didn't want to replace the standup. We wanted to remove the typing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6p1oriphsl5owf69t8aq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6p1oriphsl5owf69t8aq.jpg" alt="tired developer" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The new workflow, end to end
&lt;/h2&gt;

&lt;p&gt;Every engineer on the team runs one saved Claude prompt in the morning. Five steps, none of which involve typing into the standup tool.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The engineer opens Claude Desktop. Or Cursor. Or Claude Code. Pick your client.&lt;/li&gt;
&lt;li&gt;They run their saved standup prompt.&lt;/li&gt;
&lt;li&gt;Claude reads yesterday's GitHub PRs, Jira transitions, and (if they've connected it) calendar.&lt;/li&gt;
&lt;li&gt;Claude drafts answers for each standup question. The engineer reads, edits anything wrong, adds a real blocker if there is one.&lt;/li&gt;
&lt;li&gt;The engineer says "submit". Claude calls Kollabe's &lt;code&gt;standup_submit_answers&lt;/code&gt; tool. The update lands in the standup view exactly as if the engineer had typed it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The whole loop is about thirty seconds on a normal day. On a day with a real blocker it's two minutes, because the engineer adds context the AI can't infer.&lt;/p&gt;

&lt;p&gt;The prompt itself is the only thing worth copying:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are drafting my Kollabe standup for today.

1. Use the Kollabe MCP to find my standup.
2. Pull my activity from yesterday using the GitHub MCP (PRs opened/merged/reviewed,
   commits on branches I own) and the Jira MCP (issues I transitioned, commented on,
   or that were assigned to me).
3. Use Kollabe to get the question list for my standup.
4. Draft an answer for each question, in plain language, no bullets longer than 12 words.
   - "Yesterday" = what I shipped or moved.
   - "Today" = what's actually on my calendar / picked up, not aspirational.
   - "Blockers" = empty unless I genuinely have one.
5. Show me the draft. Wait for me to approve.
6. On approval, submit via Kollabe.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save it as a Claude Project, a slash command, or a snippet in your client of choice. Mine lives as a saved prompt called &lt;code&gt;/standup&lt;/code&gt; in Claude Code so I can fire it from the terminal.&lt;/p&gt;
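&lt;p&gt;For the Claude Code route, wiring that up is one file. This assumes the default project-level slash command location of &lt;code&gt;.claude/commands/&lt;/code&gt; (check your client's docs if yours differs):&lt;/p&gt;

```shell
# Claude Code picks up custom slash commands as markdown files
# under .claude/commands/ in the project root (filename = command name).
mkdir -p .claude/commands
cat > .claude/commands/standup.md <<'EOF'
You are drafting my Kollabe standup for today.
(paste the full prompt from the block above)
EOF
```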

&lt;h2&gt;
  
  
  Why this beats a Slackbot, a form, or just yelling in chat
&lt;/h2&gt;

&lt;p&gt;I've used the Slackbots. I've built the forms. I've yelled in chat. None of those solved the real problem, which is that the update was always a half-remembered version of the day. An AI draft built from actual activity is &lt;em&gt;grounded&lt;/em&gt;. It catches the 4pm thing because the 4pm thing exists in your commit history.&lt;/p&gt;

&lt;p&gt;There's a more boring reason this matters too: structured data. When the standup is text in a Slack channel, the AI summary on Friday is doing pattern recognition on noise. When the standup is structured submissions in Kollabe, the same summary works against typed fields it actually understands. You can ask "which blockers appeared more than once this sprint" and get a real answer instead of a guess.&lt;/p&gt;

&lt;p&gt;The other thing took me a sprint to notice: acting identity. The MCP token submits the standup as the engineer, with their role and permissions, against a real audit trail. There's no bot user posting on someone's behalf. The blocker is owned by the person who hit it, and the threading goes to the person who can fix it. Sounds like a small detail. It isn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  The manager-side win
&lt;/h2&gt;

&lt;p&gt;Before this, my EM read about twelve standups a day, mostly skimmed, asked clarifying questions in DMs. By Friday she'd compile a sprint roll-up by hand, and her Friday morning was that.&lt;/p&gt;

&lt;p&gt;Now she runs one prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Summarise the team's Kollabe standups for the last 5 working days
from the "Web" space. Group recurring themes. Flag any blocker that
appears in more than one submission.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Behind the scenes that's &lt;code&gt;standup_list_submissions&lt;/code&gt; for the date range, &lt;code&gt;standup_get_summary&lt;/code&gt; for the days where Kollabe has already produced one, and Claude clustering across the rest. The output is a markdown digest she can paste into a doc.&lt;/p&gt;
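&lt;p&gt;The recurring-blocker flag is the part worth understanding: once submissions are structured data, it's just a group-and-count. A minimal sketch, with hypothetical field names rather than Kollabe's actual schema:&lt;/p&gt;

```python
from collections import Counter

# Hypothetical shape of what a submissions listing returns.
submissions = [
    {"author": "mei",   "blockers": ["flaky CI on main"]},
    {"author": "tomas", "blockers": []},
    {"author": "priya", "blockers": ["flaky CI on main", "waiting on auth review"]},
]

# Count each blocker once per submission, then flag any that
# appears in more than one person's update.
counts = Counter(b for s in submissions for b in set(s["blockers"]))
recurring = [b for b, n in counts.items() if n > 1]
print(recurring)  # → ['flaky CI on main']
```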

&lt;p&gt;For end-of-sprint reporting she runs the same prompt with a fortnightly window. End-of-quarter, monthly, same prompt with a different range. No typing.&lt;/p&gt;

&lt;p&gt;She told me last week, almost as a side note: "I used to spend Friday mornings making the report. Now I spend it asking better questions." That's the line that convinced me this wasn't a productivity-hack post. It's a job-shape-changing post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72i7zg930u0o4ky2t3dw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72i7zg930u0o4ky2t3dw.jpg" alt="Calm standup" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What about the live standup?
&lt;/h2&gt;

&lt;p&gt;Some teams still gather. Timezone-overlap teams, junior-heavy teams, teams who actually like seeing each other. We used to be one of them, and one of our smaller teams still is.&lt;/p&gt;

&lt;p&gt;Live standup with pre-submitted updates is a different meeting than live standup without. Everyone walks in with their answers already in the tool. Nobody reads aloud. The fifteen minutes goes to discussion of blockers and dependencies, not roll call.&lt;/p&gt;

&lt;p&gt;Stop reading what's on the screen. The screen already says what you did. The room is for what you need.&lt;/p&gt;

&lt;p&gt;That reframe, more than the AI bit, is what changed the meeting for the smaller team. AI just made it cheap to always have the screen pre-filled.&lt;/p&gt;

&lt;h2&gt;
  
  
  The book-of-record argument
&lt;/h2&gt;

&lt;p&gt;This is the part nobody talks about until they've been on Kollabe for a quarter and then tried to leave.&lt;/p&gt;

&lt;p&gt;Slack standups vanish. They scroll, they're searchable in theory, they're un-readable in practice after a week. The blocker mentioned three days ago lives in someone's memory or it doesn't exist at all.&lt;/p&gt;

&lt;p&gt;A persistent standup tool that AI-summarises is a book of record for the team's day-to-day. When someone asks "when did we first notice the OAuth issue", the answer is two clicks, not a Slack archaeology session. Sprint review prep becomes one prompt. Onboarding a new manager becomes "go read the last four weeks". Performance review season becomes a thing the data already exists for.&lt;/p&gt;

&lt;p&gt;You don't notice you need this until you've had it for a while and then can't go back.&lt;/p&gt;

&lt;h2&gt;
  
  
  What didn't work
&lt;/h2&gt;

&lt;p&gt;I'm not going to pretend any of this was clean.&lt;/p&gt;

&lt;p&gt;For the first two days, Claude over-quoted commit messages verbatim. The standup looked like a git log. Fix: I added "summarise, don't transcribe" to the prompt and the issue disappeared.&lt;/p&gt;

&lt;p&gt;One engineer turned off the auto-pull workflow because she likes writing her standup with a coffee. That's fine. Both workflows coexist. Kollabe doesn't care whether the submission came from a chat panel or a keyboard.&lt;/p&gt;

&lt;p&gt;The worst one: an engineer connected his calendar MCP to the workflow and Claude pulled the title of a 1:1 ("promotion conversation: ") into the "Today" line. He noticed before submitting. We now strip 1:1 titles from the calendar prompt by default. If you connect calendar to anything that posts, do this first, not after.&lt;/p&gt;

&lt;p&gt;These are the kinds of things you'll hit. None of them are dealbreakers. All of them are worth knowing about before you ship the workflow to your team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogsxkckauvgikzw6utx8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogsxkckauvgikzw6utx8.jpg" alt="deal-breakers" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup, in four minutes
&lt;/h2&gt;

&lt;p&gt;Connect Claude Desktop (or your client) to Kollabe MCP. There's a &lt;a href="https://kollabe.com/posts/connect-kollabe-to-claude-desktop-mcp-setup-guide" rel="noopener noreferrer"&gt;60-second setup guide&lt;/a&gt;. The config snippet is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"kollabe"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://kollabe.com/api/mcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"transport"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the client, approve the OAuth prompt, pick your org. Drop the prompt above into a Claude Project or save it as a slash command.&lt;/p&gt;

&lt;p&gt;First standup with the new workflow: review carefully, edit liberally, submit. Second standup: review less. Third: tab-tab-submit and you're done before your coffee's cold.&lt;/p&gt;

&lt;h2&gt;
  
  
  The point isn't the time saved
&lt;/h2&gt;

&lt;p&gt;It's that the standup is now a real reflection of yesterday, instead of whatever I remembered at 9am.&lt;/p&gt;

&lt;p&gt;Our team isn't in the meeting business. We're in the shipping business. Less plumbing, more shipping.&lt;/p&gt;

&lt;p&gt;If you want to try it, the &lt;a href="https://kollabe.com/posts/connect-kollabe-to-claude-desktop-mcp-setup-guide" rel="noopener noreferrer"&gt;Kollabe MCP server&lt;/a&gt; is free on the trial. One OAuth click and you're in.&lt;/p&gt;

&lt;p&gt;Looking to learn more?&lt;br&gt;
Check out a few more of my articles like:&lt;br&gt;
&lt;a href="https://kollabe.com/posts/what-is-mcp-model-context-protocol" rel="noopener noreferrer"&gt;What is an MCP?&lt;/a&gt;&lt;br&gt;
&lt;a href="https://kollabe.com/posts/connect-kollabe-to-claude-desktop-mcp-setup-guide" rel="noopener noreferrer"&gt;Connect Kollabe to Claude in less than 60 seconds&lt;/a&gt;&lt;br&gt;
&lt;a href="https://kollabe.com/posts/auto-draft-standup-from-github-jira" rel="noopener noreferrer"&gt;Auto-draft your daily standup from GitHub PRs and Jira activity&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>agile</category>
    </item>
    <item>
      <title>Stop Asking AI to Write Your User Stories (Do This Instead)</title>
      <dc:creator>Kelly Lewandowski</dc:creator>
      <pubDate>Thu, 09 Apr 2026 21:10:29 +0000</pubDate>
      <link>https://forem.com/kelly_lewandowski_845215e/stop-asking-ai-to-write-your-user-stories-do-this-instead-573e</link>
      <guid>https://forem.com/kelly_lewandowski_845215e/stop-asking-ai-to-write-your-user-stories-do-this-instead-573e</guid>
      <description>&lt;p&gt;Most teams using AI in sprint refinement start in the wrong place. They ask it to draft user stories from scratch, then spend the rest of refinement fixing what it got wrong.&lt;/p&gt;

&lt;p&gt;There's a better approach, and it doesn't involve handing your backlog over to ChatGPT.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with AI-drafted stories
&lt;/h2&gt;

&lt;p&gt;AI-generated user stories have a specific failure mode: they sound right. Grammatically clean, properly formatted, structurally valid. "As a user, I want to filter results so I can find what I need." That's technically a user story. It could also describe literally any product ever built.&lt;/p&gt;

&lt;p&gt;The stories pass a quick glance in refinement because nobody pushes back on something that reads well. Then two days into the sprint, the developer implementing it has five clarifying questions and zero answers.&lt;/p&gt;

&lt;p&gt;I've watched this happen. The team saves 10 minutes in refinement and loses two hours in back-and-forth later that week.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where AI actually helps
&lt;/h2&gt;

&lt;p&gt;The real time savings come from using AI &lt;em&gt;after&lt;/em&gt; a human writes the first draft. Specifically:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expanding acceptance criteria.&lt;/strong&gt; You write the happy path, then feed it to an LLM and ask: "What edge cases am I missing? What assumptions am I making?" It'll catch empty states, permission boundaries, concurrency problems, and error paths you didn't think about. A Capgemini survey from 2024 found that AI-expanded acceptance criteria reduced rework tickets by about 15%. The time saved in refinement is nice, but fewer mid-sprint surprises is the real win.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4u2dtxy5px9h3iin6ug.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4u2dtxy5px9h3iin6ug.webp" alt="AI refinement" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Catching dependencies.&lt;/strong&gt; If you give the model your data model or API surface alongside the story, it's surprisingly good at flagging cross-team dependencies and migration risks that slip past human review. The trick is context. A prompt with just the story text gives you generic output. A prompt with the story plus your schema gives you specific flags you can act on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Splitting big stories.&lt;/strong&gt; When a story is clearly too big for a sprint, prompting an LLM to "split by user workflow step" or "split by data variation" works better than asking for a generic breakdown. The pattern you give it matters more than the model you use.&lt;/p&gt;

&lt;h2&gt;
  
  
  A workflow that works
&lt;/h2&gt;

&lt;p&gt;Here's the pattern we've seen work well for teams:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Product owner writes rough draft stories with basic acceptance criteria (doesn't need to be polished)&lt;/li&gt;
&lt;li&gt;Feed each story to an LLM with relevant context (data model, related stories, constraints) and ask it to list edge cases, implicit assumptions, and missing scenarios&lt;/li&gt;
&lt;li&gt;Review the AI output as a team during refinement. Discard the noise, keep the genuine catches&lt;/li&gt;
&lt;li&gt;Estimate with the fuller picture. Stories that went through this process tend to surface complexity earlier, so you get fewer "wait, what about..." interruptions during planning poker&lt;/li&gt;
&lt;/ol&gt;
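&lt;p&gt;If step 2 lives in a script rather than copy-paste, the part that matters is packing context into the prompt. A minimal sketch in Python; the helper name and the example schema are hypothetical, and the actual LLM call is whatever client your team already uses:&lt;/p&gt;

```python
# Sketch of step 2: assemble a refinement prompt from a draft story plus
# context. build_refinement_prompt is a hypothetical helper; send the
# result to your LLM client of choice.

def build_refinement_prompt(story: str, context: dict) -> str:
    """Combine the draft story with schema/constraint context so the
    model can flag specific gaps instead of generic ones."""
    sections = [
        "You are reviewing a draft user story before refinement.",
        f"Story:\n{story}",
    ]
    for label, body in context.items():
        sections.append(f"{label}:\n{body}")
    sections.append(
        "List edge cases, implicit assumptions, and missing scenarios. "
        "Be specific to the schema and constraints above."
    )
    return "\n\n".join(sections)

prompt = build_refinement_prompt(
    "As a user, I can archive a project so it no longer appears in my list.",
    {
        "Data model": "projects(id, owner_id, status), tasks(project_id, title)",
        "Constraints": "Archived projects must stay readable via direct link.",
    },
)
print(prompt)
```

&lt;p&gt;The point is the shape, not the wording: story, then your real schema and constraints, then the ask. Without the middle section you get the generic output mentioned above.&lt;/p&gt;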

&lt;p&gt;Some teams report refinement sessions running 20-30% shorter. But the bigger payoff shows up later in the sprint when clarification requests drop.&lt;/p&gt;

&lt;h2&gt;
  
  
  The stuff to watch out for
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;False completeness.&lt;/strong&gt; The AI generates 12 acceptance criteria and the team assumes it's exhaustive. It's not. The model can't know what it doesn't know about your system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skill erosion.&lt;/strong&gt; If your junior devs stop learning to break down work because AI does it for them, you've traded short-term speed for a long-term problem. Have less experienced people write the first draft, then use AI to expand on it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generic prompts, generic output.&lt;/strong&gt; "Write a user story for search" gives you nothing useful. "Write a user story for full-text search across project names and descriptions, for a user managing 50+ projects" gives you something you can actually work with.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to skip it entirely
&lt;/h2&gt;

&lt;p&gt;Bug fixes with clear repro steps, copy changes, and straightforward CRUD work are all faster to refine the old way. Save the AI pass for stories where the problem space is fuzzy or your team keeps discovering unknowns mid-sprint.&lt;/p&gt;

&lt;p&gt;I wrote a longer piece on this with more prompting examples and a section on specific pitfalls. If you want the full version, it's here: &lt;a href="https://kollabe.com/posts/ai-assisted-backlog-refinement" rel="noopener noreferrer"&gt;AI-Assisted Backlog Refinement: Using LLMs to Write Better User Stories&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agile</category>
      <category>ai</category>
      <category>productivity</category>
      <category>management</category>
    </item>
    <item>
      <title>Your Agile Ceremonies Weren't Designed for 10 Time Zones</title>
      <dc:creator>Kelly Lewandowski</dc:creator>
      <pubDate>Tue, 10 Mar 2026 20:01:39 +0000</pubDate>
      <link>https://forem.com/kelly_lewandowski_845215e/your-agile-ceremonies-werent-designed-for-10-time-zones-14n5</link>
      <guid>https://forem.com/kelly_lewandowski_845215e/your-agile-ceremonies-werent-designed-for-10-time-zones-14n5</guid>
      <description>&lt;p&gt;The Scrum Guide doesn't mention time zones. It was written for teams that could stand in a circle every morning, hash out sprint scope over a whiteboard, and grab coffee together between meetings.&lt;/p&gt;

&lt;p&gt;That's not most teams anymore. If your engineers sit in New York, Berlin, and Bangalore, you're dealing with a 10.5-hour spread. Sprint planning at 9am Eastern is 7:30pm in India. Your "quick retro" at 4pm Berlin time hits Bangalore at 8:30pm.&lt;/p&gt;

&lt;p&gt;The usual fix is rotating who gets the bad meeting time. That's fair, but it still treats every ceremony as a synchronous event. And that's the actual problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Not every ceremony needs a meeting
&lt;/h2&gt;

&lt;p&gt;This is the question most teams skip: which ceremonies actually require everyone talking at the same time?&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Ceremony&lt;/th&gt;
&lt;th&gt;Needs sync?&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Daily standup&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Written updates are faster to consume and don't require timezone coordination&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sprint planning&lt;/td&gt;
&lt;td&gt;Partially&lt;/td&gt;
&lt;td&gt;Scope negotiation needs real-time discussion, but context-sharing doesn't&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sprint review&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Live stakeholder feedback is the whole point&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Retrospective&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Honest team conversations about dynamics need tone of voice and real-time energy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Backlog refinement&lt;/td&gt;
&lt;td&gt;Hybrid&lt;/td&gt;
&lt;td&gt;Async pre-read, sync for questions and estimation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The daily standup is the easiest ceremony to move async, and it frees up your overlap window for the ceremonies that actually benefit from live discussion.&lt;/p&gt;

&lt;h2&gt;
  
  
  The overlap window
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3774k7sfyd2x1zunlux9.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3774k7sfyd2x1zunlux9.webp" alt="Overlap window" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Map out when your team members are all within working hours. For most globally distributed teams, this is 2-4 hours. Some get less.&lt;/p&gt;

&lt;p&gt;For that NYC/Berlin/Bangalore spread, the workable overlap is roughly 14:00-16:00 UTC, and only if Bangalore flexes into the evening. Two hours. That's it.&lt;/p&gt;

&lt;p&gt;Those two hours are sacred. One meeting per day, max. Everything else happens async. If you're burning your overlap window on status updates, you won't have time left for sprint planning or retros, which are the ceremonies that actually suffer without real-time discussion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A small trick that helps:&lt;/strong&gt; ask people to flex 30-60 minutes in either direction. A Berlin dev starting at 10am and a Bangalore dev staying until 7:30pm buys you an extra hour. Rotate who flexes so nobody's always the one adjusting.&lt;/p&gt;
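&lt;p&gt;If you'd rather compute the window than eyeball a spreadsheet, here's a rough Python sketch using the standard library's &lt;code&gt;zoneinfo&lt;/code&gt;. The cities, working hours, and flex amounts below are illustrative, not a prescription:&lt;/p&gt;

```python
# Rough sketch: find the team's shared window (in UTC) for one day.
# Cities, working hours, and flex amounts below are illustrative.
from datetime import date, datetime, timezone
from zoneinfo import ZoneInfo

def utc_window(day, tz, start_hour, end_hour):
    """One member's working hours on `day`, converted to UTC."""
    zone = ZoneInfo(tz)
    start = datetime(day.year, day.month, day.day, start_hour, tzinfo=zone)
    end = datetime(day.year, day.month, day.day, end_hour, tzinfo=zone)
    return start.astimezone(timezone.utc), end.astimezone(timezone.utc)

def overlap(day, members):
    """Latest start and earliest end across all members, or None."""
    windows = [utc_window(day, tz, s, e) for tz, s, e in members]
    start = max(w[0] for w in windows)
    end = min(w[1] for w in windows)
    return (start, end) if end > start else None

team = [
    ("America/New_York", 9, 17),
    ("Europe/Berlin", 10, 18),  # flexed start
    ("Asia/Kolkata", 9, 20),    # flexed evening
]
print(overlap(date(2026, 3, 2), team))
```

&lt;p&gt;Run it for a few candidate dates. DST shifts move the window by an hour twice a year, which is easy to miss when you eyeball it.&lt;/p&gt;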

&lt;h2&gt;
  
  
  Sprint planning without the 2-hour meeting
&lt;/h2&gt;

&lt;p&gt;The async-prep/sync-decision pattern cuts planning meetings in half:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;48 hours before:&lt;/strong&gt; PO shares candidate backlog items with acceptance criteria and context. Team reads and posts questions async.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;24 hours before:&lt;/strong&gt; Team runs async estimation (planning poker works well here — everyone votes independently and you can spot disagreements before the call).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;During overlap:&lt;/strong&gt; Live session focuses only on resolving disagreements and committing to the sprint goal. 45-60 minutes instead of 2 hours.&lt;/li&gt;
&lt;/ol&gt;
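&lt;p&gt;The disagreement check in step 2 is easy to automate if your estimation tool exports votes. A hedged sketch; the story ids, vote data, and spread rule are made up for illustration:&lt;/p&gt;

```python
# Sketch of the step-2 disagreement check: flag stories where independent
# estimates spread too far apart, so the live session covers only those.

def stories_to_discuss(votes_by_story, max_spread=2):
    """Return story ids whose estimates differ by more than `max_spread`
    steps on the team's scale (here, distance between Fibonacci slots)."""
    scale = [1, 2, 3, 5, 8, 13, 21]
    flagged = []
    for story, votes in votes_by_story.items():
        indices = [scale.index(v) for v in votes]
        if max(indices) - min(indices) > max_spread:
            flagged.append(story)
    return flagged

votes = {
    "PROJ-101": [3, 5, 5, 8],    # close enough, skip in the live call
    "PROJ-102": [2, 8, 13, 3],   # wide spread, discuss synchronously
}
print(stories_to_discuss(votes))  # -> ['PROJ-102']
```

&lt;p&gt;Everything that doesn't get flagged just takes the median and moves on, which is what keeps the live session at 45-60 minutes.&lt;/p&gt;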

&lt;p&gt;The information transfer happens async. The negotiation happens sync. You stop wasting synchronous time on things people could have read on their own.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why retros need to stay synchronous
&lt;/h2&gt;

&lt;p&gt;I'd argue retros are the ceremony you should fight hardest to keep live. The candid, sometimes uncomfortable conversations about how the team works don't land the same way in a shared doc. Text strips out tone. Written responses feel less safe than spoken ones.&lt;/p&gt;

&lt;p&gt;That said, you can make the sync portion shorter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have people add retro items to the board &lt;em&gt;before&lt;/em&gt; the meeting. This gives everyone, especially those in less convenient time zones, equal chance to contribute.&lt;/li&gt;
&lt;li&gt;Keep it to 60 minutes. Distributed retros lose energy faster than in-person ones.&lt;/li&gt;
&lt;li&gt;Use anonymous voting. Power dynamics get amplified on screen.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Distributed teams need retros more than co-located ones. Miscommunication and unclear handoffs pile up silently when there are no hallway conversations to catch them. The retro is where that stuff surfaces. Skip it and the problems just compound.&lt;/p&gt;

&lt;h2&gt;
  
  
  The sample week
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3lqltai50brlwgtu7oq.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3lqltai50brlwgtu7oq.webp" alt="sample week" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's what it looks like in practice with a 2-hour overlap (14:00-16:00 UTC):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Day&lt;/th&gt;
&lt;th&gt;Overlap window&lt;/th&gt;
&lt;th&gt;Async&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Monday&lt;/td&gt;
&lt;td&gt;Sprint planning (60 min)&lt;/td&gt;
&lt;td&gt;Standup updates, planning pre-read&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tuesday&lt;/td&gt;
&lt;td&gt;Open for ad-hoc sync&lt;/td&gt;
&lt;td&gt;Standup updates, refinement pre-read&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Wednesday&lt;/td&gt;
&lt;td&gt;Refinement (45 min)&lt;/td&gt;
&lt;td&gt;Standup updates, estimation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Thursday&lt;/td&gt;
&lt;td&gt;Open for ad-hoc sync&lt;/td&gt;
&lt;td&gt;Standup updates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Friday&lt;/td&gt;
&lt;td&gt;Retro or review (60 min)&lt;/td&gt;
&lt;td&gt;Standup updates, retro board input&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;One meeting per day in the overlap window. The rest of the time, people build things.&lt;/p&gt;

&lt;h2&gt;
  
  
  What usually goes wrong
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Defaulting to HQ time.&lt;/strong&gt; If leadership is in New York and every meeting happens during East Coast hours, your other offices are permanently accommodating. People notice and stop engaging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skipping documentation.&lt;/strong&gt; Co-located teams have hallway conversations. Distributed teams don't. If you didn't write it down, it didn't happen for anyone who wasn't on the call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adding more meetings to compensate.&lt;/strong&gt; The instinct when async communication feels lacking is to schedule more syncs. This makes it worse. Fix the async communication instead.&lt;/p&gt;




&lt;p&gt;I wrote a longer version of this with concrete team agreements, sprint review strategies for multiple timezone clusters, and a deeper breakdown of the async-first approach. You can &lt;a href="https://www.kollabe.com/posts/agile-ceremonies-across-time-zones" rel="noopener noreferrer"&gt;read the full post on the Kollabe blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If your team is already doing async standups or hybrid planning, I'd love to hear what's working. Drop a comment.&lt;/p&gt;

</description>
      <category>agile</category>
      <category>webdev</category>
      <category>management</category>
      <category>developers</category>
    </item>
    <item>
      <title>Your team ships faster with AI. Here's why you need retros more than ever 🤖</title>
      <dc:creator>Kelly Lewandowski</dc:creator>
      <pubDate>Sun, 01 Mar 2026 19:27:13 +0000</pubDate>
      <link>https://forem.com/kelly_lewandowski_845215e/your-team-ships-faster-with-ai-heres-why-you-need-retros-more-than-ever-e31</link>
      <guid>https://forem.com/kelly_lewandowski_845215e/your-team-ships-faster-with-ai-heres-why-you-need-retros-more-than-ever-e31</guid>
      <description>&lt;p&gt;70% of developers say AI coding tools make them more productive. Only 17% say those tools improve team collaboration.&lt;/p&gt;

&lt;p&gt;That stat from Stack Overflow's 2025 survey stuck with me. We're all shipping faster individually, but nobody's talking about the team-level side effects.&lt;/p&gt;

&lt;h2&gt;
  
  
  The numbers that should make you uncomfortable
&lt;/h2&gt;

&lt;p&gt;A few data points that changed how I think about this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;METR study&lt;/strong&gt;: Experienced devs using AI tools took 19% &lt;em&gt;longer&lt;/em&gt; to complete tasks, while believing they were 20% faster. A nearly 40-point perception gap.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DORA 2024&lt;/strong&gt;: A 25% increase in AI usage correlates with a 7.2% &lt;em&gt;decrease&lt;/em&gt; in delivery stability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitClear&lt;/strong&gt;: Code churn jumped from 3.1% to 7.9% between 2020 and 2024. Refactoring dropped from 25% to under 10%.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More code is shipping. Whether it's the right code is a different question.&lt;/p&gt;

&lt;h2&gt;
  
  
  The collaboration problem nobody's fixing
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa08qj5vnte95ys556jt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa08qj5vnte95ys556jt.jpg" alt="solo and split teams" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://arxiv.org/html/2509.10956" rel="noopener noreferrer"&gt;two-year longitudinal study&lt;/a&gt; found that AI adoption shifts work toward individualized coding tasks and away from collaborative coordination. The collaboration problems that existed before AI (silos, communication gaps, unclear ownership) stayed completely unresolved.&lt;/p&gt;

&lt;p&gt;AI is making individuals faster. It is not making teams better.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where retros come in
&lt;/h2&gt;

&lt;p&gt;When your team spends 90% of the sprint heads-down with an AI pair programmer, the 60 minutes in a retro might be the most important hour in the entire sprint.&lt;/p&gt;

&lt;p&gt;The questions need updating though. For AI-assisted teams, these are the ones that matter:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;AI tool effectiveness&lt;/strong&gt; - Where did AI help? Where did it waste your time?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Knowledge distribution&lt;/strong&gt; - Who actually understands the code that shipped?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customer connection&lt;/strong&gt; - Did shipping faster translate into customer value?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code quality signals&lt;/strong&gt; - Is churn going up? Are PRs getting rubber-stamped?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Team norms&lt;/strong&gt; - What are your unspoken rules about AI usage?&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The DORA quote that sums it up
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;"AI makes good teams great. And bad teams worse, faster."&lt;br&gt;
-- Google DORA 2025 Report&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The practices that separate good teams from bad ones (psychological safety, shared understanding, honest feedback) are exactly what retros are built around. As AI handles more of the mechanical work, the human conversations get rarer. And rarer means more valuable.&lt;/p&gt;




&lt;p&gt;I wrote a longer piece diving into the research and practical retro questions for AI-assisted teams: &lt;a href="https://kollabe.com/posts/why-retrospectives-matter-more-with-ai-coding-tools" rel="noopener noreferrer"&gt;Read the full post on Kollabe&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>management</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>User Personas That Developers Actually Care About ⛄️</title>
      <dc:creator>Kelly Lewandowski</dc:creator>
      <pubDate>Wed, 25 Feb 2026 18:27:48 +0000</pubDate>
      <link>https://forem.com/kelly_lewandowski_845215e/user-personas-that-developers-actually-care-about-a2n</link>
      <guid>https://forem.com/kelly_lewandowski_845215e/user-personas-that-developers-actually-care-about-a2n</guid>
      <description>&lt;p&gt;Most user personas end up in a slide deck that nobody opens after sprint two. They read like a casting call for a fictional character, packed with hobbies and favorite coffee orders but missing anything that would change how your team builds software.&lt;/p&gt;

&lt;p&gt;I've been thinking about why that is, and I think the problem is that most personas are written for product managers and designers, not for the full team. Developers never see them, so they never use them. And a persona nobody uses is just creative writing.&lt;/p&gt;

&lt;p&gt;Here's what actually works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Keep it to one page
&lt;/h2&gt;

&lt;p&gt;A persona that fits on a single page gets used. One that requires scrolling gets ignored. Here's what belongs:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Section&lt;/th&gt;
&lt;th&gt;What to include&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Demographics&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Name, job title, company size, location&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Goals&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;3-5 professional goals related to your product&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Pain points&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Current frustrations and blockers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Behavioral patterns&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;How they work, buy, and make decisions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Technical context&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Devices, tools, comfort level with tech&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;What to leave out: favorite color, what they eat for breakfast, detailed personal backstories. If a data point wouldn't change a product decision, it doesn't belong.&lt;/p&gt;

&lt;h2&gt;
  
  
  A quick example
&lt;/h2&gt;

&lt;p&gt;Here's a persona for a B2B project management tool:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Sarah Chen&lt;/strong&gt;, Senior Engineering Manager, Series B SaaS company (120 people)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Goals:&lt;/strong&gt; Ship features predictably, reduce meeting overhead, give leadership visibility into sprint progress without daily check-ins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pain points:&lt;/strong&gt; Current tools don't surface blockers early enough. Sprint retros feel repetitive. Estimations are inconsistent across sub-teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Behavioral patterns:&lt;/strong&gt; Prefers async communication. Reviews dashboards every morning before standup. Evaluates new tools by trying the free tier herself before involving procurement.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"I don't need another dashboard. I need my team to spend less time talking about work and more time doing it."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Every detail here connects to a product decision. Sarah's preference for async work means your product needs strong notifications and reporting. Her evaluation behavior tells you the free tier has to be self-serve and compelling.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where personas actually get used
&lt;/h2&gt;

&lt;p&gt;A persona that doesn't show up in daily work is just a character sketch. Here's where they actually get used:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;During backlog refinement&lt;/strong&gt;, ask "Which persona does this serve?" If the answer is "all of them," the story is probably too broad. If the answer is "none of them," it might not belong in the backlog.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When writing user stories&lt;/strong&gt;, replace "As a user" with the persona's name. "As Sarah, a remote engineering manager, I want to see sprint progress without attending standup" is immediately clearer and easier to estimate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In sprint planning&lt;/strong&gt;, if your primary persona's top pain point isn't addressed in the upcoming sprint, that's worth flagging.&lt;/p&gt;

&lt;p&gt;And in &lt;strong&gt;retros&lt;/strong&gt;, try asking "Are we building for our personas, or have we drifted?" Two-minute gut check, surprisingly useful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7tjxsc1vpg61cwt9e04.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7tjxsc1vpg61cwt9e04.jpg" alt="raising hand in meeting" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How many do you need?
&lt;/h2&gt;

&lt;p&gt;Three to four. More than that and your team can't keep them straight. Fewer than two and you're probably treating a diverse user base as a monolith.&lt;/p&gt;

&lt;p&gt;Cover these types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Primary user&lt;/strong&gt; - the person who uses your product daily&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Buyer&lt;/strong&gt; - the person who makes the purchase decision (often different in B2B)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Influencer&lt;/strong&gt; - someone who recommends or blocks adoption&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You might also create a &lt;strong&gt;detractor&lt;/strong&gt; persona to understand who your product is &lt;em&gt;not&lt;/em&gt; for. Knowing who to say no to is just as useful as knowing who to say yes to.&lt;/p&gt;

&lt;h2&gt;
  
  
  Don't wait for perfect research
&lt;/h2&gt;

&lt;p&gt;Start with provisional personas based on what you know today. Pull from support tickets, sales call notes, analytics. Five to eight user interviews per segment is enough to spot patterns.&lt;/p&gt;

&lt;p&gt;A rough persona that the team actually references beats a polished one that nobody reads. You can always refine later.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 5-minute version
&lt;/h2&gt;

&lt;p&gt;If you want to get a structured persona fast, I built a free &lt;a href="https://kollabe.com/tools/user-persona-generator" rel="noopener noreferrer"&gt;AI User Persona Generator&lt;/a&gt; that creates one from a product description in seconds. It's not a replacement for real research, but it gives you something concrete to react to instead of starting from a blank page.&lt;/p&gt;




&lt;p&gt;I wrote a more detailed version of this guide with step-by-step instructions and more examples on our blog: &lt;strong&gt;&lt;a href="https://kollabe.com/posts/how-to-create-user-personas" rel="noopener noreferrer"&gt;How to Create User Personas That Actually Improve Your Product&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If your team uses planning poker or retrospectives, you might also find these useful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kollabe.com/posts/how-to-write-user-stories" rel="noopener noreferrer"&gt;How to Write User Stories Your Team Can Actually Estimate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kollabe.com/tools/user-persona-generator" rel="noopener noreferrer"&gt;Free User Persona Generator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kollabe.com/tools/user-story-generator" rel="noopener noreferrer"&gt;Free User Story Generator&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>productivity</category>
      <category>ux</category>
      <category>agile</category>
      <category>design</category>
    </item>
  </channel>
</rss>
