<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Tal Vardi</title>
    <description>The latest articles on Forem by Tal Vardi (@tal_vardi_d7f3ffe2d1f9cdf).</description>
    <link>https://forem.com/tal_vardi_d7f3ffe2d1f9cdf</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3914896%2Fb2a4ead1-5e63-481e-9e32-f64d1ef69d79.png</url>
      <title>Forem: Tal Vardi</title>
      <link>https://forem.com/tal_vardi_d7f3ffe2d1f9cdf</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tal_vardi_d7f3ffe2d1f9cdf"/>
    <language>en</language>
    <item>
      <title>How I Use AI to Cut My Code Review Prep Time in Half (Step-by-Step)</title>
      <dc:creator>Tal Vardi</dc:creator>
      <pubDate>Mon, 11 May 2026 13:31:18 +0000</pubDate>
      <link>https://forem.com/tal_vardi_d7f3ffe2d1f9cdf/how-i-use-ai-to-cut-my-code-review-prep-time-in-half-step-by-step-19pn</link>
      <guid>https://forem.com/tal_vardi_d7f3ffe2d1f9cdf/how-i-use-ai-to-cut-my-code-review-prep-time-in-half-step-by-step-19pn</guid>
      <description>&lt;p&gt;Code review is where a lot of engineering time silently disappears. You open a PR, context-switch from whatever you were doing, try to hold the whole diff in your head, and write comments that are either too vague to be useful or so detailed they take 20 minutes to type. AI can compress a big chunk of that — but only if you're deliberate about &lt;em&gt;how&lt;/em&gt; you use it.&lt;/p&gt;

&lt;p&gt;Here's the exact workflow I follow before and during code reviews, with the prompts I actually copy-paste.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 1: Dump the diff into context
&lt;/h2&gt;

&lt;p&gt;Before touching anything else, I pull the raw diff and paste it into my AI tool of choice (I use a mix of Claude and GPT-4o depending on context length).&lt;/p&gt;

&lt;p&gt;Don't summarize it yourself first. Give it the raw diff and let the model build its own mental model. Summarizing before you prompt it biases the output toward your existing assumptions — which defeats half the point.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git diff main..feature-branch &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /tmp/review.diff
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then paste the contents directly into the chat window. If it's a big PR, I split it by directory or logical chunk.&lt;/p&gt;
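&lt;p&gt;As a sketch of what the splitting looks like (the directory names and the throwaway-repo setup below are illustrative, not from my actual project):&lt;/p&gt;

```shell
# Demo setup: a throwaway repo so the commands run end-to-end.
# In your real repo, only the last two commands matter.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init
git branch -M main
git checkout -q -b feature-branch
mkdir -p src/api src/workers
echo "handler" > src/api/routes.py
echo "job" > src/workers/tasks.py
git add src
git -c user.email=demo@example.com -c user.name=demo commit -q -m "api and worker changes"

# One diff file per logical chunk; paste each into its own chat turn
git diff main...feature-branch -- src/api > /tmp/review_api.diff
git diff main...feature-branch -- src/workers > /tmp/review_workers.diff
```

&lt;p&gt;The pathspec after &lt;code&gt;--&lt;/code&gt; is what does the chunking; each output file stays small enough to fit in one prompt.&lt;/p&gt;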




&lt;h2&gt;
  
  
  Step 2: Ask for a structured first-pass summary
&lt;/h2&gt;

&lt;p&gt;The first prompt isn't about finding bugs. It's about orienting yourself fast so you can review &lt;em&gt;intentionally&lt;/em&gt; instead of just reading line by line hoping something jumps out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copy-paste this:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are a senior software engineer doing a code review. Here is a diff:

[PASTE DIFF HERE]

Give me:
1. A 3-sentence summary of what this change is doing
2. The top 3 areas I should focus my review attention on (e.g. error handling, concurrency, data validation)
3. Any immediate red flags (missing tests, hardcoded values, obvious logic gaps)

Be direct. Skip praise.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "be direct, skip praise" line matters more than it sounds. Without it, you'll get a paragraph of "This looks like a well-structured change!" before anything useful.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: Probe for edge cases in specific functions
&lt;/h2&gt;

&lt;p&gt;Once you've done a first human pass, go back to the functions that felt risky or complex. This is where AI earns its keep — it's fast at enumerating inputs you didn't think about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copy-paste this:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Here is a function from the PR:

[PASTE FUNCTION]

List every edge case and failure mode you can identify. For each one:
- Describe what input or condition triggers it
- Describe what goes wrong
- Suggest a one-line fix or mitigation

Assume the codebase is a production backend service with real users.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "production backend with real users" framing shifts the output from academic to practical. You'll get things like "if this returns null and the caller doesn't check, you'll get a 500 on checkout" rather than "consider handling undefined."&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 4: Draft your review comments
&lt;/h2&gt;

&lt;p&gt;This is the step most people skip, but it's the highest-leverage one. Instead of writing comments from scratch, paste the issues you found and ask for concise, non-condescending phrasing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I want to leave a code review comment about this issue: [describe issue].
Write 2 versions — one for a junior dev, one for a senior dev.
Keep both under 3 sentences. Be direct but collaborative in tone.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This alone has meaningfully improved my review relationships. It's easy to sound harsh when you're in a hurry. Having two options lets you pick the one that fits the relationship.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 5: Final sanity check before approving
&lt;/h2&gt;

&lt;p&gt;Before I hit "Approve," I do one last prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Given everything we've discussed about this diff, what is the single most important thing I should verify manually before approving this PR?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This forces prioritization. You can't manually verify everything, but you can verify the one thing that matters most.&lt;/p&gt;




&lt;h2&gt;
  
  
  A few things this workflow is &lt;em&gt;not&lt;/em&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;It's not a replacement for reading the code yourself. The AI misses things, especially around business logic it has no context for.&lt;/li&gt;
&lt;li&gt;It's not for rubber-stamping. If the summary doesn't match what you expected the PR to do, that's a signal to dig in harder, not approve faster.&lt;/li&gt;
&lt;li&gt;It's not magic. The quality of output is directly proportional to how specific your prompts are.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The biggest productivity unlock isn't any single prompt — it's having a &lt;em&gt;repeatable&lt;/em&gt; process so you're not reinventing the approach on every review.&lt;/p&gt;
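&lt;p&gt;One way to make step 1 repeatable is a tiny shell helper. This is a minimal sketch; the helper name and output path are my own convention, and the demo repo exists only so the snippet is self-contained:&lt;/p&gt;

```shell
# Demo setup: throwaway repo purely for illustration.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init
git branch -M main
git checkout -q -b feature-branch
echo "change" > file.txt
git add file.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "feature work"

review_prep() {
  # Write the branch diff against a base to a predictable path, print the path
  base="${1:-main}"
  branch="${2:-feature-branch}"
  git diff "${base}...${branch}" > "/tmp/review_${branch}.diff"
  echo "/tmp/review_${branch}.diff"
}

review_prep main feature-branch
```

&lt;p&gt;Drop something like this in your shell profile and step 1 costs you one command per review instead of a decision.&lt;/p&gt;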




&lt;p&gt;If you want more prompts like these across the full engineering workflow — planning, debugging, writing tickets, and more — I put together a prompt playbook I've been refining for a while: &lt;a href="https://gumroad.com/l/nhltvo" rel="noopener noreferrer"&gt;check it out here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>career</category>
      <category>engineering</category>
    </item>
    <item>
      <title>How I Use AI to Cut My Code Review Prep Time in Half (Step-by-Step)</title>
      <dc:creator>Tal Vardi</dc:creator>
      <pubDate>Mon, 11 May 2026 09:00:31 +0000</pubDate>
      <link>https://forem.com/tal_vardi_d7f3ffe2d1f9cdf/how-i-use-ai-to-cut-my-code-review-prep-time-in-half-step-by-step-4ghb</link>
      <guid>https://forem.com/tal_vardi_d7f3ffe2d1f9cdf/how-i-use-ai-to-cut-my-code-review-prep-time-in-half-step-by-step-4ghb</guid>
      <description>&lt;p&gt;Code review is one of those tasks that &lt;em&gt;looks&lt;/em&gt; passive but actually demands a lot of mental context-switching. You're jumping between files, reconstructing intent, and trying to spot problems the author couldn't see. I started using AI as a first-pass layer before I open a PR — and it's changed how much headspace I have left for the review that actually matters.&lt;/p&gt;

&lt;p&gt;Here's the exact workflow I use. Everything is copy-paste ready.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 1: Dump the diff into context
&lt;/h2&gt;

&lt;p&gt;Before doing anything else, I grab the raw diff from the branch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git diff main...feature/my-branch &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /tmp/review_diff.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I open my AI tool of choice (I use Claude or GPT-4 depending on context length) and paste the diff in. Don't ask it anything yet — just load the context first. If the diff is huge, trim it to the files that matter most.&lt;/p&gt;
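&lt;p&gt;A minimal sketch of the trimming (file names are invented, and the throwaway repo is only there so the commands run end-to-end): use &lt;code&gt;--stat&lt;/code&gt; to see where the churn is, then a pathspec to keep only the directories that matter.&lt;/p&gt;

```shell
# Demo setup: in practice only the last two commands matter.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init
git branch -M main
git checkout -q -b feature/my-branch
mkdir -p src vendor
echo "real change" > src/core.py
echo "generated bundle" > vendor/bundle.js
git add src vendor
git -c user.email=demo@example.com -c user.name=demo commit -q -m "feature work"

# See where the churn is, then keep only the directories that matter
git diff main...feature/my-branch --stat
git diff main...feature/my-branch -- src > /tmp/review_diff.txt
```

&lt;p&gt;Generated or vendored files add tokens without adding signal, so excluding them is usually the first trim to make.&lt;/p&gt;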




&lt;h2&gt;
  
  
  Step 2: Ask for a plain-English summary of intent
&lt;/h2&gt;

&lt;p&gt;Most review friction comes from not knowing &lt;em&gt;why&lt;/em&gt; a change exists. Start here:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copy-paste prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Here is a code diff. In 3-5 bullet points, summarize:
1. What this change is doing at a high level
2. Which components or modules are affected
3. Any assumptions the author appears to be making

Diff:
[paste diff here]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This takes about 10 seconds to run and gives you the mental model you'd normally spend 5 minutes building manually. I treat this output like a co-author's explanation before I read their PR description.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: Surface edge cases and missing tests
&lt;/h2&gt;

&lt;p&gt;This is where AI genuinely earns its keep. After reading the summary, I ask:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copy-paste prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Based on this diff, identify:
1. Input edge cases that are not handled or tested
2. Any error paths that appear to be swallowed or ignored
3. Potential race conditions or state mutation issues
4. Missing test coverage (be specific about which functions or branches)

Focus on things a human reviewer might miss on a first pass.

Diff:
[paste diff here]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key phrase is "things a human reviewer might miss on a first pass." Without it, you get surface-level feedback. With it, you tend to get the stuff that slips through.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 4: Run a consistency check against your team's patterns
&lt;/h2&gt;

&lt;p&gt;If your codebase has conventions — naming, error handling style, logging patterns — you can paste a representative snippet alongside the diff and ask:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Compare the style and patterns in [EXISTING CODE SNIPPET] with [NEW DIFF].
List any inconsistencies in naming conventions, error handling, or logging approach.
Do not flag stylistic preferences — only deviations from patterns already established in the existing code.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This avoids the AI going rogue and flagging perfectly valid code that just doesn't match its training preferences.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 5: Generate your review comment drafts
&lt;/h2&gt;

&lt;p&gt;Once you've identified real issues, AI is useful for drafting the actual comments — especially for sensitive feedback:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I want to leave a code review comment about the following issue: [describe issue].
Write a comment that is direct and specific, explains the risk, and suggests a fix or asks a clarifying question.
Tone: constructive, peer-to-peer. Max 3 sentences.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I almost never paste these verbatim, but they get me 80% of the way there and stop me from writing comments that are either too vague or accidentally harsh.&lt;/p&gt;




&lt;h2&gt;
  
  
  What this workflow actually looks like in practice
&lt;/h2&gt;

&lt;p&gt;End-to-end, this takes me about 10 minutes before I open the PR in GitHub. I come in with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A mental model of the change&lt;/li&gt;
&lt;li&gt;A shortlist of specific things to probe&lt;/li&gt;
&lt;li&gt;Draft comments for the tricky feedback&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The review itself is faster and higher quality. I'm not wasting cycles on orientation — I'm spending them on judgment.&lt;/p&gt;




&lt;h2&gt;
  
  
  A few caveats
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AI will hallucinate issues that don't exist. Treat the output as a checklist to verify, not a verdict.&lt;/li&gt;
&lt;li&gt;Never paste proprietary code into public AI endpoints. Use local models or your org's approved tooling.&lt;/li&gt;
&lt;li&gt;This workflow gets better the more you tune the prompts for your stack.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;If you want more prompts like these organized by workflow — standup prep, architecture review, debugging sessions — I put together a prompt playbook that covers the ones I reach for most: &lt;a href="https://gumroad.com/l/nhltvo" rel="noopener noreferrer"&gt;check it out here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>career</category>
      <category>engineering</category>
    </item>
    <item>
      <title>3 things that worked, 1 that didn't — my AI workflow experiments this week</title>
      <dc:creator>Tal Vardi</dc:creator>
      <pubDate>Fri, 08 May 2026 13:29:35 +0000</pubDate>
      <link>https://forem.com/tal_vardi_d7f3ffe2d1f9cdf/3-things-that-worked-1-that-didnt-my-ai-workflow-experiments-this-week-1k01</link>
      <guid>https://forem.com/tal_vardi_d7f3ffe2d1f9cdf/3-things-that-worked-1-that-didnt-my-ai-workflow-experiments-this-week-1k01</guid>
      <description>&lt;p&gt;Happy Friday. Week 0 of doing these roundups publicly. Let's see how it goes.&lt;/p&gt;

&lt;p&gt;This week was mostly about figuring out &lt;em&gt;where&lt;/em&gt; AI actually saves time in an engineering workflow versus where it just creates a convincing illusion of speed. Here's the honest tally.&lt;/p&gt;




&lt;h2&gt;
  
  
  ✅ What worked
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Using AI for the first draft of boilerplate, not the logic.&lt;/strong&gt;&lt;br&gt;
Generating scaffold code — config files, test stubs, README sections — is where I stopped fighting the output and started trusting it. The model doesn't need to understand your domain to write a &lt;code&gt;Dockerfile&lt;/code&gt; or a &lt;code&gt;pytest&lt;/code&gt; fixture skeleton. Ship the boring stuff fast, think harder on the real problems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Prompt iteration as a debugging loop.&lt;/strong&gt;&lt;br&gt;
Treating a bad AI output like a failing test — rather than a reason to give up — changed everything. Reframe, constrain, give an example. Three rounds of that usually gets you somewhere useful. The mental model shift is: &lt;em&gt;you're not asking, you're specifying.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Writing documentation before the code.&lt;/strong&gt;&lt;br&gt;
Sounds backwards but it works. Describe the function in plain English, hand it to the model, get a first implementation back. The doc becomes the spec. You catch ambiguity before it's baked into 200 lines of code.&lt;/p&gt;




&lt;h2&gt;
  
  
  ❌ What didn't work
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Letting AI "own" a refactor end-to-end.&lt;/strong&gt;&lt;br&gt;
I handed off a medium-complexity refactor with minimal checkpoints. The output was locally coherent and globally wrong — it made decisions silently that I would have caught immediately if I'd stayed in the loop. The lesson: keep the AI in a copilot seat, not the driver's seat, any time the task spans more than one file or abstraction layer.&lt;/p&gt;




&lt;h2&gt;
  
  
  The thread underneath all of this
&lt;/h2&gt;

&lt;p&gt;Every win this week came from having a clearer mental model of the task &lt;em&gt;before&lt;/em&gt; touching the AI. Every loss came from outsourcing the thinking. That's probably the pattern for a while.&lt;/p&gt;




&lt;p&gt;If you want a structured version of these ideas — prompts, workflows, and checklists organized into a repeatable system — I've been packaging what actually works into a playbook: &lt;a href="https://gumroad.com/l/nhltvo" rel="noopener noreferrer"&gt;grab it here&lt;/a&gt;. No fluff, just the stuff I'd send a teammate.&lt;/p&gt;

&lt;p&gt;See you next Friday. 🖤&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>career</category>
      <category>engineering</category>
    </item>
    <item>
      <title>How to Use AI as a Rubber Duck That Actually Pushes Back</title>
      <dc:creator>Tal Vardi</dc:creator>
      <pubDate>Thu, 07 May 2026 05:04:30 +0000</pubDate>
      <link>https://forem.com/tal_vardi_d7f3ffe2d1f9cdf/how-to-use-ai-as-a-rubber-duck-that-actually-pushes-back-3gan</link>
      <guid>https://forem.com/tal_vardi_d7f3ffe2d1f9cdf/how-to-use-ai-as-a-rubber-duck-that-actually-pushes-back-3gan</guid>
      <description>&lt;p&gt;Rubber duck debugging works because explaining a problem forces you to think clearly. AI can do the same thing — but better, because it asks follow-up questions.&lt;/p&gt;

&lt;p&gt;Here's a workflow I use when I'm stuck on a design decision or a gnarly bug. Takes about 10 minutes and consistently gets me unstuck.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 1: Dump your context, not your question
&lt;/h2&gt;

&lt;p&gt;Most people open ChatGPT and ask "how do I fix X?" That's too narrow. Instead, give full context first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I'm working on [system/feature]. Here's what I'm trying to accomplish: [goal].
Here's what I've tried: [approach 1], [approach 2].
Here's where I'm stuck: [specific blocker].
Don't give me a solution yet. Ask me clarifying questions until you understand the problem fully.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That last line is the key. Forcing the model to interrogate you before answering surfaces assumptions you didn't know you were making.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2: Answer its questions honestly
&lt;/h2&gt;

&lt;p&gt;When it asks "what constraints are you working under?" or "what happens if you do X?" — actually answer. Don't shortcut to "just give me the answer." The back-and-forth is the point.&lt;/p&gt;

&lt;p&gt;Two or three rounds of Q&amp;amp;A are typically enough.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: Ask for the devil's advocate take
&lt;/h2&gt;

&lt;p&gt;Once you've landed on a direction, run this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Here's the approach I'm leaning toward: [your plan].
Now argue against it. What are the top 3 reasons this is the wrong call?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is where AI earns its keep. It'll surface edge cases, scalability concerns, or maintenance debt you glossed over. You don't have to agree with all of it — but you should be able to rebut each point.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 4: Synthesize a decision log entry
&lt;/h2&gt;

&lt;p&gt;End the session with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Summarize our conversation as a short architectural decision record (ADR):
- Context
- Decision
- Alternatives considered
- Consequences
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste that into your PR description or Notion doc. Future-you (and your teammates) will thank you.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why this works
&lt;/h2&gt;

&lt;p&gt;The standard "explain this to me" prompt treats AI as a search engine. This workflow treats it as a thinking partner with an agenda: to stress-test your reasoning before you commit to it.&lt;/p&gt;

&lt;p&gt;The difference in output quality is significant — especially for decisions that are hard to reverse.&lt;/p&gt;




&lt;p&gt;If you want more structured prompts for engineering decisions, code reviews, and career conversations, I put together a playbook of them here: &lt;a href="https://gumroad.com/l/nhltvo" rel="noopener noreferrer"&gt;AI Prompt Playbook for Engineers&lt;/a&gt;. Practical, copy-paste ready, no filler.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>career</category>
      <category>engineering</category>
    </item>
  </channel>
</rss>
