<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Raunak Kathuria</title>
    <description>The latest articles on Forem by Raunak Kathuria (@raunakkathuria).</description>
    <link>https://forem.com/raunakkathuria</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3833513%2F1c61582a-db6e-4c5b-a126-109ffab3ef60.png</url>
      <title>Forem: Raunak Kathuria</title>
      <link>https://forem.com/raunakkathuria</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/raunakkathuria"/>
    <language>en</language>
    <item>
      <title>How to make AI sound more like you, not more like AI</title>
      <dc:creator>Raunak Kathuria</dc:creator>
      <pubDate>Sun, 26 Apr 2026 15:45:25 +0000</pubDate>
      <link>https://forem.com/raunakkathuria/how-to-make-ai-sound-more-like-you-not-more-like-ai-6ji</link>
      <guid>https://forem.com/raunakkathuria/how-to-make-ai-sound-more-like-you-not-more-like-ai-6ji</guid>
      <description>&lt;p&gt;In the first post, I made a simple point: &lt;em&gt;The AI model is a commodity. Taste isn’t&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/raunakkathuria/the-ai-model-is-a-commodity-taste-isnt-5177"&gt;https://dev.to/raunakkathuria/the-ai-model-is-a-commodity-taste-isnt-5177&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A few people then asked the obvious next question: “If taste is the real differentiator, how do you actually build it into an AI workflow?”&lt;/p&gt;

&lt;p&gt;This post is my answer to that. Not in theory. In practice.&lt;/p&gt;

&lt;p&gt;This is the exact process I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the prompt I used to build my &lt;code&gt;TASTE.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;the questions that mattered most&lt;/li&gt;
&lt;li&gt;the audit loop I now use to catch AI-sounding writing&lt;/li&gt;
&lt;li&gt;and why this worked better than just saying “write in my voice”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I started doing this because I kept running into the same problem. I could get AI to produce clean writing. Sometimes even impressive writing. But it still did not quite feel like me. That was the frustrating part.&lt;/p&gt;

&lt;p&gt;It was often clear, structured, and polished. But something in it felt slightly off. Too smooth. Too balanced. A bit too eager to sound like “good writing.”&lt;/p&gt;

&lt;p&gt;For a while, I thought this was mainly a prompting problem from my side.&lt;/p&gt;

&lt;p&gt;Maybe I needed better instructions: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Write like me.” &lt;/li&gt;
&lt;li&gt;“Match my tone.” &lt;/li&gt;
&lt;li&gt;“Be more natural.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That helped a little. But not enough. It still felt like I was describing the surface of my writing, not the thing underneath it.&lt;/p&gt;

&lt;p&gt;That was the shift for me. &lt;/p&gt;

&lt;p&gt;AI does not need more adjectives about your style. It needs a better understanding of your &lt;strong&gt;taste&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The goal was not to sound like me. It was to think closer to me.
&lt;/h2&gt;

&lt;p&gt;At first, I assumed I mainly needed AI to copy my tone, sentence style, or structure. But that is only 10% of it.&lt;/p&gt;

&lt;p&gt;What actually matters more is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what I notice&lt;/li&gt;
&lt;li&gt;what I simplify&lt;/li&gt;
&lt;li&gt;what I reject&lt;/li&gt;
&lt;li&gt;what I find engaging&lt;/li&gt;
&lt;li&gt;what I never want to sound like&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, &lt;strong&gt;taste&lt;/strong&gt;. That is what makes something feel like you even before it becomes stylish. &lt;/p&gt;

&lt;p&gt;It is also why two pieces can say similar things, but only one feels like it came from you.&lt;/p&gt;

&lt;h2&gt;
  
  
  The first prompt I used
&lt;/h2&gt;

&lt;p&gt;I started broad on purpose.&lt;/p&gt;

&lt;p&gt;I wanted the model to look at my public writing, my older blog posts, and the way I think in conversations before trying to define my voice.&lt;/p&gt;

&lt;p&gt;Here is the prompt I used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I want you to go through my Substack and LinkedIn
- https://www.linkedin.com/in/raunakkathuria/
- https://raunakkathuria.substack.com/
- https://raunakkathuria.github.io/blog/ (though a little old now)

And also analyze all the chat history with me to develop a taste that can define my taste for AI to write and behave like me.

You need to create a taste of me based on the above information, ask me questions that will be needed to create this and ask till you are not satisfied that we have 95% of information to write the taste for defining me, my writing style etc that I will use to give it to my agent AI so that it can sound and write like me

Also read https://alisabelmas.substack.com/p/the-age-of-good-taste that explains how the taste has evolved over the centuries
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two things about this were useful.&lt;/p&gt;

&lt;p&gt;First, it forced the model to start from evidence instead of guessing.&lt;/p&gt;

&lt;p&gt;Second, it made the process iterative. Not “generate a style guide in one shot,” but “keep asking until the signal is strong enough.”&lt;/p&gt;

&lt;p&gt;That part matters more than it sounds. Most people stop too early.&lt;/p&gt;

&lt;p&gt;They show the model two or three pieces of writing, ask for a style guide, and then wonder why the result sounds generic.&lt;/p&gt;

&lt;p&gt;A generic input usually produces a generic version of you.&lt;/p&gt;

&lt;h2&gt;
  
  
  The next step was contrast, not description
&lt;/h2&gt;

&lt;p&gt;Once the model had enough material, the most useful questions were not “what is my tone?”&lt;/p&gt;

&lt;p&gt;They were questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what writing do I admire?&lt;/li&gt;
&lt;li&gt;what writing do I dislike, even if it is popular?&lt;/li&gt;
&lt;li&gt;what should AI never do when writing like me?&lt;/li&gt;
&lt;li&gt;how opinionated should it be?&lt;/li&gt;
&lt;li&gt;what should readers feel?&lt;/li&gt;
&lt;li&gt;what kind of openings feel natural to me?&lt;/li&gt;
&lt;li&gt;what kind of endings feel earned?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is where the real shape started to emerge.&lt;/p&gt;

&lt;p&gt;I shared references I genuinely like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how &lt;strong&gt;Karpathy&lt;/strong&gt; presents information&lt;/li&gt;
&lt;li&gt;how &lt;strong&gt;Arthur Hayes&lt;/strong&gt; can be engaging&lt;/li&gt;
&lt;li&gt;how &lt;strong&gt;Anthropic Engineering&lt;/strong&gt; explains technical ideas in a way that is easy to digest&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And I was also explicit about the emotional shape I wanted:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;calm clarity&lt;/strong&gt; first&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;playful insight&lt;/strong&gt; second&lt;/li&gt;
&lt;li&gt;not loud&lt;/li&gt;
&lt;li&gt;not boastful&lt;/li&gt;
&lt;li&gt;not spammy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That narrowed the space quickly.&lt;/p&gt;

&lt;p&gt;Because good taste is not just preference. It is also rejection.&lt;/p&gt;

&lt;p&gt;Sometimes the clearest signal is not what you love. It is what you instantly do not want to sound like.&lt;/p&gt;

&lt;h2&gt;
  
  
  The most useful realization was what I did not want
&lt;/h2&gt;

&lt;p&gt;This turned out to be as important as what I liked.&lt;/p&gt;

&lt;p&gt;I did not want the writing to sound:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;like consultant-speak&lt;/li&gt;
&lt;li&gt;like LinkedIn self-congratulation&lt;/li&gt;
&lt;li&gt;like a Twitter-thread guru&lt;/li&gt;
&lt;li&gt;like dramatic storytelling trying too hard to land&lt;/li&gt;
&lt;li&gt;like polished AI neatness&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That gave the model sharper guardrails.&lt;/p&gt;

&lt;p&gt;A lot of weak AI writing does not fail because it is wrong. It fails because it feels slightly manufactured.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Too polished&lt;/li&gt;
&lt;li&gt;Too balanced &lt;/li&gt;
&lt;li&gt;Too eager to sound like “good writing”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That was the part I kept reacting to.&lt;/p&gt;

&lt;p&gt;Not that the draft was poor. Just that it felt constructed.&lt;/p&gt;

&lt;p&gt;At some point, the process gave me a simple filter that ended up being useful everywhere after that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;not writing that grabs attention but writing that earns attention&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is still one of the best tests I have found.&lt;/p&gt;

&lt;h2&gt;
  
  
  What my TASTE.md ended up looking like
&lt;/h2&gt;

&lt;p&gt;The final &lt;code&gt;TASTE.md&lt;/code&gt; was not really a list of tone adjectives.&lt;/p&gt;

&lt;p&gt;It was closer to a decision system.&lt;/p&gt;

&lt;p&gt;It encoded things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;make complex ideas easy to understand&lt;/li&gt;
&lt;li&gt;use simple language with technical precision when needed&lt;/li&gt;
&lt;li&gt;prefer logic first, then story, then examples&lt;/li&gt;
&lt;li&gt;be engaging, but never performative&lt;/li&gt;
&lt;li&gt;use subtle humor, not forced wit&lt;/li&gt;
&lt;li&gt;allow paradox when it reveals something true&lt;/li&gt;
&lt;li&gt;stay low-ego&lt;/li&gt;
&lt;li&gt;avoid sounding like a personal brand machine&lt;/li&gt;
&lt;/ul&gt;
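&lt;p&gt;To make that concrete, here is a minimal skeleton of the shape such a file can take. The section names are illustrative, assembled from the points above; this is not my actual file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# TASTE.md (skeleton)

## Voice
- calm clarity first, playful insight second
- low-ego; never loud, boastful, or spammy

## Structure
- logic first, then story, then examples
- simple language, technical precision when needed

## Never
- consultant-speak or LinkedIn self-congratulation
- forced wit or forced anecdotes
- polished AI neatness

## Test
- does this earn attention, or just grab it?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;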

&lt;p&gt;That was the real difference.&lt;/p&gt;

&lt;p&gt;I was no longer asking AI to imitate my writing.&lt;/p&gt;

&lt;p&gt;I was giving it a better model of my judgment.&lt;/p&gt;

&lt;p&gt;And that is a much stronger foundation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The second part of the system: audit, then humanize
&lt;/h2&gt;

&lt;p&gt;Once I had the base taste defined, a different problem showed up.&lt;/p&gt;

&lt;p&gt;Even when the idea was right, some drafts still sounded a bit too smooth. Again, not bad. Just slightly synthetic.&lt;/p&gt;

&lt;p&gt;So I started using a separate audit prompt after drafting. That separation helped a lot.&lt;/p&gt;

&lt;p&gt;Writing and critiquing are different jobs. A model that can produce a smooth paragraph is not always good at noticing where that same paragraph sounds artificial.&lt;/p&gt;

&lt;p&gt;You have to ask for that explicitly.&lt;/p&gt;

&lt;p&gt;Here is the audit prompt I use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Universal AI-audit-check prompt
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Audit this draft for AI-sounding writing.

Check for:
- predictable phrasing
- generic abstractions
- over-balanced sentence rhythm
- repeated syntactic patterns
- too many rhetorical devices
- over-explaining
- vague claims without concrete grounding
- polished-but-forgettable wording
- anything that sounds constructed instead of true

Return:
1. AI-sound risk score (0–100)
2. Top 5 flagged lines
3. Why each line feels artificial
4. A tighter alternative for each
5. Final verdict: pass / revise / rewrite

Important:
- preserve my meaning
- preserve my calm, grounded voice
- do not make it more dramatic
- do not add fake personality
- do not add emojis, hype, or cleverness for its own sake
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key thing here is that I am not asking it, “make this better.”&lt;/p&gt;

&lt;p&gt;That is too vague.&lt;/p&gt;

&lt;p&gt;I am asking it to tell me where the writing sounds artificial, and why.&lt;/p&gt;

&lt;p&gt;That usually leads to much better edits.&lt;/p&gt;

&lt;h3&gt;
  
  
  Universal humanizer prompt
&lt;/h3&gt;

&lt;p&gt;Then, only if the audit flags real issues, I use this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Humanize this draft without changing the idea.

Goals:
- make it sound more natural, specific, and human
- reduce visible AI smoothness
- keep the writing calm, clear, grounded, and low-ego
- preserve my original judgment and intent

Rules:
- edit only where needed
- prefer concrete words over abstract ones
- remove over-signposting
- vary sentence rhythm naturally
- keep one sharp line if it feels earned
- do not force anecdotes
- do not add fake struggle or fake vulnerability
- do not make it sound like a LinkedIn guru
- do not over-polish

Return:
1. revised version
2. 3 biggest changes made
3. why those changes improved the voice
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This matters too. A lot of “humanizing” prompts make writing worse because they add fake personality.&lt;/p&gt;

&lt;p&gt;That is not what I want. I do not want the writing to sound louder, more dramatic, or more quirky than it needs to be.&lt;/p&gt;

&lt;p&gt;I just want it to stop sounding like polished autocomplete.&lt;/p&gt;

&lt;h2&gt;
  
  
  The actual loop I use
&lt;/h2&gt;

&lt;p&gt;In practice, the workflow is simple.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Build the base taste
&lt;/h3&gt;

&lt;p&gt;Use examples, dislikes, audience, tone, openings, endings, and constraints to create a real &lt;code&gt;TASTE.md&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Draft with that taste
&lt;/h3&gt;

&lt;p&gt;Have the model write with the shared taste as the base.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Audit the draft
&lt;/h3&gt;

&lt;p&gt;Use a separate prompt to identify where it sounds AI-generated.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Humanize only if needed
&lt;/h3&gt;

&lt;p&gt;Do not “improve” everything. Fix only what is flattening the voice.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Keep refining the taste file
&lt;/h3&gt;

&lt;p&gt;Every time something feels off, the fix is often not in the draft. It is in the taste definition.&lt;/p&gt;

&lt;p&gt;That last point is probably the most important.&lt;/p&gt;

&lt;p&gt;A weak prompt can produce a weak output once. A weak taste file produces weak outputs repeatedly.&lt;/p&gt;
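&lt;p&gt;The loop above can be sketched in code. This is a rough outline, not a real implementation: &lt;code&gt;model&lt;/code&gt; stands in for whatever API you call, and the prompts and verdict handling are simplified placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def run_taste_loop(topic, taste, model, max_rounds=3):
    """Draft with the taste file, audit separately, humanize only if flagged.

    model: any callable taking a prompt string and returning a string.
    Swap in your own client; this sketch has no real API behind it.
    """
    draft = ""
    for _ in range(max_rounds):
        draft = model(f"Using this TASTE.md:\n{taste}\n\nDraft a post about: {topic}")
        audit = model(
            "Audit for AI-sounding writing. "
            f"End with one word: pass, revise, or rewrite.\n\n{draft}"
        )
        verdict = audit.strip().splitlines()[-1].lower()
        if "pass" in verdict:
            return draft  # do not "improve" what already works
        if "revise" in verdict:
            # fix only what flattens the voice, keep the idea intact
            return model(f"Humanize without changing the idea:\n\n{draft}")
        # "rewrite" usually means the taste file is weak; refine and redraft
    return draft
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The point of the sketch is the separation: drafting, auditing, and humanizing are distinct calls, and the humanize step only runs when the audit flags something.&lt;/p&gt;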

&lt;h2&gt;
  
  
  Before and after
&lt;/h2&gt;

&lt;p&gt;Before this process, I was asking AI to imitate my writing. After this process, I was giving AI a better model of my judgment.&lt;/p&gt;

&lt;p&gt;That changed the quality of the output more than any clever prompt did.&lt;/p&gt;

&lt;h2&gt;
  
  
  If you want to build your own
&lt;/h2&gt;

&lt;p&gt;Start with three things.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Give it your real data
&lt;/h3&gt;

&lt;p&gt;Not just one post. Give it enough signal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;blogs&lt;/li&gt;
&lt;li&gt;notes&lt;/li&gt;
&lt;li&gt;old writing&lt;/li&gt;
&lt;li&gt;chats&lt;/li&gt;
&lt;li&gt;public posts&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Give it contrast
&lt;/h3&gt;

&lt;p&gt;Tell it what you like and what you never want to sound like. This is where a lot of the real taste signal comes from.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Separate writing from auditing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Use one pass to generate.
&lt;/li&gt;
&lt;li&gt;Use another pass to critique.
&lt;/li&gt;
&lt;li&gt;Use a third pass, only when needed, to humanize.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Do not merge all three into one vague instruction and expect the result to be sharp.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thought
&lt;/h2&gt;

&lt;p&gt;In the first post, I argued that taste will matter more as models become abundant.&lt;/p&gt;

&lt;p&gt;This is what that looks like in practice.&lt;/p&gt;

&lt;p&gt;The easiest mistake with AI writing is to focus on output too early.&lt;/p&gt;

&lt;p&gt;You keep editing sentences when the real issue is that the model does not yet understand your judgment.&lt;/p&gt;

&lt;p&gt;That is why I think &lt;code&gt;TASTE.md&lt;/code&gt; matters.&lt;/p&gt;

&lt;p&gt;It is not just a style file. It is a way of making your preferences reusable.&lt;/p&gt;

&lt;p&gt;And once that exists, the writing gets better because the decisions underneath it get better.&lt;/p&gt;

&lt;p&gt;AI gets cheaper every year. Your judgment does not.&lt;/p&gt;

&lt;p&gt;That is why the real asset is not the model. It is the taste you train it to follow.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aitaste</category>
    </item>
    <item>
      <title>The AI model is a commodity. Taste isn't.</title>
      <dc:creator>Raunak Kathuria</dc:creator>
      <pubDate>Mon, 06 Apr 2026 13:43:10 +0000</pubDate>
      <link>https://forem.com/raunakkathuria/the-ai-model-is-a-commodity-taste-isnt-5177</link>
      <guid>https://forem.com/raunakkathuria/the-ai-model-is-a-commodity-taste-isnt-5177</guid>
      <description>&lt;p&gt;When everyone has access to the same AI, your edge is the judgment behind the output.&lt;/p&gt;

&lt;p&gt;The first &lt;a href="https://www.reddit.com/r/openclaw/comments/1s6w0rt/comment/od4vjq0" rel="noopener noreferrer"&gt;comment&lt;/a&gt; on my Reddit post wasn't a question.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I don't think I've ever seen so many stupid ideas and em-dashes in such a short amount of text.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;No counter-argument. No nuance. Just a precise observation about punctuation I couldn't shake, because they were right. Not about em-dashes, but about the generic text that was presented.&lt;/p&gt;

&lt;p&gt;The post had been written with AI assistance. I'd used em-dashes in almost every sentence, in a rhythm that wasn't mine. The Reddit commenter knew something was off. That's the thing about taste — you recognise the absence of it before you can name it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Everyone has the same AI
&lt;/h2&gt;

&lt;p&gt;Here's the uncomfortable truth: &lt;strong&gt;the model is no longer the differentiator&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Give ten capable people a reasonable prompt and you'll get ten tidy versions of the same answer. Same sentence balance. Same transitions. Same polished neutrality.&lt;/p&gt;

&lt;p&gt;The AI optimises for readability, and readable turns out to mean forgettable.&lt;/p&gt;

&lt;p&gt;What nobody's discussing: AI was trained to be useful to everyone. That means it has no taste. It has capability without judgment, fluency without instinct. It can write anything, which means it sounds like nothing in particular.&lt;/p&gt;

&lt;h2&gt;
  
  
  Taste is judgment you can teach
&lt;/h2&gt;

&lt;p&gt;Everyone who's spent years thinking about something develops a &lt;em&gt;taste&lt;/em&gt; for it. Not just preferences: &lt;em&gt;a finely tuned sense of what belongs and what doesn't&lt;/em&gt;. What to say. What to leave out. What sounds true and what sounds performed.&lt;/p&gt;

&lt;p&gt;That taste isn't in the model. It's in you.&lt;/p&gt;

&lt;p&gt;Taste is the personalisation you bring to AI — the part that can't be trained on someone else's data.&lt;/p&gt;

&lt;p&gt;A doctor has it in diagnosis, the pattern recognition from thousands of cases that flags something wrong before the tests confirm it. A great editor has it for sentences, whether a line is doing the work it thinks it's doing.&lt;/p&gt;

&lt;p&gt;AI can generate the words. It can't generate the judgment behind them.&lt;/p&gt;

&lt;p&gt;The question is whether you can give it yours.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I built to do exactly that
&lt;/h2&gt;

&lt;p&gt;After the Reddit comment, I didn't just edit the post. I tried to fix the root cause.&lt;/p&gt;

&lt;p&gt;The problem wasn't the output. It was that I'd given the AI no real constraint to work with. &lt;em&gt;Write like me&lt;/em&gt; is not a constraint. It's a wish. The AI interpreted it as: write clearly, write professionally, write in a way that nobody will object to. And nobody would object to it, because it had no point of view.&lt;/p&gt;

&lt;p&gt;So I spent a few sessions trying to articulate what my voice actually was. Not in general terms, specific ones.&lt;/p&gt;

&lt;p&gt;I asked the agent to read through my Substack posts, my LinkedIn writing, our conversation history. Identify the patterns. Push back on what it got wrong. Refine until the description felt accurate enough to be useful.&lt;/p&gt;

&lt;p&gt;The output was a file I called &lt;code&gt;TASTE.md&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It's not a style guide. Style guides tell you how to format things. This was closer to a brief for what kind of mind should be speaking: what it cares about, what it would never say, the things that would sound wrong even if they were technically correct.&lt;/p&gt;

&lt;p&gt;The difference in the writing was immediate. Not perfect — it took several passes and a proper audit loop to get the AI detection score from &lt;strong&gt;78% down to 11%&lt;/strong&gt;. The first draft with &lt;code&gt;TASTE.md&lt;/code&gt; loaded still scored 62%. Three rounds to get it to 11%. But the direction was right from the start.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters beyond writing
&lt;/h2&gt;

&lt;p&gt;Writing is just the most visible place where taste shows up.&lt;/p&gt;

&lt;p&gt;In code review, the taste is your sense of what good architecture feels like in your specific system, not in the benchmark, not in general, in yours. In product decisions, it's the feel for which trade-off your team can absorb right now and which one will cost you a quarter.&lt;/p&gt;

&lt;p&gt;These aren't prompting techniques. They're the accumulated judgment you've built over years, finally expressible in a form something else can use.&lt;/p&gt;

&lt;p&gt;The teams that get the most out of AI aren't the ones with the best prompts. They're the ones who've spent time writing it down, something like &lt;code&gt;TASTE.md&lt;/code&gt;, but for their domain, specifically enough that they can give it to something that has no beliefs of its own.&lt;/p&gt;

&lt;h2&gt;
  
  
  The inversion worth sitting with
&lt;/h2&gt;

&lt;p&gt;AI is trained to be a generalist. That framing is holding people back.&lt;/p&gt;

&lt;p&gt;The same model with your specific taste, the things that would sound wrong to you even if they're technically correct, produces something different from the same model without it. Each piece builds on the last. The output starts to sound like it came from someone.&lt;/p&gt;

&lt;p&gt;And the part I didn't expect: building &lt;code&gt;TASTE.md&lt;/code&gt; didn't just make the AI more useful. It made me clearer about what I actually believe. You can't articulate your taste to something else without first understanding it yourself.&lt;/p&gt;

&lt;p&gt;The Reddit comment that stung on a Sunday evening was useful. Not because of the em-dashes, but because they were right about the generic text that was presented.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Everyone has access to the same AI. What differentiates you is what you bring to it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;How clearly have you articulated your taste?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is part one of a two-part series. Part two covers the exact process: how to build your own &lt;code&gt;TASTE.md&lt;/code&gt;, the audit loop I use, and what the before/after looks like.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://raunakkathuria.substack.com/p/the-ai-model-is-a-commodity-taste" rel="noopener noreferrer"&gt;raunakkathuria.substack.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>programming</category>
    </item>
    <item>
      <title>The Agentic Stack</title>
      <dc:creator>Raunak Kathuria</dc:creator>
      <pubDate>Mon, 30 Mar 2026 05:16:30 +0000</pubDate>
      <link>https://forem.com/raunakkathuria/the-agentic-stack-5ej7</link>
      <guid>https://forem.com/raunakkathuria/the-agentic-stack-5ej7</guid>
      <description>&lt;p&gt;We’ve spent years building software the same way. A service receives a request, calls another service, writes to a database. Something reads it back. You scale the slow parts, monitor the breaking ones.&lt;/p&gt;

&lt;p&gt;That model still works. It’s reliable, well-understood, and well-tooled. But something’s shifted — and if you haven’t felt it yet in your architecture decisions, you will soon.&lt;/p&gt;

&lt;p&gt;AI has introduced a new execution layer. Not infrastructure. Not middleware. Something that does work — the kind that used to require a human to sit down, think, and grind through.&lt;/p&gt;

&lt;h2&gt;
  
  
  The question nobody’s asking right now
&lt;/h2&gt;

&lt;p&gt;Most leaders ask: “Should we use AI?” That’s the wrong level. Everyone’s already using it — in their IDE, their code review tool, their incident runbook.&lt;/p&gt;

&lt;p&gt;Ask instead: What’s the architecture?&lt;/p&gt;

&lt;p&gt;Because when AI stops being a feature and starts being an execution layer — something that does work rather than assists with work — your system’s structure changes. How you define capability changes. How intent enters the system changes. That’s what the agentic stack is about, and it’s worth understanding before your team figures it out without you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three layers. You already know all of them
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Claw&lt;/strong&gt; is the new unit of architecture. A bounded execution unit with its own role, context, tools, and constraints. Not a microservice — it doesn’t expose an API endpoint. It checks Slack, reads files, calls APIs, drafts responses, takes action. It does things.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skill&lt;/strong&gt; is the instruction layer. Reusable logic that tells a claw what to do, how to do it, what rules it’s operating under, and what good output looks like. If a claw is the executor, the skill is the judgment baked in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt&lt;/strong&gt; is how intent enters the system. Not a rigid API contract — a natural language instruction that activates claws and routes work through skills.&lt;/p&gt;

&lt;p&gt;Three layers. Each one maps to something familiar.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Claw is the architecture. Skill is the language. Prompt is the protocol. That’s the stack your team is building on — whether you’ve named it yet or not.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What this replaces and what it doesn't
&lt;/h2&gt;

&lt;p&gt;Here’s where most explanations go sideways: they frame this as replacement. It isn’t, and treating it that way will cause you to either over-invest or dismiss it entirely.&lt;/p&gt;

&lt;p&gt;Services still define how systems run. Databases still store state. APIs still integrate third-party tools. None of that changes.&lt;/p&gt;

&lt;p&gt;What changes is the coordination layer on top, the work that used to live in Notion docs, Jira comments, and Slack threads.&lt;/p&gt;

&lt;p&gt;The glue work. The “someone needs to look at this and decide” work. Before the agentic stack, that required a human. After, a claw handles it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Meet Priya
&lt;/h2&gt;

&lt;p&gt;She’s a Senior Engineering Manager at a scaling platform team. Her week, pre-agentic stack: 15 PRs moving through review, senior engineers burning two to three hours a day reading diffs, junior engineers waiting three to five days for feedback that came back cryptic half the time anyway.&lt;/p&gt;

&lt;p&gt;The process ran like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;PR opened → Priya manually assigns reviewer based on who knows the area&lt;/li&gt;
&lt;li&gt;Reviewer reads 300–500 lines of code&lt;/li&gt;
&lt;li&gt;Leaves comments → author reads them, guesses at intent → revises&lt;/li&gt;
&lt;li&gt;Reviewer re-reads, re-approves or re-requests changes&lt;/li&gt;
&lt;li&gt;Repeat until it's good enough&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's six to eight human touchpoints per PR. Multiply by 15 PRs a week.&lt;/p&gt;

&lt;p&gt;They tried building a claw to do first-pass reviews. The skill encoded the team’s standards — naming conventions, test coverage thresholds, security anti-patterns, documentation expectations. First two weeks were rough. It flagged too aggressively, left feedback in a tone that annoyed the engineers, and missed context on a few architectural decisions it didn’t have visibility into.&lt;/p&gt;

&lt;p&gt;They fixed the skill. Rewrote the tone guidelines, added the architecture docs to its context, tuned the flagging thresholds. By week four, it was doing what a good junior reviewer would do — and Priya’s seniors were only seeing the PRs that genuinely needed a human call.&lt;/p&gt;

&lt;p&gt;That’s the actual story. It wasn’t clean. It worked anyway.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it looks like in practice
&lt;/h2&gt;

&lt;p&gt;Here's a simplified skill definition for a code review claw:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;code-review&lt;/span&gt;
&lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Review pull requests against team engineering standards. Flag violations. Approve clean PRs. Escalate anything touching auth or payments.&lt;/span&gt;
&lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;standards/engineering-guidelines.md&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.github/CODEOWNERS&lt;/span&gt;
&lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;read_pr_diff&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;post_review_comment&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;request_human_review&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;approve_pr&lt;/span&gt;
&lt;span class="na"&gt;constraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Never approve PRs touching /src/auth without human sign-off&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Always flag missing tests as blocking&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Post feedback in plain English, not just line references&lt;/span&gt;
&lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;structured review with verdict&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;approve | changes_requested | escalate&lt;/span&gt;
&lt;span class="s"&gt;The skill is the logic. The claw is the executor. The PR opening is the prompt.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No new service to build. No new API to maintain. You're composing behaviour, not writing code.&lt;/p&gt;
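&lt;p&gt;To show what “executable policy” can mean here, a rough sketch of how a runtime might apply two of those constraints. The field names and the &lt;code&gt;review_verdict&lt;/code&gt; helper are hypothetical, not a real OpenClaw API:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Hypothetical sketch: the skill's constraints reduced to data,
# and the verdict logic applied before any approval happens.
SKILL = {
    "constraints": {
        "protected_paths": ("/src/auth",),  # never auto-approve these
        "require_tests": True,              # missing tests are blocking
    },
}

def review_verdict(changed_paths, has_tests, skill=SKILL):
    """Returns 'approve', 'changes_requested', or 'escalate'."""
    rules = skill["constraints"]
    if any(p.startswith(rules["protected_paths"]) for p in changed_paths):
        return "escalate"  # human sign-off required
    if rules["require_tests"] and not has_tests:
        return "changes_requested"
    return "approve"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Nothing here is clever, and that is the point: the judgment lives in the skill definition, and the executor just applies it consistently.&lt;/p&gt;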

&lt;h2&gt;
  
  
  What actually changes for you
&lt;/h2&gt;

&lt;p&gt;Your senior engineers stop being the default reviewers. Their judgment gets encoded into skills — and applied at scale, consistently, without their calendar getting eaten. They shift from doing reviews to defining what a good review looks like. That’s a leverage upgrade.&lt;/p&gt;

&lt;p&gt;Code standards become executable, not aspirational. There’s a meaningful difference between a policy nobody reads and a skill a claw runs on every PR. One is stated. One is applied. You’ve probably written a lot of policies that belong in the second category.&lt;/p&gt;

&lt;p&gt;Your junior engineers get faster feedback loops. They learn faster because the signal isn’t filtered through whoever had time to review that week.&lt;/p&gt;

&lt;p&gt;And you get visibility into work that was previously invisible. Every claw action is logged. Every decision is traceable. Which is a kind of accountability your process almost certainly doesn’t have right now.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to start
&lt;/h2&gt;

&lt;p&gt;Pick one workflow that is a bottleneck. Not a hypothetical one, a specific named workflow that your team complains about in retros.&lt;/p&gt;

&lt;p&gt;Write down, in plain language, what a good human does in that workflow. Step by step. Including the judgment calls, the rules they apply, the things they'd escalate. That's your first skill. The trigger is your first prompt. The claw executes it.&lt;/p&gt;

&lt;p&gt;Run it alongside your human process for two weeks. Don’t replace anything yet — just compare the outputs. Trust the parts that are right. Fix the parts that aren’t.&lt;/p&gt;

&lt;p&gt;That’s it. Everything else is refinement.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The most interesting thing about this stack isn’t the technology. It’s that the judgment your best people carry around in their heads — the stuff that makes them irreplaceable — can now be encoded and applied everywhere, all the time, without them being in the room.&lt;br&gt;
That’s what’s actually happening. The naming is new. The leverage isn’t.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What workflow in your team still requires a human because nobody’s written down what good looks like? Genuinely curious — drop it in the comments.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openclaw</category>
      <category>agents</category>
    </item>
  </channel>
</rss>
