<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jyoti Bisht</title>
    <description>The latest articles on Forem by Jyoti Bisht (@joeyouss).</description>
    <link>https://forem.com/joeyouss</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F552623%2F81fd9cc4-a3d9-4b3d-90b6-0bad125d6d03.jpeg</url>
      <title>Forem: Jyoti Bisht</title>
      <link>https://forem.com/joeyouss</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/joeyouss"/>
    <language>en</language>
    <item>
      <title>How to Audit Your Own Developer Experience in One Afternoon</title>
      <dc:creator>Jyoti Bisht</dc:creator>
      <pubDate>Wed, 06 May 2026 11:52:48 +0000</pubDate>
      <link>https://forem.com/joeyouss/how-to-audit-your-own-developer-experience-in-one-afternoon-1m14</link>
      <guid>https://forem.com/joeyouss/how-to-audit-your-own-developer-experience-in-one-afternoon-1m14</guid>
      <description>&lt;p&gt;Most teams think their DX is better than it is. &lt;strong&gt;Not because they're deluded but because they know too much&lt;/strong&gt;. They know where the docs are, how auth works, what that error actually means. They've never experienced their own product as a stranger.&lt;/p&gt;

&lt;p&gt;In this blog, I describe the checklist I use to force that perspective. It takes about two hours. It will find things that embarrass you. That's the point.&lt;/p&gt;

&lt;p&gt;When I audit a developer experience, the first thing I do is try to forget everything I know. I open an incognito window, grab a fresh API key, and pretend I'm a developer who just saw this product on a Reddit thread and has 20 minutes before their next standup. That constraint is everything (I like to call it a stress test, because I'm the one stressed). &lt;br&gt;
Because that's actually how developers find you. &lt;/p&gt;

&lt;p&gt;If you come from an engineering background (and I do), you already know what this feels like. You've been that developer. You've rage-closed a docs tab because the quickstart assumed you knew something you didn't. You've given up on an API not because the API was bad, but because getting to the first working call felt like too much work.&lt;/p&gt;

&lt;p&gt;That experience is your most valuable tool as a DevEx engineer. Use it.&lt;/p&gt;

&lt;p&gt;Before I run any checklist, I spend time thinking through the mental state of a developer hitting the product for the first time.&lt;/p&gt;

&lt;h2&gt;
  
  
  The zero-to-one moment
&lt;/h2&gt;

&lt;p&gt;This is the thing I care most about. Everything else in the audit is important, but this is the one that decides whether a developer ever becomes a user at all. &lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Can you find the docs from the homepage in two clicks?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I know it sounds trivial. It isn't. I've seen products with brilliant APIs where I couldn't find the documentation without using the search bar. If your developers have to search for your docs, you've already made them work harder than they should.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Do you land on a quickstart, not a reference?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A reference is for people who already know what they're doing. A quickstart is for everyone else. If the first page a new developer lands on is a full API reference index, you've told them: "figure it out yourself." A quickstart says: "here's the most important thing, let's do it together." And that is why all the &lt;a href="https://developer.harness.io/docs/cloud-cost-management" rel="noopener noreferrer"&gt;docs I write&lt;/a&gt; have a quickstart guide.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Is auth explained once, clearly, in one place?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Auth is where developers get lost more than anywhere else. I've seen products where the API key setup is explained in the quickstart, the concepts section, and a help article, all with slightly different instructions. Pick one place. Make it canonical. Link everything else there.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;What does the first error look like?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I do this deliberately: I send a bad request and read what comes back. Because this is what every developer sees the first time they make a mistake, which is guaranteed to happen. If the error is &lt;code&gt;400 Bad Request&lt;/code&gt; with no further detail, that's a UX failure. I write it down.&lt;/p&gt;
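&lt;p&gt;I like to capture this check as code so the finding is repeatable. A minimal sketch of the grading I otherwise do by hand; the field names it looks for (like &lt;code&gt;docs_url&lt;/code&gt;) are my own convention, not any standard:&lt;/p&gt;

```python
import json

def audit_error_body(raw: str) -> list[str]:
    """Grade an API error body; an empty list means it gives the developer something to act on."""
    findings = []
    try:
        body = json.loads(raw)
    except json.JSONDecodeError:
        return ["error body is not JSON"]
    # A bare status code with no prose is the "400 Bad Request" failure mode.
    if not any(key in body for key in ("message", "detail", "error_description")):
        findings.append("no human-readable message")
    if not any(key in body for key in ("docs_url", "doc_url", "documentation")):
        findings.append("no link to documentation")
    return findings
```

&lt;p&gt;Run it on the body you get back; every finding it returns is a line in the audit notes.&lt;/p&gt;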




&lt;h2&gt;
  
  
  Docs quality
&lt;/h2&gt;

&lt;p&gt;✅ &lt;strong&gt;Does every endpoint have a working example?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not just the popular ones. Pick three at random from the tail of your reference docs. Do they have examples? Do the examples work?&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Are examples in at least two languages?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Python and JavaScript cover the majority of developers I've worked with. If your docs are curl-only, you're asking every developer to translate before they can try. That translation cost is real — not because it's hard, but because it's friction at the exact moment you want zero friction.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Are error codes documented with actual fixes?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A table of error codes with one-line descriptions is not documentation. What I want to see, and what I build when the decision is mine, is error documentation that explains why the error happens, what the common causes are, and what to do about it. That's the difference between a developer fixing a problem in 30 seconds and opening a support ticket.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Is the changelog maintained?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I check the date on the last entry. If it's more than 60 days old, I flag it, not because the product hasn't changed, but because if changes haven't been documented, developers will be working with outdated assumptions. Trust erodes quietly when changelogs go stale.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Do the docs match the actual API?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I run examples. Three of them, at random. Not the ones in the quickstart (those get tested all the time). The ones in the middle of the reference. If any of them fail, that's a critical finding. Docs that don't match reality aren't docs. They're misinformation.&lt;/p&gt;




&lt;h2&gt;
  
  
  SDK and tooling
&lt;/h2&gt;

&lt;p&gt;✅ &lt;strong&gt;Does it install clean?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Fresh environment, one command, no flags. If I have to add &lt;code&gt;--legacy-peer-deps&lt;/code&gt; or pin a version to get a clean install, I write it down. Because that's what every developer hits, and most of them won't know why (and it shouldn't be on them to figure it out).&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Does the SDK version in the docs match the latest published version?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I check npm or PyPI. If there's a major version gap between what the docs show and what's published, every code example in the docs is &lt;em&gt;potentially&lt;/em&gt; broken, and developers won't know it until they hit a confusing error that has nothing to do with their code.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Is retry/rate limit handling documented or built in?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developers shouldn't have to implement exponential backoff from scratch. Either provide it in the SDK or document the pattern explicitly. Doing both is best.&lt;/p&gt;
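&lt;p&gt;When the SDK doesn't ship it, the pattern I'd document looks roughly like this: capped exponential backoff with full jitter. This is a generic sketch, not any particular SDK's API; in real code you'd catch the SDK's rate-limit exception instead of &lt;code&gt;RuntimeError&lt;/code&gt;:&lt;/p&gt;

```python
import random
import time

def call_with_backoff(fn, max_attempts: int = 5, base: float = 0.5, cap: float = 30.0):
    """Retry fn() on failure, doubling the delay each attempt and adding jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError:  # stand-in for the SDK's rate-limit error
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            delay = min(cap, base * 2 ** attempt)
            time.sleep(delay * random.random())  # full jitter spreads retries out
```

&lt;p&gt;The jitter matters: without it, every client that got rate-limited at the same moment retries at the same moment, and the stampede repeats.&lt;/p&gt;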




&lt;h2&gt;
  
  
  Error experience
&lt;/h2&gt;

&lt;p&gt;I give this its own phase because I feel strongly about it. Error messages are UX. They're the moment where a developer is most frustrated, most likely to give up, and most in need of help. How you write your errors tells developers exactly how much you thought about them.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Missing required field, what does the error say?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I remove a required parameter and send the request. Does the error say which field is missing? Or does it say &lt;code&gt;invalid request&lt;/code&gt;? One of these is a 10-second fix. The other is a debugging session.&lt;/p&gt;
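&lt;p&gt;The 10-second version is an error that names the field. A sketch of what I mean on the server side; the required field names here are made up for illustration:&lt;/p&gt;

```python
REQUIRED = ("amount", "currency", "recipient_id")  # hypothetical required fields

def validate(payload: dict):
    """Return a helpful error body naming every missing field, or None if valid."""
    missing = [field for field in REQUIRED if field not in payload]
    if missing:
        return {
            "error": "missing_required_field",
            "message": f"Missing required field(s): {', '.join(missing)}",
            "fields": missing,  # machine-readable, so clients can highlight inputs
        }
    return None
```

&lt;p&gt;Note it reports &lt;em&gt;every&lt;/em&gt; missing field at once, so the developer fixes the request in one pass instead of playing whack-a-mole.&lt;/p&gt;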

&lt;p&gt;✅ &lt;strong&gt;Rate limit hit, what does the error say?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I hammer the endpoint until I get a 429. Does the response include &lt;code&gt;retry-after&lt;/code&gt;? Does it explain what the limit is? Does it tell me how to request higher limits if I need them? Or does it just tell me I've been rate limited and leave me to figure out the rest?&lt;/p&gt;
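&lt;p&gt;On the client side, the least a 429 should enable is reading that header and waiting. One caveat in this sketch: &lt;code&gt;Retry-After&lt;/code&gt; may also arrive as an HTTP date (RFC 9110 allows both forms); this only handles the delta-seconds form:&lt;/p&gt;

```python
def seconds_to_wait(headers: dict, default: float = 1.0) -> float:
    """Read a delta-seconds Retry-After header; fall back to a default if absent."""
    value = headers.get("Retry-After") or headers.get("retry-after")
    try:
        return max(0.0, float(value))
    except (TypeError, ValueError):
        return default  # header missing or in HTTP-date form: caller picks a delay
```

&lt;p&gt;If your 429 responses don't carry this header at all, that's a finding in itself.&lt;/p&gt;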

&lt;p&gt;✅ &lt;strong&gt;Do errors link to the relevant docs?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the one I push hardest for when I'm in a position to make the call. An error that links to its own documentation is worth ten well-written help articles, because it finds the developer at exactly the moment they need it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Last but not least: time to first value
&lt;/h2&gt;

&lt;p&gt;✅ &lt;strong&gt;Time the whole thing.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I time it. From landing on the homepage to a working API call. I don't skip steps, I don't use internal knowledge, I don't ask anyone for help. Whatever number I get, that's the number. &lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Count every gate.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Email verification, account approval, plan selection, sales contact requirement, waitlist. Every one of these is something I'd have to justify keeping if I had the authority to remove it. Some gates are necessary. Most aren't. Write them all down.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Does a sandbox exist?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The cognitive cost of setting up a local environment is real. A browser-based sandbox removes it entirely. If I can try the API before I write a line of code, my likelihood of continuing goes up significantly.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Is there a sample app to clone?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developers copy-paste their way into new technologies. That's not laziness, it's efficiency. A well-maintained sample repository that runs in five minutes is the highest-leverage thing a DevEx team can ship. Give developers something good to copy.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Is pricing visible without signing up?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I shouldn't have to create an account to understand what something costs. Hiding pricing creates a trust gap before I've even tried the product.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I do with the findings
&lt;/h2&gt;

&lt;p&gt;After this, you'll have a list of failures. Some will be obvious ("our error messages are useless"), some will be subtle ("the changelog is three months stale"), some will be structural ("auth is documented in four different places").&lt;/p&gt;

&lt;p&gt;Prioritise by the metric that matters most: &lt;strong&gt;time to first value.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Everything that sits between a developer and their first working API call is a critical fix. &lt;/p&gt;

&lt;p&gt;The audit is most useful when you do it with someone who hasn't worked on the product. Better still: watch an actual developer try your API for the first time, say nothing, and write down every moment they slow down or get confused. That's your entire roadmap.&lt;/p&gt;

&lt;p&gt;Best,&lt;br&gt;
Joe.&lt;br&gt;
&lt;em&gt;Confidence level: high. Hallucination probability: non-zero.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devex</category>
      <category>devrel</category>
      <category>developer</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Stop prompting Codex like ChatGPT</title>
      <dc:creator>Jyoti Bisht</dc:creator>
      <pubDate>Wed, 06 May 2026 09:52:27 +0000</pubDate>
      <link>https://forem.com/joeyouss/stop-prompting-codex-like-chatgpt-5m4</link>
      <guid>https://forem.com/joeyouss/stop-prompting-codex-like-chatgpt-5m4</guid>
      <description>&lt;p&gt;There's a pattern I see from almost every developer who tries &lt;a href="https://developers.openai.com/codex" rel="noopener noreferrer"&gt;Codex&lt;/a&gt; for the first time and walks away underwhelmed.&lt;br&gt;
They open it up, type something like:&lt;/p&gt;

&lt;p&gt;"Hey, can you help me refactor my authentication module to use the new JWT library and also update the tests and maybe clean up the error handling while you're at it?"&lt;/p&gt;

&lt;p&gt;Codex starts. It works for a while. Then it returns something that's fine, but not complete. They tweak it in chat. It gets better, then breaks something else. Twenty minutes later they're in a back-and-forth loop that feels slower than just doing it themselves.&lt;/p&gt;

&lt;p&gt;Here's the thing: &lt;strong&gt;that's not a Codex problem. That's a ChatGPT habit applied to the wrong tool.&lt;/strong&gt; (YES, habits compound)&lt;/p&gt;

&lt;p&gt;ChatGPT is a conversation. You talk, it responds, you refine. The back-and-forth is the interface. Ambiguity is fine — you'll clarify it next message.&lt;br&gt;
Codex is an agent. It reads your task, opens your repo, writes code, runs commands, checks the output, and commits a result. It's not waiting for your next message. It's working.&lt;/p&gt;

&lt;p&gt;The big mindset shift is this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ChatGPT is great for conversation. Codex is great for work.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can brainstorm with Codex. You can ask questions. You can explore options. &lt;em&gt;But Codex becomes much more useful when you stop prompting it like a chatbot and start assigning it clear, bounded tasks.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I personally like giving Codex atomic tasks. What's an atomic task? &lt;strong&gt;An atomic task&lt;/strong&gt; has three properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;One clear outcome&lt;/strong&gt;: you know exactly what "done" looks like&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A bounded scope&lt;/strong&gt;: it touches one module, one feature, one concern&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A verifiable result&lt;/strong&gt;: there's a test, a lint check, or a visible output that confirms success&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it. If you can't describe the done state in one sentence, the task is too big, and your odds of frustration down the line go up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Atomic does not mean tiny. It means bounded.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Anatomy Of A Good Codex Prompt
&lt;/h3&gt;

&lt;p&gt;A good Codex prompt usually has four parts:&lt;/p&gt;

&lt;p&gt;Context: What is happening now?&lt;br&gt;
Goal: What should be true after the change?&lt;br&gt;
Constraints: What should Codex preserve or avoid?&lt;br&gt;
Verification: How should Codex check the work?&lt;/p&gt;
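&lt;p&gt;If it helps to make the four parts mechanical, here's a throwaway helper along the lines of what I'd use to keep myself honest. This is entirely my own convention, nothing Codex itself requires:&lt;/p&gt;

```python
def codex_task(context: str, goal: str, constraints: str, verification: str) -> str:
    """Assemble the four parts into one task; refuse to emit a prompt with gaps."""
    parts = {
        "Context": context,
        "Goal": goal,
        "Constraints": constraints,
        "Verification": verification,
    }
    missing = [name for name, text in parts.items() if not text.strip()]
    if missing:
        raise ValueError(f"fill in: {', '.join(missing)}")
    return "\n".join(f"{name}: {text.strip()}" for name, text in parts.items())
```

&lt;p&gt;The useful bit is the failure mode: if you can't fill in all four slots, you haven't finished thinking about the task yet.&lt;/p&gt;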

&lt;p&gt;For example:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The export button currently downloads an empty CSV when filters are applied. Fix the export logic so it respects active filters. Keep the existing CSV column order unchanged. Add or update tests for filtered exports, then run the relevant test command.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This prompt is short, but it gives Codex almost everything it needs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The context is the broken export behavior.
&lt;/li&gt;
&lt;li&gt;The goal is filtered CSV export.
&lt;/li&gt;
&lt;li&gt;The constraint is preserving the existing format.
&lt;/li&gt;
&lt;li&gt;The verification is tests plus the relevant command.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is a real task.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;ChatGPT-style prompt&lt;/th&gt;
&lt;th&gt;Codex-style task&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Add rate limiting to the API. We're getting hammered and need to protect the endpoints.&lt;/td&gt;
&lt;td&gt;Add per-IP rate limiting to the /api/search endpoint using the existing express-rate-limit package (already in package.json). Limit: 30 requests per minute. On exceed: return 429 with { error: 'rate_limit_exceeded' }. Add a test in tests/search.test.ts that verifies the 429 response on the 31st request.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;We need to migrate from the old OpenAI SDK to the new Responses API. Can you update the codebase?&lt;/td&gt;
&lt;td&gt;In src/services/completion.ts only, migrate the getCompletion() function from openai.createCompletion() (legacy) to openai.responses.create() (Responses API). Map the parameters according to this table: [table]. Keep the existing function signature so callers don't change. Run the existing unit tests for this file after the change.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Not everything needs precision
&lt;/h3&gt;

&lt;p&gt;Not every Codex task needs military precision. For exploratory work like "build a prototype of X" or "show me what this codebase does", broader prompts are fine, because you're not expecting a finished piece you can deploy right away.&lt;/p&gt;

&lt;p&gt;The rule of thumb: precision scales with consequence.&lt;/p&gt;

&lt;p&gt;But the real unlock with Codex isn't better prompting; it's better decomposition. Before you open Codex, spend two minutes answering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is the smallest independently verifiable unit of this work?&lt;/li&gt;
&lt;li&gt;What files does it touch?&lt;/li&gt;
&lt;li&gt;What does done look like?&lt;/li&gt;
&lt;li&gt;What's out of scope?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're migrating an SDK across 40 files, that's not one task. It's 40 tasks, or at least 8 grouped by module, each with its own checkpoint. Run them in parallel. Review each one. Merge when clean.&lt;/p&gt;
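&lt;p&gt;The grouping itself is mechanical once you've listed the files. A sketch (the group size is a judgment call, not a rule; grouping by module beats grouping by count when you can):&lt;/p&gt;

```python
def group_tasks(files: list[str], group_size: int = 5) -> list[list[str]]:
    """Split a long migration into bounded chunks, one Codex task per chunk."""
    return [files[i:i + group_size] for i in range(0, len(files), group_size)]
```

&lt;p&gt;Forty files with a group size of five gives you eight tasks, each small enough to review in one sitting.&lt;/p&gt;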

&lt;p&gt;This is also why &lt;a href="https://developers.openai.com/codex/skills" rel="noopener noreferrer"&gt;Codex's Skills&lt;/a&gt; feature exists. Once you've figured out the right task shape for something you do repeatedly, you save that pattern as a skill and invoke it by name next time. The decomposition work pays forward.&lt;/p&gt;

&lt;p&gt;Want to go deeper? Check out the &lt;a href="https://developers.openai.com/cookbook/examples/gpt-5/codex_prompting_guide" rel="noopener noreferrer"&gt;Codex Prompting Guide&lt;/a&gt; and the &lt;a href="https://developers.openai.com/codex/guides/agents-md" rel="noopener noreferrer"&gt;AGENTS.md docs&lt;/a&gt;; both are worth reading before you set up your next project.&lt;/p&gt;

&lt;p&gt;As for Codex Skills, I will cover that in my next blog.&lt;/p&gt;

&lt;p&gt;Regards,&lt;br&gt;
Joe.&lt;br&gt;
&lt;em&gt;I wrote this. An agent would’ve also fixed the bugs.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>openai</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
