<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kaustubh Trivedi</title>
    <description>The latest articles on Forem by Kaustubh Trivedi (@kaustubhtrivedi).</description>
    <link>https://forem.com/kaustubhtrivedi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F794130%2F1e8aded1-4777-4979-8afa-a31e9c182f0d.png</url>
      <title>Forem: Kaustubh Trivedi</title>
      <link>https://forem.com/kaustubhtrivedi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kaustubhtrivedi"/>
    <language>en</language>
    <item>
      <title>Why I forked career-ops and shut down the tool I built</title>
      <dc:creator>Kaustubh Trivedi</dc:creator>
      <pubDate>Thu, 23 Apr 2026 14:00:57 +0000</pubDate>
      <link>https://forem.com/kaustubhtrivedi/why-i-forked-career-ops-and-shut-down-the-tool-i-built-m3m</link>
      <guid>https://forem.com/kaustubhtrivedi/why-i-forked-career-ops-and-shut-down-the-tool-i-built-m3m</guid>
      <description>&lt;p&gt;I spent months building Rendure — a full-stack agentic resume tailoring pipeline. Next.js frontend, FastAPI backend, Celery workers, LangChain, LangFuse for observability, RenderCV for PDF generation, deployed at &lt;a href="https://rendure.kaustubhsstuff.com" rel="noopener noreferrer"&gt;rendure.kaustubhsstuff.com&lt;/a&gt;. It worked. I used it. I'm shutting it down.&lt;/p&gt;

&lt;p&gt;Last month I found &lt;a href="https://github.com/santifer/career-ops" rel="noopener noreferrer"&gt;career-ops&lt;/a&gt; by Santiago Ferreira — a Claude Code-native job search system with a Go-based TUI dashboard, portal scanners, batch evaluation, and an interview story bank. It does everything Rendure did, plus most of the job search workflow Rendure never touched. I forked it, plugged my own pipeline pieces into the parts that didn't fit me, and moved on.&lt;/p&gt;

&lt;p&gt;This is a post about when to stop building your own thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Rendure was for
&lt;/h2&gt;

&lt;p&gt;I built Rendure because I was tired of manually tailoring resumes for every application. The QA step was the part I cared about most: I wanted the generated resume to be tested against the job description before I ever looked at it. My pipeline runs up to four iterations — generate, compare against the JD, identify gaps, regenerate — and only surfaces output that passes a bar I set.&lt;/p&gt;
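&lt;p&gt;That loop is small enough to sketch. Here is a minimal version, assuming hypothetical &lt;code&gt;generate_resume&lt;/code&gt; and &lt;code&gt;find_gaps&lt;/code&gt; helpers standing in for the LLM calls (the real pipeline uses LangChain and renders through RenderCV):&lt;/p&gt;

```python
# Minimal sketch of the iterative QA loop: generate, compare to the
# job description, regenerate until it passes or hits the cap.
# generate_resume() and find_gaps() are hypothetical stand-ins.
MAX_ITERATIONS = 4

def tailor_resume(job_description, base_resume, generate_resume, find_gaps):
    feedback = []
    for attempt in range(1, MAX_ITERATIONS + 1):
        draft = generate_resume(base_resume, job_description, feedback)
        gaps = find_gaps(draft, job_description)
        if not gaps:  # passes the bar: no gaps against the JD
            return draft, attempt
        feedback = gaps  # feed the gap list back into regeneration
    return draft, MAX_ITERATIONS  # cap reached; surface best effort
```

&lt;p&gt;The design choice that matters is feeding the gap list back into the next generation call, so each iteration is a targeted revision rather than a blind retry.&lt;/p&gt;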

&lt;p&gt;The rest of Rendure was scaffolding around that QA loop. A web UI because "tools should have UIs." A Celery queue because "jobs should be async." LangFuse because "observability matters." Every piece defensible in isolation. None of it actually moved the needle on whether I got interviews.&lt;/p&gt;

&lt;h2&gt;
  
  
  What career-ops got right that I didn't
&lt;/h2&gt;

&lt;p&gt;Career-ops is scoped differently. Instead of a hosted web app, it's a local tool that turns Claude Code into a command center — you paste a job URL and it evaluates fit, generates a tailored PDF, and tracks the application. The whole thing is markdown files, YAML config, and &lt;code&gt;.mjs&lt;/code&gt; scripts the agent orchestrates. No server to run. No queue to maintain. No frontend to style.&lt;/p&gt;

&lt;p&gt;More importantly, it handles the parts of the job search I kept telling myself I'd "add later":&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Portal scanning across Greenhouse, Ashby, Lever, and company pages&lt;/li&gt;
&lt;li&gt;Batch evaluation of dozens of listings in parallel&lt;/li&gt;
&lt;li&gt;An interview story bank that accumulates STAR responses across evaluations&lt;/li&gt;
&lt;li&gt;A TUI dashboard for browsing the pipeline without opening a file manager&lt;/li&gt;
&lt;li&gt;A scoring system that tells me when a role isn't worth applying to&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rendure would have taken me another three months to reach feature parity on any single one of those. Career-ops ships them all.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I replaced in the fork
&lt;/h2&gt;

&lt;p&gt;Two things didn't fit, so I swapped them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The resume generation had no QA loop.&lt;/strong&gt; Career-ops generates a tailored PDF in one shot — prompt the model, render the result. For my own search I want the four-iteration loop from Rendure: generate, compare to the JD, find gaps, regenerate, stop when it passes or hits the iteration cap. I pulled that pipeline out of Rendure and wired it into career-ops's resume mode. The agent calls my QA loop instead of the one-shot generator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The PDF template wasn't ATS-friendly.&lt;/strong&gt; Career-ops uses an HTML template rendered via Playwright with Space Grotesk and DM Sans — it looks great. It also has visual elements that trip some ATS parsers and a font stack that won't embed cleanly into every system. I'd already standardized on RenderCV (LaTeX-based, plain text output, ATS-tested) for Rendure, so I replaced the HTML + Playwright path with a RenderCV YAML pipeline. Same input (tailored resume content), different renderer.&lt;/p&gt;
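&lt;p&gt;For context, RenderCV's input is a YAML file describing the resume, which it renders to LaTeX and then PDF. A minimal fragment looks roughly like this — field names follow RenderCV's schema as I understand it, and all values are placeholders, so treat this as a sketch rather than the fork's exact mapping:&lt;/p&gt;

```yaml
# Hypothetical minimal RenderCV input; values are placeholders.
cv:
  name: Jane Doe
  email: jane@example.com
  sections:
    experience:
      - company: Example Corp
        position: Software Engineer
        start_date: 2023-06
        end_date: present
        highlights:
          - Built an internal resume-tailoring pipeline.
design:
  theme: classic
```

&lt;p&gt;Because the input is plain YAML, the tailored content coming out of the QA loop can be written into this structure programmatically before rendering.&lt;/p&gt;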

&lt;p&gt;Everything else — the scoring modes, the portal scanner, the batch runner, the dashboard — I kept. The fork is small and targeted.&lt;/p&gt;

&lt;h2&gt;
  
  
  The decision I kept avoiding
&lt;/h2&gt;

&lt;p&gt;Rendure was a learning vehicle. I learned a lot building it: async orchestration, LLM observability, full-stack deployment, the real costs of running a service you don't need. That's valuable, and the code will stay on GitHub with a README that explains what it was and why I moved on.&lt;/p&gt;

&lt;p&gt;But maintaining two tools that solve the same problem while I'm actively job searching is a tax on attention I can't afford to pay. Every bug in Rendure is time not spent applying. Every "I'll migrate that feature over to career-ops eventually" is a feature I don't have today.&lt;/p&gt;

&lt;p&gt;The honest reframe: Rendure was my tool when nothing better existed for my workflow. Career-ops is better for my workflow now. Using it instead of my own thing isn't a failure of ambition — it's the right call.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd tell anyone building their own version of something that already exists
&lt;/h2&gt;

&lt;p&gt;Search harder before you build. I didn't find career-ops until I'd already built most of Rendure, and that's on me. "There's no tool that does this" is a claim I should have pressure-tested more before I wrote the first line of Next.js.&lt;/p&gt;

&lt;p&gt;Build the thing that's actually yours. My QA loop and ATS-tested PDF pipeline were genuinely mine — opinionated decisions that reflect how I want my resumes to work. That's the part worth keeping. The Celery queue and the React UI were generic infrastructure I could have skipped.&lt;/p&gt;

&lt;p&gt;Know when to swap vehicles. You can hold onto the parts of your work that matter without holding onto the whole system you built around them. The fork is the migration path: keep what's opinionated, drop what was scaffolding, run on someone else's foundation.&lt;/p&gt;

&lt;p&gt;Rendure helped me get better at writing resumes. Career-ops helps me actually apply to jobs. That's the whole story.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>career</category>
    </item>
    <item>
      <title>🚀 Introducing LeetIndex: A Clean, Fast Way to Explore Coding Problems</title>
      <dc:creator>Kaustubh Trivedi</dc:creator>
      <pubDate>Mon, 17 Nov 2025 21:11:48 +0000</pubDate>
      <link>https://forem.com/kaustubhtrivedi/introducing-leetindex-a-clean-fast-way-to-explore-coding-problems-20gg</link>
      <guid>https://forem.com/kaustubhtrivedi/introducing-leetindex-a-clean-fast-way-to-explore-coding-problems-20gg</guid>
      <description>&lt;p&gt;Preparing for technical interviews can feel overwhelming. Between hundreds of problem lists, inconsistent tags, and endless scrolling, it is often hard to know where to start or what to practice next.&lt;/p&gt;

&lt;p&gt;While going through this myself recently, I realized that what I really needed was not another huge problem bank, but a simple tool to explore questions more efficiently.&lt;/p&gt;

&lt;p&gt;So I built LeetIndex.&lt;/p&gt;




&lt;h2&gt;
  
  
  🎯 What is LeetIndex?
&lt;/h2&gt;


&lt;p&gt;LeetIndex is a lightweight, fast tool that helps you browse and explore coding problems more easily.&lt;/p&gt;

&lt;p&gt;It offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search across thousands of coding questions&lt;/li&gt;
&lt;li&gt;Filters for companies (a wide range, not just FAANG)&lt;/li&gt;
&lt;li&gt;Difficulty filters (Easy, Medium, Hard)&lt;/li&gt;
&lt;li&gt;Topic tags and problem patterns&lt;/li&gt;
&lt;li&gt;Quality scores that highlight commonly recommended problems&lt;/li&gt;
&lt;li&gt;A clean, distraction-free interface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No ads. No login. Just a smoother way to discover useful practice material.&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Why I Built It
&lt;/h2&gt;

&lt;p&gt;I found myself constantly jumping between problem lists, GitHub repos, and spreadsheets to identify good practice questions.&lt;br&gt;
Most tools felt either too heavy or too limited.&lt;/p&gt;

&lt;p&gt;LeetIndex is not meant to replace structured preparation or guarantee interview results. It simply makes it easier to find the kinds of problems that are widely recommended when studying.&lt;/p&gt;

&lt;p&gt;Think of it as a helper that reduces friction in your practice routine.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚡ How It Works
&lt;/h2&gt;

&lt;p&gt;LeetIndex organizes problems by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Company tags (many companies supported)&lt;/li&gt;
&lt;li&gt;Difficulty&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I am also working on adding topic categories like dynamic programming, graphs, trees, prefix sums, intervals, and more.&lt;/p&gt;

&lt;p&gt;Each problem includes helpful metadata and a general quality score based on community usefulness.&lt;/p&gt;

&lt;p&gt;The interface is intentionally minimal and fast so you can focus on practicing instead of navigating a cluttered UI.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧪 Try It Out
&lt;/h2&gt;

&lt;p&gt;You can explore it here:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://leetindex.kaustubhsstuff.com/" rel="noopener noreferrer"&gt;https://leetindex.kaustubhsstuff.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is completely free and takes only a few seconds to get started.&lt;/p&gt;




&lt;h2&gt;
  
  
  🙌 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Interview prep tools should not overpromise. They should support your work and make the process smoother.&lt;br&gt;
LeetIndex was built with that goal in mind, and I hope it helps anyone looking for a cleaner, more organized way to practice coding problems.&lt;/p&gt;

&lt;p&gt;If you try it out, feel free to share feedback or ideas.&lt;/p&gt;

</description>
      <category>interview</category>
      <category>tooling</category>
      <category>leetcode</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Gemma3:4b better than gpt-4o?</title>
      <dc:creator>Kaustubh Trivedi</dc:creator>
      <pubDate>Tue, 29 Jul 2025 16:36:17 +0000</pubDate>
      <link>https://forem.com/kaustubhtrivedi/gemma34b-better-than-gpt-4o-daj</link>
      <guid>https://forem.com/kaustubhtrivedi/gemma34b-better-than-gpt-4o-daj</guid>
      <description>&lt;h2&gt;
  
  
  Orchestrating Minds: A Local LLM's Surprising Victory and the Quest for AI Intelligence
&lt;/h2&gt;

&lt;p&gt;The world of Large Language Models (LLMs) has exploded, captivating our imaginations and transforming how we interact with technology. As I delve deeper into the fascinating realm of AI Agent development through a Udemy course, I've had the opportunity to witness some truly intriguing dynamics when orchestrating multiple LLMs in a single workflow. Today, I want to share a particular experiment that yielded some surprising, and thought-provoking, results.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Challenge: Crafting the Ultimate Intelligence Test
&lt;/h3&gt;

&lt;p&gt;Our task in the course was to design a system where one LLM would generate a challenging, nuanced question, which would then be posed to a selection of other LLMs to evaluate their "intelligence." For this crucial first step, I turned to &lt;strong&gt;ChatGPT 4o-mini&lt;/strong&gt;.&lt;br&gt;
My prompt was straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Please come up with a challenging, nuanced question that I can ask a number of LLMs to evaluate their intelligence.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ChatGPT 4o-mini, ever the eloquent one, delivered a question that truly hit the mark:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;If you had to design a system that balances ethical considerations with technological advancement in artificial intelligence, what core principles would you prioritize, and how would you implement them in practice?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This question is a fantastic blend of abstract ethical reasoning and practical application, perfect for probing the depth of an LLM's understanding and argumentative capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Competitors: Cloud vs. Local
&lt;/h3&gt;

&lt;p&gt;With the question in hand, it was time to put some LLMs to the test. I posed this complex query to two distinct models:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;GPT-4o-mini: A fast, lightweight cloud model from OpenAI's commercial lineup.
&lt;/li&gt;
&lt;li&gt;Gemma 3 4B: A more compact, open-source model that I was running locally on my modest setup – a GTX 1650 with 4GB VRAM and 12GB RAM.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While the full responses from both models were quite extensive and can't be included here, they provided fascinating insights into how each approached the multifaceted problem.&lt;/p&gt;
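&lt;p&gt;Querying both models is simpler than it sounds, because Ollama exposes an OpenAI-compatible endpoint on its default port. A stdlib-only sketch — the model names are the ones I used, but treat the wiring as illustrative:&lt;/p&gt;

```python
# Query two OpenAI-compatible chat endpoints with the same question:
# the OpenAI API for gpt-4o-mini, and Ollama's local compatibility
# endpoint (default http://localhost:11434/v1) for gemma3:4b.
import json
import urllib.request

def chat_payload(model, question):
    # Identical single-turn prompt for every competitor, so the
    # comparison is fair.
    return {"model": model,
            "messages": [{"role": "user", "content": question}]}

def ask(base_url, api_key, model, question):
    request = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(chat_payload(model, question)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + api_key},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]
```

&lt;p&gt;Calling &lt;code&gt;ask("https://api.openai.com/v1", key, "gpt-4o-mini", question)&lt;/code&gt; hits the cloud model, while &lt;code&gt;ask("http://localhost:11434/v1", "ollama", "gemma3:4b", question)&lt;/code&gt; hits the local one; Ollama ignores the API key.&lt;/p&gt;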

&lt;h3&gt;
  
  
  The Judge: An LLM Evaluating LLMs
&lt;/h3&gt;

&lt;p&gt;The next step was to have an impartial judge evaluate the responses. This time I tasked o3-mini as the arbiter of intelligence, providing it with a specific JSON-formatted instruction to ensure a clear, structured output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;judge = f"""You are judging a competition between {len(competitors)} competitors.
Each model has been given this question:

{question}

Your job is to evaluate each response for clarity and strength of argument, and rank them in order of best to worst.
Respond with JSON, and only JSON, with the following format:
{{"results": ["best competitor number", "second best competitor number", "third best competitor number", ...]}}

Here are the responses from each competitor:

{together}

Now respond with the JSON with the ranked order of the competitors, nothing else. Do not include markdown formatting or code blocks.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
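&lt;p&gt;The plumbing around that prompt is small: number each response into the &lt;code&gt;together&lt;/code&gt; string, send the judge prompt, and parse the JSON ranking back into model names. A sketch of those two helpers (the names are mine, not from the course code):&lt;/p&gt;

```python
import json

def build_together(answers):
    # Number each response so the judge can rank by competitor number.
    parts = []
    for i, answer in enumerate(answers, start=1):
        parts.append(f"# Response from competitor {i}\n\n{answer}\n\n")
    return "".join(parts)

def parse_ranking(judge_reply, competitors):
    # The judge returns {"results": ["1", "2", ...]} with competitor
    # numbers, best first; map them back to model names.
    ranks = json.loads(judge_reply)["results"]
    return [competitors[int(r) - 1] for r in ranks]
```

&lt;p&gt;With &lt;code&gt;competitors = ["gpt-4o-mini", "gemma3:4b"]&lt;/code&gt;, a judge reply of &lt;code&gt;{"results": ["2", "1"]}&lt;/code&gt; decodes to Gemma first — which is exactly what happened next.&lt;/p&gt;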



&lt;h3&gt;
  
  
  The Verdict: A Local Upset!
&lt;/h3&gt;

&lt;p&gt;And then came the moment of truth. The judge, o3-mini, delivered its verdict. And to my genuine surprise, the results were:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Rank 1: gemma3:4b
Rank 2: gpt-4o-mini
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yes, you read that right! &lt;strong&gt;Gemma 3 4B, running on my humble GTX 1650, was ranked higher than the cloud-powered GPT-4o-mini!&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Big Question: Is Local LLM Worth It?
&lt;/h3&gt;

&lt;p&gt;This outcome sparked a significant question in my mind, one that I believe many in the AI community are pondering: &lt;strong&gt;Is it truly worth the effort to run a local LLM, even if not for deep application integration, but simply for general chatting and exploration?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My experiment, albeit a small one, suggests a resounding "yes." The ability to run a model like Gemma 3 4B locally, with limited resources, and have it outperform a more advanced cloud model in a nuanced evaluation, is incredibly compelling. It hints at a future where powerful AI isn't solely confined to massive data centers but can thrive on personal hardware, offering greater privacy, control, and potentially, unique performance characteristics.&lt;/p&gt;

&lt;p&gt;This experience has certainly deepened my appreciation for the capabilities of locally deployable LLMs and reinforced the idea that "intelligence" in these models isn't always about sheer size or computational might. Sometimes, it's about the subtle nuances, the clarity of argument, and the surprising depth that even a "smaller" model can achieve.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>chatgpt</category>
      <category>google</category>
      <category>openai</category>
    </item>
    <item>
      <title>What are some beginner level ReactJS questions to ask for interview?</title>
      <dc:creator>Kaustubh Trivedi</dc:creator>
      <pubDate>Fri, 20 May 2022 12:37:00 +0000</pubDate>
      <link>https://forem.com/kaustubhtrivedi/what-are-some-beginner-level-reactjs-questions-to-ask-2137</link>
      <guid>https://forem.com/kaustubhtrivedi/what-are-some-beginner-level-reactjs-questions-to-ask-2137</guid>
      <description></description>
      <category>beginners</category>
      <category>react</category>
      <category>discuss</category>
      <category>interview</category>
    </item>
  </channel>
</rss>
