<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kaustubh Trivedi</title>
    <description>The latest articles on Forem by Kaustubh Trivedi (@kaustubhtrivedi).</description>
    <link>https://forem.com/kaustubhtrivedi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F794130%2F1e8aded1-4777-4979-8afa-a31e9c182f0d.png</url>
      <title>Forem: Kaustubh Trivedi</title>
      <link>https://forem.com/kaustubhtrivedi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kaustubhtrivedi"/>
    <language>en</language>
    <item>
      <title>🚀 Introducing LeetIndex: A Clean, Fast Way to Explore Coding Problems</title>
      <dc:creator>Kaustubh Trivedi</dc:creator>
      <pubDate>Mon, 17 Nov 2025 21:11:48 +0000</pubDate>
      <link>https://forem.com/kaustubhtrivedi/introducing-leetindex-a-clean-fast-way-to-explore-coding-problems-20gg</link>
      <guid>https://forem.com/kaustubhtrivedi/introducing-leetindex-a-clean-fast-way-to-explore-coding-problems-20gg</guid>
      <description>&lt;p&gt;Preparing for technical interviews can feel overwhelming. Between hundreds of problem lists, inconsistent tags, and endless scrolling, it is often hard to know where to start or what to practice next.&lt;/p&gt;

&lt;p&gt;While going through this myself recently, I realized that what I really needed was not another huge problem bank, but a simple tool to explore questions more efficiently.&lt;/p&gt;

&lt;p&gt;So I built LeetIndex.&lt;/p&gt;




&lt;h3&gt;
  🎯 What is LeetIndex?
&lt;/h3&gt;

&lt;p&gt;LeetIndex is a lightweight, fast tool that helps you browse and explore coding problems more easily.&lt;/p&gt;

&lt;p&gt;It offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search across thousands of coding questions&lt;/li&gt;
&lt;li&gt;Filters for companies (a wide range, not just FAANG)&lt;/li&gt;
&lt;li&gt;Difficulty filters (Easy, Medium, Hard)&lt;/li&gt;
&lt;li&gt;Topic tags and problem patterns&lt;/li&gt;
&lt;li&gt;Quality scores that highlight commonly recommended problems&lt;/li&gt;
&lt;li&gt;A clean, distraction-free interface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No ads. No login. Just a smoother way to discover useful practice material.&lt;/p&gt;




&lt;h3&gt;
  💡 Why I Built It
&lt;/h3&gt;

&lt;p&gt;I found myself constantly jumping between problem lists, GitHub repos, and spreadsheets to identify good practice questions.&lt;br&gt;
Most tools felt either too heavy or too limited.&lt;/p&gt;

&lt;p&gt;LeetIndex is not meant to replace structured preparation or guarantee interview results. It simply makes it easier to find the kinds of problems that are widely recommended when studying.&lt;/p&gt;

&lt;p&gt;Think of it as a helper that reduces friction in your practice routine.&lt;/p&gt;




&lt;h3&gt;
  ⚡ How It Works
&lt;/h3&gt;

&lt;p&gt;LeetIndex organizes problems by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Company tags (many companies supported)&lt;/li&gt;
&lt;li&gt;Difficulty&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I am also working on adding topic categories such as dynamic programming, graphs, trees, prefix sums, and intervals.&lt;/p&gt;

&lt;p&gt;Each problem includes helpful metadata and a general quality score based on community usefulness.&lt;/p&gt;

&lt;p&gt;The interface is intentionally minimal and fast so you can focus on practicing instead of navigating a cluttered UI.&lt;/p&gt;
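&lt;p&gt;To make the browsing model concrete, here is a minimal, hypothetical Python sketch of the kind of company/difficulty filtering and score-based ordering described above. The record fields (&lt;code&gt;companies&lt;/code&gt;, &lt;code&gt;difficulty&lt;/code&gt;, &lt;code&gt;score&lt;/code&gt;) and the sample problems are illustrative assumptions, not LeetIndex's actual data model.&lt;/p&gt;

```python
# Hypothetical sketch of the kind of filtering LeetIndex performs.
# The records and field names are illustrative, not the real data model.
problems = [
    {"title": "Two Sum", "difficulty": "Easy", "companies": ["Google", "Amazon"], "score": 0.95},
    {"title": "LRU Cache", "difficulty": "Medium", "companies": ["Amazon", "Uber"], "score": 0.90},
    {"title": "Word Ladder", "difficulty": "Hard", "companies": ["Google"], "score": 0.70},
]

def filter_problems(problems, company=None, difficulty=None):
    """Return problems matching the given company and difficulty, best-scored first."""
    matches = [
        p for p in problems
        if (company is None or company in p["companies"])
        and (difficulty is None or p["difficulty"] == difficulty)
    ]
    return sorted(matches, key=lambda p: p["score"], reverse=True)

print([p["title"] for p in filter_problems(problems, company="Amazon")])
# → ['Two Sum', 'LRU Cache']
```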




&lt;h3&gt;
  🧪 Try It Out
&lt;/h3&gt;

&lt;p&gt;You can explore it here:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://leetindex.kaustubhsstuff.com/" rel="noopener noreferrer"&gt;https://leetindex.kaustubhsstuff.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is completely free and takes only a few seconds to get started.&lt;/p&gt;




&lt;h3&gt;
  🙌 Final Thoughts
&lt;/h3&gt;

&lt;p&gt;Interview prep tools should not overpromise. They should support your work and make the process smoother.&lt;br&gt;
LeetIndex was built with that goal in mind, and I hope it helps anyone looking for a cleaner, more organized way to practice coding problems.&lt;/p&gt;

&lt;p&gt;If you try it out, feel free to share feedback or ideas.&lt;/p&gt;

</description>
      <category>interview</category>
      <category>tooling</category>
      <category>leetcode</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Gemma3:4b better than gpt-4o?</title>
      <dc:creator>Kaustubh Trivedi</dc:creator>
      <pubDate>Tue, 29 Jul 2025 16:36:17 +0000</pubDate>
      <link>https://forem.com/kaustubhtrivedi/gemma34b-better-than-gpt-4o-daj</link>
      <guid>https://forem.com/kaustubhtrivedi/gemma34b-better-than-gpt-4o-daj</guid>
      <description>&lt;h2&gt;
  
  
  Orchestrating Minds: A Local LLM's Surprising Victory and the Quest for AI Intelligence
&lt;/h2&gt;

&lt;p&gt;The world of Large Language Models (LLMs) has exploded, captivating our imaginations and transforming how we interact with technology. As I delve deeper into the fascinating realm of AI Agent development through a Udemy course, I've had the opportunity to witness some truly intriguing dynamics when orchestrating multiple LLMs in a single workflow. Today, I want to share a particular experiment that yielded some surprising, and thought-provoking, results.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Challenge: Crafting the Ultimate Intelligence Test
&lt;/h3&gt;

&lt;p&gt;Our task in the course was to design a system where one LLM would generate a challenging, nuanced question, which would then be posed to a selection of other LLMs to evaluate their "intelligence." For this crucial first step, I turned to &lt;strong&gt;GPT-4o-mini&lt;/strong&gt;.&lt;br&gt;
My prompt was straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Please come up with a challenging, nuanced question that I can ask a number of LLMs to evaluate their intelligence.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GPT-4o-mini, ever the eloquent one, delivered a question that truly hit the mark:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;If you had to design a system that balances ethical considerations with technological advancement in artificial intelligence, what core principles would you prioritize, and how would you implement them in practice?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This question is a fantastic blend of abstract ethical reasoning and practical application, perfect for probing the depth of an LLM's understanding and argumentative capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Competitors: Cloud vs. Local
&lt;/h3&gt;

&lt;p&gt;With the question in hand, it was time to put some LLMs to the test. I posed this complex query to two distinct models:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;GPT-4o-mini: A powerful, cloud-based model, representing the cutting edge of commercial AI.
&lt;/li&gt;
&lt;li&gt;Gemma 3 4B: A more compact, open-source model that I was running locally on my modest setup – a GTX 1650 with 4GB VRAM and 12GB RAM.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While the full responses from both models were quite extensive and can't be included here, they provided fascinating insights into how each approached the multifaceted problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Judge: An LLM Evaluating LLMs
&lt;/h3&gt;

&lt;p&gt;The next step was to have an impartial judge evaluate the responses. This time I tasked &lt;strong&gt;GPT-o3-mini&lt;/strong&gt; with being the arbiter of intelligence, providing it with a JSON-formatted instruction to ensure a clear, structured output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;judge = f"""You are judging a competition between {len(competitors)} competitors.
Each model has been given this question:

{question}

Your job is to evaluate each response for clarity and strength of argument, and rank them in order of best to worst.
Respond with JSON, and only JSON, with the following format:
{{"results": ["best competitor number", "second best competitor number", "third best competitor number", ...]}}

Here are the responses from each competitor:

{together}

Now respond with the JSON with the ranked order of the competitors, nothing else. Do not include markdown formatting or code blocks.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
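&lt;p&gt;For context, here is a minimal sketch of how the variables that prompt interpolates (&lt;code&gt;competitors&lt;/code&gt;, &lt;code&gt;question&lt;/code&gt;, &lt;code&gt;together&lt;/code&gt;) might be assembled. The answer strings are placeholders, not the models' actual responses, and the exact assembly in the course code may differ.&lt;/p&gt;

```python
# Illustrative assembly of the values the judge prompt interpolates.
# The answers below are placeholders, not the models' real responses.
question = (
    "If you had to design a system that balances ethical considerations "
    "with technological advancement in artificial intelligence, what core "
    "principles would you prioritize, and how would you implement them in practice?"
)
competitors = ["gpt-4o-mini", "gemma3:4b"]
answers = [
    "Placeholder response from gpt-4o-mini.",
    "Placeholder response from gemma3:4b.",
]

# Concatenate every response under a numbered header, as the prompt expects.
together = ""
for index, answer in enumerate(answers, start=1):
    together += f"# Response from competitor {index}\n\n{answer}\n\n"

judge = (
    f"You are judging a competition between {len(competitors)} competitors.\n"
    f"Each model has been given this question:\n\n{question}\n\n"
    f"Here are the responses from each competitor:\n\n{together}"
)
```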



&lt;h3&gt;
  
  
  The Verdict: A Local Upset!
&lt;/h3&gt;

&lt;p&gt;And then came the moment of truth. The judge, GPT-o3-mini, delivered its verdict, and to my genuine surprise, the results were:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Rank 1: gemma3:4b
Rank 2: gpt-4o-mini
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yes, you read that right! &lt;strong&gt;Gemma 3 4B, running on my humble GTX 1650, was ranked higher than the cloud-powered GPT-4o-mini!&lt;/strong&gt;&lt;/p&gt;
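&lt;p&gt;Since the judge was instructed to reply with bare JSON, turning its verdict into a readable ranking takes only a couple of lines. In this sketch, &lt;code&gt;judge_reply&lt;/code&gt; is a stand-in for the judge model's actual output, not a captured response.&lt;/p&gt;

```python
import json

# judge_reply stands in for the judge model's raw output; per the prompt,
# the list holds 1-based competitor numbers in best-to-worst order.
competitors = ["gpt-4o-mini", "gemma3:4b"]
judge_reply = '{"results": ["2", "1"]}'

ranking = json.loads(judge_reply)["results"]
for rank, number in enumerate(ranking, start=1):
    print(f"Rank {rank}: {competitors[int(number) - 1]}")
# Prints:
# Rank 1: gemma3:4b
# Rank 2: gpt-4o-mini
```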

&lt;h3&gt;
  
  
  The Big Question: Is Local LLM Worth It?
&lt;/h3&gt;

&lt;p&gt;This outcome sparked a significant question in my mind, one that I believe many in the AI community are pondering: &lt;strong&gt;Is it truly worth the effort to run a local LLM, even if not for deep application integration, but simply for general chatting and exploration?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My experiment, albeit a small one, suggests a resounding "yes." The ability to run a model like Gemma 3 4B locally, with limited resources, and have it outperform a more advanced cloud model in a nuanced evaluation, is incredibly compelling. It hints at a future where powerful AI isn't solely confined to massive data centers but can thrive on personal hardware, offering greater privacy, control, and potentially, unique performance characteristics.&lt;/p&gt;

&lt;p&gt;This experience has certainly deepened my appreciation for the capabilities of locally deployable LLMs and reinforced the idea that "intelligence" in these models isn't always about sheer size or computational might. Sometimes, it's about the subtle nuances, the clarity of argument, and the surprising depth that even a "smaller" model can achieve.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>chatgpt</category>
      <category>google</category>
      <category>openai</category>
    </item>
    <item>
      <title>What are some beginner level ReactJS questions to ask for interview?</title>
      <dc:creator>Kaustubh Trivedi</dc:creator>
      <pubDate>Fri, 20 May 2022 12:37:00 +0000</pubDate>
      <link>https://forem.com/kaustubhtrivedi/what-are-some-beginner-level-reactjs-questions-to-ask-2137</link>
      <guid>https://forem.com/kaustubhtrivedi/what-are-some-beginner-level-reactjs-questions-to-ask-2137</guid>
      <description></description>
      <category>beginners</category>
      <category>react</category>
      <category>discuss</category>
      <category>interview</category>
    </item>
  </channel>
</rss>
