<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: izharhaq1987</title>
    <description>The latest articles on Forem by izharhaq1987 (@izharhaq1987).</description>
    <link>https://forem.com/izharhaq1987</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3611094%2F69742a78-ad38-4bf7-bfba-768b0b64e348.jpeg</url>
      <title>Forem: izharhaq1987</title>
      <link>https://forem.com/izharhaq1987</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/izharhaq1987"/>
    <language>en</language>
    <item>
      <title>Testing FastAPI and LangChain with Two Response Modes</title>
      <dc:creator>izharhaq1987</dc:creator>
      <pubDate>Fri, 21 Nov 2025 14:31:24 +0000</pubDate>
      <link>https://forem.com/izharhaq1987/testing-fastapi-and-langchain-with-two-response-modes-2907</link>
      <guid>https://forem.com/izharhaq1987/testing-fastapi-and-langchain-with-two-response-modes-2907</guid>
      <description>&lt;p&gt;I wanted to share a small detail from the customer-support workflow I built last week with FastAPI and LangChain. It’s something that kept the project easy to test and saved time later.&lt;/p&gt;

&lt;p&gt;I set up the app so each request can run in two modes:&lt;/p&gt;

&lt;p&gt;1. Mock mode&lt;br&gt;
2. Real LLM mode&lt;/p&gt;

&lt;p&gt;Mock mode returns fixed responses for each intent. It gave me a stable baseline during debugging, since nothing depended on an external LLM call. Real mode uses OpenAI and follows the same structure, so switching back and forth didn’t break anything.&lt;/p&gt;

&lt;p&gt;One thing that worked well was keeping both paths inside the same handler. The logic stays in one place, and it’s obvious how the request flows. It’s a simple pattern, but it helps when you’re checking user messages, routing intents, and comparing outputs during refinement.&lt;/p&gt;
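
&lt;p&gt;As a rough illustration, here is a framework-free sketch of that single-handler pattern. The names (MOCK_RESPONSES, call_llm, handle_request) are my own placeholders, not the project’s actual code:&lt;/p&gt;

```python
# Minimal sketch: one handler serving both modes.
# All names here are illustrative stand-ins, not from the repo.

MOCK_RESPONSES = {
    "faq": "Here is our FAQ answer.",
    "order": "Your order status is: shipped.",
    "fallback": "A support agent will follow up shortly.",
}

def call_llm(message: str) -> str:
    # Placeholder for the real OpenAI/LangChain call.
    return f"LLM reply to: {message}"

def handle_request(message: str, intent: str, use_mock: bool) -> dict:
    # Both paths live in one handler, so the request flow stays obvious.
    if use_mock:
        reply = MOCK_RESPONSES.get(intent, MOCK_RESPONSES["fallback"])
    else:
        reply = call_llm(message)
    # Same response shape in both modes, so switching never breaks callers.
    return {"intent": intent, "reply": reply, "mock": use_mock}
```

&lt;p&gt;Because both branches return the same structure, tests written against mock mode keep passing when the real LLM is plugged in.&lt;/p&gt;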

&lt;p&gt;If anyone’s building something similar, having these two modes early on makes the pipeline easier to reason about. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>FastAPI + LangChain Customer Support Workflow (Micro Case Study)</title>
      <dc:creator>izharhaq1987</dc:creator>
      <pubDate>Fri, 14 Nov 2025 10:45:38 +0000</pubDate>
      <link>https://forem.com/izharhaq1987/fastapi-langchain-customer-support-workflow-micro-case-study-4j4m</link>
      <guid>https://forem.com/izharhaq1987/fastapi-langchain-customer-support-workflow-micro-case-study-4j4m</guid>
      <description>&lt;p&gt;Over the past few days, I’ve been working on a small customer-support automation project using FastAPI and LangChain. &lt;br&gt;
The idea was straightforward: build a lightweight backend service that can handle common support questions, classify the user’s intent, and generate a clear response without needing a large, complex infrastructure.&lt;/p&gt;

&lt;p&gt;I started by structuring simple FastAPI endpoints that receive a user message and pass it through an intent-classification step. From there, the workflow routes the request to the right handler: general FAQ, troubleshooting, order-related questions, or fallback support. LangChain handles the reasoning layer, so each reply stays consistent and follows a controlled format.&lt;/p&gt;
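
&lt;p&gt;To make the routing step concrete, here is a hypothetical keyword-based stand-in for the classifier and handler table. The real project uses LangChain for this layer, so treat these rules and handler names as illustrative only:&lt;/p&gt;

```python
# Hypothetical keyword rules standing in for the LangChain classifier.

def classify_intent(message: str) -> str:
    text = message.lower()
    if "order" in text or "delivery" in text:
        return "order"
    if "error" in text or "not working" in text:
        return "troubleshooting"
    if "how do i" in text or "price" in text:
        return "faq"
    return "fallback"

# Each intent maps to exactly one handler, keeping replies in a
# controlled, predictable format.
HANDLERS = {
    "faq": lambda m: "FAQ: see our help center.",
    "troubleshooting": lambda m: "Troubleshooting: try restarting first.",
    "order": lambda m: "Order desk: checking your order now.",
    "fallback": lambda m: "Routing you to a human agent.",
}

def route(message: str) -> str:
    return HANDLERS[classify_intent(message)](message)
```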

&lt;p&gt;One of the goals was to keep the project useful for both local testing and production environments. To make that easier, the system can run in mock mode (no API calls) or switch to real LLM responses using OpenAI. This helped a lot during debugging and made the pipeline more predictable.&lt;/p&gt;
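
&lt;p&gt;One simple way to wire that switch (an assumption on my part, not necessarily how the repo does it) is an environment variable, so local tests make no API calls while production uses the real LLM:&lt;/p&gt;

```python
import os

def use_mock_mode() -> bool:
    # Hypothetical toggle: default to mock mode ("1") so that a fresh
    # checkout never makes an accidental API call.
    return os.getenv("USE_MOCK", "1") == "1"
```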

&lt;p&gt;What I like most about this setup is how small businesses can use something like this to reduce repetitive support work. Even a simple intent classifier + structured response generator can save a team hours each week.&lt;/p&gt;

&lt;p&gt;If you’re exploring ways to automate support workflows or want to see how this kind of pipeline works behind the scenes, I’m happy to share more details.&lt;/p&gt;

&lt;p&gt;Here’s the repo for anyone who wants to look at the code:&lt;br&gt;
&lt;a href="https://github.com/izharhaq1986/chatgpt-customer-support-fastapi" rel="noopener noreferrer"&gt;https://github.com/izharhaq1986/chatgpt-customer-support-fastapi&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>fastapi</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
