<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Frederic Zhou</title>
    <description>The latest articles on Forem by Frederic Zhou (@frederic_zhou).</description>
    <link>https://forem.com/frederic_zhou</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3505752%2Fb6526558-0f49-4a25-87d5-f619153356b7.png</url>
      <title>Forem: Frederic Zhou</title>
      <link>https://forem.com/frederic_zhou</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/frederic_zhou"/>
    <language>en</language>
    <item>
      <title>AI Products Are Not About Chat — They Are About Lowering Development Barriers</title>
      <dc:creator>Frederic Zhou</dc:creator>
      <pubDate>Fri, 19 Sep 2025 04:51:54 +0000</pubDate>
      <link>https://forem.com/frederic_zhou/ai-products-are-not-about-chat-they-are-about-lowering-development-barriers-1m21</link>
      <guid>https://forem.com/frederic_zhou/ai-products-are-not-about-chat-they-are-about-lowering-development-barriers-1m21</guid>
      <description>&lt;p&gt;When most people think of “AI products,” they imagine chatbots.&lt;br&gt;
But that’s just a demo — the “Bitcoin” of AI, not the whole blockchain.&lt;/p&gt;

&lt;p&gt;The real power of AI in product development is not to talk with users for fun, but to &lt;strong&gt;make it dramatically easier to build products that solve real business problems&lt;/strong&gt;.&lt;/p&gt;


&lt;h3&gt;
  
  
  Traditional Development vs. AI-Driven Development
&lt;/h3&gt;

&lt;p&gt;In traditional software development, the process looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Collect user requirements.&lt;/li&gt;
&lt;li&gt;Product managers design workflows.&lt;/li&gt;
&lt;li&gt;Engineers write code to cover every scenario.&lt;/li&gt;
&lt;li&gt;Deploy, wait for feedback, iterate.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This works — but it’s slow, rigid, and expensive.&lt;br&gt;
If you miss an edge case, users are stuck until the next release.&lt;/p&gt;

&lt;p&gt;AI changes this model.&lt;/p&gt;

&lt;p&gt;Now, instead of hardcoding every decision path, we can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accept natural language input (any format).&lt;/li&gt;
&lt;li&gt;Let an AI agent parse intent and dynamically pick tools or actions.&lt;/li&gt;
&lt;li&gt;Generate an output immediately — even if it’s the first time the user ever asked for that scenario.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This turns our product into a system that &lt;strong&gt;adapts in real time&lt;/strong&gt;, rather than a frozen set of predefined flows.&lt;/p&gt;
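&lt;p&gt;As a minimal sketch of that loop: the tools and the keyword parser below are invented stand-ins (a real product would put an LLM behind &lt;code&gt;parse_intent&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Stand-in tools; in a real product these would call actual services.
def refund_order(order_id):
    return f"refund issued for {order_id}"

def track_order(order_id):
    return f"order {order_id} is in transit"

TOOLS = {"refund": refund_order, "track": track_order}

def parse_intent(text):
    # An LLM would parse free-form input here; a keyword check stands in.
    intent = "refund" if "refund" in text.lower() else "track"
    order_id = text.split()[-1]
    return intent, order_id

def handle(text):
    # Accept any phrasing, pick a tool dynamically, answer immediately.
    intent, order_id = parse_intent(text)
    return TOOLS[intent](order_id)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;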


&lt;h3&gt;
  
  
  AI Is for Developers, Not Just Users
&lt;/h3&gt;

&lt;p&gt;Here’s the key insight:&lt;br&gt;
AI does not directly “solve” user problems — it solves &lt;strong&gt;developer problems&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It lets small teams build systems that used to require massive engineering resources.&lt;br&gt;
It lets us cover more edge cases with fewer hardcoded rules.&lt;br&gt;
It lets us delay decisions, experiment faster, and ship prototypes that actually work in the wild.&lt;/p&gt;

&lt;p&gt;For users, this feels like “magic.”&lt;br&gt;
For teams, it’s simply &lt;strong&gt;better leverage&lt;/strong&gt;.&lt;/p&gt;


&lt;h3&gt;
  
  
  The Real Value: Business Logic + Flexibility
&lt;/h3&gt;

&lt;p&gt;Users still care about the same things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Did my task get done?&lt;/li&gt;
&lt;li&gt;Was it fast and affordable?&lt;/li&gt;
&lt;li&gt;Was the experience smooth?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI doesn’t replace good product thinking — it enhances it.&lt;br&gt;
The team still needs deep understanding of business logic, customer workflows, and what “success” means.&lt;br&gt;
But now, instead of spending months coding every possibility, we can let AI handle the messy details, reason about ambiguous input, and keep the product running smoothly.&lt;/p&gt;


&lt;h3&gt;
  
  
  Bottom Line
&lt;/h3&gt;

&lt;p&gt;AI products are not “chat windows.”&lt;br&gt;
They are &lt;strong&gt;adaptive systems&lt;/strong&gt; where input → reasoning → action → output forms a live loop, often with human approval steps in between.&lt;/p&gt;

&lt;p&gt;Think of AI as the &lt;em&gt;brain&lt;/em&gt; inside your product — not the interface.&lt;br&gt;
It’s there to make your workflows dynamic, your edge cases covered, and your product faster to build and easier to maintain.&lt;/p&gt;

&lt;p&gt;The best AI products will not feel like “AI products.”&lt;br&gt;
They will just feel like software that &lt;strong&gt;finally understands what you wanted&lt;/strong&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Implement a Traceable ReAct Agent Using Temporal and LangChain</title>
      <dc:creator>Frederic Zhou</dc:creator>
      <pubDate>Tue, 16 Sep 2025 07:26:15 +0000</pubDate>
      <link>https://forem.com/frederic_zhou/implement-a-traceable-react-agent-using-temporal-and-langchain-41il</link>
      <guid>https://forem.com/frederic_zhou/implement-a-traceable-react-agent-using-temporal-and-langchain-41il</guid>
      <description>&lt;p&gt;While there are existing examples of workflows invoking AI agents, most of those workflows merely call the agent as a whole—tool invocation happens outside the workflow. &lt;strong&gt;As a result, it’s impossible to observe when the LLM chooses to call a tool, and any issues—like errors or delays in the tool—aren’t visible in the execution trace.&lt;/strong&gt; In contrast, our approach embeds the tool calls inside the Temporal Workflow itself.** This way, each tool invocation becomes part of the workflow history, fully traceable and observable—including tool failures or latency**.&lt;/p&gt;

&lt;p&gt;This integration not only surfaces the full reasoning chain of the agent but also leverages Temporal’s built-in observability for debugging and reliability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vmwm3dt2otybhdda3e3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vmwm3dt2otybhdda3e3.png" alt=" " width="690" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Project repo: &lt;a href="https://github.com/Frederic-Zhou/temporal_agent_workflow" rel="noopener noreferrer"&gt;https://github.com/Frederic-Zhou/temporal_agent_workflow&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This article will walk you through building an observable, extensible AI agent system based on Temporal and LangChain. We’ll use a ReAct agent for automatic reasoning and tool invocation, and every step can be traced in the Temporal workflow. Hope you enjoy the hands-on process—feel free to reach out and share your ideas!&lt;/p&gt;


&lt;h2&gt;
  
  
  1. Environment Setup
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Start the Temporal Server (docker-compose is recommended; it only takes a few minutes).&lt;/li&gt;
&lt;li&gt;Install dependencies (managed by Poetry; just run &lt;code&gt;poetry install&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Set your API key (if you want to use an external LLM such as Gemini).&lt;/li&gt;
&lt;/ol&gt;
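&lt;p&gt;For reference, a typical setup session might look like this (the docker-compose repo and the &lt;code&gt;GOOGLE_API_KEY&lt;/code&gt; variable name are assumptions based on the Temporal docs and the Gemini model used below; adjust them to your environment):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# 1. Start a local Temporal Server (official docker-compose setup)
git clone https://github.com/temporalio/docker-compose.git
cd docker-compose
docker compose up -d

# 2. Install project dependencies with Poetry
poetry install

# 3. Export your LLM API key (variable name assumed for langchain-google-genai)
export GOOGLE_API_KEY=your-key-here
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;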


&lt;h2&gt;
  
  
  2. Design the Agent Workflow
&lt;/h2&gt;

&lt;p&gt;The core logic is in &lt;code&gt;workflows.py&lt;/code&gt;, where we use a Temporal workflow to orchestrate LLM reasoning and tool calls:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@workflow.defn&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AiAgentWorkflow&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nd"&gt;@workflow.run&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;messages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;HumanMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
        &lt;span class="n"&gt;MAX_STEPS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;MAX_STEPS&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;ai_msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute_activity&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm_chat&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;schedule_to_close_timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;timedelta&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;seconds&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;AIMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;ai_msg&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;ai_msg&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tool_calls&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
                &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;tool_call&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;ai_msg&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tool_calls&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
                    &lt;span class="n"&gt;tool_call_result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute_activity&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                        &lt;span class="n"&gt;tool_call&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                        &lt;span class="n"&gt;tool_call&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;args&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;params&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                        &lt;span class="n"&gt;schedule_to_close_timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;timedelta&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;seconds&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                    &lt;span class="p"&gt;)&lt;/span&gt;
                    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                        &lt;span class="nc"&gt;ToolMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                            &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tool_call_result&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                            &lt;span class="n"&gt;tool_call_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tool_call&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                        &lt;span class="p"&gt;)&lt;/span&gt;
                    &lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  3. Implement Tools and LLM Calls
&lt;/h2&gt;

&lt;p&gt;Define your LLM and tool logic in &lt;code&gt;activities.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@activity.defn&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;llm_chat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;init_chat_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemini-2.0-flash&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model_provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;google_genai&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;llm_with_tools&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bind_tools&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;division&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm_with_tools&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;

&lt;span class="nd"&gt;@activity.defn&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;divisionParams&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BaseModel&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Field&lt;/span&gt;&lt;span class="p"&gt;(...,&lt;/span&gt; &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Numerator&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Field&lt;/span&gt;&lt;span class="p"&gt;(...,&lt;/span&gt; &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Denominator, must not be zero&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;gt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@activity.defn&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;division&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;divisionParams&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;//&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can add more tools as you need—just implement an activity and register it with the worker.&lt;/p&gt;
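&lt;p&gt;For example, a hypothetical &lt;code&gt;multiply&lt;/code&gt; tool could look like this (remember to also add it to the worker’s &lt;code&gt;activities&lt;/code&gt; list and the &lt;code&gt;bind_tools&lt;/code&gt; call):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from temporalio import activity

# Hypothetical extra tool, following the same pattern as add/division.
@activity.defn
async def multiply(params: list[int]) -&amp;gt; int:
    result = 1
    for value in params:
        result *= value
    return result
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;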




&lt;h2&gt;
  
  
  4. Start the Worker
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;worker.py&lt;/code&gt; registers and runs all workflows and activities. You usually don't need to change much here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;localhost:7233&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;worker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;task_queue&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-task-queue&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;workflows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;AiAgentWorkflow&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;activities&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;llm_chat&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;division&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Worker started.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  5. Start a Workflow (Submit a Task)
&lt;/h2&gt;

&lt;p&gt;Use &lt;code&gt;starter.py&lt;/code&gt; to submit an Agent reasoning task and see the full process in action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;localhost:7233&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute_workflow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AiAgentWorkflow&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;what is 1 divided by 2 then add 1?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AI-Agent-workflow-{{uuid.uuid4()}}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;task_queue&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-task-queue&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Workflow result:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
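&lt;p&gt;With the Temporal Server running, start the worker in one terminal and submit the task from another (file names as given above; &lt;code&gt;poetry run&lt;/code&gt; is assumed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Terminal 1: the worker keeps polling the task queue
poetry run python worker.py

# Terminal 2: submit the reasoning task and print the result
poetry run python starter.py
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;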






&lt;h2&gt;
  
  
  6. Observability &amp;amp; Debugging
&lt;/h2&gt;

&lt;p&gt;With the Temporal Web UI (&lt;a href="http://localhost:8233" rel="noopener noreferrer"&gt;http://localhost:8233&lt;/a&gt; by default), you can see the input, output, duration, and status of every LLM reasoning step and tool call in real time, which makes debugging and tracing straightforward.&lt;/p&gt;




&lt;h2&gt;
  
  
  7. Extensions &amp;amp; Advanced Usage
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Add any tool you want—just implement a new activity and register it with the worker.&lt;/li&gt;
&lt;li&gt;Integrate more LLMs or external APIs.&lt;/li&gt;
&lt;li&gt;Use this project as a template to quickly build your own observable AI Agent system.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  8. Reference &amp;amp; Community
&lt;/h2&gt;

&lt;p&gt;Full code and more details: &lt;a href="https://github.com/Frederic-Zhou/temporal_agent_workflow" rel="noopener noreferrer"&gt;https://github.com/Frederic-Zhou/temporal_agent_workflow&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have any questions, ideas, or just want to chat, feel free to reach out! You can also open an issue on GitHub or join the Temporal community.&lt;/p&gt;

</description>
      <category>tooling</category>
      <category>architecture</category>
      <category>tutorial</category>
      <category>ai</category>
    </item>
    <item>
      <title>Workflows: Windmill vs n8n vs Langflow vs Temporal — Choosing the Right Tool for the Job</title>
      <dc:creator>Frederic Zhou</dc:creator>
      <pubDate>Tue, 16 Sep 2025 07:21:18 +0000</pubDate>
      <link>https://forem.com/frederic_zhou/workflows-windmill-vs-n8n-vs-langflow-vs-temporal-choosing-the-right-tool-for-the-job-23h5</link>
      <guid>https://forem.com/frederic_zhou/workflows-windmill-vs-n8n-vs-langflow-vs-temporal-choosing-the-right-tool-for-the-job-23h5</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g4w9n4hw4b50md0yvem.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g4w9n4hw4b50md0yvem.png" alt=" " width="720" height="1080"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Over the past year, I’ve been working with four very different (and very opinionated) workflow tools: Windmill, n8n, Langflow, and Temporal.&lt;/p&gt;

&lt;p&gt;Each has its own sweet spot, its quirks, and its “oh-no-why-did-I-choose-this” moments.&lt;/p&gt;

&lt;p&gt;This post is my field guide to these four tools — what they do best, where they fall short, and how I mix them together into a pragmatic, production-ready workflow stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Meet the Four Players
&lt;/h2&gt;

&lt;p&gt;Before we dive into comparisons, let’s introduce the cast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🛠 Windmill — The Script Wizard&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Windmill is your best friend when you want to glue things together quickly. It lets you write scripts in Python, TypeScript, Go, and even build small internal apps and expose them as APIs. It’s ridiculously easy to chain functions together, build internal tooling, and deploy lightweight automations.&lt;/p&gt;

&lt;p&gt;But here’s the thing: Windmill is not built for high concurrency.&lt;/p&gt;

&lt;p&gt;Every script spins up an isolated runtime (256 MB or more), which is great for safety but heavy for high-QPS scenarios. Think of Windmill as the artisan chef who makes each dish by hand — delicious, but not ideal when you need to serve 10,000 burgers in an hour.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔗 n8n — The Connector&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;n8n is like Zapier on steroids, but self-hosted and much more flexible.&lt;/p&gt;

&lt;p&gt;It excels at connecting external services: Slack, Stripe, Notion, databases, webhooks, timers — you name it. Its visual interface makes it easy to read, easy to share, and easy to debug.&lt;/p&gt;

&lt;p&gt;But as workflows grow, visual sprawl sets in.&lt;/p&gt;

&lt;p&gt;A dozen branches, nested loops, error handling — suddenly your “visual workflow” looks like a spaghetti diagram from a conspiracy movie. Long-running human-in-the-loop flows can also feel clunky. And yes, it now has AI Agent nodes, but if you compare them with a specialized AI orchestration platform like Langflow, they feel a bit… basic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧠 Langflow — The AI Orchestrator&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you do AI-driven workflows, Langflow feels like cheating.&lt;/p&gt;

&lt;p&gt;It’s basically a visual LangChain: drag-and-drop LLMs, tools, vector stores, retrievers, memory components, and chain them in powerful ways. You can experiment with prompts, watch reasoning steps in real-time, and rapidly iterate.&lt;/p&gt;

&lt;p&gt;But Langflow is laser-focused on AI use cases.&lt;/p&gt;

&lt;p&gt;If you need to connect to SaaS APIs, schedule jobs, handle retries, or do database polling, you’ll likely wrap Langflow in something else (like n8n) to handle those responsibilities. It’s an AI brain, not a whole nervous system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🏗 Temporal — The Heavyweight&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Temporal is the “serious” one in the room. It’s code-driven, production-grade, distributed workflow orchestration.&lt;/p&gt;

&lt;p&gt;It shines when you need strong consistency, retries, fault tolerance, long-running workflows, and high concurrency. Payments, order pipelines, state machines, human approvals — Temporal is designed to never lose state, even if your entire cluster crashes.&lt;/p&gt;

&lt;p&gt;But be warned: Temporal is code-first.&lt;/p&gt;

&lt;p&gt;There’s no pretty drag-and-drop interface. You’ll be writing workflow code, activity code, and deploying workers. It’s the most powerful of the bunch, but with great power comes a steeper learning curve.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Head-to-Head Comparison
&lt;/h2&gt;

&lt;p&gt;Let’s put them side by side on the dimensions that actually matter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzi5k7p44ri55ffych6ky.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzi5k7p44ri55ffych6ky.png" alt=" " width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. When I Pick Each Tool
&lt;/h2&gt;

&lt;p&gt;Here’s my decision matrix when I start a project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Windmill&lt;/strong&gt; — internal glue logic, small automations, ad-hoc jobs, one-off scripts, quick web endpoints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;n8n&lt;/strong&gt; — external API integration, webhooks, cron jobs, notifications, “business glue”.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Langflow&lt;/strong&gt; — LLM agents, retrieval-augmented generation (RAG), multi-step AI reasoning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temporal&lt;/strong&gt; — critical backend processes that must be durable, auditable, recoverable, and scale under load.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, I rarely use just one. Instead, I mix them:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Webhook&lt;/strong&gt; → &lt;strong&gt;n8n flow&lt;/strong&gt; (validate + route)&lt;br&gt;
→ If AI decision needed → &lt;strong&gt;Langflow Agent&lt;/strong&gt;&lt;br&gt;
→ If long-running workflow → &lt;strong&gt;Temporal Workflow&lt;/strong&gt;&lt;br&gt;
→ For quick side tasks → &lt;strong&gt;Windmill&lt;/strong&gt; script&lt;/p&gt;

&lt;h2&gt;
  
  
  4. The Reality Check
&lt;/h2&gt;

&lt;p&gt;A few subtle but important clarifications:&lt;/p&gt;

&lt;p&gt;Windmill is powerful, but don’t expect it to behave like an infinitely scalable serverless Lambda farm. If you need to process 50k events/minute, you will need aggressive tuning and parallel workers — or just pick a different engine.&lt;/p&gt;

&lt;p&gt;n8n’s visual workflows can stay maintainable if you use sub-workflows and templates — treat them like code modules.&lt;/p&gt;

&lt;p&gt;Langflow is not a replacement for full workflow orchestration — it’s your AI layer. Pair it with n8n or Temporal for a complete solution.&lt;/p&gt;

&lt;p&gt;Temporal is not free magic. You still need to think about idempotency, schema evolution, monitoring, and versioning. But it pays off massively for business-critical processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Final Thoughts
&lt;/h2&gt;

&lt;p&gt;No single tool is “the best” — they’re like different instruments in an orchestra.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Windmill&lt;/strong&gt; is the agile soloist for small, precise pieces.&lt;br&gt;
&lt;strong&gt;n8n&lt;/strong&gt; is the versatile connector, playing well with others.&lt;br&gt;
&lt;strong&gt;Langflow&lt;/strong&gt; is the AI virtuoso, brilliant at reasoning.&lt;br&gt;
&lt;strong&gt;Temporal&lt;/strong&gt; is the percussion section — steady, reliable, unflinching under pressure.&lt;/p&gt;

&lt;p&gt;When you use them together, you can build a workflow system that’s robust, scalable, easy to iterate, and fun to work with.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR for Busy Engineers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Windmill → Best for low-frequency scripts + internal tools&lt;br&gt;
n8n → Best for external integrations + readable flows&lt;br&gt;
Langflow → Best for AI agents + RAG pipelines&lt;br&gt;
Temporal → Best for high-concurrency, long-running, reliable workflows&lt;br&gt;
Mix and match them like Lego bricks, and you’ve got yourself a solid workflow stack.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>tooling</category>
      <category>automation</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
