<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Rock</title>
    <description>The latest articles on Forem by Rock (@rockchinq).</description>
    <link>https://forem.com/rockchinq</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3639445%2F360e0dbb-98c1-4e3a-9779-7abd021ede22.jpg</url>
      <title>Forem: Rock</title>
      <link>https://forem.com/rockchinq</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rockchinq"/>
    <language>en</language>
    <item>
      <title>Open Source AI Agent for Remote PC Control: The OpenClaw Alternative Built Into LangBot</title>
      <dc:creator>Rock</dc:creator>
      <pubDate>Sun, 15 Mar 2026 12:23:34 +0000</pubDate>
      <link>https://forem.com/rockchinq/open-source-ai-agent-for-remote-pc-control-the-openclaw-alternative-built-into-langbot-2loo</link>
      <guid>https://forem.com/rockchinq/open-source-ai-agent-for-remote-pc-control-the-openclaw-alternative-built-into-langbot-2loo</guid>
      <description>&lt;h1&gt;
  
  
  Open Source AI Agent for Remote PC Control — The OpenClaw Alternative Built Into LangBot
&lt;/h1&gt;




&lt;h2&gt;
  
  
  ✨ What is LangTARS?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;LangTARS&lt;/strong&gt; is a powerful &lt;strong&gt;OpenClaw alternative&lt;/strong&gt; built as a native plugin for LangBot, inspired by TARS, the loyal and reliable robot from the movie &lt;em&gt;Interstellar&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;As a versatile &lt;strong&gt;AI agent&lt;/strong&gt; for &lt;strong&gt;remote computer control&lt;/strong&gt;, LangTARS allows you to remotely command your &lt;strong&gt;Mac, Windows, or Linux&lt;/strong&gt; machine directly through your favorite messaging apps like &lt;strong&gt;Telegram, Discord, DingTalk, and WeChat&lt;/strong&gt;! If you've been looking for an &lt;strong&gt;OpenClaw-like&lt;/strong&gt; computer use agent that integrates seamlessly into your existing &lt;strong&gt;chatbot framework&lt;/strong&gt;, LangTARS is the perfect solution.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Deploy LangBot, and you can experience LangTARS instantly!&lt;/strong&gt; No extra standalone agent installation needed—just get it with one click from the plugin marketplace.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpub-c7c365991905453481dc9f8834619665.r2.dev%2Flangtars%2Ftars.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpub-c7c365991905453481dc9f8834619665.r2.dev%2Flangtars%2Ftars.gif" alt="LangTARS Demo"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🤖 The LangBot Ecosystem
&lt;/h2&gt;

&lt;p&gt;LangTARS is just one plugin in the broader LangBot ecosystem. &lt;strong&gt;LangBot&lt;/strong&gt; is a leading &lt;strong&gt;open source chatbot&lt;/strong&gt; framework for connecting LLMs to messaging platforms.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-platform bot integration&lt;/strong&gt;: Native support for Telegram, Discord, WeChat, DingTalk, Lark, LINE, Slack, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-model LLM chatbot&lt;/strong&gt;: Works flawlessly with OpenAI, Claude, Gemini, DeepSeek, and local models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rich Plugin Marketplace&lt;/strong&gt;: Access 30+ plugins for endless functionality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advanced Features&lt;/strong&gt;: Pipeline-based message processing, RAG support, and multi-agent workflows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-hosted AI&lt;/strong&gt;: Total control over your data and deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Workflow Integrations&lt;/strong&gt;: Connect LangBot to platforms like &lt;strong&gt;Dify&lt;/strong&gt;, &lt;strong&gt;n8n&lt;/strong&gt;, &lt;strong&gt;Langflow&lt;/strong&gt;, and &lt;strong&gt;Coze&lt;/strong&gt;. LangBot can act as the messaging frontend for your complex AI workflows, meaning LangTARS + Dify/n8n = powerful automation!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Get started with LangBot:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Easiest&lt;/strong&gt;: &lt;a href="https://cloud.langbot.app" rel="noopener noreferrer"&gt;LangBot Cloud&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Self-host: &lt;code&gt;uvx langbot@latest&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Documentation: &lt;a href="https://docs.langbot.app" rel="noopener noreferrer"&gt;docs.langbot.app&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;github.com/langbot-app/LangBot&lt;/a&gt; (15k+ stars)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🎯 What Can It Do?
&lt;/h2&gt;

&lt;p&gt;LangTARS is a comprehensive &lt;strong&gt;automation tool&lt;/strong&gt;. Just send a message, and the AI handles your &lt;strong&gt;task automation&lt;/strong&gt; and &lt;strong&gt;browser automation&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;You Say&lt;/th&gt;
&lt;th&gt;AI Does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;!tars open Reddit in Chrome, post a thread with the title "Hello from LangTARS" and say hi&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;🌐 Automatically controls Chrome to open Reddit and publish a post&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;!tars organize my desktop files&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;📁 Automatically categorizes and moves files&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;!tars open the browser and search for today's AI news&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;🌐 Opens a browser, searches the web, and summarizes the news&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;!tars create a meeting note for our marketing sync&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;📝 Automatically creates a markdown note file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;!tars check system memory and CPU&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;💻 Uses MCP tools to get computer status&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpub-c7c365991905453481dc9f8834619665.r2.dev%2Flangtars%2Fusage-example.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpub-c7c365991905453481dc9f8834619665.r2.dev%2Flangtars%2Fusage-example.png" alt="Usage Example"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpub-c7c365991905453481dc9f8834619665.r2.dev%2Flangtars%2Fplanner-mode.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpub-c7c365991905453481dc9f8834619665.r2.dev%2Flangtars%2Fplanner-mode.png" alt="Plan Mode"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  🚀 Why Choose LangTARS?
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1️⃣ LangBot Native Plugin, Out of the Box
&lt;/h3&gt;

&lt;p&gt;Instead of deploying a complex standalone agent, you can &lt;strong&gt;deploy an AI bot&lt;/strong&gt; right where you already chat.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Deploy LangBot with one command&lt;/span&gt;
uvx langbot@latest

&lt;span class="c"&gt;# Install LangTARS from the plugin market, configure your LLM, and start using!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;No complex configuration, no extra deployment. LangBot users can experience OpenClaw-style computer control directly!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpub-c7c365991905453481dc9f8834619665.r2.dev%2Flangtars%2Finstall-flow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpub-c7c365991905453481dc9f8834619665.r2.dev%2Flangtars%2Finstall-flow.png" alt="Installation Flow"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  2️⃣ Intelligent Task Planning, Autonomous Execution
&lt;/h3&gt;

&lt;p&gt;Powered by an AI planning engine based on the &lt;strong&gt;ReAct agent&lt;/strong&gt; loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Your Command → Understand Needs → Make a Plan → Execute Step-by-Step → Feedback Results
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AI assistant reasons step by step, automatically breaking down complex tasks and executing them with integrated &lt;strong&gt;MCP tools&lt;/strong&gt;!&lt;/p&gt;
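&lt;p&gt;The plan-and-execute loop above can be sketched in a few lines of Python. This is an illustrative sketch only: &lt;code&gt;plan_steps&lt;/code&gt;, &lt;code&gt;react_loop&lt;/code&gt;, and the tool-call schema are assumptions for demonstration, not LangTARS's actual API.&lt;/p&gt;

```python
# Illustrative sketch of the "understand, plan, execute, feed back" loop.
# plan_steps, react_loop, and the tool-call schema are assumptions,
# not LangTARS's actual API.
import json

def plan_steps(llm, command):
    """Ask the LLM to break a command into an ordered list of tool calls."""
    prompt = "Break this task into a JSON list of tool calls: " + command
    return json.loads(llm(prompt))

def react_loop(llm, command, tools, max_steps=10):
    results = []
    for step in plan_steps(llm, command)[:max_steps]:
        tool = tools[step["tool"]]          # look up an MCP tool by name
        observation = tool(**step["args"])  # execute one step
        results.append(observation)         # collected as feedback for the user
    return results
```

&lt;p&gt;Each observation is fed back so the user sees intermediate results, matching the "Execute Step-by-Step → Feedback Results" stages of the pipeline.&lt;/p&gt;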




&lt;h3&gt;
  
  
  3️⃣ Multi-Browser Support for Web Automation
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Browser&lt;/th&gt;
&lt;th&gt;macOS&lt;/th&gt;
&lt;th&gt;Windows&lt;/th&gt;
&lt;th&gt;Linux&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Playwright (Headless)&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Safari&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Chrome&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Edge&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Firefox&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h3&gt;
  
  
  4️⃣ Security-First Design
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;🚫 &lt;strong&gt;Dangerous Command Interception&lt;/strong&gt; — Automatically blocks commands like &lt;code&gt;rm -rf /&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;📂 &lt;strong&gt;Workspace Isolation&lt;/strong&gt; — Restricts file operations to safe directories&lt;/li&gt;
&lt;li&gt;📋 &lt;strong&gt;Command Whitelist&lt;/strong&gt; — Configurable allowed commands&lt;/li&gt;
&lt;li&gt;👤 &lt;strong&gt;User Access Control&lt;/strong&gt; — Restricts usage to specific authorized users&lt;/li&gt;
&lt;/ul&gt;
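&lt;p&gt;A minimal sketch of how these layered checks could fit together. The blocklist patterns, whitelist entries, and workspace path below are illustrative assumptions, not LangTARS's actual policy.&lt;/p&gt;

```python
# Illustrative sketch of the layered safety checks above; the exact
# patterns and policy used by LangTARS may differ.
import re
from pathlib import Path

BLOCKED_PATTERNS = [r"rm\s+-rf\s+/", r"mkfs", r"dd\s+if="]  # assumed blocklist
ALLOWED_BINARIES = {"ls", "cat", "mkdir", "mv", "python"}   # assumed whitelist
WORKSPACE = Path.home() / "langtars-workspace"              # assumed location

def is_safe(command, user, authorized_users):
    if user not in authorized_users:      # user access control
        return False
    if not command.strip():
        return False
    if any(re.search(p, command) for p in BLOCKED_PATTERNS):  # interception
        return False
    return command.split()[0] in ALLOWED_BINARIES             # whitelist

def resolve_in_workspace(path):
    """Reject file paths that escape the workspace directory."""
    target = (WORKSPACE / path).resolve()
    if not str(target).startswith(str(WORKSPACE.resolve())):
        raise PermissionError("path escapes workspace")
    return target
```

&lt;p&gt;The checks run in order of cheapness: identity first, then pattern interception, then the whitelist, with file paths normalized before any workspace access.&lt;/p&gt;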

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpub-c7c365991905453481dc9f8834619665.r2.dev%2Flangtars%2Fsecurity.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpub-c7c365991905453481dc9f8834619665.r2.dev%2Flangtars%2Fsecurity.png" alt="Security Features"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  📱 Quick Start
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Deploy LangBot
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;🚀 &lt;strong&gt;Fastest way to start:&lt;/strong&gt; Use &lt;a href="https://cloud.langbot.app" rel="noopener noreferrer"&gt;LangBot Cloud&lt;/a&gt; — deploy your LangBot instance in one click, no server needed! Install LangTARS from the plugin marketplace and start controlling your computer immediately.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Alternative for power users (Self-hosted):&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvx langbot@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Configure Chat Platform
&lt;/h3&gt;

&lt;p&gt;Follow the documentation to configure your &lt;strong&gt;Telegram bot&lt;/strong&gt;, &lt;strong&gt;WeChat bot&lt;/strong&gt;, or &lt;strong&gt;Discord bot&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Install LangTARS
&lt;/h3&gt;

&lt;p&gt;Search for &lt;strong&gt;LangTARS&lt;/strong&gt; in the &lt;strong&gt;plugin marketplace&lt;/strong&gt; and install it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Start Chatting
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!tars Hello, tell me what you can do
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;📱 &lt;strong&gt;Recommendation&lt;/strong&gt;: Use it on Telegram or Discord for the best interactive experience!&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🎮 Common Commands
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;th&gt;Function&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;!tars &amp;lt;Task Description&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Execute a task&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;!tars stop&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Stop the current task&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;!tars what&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Check execution status&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;!tars reset&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Reset the conversation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;!tars help&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Show help&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🌟 Project Information
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;Information&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GitHub&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/langbot-app/LangTARS" rel="noopener noreferrer"&gt;github.com/langbot-app/LangTARS&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Language&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Python 100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;License&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;CC BY-NC-ND 4.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Platforms&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;macOS / Windows / Linux&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🔗 Related Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;📖 &lt;strong&gt;LangBot Documentation&lt;/strong&gt;: &lt;a href="https://docs.langbot.app" rel="noopener noreferrer"&gt;docs.langbot.app&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;💬 &lt;strong&gt;Issue Feedback&lt;/strong&gt;: &lt;a href="https://github.com/langbot-app/LangTARS/issues" rel="noopener noreferrer"&gt;GitHub Issues&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;⭐ &lt;strong&gt;Support with a Star&lt;/strong&gt;: &lt;a href="https://github.com/langbot-app/LangTARS" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>ai</category>
      <category>opensource</category>
      <category>automation</category>
      <category>productivity</category>
    </item>
    <item>
      <title>7 Open-Source Frameworks for Deploying AI Bots to Messaging Platforms in 2026</title>
      <dc:creator>Rock</dc:creator>
      <pubDate>Sat, 28 Feb 2026 12:41:09 +0000</pubDate>
      <link>https://forem.com/rockchinq/7-open-source-frameworks-for-deploying-ai-bots-to-messaging-platforms-in-2026-5glj</link>
      <guid>https://forem.com/rockchinq/7-open-source-frameworks-for-deploying-ai-bots-to-messaging-platforms-in-2026-5glj</guid>
      <description>&lt;p&gt;I spent the last few weeks evaluating open-source frameworks for a project that needed an AI chatbot running on multiple messaging platforms simultaneously — specifically Discord, Telegram, and WeChat. &lt;/p&gt;

&lt;p&gt;The existing "best chatbot framework" listicles are mostly outdated (still recommending Dialogflow and BotKit in 2026?), so I figured I'd share what I actually found useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Was Looking For
&lt;/h2&gt;

&lt;p&gt;My requirements were pretty specific:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-platform&lt;/strong&gt;: One codebase, multiple messaging apps (not just web chat)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM-native&lt;/strong&gt;: Built for connecting to GPT, Claude, DeepSeek, etc. — not NLU-era intent matching&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-hosted&lt;/strong&gt;: Full control over data and deployment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Actually maintained&lt;/strong&gt;: Regular commits, active community, recent releases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's what made the cut, organized by use case.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Botpress — The Enterprise Visual Builder
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: 14.5k ⭐ | &lt;strong&gt;Language&lt;/strong&gt;: TypeScript | &lt;strong&gt;License&lt;/strong&gt;: MIT&lt;/p&gt;

&lt;p&gt;Botpress has been around since 2017 and has evolved significantly. It now offers a visual flow builder, built-in NLU, and native integrations with Slack, Telegram, Messenger, and Microsoft Teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Polished visual editor — genuinely usable by non-developers&lt;/li&gt;
&lt;li&gt;Built-in knowledge base and RAG&lt;/li&gt;
&lt;li&gt;Large plugin ecosystem&lt;/li&gt;
&lt;li&gt;Good documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No WeChat, QQ, LINE, or DingTalk support&lt;/li&gt;
&lt;li&gt;Cloud-first model — self-hosting is possible but clearly not the priority&lt;/li&gt;
&lt;li&gt;Some advanced features gated behind paid plans&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Teams that want a visual builder and primarily target Western messaging platforms.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Rasa — The NLU Veteran
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: 21k ⭐ | &lt;strong&gt;Language&lt;/strong&gt;: Python | &lt;strong&gt;License&lt;/strong&gt;: Apache 2.0&lt;/p&gt;

&lt;p&gt;Rasa is the OG of open-source chatbots. It's battle-tested in enterprise environments and offers the most sophisticated NLU pipeline of any open-source tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Most mature conversation management (stories, rules, forms)&lt;/li&gt;
&lt;li&gt;Strong NLU with entity extraction&lt;/li&gt;
&lt;li&gt;Extensive enterprise track record&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Designed for the pre-LLM era — bolting on GPT feels awkward&lt;/li&gt;
&lt;li&gt;Steep learning curve&lt;/li&gt;
&lt;li&gt;Recent pivot to Rasa Pro (commercial) has fragmented the open-source offering&lt;/li&gt;
&lt;li&gt;Multi-platform support requires custom connectors&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Enterprise teams with existing Rasa deployments or complex NLU requirements.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Wechaty — The WeChat Specialist
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: 22.5k ⭐ | &lt;strong&gt;Language&lt;/strong&gt;: TypeScript | &lt;strong&gt;License&lt;/strong&gt;: Apache 2.0&lt;/p&gt;

&lt;p&gt;If your primary target is WeChat, Wechaty is the standard. It provides a clean RPA-style SDK for WeChat automation and has expanded to support WhatsApp, Lark, and a few other platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Best WeChat integration available&lt;/li&gt;
&lt;li&gt;Clean, developer-friendly API&lt;/li&gt;
&lt;li&gt;Strong community in the Chinese developer ecosystem&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;WeChat-centric — other platform support is secondary&lt;/li&gt;
&lt;li&gt;No built-in AI/LLM integration (BYO everything)&lt;/li&gt;
&lt;li&gt;WeChat's anti-bot measures can cause issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Projects where WeChat is the primary or only platform.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Flowise — Visual LLM Chains
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: 49k ⭐ | &lt;strong&gt;Language&lt;/strong&gt;: TypeScript | &lt;strong&gt;License&lt;/strong&gt;: Apache 2.0&lt;/p&gt;

&lt;p&gt;Flowise gives you a drag-and-drop UI for building LangChain flows. It was acquired by Workday in 2025, which gives it enterprise backing but raises questions about long-term open-source commitment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Beautiful visual builder for LLM chains&lt;/li&gt;
&lt;li&gt;Direct LangChain integration&lt;/li&gt;
&lt;li&gt;Easy to prototype RAG applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Not really a "messaging bot" framework — it's an LLM orchestrator&lt;/li&gt;
&lt;li&gt;Messaging platform integrations are limited and feel bolted-on&lt;/li&gt;
&lt;li&gt;Post-acquisition direction unclear&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Prototyping LLM workflows and RAG applications, not multi-platform messaging bots.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. LangBot — Multi-Platform IM + LLM Hub
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: 15.4k ⭐ | &lt;strong&gt;Language&lt;/strong&gt;: Python | &lt;strong&gt;License&lt;/strong&gt;: MIT&lt;/p&gt;

&lt;p&gt;This one surprised me. LangBot (formerly QChatGPT) focuses specifically on the gap between AI backends and messaging platforms. It supports 10+ IM platforms including QQ, WeChat, Discord, Telegram, Slack, LINE, Lark, and DingTalk — which is more than anything else I found.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Widest messaging platform coverage (both Chinese and international)&lt;/li&gt;
&lt;li&gt;Native integration with Dify, n8n, Langflow, Coze as "runners" — so you can use visual workflow tools for AI logic&lt;/li&gt;
&lt;li&gt;Also supports direct OpenAI/Claude/Gemini connections&lt;/li&gt;
&lt;li&gt;Pipeline architecture — different bots can use different AI backends&lt;/li&gt;
&lt;li&gt;Cross-process plugin isolation (plugins can't crash the main process)&lt;/li&gt;
&lt;li&gt;WebUI for management&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.dify.ai/en/learn-more/use-cases/connect-dify-to-various-im-platforms-by-using-langbot" rel="noopener noreferrer"&gt;Listed in Dify's official docs&lt;/a&gt; as the recommended way to connect Dify to messaging platforms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Documentation is bilingual (Chinese/English) but English docs are thinner&lt;/li&gt;
&lt;li&gt;Newer project — smaller Western community compared to Botpress/Rasa&lt;/li&gt;
&lt;li&gt;Plugin ecosystem is still rebuilding after a major architecture change&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Anyone who needs to deploy an AI bot to multiple messaging platforms, especially if you're using Dify, n8n, or Langflow for AI orchestration.&lt;/p&gt;




&lt;h2&gt;
  
  
  6. AstrBot — The Community-Focused Alternative
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: 18.3k ⭐ | &lt;strong&gt;Language&lt;/strong&gt;: Python | &lt;strong&gt;License&lt;/strong&gt;: MIT&lt;/p&gt;

&lt;p&gt;AstrBot is LangBot's closest competitor and actually has more GitHub stars. It supports QQ, WeChat, Telegram, and Feishu with a simpler setup process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Easy to get started&lt;/li&gt;
&lt;li&gt;Active Chinese developer community&lt;/li&gt;
&lt;li&gt;Good plugin ecosystem for entertainment use cases&lt;/li&gt;
&lt;li&gt;Dify integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fewer international platform integrations (no Discord, Slack, LINE, DingTalk)&lt;/li&gt;
&lt;li&gt;More focused on consumer/entertainment than B2B&lt;/li&gt;
&lt;li&gt;Less modular architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Chinese IM platforms with a focus on community/entertainment bots.&lt;/p&gt;




&lt;h2&gt;
  
  
  7. n8n + Custom Connectors — The DIY Approach
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: 177k ⭐ | &lt;strong&gt;Language&lt;/strong&gt;: TypeScript | &lt;strong&gt;License&lt;/strong&gt;: Sustainable Use License&lt;/p&gt;

&lt;p&gt;n8n isn't a chatbot framework per se, but its AI Agent nodes combined with messaging triggers (Telegram, Slack, Discord) make it a legitimate option. You build the entire flow visually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Most flexible — literally any workflow logic&lt;/li&gt;
&lt;li&gt;400+ integrations for business logic&lt;/li&gt;
&lt;li&gt;Strong AI Agent support with tool calling&lt;/li&gt;
&lt;li&gt;Huge community&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No native WeChat, QQ, or LINE support&lt;/li&gt;
&lt;li&gt;Each platform needs its own trigger setup&lt;/li&gt;
&lt;li&gt;Not designed for high-throughput chat scenarios&lt;/li&gt;
&lt;li&gt;Conversation memory management is manual&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Teams already using n8n who want to add AI chat capabilities to a few platforms.&lt;/p&gt;




&lt;h2&gt;
  
  
  Comparison Matrix
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Botpress&lt;/th&gt;
&lt;th&gt;Rasa&lt;/th&gt;
&lt;th&gt;Wechaty&lt;/th&gt;
&lt;th&gt;Flowise&lt;/th&gt;
&lt;th&gt;LangBot&lt;/th&gt;
&lt;th&gt;AstrBot&lt;/th&gt;
&lt;th&gt;n8n&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Discord&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Telegram&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Slack&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;WeChat&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;QQ&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LINE&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lark/Feishu&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DingTalk&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Visual Builder&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;via Dify/n8n&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LLM-Native&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Self-Hosted&lt;/td&gt;
&lt;td&gt;⚠️&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dify Integration&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Plugin System&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  My Takeaway
&lt;/h2&gt;

&lt;p&gt;The chatbot landscape has split into two worlds:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Western-focused tools&lt;/strong&gt; (Botpress, Rasa) have good docs and polished UIs but barely support Asian messaging platforms. They were built for a pre-LLM world and are retrofitting AI capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Asia-origin tools&lt;/strong&gt; (LangBot, AstrBot, Wechaty) cover WeChat/QQ/DingTalk but are less known in Western developer circles. The newer ones (LangBot, AstrBot) are LLM-native from the ground up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow tools&lt;/strong&gt; (n8n, Flowise) aren't chatbot frameworks but are increasingly used as AI backends — especially when paired with a dedicated messaging layer.&lt;/p&gt;

&lt;p&gt;If I had to pick one today for a project spanning both Chinese and international platforms, I'd probably go with LangBot + Dify. The Dify integration is officially documented and supported on both sides, and the platform coverage is unmatched. For Western-only deployments, Botpress is the safe choice.&lt;/p&gt;

&lt;p&gt;What's your setup? I'm curious what other people are using — drop a comment.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This comparison is based on my evaluation in February 2026. Stars, features, and project directions change fast in this space.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>chatbot</category>
      <category>opensource</category>
      <category>telegram</category>
    </item>
    <item>
      <title>Deep Dive into the LangBot Plugin System: Process Isolation, Event-Driven Hooks &amp; Component Architecture</title>
      <dc:creator>Rock</dc:creator>
      <pubDate>Sat, 28 Feb 2026 10:36:48 +0000</pubDate>
      <link>https://forem.com/rockchinq/deep-dive-into-the-langbot-plugin-system-process-isolation-event-driven-hooks-component-2b4m</link>
      <guid>https://forem.com/rockchinq/deep-dive-into-the-langbot-plugin-system-process-isolation-event-driven-hooks-component-2b4m</guid>
      <description>&lt;p&gt;Most chatbot frameworks call their "plugin system" a glorified dynamic import of Python modules. LangBot 4.0 takes a harder but more principled approach — &lt;strong&gt;every plugin runs in its own process&lt;/strong&gt;, communicating with the host through a structured JSON-RPC-style protocol.&lt;/p&gt;

&lt;p&gt;This article dissects the system from its source code, end to end.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overall Architecture: A Three-Layer Process Model
&lt;/h2&gt;

&lt;p&gt;LangBot's plugin system consists of three cooperating process layers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sm0x9jliwbrtp86cso4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sm0x9jliwbrtp86cso4.png" alt="LangBot Plugin System Architecture" width="800" height="622"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Each layer has a distinct responsibility:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;LangBot Main Process&lt;/strong&gt;: Runs business logic (message pipelines, platform adapters, model invocations), connects to Runtime via &lt;code&gt;PluginRuntimeConnector&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plugin Runtime&lt;/strong&gt;: The orchestration layer — discovers, launches, and manages all plugin subprocesses, routes requests from the main process to the appropriate plugin.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plugin Subprocesses&lt;/strong&gt;: Each plugin runs in its own Python process, communicating with Runtime via stdio pipes.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Why Three Layers Instead of Two?
&lt;/h3&gt;

&lt;p&gt;The intuitive design would have the main process manage plugin processes directly. LangBot adds the Runtime layer for &lt;strong&gt;deployment flexibility&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Local development&lt;/strong&gt;: Main process spawns Runtime as a child via stdio (zero config)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker production&lt;/strong&gt;: Runtime runs as a separate container, connected via WebSocket&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Windows compatibility&lt;/strong&gt;: Windows asyncio has incomplete stdio subprocess support, so the connection automatically falls back to WebSocket&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The same codebase adapts from development to production with no config changes.&lt;/p&gt;
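&lt;p&gt;A minimal sketch of that selection logic (the function name and mode strings below are my own illustration, not LangBot's actual code):&lt;/p&gt;

```python
import sys

def pick_runtime_transport(deploy_mode: str) -> str:
    """Illustrative sketch: decide how the main process reaches the
    Plugin Runtime, mirroring the three deployment cases above."""
    if deploy_mode == "docker":
        # Runtime runs as a separate container: connect over WebSocket
        return "websocket"
    if sys.platform == "win32":
        # Windows asyncio lacks full stdio subprocess support
        return "websocket"
    # Local development: spawn Runtime as a child and talk over stdio
    return "stdio"
```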

&lt;h2&gt;
  
  
  Communication Protocol: JSON-RPC-Style Request/Response
&lt;/h2&gt;

&lt;p&gt;All cross-process communication runs on a unified protocol layer. The core data structures are minimal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Request
&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ActionRequest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pydantic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;BaseModel&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;seq_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;    &lt;span class="c1"&gt;# Sequence number for matching request/response
&lt;/span&gt;    &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;    &lt;span class="c1"&gt;# Action name
&lt;/span&gt;    &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;     &lt;span class="c1"&gt;# Payload
&lt;/span&gt;
&lt;span class="c1"&gt;# Response
&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ActionResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pydantic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;BaseModel&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;seq_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;
    &lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;           &lt;span class="c1"&gt;# 0 = success
&lt;/span&gt;    &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;
    &lt;span class="n"&gt;chunk_status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;   &lt;span class="c1"&gt;# "continue" | "end" (streaming support)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;Handler&lt;/code&gt; class is the system's core abstraction, acting as both RPC client and server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Handler&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;call_action&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;15.0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Actively call an action provided by the peer, wait for response&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;seq_id_index&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="n"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ActionRequest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;make_request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;seq_id_index&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;future&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Future&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;resp_waiters&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;seq_id_index&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;future&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;model_dump&lt;/span&gt;&lt;span class="p"&gt;()))&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wait_for&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;future&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;

    &lt;span class="nd"&gt;@action&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;SomeAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DO_SOMETHING&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;handle_something&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;ActionResponse&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Register an action for the peer to call&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ActionResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;success&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;result&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ok&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key design points:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;seq_id&lt;/code&gt;-based request/response matching enables full-duplex concurrent calls&lt;/li&gt;
&lt;li&gt;Streaming responses via &lt;code&gt;chunk_status&lt;/code&gt; for long-running operations like command execution&lt;/li&gt;
&lt;li&gt;Large messages auto-chunk (stdio: 16KB / WebSocket: 64KB per chunk)&lt;/li&gt;
&lt;li&gt;File transfer uses a separate base64 chunking mechanism&lt;/li&gt;
&lt;/ul&gt;
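&lt;p&gt;The auto-chunking rule is easy to picture. A hedged sketch with a hypothetical helper name, using the per-transport chunk sizes quoted above:&lt;/p&gt;

```python
def chunk_payload(payload: str, transport: str) -> list:
    """Illustrative sketch (not LangBot's actual helper): split a
    serialized message into fixed-size chunks for the given transport."""
    size = 16 * 1024 if transport == "stdio" else 64 * 1024
    return [payload[i:i + size] for i in range(0, len(payload), size)]
```

A 40 KB message therefore crosses stdio in three chunks but fits in a single WebSocket frame.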

&lt;h3&gt;
  
  
  Action Enums: Clear API Contracts
&lt;/h3&gt;

&lt;p&gt;The system defines all cross-process calls through four enum groups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Plugin → Runtime (plugin-initiated requests)
&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PluginToRuntimeAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;REGISTER_PLUGIN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;register_plugin&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;SEND_MESSAGE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;send_message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;        &lt;span class="c1"&gt;# Send message to a platform
&lt;/span&gt;    &lt;span class="n"&gt;INVOKE_LLM&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;invoke_llm&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;           &lt;span class="c1"&gt;# Call an LLM
&lt;/span&gt;    &lt;span class="n"&gt;SET_PLUGIN_STORAGE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;set_plugin_storage&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;  &lt;span class="c1"&gt;# Persistent storage
&lt;/span&gt;    &lt;span class="c1"&gt;# ...
&lt;/span&gt;
&lt;span class="c1"&gt;# Runtime → Plugin (runtime-dispatched commands)
&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;RuntimeToPluginAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;INITIALIZE_PLUGIN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;initialize_plugin&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;EMIT_EVENT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;emit_event&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;CALL_TOOL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;call_tool&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;EXECUTE_COMMAND&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;execute_command&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;SHUTDOWN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;shutdown&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="c1"&gt;# ...
&lt;/span&gt;
&lt;span class="c1"&gt;# LangBot Main → Runtime
&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;LangBotToRuntimeAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;INSTALL_PLUGIN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;install_plugin&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;EMIT_EVENT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;emit_event&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;LIST_TOOLS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;list_tools&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="c1"&gt;# ...
&lt;/span&gt;
&lt;span class="c1"&gt;# Runtime → LangBot Main
&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;RuntimeToLangBotAction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;GET_PLUGIN_SETTINGS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;get_plugin_settings&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;SET_BINARY_STORAGE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;set_binary_storage&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="c1"&gt;# ...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes API boundaries crystal clear — what a plugin can and cannot do is defined entirely by these enums.&lt;/p&gt;
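&lt;p&gt;To make the routing concrete, here is a toy version of how a handler-side registry can dispatch a request by its action name. All class and function names below are illustrative, not LangBot internals:&lt;/p&gt;

```python
import asyncio

class MiniHandler:
    """Toy action registry: maps action names to coroutine handlers."""
    def __init__(self):
        self.actions = {}

    def action(self, name):
        def deco(fn):
            self.actions[name] = fn
            return fn
        return deco

    async def dispatch(self, request: dict) -> dict:
        fn = self.actions.get(request["action"])
        if fn is None:
            return {"seq_id": request["seq_id"], "code": 1,
                    "message": "unknown action", "data": {}}
        data = await fn(request["data"])
        return {"seq_id": request["seq_id"], "code": 0,
                "message": "ok", "data": data}

h = MiniHandler()

@h.action("list_tools")
async def list_tools(data):
    return {"tools": ["weather"]}

resp = asyncio.run(h.dispatch({"seq_id": 1, "action": "list_tools", "data": {}}))
```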

&lt;h2&gt;
  
  
  Plugin Lifecycle
&lt;/h2&gt;

&lt;p&gt;A plugin passes through five stages, from discovery to shutdown:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Discovery
&lt;/h3&gt;

&lt;p&gt;On startup, Runtime scans the &lt;code&gt;data/plugins/&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;launch_all_plugins&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;plugin_path&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;glob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;glob&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data/plugins/*&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isdir&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;plugin_path&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="k"&gt;continue&lt;/span&gt;
        &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;launch_plugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;plugin_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;plugin_run_tasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Directory names follow the &lt;code&gt;{author}__{name}&lt;/code&gt; convention, each containing a &lt;code&gt;manifest.yaml&lt;/code&gt; and plugin code.&lt;/p&gt;
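&lt;p&gt;Parsing that naming convention is a one-liner (the helper below is my illustration, not part of the LangBot API):&lt;/p&gt;

```python
def parse_plugin_dir(dirname: str) -> tuple:
    """Split a plugin directory name following the {author}__{name}
    convention into its author and plugin-name parts."""
    author, _, name = dirname.partition("__")
    return author, name
```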

&lt;h3&gt;
  
  
  2. Launch
&lt;/h3&gt;

&lt;p&gt;Runtime spawns an independent subprocess for each plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;launch_plugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;plugin_path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;python_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;executable&lt;/span&gt;
    &lt;span class="n"&gt;args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-m&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;langbot_plugin.cli.__init__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;run&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--prod&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="n"&gt;ctrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;StdioClientController&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;python_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;working_dir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;plugin_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Each plugin runs in its own directory
&lt;/span&gt;    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;ctrl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;new_plugin_connection_callback&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key detail&lt;/strong&gt;: The subprocess working directory is set to the plugin's own directory — natural filesystem isolation.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Registration
&lt;/h3&gt;

&lt;p&gt;After starting, the plugin process actively registers itself with Runtime:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Runtime-side registration handler
&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;register_plugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;container_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;debug_plugin&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;plugin_container&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;PluginContainer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;container_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c1"&gt;# Fetch plugin settings from the main process
&lt;/span&gt;    &lt;span class="n"&gt;plugin_settings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;control_handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call_action&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;RuntimeToLangBotAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GET_PLUGIN_SETTINGS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{...}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c1"&gt;# Initialize the plugin (send config)
&lt;/span&gt;    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;initialize_plugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;plugin_settings&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c1"&gt;# Store the plugin container
&lt;/span&gt;    &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;plugins&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;plugin_container&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Running
&lt;/h3&gt;

&lt;p&gt;Once in &lt;code&gt;INITIALIZED&lt;/code&gt; state, the plugin can receive events, tool calls, and command executions.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Shutdown
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;shutdown_plugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;plugin_container&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# 1. Notify the plugin to shut down gracefully
&lt;/span&gt;    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;plugin_container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_runtime_plugin_handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;shutdown_plugin&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="c1"&gt;# 2. Close the communication connection
&lt;/span&gt;    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;plugin_container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_runtime_plugin_handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="c1"&gt;# 3. Kill the subprocess
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stdio_process&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stdio_process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;kill&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wait_for&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stdio_process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wait&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Component System: Four Extension Types
&lt;/h2&gt;

&lt;p&gt;A LangBot plugin isn't a single hook function — it's a &lt;strong&gt;component container&lt;/strong&gt;. A single plugin can provide multiple component types simultaneously:&lt;/p&gt;

&lt;h3&gt;
  
  
  EventListener
&lt;/h3&gt;

&lt;p&gt;The most fundamental extension — listen for events in the message pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langbot_plugin.api.definition.components.common.event_listener&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;EventListener&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langbot_plugin.api.entities.events&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;PersonNormalMessageReceived&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langbot_plugin.api.entities.context&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;EventContext&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MyListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;EventListener&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nd"&gt;@EventListener.handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;PersonNormalMessageReceived&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;on_person_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;EventContext&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;
        &lt;span class="c1"&gt;# Modify the user message before it reaches the LLM
&lt;/span&gt;        &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user_message_alter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Answer in poetry: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text_message&lt;/span&gt;

        &lt;span class="c1"&gt;# Or block further processing
&lt;/span&gt;        &lt;span class="c1"&gt;# ctx.prevent_default()
&lt;/span&gt;        &lt;span class="c1"&gt;# ctx.prevent_postorder()
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Supported events cover the full message lifecycle:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Event&lt;/th&gt;
&lt;th&gt;Trigger&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PersonMessageReceived&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Any private message received&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;GroupMessageReceived&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Any group message received&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PersonNormalMessageReceived&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Private message deemed processable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;GroupNormalMessageReceived&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Group message deemed processable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;NormalMessageResponded&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;LLM response completed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PromptPreProcessing&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Prompt preprocessing stage&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Event propagation supports two interruption modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;prevent_default()&lt;/code&gt;: Skip default behavior (e.g., skip the LLM call)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;prevent_postorder()&lt;/code&gt;: Stop subsequent plugins from running&lt;/li&gt;
&lt;/ul&gt;
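&lt;p&gt;A toy dispatch loop shows how the two flags interact (class and function names below are mine, not LangBot's):&lt;/p&gt;

```python
class Ctx:
    """Minimal stand-in for an event context with the two flags above."""
    def __init__(self):
        self.default_prevented = False
        self.postorder_prevented = False

    def prevent_default(self):
        self.default_prevented = True

    def prevent_postorder(self):
        self.postorder_prevented = True

def emit(listeners, ctx):
    for listener in listeners:
        listener(ctx)
        if ctx.postorder_prevented:
            break  # later plugins never see the event
    return not ctx.default_prevented  # False means: skip the default LLM call

calls = []
ctx = Ctx()

def first(c):
    calls.append("first")
    c.prevent_postorder()  # stop subsequent plugins

def second(c):
    calls.append("second")

run_default = emit([first, second], ctx)
```

After this run, only the first listener fired, yet the default behavior still executes because `prevent_default()` was never called.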

&lt;h3&gt;
  
  
  Tool
&lt;/h3&gt;

&lt;p&gt;Tools for LLM Function Calling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langbot_plugin.api.definition.components.tool.tool&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Tool&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;WeatherTool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Tool&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;query_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;city&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;city&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Beijing&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="c1"&gt;# Call weather API...
&lt;/span&gt;        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: Sunny, 25°C&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tool metadata (name, description, parameter schema) is defined in a companion YAML manifest file. LangBot automatically converts this into the Function definition that LLMs understand.&lt;/p&gt;
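&lt;p&gt;As a sketch of that conversion (the manifest fields and helper below are illustrative, not LangBot's actual schema), a tool's metadata maps naturally onto an OpenAI-style function definition:&lt;/p&gt;

```python
# Hypothetical tool metadata, as it might appear in the companion YAML manifest.
tool_metadata = {
    "name": "get_weather",
    "description": "Query current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

def to_function_definition(meta: dict) -> dict:
    """Convert tool metadata into an OpenAI-style function definition."""
    return {
        "type": "function",
        "function": {
            "name": meta["name"],
            "description": meta["description"],
            "parameters": meta["parameters"],
        },
    }

print(to_function_definition(tool_metadata)["function"]["name"])  # get_weather
```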

&lt;h3&gt;
  
  
  Command
&lt;/h3&gt;

&lt;p&gt;User-triggered commands via &lt;code&gt;!command&lt;/code&gt;, with subcommand support:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langbot_plugin.api.definition.components.command.command&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Command&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MyCommand&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Command&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;super&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

        &lt;span class="nd"&gt;@self.subcommand&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hello&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;help&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Say hello&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;hello&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="nc"&gt;CommandReturn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello from plugin!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="nd"&gt;@self.subcommand&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;help&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Show status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="nc"&gt;CommandReturn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;All systems operational.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Command results are returned via &lt;code&gt;AsyncGenerator&lt;/code&gt;, providing natural streaming output.&lt;/p&gt;
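&lt;p&gt;A minimal sketch of how such an async-generator handler streams its results, using a simplified stand-in for &lt;code&gt;CommandReturn&lt;/code&gt;:&lt;/p&gt;

```python
import asyncio
from dataclasses import dataclass

@dataclass
class CommandReturn:  # simplified stand-in for the SDK's CommandReturn
    text: str

async def hello(ctx):
    # A handler can yield multiple times; each yield is streamed to the chat.
    yield CommandReturn(text="Working on it...")
    await asyncio.sleep(0)  # e.g. call an external API here
    yield CommandReturn(text="Done!")

async def run():
    chunks = []
    async for ret in hello(ctx=None):
        chunks.append(ret.text)  # the framework would forward each chunk
    return chunks

print(asyncio.run(run()))  # ['Working on it...', 'Done!']
```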

&lt;h3&gt;
  
  
  KnowledgeRetriever
&lt;/h3&gt;

&lt;p&gt;A multi-instance component for connecting external knowledge bases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langbot_plugin.api.definition.components.knowledge_retriever.retriever&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;KnowledgeRetriever&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MyRetriever&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;KnowledgeRetriever&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;retrieve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search_external_db&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;RetrievalResultEntry&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;KnowledgeRetriever is a &lt;strong&gt;polymorphic component&lt;/strong&gt; — a single retriever class can spawn multiple instances, each with independent configuration. This allows users to connect multiple different external knowledge bases.&lt;/p&gt;
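&lt;p&gt;A sketch of the multi-instance idea with a simplified stand-in class (the config fields and backends are hypothetical):&lt;/p&gt;

```python
class MyRetriever:
    """Simplified stand-in: each instance carries its own connection config."""
    def __init__(self, config: dict):
        self.config = config

    def describe(self) -> str:
        return f"{self.config['backend']} @ {self.config['endpoint']}"

# One retriever class, several independently configured instances.
instances = [
    MyRetriever({"backend": "elasticsearch", "endpoint": "http://es:9200"}),
    MyRetriever({"backend": "pgvector", "endpoint": "postgres://db:5432"}),
]
print([r.describe() for r in instances])
```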

&lt;h2&gt;
  
  
  SDK API: What Plugins Can Do
&lt;/h2&gt;

&lt;p&gt;Plugins gain rich capabilities through the &lt;code&gt;LangBotAPIProxy&lt;/code&gt; inherited by &lt;code&gt;BasePlugin&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;LangBotAPIProxy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Message operations
&lt;/span&gt;    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;send_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;bot_uuid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message_chain&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Model invocation
&lt;/span&gt;    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_llm_models&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;invoke_llm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model_uuid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;funcs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt; &lt;span class="n"&gt;extra_args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;

    &lt;span class="c1"&gt;# Persistent storage (plugin-level isolation)
&lt;/span&gt;    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;set_plugin_storage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_plugin_storage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bytes&lt;/span&gt;

    &lt;span class="c1"&gt;# Workspace storage (cross-plugin shared)
&lt;/span&gt;    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;set_workspace_storage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_workspace_storage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bytes&lt;/span&gt;

    &lt;span class="c1"&gt;# System info
&lt;/span&gt;    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_langbot_version&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_bots&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;list_plugins_manifest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The storage API design is worth noting&lt;/strong&gt;: two levels of KV storage, &lt;code&gt;plugin_storage&lt;/code&gt; (plugin-private) and &lt;code&gt;workspace_storage&lt;/code&gt; (shared across plugins), both storing values as raw bytes (base64-serialized in transit). Simple, but flexible enough to hold any serialized payload.&lt;/p&gt;
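&lt;p&gt;Since values are plain bytes, structured data just needs a serialization step on the way in and out. A sketch with an in-memory stand-in for the storage backend:&lt;/p&gt;

```python
import base64
import json

storage = {}  # in-memory stand-in for the plugin storage backend

def set_plugin_storage(key: str, value: bytes) -> None:
    # The SDK transports bytes; base64 here mimics the in-transit encoding.
    storage[key] = base64.b64encode(value).decode("ascii")

def get_plugin_storage(key: str) -> bytes:
    return base64.b64decode(storage[key])

# Store structured data by serializing it to bytes first.
set_plugin_storage("prefs", json.dumps({"lang": "en"}).encode("utf-8"))
prefs = json.loads(get_plugin_storage("prefs"))
print(prefs)  # {'lang': 'en'}
```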

&lt;h2&gt;
  
  
  Event Dispatch Mechanism
&lt;/h2&gt;

&lt;p&gt;The complete path from main process to plugin:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo8ma6l3zd6vdp6m7j4a8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo8ma6l3zd6vdp6m7j4a8.png" alt="Event Dispatch Flow" width="800" height="750"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Key source code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;emit_event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;event_context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;include_plugins&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;plugin&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;plugins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;plugin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="n"&gt;RuntimeContainerStatus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INITIALIZED&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;continue&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;plugin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;enabled&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;continue&lt;/span&gt;

        &lt;span class="c1"&gt;# Pipeline-level plugin filtering
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;include_plugins&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;plugin_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;plugin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;manifest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;author&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;plugin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;manifest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;plugin_id&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;include_plugins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;continue&lt;/span&gt;

        &lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;plugin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_runtime_plugin_handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;emit_event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;event_context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;model_dump&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;event_context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;EventContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;model_validate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;event_context&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

        &lt;span class="c1"&gt;# Plugin requested propagation stop
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;event_context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;is_prevented_postorder&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
            &lt;span class="k"&gt;break&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;emitted_plugins&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;event_context&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;include_plugins&lt;/code&gt; parameter enables &lt;strong&gt;pipeline-level plugin binding&lt;/strong&gt; — different message processing pipelines can use different subsets of plugins.&lt;/p&gt;
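&lt;p&gt;The filtering logic boils down to matching &lt;code&gt;author/name&lt;/code&gt; IDs against the pipeline's allowed set, as in this standalone sketch (the plugin entries are illustrative):&lt;/p&gt;

```python
def plugin_id(manifest: dict) -> str:
    # IDs follow the "author/name" convention used in emit_event above.
    return f"{manifest['author']}/{manifest['name']}"

plugins = [
    {"author": "rockchinq", "name": "LangTARS"},
    {"author": "alice", "name": "Translator"},
]
include_plugins = {"rockchinq/LangTARS"}  # this pipeline's bound subset

selected = [p for p in plugins if plugin_id(p) in include_plugins]
print([plugin_id(p) for p in selected])  # ['rockchinq/LangTARS']
```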

&lt;h2&gt;
  
  
  Installation &amp;amp; Distribution
&lt;/h2&gt;

&lt;p&gt;Plugins support three installation sources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Local upload&lt;/strong&gt;: &lt;code&gt;.lbpkg&lt;/code&gt; files (actually zip archives containing manifest.yaml and code)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Marketplace&lt;/strong&gt;: Install from LangBot Space online&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Release&lt;/strong&gt;: Download from a GitHub repository's Release assets&lt;/li&gt;
&lt;/ol&gt;
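&lt;p&gt;Because an &lt;code&gt;.lbpkg&lt;/code&gt; file is just a zip archive, the standard library can inspect one. Here a minimal archive is built in memory to stand in for a real package:&lt;/p&gt;

```python
import io
import zipfile

# Build a minimal archive in memory to stand in for a real .lbpkg file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("manifest.yaml", "name: demo\n")
    zf.writestr("main.py", "print('hello')\n")

# Since .lbpkg is a zip archive, zipfile can list and read its contents.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
print(names)  # ['manifest.yaml', 'main.py']
```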

&lt;p&gt;The installation flow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;install_plugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;install_info&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;current_action&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;downloading plugin package&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="c1"&gt;# 1. Fetch and extract the plugin package (unzip)
&lt;/span&gt;    &lt;span class="n"&gt;plugin_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;author&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;install_plugin_from_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;plugin_file&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;current_action&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;installing dependencies&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="c1"&gt;# 2. Install dependencies (pip install -r requirements.txt)
&lt;/span&gt;    &lt;span class="n"&gt;pkgmgr_helper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;install_requirements&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;requirements_file&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;current_action&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;initializing plugin settings&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="c1"&gt;# 3. Initialize configuration
&lt;/span&gt;    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;control_handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call_action&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;RuntimeToLangBotAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INITIALIZE_PLUGIN_SETTINGS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{...}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;current_action&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;launching plugin&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="c1"&gt;# 4. Launch the plugin process
&lt;/span&gt;    &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;launch_plugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;plugin_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The entire process reports progress via &lt;code&gt;AsyncGenerator&lt;/code&gt;, enabling real-time installation status in the frontend.&lt;/p&gt;
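&lt;p&gt;On the receiving side, the consumer simply iterates the generator, forwarding each progress dict as it arrives. A simplified sketch that mimics the four stages above:&lt;/p&gt;

```python
import asyncio

async def install_plugin():
    # Simplified stand-in: yield a progress dict at each installation stage.
    for action in ("downloading plugin package",
                   "installing dependencies",
                   "initializing plugin settings",
                   "launching plugin"):
        yield {"current_action": action}

async def run_install():
    seen = []
    async for progress in install_plugin():
        seen.append(progress["current_action"])  # push to the frontend here
    return seen

print(asyncio.run(run_install()))
```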

&lt;h2&gt;
  
  
  Developer Experience
&lt;/h2&gt;

&lt;p&gt;The SDK provides a complete developer toolchain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Initialize a new plugin&lt;/span&gt;
lbp init

&lt;span class="c"&gt;# Add a component&lt;/span&gt;
lbp component add

&lt;span class="c"&gt;# Run locally for debugging&lt;/span&gt;
lbp run

&lt;span class="c"&gt;# Package for publication&lt;/span&gt;
lbp publish
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Debug mode&lt;/strong&gt; has a particularly clever design: the developer's plugin connects to the running Runtime via WebSocket (instead of stdio), meaning you can hot-reload plugin code without restarting LangBot. Debug plugins are specially marked in the UI and protected from accidental deletion.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparisons with Other Systems
&lt;/h2&gt;

&lt;h3&gt;
  
  
  vs Dify Plugins
&lt;/h3&gt;

&lt;p&gt;Dify's plugin system (&lt;code&gt;dify-plugin-daemon&lt;/code&gt;) shares the process isolation philosophy with LangBot, but the focus differs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dify&lt;/strong&gt;: Plugins extend workflow node types (Tool, Model, Extension) — designed for AI application orchestration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LangBot&lt;/strong&gt;: Plugins extend the message processing pipeline (Event, Tool, Command, KnowledgeRetriever) — designed for instant messaging scenarios&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LangBot's &lt;code&gt;EventListener&lt;/code&gt; component provides a capability Dify lacks — injecting logic at any stage of message processing.&lt;/p&gt;

&lt;h3&gt;
  
  
  vs MCP (Model Context Protocol)
&lt;/h3&gt;

&lt;p&gt;MCP is a standardized protocol for AI tool invocation. LangBot's Tool component and MCP services overlap functionally, but serve different purposes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MCP&lt;/strong&gt;: A universal "AI calls external capabilities" protocol, usable by any LLM application&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LangBot Tool&lt;/strong&gt;: Deeply integrated with message processing context, with access to session info, user identity, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, LangBot natively supports MCP — users can configure MCP servers directly in LangBot without writing plugins. LangBot's Tool component is for scenarios requiring access to LangBot's internal context.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design Decisions Explained
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why process isolation instead of threads/coroutines?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Plugin code quality is unpredictable; a segfault shouldn't crash the entire service&lt;/li&gt;
&lt;li&gt;Dependency isolation: different plugins may depend on different versions of the same library&lt;/li&gt;
&lt;li&gt;Resource control: you can set per-plugin process resource limits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why JSON instead of Protobuf/MessagePack?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Debug-friendly: developers can directly read communication logs&lt;/li&gt;
&lt;li&gt;Natively supported in Python, no extra dependencies&lt;/li&gt;
&lt;li&gt;Serialization isn't the bottleneck: plugin calls happen far less frequently than, say, database queries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why stdio over WebSocket by default?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;stdio requires no network stack — lower latency&lt;/li&gt;
&lt;li&gt;Simpler process lifecycle management (child processes auto-cleanup when parent exits)&lt;/li&gt;
&lt;li&gt;WebSocket is only used where stdio isn't supported (Docker, Windows)&lt;/li&gt;
&lt;/ul&gt;
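&lt;p&gt;A minimal illustration of the stdio transport idea: a toy parent and child exchanging one JSON line each over stdin/stdout (this is not LangBot's actual wire protocol):&lt;/p&gt;

```python
import json
import subprocess
import sys

# The child reads one JSON line from stdin and answers with one JSON line.
child_code = (
    "import json, sys\n"
    "req = json.loads(sys.stdin.readline())\n"
    "print(json.dumps({'echo': req['action']}))\n"
)

# The parent sends a request and reads the reply; no network stack involved.
proc = subprocess.run(
    [sys.executable, "-c", child_code],
    input=json.dumps({"action": "ping"}) + "\n",
    capture_output=True, text=True,
)
print(json.loads(proc.stdout))  # {'echo': 'ping'}
```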

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;LangBot's plugin system is a &lt;strong&gt;production-grade, process-isolated, event-driven component framework for extensibility&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Its core design principles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Safety first&lt;/strong&gt;: Process isolation ensures plugins can't destabilize the main service&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment flexibility&lt;/strong&gt;: Dual stdio/WebSocket modes adapt to all environments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developer-friendly&lt;/strong&gt;: Complete SDK, CLI, and debug support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Component-based&lt;/strong&gt;: Four component types cover the major extension needs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you're interested in developing LangBot plugins, start with the &lt;a href="https://docs.langbot.app/en/plugin/dev/tutor" rel="noopener noreferrer"&gt;plugin development docs&lt;/a&gt;, or browse existing plugins on the &lt;a href="https://space.langbot.app/market" rel="noopener noreferrer"&gt;marketplace&lt;/a&gt; for inspiration.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://blog.langbot.app/en/posts/langbot-plugin-system-deep-dive/" rel="noopener noreferrer"&gt;LangBot Blog&lt;/a&gt;. Star us on &lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>python</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How to Connect DeepSeek R1 to WeChat, Discord &amp; Telegram in 5 Minutes (FREE)</title>
      <dc:creator>Rock</dc:creator>
      <pubDate>Sat, 28 Feb 2026 10:36:12 +0000</pubDate>
      <link>https://forem.com/rockchinq/how-to-connect-deepseek-r1-to-wechat-discord-telegram-in-5-minutes-free-2bdc</link>
      <guid>https://forem.com/rockchinq/how-to-connect-deepseek-r1-to-wechat-discord-telegram-in-5-minutes-free-2bdc</guid>
      <description>&lt;p&gt;DeepSeek has taken the AI world by storm. Its R1 reasoning model rivals OpenAI's o1 but is open-source and significantly cheaper.&lt;/p&gt;

&lt;p&gt;However, the official DeepSeek app currently only supports "AI Search" in WeChat, and direct integration into group chats or private workflows is limited.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What if you could have a fully functional DeepSeek R1 bot in your WeChat groups, Discord servers, and Telegram chats right now?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With &lt;strong&gt;LangBot&lt;/strong&gt;, you can. And it takes less than 5 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why LangBot?
&lt;/h2&gt;

&lt;p&gt;LangBot is an open-source, production-grade IM bot platform. Unlike simple scripts or single-platform bots, LangBot gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Unified Platform:&lt;/strong&gt; Connect once, deploy to WeChat, Discord, Telegram, Slack, Lark, and more.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Model Agnostic:&lt;/strong&gt; Use DeepSeek R1, V3, Claude 3.5, GPT-4, or local Ollama models.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;No Coding:&lt;/strong&gt; A robust WebUI handles all configuration.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Enterprise Features:&lt;/strong&gt; Knowledge base (RAG), Plugin system, and multi-user management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;A Server or PC:&lt;/strong&gt; Docker installed (VPS, local computer, or Synology/NAS).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;DeepSeek API Key:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Official:&lt;/strong&gt; &lt;a href="https://platform.deepseek.com" rel="noopener noreferrer"&gt;platform.deepseek.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;SiliconFlow (Recommended for speed):&lt;/strong&gt; &lt;a href="https://cloud.siliconflow.cn" rel="noopener noreferrer"&gt;cloud.siliconflow.cn&lt;/a&gt; (Faster R1 inference).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 1: Deploy LangBot
&lt;/h2&gt;

&lt;p&gt;If you haven't deployed LangBot yet, run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/langbot-app/LangBot
&lt;span class="nb"&gt;cd &lt;/span&gt;LangBot/docker
docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access your dashboard at &lt;code&gt;http://localhost:5300&lt;/code&gt; (or your server IP).&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Configure DeepSeek Model
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Go to &lt;strong&gt;Models&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Provider List&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; Find &lt;strong&gt;DeepSeek&lt;/strong&gt; (or &lt;strong&gt;SiliconFlow&lt;/strong&gt; / &lt;strong&gt;OpenAI Compatible&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt; Enter your API Key.&lt;/li&gt;
&lt;li&gt; Click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.langbot.app%2Fimages%2Fblog%2Fdeepseek-wechat%2Fmodel-config.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.langbot.app%2Fimages%2Fblog%2Fdeepseek-wechat%2Fmodel-config.png" alt="DeepSeek Model Configuration" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now create a &lt;strong&gt;Model Instance&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Name:&lt;/strong&gt; &lt;code&gt;DeepSeek-R1-Bot&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Model:&lt;/strong&gt; &lt;code&gt;deepseek-reasoner&lt;/code&gt; (for R1) or &lt;code&gt;deepseek-chat&lt;/code&gt; (for V3).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 3: Create a Chat Pipeline
&lt;/h2&gt;

&lt;p&gt;LangBot uses "Pipelines" to manage bot logic.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Go to &lt;strong&gt;Pipelines&lt;/strong&gt; -&amp;gt; &lt;strong&gt;New Pipeline&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; Choose &lt;strong&gt;Chat Pipeline&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; Select your &lt;code&gt;DeepSeek-R1-Bot&lt;/code&gt; model.&lt;/li&gt;
&lt;li&gt; (Optional) Add a System Prompt, e.g. &lt;em&gt;"You are a helpful assistant powered by DeepSeek R1. You think deeply before answering."&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 4: Connect to WeChat (or any platform)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  For WeChat (Personal / Wechaty)
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Note: Personal WeChat access relies on third-party libraries and carries some risk. For business use, we recommend Enterprise WeChat (WeCom).&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Go to &lt;strong&gt;Bots&lt;/strong&gt; -&amp;gt; &lt;strong&gt;New Bot&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; Select &lt;strong&gt;WeChat&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; Choose the &lt;strong&gt;GeweChat&lt;/strong&gt; or &lt;strong&gt;Wechaty&lt;/strong&gt; adapter (depending on your preference).&lt;/li&gt;
&lt;li&gt; Click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; Scan the QR code that appears in the logs or UI.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  For Discord / Telegram
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Select &lt;strong&gt;Discord&lt;/strong&gt; or &lt;strong&gt;Telegram&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; Paste your &lt;strong&gt;Bot Token&lt;/strong&gt; (from Discord Developer Portal or BotFather).&lt;/li&gt;
&lt;li&gt; Click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
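&lt;p&gt;Before saving, you can sanity-check a Telegram token with the Bot API's &lt;code&gt;getMe&lt;/code&gt; method (replace the placeholder with the token BotFather gave you):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# A valid token returns {"ok":true,...} including your bot's username
curl "https://api.telegram.org/bot&amp;lt;YOUR_BOT_TOKEN&amp;gt;/getMe"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;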

&lt;h2&gt;
  
  
  Step 5: Test It Out!
&lt;/h2&gt;

&lt;p&gt;Open your chat app and send a message. You should see DeepSeek R1 "thinking" (if supported) and replying with high-quality reasoning.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.langbot.app%2Fimages%2Fblog%2Fdeepseek-wechat%2Fwechat-demo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.langbot.app%2Fimages%2Fblog%2Fdeepseek-wechat%2Fwechat-demo.png" alt="WeChat Demo with DeepSeek" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced: Using Search &amp;amp; Tools
&lt;/h2&gt;

&lt;p&gt;DeepSeek is great, but DeepSeek with &lt;strong&gt;Internet Access&lt;/strong&gt; is better.&lt;/p&gt;

&lt;p&gt;In LangBot:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Go to &lt;strong&gt;Plugins&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; Install &lt;strong&gt;Tavily Search&lt;/strong&gt; or &lt;strong&gt;Google Search&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; Add the plugin to your DeepSeek Pipeline.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now your bot can search the web for real-time info &lt;em&gt;before&lt;/em&gt; reasoning with DeepSeek R1!&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Ready to build?&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;Star us on GitHub&lt;/a&gt; or &lt;a href="https://docs.langbot.app" rel="noopener noreferrer"&gt;Read the Docs&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://blog.langbot.app/en/posts/connect-deepseek-to-wechat/" rel="noopener noreferrer"&gt;LangBot Blog&lt;/a&gt;. Star us on &lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>deepseek</category>
      <category>chatbot</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Deploy Your Own AI Bot to Discord, Telegram &amp; WeChat in 5 Minutes</title>
      <dc:creator>Rock</dc:creator>
      <pubDate>Sat, 28 Feb 2026 10:35:59 +0000</pubDate>
      <link>https://forem.com/rockchinq/deploy-your-own-ai-bot-to-discord-telegram-wechat-in-5-minutes-4ppo</link>
      <guid>https://forem.com/rockchinq/deploy-your-own-ai-bot-to-discord-telegram-wechat-in-5-minutes-4ppo</guid>
      <description>&lt;p&gt;What if you could have GPT-5, Claude, DeepSeek, and Gemini all answering questions in your Discord server, Telegram group, and WeChat — at the same time?&lt;/p&gt;

&lt;p&gt;No API wrangling. No weeks of development. Just one Docker command.&lt;/p&gt;

&lt;p&gt;That's &lt;strong&gt;LangBot&lt;/strong&gt; — and it just crossed &lt;strong&gt;15,000 stars&lt;/strong&gt; on GitHub.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem Everyone Faces
&lt;/h2&gt;

&lt;p&gt;You want an AI assistant in your team's chat. Maybe for customer support on Telegram. Maybe for a coding helper in Discord. Maybe for a knowledge base bot in your company's WeChat or Lark group.&lt;/p&gt;

&lt;p&gt;But then reality hits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each platform has its own bot API, webhook format, and auth flow&lt;/li&gt;
&lt;li&gt;You need to handle message queuing, session management, and error recovery&lt;/li&gt;
&lt;li&gt;Switching LLM providers means rewriting your integration layer&lt;/li&gt;
&lt;li&gt;Adding RAG or tool calling is yet another project&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LangBot solves all of this with a single, unified platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes LangBot Different
&lt;/h2&gt;

&lt;h3&gt;
  
  
  13+ Messaging Platforms, One Codebase
&lt;/h3&gt;

&lt;p&gt;Deploy a single LangBot instance and connect it to:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Global:&lt;/strong&gt; Discord, Telegram, Slack, LINE, WhatsApp&lt;br&gt;
&lt;strong&gt;Asia:&lt;/strong&gt; WeChat (Official Account), WeCom, QQ, Lark, DingTalk, Feishu, KOOK&lt;/p&gt;

&lt;p&gt;Each platform gets its own adapter — you just fill in your bot token in the WebUI and you're live.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90khrwhkytl60z4tdwbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90khrwhkytl60z4tdwbf.png" alt="LangBot Bot Management Page" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  20+ LLM Models, Zero Lock-in
&lt;/h3&gt;

&lt;p&gt;Through &lt;strong&gt;LangBot Space&lt;/strong&gt;, you get instant access to 20 cloud models out of the box — no API keys to manage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude&lt;/strong&gt; (Opus 4.6, Sonnet 4.5, Haiku 4.5)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPT&lt;/strong&gt; (GPT-5.2, GPT-5-mini, GPT-4.1-mini)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini&lt;/strong&gt; (3 Pro, 2.5 Pro, 2.5 Flash)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DeepSeek&lt;/strong&gt; (R1, V3)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grok&lt;/strong&gt; (4, 4.1)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Qwen&lt;/strong&gt; (3 Max)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or add your own providers — OpenAI-compatible endpoints, Ollama for local models, any provider you want.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05uwoqdrvj6l28xn4ura.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05uwoqdrvj6l28xn4ura.png" alt="Model Selection" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Built-in Agent with Tool Calling
&lt;/h3&gt;

&lt;p&gt;LangBot's Local Agent isn't just a chat wrapper — it's a full agent runtime:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-round conversations&lt;/strong&gt; with configurable memory&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Function calling / tool use&lt;/strong&gt; for LLM-driven actions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP (Model Context Protocol)&lt;/strong&gt; support for connecting to 100+ pre-built tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Knowledge base (RAG)&lt;/strong&gt; with built-in vector search&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9i18k1kgf8sietrj5ge5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9i18k1kgf8sietrj5ge5.png" alt="Pipeline AI Configuration" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Plugin Marketplace
&lt;/h3&gt;

&lt;p&gt;37+ community plugins and growing — install with one click:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;WebSearch&lt;/strong&gt; — Let your bot search the web&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Image Generator&lt;/strong&gt; — Generate images from text&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LinkAnaly&lt;/strong&gt; — Auto-preview links in chat&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ScheNotify&lt;/strong&gt; — Schedule reminders with natural language&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google Search&lt;/strong&gt;, &lt;strong&gt;Tavily Search&lt;/strong&gt;, &lt;strong&gt;RAGFlow Retriever&lt;/strong&gt;, and more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjg96f3wdgf05ptvayg2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjg96f3wdgf05ptvayg2z.png" alt="LangBot Space Plugin Market" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy in 5 Minutes — For Real
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Run Docker Compose
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/langbot-app/LangBot
&lt;span class="nb"&gt;cd &lt;/span&gt;LangBot/docker
docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. LangBot is now running at &lt;code&gt;http://localhost:5300&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Initialize with LangBot Space
&lt;/h3&gt;

&lt;p&gt;Open the WebUI and click &lt;strong&gt;"Initialize with Space"&lt;/strong&gt;. This connects your instance to LangBot Space, giving you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;20 cloud models ready to use (with free credits)&lt;/li&gt;
&lt;li&gt;One-click plugin installation&lt;/li&gt;
&lt;li&gt;Managed API keys&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxu0iixs730ea7wh0m0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxu0iixs730ea7wh0m0d.png" alt="Initialize with Space" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Configure Your Pipeline
&lt;/h3&gt;

&lt;p&gt;Go to &lt;strong&gt;Pipelines&lt;/strong&gt; and edit the default &lt;code&gt;ChatPipeline&lt;/code&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select your model (e.g., &lt;code&gt;deepseek-v3&lt;/code&gt;, &lt;code&gt;gpt-5-mini&lt;/code&gt;, &lt;code&gt;claude-sonnet-4-5&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Customize the system prompt&lt;/li&gt;
&lt;li&gt;Optionally attach a knowledge base or enable tools&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwemw66bvemzpcd4nnjww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwemw66bvemzpcd4nnjww.png" alt="Pipelines Page" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Connect a Platform
&lt;/h3&gt;

&lt;p&gt;Go to &lt;strong&gt;Bots&lt;/strong&gt; → click &lt;strong&gt;+&lt;/strong&gt; → choose your platform (Discord, Telegram, etc.) → enter your bot token.&lt;/p&gt;

&lt;p&gt;Done. Your bot is live.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Test It
&lt;/h3&gt;

&lt;p&gt;Use the built-in &lt;strong&gt;Debug Chat&lt;/strong&gt; to test your pipeline before going live:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa516shvixp8hp0n1qnd7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa516shvixp8hp0n1qnd7.png" alt="Debug Chat" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Real Conversations, Real Value
&lt;/h2&gt;

&lt;p&gt;Here's what it looks like when LangBot is running in a QQ group — users asking technical questions and getting instant, accurate answers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lz6nqvv9bhqvpi7mgc1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lz6nqvv9bhqvpi7mgc1.png" alt="Group Chat Demo" width="671" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And in private chat:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2baqusyqlk5em6i70cgz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2baqusyqlk5em6i70cgz.png" alt="Private Chat Demo" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture That Scales
&lt;/h2&gt;

&lt;p&gt;LangBot is built for production:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pipeline architecture&lt;/strong&gt; — each bot binds to a pipeline; pipelines handle AI logic, triggers, safety controls, and output formatting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-process plugin isolation&lt;/strong&gt; — a bad plugin can't crash your bot&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple runner backends&lt;/strong&gt; — use LangBot's Local Agent, or connect to Dify, n8n, Langflow, Coze for complex workflows&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database flexibility&lt;/strong&gt; — SQLite for dev, PostgreSQL for production&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector DB options&lt;/strong&gt; — Chroma, Qdrant, Milvus, pgvector, SeekDB&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why 15,000+ Developers Choose LangBot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;LangBot&lt;/th&gt;
&lt;th&gt;Building from Scratch&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Platforms&lt;/td&gt;
&lt;td&gt;13+ ready&lt;/td&gt;
&lt;td&gt;Weeks per platform&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LLM Providers&lt;/td&gt;
&lt;td&gt;20+ models&lt;/td&gt;
&lt;td&gt;Manual integration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent Runtime&lt;/td&gt;
&lt;td&gt;Built-in&lt;/td&gt;
&lt;td&gt;Build your own&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RAG&lt;/td&gt;
&lt;td&gt;Native + external&lt;/td&gt;
&lt;td&gt;Separate project&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Plugin System&lt;/td&gt;
&lt;td&gt;Marketplace&lt;/td&gt;
&lt;td&gt;DIY&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Deployment&lt;/td&gt;
&lt;td&gt;&lt;code&gt;docker compose up&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Days of setup&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;WebUI&lt;/td&gt;
&lt;td&gt;Included&lt;/td&gt;
&lt;td&gt;Build your own&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Get Started
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;github.com/langbot-app/LangBot&lt;/a&gt; — give us a star!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation:&lt;/strong&gt; &lt;a href="https://docs.langbot.app" rel="noopener noreferrer"&gt;docs.langbot.app&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plugin Market:&lt;/strong&gt; &lt;a href="https://space.langbot.app" rel="noopener noreferrer"&gt;space.langbot.app&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/langbot-app/LangBot
&lt;span class="nb"&gt;cd &lt;/span&gt;LangBot/docker
docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your AI bot empire starts with a single &lt;code&gt;docker compose up&lt;/code&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://blog.langbot.app/en/posts/deploy-ai-bot-in-5-minutes/" rel="noopener noreferrer"&gt;LangBot Blog&lt;/a&gt;. Star us on &lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>chatbot</category>
      <category>opensource</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Finally Got My Dify Agent Working in Discord, Telegram and Slack</title>
      <dc:creator>Rock</dc:creator>
      <pubDate>Thu, 11 Dec 2025 04:27:01 +0000</pubDate>
      <link>https://forem.com/rockchinq/finally-got-my-dify-agent-working-in-discord-telegram-and-slack-4g2d</link>
      <guid>https://forem.com/rockchinq/finally-got-my-dify-agent-working-in-discord-telegram-and-slack-4g2d</guid>
      <description>&lt;p&gt;Want your Dify Agent to break free from the browser and chat directly in WeChat, QQ, or Telegram? Combine LangBot with Dify and you can set it up in under 10 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Combo?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;&lt;strong&gt;LangBot&lt;/strong&gt;&lt;/a&gt; is the most powerful open-source multi-platform chatbot framework available. It supports major Chinese IMs (QQ, WeChat, Feishu, DingTalk) and international platforms (Telegram, Discord, Slack, LINE). Its core strength is connecting AI backends to messaging apps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dify&lt;/strong&gt; is one of the hottest AI app development platforms, offering visual Agent orchestration, 50+ built-in tools, and RAG knowledge bases. Together, they let you build a multi-platform AI assistant fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy LangBot
&lt;/h2&gt;

&lt;p&gt;Three commands with uvx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;langbot-instance
&lt;span class="nb"&gt;cd &lt;/span&gt;langbot-instance
uvx langbot@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Visit &lt;code&gt;http://127.0.0.1:5300&lt;/code&gt; and register an admin account:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6umjm76ee3tcibw2dx9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6umjm76ee3tcibw2dx9.png" alt="LangBot Dashboard" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Dify Agent
&lt;/h2&gt;

&lt;p&gt;Log in to &lt;a href="https://cloud.dify.ai" rel="noopener noreferrer"&gt;cloud.dify.ai&lt;/a&gt; and create a new app. Choose &lt;strong&gt;Agent&lt;/strong&gt; type - unlike basic chat apps, Agents can reason autonomously and call tools for complex tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93v652a5q2lk9fczuwfo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93v652a5q2lk9fczuwfo.png" alt="Select Agent Type" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure the Agent
&lt;/h3&gt;

&lt;p&gt;In the orchestration interface, set up:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;System Instructions&lt;/strong&gt; - Define the Agent's role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are an intelligent assistant that helps users complete various tasks. You have the following capabilities:
1. Answer various user questions
2. Query weather information
3. Retrieve webpage content

Please always maintain a friendly and professional attitude. If you need to use tools to get information, proactively call the relevant tools.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Add Tools&lt;/strong&gt; - Pick from Dify's 50+ built-in tools, like Weather and Web Scraper:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrg1gj8uo255rip98pqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrg1gj8uo255rip98pqw.png" alt="Add Tools" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Select Model&lt;/strong&gt; - Supports GPT, Claude, Gemini, DeepSeek, and other major LLMs.&lt;/p&gt;

&lt;p&gt;Final configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bo6mb7ba2emu8tewr0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bo6mb7ba2emu8tewr0p.png" alt="Configuration Complete" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Get the API Key
&lt;/h3&gt;

&lt;p&gt;Click "Publish", then go to "Access API" to create a key:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhr5ce4tnp0p99svvogb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhr5ce4tnp0p99svvogb.png" alt="Create API Key" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save the API key and server address &lt;code&gt;https://api.dify.ai/v1&lt;/code&gt;.&lt;/p&gt;
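&lt;p&gt;You can verify the key from a terminal before wiring it into LangBot. The request below follows Dify's Service API as documented (note that Agent apps only support the &lt;code&gt;streaming&lt;/code&gt; response mode):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Replace YOUR_API_KEY; a working key streams back the Agent's answer
curl -X POST "https://api.dify.ai/v1/chat-messages" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": {}, "query": "Hello", "response_mode": "streaming", "user": "test-user"}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;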

&lt;h2&gt;
  
  
  Connect Dify to LangBot
&lt;/h2&gt;

&lt;p&gt;Back in LangBot, go to Pipelines &amp;gt; ChatPipeline &amp;gt; AI Capability:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz8j46ou2zu1wp7jbsws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz8j46ou2zu1wp7jbsws.png" alt="AI Capability Config" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;Dify Service API&lt;/strong&gt; as the Runner:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdqp2q77068thclay72v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdqp2q77068thclay72v.png" alt="Select Dify Runner" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Base URL&lt;/strong&gt;: &lt;code&gt;https://api.dify.ai/v1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Key&lt;/strong&gt;: The key you just created&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App Type&lt;/strong&gt;: Chat (including Chatflow)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv64qohet2ivgjclvoe14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv64qohet2ivgjclvoe14.png" alt="Enter Configuration" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save and click "Debug Conversation" to test:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjh40ii1fo9dadgz2cylh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjh40ii1fo9dadgz2cylh.png" alt="Test Conversation" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the Agent responds correctly and calls tools, the integration is working.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Architecture Rocks
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Decoupled Design&lt;/strong&gt;: Agent capabilities live in Dify, LangBot handles message routing - each does what it's best at&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One Config, Multi-Platform&lt;/strong&gt;: Same Agent serves QQ, WeChat, Telegram, and more simultaneously&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy to Extend&lt;/strong&gt;: Add RAG knowledge bases, Workflow orchestration, or more tools in Dify later&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Configure messaging platforms in LangBot (QQ bot, WeChat personal account, etc.)&lt;/li&gt;
&lt;li&gt;Add more tools in Dify (database queries, API calls)&lt;/li&gt;
&lt;li&gt;Try Dify Chatflow for complex conversation flows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're using n8n, FastGPT, Coze, or Langflow, LangBot supports those too.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Resources&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.langbot.app" rel="noopener noreferrer"&gt;LangBot Docs&lt;/a&gt; | &lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.dify.ai" rel="noopener noreferrer"&gt;Dify Docs&lt;/a&gt; | &lt;a href="https://cloud.dify.ai" rel="noopener noreferrer"&gt;Cloud Platform&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>langbot</category>
      <category>llm</category>
      <category>discord</category>
    </item>
    <item>
      <title>How I Built a Multi-Platform AI Bot with Langflow's Drag-and-Drop Workflows</title>
      <dc:creator>Rock</dc:creator>
      <pubDate>Fri, 05 Dec 2025 13:51:57 +0000</pubDate>
      <link>https://forem.com/rockchinq/how-i-built-a-multi-platform-ai-bot-with-langflows-drag-and-drop-workflows-4287</link>
      <guid>https://forem.com/rockchinq/how-i-built-a-multi-platform-ai-bot-with-langflows-drag-and-drop-workflows-4287</guid>
      <description>&lt;p&gt;Drive chatbots across QQ, WeChat, Telegram, Discord, and more using visual workflows - no coding required.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;LangBot&lt;/a&gt; is an open-source instant messaging bot platform that connects AI workflow engines like Langflow, n8n, Dify, FastGPT, and Coze to platforms including WeChat, QQ, Feishu, DingTalk, Telegram, Discord, Slack, and LINE. This tutorial demonstrates how to use Langflow's visual workflows as LangBot's conversation engine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Approach Works
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;True Multi-Platform&lt;/strong&gt;: One workflow powering 8+ messaging platforms simultaneously&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual Orchestration&lt;/strong&gt;: Drag-and-drop conversation design with conditional branches, multi-turn dialogs, and external API calls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible AI Models&lt;/strong&gt;: Support for OpenAI, Claude, Gemini, DeepSeek, and local models&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fully Open Source&lt;/strong&gt;: Both LangBot and Langflow are open-source projects for free deployment and customization&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.10+&lt;/li&gt;
&lt;li&gt;Docker (recommended for quick deployment)&lt;/li&gt;
&lt;li&gt;An OpenAI API key (or a key for another compatible LLM service)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Deploy LangBot
&lt;/h2&gt;

&lt;p&gt;Launch with uvx in one command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvx langbot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First run auto-initializes and opens your browser to &lt;a href="http://127.0.0.1:5300" rel="noopener noreferrer"&gt;http://127.0.0.1:5300&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3h550tmxutyoh2owhf0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3h550tmxutyoh2owhf0a.png" alt="LangBot Initial Page" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After registration, log in to access the dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs07086i1luqdckuwy05z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs07086i1luqdckuwy05z.png" alt="LangBot Dashboard" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Deploy Langflow
&lt;/h2&gt;

&lt;p&gt;Deploy quickly with Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; langflow &lt;span class="nt"&gt;-p&lt;/span&gt; 7860:7860 langflowai/langflow:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Visit &lt;a href="http://localhost:7860" rel="noopener noreferrer"&gt;http://localhost:7860&lt;/a&gt; to access Langflow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbtvsi1l49jtnejei5om.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbtvsi1l49jtnejei5om.png" alt="Langflow Welcome Page" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Create a Langflow Workflow
&lt;/h2&gt;

&lt;p&gt;In Langflow, select the "Basic Prompting" template to get started quickly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrs5qzevd9slz0o97bn2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrs5qzevd9slz0o97bn2.png" alt="Langflow Template Selection" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This template includes four basic components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chat Input&lt;/strong&gt;: Receives user messages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt&lt;/strong&gt;: Sets system instructions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language Model&lt;/strong&gt;: Calls LLM to generate responses&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chat Output&lt;/strong&gt;: Returns results&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwqew3zkw1behhrg29r8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwqew3zkw1behhrg29r8.png" alt="Langflow Workflow Editor" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Language Model
&lt;/h3&gt;

&lt;p&gt;Click the Language Model component and configure:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Model Provider&lt;/strong&gt;: Select OpenAI (or other compatible providers like SiliconFlow, New API)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Name&lt;/strong&gt;: Select gpt-4o-mini or deepseek-chat&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI API Key&lt;/strong&gt;: Enter your API Key&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1f4snankp3h2f7scz0l1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1f4snankp3h2f7scz0l1.png" alt="Langflow OpenAI API Key Configured" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Tip: You can use OpenAI-compatible API services like SiliconFlow or New API by simply modifying the Base URL.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Save the workflow after configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Get Langflow API Information
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Generate API Key
&lt;/h3&gt;

&lt;p&gt;In Langflow's upper-right corner, open Settings → API Keys to reach the API Keys page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9zt45f76h4v074f87y0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9zt45f76h4v074f87y0.png" alt="Langflow API Keys Page" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Create New Key:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhfnewtm699vwj5fi555.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhfnewtm699vwj5fi555.png" alt="Langflow Create API Key Dialog" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Generate and save the API Key:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj3suf576zqi7vqih9ny.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj3suf576zqi7vqih9ny.png" alt="Langflow API Key Generated" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Format: &lt;code&gt;sk-xxxxxxxxxxxxxxxxxxxxxxxx&lt;/code&gt;&lt;/p&gt;
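&lt;p&gt;If you store the key in a config file or environment variable, a quick shape check can catch copy-paste mistakes early. The helper below is purely illustrative (it is not part of LangBot or Langflow), and the exact key length may vary between Langflow versions, so only the &lt;code&gt;sk-&lt;/code&gt; prefix is checked strictly:&lt;/p&gt;

```python
import re

# Hypothetical sanity check on the shape of a Langflow API key.
# Key lengths may differ across Langflow versions, so only require
# the "sk-" prefix plus a reasonable number of token characters.
def looks_like_langflow_key(key: str) -> bool:
    return re.fullmatch(r"sk-[A-Za-z0-9_-]{16,}", key) is not None

print(looks_like_langflow_key("sk-xxxxxxxxxxxxxxxxxxxxxxxx"))  # True
print(looks_like_langflow_key("not-a-key"))                    # False
```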

&lt;h3&gt;
  
  
  Get Flow ID
&lt;/h3&gt;

&lt;p&gt;Extract the Flow ID from the flow editor's URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:7860/flow/{flow-id}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Record this &lt;code&gt;flow-id&lt;/code&gt;.&lt;/p&gt;
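&lt;p&gt;If you script your setup, the Flow ID can also be pulled out of the editor URL programmatically. This helper is an illustration only (&lt;code&gt;extract_flow_id&lt;/code&gt; is not a LangBot or Langflow function):&lt;/p&gt;

```python
from urllib.parse import urlparse

# Illustrative helper: pull the flow id out of an editor URL of the
# form http://host:7860/flow/{flow-id} so it can be pasted into LangBot.
def extract_flow_id(editor_url: str) -> str:
    path = urlparse(editor_url).path           # e.g. "/flow/1a2b3c4d-..."
    parts = [p for p in path.split("/") if p]  # drop empty segments
    if len(parts) >= 2 and parts[0] == "flow":
        return parts[-1]
    raise ValueError(f"no flow id found in {editor_url!r}")

print(extract_flow_id("http://localhost:7860/flow/1a2b3c4d-0000-4e5f-8a9b-abcdef123456"))
```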

&lt;h2&gt;
  
  
  Step 5: Configure Langflow in LangBot
&lt;/h2&gt;

&lt;p&gt;Return to the LangBot dashboard and open the &lt;strong&gt;Pipelines&lt;/strong&gt; page.&lt;/p&gt;

&lt;p&gt;Click ChatPipeline to edit it, then open the AI tab:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0ugtbai0kx7t3hjnjry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0ugtbai0kx7t3hjnjry.png" alt="LangBot Pipeline AI Tab" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the &lt;strong&gt;Runner&lt;/strong&gt; setting, select &lt;strong&gt;Langflow API&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rq98plp7mkouv44sdbe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rq98plp7mkouv44sdbe.png" alt="LangBot Runner Dropdown" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fill in the Langflow configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ntnbtvi92xxd1vb1ytq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ntnbtvi92xxd1vb1ytq.png" alt="LangBot Langflow Config Form" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Configuration items:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Base URL&lt;/strong&gt;: &lt;code&gt;http://localhost:7860&lt;/code&gt; (local) or &lt;code&gt;http://langflow:7860&lt;/code&gt; (Docker network)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Key&lt;/strong&gt;: The API Key generated in Langflow&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flow ID&lt;/strong&gt;: The Flow ID recorded earlier&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7iass63mtv8d8xqpu618.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7iass63mtv8d8xqpu618.png" alt="LangBot Langflow Config Filled" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Docker Tip: If both services run in containers, ensure they're on the same network and use the container name for Base URL.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Click &lt;strong&gt;Save&lt;/strong&gt; to apply the configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Test the Conversation
&lt;/h2&gt;

&lt;p&gt;Click &lt;strong&gt;Debug Chat&lt;/strong&gt; on the Pipelines page to open the debug chat interface:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzts8htj9g6mmbhiy9jsd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzts8htj9g6mmbhiy9jsd.png" alt="LangBot Debug Chat Interface" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter a test message like "Hello" and view the AI response:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsi0p49bwh239xn3g49ab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsi0p49bwh239xn3g49ab.png" alt="LangBot Chat Test Success" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;User sends a message on a messaging platform&lt;/li&gt;
&lt;li&gt;LangBot receives and passes it to the Pipeline&lt;/li&gt;
&lt;li&gt;Pipeline calls Langflow API&lt;/li&gt;
&lt;li&gt;Langflow executes the workflow: receives input → adds prompt → calls LLM → returns result&lt;/li&gt;
&lt;li&gt;LangBot sends the response back to the user&lt;/li&gt;
&lt;/ol&gt;
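&lt;p&gt;To make step 3 concrete, here is a minimal sketch of the kind of request the runner sends to Langflow's run endpoint. The endpoint path and payload fields follow Langflow's published API for recent versions and may differ in yours; the base URL, API key, and flow ID below are placeholders:&lt;/p&gt;

```python
import json

BASE_URL = "http://localhost:7860"  # assumption: local Langflow deployment
FLOW_ID = "your-flow-id"            # placeholder, see Step 4
API_KEY = "sk-..."                  # placeholder, see Step 4

# Build (but do not send) the request LangBot's runner would issue.
# Langflow authenticates via the "x-api-key" header and accepts the
# user message as "input_value" on its /api/v1/run/{flow_id} endpoint.
def build_run_request(message: str):
    url = f"{BASE_URL}/api/v1/run/{FLOW_ID}"
    headers = {"Content-Type": "application/json", "x-api-key": API_KEY}
    body = json.dumps({
        "input_value": message,
        "input_type": "chat",
        "output_type": "chat",
    })
    return url, headers, body

url, headers, body = build_run_request("Hello")
print(url)
```

&lt;p&gt;Sending this payload with any HTTP client and a valid key should return the same JSON response that the debug chat displays.&lt;/p&gt;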

&lt;h2&gt;
  
  
  Common Issues
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cannot connect to Langflow?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Check the Base URL. For Docker deployments, ensure both containers are on the same network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker network create langbot_network
docker network connect langbot_network langflow
docker network connect langbot_network langbot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use container name: &lt;code&gt;http://langflow:7860&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API call fails?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Confirm API Key and Flow ID are correct&lt;/li&gt;
&lt;li&gt;Verify the Language Model in Langflow has a valid LLM API Key configured&lt;/li&gt;
&lt;/ul&gt;
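&lt;p&gt;A small diagnostic script (an illustration, not a built-in LangBot tool) can tell a malformed Base URL apart from a genuine network problem by parsing the URL and attempting a plain TCP connection:&lt;/p&gt;

```python
import socket
from urllib.parse import urlparse

# Split a Base URL like "http://langflow:7860" into host and port,
# falling back to the scheme's default port when none is given.
def parse_host_port(base_url: str):
    u = urlparse(base_url)
    return u.hostname, u.port or (443 if u.scheme == "https" else 80)

# Try a raw TCP connection: failure here means the service is not
# reachable on this network, independent of any API-level errors.
def is_reachable(base_url: str, timeout: float = 3.0) -> bool:
    host, port = parse_host_port(base_url)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(parse_host_port("http://langflow:7860"))  # ('langflow', 7860)
```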

&lt;h2&gt;
  
  
  Advanced Use Cases
&lt;/h2&gt;

&lt;p&gt;Langflow's power lies in visually orchestrating complex AI workflows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Turn Memory&lt;/strong&gt;: Add Memory components for contextual understanding&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conditional Branches&lt;/strong&gt;: Execute different logic based on user input&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;External API Integration&lt;/strong&gt;: Connect databases, search engines, third-party services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Agent Collaboration&lt;/strong&gt;: Multiple LLM models working together&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAG Applications&lt;/strong&gt;: Integrate vector databases for knowledge base Q&amp;amp;A&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All achievable through drag-and-drop without writing code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;With LangBot + Langflow, you can rapidly build powerful multi-platform AI chatbots. Langflow provides visual workflow orchestration, while LangBot handles messaging platform integration; together they create a complete loop from workflow design to multi-platform deployment.&lt;/p&gt;

&lt;p&gt;This approach is ideal for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scenarios requiring the same AI capabilities across multiple platforms&lt;/li&gt;
&lt;li&gt;Teams wanting rapid iteration and testing of different conversation flows&lt;/li&gt;
&lt;li&gt;Developers wanting to build complex AI applications without deep coding&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Related Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://langbot.app" rel="noopener noreferrer"&gt;LangBot Official Site&lt;/a&gt; | &lt;a href="https://docs.langbot.app" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt; | &lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.langflow.org" rel="noopener noreferrer"&gt;Langflow Official Site&lt;/a&gt; | &lt;a href="https://docs.langflow.org" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt; | &lt;a href="https://github.com/langflow-ai/langflow" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;This article is based on the latest version of LangBot. LangBot supports integration with Dify, n8n, FastGPT, Coze, and other AI platforms - choose the workflow engine that best fits your needs.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>langflow</category>
      <category>rag</category>
      <category>mcp</category>
    </item>
    <item>
      <title>How I Built a Multi-Platform AI Bot with Langflow's Drag-and-Drop Workflows</title>
      <dc:creator>Rock</dc:creator>
      <pubDate>Fri, 05 Dec 2025 13:51:57 +0000</pubDate>
      <link>https://forem.com/rockchinq/how-i-built-a-multi-platform-ai-bot-with-langflows-drag-and-drop-workflows-4lm8</link>
      <guid>https://forem.com/rockchinq/how-i-built-a-multi-platform-ai-bot-with-langflows-drag-and-drop-workflows-4lm8</guid>
      <description>&lt;p&gt;Drive chatbots across QQ, WeChat, Telegram, Discord, and more using visual workflows - no coding required.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;LangBot&lt;/a&gt; is an open-source instant messaging bot platform that connects AI workflow engines like Langflow, n8n, Dify, FastGPT, and Coze to platforms including WeChat, QQ, Feishu, DingTalk, Telegram, Discord, Slack, and LINE. This tutorial demonstrates how to use Langflow's visual workflows as LangBot's conversation engine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Approach Works
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;True Multi-Platform&lt;/strong&gt;: One workflow powering 8+ messaging platforms simultaneously&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual Orchestration&lt;/strong&gt;: Drag-and-drop conversation design with conditional branches, multi-turn dialogs, and external API calls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible AI Models&lt;/strong&gt;: Support for OpenAI, Claude, Gemini, DeepSeek, and local models&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fully Open Source&lt;/strong&gt;: Both LangBot and Langflow are open source, so you can deploy and customize them freely&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.10+&lt;/li&gt;
&lt;li&gt;Docker (recommended for quick deployment)&lt;/li&gt;
&lt;li&gt;An OpenAI API key (or a key for another compatible LLM service)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Deploy LangBot
&lt;/h2&gt;

&lt;p&gt;Launch with uvx in one command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvx langbot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First run auto-initializes and opens your browser to &lt;a href="http://127.0.0.1:5300" rel="noopener noreferrer"&gt;http://127.0.0.1:5300&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3h550tmxutyoh2owhf0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3h550tmxutyoh2owhf0a.png" alt="LangBot Initial Page" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After registration, log in to access the dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs07086i1luqdckuwy05z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs07086i1luqdckuwy05z.png" alt="LangBot Dashboard" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Deploy Langflow
&lt;/h2&gt;

&lt;p&gt;Deploy quickly with Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; langflow &lt;span class="nt"&gt;-p&lt;/span&gt; 7860:7860 langflowai/langflow:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Visit &lt;a href="http://localhost:7860" rel="noopener noreferrer"&gt;http://localhost:7860&lt;/a&gt; to access Langflow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbtvsi1l49jtnejei5om.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbtvsi1l49jtnejei5om.png" alt="Langflow Welcome Page" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Create a Langflow Workflow
&lt;/h2&gt;

&lt;p&gt;In Langflow, select the "Basic Prompting" template to get started quickly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrs5qzevd9slz0o97bn2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrs5qzevd9slz0o97bn2.png" alt="Langflow Template Selection" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This template includes four basic components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chat Input&lt;/strong&gt;: Receives user messages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt&lt;/strong&gt;: Sets system instructions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language Model&lt;/strong&gt;: Calls LLM to generate responses&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chat Output&lt;/strong&gt;: Returns results&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwqew3zkw1behhrg29r8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwqew3zkw1behhrg29r8.png" alt="Langflow Workflow Editor" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Language Model
&lt;/h3&gt;

&lt;p&gt;Click the Language Model component and configure:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Model Provider&lt;/strong&gt;: Select OpenAI (or other compatible providers like SiliconFlow, New API)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Name&lt;/strong&gt;: Select gpt-4o-mini or deepseek-chat&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI API Key&lt;/strong&gt;: Enter your API Key&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1f4snankp3h2f7scz0l1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1f4snankp3h2f7scz0l1.png" alt="Langflow OpenAI API Key Configured" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Tip: You can use OpenAI-compatible API services like SiliconFlow or New API by simply modifying the Base URL.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Save the workflow after configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Get Langflow API Information
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Generate API Key
&lt;/h3&gt;

&lt;p&gt;In Langflow's upper-right corner, open Settings → API Keys to reach the API Keys page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9zt45f76h4v074f87y0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9zt45f76h4v074f87y0.png" alt="Langflow API Keys Page" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Create New Key:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhfnewtm699vwj5fi555.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhfnewtm699vwj5fi555.png" alt="Langflow Create API Key Dialog" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Generate and save the API Key:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj3suf576zqi7vqih9ny.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj3suf576zqi7vqih9ny.png" alt="Langflow API Key Generated" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Format: &lt;code&gt;sk-xxxxxxxxxxxxxxxxxxxxxxxx&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Get Flow ID
&lt;/h3&gt;

&lt;p&gt;Extract the Flow ID from the flow editor's URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:7860/flow/{flow-id}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Record this &lt;code&gt;flow-id&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Configure Langflow in LangBot
&lt;/h2&gt;

&lt;p&gt;Return to the LangBot dashboard and open the &lt;strong&gt;Pipelines&lt;/strong&gt; page.&lt;/p&gt;

&lt;p&gt;Click ChatPipeline to edit it, then open the AI tab:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0ugtbai0kx7t3hjnjry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0ugtbai0kx7t3hjnjry.png" alt="LangBot Pipeline AI Tab" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the &lt;strong&gt;Runner&lt;/strong&gt; setting, select &lt;strong&gt;Langflow API&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rq98plp7mkouv44sdbe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rq98plp7mkouv44sdbe.png" alt="LangBot Runner Dropdown" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fill in the Langflow configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ntnbtvi92xxd1vb1ytq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ntnbtvi92xxd1vb1ytq.png" alt="LangBot Langflow Config Form" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Configuration items:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Base URL&lt;/strong&gt;: &lt;code&gt;http://localhost:7860&lt;/code&gt; (local) or &lt;code&gt;http://langflow:7860&lt;/code&gt; (Docker network)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Key&lt;/strong&gt;: The API Key generated in Langflow&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flow ID&lt;/strong&gt;: The Flow ID recorded earlier&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7iass63mtv8d8xqpu618.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7iass63mtv8d8xqpu618.png" alt="LangBot Langflow Config Filled" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Docker Tip: If both services run in containers, ensure they're on the same network and use the container name for Base URL.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Click &lt;strong&gt;Save&lt;/strong&gt; to apply the configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Test the Conversation
&lt;/h2&gt;

&lt;p&gt;Click &lt;strong&gt;Debug Chat&lt;/strong&gt; on the Pipelines page to open the debug chat interface:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzts8htj9g6mmbhiy9jsd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzts8htj9g6mmbhiy9jsd.png" alt="LangBot Debug Chat Interface" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter a test message like "Hello" and view the AI response:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsi0p49bwh239xn3g49ab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsi0p49bwh239xn3g49ab.png" alt="LangBot Chat Test Success" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;User sends a message on a messaging platform&lt;/li&gt;
&lt;li&gt;LangBot receives and passes it to the Pipeline&lt;/li&gt;
&lt;li&gt;Pipeline calls Langflow API&lt;/li&gt;
&lt;li&gt;Langflow executes the workflow: receives input → adds prompt → calls LLM → returns result&lt;/li&gt;
&lt;li&gt;LangBot sends the response back to the user&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Common Issues
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cannot connect to Langflow?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Check the Base URL first. For Docker deployments, ensure both containers are on the same network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker network create langbot_network
docker network connect langbot_network langflow
docker network connect langbot_network langbot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then use the container name in the Base URL: &lt;code&gt;http://langflow:7860&lt;/code&gt;&lt;/p&gt;
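&lt;p&gt;You can verify that cross-container name resolution works with a standard Docker command (this assumes curl is available inside the langbot image; swap in wget or a shell check if it is not):&lt;/p&gt;

```shell
# From inside the langbot container, try to reach Langflow by its
# container name over the shared network.
docker exec langbot curl -s -o /dev/null -w "HTTP %{http_code}\n" \
  http://langflow:7860 \
  || echo "langbot cannot reach langflow:7860 - recheck the network setup"
```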

&lt;p&gt;&lt;strong&gt;API call fails?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Confirm API Key and Flow ID are correct&lt;/li&gt;
&lt;li&gt;Verify the Language Model in Langflow has a valid LLM API Key configured&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Advanced Use Cases
&lt;/h2&gt;

&lt;p&gt;Langflow's power lies in visually orchestrating complex AI workflows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Turn Memory&lt;/strong&gt;: Add Memory components for contextual understanding&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conditional Branches&lt;/strong&gt;: Execute different logic based on user input&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;External API Integration&lt;/strong&gt;: Connect databases, search engines, third-party services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Agent Collaboration&lt;/strong&gt;: Multiple LLM models working together&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAG Applications&lt;/strong&gt;: Integrate vector databases for knowledge base Q&amp;amp;A&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of this is achievable through drag-and-drop, without writing any code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;With LangBot + Langflow, you can rapidly build powerful multi-platform AI chatbots. Langflow provides visual workflow orchestration, LangBot handles messaging platform integration - together they create a complete loop from workflow design to multi-platform deployment.&lt;/p&gt;

&lt;p&gt;This approach is ideal for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scenarios requiring the same AI capabilities across multiple platforms&lt;/li&gt;
&lt;li&gt;Teams wanting rapid iteration and testing of different conversation flows&lt;/li&gt;
&lt;li&gt;Developers wanting to build complex AI applications without deep coding&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Related Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://langbot.app" rel="noopener noreferrer"&gt;LangBot Official Site&lt;/a&gt; | &lt;a href="https://docs.langbot.app" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt; | &lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.langflow.org" rel="noopener noreferrer"&gt;Langflow Official Site&lt;/a&gt; | &lt;a href="https://docs.langflow.org" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt; | &lt;a href="https://github.com/langflow-ai/langflow" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;This article is based on the latest version of LangBot. LangBot supports integration with Dify, n8n, FastGPT, Coze, and other AI platforms - choose the workflow engine that best fits your needs.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>langflow</category>
      <category>rag</category>
      <category>mcp</category>
    </item>
    <item>
      <title>How I Connected Dify Chatflow to My Multi-Platform Bot in Minutes</title>
      <dc:creator>Rock</dc:creator>
      <pubDate>Thu, 04 Dec 2025 12:14:08 +0000</pubDate>
      <link>https://forem.com/rockchinq/how-i-connected-dify-chatflow-to-my-multi-platform-bot-in-minutes-264m</link>
      <guid>https://forem.com/rockchinq/how-i-connected-dify-chatflow-to-my-multi-platform-bot-in-minutes-264m</guid>
      <description>&lt;p&gt;LangBot is a multi-platform chatbot framework that supports QQ, WeChat, Discord, Telegram, Slack, LINE, DingTalk, and Lark. One of its most powerful features is the support for multiple AI runners, including workflow platforms like Dify, n8n, Langflow, and Coze, as well as direct integration with OpenAI, Claude, and other LLM services.&lt;/p&gt;

&lt;p&gt;Dify is an open-source LLM application development platform with visual Chatflow functionality, allowing you to build intelligent conversational applications without coding. This guide shows you how to integrate Dify Chatflow into LangBot to give your bot powerful AI capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You Need
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A running LangBot instance&lt;/li&gt;
&lt;li&gt;A Dify account (you can use cloud.dify.ai)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Deploy LangBot
&lt;/h2&gt;

&lt;p&gt;If you haven't deployed LangBot yet, quickly start it with Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 5300:5300 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; ./data:/app/data &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; langbot &lt;span class="se"&gt;\&lt;/span&gt;
  rockchin/langbot:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Visit &lt;code&gt;http://localhost:5300&lt;/code&gt; to access the management interface.&lt;/p&gt;

&lt;p&gt;Default credentials:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Email: &lt;a href="mailto:admin@langbot.app"&gt;admin@langbot.app&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Password: Admin123456&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fst84i1m6kutdog3ee2gj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fst84i1m6kutdog3ee2gj.png" alt="LangBot Management Interface" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Create Dify Chatflow Application
&lt;/h2&gt;

&lt;p&gt;Visit &lt;a href="https://cloud.dify.ai" rel="noopener noreferrer"&gt;cloud.dify.ai&lt;/a&gt; and log in. On the Studio page, click "Create from Blank" to create a new application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85op8rlr45x1vy0znq1v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85op8rlr45x1vy0znq1v.png" alt="Dify Studio" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select "Chatflow" type - a conversational application with workflow support. Enter an application name (e.g., "LangBot Demo Chatflow") and create it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9w5qze6o37wyefkiu18s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9w5qze6o37wyefkiu18s.png" alt="Create Chatflow" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Configure the Workflow
&lt;/h2&gt;

&lt;p&gt;After creation, you'll see a visual workflow editor with three default nodes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;START&lt;/strong&gt;: Receives user input&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM&lt;/strong&gt;: Processes conversation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ANSWER&lt;/strong&gt;: Returns response&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqywxzcb9rvh8las3h9f1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqywxzcb9rvh8las3h9f1.png" alt="Workflow Editor" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the LLM node and add a system prompt in the "System" section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are a helpful AI assistant integrated with LangBot. You can answer questions and help users with various tasks.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsp6y1eoo0vykh4ugyfse.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsp6y1eoo0vykh4ugyfse.png" alt="Configure Prompt" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can add more nodes as needed, such as knowledge base retrieval, external API calls, and conditional logic, to build more powerful conversational capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Get API Key
&lt;/h2&gt;

&lt;p&gt;Click "Publish" in the top right corner to publish your application, then click "API Access" to enter the API access page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow5t1mua097y36cvza89.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow5t1mua097y36cvza89.png" alt="API Access" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click "Create new Secret key" to create an API key. Copy the generated key (format: &lt;code&gt;app-xxxxxxxxxxxxxxxxxxxxxxxx&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6z1yxcr297n3b2awzl8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6z1yxcr297n3b2awzl8.png" alt="Create Key" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API Server&lt;/strong&gt;: &lt;code&gt;https://api.dify.ai/v1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Key&lt;/strong&gt;: The key you just created&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 5: Configure LangBot
&lt;/h2&gt;

&lt;p&gt;Return to the LangBot management interface, go to the "Pipelines" page, and click "+" to create a new pipeline.&lt;/p&gt;

&lt;p&gt;Enter pipeline information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt;: Dify Chatflow Pipeline&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Description&lt;/strong&gt;: Pipeline using Dify Chatflow as the AI runner&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Switch to the "AI" tab, select "Dify Service API" in the "Runner" dropdown, and configure parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Base URL&lt;/strong&gt;: &lt;code&gt;https://api.dify.ai/v1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App Type&lt;/strong&gt;: Chat (Important: Chatflow apps use Chat type, not Workflow)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Key&lt;/strong&gt;: Paste your API key&lt;/li&gt;
&lt;/ul&gt;
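&lt;p&gt;To sanity-check these values outside LangBot, you can call the Dify service API directly with the same credentials. A minimal sketch (the key below is a placeholder; see the Dify API reference for the full parameter list):&lt;/p&gt;

```shell
API_KEY="app-xxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder: your Dify secret key

# Chatflow apps are served by the /chat-messages endpoint of the Dify
# service API, authenticated with a Bearer token.
curl -s --max-time 10 -X POST "https://api.dify.ai/v1/chat-messages" \
  -H "Authorization: Bearer ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"inputs": {}, "query": "Hello", "response_mode": "blocking", "user": "langbot-test"}' \
  || echo "Request failed - check network access and the API key"
```

&lt;p&gt;If this returns an answer, the same Base URL and key will work in LangBot.&lt;/p&gt;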

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkgo8tx9acn6xm1zvz6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkgo8tx9acn6xm1zvz6v.png" alt="Configure Runner" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click "Save" to save.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Test It
&lt;/h2&gt;

&lt;p&gt;In the pipeline edit dialog, click "Debug Chat" to enter the debug interface. Enter a test message (like "Hello! This is a test message.") and press Enter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajju89pincpk8zluhuqj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajju89pincpk8zluhuqj.png" alt="Test Success" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If configured correctly, you'll see a reply from Dify Chatflow, indicating successful integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Issues
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Error: not_workflow_app&lt;/strong&gt;&lt;br&gt;
This is due to incorrect App Type configuration. Chatflow apps must use "Chat" type, not "Workflow" type.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invalid API Key&lt;/strong&gt;&lt;br&gt;
Ensure the key format is correct (starts with &lt;code&gt;app-&lt;/code&gt;), has been created and activated in Dify, and Base URL is set to &lt;code&gt;https://api.dify.ai/v1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connection Timeout&lt;/strong&gt;&lt;br&gt;
Check network connectivity, Dify service accessibility, and firewall settings.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Now you can bind this pipeline to your bot and let it process user messages through Dify Chatflow. In Dify, you can further:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add knowledge bases for RAG (Retrieval Augmented Generation)&lt;/li&gt;
&lt;li&gt;Integrate external tools and APIs&lt;/li&gt;
&lt;li&gt;Use conditional nodes for complex logic&lt;/li&gt;
&lt;li&gt;Add variable transformations and data processing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LangBot supports configuring multiple pipelines simultaneously, so you can provide different AI capabilities for different scenarios and platforms. It also supports other AI platforms like n8n, Langflow, FastGPT, and Coze, as well as direct integration with OpenAI, Claude, Google Gemini, and other LLM services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.langbot.app" rel="noopener noreferrer"&gt;LangBot Official Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.dify.ai" rel="noopener noreferrer"&gt;Dify Official Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;LangBot GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>llm</category>
      <category>discord</category>
    </item>
    <item>
      <title>How I Built a Multi-Platform AI Chatbot with n8n and LangBot</title>
      <dc:creator>Rock</dc:creator>
      <pubDate>Wed, 03 Dec 2025 05:08:59 +0000</pubDate>
      <link>https://forem.com/rockchinq/how-i-built-a-multi-platform-ai-chatbot-with-n8n-and-langbot-35nb</link>
      <guid>https://forem.com/rockchinq/how-i-built-a-multi-platform-ai-chatbot-with-n8n-and-langbot-35nb</guid>
      <description>&lt;p&gt;Connecting n8n's visual workflow automation with LangBot's multi-platform bot framework creates a powerful, code-free way to deploy AI chatbots across QQ, WeChat, Discord, Telegram, Slack, and more. This tutorial shows you how to integrate these tools in minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You'll Need
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.8+ installed&lt;/li&gt;
&lt;li&gt;Node.js 18+ installed&lt;/li&gt;
&lt;li&gt;npm or npx available&lt;/li&gt;
&lt;li&gt;15 minutes of your time&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Deploy LangBot in 3 Commands
&lt;/h2&gt;

&lt;p&gt;LangBot is a production-ready bot framework that connects to multiple messaging platforms and AI services including Dify, FastGPT, Coze, OpenAI, Claude, and Gemini. Deploy it instantly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;your-workspace
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; langbot-instance &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;langbot-instance
uvx langbot@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On first launch, LangBot initializes automatically. Open &lt;code&gt;http://127.0.0.1:5300&lt;/code&gt; in your browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz7mvsiu6udn7islp18qa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz7mvsiu6udn7islp18qa.png" alt="LangBot initialization screen" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Register your admin account when prompted. You'll land on the dashboard where you can manage bots, models, pipelines, and integrations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foecj83jbepqbgfb37n2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foecj83jbepqbgfb37n2z.png" alt="LangBot dashboard" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Set Up n8n Workflow Automation
&lt;/h2&gt;

&lt;p&gt;n8n is an open-source automation platform with 400+ integrations and powerful AI capabilities. Launch it locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;your-workspace
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; n8n-data
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;N8N_USER_FOLDER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/n8n-data
npx n8n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Visit &lt;code&gt;http://127.0.0.1:5678&lt;/code&gt; and create your owner account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy577qrqgdqhhoghttyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy577qrqgdqhhoghttyl.png" alt="n8n initial setup" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Build Your AI Workflow
&lt;/h2&gt;

&lt;p&gt;Create a new workflow in n8n. You'll need two essential nodes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpm3s0pr7eyoam7ui2xon.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpm3s0pr7eyoam7ui2xon.png" alt="n8n workflow editor" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Add the Webhook Trigger
&lt;/h3&gt;

&lt;p&gt;Click "+" on the canvas and add a &lt;strong&gt;Webhook&lt;/strong&gt; node. Configure it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HTTP Method&lt;/strong&gt;: POST&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response Mode&lt;/strong&gt;: Streaming (enables real-time chat responses)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication&lt;/strong&gt;: None (adjust for production)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1qxw3av8fqm1nctjlwy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1qxw3av8fqm1nctjlwy.png" alt="Webhook node configuration" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Add the AI Agent
&lt;/h3&gt;

&lt;p&gt;Press Tab, navigate to the "AI" category, and select &lt;strong&gt;AI Agent&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configure the Chat Model&lt;/strong&gt;: Click "Chat Model" and choose "OpenAI Chat Model". Add your credentials:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API Key&lt;/strong&gt;: Your OpenAI API key (or compatible service key)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Base URL&lt;/strong&gt;: For OpenAI alternatives like Claude, Gemini, or local models, update to your provider's endpoint&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Critical Step - Fix the Prompt Source&lt;/strong&gt;: By default, the AI Agent expects a Chat Trigger node, which won't work with webhooks. Here's how to fix it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Find "Source for Prompt (User Message)" in the AI Agent settings&lt;/li&gt;
&lt;li&gt;Change from "Connected Chat Trigger Node" to "Define below"&lt;/li&gt;
&lt;li&gt;Switch to "Expression" mode&lt;/li&gt;
&lt;li&gt;Enter: &lt;code&gt;{{ $json.body }}&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This expression pulls the user's message from the webhook request body.&lt;/p&gt;
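&lt;p&gt;Concretely, the Webhook node wraps each incoming HTTP request into an item whose parsed request body sits under the body key, which is what the expression reads. For example, a request payload like the one below (mirroring the curl test later in this article) is what the AI Agent receives as its prompt source:&lt;/p&gt;

```shell
# A minimal request body for the webhook; the expression {{ $json.body }}
# resolves to the parsed contents of this JSON document.
printf '%s\n' '{"body": "Hello, can you introduce yourself?"}'
```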

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftoqo9nwtufkq6nuepugy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftoqo9nwtufkq6nuepugy.png" alt="Configured webhook with AI Agent" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Activate and Get Your Webhook URL
&lt;/h3&gt;

&lt;p&gt;Save the workflow and toggle the activation switch (top-right). Switch to the "Production URL" tab and copy the webhook URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:5678/webhook/{your-webhook-id}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Connect LangBot to n8n
&lt;/h2&gt;

&lt;p&gt;Back in the LangBot dashboard, navigate to &lt;strong&gt;Pipelines&lt;/strong&gt; and click the default "ChatPipeline".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36enr0swqvk05qg5xlpi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36enr0swqvk05qg5xlpi.png" alt="LangBot pipelines page" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Switch to the &lt;strong&gt;AI&lt;/strong&gt; tab and select "n8n Workflow API" from the Runner dropdown. Configure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Webhook URL&lt;/strong&gt;: Paste your n8n production webhook URL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication Type&lt;/strong&gt;: None (match your n8n webhook settings)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timeout&lt;/strong&gt;: 120 seconds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output Key&lt;/strong&gt;: response&lt;/li&gt;
&lt;/ul&gt;
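&lt;p&gt;The Output Key tells LangBot which field to read from the JSON your workflow returns, so the workflow's final output should be shaped roughly like this (the field name matches the configuration above; the message content is purely illustrative):&lt;/p&gt;

```shell
# Illustrative final output from the n8n workflow: LangBot extracts the
# "response" field named by the Output Key setting.
printf '%s\n' '{"response": "Hi! How can I help you today?"}'
```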

&lt;p&gt;Click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Test It Out
&lt;/h2&gt;

&lt;p&gt;In the Pipeline editor, click "Debug Chat" on the left sidebar. Send a test message like "What is LangBot?"&lt;/p&gt;

&lt;p&gt;If everything works, you'll see LangBot send the message to n8n, where the AI Agent processes it and streams back a response.&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Error: "Expected to find the prompt in an input field called 'chatInput'"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This means the AI Agent is still configured for a Chat Trigger node. Fix it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the AI Agent configuration&lt;/li&gt;
&lt;li&gt;Set "Source for Prompt (User Message)" to "Define below"&lt;/li&gt;
&lt;li&gt;Switch to Expression mode&lt;/li&gt;
&lt;li&gt;Enter: &lt;code&gt;{{ $json.body }}&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Test Your Webhook Directly&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify the webhook works with curl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:5678/webhook/your-webhook-id &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"body": "Hello, can you introduce yourself?"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should receive streaming JSON with the AI's response.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Integration Works
&lt;/h2&gt;

&lt;p&gt;Here's the complete flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User sends a message via QQ, WeChat, Discord, Telegram, Slack, LINE, or any LangBot-supported platform&lt;/li&gt;
&lt;li&gt;LangBot's Pipeline receives the message and calls the n8n Workflow API&lt;/li&gt;
&lt;li&gt;n8n's Webhook node captures the request and passes it to the AI Agent&lt;/li&gt;
&lt;li&gt;The AI Agent uses OpenAI, Claude, Gemini, or your configured LLM to generate a response&lt;/li&gt;
&lt;li&gt;n8n streams the response back to LangBot&lt;/li&gt;
&lt;li&gt;LangBot delivers the response to the user on their original platform&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Why This Combination Works
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;LangBot + n8n&lt;/strong&gt; unlocks powerful capabilities:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;No-Code AI Logic&lt;/strong&gt;: Design conversation flows visually in n8n without touching code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Platform Reach&lt;/strong&gt;: Deploy the same AI across QQ, WeChat, Discord, Telegram, Slack, LINE, DingTalk, and Lark simultaneously&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible AI Models&lt;/strong&gt;: Swap between OpenAI GPT, Anthropic Claude, Google Gemini, Coze, Dify, local models, and more&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rich Integrations&lt;/strong&gt;: Connect n8n's 400+ integrations - databases, APIs, Notion, Airtable, Google Sheets, Slack, and beyond&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool-Calling Agents&lt;/strong&gt;: AI Agent can trigger n8n tools like database queries, API calls, or custom functions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workflow Extensions&lt;/strong&gt;: Add preprocessing, content moderation, logging, or custom business logic&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Perfect For&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enterprise customer support bots&lt;/li&gt;
&lt;li&gt;Knowledge base Q&amp;amp;A systems&lt;/li&gt;
&lt;li&gt;Multi-platform community management&lt;/li&gt;
&lt;li&gt;Task automation assistants&lt;/li&gt;
&lt;li&gt;Unified chat interfaces for teams&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;Extend your bot further:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrate Dify or FastGPT for advanced RAG (retrieval-augmented generation)&lt;/li&gt;
&lt;li&gt;Add vector database nodes (Pinecone, Qdrant, Weaviate) for knowledge retrieval&lt;/li&gt;
&lt;li&gt;Connect business APIs for real-time data&lt;/li&gt;
&lt;li&gt;Implement conversation memory and context tracking&lt;/li&gt;
&lt;li&gt;Add content filtering and moderation workflows&lt;/li&gt;
&lt;li&gt;Use Langflow or Coze for additional AI orchestration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This integration gives you the flexibility of code-based AI frameworks like LangChain with the simplicity of visual workflow builders - all while reaching users across every major messaging platform.&lt;/p&gt;

&lt;p&gt;Ready to deploy your multi-platform AI assistant? Start with LangBot and n8n today.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>tutorial</category>
      <category>workflow</category>
    </item>
    <item>
      <title>LangBot 4.6.0 External Knowledge Base Tutorial: Integrating Dify with LangBot for RAG-powered Conversations</title>
      <dc:creator>Rock</dc:creator>
      <pubDate>Tue, 02 Dec 2025 12:50:38 +0000</pubDate>
      <link>https://forem.com/rockchinq/langbot-460-external-knowledge-base-tutorial-integrating-dify-with-langbot-for-rag-powered-2d90</link>
      <guid>https://forem.com/rockchinq/langbot-460-external-knowledge-base-tutorial-integrating-dify-with-langbot-for-rag-powered-2d90</guid>
      <description>&lt;p&gt;LangBot 4.6.0 introduces external knowledge base functionality, allowing users to integrate external knowledge retrieval services such as Dify and RAGFlow into conversation pipelines. This tutorial demonstrates how to combine Dify knowledge base with LangBot to enable intelligent conversations based on domain-specific knowledge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Feature Overview
&lt;/h2&gt;

&lt;p&gt;The external knowledge base feature enables LangBot to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to various external knowledge retrieval services (Dify, RAGFlow, etc.)&lt;/li&gt;
&lt;li&gt;Provide professional answers based on domain-specific knowledge&lt;/li&gt;
&lt;li&gt;Flexibly extend knowledge retrieval capabilities through the plugin system&lt;/li&gt;
&lt;li&gt;Configure conveniently through WebUI without manual configuration file editing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before starting, you need:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A LangBot 4.6.0 (or higher) deployment&lt;/li&gt;
&lt;li&gt;A Dify account (register at &lt;a href="https://cloud.dify.ai" rel="noopener noreferrer"&gt;https://cloud.dify.ai&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;A configured conversation model (this tutorial uses claude-opus-4-1-20250805)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 1: Deploy LangBot
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1.1 View LangBot Repository
&lt;/h3&gt;

&lt;p&gt;First, visit the &lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;LangBot GitHub repository&lt;/a&gt; to understand the project. LangBot is a production-grade instant messaging bot development platform supporting multiple messaging platforms and LLM services.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.2 Learn About External Knowledge Base Feature
&lt;/h3&gt;

&lt;p&gt;Visit the &lt;a href="https://docs.langbot.app" rel="noopener noreferrer"&gt;official LangBot documentation&lt;/a&gt; to view the external knowledge base usage instructions. The documentation provides detailed explanations on how to build built-in knowledge bases and connect to external knowledge bases.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.3 Start LangBot
&lt;/h3&gt;

&lt;p&gt;Use the &lt;code&gt;uvx langbot&lt;/code&gt; command to quickly start LangBot, then visit &lt;a href="http://127.0.0.1:5300" rel="noopener noreferrer"&gt;http://127.0.0.1:5300&lt;/a&gt; for initialization. Fill in your email and password to complete registration, then log in.&lt;/p&gt;
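
&lt;p&gt;The quick-start command looks like this (assuming you already have uv installed, which provides &lt;code&gt;uvx&lt;/code&gt;):&lt;/p&gt;

```shell
# uvx runs LangBot in an isolated environment, no separate install step needed.
uvx langbot
# Then open http://127.0.0.1:5300 in a browser to complete initialization.
```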

&lt;h3&gt;
  
  
  1.4 Dashboard After Login
&lt;/h3&gt;

&lt;p&gt;After successfully logging in, you will see the LangBot WebUI dashboard.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Configure Conversation Model
&lt;/h2&gt;

&lt;p&gt;Before using the knowledge base, you need to configure a conversation model.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.1 Navigate to Model Configuration Page
&lt;/h3&gt;

&lt;p&gt;Click "Models" in the left navigation bar to enter the model configuration page.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.2 Add New Model
&lt;/h3&gt;

&lt;p&gt;Click the "+" button to open the model configuration dialog.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.3 Fill in Model Information
&lt;/h3&gt;

&lt;p&gt;Fill in the following information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Model Name&lt;/strong&gt;: claude-opus-4-1-20250805&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Provider&lt;/strong&gt;: New API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Request URL&lt;/strong&gt;: Your API endpoint&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Key&lt;/strong&gt;: Your API Key&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Abilities&lt;/strong&gt;: Check Vision Ability and Function Call&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click "Submit" to save the configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Create Knowledge Base in Dify
&lt;/h2&gt;

&lt;h3&gt;
  
  
  3.1 Log in to Dify Platform
&lt;/h3&gt;

&lt;p&gt;Visit &lt;a href="https://cloud.dify.ai" rel="noopener noreferrer"&gt;https://cloud.dify.ai&lt;/a&gt; and log in to your account.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.2 Navigate to Knowledge Page
&lt;/h3&gt;

&lt;p&gt;Click "Knowledge" in the top navigation bar to enter the knowledge base management page.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.3 Create New Knowledge Base
&lt;/h3&gt;

&lt;p&gt;Click "Create Knowledge" to start creating a knowledge base.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.4 Select Data Source
&lt;/h3&gt;

&lt;p&gt;You can choose from the following methods to import data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Import from file&lt;/strong&gt;: Upload document files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sync from Notion&lt;/strong&gt;: Import pages from a Notion workspace&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sync from website&lt;/strong&gt;: Crawl website content&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This tutorial uses "Import from file" to upload prepared LangBot documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.5 Upload Document
&lt;/h3&gt;

&lt;p&gt;After uploading the document, the system will display file information. Click "Next" to proceed.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.6 Configure Document Processing Parameters
&lt;/h3&gt;

&lt;p&gt;On the document processing page, you can configure chunk settings, index method, and retrieval settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chunk Settings&lt;/strong&gt;: General mode, maximum chunk length 1024 characters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Index Method&lt;/strong&gt;: High Quality (uses embedding model)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval Setting&lt;/strong&gt;: Vector Search&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click "Save &amp;amp; Process" to start processing the document.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.7 Knowledge Base Processing
&lt;/h3&gt;

&lt;p&gt;The system starts processing the document and generating vector embeddings.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.8 Embedding Completed
&lt;/h3&gt;

&lt;p&gt;After processing is complete, the knowledge base is ready to use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Get Dify API Information
&lt;/h2&gt;

&lt;h3&gt;
  
  
  4.1 Open Service API Panel
&lt;/h3&gt;

&lt;p&gt;Click the "Service API" button in the bottom right. Record the following information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Service API Endpoint&lt;/strong&gt;: &lt;a href="https://api.dify.ai/v1" rel="noopener noreferrer"&gt;https://api.dify.ai/v1&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dataset ID&lt;/strong&gt;: Taken from the knowledge base page URL&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4.2 Get API Key
&lt;/h3&gt;

&lt;p&gt;Click the "API Key" button to view the API key. Record your API Key for use in subsequent configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Configure External Knowledge Base in LangBot
&lt;/h2&gt;

&lt;h3&gt;
  
  
  5.1 Navigate to Knowledge Page
&lt;/h3&gt;

&lt;p&gt;In the LangBot WebUI, click "Knowledge" in the left navigation bar.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.2 Switch to External Knowledge Base Tab
&lt;/h3&gt;

&lt;p&gt;Click the "External" tab.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.3 Add External Knowledge Base
&lt;/h3&gt;

&lt;p&gt;Click the "+" button to open the add external knowledge base dialog. In this dialog, you need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Knowledge Base Name&lt;/strong&gt;: Give the knowledge base a name&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Knowledge Base Description&lt;/strong&gt;: Add a description (optional)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retriever&lt;/strong&gt;: Select a knowledge retriever plugin&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Using an external knowledge base requires installing the corresponding knowledge retriever plugin first. You can search for and install the Dify knowledge retriever plugin in the &lt;a href="https://space.langbot.app/market?category=KnowledgeRetriever" rel="noopener noreferrer"&gt;plugin marketplace&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After installing the plugin, select the corresponding plugin from the Retriever dropdown, then fill in the configuration information obtained from Dify (API Endpoint, API Key, Dataset ID).&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Configure Pipeline
&lt;/h2&gt;

&lt;p&gt;After configuring the external knowledge base, you need to enable it in the pipeline:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the "Pipelines" page&lt;/li&gt;
&lt;li&gt;Edit or create a new pipeline&lt;/li&gt;
&lt;li&gt;On the "AI Capabilities" page, select "Built-in Agent" as the runner&lt;/li&gt;
&lt;li&gt;In the knowledge base selection, check the external knowledge base you just configured&lt;/li&gt;
&lt;li&gt;Save the pipeline configuration&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 7: Test the Setup
&lt;/h2&gt;

&lt;p&gt;Test in the pipeline's "Conversation Testing" page, or bind the pipeline to a bot for a real conversation. When users ask questions, LangBot automatically retrieves relevant content from the Dify knowledge base and generates answers that incorporate the retrieved knowledge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. How to Install Knowledge Retriever Plugins?
&lt;/h3&gt;

&lt;p&gt;Visit the &lt;a href="https://space.langbot.app/market" rel="noopener noreferrer"&gt;LangBot plugin marketplace&lt;/a&gt;, search for "Knowledge Retriever" or "Dify", find the corresponding plugin and click install.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Which External Knowledge Retrieval Services Are Supported?
&lt;/h3&gt;

&lt;p&gt;LangBot currently supports multiple external knowledge retrieval services through the plugin system, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dify&lt;/li&gt;
&lt;li&gt;RAGFlow&lt;/li&gt;
&lt;li&gt;Other custom knowledge retrieval services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The full list of supported services can be found in the plugin marketplace.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. What's the Difference Between External and Built-in Knowledge Bases?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Built-in Knowledge Base&lt;/strong&gt;: Data is stored locally in LangBot, with LangBot handling vectorization and retrieval&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;External Knowledge Base&lt;/strong&gt;: Data is stored in external services (like Dify), with retrieval performed through API calls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The advantage of external knowledge bases is the ability to leverage the capabilities of professional LLMOps platforms, such as Dify's advanced document processing and multiple retrieval strategies.&lt;/p&gt;
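
&lt;p&gt;The split can be summarized conceptually as follows. This is not LangBot's actual plugin interface; the class and method names are purely illustrative:&lt;/p&gt;

```python
# Conceptual sketch of built-in vs. external retrieval. NOT LangBot's
# real plugin API; names here are illustrative only.

class BuiltInRetriever:
    """Data lives locally; the bot itself indexes and searches it."""
    def __init__(self, local_chunks):
        self.chunks = local_chunks

    def retrieve(self, query):
        # Real code would run a vector similarity search over local embeddings;
        # a substring match stands in for that here.
        return [c for c in self.chunks if query.lower() in c.lower()]

class ExternalRetriever:
    """Data lives in a remote service; retrieval is an API call."""
    def __init__(self, api_endpoint, api_key, dataset_id):
        self.api_endpoint = api_endpoint
        self.api_key = api_key
        self.dataset_id = dataset_id

    def retrieve(self, query):
        # Real code would POST the query to the external service
        # (e.g. Dify) and return the chunks it sends back.
        raise NotImplementedError("call the external service here")

local = BuiltInRetriever(["LangBot supports Dify.", "Unrelated text."])
print(local.retrieve("dify"))  # prints: ['LangBot supports Dify.']
```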

&lt;h3&gt;
  
  
  4. Can Multiple Knowledge Bases Be Used Simultaneously?
&lt;/h3&gt;

&lt;p&gt;Yes. In the pipeline configuration, you can check multiple knowledge bases (built-in or external) at once, and LangBot will draw on content from all of them when generating answers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Through this tutorial, you learned how to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a knowledge base and upload documents on the Dify platform&lt;/li&gt;
&lt;li&gt;Obtain Dify API key and related configuration information&lt;/li&gt;
&lt;li&gt;Configure a conversation model in LangBot&lt;/li&gt;
&lt;li&gt;Add an external knowledge base in LangBot&lt;/li&gt;
&lt;li&gt;Configure pipelines to use external knowledge bases&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The external knowledge base feature provides LangBot with more powerful and flexible knowledge management capabilities. Combined with professional LLMOps platforms like Dify, you can build more intelligent and professional conversational bots.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://langbot.app" rel="noopener noreferrer"&gt;LangBot Official Website&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.langbot.app" rel="noopener noreferrer"&gt;LangBot Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/langbot-app/LangBot" rel="noopener noreferrer"&gt;LangBot GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dify.ai" rel="noopener noreferrer"&gt;Dify Official Website&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://space.langbot.app/market" rel="noopener noreferrer"&gt;LangBot Plugin Marketplace&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>tutorial</category>
      <category>python</category>
      <category>rag</category>
    </item>
  </channel>
</rss>
