<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: hayerhans</title>
    <description>The latest articles on Forem by hayerhans (@hayerhans).</description>
    <link>https://forem.com/hayerhans</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F240145%2F1a436e35-428b-4e79-bb60-a8fa2ab2fb7a.jpg</url>
      <title>Forem: hayerhans</title>
      <link>https://forem.com/hayerhans</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/hayerhans"/>
    <language>en</language>
    <item>
      <title>Free Model Context Protocol (MCP) Course</title>
      <dc:creator>hayerhans</dc:creator>
      <pubDate>Wed, 16 Jul 2025 08:44:24 +0000</pubDate>
      <link>https://forem.com/hayerhans/free-model-context-protocol-mcp-course-11p3</link>
      <guid>https://forem.com/hayerhans/free-model-context-protocol-mcp-course-11p3</guid>
<description>&lt;p&gt;🚀 Just launched: my free MCP (Model Context Protocol) course on Udemy&lt;br&gt;
👉 &lt;a href="https://www.udemy.com/course/model-context-protocol-free/" rel="noopener noreferrer"&gt;https://www.udemy.com/course/model-context-protocol-free/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While many courses charge $50+ for just an hour of content, I've made this course completely free and packed it with practical insights and real-world examples.&lt;/p&gt;

&lt;p&gt;If you're exploring AI agent architectures or want to level up your LLM stack knowledge, this course is for you.&lt;br&gt;
🧠 Built for devs, explained simply.&lt;/p&gt;

&lt;p&gt;🙏 Support my work by enrolling, leaving a review, or sharing it—it helps boost visibility so more engineers can benefit.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>llm</category>
    </item>
    <item>
      <title>What is llms.txt?</title>
      <dc:creator>hayerhans</dc:creator>
      <pubDate>Thu, 22 May 2025 08:17:23 +0000</pubDate>
      <link>https://forem.com/hayerhans/what-is-llmstxt-1hgg</link>
      <guid>https://forem.com/hayerhans/what-is-llmstxt-1hgg</guid>
      <description>&lt;h2&gt;
  
  
  What is llms.txt?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://youtu.be/f-6ffhM4b8M" rel="noopener noreferrer"&gt;Watch the full setup demonstration →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;llms.txt works like a sitemap for websites, but designed for AI systems. It provides large language models with a structured overview of concepts and links to detailed information, allowing the AI to make targeted fetch calls when it needs more specific details.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;Using the FastMCP project as an example, their llms.txt file contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Overview of all key concepts&lt;/li&gt;
&lt;li&gt;Links to detailed documentation&lt;/li&gt;
&lt;li&gt;Structured format that LLMs can easily parse&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When an LLM needs more information, it can fetch specific URLs from the llms.txt file rather than guessing or hallucinating.&lt;/p&gt;
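
&lt;p&gt;For reference, an llms.txt file is itself plain Markdown: an H1 title, a short blockquote summary, and H2 sections of annotated links. Here is a minimal hypothetical sketch (the links are illustrative, not FastMCP's actual file):&lt;/p&gt;

```markdown
# FastMCP

> A framework for building MCP servers and clients in Python.

## Documentation

- [Resources](https://gofastmcp.com/servers/resources.md): defining static and dynamic resources
- [Tools](https://gofastmcp.com/servers/tools.md): exposing Python functions as tools
```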

&lt;h2&gt;
  
  
  Setting Up MCP Doc Server
&lt;/h2&gt;

&lt;p&gt;The LangChain team created &lt;strong&gt;MCP Doc&lt;/strong&gt; - an MCP server that serves llms.txt files to AI systems. Here's the detailed setup process:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Start the MCP Doc Server
&lt;/h3&gt;

&lt;p&gt;To start the mcpdoc server, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvx &lt;span class="nt"&gt;--from&lt;/span&gt; mcpdoc mcpdoc &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--urls&lt;/span&gt; &lt;span class="s2"&gt;"LangGraph:https://langchain-ai.github.io/langgraph/llms.txt"&lt;/span&gt; &lt;span class="s2"&gt;"LangChain:https://python.langchain.com/llms.txt"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--transport&lt;/span&gt; sse &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--port&lt;/span&gt; 8082 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--host&lt;/span&gt; localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Command Breakdown:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;uvx&lt;/strong&gt;: Python package runner (similar to npx for Node.js)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--urls&lt;/strong&gt;: Specify multiple llms.txt sources with labels&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--transport sse&lt;/strong&gt;: Server-Sent Events transport method&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--port 8082&lt;/strong&gt;: Local port for the MCP server&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;--host localhost&lt;/strong&gt;: Server host configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2: Configure in VS Code/Cline
&lt;/h3&gt;

&lt;p&gt;In VS Code with Cline, navigate to the server configuration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the server symbol in Cline&lt;/li&gt;
&lt;li&gt;Select "Install" &lt;/li&gt;
&lt;li&gt;Click "Configure MCP servers"&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 3: Add MCP Server Configuration
&lt;/h3&gt;

&lt;p&gt;Add this configuration to connect to your running mcpdoc server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"docs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"uvx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"--from"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcpdoc"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcpdoc"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"--urls"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"FastMCP:https://gofastmcp.com/llms.txt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"--transport"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sse"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"--port"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"8082"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"--host"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"localhost"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Configuration Details:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multiple sources&lt;/strong&gt;: Add as many &lt;code&gt;"Label:URL"&lt;/code&gt; pairs as needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Port matching&lt;/strong&gt;: Ensure port matches your running server&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Labels&lt;/strong&gt;: Use descriptive names for each documentation source&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 4: Restart and Verify
&lt;/h3&gt;

&lt;p&gt;After configuration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Restart your MCP servers&lt;/li&gt;
&lt;li&gt;Verify the "docs MCP" server appears in your available tools&lt;/li&gt;
&lt;li&gt;Test with a query that requires current documentation&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Real-World Demo: Dynamic Documentation Fetching
&lt;/h2&gt;

&lt;p&gt;Here's what actually happens when you ask for help with MCP development:&lt;/p&gt;

&lt;h3&gt;
  
  
  The Query
&lt;/h3&gt;

&lt;p&gt;"Help with dynamic resources in MCP use"&lt;/p&gt;

&lt;h3&gt;
  
  
  The AI's Process
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Lists available sources&lt;/strong&gt;: Uses &lt;code&gt;list_doc_sources&lt;/code&gt; to see what llms.txt files are available&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fetches overview&lt;/strong&gt;: Makes &lt;code&gt;fetch_docs&lt;/code&gt; call to get the complete llms.txt content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Identifies specific needs&lt;/strong&gt;: Recognizes it needs more detail on "resources"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Makes targeted fetch&lt;/strong&gt;: Calls the specific resources documentation URL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provides accurate answer&lt;/strong&gt;: Uses the fetched information for correct implementation&lt;/li&gt;
&lt;/ol&gt;
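
&lt;p&gt;The steps above can be sketched in Python. These are toy stand-ins, not the real mcpdoc tools: the names &lt;code&gt;list_doc_sources&lt;/code&gt; and &lt;code&gt;fetch_docs&lt;/code&gt; match the tools described here, but the bodies just return canned data to illustrate the sequence of calls.&lt;/p&gt;

```python
# Toy stand-ins for the mcpdoc tools; the real ones perform HTTP fetches.
def list_doc_sources():
    # 1. Discover which llms.txt files the server exposes
    return {"FastMCP": "https://gofastmcp.com/llms.txt"}

def fetch_docs(url):
    # 2./4. Fetch either the llms.txt index or a specific doc page
    if url.endswith("llms.txt"):
        return "# FastMCP\n- [Resources](https://gofastmcp.com/servers/resources)"
    return "Details about dynamic resources..."

sources = list_doc_sources()
overview = fetch_docs(sources["FastMCP"])  # fetch the index
# 3. The model scans the overview and spots the relevant link
needs_resources = "Resources" in overview
# 4./5. Targeted fetch, then answer from real docs instead of guessing
detail = fetch_docs("https://gofastmcp.com/servers/resources")
```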

&lt;h3&gt;
  
  
  The Result
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;No hallucination. Perfect implementation with current best practices.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Multiple Source Configuration Example
&lt;/h2&gt;

&lt;p&gt;You can configure multiple llms.txt sources for comprehensive coverage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvx &lt;span class="nt"&gt;--from&lt;/span&gt; mcpdoc mcpdoc &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--urls&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="s2"&gt;"LangGraph:https://langchain-ai.github.io/langgraph/llms.txt"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="s2"&gt;"LangChain:https://python.langchain.com/llms.txt"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="s2"&gt;"FastMCP:https://gofastmcp.com/llms.txt"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="s2"&gt;"YourProject:https://yourproject.com/llms.txt"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--transport&lt;/span&gt; sse &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--port&lt;/span&gt; 8082 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--host&lt;/span&gt; localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives your AI assistant access to documentation from multiple projects simultaneously.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Approach Works Better
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Traditional AI Development Issues
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Outdated training data&lt;/li&gt;
&lt;li&gt;Deprecated API references
&lt;/li&gt;
&lt;li&gt;Hallucinated implementation details&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  With llms.txt + MCP Doc
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Current information&lt;/strong&gt;: Always references latest documentation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Targeted fetching&lt;/strong&gt;: Gets exactly the information needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No guesswork&lt;/strong&gt;: AI works with actual current docs, not assumptions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Benefits
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Free&lt;/strong&gt;: Unlike some context tools, this approach costs nothing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple sources&lt;/strong&gt;: Can integrate multiple llms.txt files simultaneously
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Current information&lt;/strong&gt;: Always references the latest documentation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured approach&lt;/strong&gt;: Systematic rather than guessing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IDE integration&lt;/strong&gt;: Works seamlessly with VS Code, Cursor, Windsurf&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;p&gt;Perfect for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MCP server development (as demonstrated)&lt;/li&gt;
&lt;li&gt;Framework-specific development where APIs change frequently&lt;/li&gt;
&lt;li&gt;Multi-project development requiring different documentation sources&lt;/li&gt;
&lt;li&gt;Team environments where everyone needs access to current docs&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Thanks for reading! Have a question? Reach out to me or comment below.&lt;/p&gt;

&lt;p&gt;🔗 CONNECT:&lt;br&gt;
Substack: &lt;a href="https://substack.com/@jhayer93" rel="noopener noreferrer"&gt;https://substack.com/@jhayer93&lt;/a&gt;&lt;br&gt;
Discord Community: &lt;a href="https://discord.gg/2wSR6GPgB5" rel="noopener noreferrer"&gt;https://discord.gg/2wSR6GPgB5&lt;/a&gt;&lt;br&gt;
LinkedIn: &lt;a href="https://www.linkedin.com/in/johannes-hayer-b8253a294/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/johannes-hayer-b8253a294/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>webdev</category>
      <category>vscode</category>
    </item>
    <item>
      <title>smolagents: The Simplest Way to Build Powerful AI Agents</title>
      <dc:creator>hayerhans</dc:creator>
      <pubDate>Wed, 09 Apr 2025 08:11:36 +0000</pubDate>
      <link>https://forem.com/hayerhans/smolagents-the-simplest-way-to-build-powerful-ai-agents-18o</link>
      <guid>https://forem.com/hayerhans/smolagents-the-simplest-way-to-build-powerful-ai-agents-18o</guid>
      <description>&lt;p&gt;Ever wonder how language models can search the web but struggle to run a simple calculation accurately? Or how they can analyze PDFs but can't make an API call? Welcome to the gap between traditional language models and AI agents.&lt;/p&gt;

&lt;p&gt;Traditional LLMs are impressive but passive - they generate text based on patterns learned during training. AI agents, on the other hand, can take action in the world. They can search the web, call APIs, execute code, and interact with databases - all to achieve specific goals.&lt;/p&gt;

&lt;p&gt;Enter smolagents: a lightweight library that strips AI agent development down to its essentials.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3qma5pjnjr0p0joqgxs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3qma5pjnjr0p0joqgxs.png" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Brain and Body Architecture
&lt;/h2&gt;

&lt;p&gt;As shown in the diagram above, an AI agent consists of two fundamental components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Brain&lt;/strong&gt;: An LLM that handles reasoning and planning, making decisions about which actions to take (hence often called the "Router")&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Body&lt;/strong&gt;: The agent's means of interacting with its environment - defined by the tools and actions available to it&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;smolagents brilliantly implements this architecture with minimal overhead. The entire library is around 1,000 lines of code, embodying the "keep it simple" philosophy that building powerful AI agents shouldn't require complex architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Workflows to True Agents
&lt;/h2&gt;

&lt;p&gt;Where does smolagents fit in the broader landscape of AI development patterns?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fza0dduq4tsgfsqwu3tkm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fza0dduq4tsgfsqwu3tkm.png" alt="Image description" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As this diagram illustrates, there's a spectrum:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the left, we have &lt;strong&gt;workflows&lt;/strong&gt; where the developer pre-defines the paths&lt;/li&gt;
&lt;li&gt;In the middle, the LLM has some control but within limited options&lt;/li&gt;
&lt;li&gt;On the right, true &lt;strong&gt;agents&lt;/strong&gt; that direct their own actions based on environmental feedback&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;smolagents leans toward the right of this spectrum, enabling you to build agents that can truly adapt to circumstances and take appropriate actions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Your First Agent in Three Steps
&lt;/h2&gt;

&lt;p&gt;Here's how simple it is to create a basic agent with smolagents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;smolagents.agent&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CodeAgent&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;smolagents.model&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;HfApiModel&lt;/span&gt;

&lt;span class="c1"&gt;# 1. Initialize a model
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;HfApiModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llama3/llama-3-8b-instruct&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# 2. Create an agent
&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CodeAgent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt;

&lt;span class="c1"&gt;# 3. Run the agent
&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Calculate the 50th Fibonacci number&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! This agent can already solve mathematical problems by generating and executing Python code. But the real power comes when you add custom tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Magic of Custom Tools
&lt;/h2&gt;

&lt;p&gt;smolagents makes creating custom tools refreshingly straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;smolagents.tool&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt;

&lt;span class="nd"&gt;@tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Get weather data for a city&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_weather_data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Returns weather data for a given city.

    Args:
        city: The name of the city (e.g., New York, London, Tokyo)

    Returns:
        A dictionary containing weather data
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="c1"&gt;# Implementation here...
&lt;/span&gt;    &lt;span class="k"&gt;pass&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just add the &lt;code&gt;@tool&lt;/code&gt; decorator to any Python function, provide good documentation, and your agent can now use this capability.&lt;/p&gt;
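
&lt;p&gt;To see why the documentation matters, here is a toy decorator (not smolagents' actual implementation) showing how a &lt;code&gt;@tool&lt;/code&gt;-style decorator can collect everything an agent needs directly from the function's name, type hints, and docstring:&lt;/p&gt;

```python
import inspect

def toy_tool(fn):
    # Collect the metadata an agent framework would hand to the LLM:
    # name, description (from the docstring), and argument types.
    fn.tool_spec = {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "inputs": {k: v.__name__ for k, v in fn.__annotations__.items()},
    }
    return fn

@toy_tool
def get_weather_data(city: str):
    """Returns weather data for a given city."""
    return {"city": city, "temperatures": [18, 21, 19]}

print(get_weather_data.tool_spec)
```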

&lt;h2&gt;
  
  
  Code-First: LLMs Doing What They Do Best
&lt;/h2&gt;

&lt;p&gt;smolagents uses a code-first approach instead of requiring JSON formats for actions. When your agent needs to analyze weather patterns across multiple cities, it might generate code like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Get weather for multiple cities
&lt;/span&gt;&lt;span class="n"&gt;cities&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Tokyo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;New York&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;London&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;city&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;cities&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;get_weather_data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Calculate average temperatures
&lt;/span&gt;&lt;span class="n"&gt;avg_temps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;temperatures&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;temperatures&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; 
             &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;city&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;info&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;items&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;

&lt;span class="c1"&gt;# Find the warmest city
&lt;/span&gt;&lt;span class="n"&gt;warmest_city&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;avg_temps&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;avg_temps&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The warmest city is &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;warmest_city&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; with an average of &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;avg_temps&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;warmest_city&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;°C&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is more natural for LLMs (which are trained extensively on code) and more powerful for solving complex problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use smolagents
&lt;/h2&gt;

&lt;p&gt;smolagents is perfect for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rapid prototyping of AI agents&lt;/li&gt;
&lt;li&gt;Projects requiring custom domain-specific tools&lt;/li&gt;
&lt;li&gt;Switching between different LLM providers&lt;/li&gt;
&lt;li&gt;Building systems that need to interact with their environment&lt;/li&gt;
&lt;li&gt;Situations where simplicity and readability matter more than complex architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, for highly regulated environments or production systems requiring extensive monitoring, you might need additional infrastructure around your smolagents implementation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started Today
&lt;/h2&gt;

&lt;p&gt;Ready to build your first AI agent? Installation is straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;smolagents
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you can quickly set up a model, define your tools, build your agent, and start interacting with it.&lt;/p&gt;

&lt;p&gt;Whether you're building a personal assistant, a data analysis tool, or exploring complex multi-agent systems, smolagents offers a clean, intuitive library to bring your ideas to life.&lt;/p&gt;

&lt;h2&gt;
  
  
  Take Your Agent Development Skills to the Next Level
&lt;/h2&gt;

&lt;p&gt;If you're ready to dive deeper into the world of AI agents, check out the comprehensive smolagents course at &lt;a href="https://ai-in-a-shell.com" rel="noopener noreferrer"&gt;ai-in-a-shell.com&lt;/a&gt;. The free course covers everything from basic agent development to advanced multi-agent systems, custom tool creation, and production monitoring.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>python</category>
      <category>smolagents</category>
    </item>
    <item>
      <title>Web Automation in Plain English: Browser Use Changes Everything</title>
      <dc:creator>hayerhans</dc:creator>
      <pubDate>Tue, 11 Feb 2025 20:40:20 +0000</pubDate>
      <link>https://forem.com/hayerhans/browser-use-ai-powered-web-automation-architecture-3g3b</link>
      <guid>https://forem.com/hayerhans/browser-use-ai-powered-web-automation-architecture-3g3b</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Hey everyone! Today we're diving into Browser Use, an incredible new library that's revolutionizing web automation. If you've ever struggled with Selenium or Playwright, dealing with selectors and timeouts, you're going to love this. Let's build something cool together!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Prefer video content? Check out my detailed walkthrough on YouTube: &lt;a href="https://youtu.be/RsGTT7J7Po8" rel="noopener noreferrer"&gt;https://youtu.be/RsGTT7J7Po8&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Setup Section
&lt;/h2&gt;

&lt;p&gt;First, let's get our environment ready. I'll walk you through this step by step:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a fresh project folder and open your favorite IDE&lt;/li&gt;
&lt;li&gt;Install UV - it's a super fast alternative to pip
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-LsSf&lt;/span&gt; https://astral.sh/uv/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Create a virtual environment with Python 3.11 (Browser Use requirement):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv venv &lt;span class="nt"&gt;--python&lt;/span&gt; 3.11
&lt;span class="nb"&gt;source&lt;/span&gt; .venv/bin/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Install Browser Use and Playwright:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv pip &lt;span class="nb"&gt;install &lt;/span&gt;browser-use
playwright &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating Our First Agent
&lt;/h2&gt;

&lt;p&gt;Let's write our first Browser Use agent. Here's the minimal code you need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;browser_use&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;

&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Search for latest news about AI&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What's cool here is that we only need two main parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;task&lt;/code&gt;: Just tell it what you want to do in plain English&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;llm&lt;/code&gt;: Specify which language model to use&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Advanced Configuration
&lt;/h2&gt;

&lt;p&gt;Now, let's look at some powerful features. Browser Use gives us tons of configuration options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your task&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;controller&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;custom_controller&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# For custom tool calling
&lt;/span&gt;    &lt;span class="n"&gt;use_vision&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;              &lt;span class="c1"&gt;# Enable vision capabilities
&lt;/span&gt;    &lt;span class="n"&gt;save_conversation_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;logs/conversation.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;  &lt;span class="c1"&gt;# Save chat logs
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;use_vision&lt;/code&gt; parameter is particularly interesting - it lets your agent actually see and understand what's on the webpage. Just keep in mind that for GPT-4o, each image processed costs about 800-1000 tokens (roughly $0.002 USD).&lt;/p&gt;
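&lt;p&gt;As a rough back-of-the-envelope check before enabling vision on long runs (the per-image token count and per-token price below are just the assumptions from the estimate above, not official pricing):&lt;/p&gt;

```python
# Rough cost estimate for vision-enabled runs.
# Assumptions: ~900 tokens per screenshot, one image ~= $0.002
# (figures taken from the estimate above; check current OpenAI pricing).
TOKENS_PER_IMAGE = 900
USD_PER_TOKEN = 0.002 / 900

def estimated_vision_cost(steps: int) -> float:
    """Approximate USD cost of screenshots for a run of `steps` agent steps."""
    return steps * TOKENS_PER_IMAGE * USD_PER_TOKEN

print(round(estimated_vision_cost(50), 4))  # a 50-step run: about $0.10
```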

&lt;h2&gt;
  
  
  Working with Browser Sessions
&lt;/h2&gt;

&lt;p&gt;One of the coolest features is reusing a browser session across tasks. You can even connect the agent to your existing Chrome instance, which is super helpful for situations where you need to be logged in. Here's the basic pattern for reusing a browser:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;browser_use&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Browser&lt;/span&gt;

&lt;span class="c1"&gt;# Create and reuse a browser instance
&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Browser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;task1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;browser&lt;/span&gt;  &lt;span class="c1"&gt;# Browser instance will be reused
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Don't forget to close when done
&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;browser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Structured Output
&lt;/h2&gt;

&lt;p&gt;If you need structured data, Browser Use has you covered. You can define custom output formats using Pydantic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pydantic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BaseModel&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BaseModel&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;post_title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;post_url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;num_comments&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;
    &lt;span class="n"&gt;hours_since_post&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Posts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BaseModel&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Post&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="n"&gt;controller&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Controller&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output_model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Posts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
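&lt;p&gt;Once the run finishes, the agent's final result comes back as a JSON string shaped like your model. A minimal sketch of consuming it with just the standard library (the payload below is invented for illustration; in a real run the string would come from the agent's history):&lt;/p&gt;

```python
import json

# Hypothetical final result payload, shaped like the Posts model above.
raw = '{"posts": [{"post_title": "Example post", "post_url": "https://example.com", "num_comments": 12, "hours_since_post": 3}]}'

data = json.loads(raw)
first = data["posts"][0]
print(first["post_title"], first["num_comments"])
```

&lt;p&gt;In practice you would let Pydantic do the validation, e.g. &lt;code&gt;Posts.model_validate_json(...)&lt;/code&gt; on the agent's final result (assuming Pydantic v2).&lt;/p&gt;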



&lt;h2&gt;
  
  
  Getting Results and History
&lt;/h2&gt;

&lt;p&gt;After running your agent, you get access to tons of useful information:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;history&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Access various types of information
&lt;/span&gt;&lt;span class="n"&gt;urls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;urls&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;              &lt;span class="c1"&gt;# URLs visited
&lt;/span&gt;&lt;span class="n"&gt;screenshots&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;screenshots&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;       &lt;span class="c1"&gt;# Screenshot paths
&lt;/span&gt;&lt;span class="n"&gt;actions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;action_names&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;      &lt;span class="c1"&gt;# Actions taken
&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;extracted_content&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;# Extracted data
&lt;/span&gt;&lt;span class="n"&gt;errors&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;           &lt;span class="c1"&gt;# Any errors
&lt;/span&gt;&lt;span class="n"&gt;model_actions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;model_actions&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;     &lt;span class="c1"&gt;# All actions with parameters
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Bonus: Using a Planner Model
&lt;/h2&gt;

&lt;p&gt;For complex tasks, you can even use a separate model for high-level planning:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;

&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;planner_llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;o3-mini&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your task&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;planner_llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;planner_llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;           &lt;span class="c1"&gt;# Planning model
&lt;/span&gt;    &lt;span class="n"&gt;use_vision_for_planner&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;      &lt;span class="c1"&gt;# Disable vision for planner
&lt;/span&gt;    &lt;span class="n"&gt;planner_interval&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;                 &lt;span class="c1"&gt;# Plan every 4 steps
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup lets you use a smaller, cheaper model for planning while keeping the powerful GPT-4o for execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing
&lt;/h2&gt;

&lt;p&gt;That's it for today's tutorial! We've covered everything from basic setup to advanced features like browser session management and structured output. Drop a comment below if you'd like to see more Browser Use tutorials; maybe something about custom functions or system prompts?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>opensource</category>
    </item>
    <item>
      <title>StarSense - new way of interacting with repos</title>
      <dc:creator>hayerhans</dc:creator>
      <pubDate>Sun, 10 Nov 2024 13:10:29 +0000</pubDate>
      <link>https://forem.com/hayerhans/starsense-new-way-of-interacting-with-repos-35df</link>
      <guid>https://forem.com/hayerhans/starsense-new-way-of-interacting-with-repos-35df</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/pgai"&gt;Open Source AI Challenge with pgai and Ollama&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built &lt;a href="https://github.com/XamHans/starsense" rel="noopener noreferrer"&gt;StarSense&lt;/a&gt;, an intelligent chat interface that helps developers easily search and discover their starred GitHub repositories using natural language. The project leverages RAG (Retrieval-Augmented Generation) technology to create a seamless conversation experience with your GitHub stars.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/XamHans/starsense" rel="noopener noreferrer"&gt;StarSense&lt;/a&gt; automatically processes your starred repositories by:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Authenticating with GitHub via OAuth&lt;/li&gt;
&lt;li&gt;Fetching all starred repositories&lt;/li&gt;
&lt;li&gt;Extracting and processing README content&lt;/li&gt;
&lt;li&gt;Storing repository information in PostgreSQL&lt;/li&gt;
&lt;li&gt;Generating embeddings using pgai vectorizer&lt;/li&gt;
&lt;li&gt;Enabling natural language queries using vector similarity search&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Demo/Repo
&lt;/h2&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/XamHans/starsense" rel="noopener noreferrer"&gt;https://github.com/XamHans/starsense&lt;/a&gt;&lt;br&gt;
Video: &lt;a href="https://youtu.be/Uf1uzI0e3jM" rel="noopener noreferrer"&gt;https://youtu.be/Uf1uzI0e3jM&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The application features a clean chat interface where users can interact with their starred repositories naturally:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwi5kpm3btbcluy09mi1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwi5kpm3btbcluy09mi1x.png" alt="Image description" width="800" height="543"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fla0lh1ba8n1tc3m22fbk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fla0lh1ba8n1tc3m22fbk.png" alt="Repository Management" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The project utilizes a robust architecture integrating Timescale, pgai, and Ollama:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvk3e7qvg8lnnq2ksy11.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvk3e7qvg8lnnq2ksy11.PNG" alt="Architecture Diagram" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tools Used
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Frontend
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Next.js 14&lt;/strong&gt;: Latest version of the React framework for building the web interface&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TypeScript&lt;/strong&gt;: For type-safe code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TailwindCSS&lt;/strong&gt;: For styling and responsive design&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NextAuth.js&lt;/strong&gt;: Handling GitHub OAuth authentication&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WebSocket Client&lt;/strong&gt;: Real-time updates during repository ingestion&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Backend
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;FastAPI&lt;/strong&gt;: Modern Python web framework for building the API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WebSocket&lt;/strong&gt;: Real-time connection for providing ingest phase status updates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Poetry&lt;/strong&gt;: Python dependency management&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AI and Vector Search
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;pgai Vectorizer&lt;/strong&gt;: Implemented to generate embeddings for repository content using the following configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;create_vectorizer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="s1"&gt;'public.repositories'&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;regclass&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;embedding_openai&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'text-embedding-3-small'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1536&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_key_name&lt;/span&gt;&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;&lt;span class="s1"&gt;'OPENAI_API_KEY'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="n"&gt;chunking&lt;/span&gt;&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chunking_recursive_character_text_splitter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'readme'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="n"&gt;formatting&lt;/span&gt;&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;formatting_python_template&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'name: $name url: $url content: $chunk'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;AI Extensions&lt;/strong&gt;: The project utilizes multiple Timescale extensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ai extension for core AI functionality&lt;/li&gt;
&lt;li&gt;vector extension for similarity search&lt;/li&gt;
&lt;li&gt;vectorscale for scalable vector operations&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ollama&lt;/strong&gt;: Used for generating natural language responses based on retrieved repository content, specifically utilizing the llama3 model.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Database &amp;amp; Infrastructure
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;TimescaleDB&lt;/strong&gt;: PostgreSQL-based database with vector search capabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub API&lt;/strong&gt;: For fetching starred repositories and README content&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Building StarSense has been an exciting journey in combining modern AI technologies with practical developer tools. The integration of pgai's vectorizer with Ollama's language models creates a powerful synergy that makes repository discovery feel natural and intuitive.&lt;/p&gt;

&lt;p&gt;Some key learnings and highlights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The pgai vectorizer dramatically simplified the embedding process by:

&lt;ul&gt;
&lt;li&gt;Automatically handling document chunking and preprocessing&lt;/li&gt;
&lt;li&gt;Managing embedding generation and storage&lt;/li&gt;
&lt;li&gt;Eliminating the need for separate embedding infrastructure&lt;/li&gt;
&lt;li&gt;Seamlessly integrating with existing PostgreSQL workflows&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Timescale's AI extensions provided a robust foundation for vector operations&lt;/li&gt;

&lt;li&gt;Ollama's open-source models offered great performance for natural language generation&lt;/li&gt;

&lt;li&gt;The WebSocket implementation enabled real-time feedback during the repository ingestion process&lt;/li&gt;

&lt;li&gt;The combination of Next.js 14 and FastAPI created a performant and developer-friendly stack&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This submission qualifies for the following prize categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open-source Models from Ollama (utilizing llama3)&lt;/li&gt;
&lt;li&gt;Vectorizer Vibe (implementing pgai vectorizer)&lt;/li&gt;
&lt;li&gt;All the Extensions (using ai, vector, and vectorscale extensions)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devchallenge</category>
      <category>pgaichallenge</category>
      <category>database</category>
      <category>ai</category>
    </item>
    <item>
      <title>Machine Learning for Web devs</title>
      <dc:creator>hayerhans</dc:creator>
      <pubDate>Thu, 29 Aug 2024 10:04:12 +0000</pubDate>
      <link>https://forem.com/hayerhans/machine-learning-for-web-devs-24gh</link>
      <guid>https://forem.com/hayerhans/machine-learning-for-web-devs-24gh</guid>
      <description>&lt;p&gt;When I started my career as a Software Engineer, I felt comfortable in the realm of web development. Whenever I heard about someone working in AI or ML, I couldn't help but admire them. I assumed these people possessed a deep understanding of mathematics and statistics that enabled them to do such impressive work. However, with the rise of ChatGPT and its incredible capabilities, I decided to dig deeper into the topic. What I discovered was eye-opening: you don't need to be a genius to work with ML or AI!&lt;/p&gt;

&lt;p&gt;Like any discipline, there are multiple sub-domains within ML and AI that require different skill sets. As an engineer and developer, I approach this field from the perspective of how to integrate it into modern web applications. I recently passed the Azure AI-900 certification, and I realized that my fellow developers could benefit from this knowledge too. So, let's explore what ML is all about!&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Machine Learning?
&lt;/h2&gt;

&lt;p&gt;At its core, Machine Learning (ML) is a subset of artificial intelligence that focuses on creating algorithms and models that can learn from data. Unlike traditional programming where we explicitly define rules, ML algorithms learn patterns and relationships from data to make predictions or decisions.&lt;/p&gt;

&lt;p&gt;To put it in mathematical terms, ML is about learning a function f(x) = y, where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;x represents the input data (features)&lt;/li&gt;
&lt;li&gt;y represents the output (predictions or labels)&lt;/li&gt;
&lt;li&gt;f is the learned function or model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key difference from traditional programming is that the function f is not explicitly coded but learned from data during a process called training.&lt;/p&gt;
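&lt;p&gt;To make that concrete, here's a tiny sketch (pure Python, invented toy data) of "learning" f for a one-feature linear model via the closed-form least-squares solution:&lt;/p&gt;

```python
# Toy training data: x = house size in square meters, y = price in $1000s.
# (Invented numbers; here y happens to be exactly 3 * x.)
xs = [50, 80, 100, 120]
ys = [150, 240, 300, 360]

# Closed-form least squares for the model y = w * x + b.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
b = mean_y - w * mean_x

def f(x):
    """The learned function: maps a feature to a predicted label."""
    return w * x + b

print(round(f(90)))  # prediction for an unseen 90 m^2 house: 270
```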

&lt;h2&gt;
  
  
  Key Terminology
&lt;/h2&gt;

&lt;p&gt;Before we dive deeper, let's familiarize ourselves with some essential terms:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Features&lt;/strong&gt;: These are the observed attributes or input data. For example, if we're predicting house prices, features might include the size of the house, its location, and the number of bedrooms.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Labels&lt;/strong&gt;: These are the outcomes that the model tries to predict. In our house price example, the label would be the actual price of the house.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjx3lao4vccotqh5nuw4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjx3lao4vccotqh5nuw4.png" alt="Image description" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Machine Learning
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6k2r5cnazjtz72xwj3yw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6k2r5cnazjtz72xwj3yw.png" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are several types of machine learning, but we'll focus on the two main categories:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Supervised Learning&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses training data that includes both feature values and known label values&lt;/li&gt;
&lt;li&gt;Format: [X1, X2, X3], Y&lt;/li&gt;
&lt;li&gt;Used when you have historical data with known outcomes&lt;/li&gt;
&lt;li&gt;Example: Predicting house prices based on features like location, size, etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Unsupervised Learning&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses training data that includes only feature values, without labels&lt;/li&gt;
&lt;li&gt;Format: [X1, X2, X3]&lt;/li&gt;
&lt;li&gt;Used to find patterns or relationships in data without predefined outcomes&lt;/li&gt;
&lt;li&gt;Example: Grouping similar customers based on demographics and behavior&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9n56k0ccrpfl4p38152.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9n56k0ccrpfl4p38152.png" alt="Supervised vs Unsupervised Learning" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Training Process
&lt;/h2&gt;

&lt;p&gt;Now that we understand the basics, let's look at how ML models are actually trained:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faytqmhnsrqhj582yfi15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faytqmhnsrqhj582yfi15.png" alt="Image description" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Collection&lt;/strong&gt;: Gather a large dataset relevant to your problem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Preprocessing&lt;/strong&gt;: Clean and prepare your data, handling missing values and outliers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Split the Data&lt;/strong&gt;: Divide your dataset into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Training set: Used to teach the model (typically 70-80% of the data)&lt;/li&gt;
&lt;li&gt;Validation set: Used to tune the model and prevent overfitting (typically 10-15%)&lt;/li&gt;
&lt;li&gt;Test set: Used to evaluate the final model performance (typically 10-15%)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Choose an Algorithm&lt;/strong&gt;: Select an ML algorithm appropriate for your problem. The beauty is that these algorithms are often interchangeable, allowing you to experiment with different approaches.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Train the Model&lt;/strong&gt;: Feed the training data into your chosen algorithm, allowing it to learn the patterns in the data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Validate and Tune&lt;/strong&gt;: Use the validation set to fine-tune your model's parameters and prevent overfitting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test and Evaluate&lt;/strong&gt;: Finally, use the test set to assess how well your model generalizes to new, unseen data.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
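&lt;p&gt;The split in step 3 can be sketched in a few lines (an 80/10/10 split on a stand-in dataset; the proportions are the typical values mentioned above):&lt;/p&gt;

```python
import random

data = list(range(100))  # stand-in for 100 labeled examples
random.seed(42)          # shuffle reproducibly before splitting
random.shuffle(data)

n = len(data)
train = data[: int(0.8 * n)]              # 80% for training
val = data[int(0.8 * n) : int(0.9 * n)]   # 10% for validation / tuning
test = data[int(0.9 * n) :]               # 10% for the final evaluation

print(len(train), len(val), len(test))  # 80 10 10
```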

&lt;h2&gt;
  
  
  Want to Learn More?
&lt;/h2&gt;

&lt;p&gt;I'm excited to announce that I'm creating a &lt;strong&gt;&lt;em&gt;free&lt;/em&gt;&lt;/strong&gt; course on Azure AI-900! If you're interested in diving deeper into AI and ML, particularly within the Azure ecosystem, this course is perfect for you. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://preview.mailerlite.io/preview/993462/sites/127724181855929401/ai-900" rel="noopener noreferrer"&gt;Sign up here to get notified when the course launches!&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading, and stay tuned for the next post where we'll dive into computer vision capabilities. The world of AI and ML is vast and exciting, and I can't wait to explore it further with you!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>web</category>
    </item>
    <item>
      <title>How to get up &amp; running a LLM locally - in 5 minutes</title>
      <dc:creator>hayerhans</dc:creator>
      <pubDate>Sat, 23 Mar 2024 14:51:31 +0000</pubDate>
      <link>https://forem.com/hayerhans/how-to-get-up-running-a-llm-locally-in-5-minutes-kkd</link>
      <guid>https://forem.com/hayerhans/how-to-get-up-running-a-llm-locally-in-5-minutes-kkd</guid>
      <description>&lt;p&gt;Video Version:&lt;br&gt;
&lt;a href="https://youtube.com/shorts/y0NWVUsfLiU?si=x16bKEoHLfk87nC2"&gt;https://youtube.com/shorts/y0NWVUsfLiU?si=x16bKEoHLfk87nC2&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  What is Ollama?
&lt;/h2&gt;

&lt;p&gt;It's a lightweight framework designed for those who wish to experiment with, customize, and deploy large language models without the hassle of cloud platforms. With Ollama, the power of AI is distilled into a simple, local package, allowing developers and hobbyists alike to explore the vast capabilities of machine learning models.&lt;/p&gt;
&lt;h2&gt;
  
  
  Setting Up Ollama: A Step-by-Step Approach
&lt;/h2&gt;

&lt;p&gt;First, download Ollama for your OS here:&lt;br&gt;
&lt;a href="https://ollama.com/download"&gt;https://ollama.com/download&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then run the model you want with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ollama run llama2&lt;/code&gt;&lt;/p&gt;
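&lt;p&gt;Once a model is running, Ollama also serves a local REST API (by default on port 11434). Here's a minimal sketch of calling its /api/generate endpoint from Python with only the standard library; the host and default model name are assumptions for illustration:&lt;/p&gt;

```python
import json
import urllib.request

def build_payload(prompt, model="llama2"):
    """JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

def ask_ollama(prompt, model="llama2", host="http://localhost:11434"):
    """Send one generation request to a locally running Ollama server."""
    req = urllib.request.Request(
        host + "/api/generate",
        data=build_payload(prompt, model).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the Ollama server to be running):
#   print(ask_ollama("Why is the sky blue?"))
```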
&lt;h2&gt;
  
  
  Model library
&lt;/h2&gt;

&lt;p&gt;Ollama supports a long list of models, available at &lt;a href="https://ollama.com/library"&gt;ollama.com/library&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here are some example models that can be downloaded:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Parameters&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;th&gt;Download Command&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Llama 2&lt;/td&gt;
&lt;td&gt;7B&lt;/td&gt;
&lt;td&gt;3.8GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run llama2&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mistral&lt;/td&gt;
&lt;td&gt;7B&lt;/td&gt;
&lt;td&gt;4.1GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run mistral&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dolphin Phi&lt;/td&gt;
&lt;td&gt;2.7B&lt;/td&gt;
&lt;td&gt;1.6GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run dolphin-phi&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phi-2&lt;/td&gt;
&lt;td&gt;2.7B&lt;/td&gt;
&lt;td&gt;1.7GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run phi&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Neural Chat&lt;/td&gt;
&lt;td&gt;7B&lt;/td&gt;
&lt;td&gt;4.1GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run neural-chat&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Starling&lt;/td&gt;
&lt;td&gt;7B&lt;/td&gt;
&lt;td&gt;4.1GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run starling-lm&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code Llama&lt;/td&gt;
&lt;td&gt;7B&lt;/td&gt;
&lt;td&gt;3.8GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run codellama&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Llama 2 Uncensored&lt;/td&gt;
&lt;td&gt;7B&lt;/td&gt;
&lt;td&gt;3.8GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run llama2-uncensored&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Llama 2 13B&lt;/td&gt;
&lt;td&gt;13B&lt;/td&gt;
&lt;td&gt;7.3GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run llama2:13b&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Llama 2 70B&lt;/td&gt;
&lt;td&gt;70B&lt;/td&gt;
&lt;td&gt;39GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run llama2:70b&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Orca Mini&lt;/td&gt;
&lt;td&gt;3B&lt;/td&gt;
&lt;td&gt;1.9GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run orca-mini&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vicuna&lt;/td&gt;
&lt;td&gt;7B&lt;/td&gt;
&lt;td&gt;3.8GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run vicuna&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LLaVA&lt;/td&gt;
&lt;td&gt;7B&lt;/td&gt;
&lt;td&gt;4.5GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run llava&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemma&lt;/td&gt;
&lt;td&gt;2B&lt;/td&gt;
&lt;td&gt;1.4GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run gemma:2b&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemma&lt;/td&gt;
&lt;td&gt;7B&lt;/td&gt;
&lt;td&gt;4.8GB&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ollama run gemma:7b&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Memory Requirements&lt;/strong&gt;: &lt;br&gt;
Keep in mind, running these models isn't light on resources. Ensure you have at least 8 GB of RAM for 7B models, and more for the larger ones, to keep your AI running smoothly.&lt;/p&gt;
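As a rough sanity check on those download sizes, you can estimate a quantized model's RAM needs from its parameter count. The rule of thumb below is an assumption (roughly half a byte per parameter for ~4-bit quantized weights, plus overhead), not an official figure:

```python
def approx_model_ram_gb(params_billions: float, bytes_per_param: float = 0.55) -> float:
    """Back-of-the-envelope RAM estimate (GB) for a quantized model."""
    return round(params_billions * bytes_per_param, 1)

print(approx_model_ram_gb(7))   # close to the ~4 GB sizes in the table above
print(approx_model_ram_gb(70))  # tens of GB - in line with the 39 GB llama2:70b entry
```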
&lt;h2&gt;
  
  
  Customization
&lt;/h2&gt;

&lt;p&gt;With Ollama, you're not just running models; you're tailoring them. Import models with ease and customize prompts to fit your specific needs. Fancy a model that responds as Mario? Ollama makes it possible with simple command lines:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Customize a prompt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Models from the Ollama library can be customized with a prompt. For example, to customize the llama2 model:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ollama pull llama2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Create a Modelfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; llama2&lt;/span&gt;


&lt;span class="c"&gt;# set the temperature to 1 [higher is more creative, lower is more coherent]&lt;/span&gt;
PARAMETER temperature 1

&lt;span class="c"&gt;# set the system message&lt;/span&gt;

SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
""" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create and run the model:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ollama create mario -f ./Modelfile&lt;br&gt;
ollama run mario&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; hi
Hello! It's your friend Mario.
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
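If you prefer to script the customization, here is a small Python sketch that writes the same Modelfile programmatically; the `ollama create`/`ollama run` calls are commented out because they assume the Ollama CLI is installed:

```python
import pathlib

# The same Modelfile as above, written programmatically.
modelfile = '''FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
'''
pathlib.Path("Modelfile").write_text(modelfile)

# Assumes the `ollama` CLI is on your PATH:
# import subprocess
# subprocess.run(["ollama", "create", "mario", "-f", "./Modelfile"], check=True)
# subprocess.run(["ollama", "run", "mario"], check=True)
```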

&lt;p&gt;If you liked this content also have a look at my &lt;a href="https://www.youtube.com/channel/UC_Jd57_cBUXG_byLVbLTluA"&gt;YouTube channel&lt;/a&gt; &lt;/p&gt;

</description>
      <category>llm</category>
      <category>chatgpt</category>
      <category>mistral</category>
      <category>ollama</category>
    </item>
    <item>
      <title>This is why Voiceflow is winning in the AI Automation area</title>
      <dc:creator>hayerhans</dc:creator>
      <pubDate>Tue, 10 Oct 2023 06:12:22 +0000</pubDate>
      <link>https://forem.com/hayerhans/this-is-why-voiceflow-makes-a-winning-in-the-ai-automation-area-5ai</link>
      <guid>https://forem.com/hayerhans/this-is-why-voiceflow-makes-a-winning-in-the-ai-automation-area-5ai</guid>
      <description>&lt;h2&gt;
  
  
  What is Voiceflow and why is it winning?
&lt;/h2&gt;

&lt;p&gt;Voiceflow is a powerful platform that excels in the field of AI Automation. It offers a wide range of tools and features that enable users to create sophisticated conversational experiences with ease. Besides Voiceflow, there are other platforms like &lt;a href="https://botpress.com/" rel="noopener noreferrer"&gt;BotPress&lt;/a&gt; and &lt;a href="https://rasa.com/" rel="noopener noreferrer"&gt;Rasa&lt;/a&gt; that are also great for creating chatbots. But what makes Voiceflow so special? It's the templates they provide.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Voiceflow templates?
&lt;/h2&gt;

&lt;p&gt;Voiceflow templates are powerful tools that can significantly improve the speed and efficiency of building chatbots. By utilizing these templates, you can learn from other creators and leverage their expertise to create sophisticated conversational experiences.&lt;/p&gt;

&lt;p&gt;With Voiceflow templates, you can explore different use cases and industries, such as advanced AI FAQ support bots, AI lead generation, personal productivity assistants, language learning chatbots and more.&lt;/p&gt;

&lt;p&gt;By starting with a template, you can save time and effort in the development process. Here are 5 templates that you can use for FREE!&lt;/p&gt;




&lt;p&gt;💡 &lt;strong&gt;Are you an AI bot builder looking for more gigs? Or are you an agency struggling to find qualified technical talent? If so, you're not alone.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That's where a reverse job platform for AI bot builders comes in. This type of platform allows bot builders to create profiles and showcase their skills and experience. Agencies can then browse these profiles and contact bot builders directly to discuss potential projects.&lt;/p&gt;

&lt;p&gt;Pre-sign up now: &lt;a href="https://botdevs.io" rel="noopener noreferrer"&gt;https://botdevs.io&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  5 free templates to enhance your Voiceflow bot creation process
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Template 1: Advanced AI FAQ Support Bot&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Description: This template enables you to create an AI-powered chatbot that offers advanced support by utilizing your own documentation. It utilizes the Knowledge Base feature to extract information from websites or documents, facilitating dynamic conversations. The bot also uses sentiment analysis to assess user responses and provide appropriate replies, drawing information from your data sources.&lt;/li&gt;
&lt;li&gt;Get Started: &lt;a href="https://creator.voiceflow.com/dashboard?import=64a5a7608e09080007968ddf" rel="noopener noreferrer"&gt;Start with Template&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Template 2: AI Lead Generation&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Description: Learn how to design a conversational experience and automate the process of sending lead information to a Google spreadsheet. This tutorial is created by Vinera AI, a company that empowers growth through time-saving AI automation. Vinera AI partners with businesses to maximize efficiency by providing AI-powered tools such as chatbots and automations, helping companies save valuable time and resources.&lt;/li&gt;
&lt;li&gt;Get Started: &lt;a href="https://creator.voiceflow.com/dashboard?import=64b570d96a6085000713c1cc" rel="noopener noreferrer"&gt;Start with Template&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Template 3: Knowledge about your Shopify products&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Description: This is a full e-commerce chatbot template that includes components for creating a custom knowledge questionnaire, collecting leads with Zapier, and connecting to Shopify's API. You can ask questions about your products, and the AI answers based on your product knowledge.&lt;/li&gt;
&lt;li&gt;Get Started: &lt;a href="https://erdincciftci.gumroad.com/l/shopifyaichatbot?layout=profile" rel="noopener noreferrer"&gt;Start with Template&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Template 4: Intelligent AI buddy with conversational memory&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Description: Learn how to use long term memory in Voiceflow to create an intelligent human-like AI buddy. This template will guide you through building a chatbot with a conversational memory that can remember user preferences, previous interactions, and provide personalized responses.&lt;/li&gt;
&lt;li&gt;Get Started: &lt;a href="https://creator.voiceflow.com/dashboard?import=64cd57d2e8300a00077e234a" rel="noopener noreferrer"&gt;Start with Template&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Template 5: Survey Feedback AI&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Description: This tutorial uses AI and sentiment analysis to assess a user's satisfaction through a feedback survey. Based on their response, the chatbot will respond accordingly.&lt;/li&gt;
&lt;li&gt;Get Started: &lt;a href="https://creator.voiceflow.com/dashboard?import=64badd7db97919000713cd68" rel="noopener noreferrer"&gt;Start with Template&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;If you need help with any of these templates, visit the Voiceflow Discord community: &lt;a href="https://discord.com/invite/JXRbEv7nD2" rel="noopener noreferrer"&gt;https://discord.com/invite/JXRbEv7nD2&lt;/a&gt;&lt;/p&gt;

</description>
      <category>bot</category>
      <category>chatbot</category>
      <category>ai</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>How to get text from any YT video | Free transcribe program 🖹</title>
      <dc:creator>hayerhans</dc:creator>
      <pubDate>Fri, 11 Aug 2023 08:24:34 +0000</pubDate>
      <link>https://forem.com/hayerhans/how-to-get-text-from-any-yt-video-free-transcribe-program-53k6</link>
      <guid>https://forem.com/hayerhans/how-to-get-text-from-any-yt-video-free-transcribe-program-53k6</guid>
      <description>&lt;p&gt;Check out the video tutorial: &lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=b9oyBebJCK0" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=b9oyBebJCK0&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking to transcribe text from YouTube videos effortlessly?  In this step-by-step tutorial, I will guide you through the process of setting up Video2Text on your local computer for a seamless text extraction experience.&lt;/p&gt;

&lt;p&gt;🔗 Get started with Video2Text:&lt;/p&gt;

&lt;p&gt;📂 Clone the Video2Text repository: &lt;code&gt;git clone https://github.com/XamHans/video-2-text.git&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;📁 Navigate to the &lt;code&gt;webserver&lt;/code&gt; directory&lt;/p&gt;

&lt;p&gt;🔧 Install required dependencies: &lt;code&gt;pip3 install -r requirements.txt&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;➡️ If you encounter a 'streamlit' recognition error, execute this command in your terminal: &lt;code&gt;export PATH="$HOME/.local/bin:$PATH"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;🚀 Launch the Video2Text app: &lt;code&gt;streamlit run app.py&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;🌐 Access the Video2Text interface: Open your browser and go to &lt;a href="http://localhost:8501" rel="noopener noreferrer"&gt;http://localhost:8501&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmna22osy2gfilua9j4r2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmna22osy2gfilua9j4r2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🆓 Video2Text is free to use and open-source:&lt;br&gt;
GitHub Repository: &lt;a href="https://github.com/XamHans/video-2-text.git" rel="noopener noreferrer"&gt;https://github.com/XamHans/video-2-text.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 Helpful Links:&lt;br&gt;
PyTube Documentation: &lt;a href="https://pytube.io/en/latest/" rel="noopener noreferrer"&gt;https://pytube.io/en/latest/&lt;/a&gt;&lt;br&gt;
OpenAI Whisper Documentation: &lt;a href="https://github.com/openai/whisper" rel="noopener noreferrer"&gt;https://github.com/openai/whisper&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading &lt;/p&gt;

&lt;p&gt;&lt;a href="https://jhayer.tech" rel="noopener noreferrer"&gt;Johannes&lt;/a&gt;&lt;/p&gt;

</description>
      <category>whisper</category>
      <category>streamlit</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>🤖 How to Integrate BotPress Bot into Next.js 13 | 🚀 Step-by-Step Guide!</title>
      <dc:creator>hayerhans</dc:creator>
      <pubDate>Wed, 09 Aug 2023 20:48:42 +0000</pubDate>
      <link>https://forem.com/hayerhans/how-to-integrate-botpress-bot-into-nextjs-13-step-by-step-guide-3o56</link>
      <guid>https://forem.com/hayerhans/how-to-integrate-botpress-bot-into-nextjs-13-step-by-step-guide-3o56</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ssegzbzt5ivbcpw6cc2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ssegzbzt5ivbcpw6cc2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🍕 Get the code here:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/XamHans/botpress-nextjs" rel="noopener noreferrer"&gt;https://github.com/XamHans/botpress-nextjs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;⛔ Make sure that your bot is deployed! &lt;/p&gt;

&lt;p&gt;📹 Video Tutorial: &lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=zUTFqEeA0NI&amp;amp;ab_channel=HayerHans" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=zUTFqEeA0NI&amp;amp;ab_channel=HayerHans&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📚 Key Steps Covered:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;🔗 Clone the repo with: &lt;code&gt;git clone https://github.com/XamHans/botpress-nextjs.git&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;📁 &lt;code&gt;cd&lt;/code&gt; into the folder and install the dependencies with &lt;code&gt;yarn install&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;💻 Start the application with &lt;code&gt;yarn dev&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;📝 Get the Botpress Webchat snippet from the Botpress website&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxfdys20oomgf1py5kdr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxfdys20oomgf1py5kdr.png" alt="Image description"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; 🧩 Navigate to the &lt;code&gt;layout.tsx&lt;/code&gt; file. In the head section, use the Script component to initialize Botpress&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qhlstfc68suvipc7swi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qhlstfc68suvipc7swi.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;Script
          src="https://cdn.botpress.cloud/webchat/v0/inject.js"
          onLoad={() =&amp;gt; {
            initBotpress();
          }}
        /&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;initBotpress&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;botpressWebChat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;composerPlaceholder&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Chat with bot&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;botConversationDescription&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;This chatbot was built surprisingly fast with Botpress&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;botId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;YOUR_BOT_ID&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;hostUrl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;YOUR_BOT_HOST_URL&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;messagingUrl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://messaging.botpress.cloud&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;clientId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;YOUR_CLIENT_ID&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;🛠️ Make sure to add &lt;code&gt;"use client"&lt;/code&gt; at the top of &lt;code&gt;layout.tsx&lt;/code&gt; to make it a client component.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftq23gtc1y41q5g6ydzjm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftq23gtc1y41q5g6ydzjm.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have successfully integrated your bot into your Next.js 13 app! 🎉&lt;/p&gt;

</description>
      <category>botpress</category>
      <category>ai</category>
      <category>nextjs</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How to build a Telegram Bot with ChatGPT integration.</title>
      <dc:creator>hayerhans</dc:creator>
      <pubDate>Mon, 10 Jul 2023 17:37:15 +0000</pubDate>
      <link>https://forem.com/hayerhans/how-to-build-a-telegram-bot-with-chatgpt-integration-5elp</link>
      <guid>https://forem.com/hayerhans/how-to-build-a-telegram-bot-with-chatgpt-integration-5elp</guid>
      <description>&lt;p&gt;This tutorial explains step-by-step how to build a custom Telegram chatbot that can interact with OpenAI ChatGPT. The tutorial is written in Python and uses the python-telegram-bot and openai packages.&lt;/p&gt;

&lt;h2&gt;
  
  
  1) Create Telegram Chatbot
&lt;/h2&gt;

&lt;p&gt;Open your IDE and create a file named &lt;code&gt;telegram-bot.py&lt;/code&gt;  &lt;/p&gt;

&lt;p&gt;We are going to use the &lt;a href="https://github.com/python-telegram-bot/python-telegram-bot" rel="noopener noreferrer"&gt;python-telegram-bot&lt;/a&gt; package, which will help us create the Telegram bot. Make sure to install it with&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip3 install python-telegram-bot python-dotenv&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;After installing, paste this code into your &lt;code&gt;telegram-bot.py&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dotenv&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;telegram&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Update&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;telegram.ext&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ApplicationBuilder&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;CommandHandler&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ContextTypes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                          &lt;span class="n"&gt;MessageHandler&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;basicConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;%(asctime)s - %(name)s - %(levelname)s - %(message)s&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;logging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INFO&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;TELEGRAM_API_TOKEN&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TELEGRAM_API_TOKEN&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Update&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ContextTypes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DEFAULT_TYPE&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bot&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chat_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;effective_chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m a bot, please talk to me!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;echo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Update&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ContextTypes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DEFAULT_TYPE&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bot&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chat_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;effective_chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;application&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ApplicationBuilder&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;token&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;TELEGRAM_API_TOKEN&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;build&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;start_handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CommandHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;start&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;echo_handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MessageHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TEXT&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;~&lt;/span&gt;&lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;COMMAND&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;application&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;start_handler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;application&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;echo_handler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;application&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run_polling&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We are defining two handlers: one command handler (triggered when you type /start in Telegram) and one message handler.&lt;/p&gt;

&lt;p&gt;The message handler (function &lt;code&gt;echo&lt;/code&gt;) simply echoes back whatever the user typed, for testing purposes. So let's try out our bot. But before we can do this, we need to register our bot with Telegram in order to get the API token. &lt;/p&gt;

&lt;p&gt;1.1) Register our bot with Telegram&lt;/p&gt;

&lt;p&gt;Head over to &lt;a href="https://web.telegram.org/k/" rel="noopener noreferrer"&gt;https://web.telegram.org/k/&lt;/a&gt; and log in with your smartphone. Use the search bar to search for “BotFather”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3940kzxxfnr1ku5kkwl1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3940kzxxfnr1ku5kkwl1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start the registration with the &lt;strong&gt;/start&lt;/strong&gt; command.&lt;/p&gt;

&lt;p&gt;It lists all possible commands, we go further with &lt;strong&gt;/newbot&lt;/strong&gt; to register a new bot.&lt;/p&gt;

&lt;p&gt;Now try to find a good name for your bot, I had some struggles:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvag05stmlkeferbu30x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvag05stmlkeferbu30x.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
After successful registration, you will get a message from BotFather confirming that your bot is registered, telling you where to find it, and containing the &lt;strong&gt;token&lt;/strong&gt; to access it. Copy this &lt;strong&gt;token&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hf3x2oqa1axvw43xo8m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hf3x2oqa1axvw43xo8m.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;.env&lt;/code&gt; file in the same directory as the &lt;code&gt;telegram-bot.py&lt;/code&gt; file, and add the following line to it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

TELEGRAM_API_TOKEN=&amp;lt;your_telegram_api_token&amp;gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Replace &lt;code&gt;&amp;lt;your_telegram_api_token&amp;gt;&lt;/code&gt; with your actual Telegram API token provided by BotFather.&lt;/p&gt;
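&lt;p&gt;For the bot script to actually see this variable, the &lt;code&gt;.env&lt;/code&gt; file has to be loaded into the environment (if your &lt;code&gt;telegram-bot.py&lt;/code&gt; does not do this already). The &lt;code&gt;python-dotenv&lt;/code&gt; package is the usual choice; as a stdlib-only sketch, a loader could look like this (the &lt;code&gt;load_env&lt;/code&gt; helper is illustrative, not part of the article's code):&lt;/p&gt;

```python
import os

def load_env(path=".env"):
    # Minimal .env loader: KEY=value lines, "#" comments, optional surrounding quotes.
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip().strip('"'))

load_env()
TELEGRAM_API_TOKEN = os.getenv("TELEGRAM_API_TOKEN")
```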




&lt;p&gt;We can now test the Telegram bot! In your IDE, open a terminal in the directory containing the script and start the bot with&lt;/p&gt;

&lt;p&gt;&lt;code&gt;python3 telegram-bot.py&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;Open the link to your bot, or use the Telegram search bar to find it. Click the Start button and let's write something.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh48ddqz6g4m2eeft7fe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh48ddqz6g4m2eeft7fe.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Sweet! Part 1 is done. What we now need to do is connect our bot to ChatGPT. The idea is simple: the bot takes the user's input, we query ChatGPT with that input, and we send ChatGPT's answer back to the user. Let's go!&lt;/p&gt;

&lt;h2&gt;
  
  
  2) Create the CHAT-GPT Client
&lt;/h2&gt;

&lt;p&gt;Let’s create a new Python file that will handle all the ChatGPT logic; I will call it &lt;code&gt;chatgpt_client.py&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We are going to use the official &lt;code&gt;openai&lt;/code&gt; Python package to query the ChatGPT API. Let's install it with&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip3 install openai&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Copy this code into the newly created file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;

&lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;request_chat_gpt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;completion&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ChatCompletion&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user_message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;completion&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;An error occurred: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;  &lt;span class="c1"&gt;# Return an empty string or handle the error appropriately
&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now &lt;a href="https://text-gen.com/get-openai-access-token" rel="noopener noreferrer"&gt;go to the OpenAI website&lt;/a&gt; and get your API key.&lt;/p&gt;

&lt;p&gt;Place the API key in the &lt;code&gt;.env&lt;/code&gt; file by adding the following line:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

OPENAI_API_KEY="&amp;lt;your_openai_api_key&amp;gt;"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Replace &lt;code&gt;&amp;lt;your_openai_api_key&amp;gt;&lt;/code&gt; with your actual OpenAI API key.&lt;/p&gt;

&lt;p&gt;To use the OpenAI integration in the echo handler, replace the line &lt;code&gt;await context.bot.send_message(chat_id=update.effective_chat.id, text=update.message.text)&lt;/code&gt; with the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

response = request_chat_gpt(update.message.text)
await context.bot.send_message(chat_id=update.effective_chat.id, text=response)



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Of course, don’t forget to import the function:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;from chatgpt_client import request_chat_gpt&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will send the user's input to the &lt;code&gt;request_chat_gpt&lt;/code&gt; function, which will use the OpenAI API to generate a response. The response will then be sent back to the user via the &lt;code&gt;echo&lt;/code&gt; handler.&lt;/p&gt;
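&lt;p&gt;Put together, the updated handler looks roughly like this. This is a sketch: the handler name &lt;code&gt;echo&lt;/code&gt; and the &lt;code&gt;update&lt;/code&gt;/&lt;code&gt;context&lt;/code&gt; signature are assumed from the python-telegram-bot setup in part 1, and &lt;code&gt;request_chat_gpt&lt;/code&gt; is stubbed here so the snippet stands on its own:&lt;/p&gt;

```python
# Sketch of the updated echo handler. In the real bot, request_chat_gpt
# is imported from chatgpt_client.py instead of being defined here:
#   from chatgpt_client import request_chat_gpt
def request_chat_gpt(user_message):
    # Stand-in for the real OpenAI call, so this sketch runs on its own.
    return f"(ChatGPT would answer: {user_message})"

async def echo(update, context):
    # Forward the user's text to ChatGPT and send the answer back to the chat.
    response = request_chat_gpt(update.message.text)
    await context.bot.send_message(chat_id=update.effective_chat.id, text=response)
```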

&lt;h2&gt;
  
  
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd94ouut3shjmtfxonf5d.png" alt="Image description"&gt;
&lt;/h2&gt;

&lt;p&gt;That's all! Hope you enjoyed this one. Next time we are going to build an Airbnb chatbot that can respond to guests with custom knowledge like Wi-Fi passwords or checkout times. Sounds cool, right? &lt;/p&gt;

&lt;p&gt;If you don't want to miss that article, head over to my site and subscribe to the newsletter; I would really appreciate it ❤️!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://jhayer.tech" rel="noopener noreferrer"&gt;https://jhayer.tech&lt;/a&gt;&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>telegram</category>
      <category>bot</category>
      <category>python</category>
    </item>
    <item>
      <title>This script translates your language json to any language easily</title>
      <dc:creator>hayerhans</dc:creator>
      <pubDate>Sat, 11 Mar 2023 15:54:56 +0000</pubDate>
      <link>https://forem.com/hayerhans/this-script-translates-your-language-json-to-any-language-easily-4j25</link>
      <guid>https://forem.com/hayerhans/this-script-translates-your-language-json-to-any-language-easily-4j25</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;You can find the Github Repo on my original blog post here:&lt;br&gt;
&lt;a href="https://jhayer.tech/blog/easily-translate-json-files"&gt;https://jhayer.tech/blog/easily-translate-json-files&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's dive into it; it's not much, so we can go through it step by step.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;mirrorObject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;targetLang&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mirroredObj&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{};&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;entries&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;object&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;mirroredObj&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;mirrorObject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;targetLang&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;mirroredObj&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;translateService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;translateText&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;targetLang&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;mirroredObj&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is our first function, &lt;code&gt;mirrorObject&lt;/code&gt;. It has one simple job: mirror the JSON object structure, but with each original value replaced by its translation.&lt;/p&gt;

&lt;p&gt;The function recursively iterates over the JSON object and translates each value. If a value is itself an object, we call the function again on it; otherwise we use the translateService to translate the value. &lt;/p&gt;
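&lt;p&gt;For example, an English source file would be mirrored into a structurally identical file with translated values (file names and Spanish strings here are just illustrative):&lt;/p&gt;

```
en/common.json (input):
{
  "nav": { "home": "Home", "about": "About us" },
  "greeting": "Welcome!"
}

es/common.json (output):
{
  "nav": { "home": "Inicio", "about": "Sobre nosotros" },
  "greeting": "¡Bienvenido!"
}
```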

&lt;p&gt;The translateService simply uses the DeepL Node package to translate the text; we will have a look at it later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;mirrorJsonFileWithTranslation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;inputFilename&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;outputFilename&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;targetLang&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Read the JSON file&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fileData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;readFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;inputFilename&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jsonData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fileData&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="c1"&gt;// Create a mirrored object structure&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mirroredData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;mirrorObject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jsonData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;targetLang&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="c1"&gt;// Write the mirrored data to a new file&lt;/span&gt;
  &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;writeFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;outputFilename&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mirroredData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Mirroring complete!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we have a look at the &lt;code&gt;mirrorJsonFileWithTranslation&lt;/code&gt; function. It reads the JSON file (parameter &lt;code&gt;inputFilename&lt;/code&gt;), calls our previously defined mirrorObject function with the desired target language, and writes the result to a new JSON file (parameter &lt;code&gt;outputFilename&lt;/code&gt;). What we need now is our entry function, the main, to orchestrate the whole process :)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fileNames&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;common.json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;targetLangs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;es&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;fr&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fileName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;fileNames&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;val&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nx"&gt;targetLangs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;targetLang&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;targetLangs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;val&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s2"&gt;`./locales/en/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.json`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;`./locales/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;targetLang&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.json`&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;mirrorJsonFileWithTranslation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s2"&gt;`./locales/de/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.json`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;`./locales/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;targetLang&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.json`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;targetLang&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;})();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we define &lt;code&gt;fileNames&lt;/code&gt; as an array of the JSON files we want to translate, and in the &lt;code&gt;targetLangs&lt;/code&gt; array the languages we want to translate into. We iterate over the targetLangs array and call the &lt;code&gt;mirrorJsonFileWithTranslation&lt;/code&gt; function for each language. The main function uses async/await to wait for one translation to complete before starting the next.&lt;/p&gt;

&lt;p&gt;What we need now is the translateService:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;deepl&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;deepl-node&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;TargetLanguageCode&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;deepl-node&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;Translation&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="nl"&gt;translateText&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="nx"&gt;targetLang&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TargetLanguageCode&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nx"&gt;TranslateService&lt;/span&gt; &lt;span class="kr"&gt;implements&lt;/span&gt; &lt;span class="nx"&gt;Translation&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="nx"&gt;authKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DEEPL_API_KEY&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;YOUR_DEEPL_API_KEY&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Replace with your key&lt;/span&gt;
&lt;span class="nl"&gt;translator&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;deepl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Translator&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;translator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;deepl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Translator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;authKey&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;translateText&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;targetLang&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;targetLang&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;en&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="nx"&gt;targetLang&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;en-US&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;translator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;translateText&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="nx"&gt;targetLang&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;TargetLanguageCode&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;translateService&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;TranslateService&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;translateService&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The translateService uses the &lt;code&gt;deepl-node&lt;/code&gt; package to translate the text. We use the environment variable &lt;code&gt;DEEPL_API_KEY&lt;/code&gt; to store our DeepL API key. If you don't have one, you can get one for free here: &lt;a href="https://www.deepl.com/pro#developer"&gt;https://www.deepl.com/pro#developer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To run the script just run: &lt;code&gt;npx ts-node translator.ts&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now you can easily create multiple translations for your web application. Maybe you can even automate the process with a GitHub Action, or build an API to translate your web application on the fly?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>node</category>
      <category>typescript</category>
      <category>i18n</category>
    </item>
  </channel>
</rss>
