<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: mehmet akar</title>
    <description>The latest articles on Forem by mehmet akar (@mehmetakar).</description>
    <link>https://forem.com/mehmetakar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2732806%2Ffbd48fb0-0a5d-4505-a802-4fdad6574509.jpg</url>
      <title>Forem: mehmet akar</title>
      <link>https://forem.com/mehmetakar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mehmetakar"/>
    <language>en</language>
    <item>
      <title>Context7 MCP Tutorial</title>
      <dc:creator>mehmet akar</dc:creator>
      <pubDate>Fri, 25 Apr 2025 12:46:07 +0000</pubDate>
      <link>https://forem.com/mehmetakar/context7-mcp-tutorial-3he2</link>
      <guid>https://forem.com/mehmetakar/context7-mcp-tutorial-3he2</guid>
<description>&lt;p&gt;Context7 and its companion tool, Context7 MCP, have become hugely popular in the AI "vibe coding" world lately, so I want to take a closer look at them in this article.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context7 MCP&lt;/strong&gt; is a tool that supercharges AI prompts with &lt;strong&gt;real-time, version-specific documentation and code examples&lt;/strong&gt;. Whether you're using Claude, Cursor, VS Code, or another Model Context Protocol (MCP) client, Context7 helps eliminate hallucinated APIs and outdated examples by injecting live data into your LLM interactions.&lt;/p&gt;




&lt;p&gt;The project's popularity is easy to see in its GitHub star history:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kvppi8cpfd7m5he3xsy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kvppi8cpfd7m5he3xsy.png" alt="Star History Chart" width="800" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🌟 Why Use Context7?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ❌ Without Context7:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Outdated examples based on old training data.&lt;/li&gt;
&lt;li&gt;AI hallucinations about APIs that don’t exist.&lt;/li&gt;
&lt;li&gt;Generalized help for outdated package versions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ✅ With Context7:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Accurate, live documentation and examples pulled from the actual library.&lt;/li&gt;
&lt;li&gt;Instant, relevant answers based on real packages and versions.&lt;/li&gt;
&lt;li&gt;All you have to do is &lt;strong&gt;add &lt;code&gt;use context7&lt;/code&gt;&lt;/strong&gt; to your prompt.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Examples:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Create a basic Next.js project with app router. use context7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Create a script to delete the rows where the city is "" given PostgreSQL credentials. use context7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🛠️ Getting Started
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Node.js ≥ &lt;strong&gt;v18.0.0&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;One of the following MCP clients:

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cursor&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Windsurf&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Claude Desktop&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VS Code&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Docker (optional)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  📦 Installation Methods
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Install via Smithery (for Claude Desktop)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; @smithery/cli &lt;span class="nb"&gt;install&lt;/span&gt; @upstash/context7-mcp &lt;span class="nt"&gt;--client&lt;/span&gt; claude
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🧭 Installation for Specific Clients
&lt;/h2&gt;

&lt;h3&gt;
  
  
  📍 Cursor
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to: &lt;code&gt;Settings&lt;/code&gt; → &lt;code&gt;Cursor Settings&lt;/code&gt; → &lt;code&gt;MCP&lt;/code&gt; → &lt;code&gt;Add new global MCP server&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Or directly update &lt;code&gt;~/.cursor/mcp.json&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"context7"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@upstash/context7-mcp@latest"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Alternative Runtimes:
&lt;/h4&gt;

&lt;p&gt;Using Bun&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"context7"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bunx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@upstash/context7-mcp@latest"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Using Deno&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"context7"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"deno"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"run"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--allow-net"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npm:@upstash/context7-mcp"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  🌊 Windsurf
&lt;/h3&gt;

&lt;p&gt;Add to your &lt;code&gt;windsurf.mcp.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"context7"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@upstash/context7-mcp@latest"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  🧩 VS Code / VS Code Insiders
&lt;/h3&gt;

&lt;p&gt;Follow the &lt;a href="https://code.visualstudio.com/docs/copilot/chat/mcp-servers" rel="noopener noreferrer"&gt;VS Code MCP docs&lt;/a&gt; or add the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"servers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Context7"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"stdio"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@upstash/context7-mcp@latest"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install buttons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://insiders.vscode.dev/redirect?url=vscode%3Amcp%2Finstall%3F%257B%2522name%2522%253A%2522context7%2522%252C%2522config%2522%253A%257B%2522command%2522%253A%2522npx%2522%252C%2522args%2522%253A%255B%2522-y%2522%252C%2522%2540upstash%252Fcontext7-mcp%2540latest%2522%255D%257D%257D" rel="noopener noreferrer"&gt;VS Code Install&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://insiders.vscode.dev/redirect?url=vscode-insiders%3Amcp%2Finstall%3F%257B%2522name%2522%253A%2522context7%2522%252C%2522config%2522%253A%257B%2522command%2522%253A%2522npx%2522%252C%2522args%2522%253A%255B%2522-y%2522%252C%2522%2540upstash%252Fcontext7-mcp%2540latest%2522%255D%257D%257D" rel="noopener noreferrer"&gt;VS Code Insiders Install&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  🧠 Claude Code
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude mcp add context7 &lt;span class="nt"&gt;--&lt;/span&gt; npx &lt;span class="nt"&gt;-y&lt;/span&gt; @upstash/context7-mcp@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  💻 Claude Desktop
&lt;/h3&gt;

&lt;p&gt;Update &lt;code&gt;claude_desktop_config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Context7"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@upstash/context7-mcp@latest"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  🐳 Using Docker
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Dockerfile
&lt;/h4&gt;

&lt;p&gt;Create a file named &lt;code&gt;Dockerfile&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:18-alpine&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @upstash/context7-mcp@latest

&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["context7-mcp"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Build the Docker Image
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; context7-mcp &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  3. Configure MCP Client
&lt;/h4&gt;

&lt;p&gt;Example &lt;code&gt;client_mcp_settings.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Context7"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"autoApprove"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"disabled"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"timeout"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docker"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"run"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"-i"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--rm"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"context7-mcp"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"transportType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"stdio"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: The tag &lt;code&gt;context7-mcp&lt;/code&gt; should match your &lt;code&gt;docker build&lt;/code&gt; tag.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🧰 Available Tools
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;resolve-library-id&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Resolves a general library name into a Context7-compatible ID.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;libraryName&lt;/code&gt; (required)&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;code&gt;get-library-docs&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Fetches documentation for a library using its Context7-compatible ID.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;context7CompatibleLibraryID&lt;/code&gt; (required)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;topic&lt;/code&gt; (optional): e.g., &lt;code&gt;"routing"&lt;/code&gt;, &lt;code&gt;"hooks"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;tokens&lt;/code&gt; (optional, default 5000)&lt;/li&gt;
&lt;/ul&gt;
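
&lt;p&gt;Under the hood, an MCP client chains these two tools: &lt;code&gt;resolve-library-id&lt;/code&gt; turns a plain library name into an ID, which is then passed to &lt;code&gt;get-library-docs&lt;/code&gt;. As a rough sketch (the library ID and topic shown are illustrative), the second call arrives as an MCP &lt;code&gt;tools/call&lt;/code&gt; request like:&lt;/p&gt;

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get-library-docs",
    "arguments": {
      "context7CompatibleLibraryID": "/vercel/next.js",
      "topic": "routing",
      "tokens": 5000
    }
  }
}
```

&lt;p&gt;You normally never write this JSON yourself; the client issues it automatically when the model decides to use the tool.&lt;/p&gt;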




&lt;h2&gt;
  
  
  ⚙️ Development
&lt;/h2&gt;

&lt;p&gt;Clone the repo and install dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bun i
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build the project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bun run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Local config example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"context7"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"tsx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/path/to/folder/context7-mcp/src/index.ts"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🔬 Test with MCP Inspector
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx &lt;span class="nt"&gt;-y&lt;/span&gt; @modelcontextprotocol/inspector npx @upstash/context7-mcp@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🧯 Context7 Troubleshooting
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;ERR_MODULE_NOT_FOUND&lt;/code&gt;?
&lt;/h3&gt;

&lt;p&gt;Switch from &lt;code&gt;npx&lt;/code&gt; to &lt;code&gt;bunx&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"context7"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bunx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@upstash/context7-mcp@latest"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Context7 MCP Errors?
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Remove &lt;code&gt;@latest&lt;/code&gt; from the package name.&lt;/li&gt;
&lt;li&gt;Try &lt;code&gt;bunx&lt;/code&gt; instead of &lt;code&gt;npx&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Try running the server with &lt;code&gt;deno&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
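
&lt;p&gt;For step 1, the server entry is identical to the earlier examples, just without the &lt;code&gt;@latest&lt;/code&gt; tag:&lt;/p&gt;

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```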




</description>
      <category>context7</category>
      <category>mcp</category>
      <category>cursor</category>
      <category>ai</category>
    </item>
    <item>
      <title>Blender MCP: Seamless Integration of Blender with Claude AI</title>
      <dc:creator>mehmet akar</dc:creator>
      <pubDate>Mon, 17 Mar 2025 05:24:28 +0000</pubDate>
      <link>https://forem.com/mehmetakar/blender-mcp-seamless-integration-of-blender-with-claude-ai-302g</link>
      <guid>https://forem.com/mehmetakar/blender-mcp-seamless-integration-of-blender-with-claude-ai-302g</guid>
<description>&lt;p&gt;Blender MCP (Blender Model Context Protocol) is a tool that connects &lt;strong&gt;Blender&lt;/strong&gt; to &lt;strong&gt;Claude AI&lt;/strong&gt;, enabling AI-driven 3D modeling, scene creation, and object manipulation. The integration makes 3D design faster, more intuitive, and more interactive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Blender MCP?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI-powered modeling&lt;/strong&gt;: Direct interaction with Claude AI for instant 3D modifications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seamless asset management&lt;/strong&gt;: Access &lt;strong&gt;Poly Haven&lt;/strong&gt; and &lt;strong&gt;Hyper3D Rodin&lt;/strong&gt; for quick model generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advanced material control&lt;/strong&gt;: Modify object textures and materials effortlessly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Script execution&lt;/strong&gt;: Run Python scripts directly within Blender.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Two-way communication&lt;/strong&gt; between Blender and Claude AI&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Object creation, deletion, and transformation&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Material and texture application&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scene inspection and real-time information retrieval&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AI-generated models via Hyper3D Rodin&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installation Guide
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To install Blender MCP, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Blender 3.0+&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Python 3.10+&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;uv package manager&lt;/strong&gt; (required to run the MCP server)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Installing uv
&lt;/h4&gt;

&lt;p&gt;For &lt;strong&gt;Mac&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;uv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For &lt;strong&gt;Windows&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;powershell&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-c&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"irm https://astral.sh/uv/install.ps1 | iex"&lt;/span&gt;&lt;span class="w"&gt; 
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, add uv's install directory to the system &lt;code&gt;Path&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;set&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;C:\Users\nntra\.local\bin&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="n"&gt;Path&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://docs.astral.sh/uv/getting-started/installation/" rel="noopener noreferrer"&gt;Full installation guide&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;⚠️ &lt;strong&gt;Do not proceed before installing uv.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting Up MCP in Claude AI
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Open &lt;strong&gt;Claude AI Desktop&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Settings &amp;gt; Developer &amp;gt; Edit Config&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Add the following to &lt;code&gt;claude_desktop_config.json&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"blender"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"uvx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"blender-mcp"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Installing Blender Addon
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Download the &lt;code&gt;addon.py&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Open &lt;strong&gt;Blender&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Edit &amp;gt; Preferences &amp;gt; Add-ons&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Install...&lt;/strong&gt; and select &lt;code&gt;addon.py&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Enable the addon (Checkbox: "Interface: Blender MCP")&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How to Use Blender MCP
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Connecting to Claude AI
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0hzlju74yjkpr60mz59.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0hzlju74yjkpr60mz59.png" alt="Blender MCP Sidebar" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open Blender and press &lt;strong&gt;N&lt;/strong&gt; to access the &lt;strong&gt;3D View Sidebar&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to the &lt;strong&gt;BlenderMCP&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;Enable &lt;strong&gt;Poly Haven API&lt;/strong&gt; (Optional)&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;"Connect to Claude"&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Ensure the &lt;strong&gt;MCP Server&lt;/strong&gt; is running&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Example Commands
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fca4il2hrld0jf2iyak37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fca4il2hrld0jf2iyak37.png" alt="Blender MCP Hammer Icon" width="711" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can now interact with Blender through Claude AI by using commands like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a 3D scene&lt;/strong&gt;: "Generate a dungeon scene with a dragon guarding gold"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Demo video&lt;/strong&gt;&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/DqgKuLYUv00"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Modify textures&lt;/strong&gt;: "Make this car red and metallic"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Poly Haven assets&lt;/strong&gt;: "Create a beach scene using HDRIs and rock models from Poly Haven"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Demo video&lt;/strong&gt;&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/I29rn92gkC4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI-generated models&lt;/strong&gt;: "Generate a 3D garden gnome using Hyper3D Rodin"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adjust camera&lt;/strong&gt;: "Set up isometric camera view"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lighting setup&lt;/strong&gt;: "Adjust lighting to a studio environment"&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Using MCP with Cursor Integration
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Open &lt;strong&gt;Cursor Settings &amp;gt; MCP&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Paste the command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uvx blender-mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Cursor setup guide video&lt;/strong&gt;&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/wgWsJshecac"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;⚠️ &lt;strong&gt;Run MCP server on either Cursor or Claude, NOT both&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting &amp;amp; FAQs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Common Issues &amp;amp; Fixes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Connection Issues&lt;/strong&gt;: Ensure the &lt;strong&gt;Blender Addon Server&lt;/strong&gt; is running and correctly configured on Claude.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timeout Errors&lt;/strong&gt;: Break complex requests into smaller commands.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Poly Haven API Issues&lt;/strong&gt;: Refresh API settings if assets fail to load.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;General Fix&lt;/strong&gt;: Restart Blender and Claude AI.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Security Considerations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Python Execution Risks&lt;/strong&gt;: The &lt;code&gt;execute_blender_code&lt;/code&gt; function allows arbitrary code execution. Always save work before using it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Large Asset Downloads&lt;/strong&gt;: Disable &lt;strong&gt;Poly Haven&lt;/strong&gt; integration if not needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex Commands&lt;/strong&gt;: Break large operations into smaller steps for efficiency.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Contributing &amp;amp; Development
&lt;/h2&gt;

&lt;p&gt;Blender MCP is an open-source project. Contributions are welcome! Submit a Pull Request or report issues on the GitHub repository: @blender-mcp&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Blender MCP is a game-changer for 3D artists, allowing &lt;strong&gt;AI-assisted modeling, asset management, and interactive scene creation&lt;/strong&gt;. By integrating Claude AI with Blender, this tool accelerates workflows and brings creative visions to life effortlessly.&lt;/p&gt;

</description>
      <category>blendermcp</category>
      <category>mcp</category>
      <category>blender</category>
      <category>cursor</category>
    </item>
    <item>
      <title>What is Alpaca LLM?</title>
      <dc:creator>mehmet akar</dc:creator>
      <pubDate>Sun, 16 Mar 2025 05:02:20 +0000</pubDate>
      <link>https://forem.com/mehmetakar/what-is-alpaca-llm-e4h</link>
      <guid>https://forem.com/mehmetakar/what-is-alpaca-llm-e4h</guid>
      <description>&lt;p&gt;I want to talk about &lt;strong&gt;What is Alpaca LLM?&lt;/strong&gt; as it is wondered heavily, nowadays.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Alpaca LLM? Introduction
&lt;/h2&gt;

&lt;p&gt;Alpaca LLM is an advanced, open-source language model developed by researchers at Stanford University as a fine-tuned version of Meta’s LLaMA (Large Language Model Meta AI). Designed to be a cost-effective alternative to proprietary AI models like OpenAI’s GPT-4, Alpaca LLM enables developers and researchers to harness the power of large language models (LLMs) for various applications, including chatbots, content generation, and research. &lt;/p&gt;

&lt;p&gt;In this article, we will explore what Alpaca LLM is, how it works, its features, advantages, limitations, and how you can deploy it for your AI applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Alpaca LLM?
&lt;/h2&gt;

&lt;p&gt;Alpaca LLM is a fine-tuned version of Meta’s LLaMA 7B model. It was trained using self-instruction techniques inspired by OpenAI’s InstructGPT. The goal was to create a lightweight yet powerful AI model capable of generating human-like text while maintaining accessibility and affordability for developers and researchers.&lt;/p&gt;

&lt;p&gt;Unlike proprietary models like GPT-4, Alpaca LLM is available as an open-source project, allowing developers to modify, optimize, and deploy the model in various applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does Alpaca LLM Work?
&lt;/h2&gt;

&lt;p&gt;Alpaca LLM follows a supervised fine-tuning approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Base Model Selection:&lt;/strong&gt; The model starts with Meta’s LLaMA 7B as its foundation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instruction Tuning:&lt;/strong&gt; Researchers created 52,000 unique instruction-following samples using OpenAI’s text-davinci-003 model.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fine-Tuning:&lt;/strong&gt; The model is then fine-tuned with these instructions to improve performance in text generation, summarization, and conversational AI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inference &amp;amp; Deployment:&lt;/strong&gt; Once trained, Alpaca LLM can generate responses to prompts in a manner similar to GPT-based models.&lt;/li&gt;
&lt;/ol&gt;
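&lt;p&gt;The instruction-tuning step above relies on records in Alpaca's &lt;code&gt;instruction&lt;/code&gt;/&lt;code&gt;input&lt;/code&gt;/&lt;code&gt;output&lt;/code&gt; format. As a rough sketch, here is how such a record can be turned into a training prompt; the template wording follows the one published in the stanford_alpaca repository, but treat this as illustrative rather than the exact training code, and the sample record is invented.&lt;/p&gt;

```python
# Sketch of Alpaca-style instruction records and the prompt template
# used to format them for supervised fine-tuning. Illustrative only;
# the sample record below is made up.

def build_prompt(record: dict) -> str:
    """Format one instruction-following record into a training prompt."""
    if record.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{record['instruction']}\n\n"
            f"### Input:\n{record['input']}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{record['instruction']}\n\n"
        "### Response:\n"
    )

record = {
    "instruction": "Summarize the following text in one sentence.",
    "input": "Alpaca LLM is a fine-tuned version of LLaMA 7B.",
    "output": "Alpaca is a lightweight instruction-tuned LLaMA model.",
}
prompt = build_prompt(record)
```

&lt;p&gt;During fine-tuning, the model is trained to continue each formatted prompt with the record's &lt;code&gt;output&lt;/code&gt; text.&lt;/p&gt;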

&lt;h2&gt;
  
  
  Key Features of Alpaca LLM
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Open-Source:&lt;/strong&gt; Unlike proprietary AI models, Alpaca LLM is freely available for developers and researchers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost-Effective Training:&lt;/strong&gt; Stanford researchers fine-tuned Alpaca for less than $600, proving that high-quality models can be trained at low cost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lightweight &amp;amp; Efficient:&lt;/strong&gt; The model is designed to be compact and runs efficiently on consumer-grade hardware.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instruction Following:&lt;/strong&gt; It is optimized for following human instructions, making it suitable for chatbot applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customizability:&lt;/strong&gt; Developers can fine-tune the model further to meet specific needs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Advantages of Using Alpaca LLM
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Free &amp;amp; Open-Source:&lt;/strong&gt; No licensing fees, making it accessible to everyone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible Deployment:&lt;/strong&gt; Can be deployed on local machines, cloud servers, or edge devices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Community Support:&lt;/strong&gt; Being open-source, Alpaca LLM benefits from contributions and improvements from the AI research community.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comparable Performance:&lt;/strong&gt; Despite being lightweight, it achieves results similar to state-of-the-art language models.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Limitations of Alpaca LLM
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Limited Scale:&lt;/strong&gt; It is not as powerful as models like GPT-4 due to its smaller dataset and reduced computational resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bias &amp;amp; Safety Issues:&lt;/strong&gt; Like all AI models, it may inherit biases from training data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Requires Technical Knowledge:&lt;/strong&gt; Setting up and fine-tuning Alpaca LLM requires programming and AI expertise.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to Install and Use Alpaca LLM
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before installing Alpaca LLM, ensure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A system with Linux or macOS (Windows users can use WSL)&lt;/li&gt;
&lt;li&gt;Python 3.8 or later&lt;/li&gt;
&lt;li&gt;CUDA-compatible GPU (for efficient inference)&lt;/li&gt;
&lt;li&gt;Git and Python virtual environment tools installed&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 1: Clone the Alpaca Repository
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone the Alpaca LLM repository&lt;/span&gt;
git clone https://github.com/tatsu-lab/stanford_alpaca.git
&lt;span class="nb"&gt;cd &lt;/span&gt;stanford_alpaca
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Set Up the Virtual Environment
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create and activate a virtual environment&lt;/span&gt;
python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv alpaca_env
&lt;span class="nb"&gt;source &lt;/span&gt;alpaca_env/bin/activate  &lt;span class="c"&gt;# On Windows: alpaca_env\Scripts\activate&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Install Dependencies
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install required Python libraries&lt;/span&gt;
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Download LLaMA Model Weights
&lt;/h3&gt;

&lt;p&gt;Since Alpaca is built on Meta’s LLaMA, you need to obtain access to LLaMA weights. You can request them from Meta’s official site:&lt;br&gt;
&lt;a href="https://ai.facebook.com/blog/large-language-model-llama-meta-ai/" rel="noopener noreferrer"&gt;https://ai.facebook.com/blog/large-language-model-llama-meta-ai/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have the LLaMA weights, place them in the appropriate directory inside the Alpaca project.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 5: Fine-Tune Alpaca LLM
&lt;/h3&gt;

&lt;p&gt;To fine-tune Alpaca with your custom dataset, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python train.py &lt;span class="nt"&gt;--base_model&lt;/span&gt; &lt;span class="s1"&gt;'/path/to/llama/model'&lt;/span&gt; &lt;span class="nt"&gt;--data_path&lt;/span&gt; &lt;span class="s1"&gt;'/path/to/dataset.json'&lt;/span&gt; &lt;span class="nt"&gt;--output_dir&lt;/span&gt; &lt;span class="s1"&gt;'./trained_model'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 6: Running Inference on Alpaca LLM
&lt;/h3&gt;

&lt;p&gt;After training, you can generate text responses using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python generate.py &lt;span class="nt"&gt;--model_path&lt;/span&gt; &lt;span class="s1"&gt;'./trained_model'&lt;/span&gt; &lt;span class="nt"&gt;--prompt&lt;/span&gt; &lt;span class="s1"&gt;'What is Alpaca LLM?'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Future of Alpaca LLM
&lt;/h2&gt;

&lt;p&gt;The Alpaca LLM project demonstrates how cost-effective, high-quality AI models can be developed with minimal resources. With continuous improvements in open-source AI, we can expect better performance, increased accessibility, and more widespread adoption of lightweight LLMs in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Alpaca LLM? Conclusion
&lt;/h2&gt;

&lt;p&gt;Alpaca LLM is an exciting advancement in the field of open-source AI, offering a powerful, cost-effective alternative to proprietary large language models. With its open-source nature, efficiency, and customization options, it is an excellent choice for developers looking to integrate AI capabilities into their projects.&lt;/p&gt;

&lt;p&gt;If you’re interested in AI development and want to experiment with a freely available LLM, Alpaca LLM is a great place to start. As AI technology continues to evolve, models like Alpaca will play a crucial role in democratizing access to artificial intelligence.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>alpacallm</category>
      <category>llama</category>
    </item>
    <item>
      <title>LCM vs. LLM</title>
      <dc:creator>mehmet akar</dc:creator>
      <pubDate>Sat, 15 Mar 2025 22:52:09 +0000</pubDate>
      <link>https://forem.com/mehmetakar/lcm-vs-llm-20kk</link>
      <guid>https://forem.com/mehmetakar/lcm-vs-llm-20kk</guid>
      <description>&lt;p&gt;LCM vs. LLM is a popular question, nowadays. Let me analyze this comparison.&lt;/p&gt;

&lt;p&gt;With the rise of artificial intelligence and machine learning, two key terms that are often discussed are &lt;strong&gt;LCM (Large Concept Models)&lt;/strong&gt; and &lt;strong&gt;LLM (Large Language Models)&lt;/strong&gt;. While they share similarities in being AI-driven models, they differ significantly in their approaches and applications. This article will explore their distinctions and use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is LCM (Large Concept Model)?
&lt;/h2&gt;

&lt;p&gt;LCM, or &lt;strong&gt;Large Concept Model&lt;/strong&gt;, is a new AI paradigm developed by Meta that shifts the focus from token-based processing to concept-level understanding. Unlike LLMs, which predict the next word based on tokenized text, LCMs operate at a higher level of abstraction by modeling entire concepts instead of individual words or tokens.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Do LCMs Work?
&lt;/h3&gt;

&lt;p&gt;LCMs use &lt;strong&gt;concept embeddings&lt;/strong&gt;, which represent ideas rather than words, allowing them to generalize more effectively across languages and modalities. Their key components include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Concept Encoding:&lt;/strong&gt; Instead of breaking text into small units, LCMs encode entire sentences or ideas into a higher-dimensional embedding space.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sequence Modeling:&lt;/strong&gt; These models predict sequences of concept embeddings rather than individual words, enhancing long-term coherence.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Decoding:&lt;/strong&gt; The predicted embeddings are then transformed back into readable text or other formats.&lt;/li&gt;
&lt;/ol&gt;
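&lt;p&gt;The three stages above can be caricatured in a few lines of plain Python. This toy is &lt;em&gt;not&lt;/em&gt; Meta's actual LCM: it uses a hashed bag of character trigrams as a stand-in "concept" embedding and decodes by nearest neighbour against known sentences, just to make the encode/predict/decode pipeline concrete.&lt;/p&gt;

```python
# Toy illustration (not Meta's actual LCM) of the three stages:
# encode whole sentences into vectors, work in embedding space,
# then decode by nearest neighbour against candidate sentences.
import math

def embed(sentence: str, dim: int = 32) -> list[float]:
    """Toy 'concept' embedding: normalized hashed bag of char trigrams."""
    vec = [0.0] * dim
    text = sentence.lower()
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def decode(vec: list[float], candidates: list[str]) -> str:
    """Map an embedding back to the closest known sentence."""
    return max(candidates, key=lambda s: cosine(vec, embed(s)))

sentences = ["The dragon guards the gold.", "A knight enters the dungeon."]
# A real LCM would *predict* the next concept embedding; here we just
# round-trip one sentence through the embedding space.
predicted = embed(sentences[0])
assert decode(predicted, sentences) == sentences[0]
```

&lt;p&gt;The point of the sketch is the unit of work: the model never sees individual tokens, only whole-sentence vectors.&lt;/p&gt;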

&lt;h3&gt;
  
  
  Advantages of LCMs:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Improved Coherence:&lt;/strong&gt; By working with concepts rather than tokens, LCMs maintain better contextual consistency in long-form content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multilingual &amp;amp; Multimodal Capabilities:&lt;/strong&gt; Since they operate at the concept level, LCMs can generalize across different languages and modalities (text, speech, images, etc.).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Higher-Level Reasoning:&lt;/strong&gt; LCMs improve AI’s ability to understand abstract ideas and complex topics.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Applications of LCMs:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Advanced summarization&lt;/li&gt;
&lt;li&gt;Content creation and reasoning tasks&lt;/li&gt;
&lt;li&gt;Multilingual and multimodal AI applications&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is LLM (Large Language Model)?
&lt;/h2&gt;

&lt;p&gt;LLM, or &lt;strong&gt;Large Language Model&lt;/strong&gt;, is a deep learning-based AI model designed to process and generate human-like text. LLMs rely on tokenization, predicting the next token in a sequence based on vast amounts of text data.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Do LLMs Work?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tokenization:&lt;/strong&gt; Text is broken down into tokens (words or subwords).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Training:&lt;/strong&gt; Models learn linguistic patterns by analyzing massive datasets and adjusting their internal parameters to minimize prediction errors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generation:&lt;/strong&gt; LLMs construct sentences word by word, relying on statistical patterns.&lt;/li&gt;
&lt;/ul&gt;
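&lt;p&gt;For contrast, token-by-token generation can be caricatured with a bigram model: "training" is counting which word follows which, and "generation" is greedily emitting the most frequent successor. Real LLMs use transformers over subword tokens, but the sequential word-by-word mechanism is the same in spirit.&lt;/p&gt;

```python
# A minimal caricature of token-by-token generation: a bigram model
# that emits the most frequent next word seen in its "training" text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word follows which.
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int) -> list[str]:
    """Greedy token-by-token generation from bigram statistics."""
    out = [start]
    for _ in range(length):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(successors.most_common(1)[0][0])
    return out
```

&lt;p&gt;Notice the model's only notion of "context" is the previous word, which is exactly the long-range-coherence weakness the table below attributes to token-level processing (real LLMs widen the window enormously, but the limitation is of the same kind).&lt;/p&gt;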

&lt;h3&gt;
  
  
  Applications of LLMs:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Text completion and generation&lt;/li&gt;
&lt;li&gt;Machine translation&lt;/li&gt;
&lt;li&gt;Chatbots and conversational AI&lt;/li&gt;
&lt;li&gt;Sentiment analysis and code generation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Limitations of LLMs:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Struggles with Long-Range Context:&lt;/strong&gt; Token-by-token processing makes it difficult to maintain coherence over long texts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language-Specific Limitations:&lt;/strong&gt; Requires extensive training in each language, making multilingual support more complex.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lack of True Understanding:&lt;/strong&gt; Predicts words statistically rather than understanding concepts like LCMs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Differences Between LCMs and LLMs
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Large Language Models (LLMs)&lt;/th&gt;
&lt;th&gt;Large Concept Models (LCMs)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Processing Unit&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Tokens (words or subwords)&lt;/td&gt;
&lt;td&gt;Concepts (sentences or higher-level ideas)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Abstraction Level&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Operate at a granular level, focusing on individual tokens&lt;/td&gt;
&lt;td&gt;Function at a higher abstraction level, dealing with entire concepts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Language Dependency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Often tailored to specific languages; multilingual capabilities require extensive training&lt;/td&gt;
&lt;td&gt;Designed to be language-agnostic, leveraging embedding spaces that support multiple languages and modalities&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Context Handling&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;May struggle with long-term coherence due to token-by-token processing&lt;/td&gt;
&lt;td&gt;Better equipped for maintaining context over extended content by focusing on broader concepts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Generation Approach&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Sequential token prediction, constructing sentences word by word&lt;/td&gt;
&lt;td&gt;Predicts and generates entire concepts, allowing for more holistic and coherent content creation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Training Paradigm&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Requires vast amounts of token-level data; training involves learning probabilities of token sequences&lt;/td&gt;
&lt;td&gt;Trained on sequences of concept embeddings, enabling the model to grasp and generate higher-level semantic structures&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Applications&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Suitable for tasks requiring detailed token-level manipulation, such as precise text editing or code generation&lt;/td&gt;
&lt;td&gt;Ideal for applications involving abstract reasoning, summarization, and content creation across different languages and formats&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Limitations&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;- May lack deep understanding of context&lt;br&gt;- Can produce less coherent long-form content&lt;br&gt;- Language and modality limitations due to token-based processing&lt;/td&gt;
&lt;td&gt;- Emerging technology with ongoing research&lt;br&gt;- Requires robust concept encoding and decoding mechanisms&lt;br&gt;- Potential challenges in defining and standardizing what constitutes a "concept" across various applications and domains&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  LCM vs. LLM: Conclusion
&lt;/h2&gt;

&lt;p&gt;While both &lt;strong&gt;LCMs&lt;/strong&gt; and &lt;strong&gt;LLMs&lt;/strong&gt; are crucial AI advancements, they serve different purposes. &lt;strong&gt;LLMs&lt;/strong&gt; are ideal for text-based generation tasks, whereas &lt;strong&gt;LCMs&lt;/strong&gt; take a broader, concept-driven approach that enhances coherence, multilingual adaptability, and abstract reasoning. As AI technology evolves, LCMs may overcome many of the limitations of traditional LLMs, offering a more holistic approach to natural language understanding.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>lcm</category>
      <category>llm</category>
      <category>lcmvsllm</category>
    </item>
    <item>
      <title>What is LLM in AI</title>
      <dc:creator>mehmet akar</dc:creator>
      <pubDate>Sat, 15 Mar 2025 18:47:54 +0000</pubDate>
      <link>https://forem.com/mehmetakar/what-is-llm-in-ai-487d</link>
      <guid>https://forem.com/mehmetakar/what-is-llm-in-ai-487d</guid>
      <description>&lt;p&gt;What is LLM in AI? This is the most asked questions for people who wonder ai development.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is LLM in AI? In a Few Words
&lt;/h2&gt;

&lt;p&gt;Large Language Models (LLMs) have gained significant attention for their ability to understand and generate human-like text. But what exactly is an LLM in AI, and how does it work? This guide will explore everything you need to know about LLMs, their applications, benefits, and limitations.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an LLM in AI?
&lt;/h2&gt;

&lt;p&gt;A Large Language Model (LLM) is a type of artificial intelligence algorithm designed to process and generate text by analyzing vast amounts of language data. These models are built using deep learning techniques, particularly neural networks with billions of parameters. Some well-known LLMs include OpenAI’s GPT-4, Google’s Bard, and Meta’s LLaMA.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do LLMs Work?
&lt;/h2&gt;

&lt;p&gt;LLMs are trained on extensive datasets consisting of books, articles, and online content. They leverage Natural Language Processing (NLP) techniques to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Understand Context&lt;/strong&gt;: Recognizing the meaning of words and sentences based on their placement and usage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Predict Words&lt;/strong&gt;: Generating text based on probabilities derived from training data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mimic Human Responses&lt;/strong&gt;: Producing coherent and contextually relevant replies to prompts.&lt;/li&gt;
&lt;/ul&gt;
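&lt;p&gt;The "predict words" step boils down to this: the model assigns a score (a logit) to every candidate next word, and a softmax turns those scores into probabilities. The scores below are invented purely for illustration; a real LLM produces them from billions of learned parameters.&lt;/p&gt;

```python
# Sketch of the "predict words" step: made-up logits for candidate
# next words, converted to probabilities with a softmax.
import math

def softmax(scores: dict[str, float]) -> dict[str, float]:
    """Convert raw scores into a probability distribution."""
    m = max(scores.values())  # subtract max for numerical stability
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Hypothetical logits for the prompt "The capital of France is ..."
logits = {"Paris": 6.0, "London": 2.0, "banana": -1.0}
probs = softmax(logits)
best = max(probs, key=probs.get)
```

&lt;p&gt;Sampling from this distribution (rather than always taking the top word) is what makes generated text varied rather than deterministic.&lt;/p&gt;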

&lt;h2&gt;
  
  
  Key Features of Large Language Models
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: LLMs can process vast amounts of text and generate responses in real-time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multitasking Capability&lt;/strong&gt;: They can handle diverse tasks such as translation, summarization, and question-answering.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-Improvement&lt;/strong&gt;: Many LLMs leverage reinforcement learning techniques to refine their outputs over time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Applications of LLMs in Various Industries
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Content Creation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;LLMs can generate high-quality articles, blog posts, and marketing content with minimal human intervention.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Customer Support&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;AI chatbots powered by LLMs improve customer service by providing instant responses to queries.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Healthcare&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;They assist in analyzing medical records, summarizing patient data, and even generating medical reports.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Software Development&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;LLMs like GitHub Copilot help developers by auto-completing code and offering debugging suggestions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of LLMs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Time-Saving&lt;/strong&gt;: Automates repetitive text-based tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost-Effective&lt;/strong&gt;: Reduces the need for manual content generation and data analysis.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Personalization&lt;/strong&gt;: Provides tailored responses and recommendations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Challenges and Limitations
&lt;/h2&gt;

&lt;p&gt;Despite their advantages, LLMs come with some challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bias in Data&lt;/strong&gt;: They may inherit biases present in their training data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High Computational Costs&lt;/strong&gt;: Training and deploying LLMs require substantial computational resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Potential Misinformation&lt;/strong&gt;: Not all AI-generated content is accurate or reliable.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Future of LLMs
&lt;/h2&gt;

&lt;p&gt;The future of LLMs looks promising with advancements in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;More Efficient Models&lt;/strong&gt;: Reducing energy consumption while maintaining performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better Context Understanding&lt;/strong&gt;: Enhancing their ability to interpret and generate more accurate responses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ethical AI Development&lt;/strong&gt;: Addressing bias and improving transparency in AI decision-making.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is LLM in AI? Conclusion
&lt;/h2&gt;

&lt;p&gt;Large Language Models (LLMs) are revolutionizing the way we interact with AI, offering numerous applications across various industries. However, it is crucial to understand their capabilities and limitations to use them effectively. As AI technology continues to evolve, LLMs will become even more sophisticated, shaping the future of communication and automation.&lt;/p&gt;

&lt;p&gt;By understanding the fundamentals of LLMs, businesses and individuals can harness their power to improve productivity, enhance user experiences, and drive innovation.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>llminai</category>
      <category>whatisllm</category>
    </item>
    <item>
      <title>LLM vs NLP</title>
      <dc:creator>mehmet akar</dc:creator>
      <pubDate>Fri, 14 Mar 2025 08:04:20 +0000</pubDate>
      <link>https://forem.com/mehmetakar/llm-vs-nlp-54o9</link>
      <guid>https://forem.com/mehmetakar/llm-vs-nlp-54o9</guid>
      <description>&lt;p&gt;LLM vs NLP is one of the most asked comparisons in the ai world. &lt;br&gt;
Let me analyze their differences, advantages, disadvantages.&lt;/p&gt;

&lt;h2&gt;
  
  
  LLM vs NLP Introduction
&lt;/h2&gt;

&lt;p&gt;In recent years, advancements in artificial intelligence (AI) have led to significant progress in the field of natural language processing (NLP) and the rise of large language models (LLMs). While these terms are often used interchangeably, they have distinct differences in functionality, scope, and applications. This article provides a comprehensive comparison of LLMs vs NLP, exploring their differences, similarities, and use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is NLP (Natural Language Processing)?
&lt;/h2&gt;

&lt;p&gt;NLP is a branch of AI focused on enabling computers to understand, interpret, and generate human language. It involves various techniques, including machine learning, deep learning, linguistic rules, and statistical methods to process and analyze text or speech data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Core Components of NLP:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Tokenization&lt;/strong&gt; – Breaking text into words or subwords.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Part-of-Speech (POS) Tagging&lt;/strong&gt; – Identifying grammatical categories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Named Entity Recognition (NER)&lt;/strong&gt; – Detecting entities like names, places, and dates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sentiment Analysis&lt;/strong&gt; – Determining the emotional tone of a text.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Text Classification&lt;/strong&gt; – Categorizing text into predefined groups.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Machine Translation&lt;/strong&gt; – Converting text from one language to another.&lt;/li&gt;
&lt;/ol&gt;
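&lt;p&gt;Two of these components, tokenization and sentiment analysis, can be sketched with nothing but a regex and word lists. Real toolkits such as NLTK and spaCy use trained models for this; the crude lexicon approach below only shows what the components &lt;em&gt;do&lt;/em&gt;.&lt;/p&gt;

```python
# Minimal rule-based sketches of two NLP components: tokenization
# and lexicon-based sentiment analysis. Illustrative only; real
# toolkits (NLTK, spaCy) use trained models instead of word lists.
import re

def tokenize(text: str) -> list[str]:
    """Component 1: split text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "terrible", "hate", "poor"}

def sentiment(text: str) -> str:
    """Component 4: crude lexicon-based sentiment score."""
    tokens = tokenize(text)
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

&lt;p&gt;Classic NLP is full of such task-specific pipelines, which is exactly the contrast with the single general-purpose models discussed next.&lt;/p&gt;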

&lt;h2&gt;
  
  
  What is an LLM (Large Language Model)?
&lt;/h2&gt;

&lt;p&gt;An LLM is a type of AI model designed to process and generate human-like text. These models are built using deep learning techniques, primarily transformers, and are trained on vast datasets.&lt;/p&gt;

&lt;h3&gt;
  
  
  Features of LLMs:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Pretrained on Large Datasets&lt;/strong&gt; – LLMs learn from massive text corpora before fine-tuning for specific tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contextual Understanding&lt;/strong&gt; – Unlike traditional NLP models, LLMs consider extensive context for better coherence.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generative Capabilities&lt;/strong&gt; – They can create human-like text, answer questions, and even write code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero-shot and Few-shot Learning&lt;/strong&gt; – LLMs can perform tasks with minimal training data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal Integration&lt;/strong&gt; – Some advanced LLMs support text, image, and audio processing.&lt;/li&gt;
&lt;/ol&gt;
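&lt;p&gt;Feature 4, few-shot learning, is worth making concrete: the task is demonstrated &lt;em&gt;inside the prompt itself&lt;/em&gt;, with no retraining. The prompt layout below is a common convention rather than any fixed API, and the examples are invented.&lt;/p&gt;

```python
# Sketch of few-shot prompting: demonstrations are embedded in the
# prompt and the model is asked to continue the pattern. The layout
# is a common convention, not a fixed API.
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Build a few-shot classification prompt from labeled examples."""
    lines = ["Classify the sentiment as positive or negative."]
    for text, label in examples:
        lines.append(f"Text: {text}\nSentiment: {label}")
    lines.append(f"Text: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = few_shot_prompt(
    [("I loved this film", "positive"), ("Waste of money", "negative")],
    "Best purchase I ever made",
)
```

&lt;p&gt;A traditional NLP classifier would need labeled training data and a fitting step to do the same job; an LLM just completes the pattern.&lt;/p&gt;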

&lt;h2&gt;
  
  
  Key Differences Between LLM and NLP
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;NLP&lt;/th&gt;
&lt;th&gt;LLM&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Definition&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A field of AI focused on processing human language.&lt;/td&gt;
&lt;td&gt;A type of AI model designed to generate and understand language at scale.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scope&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Includes a broad range of techniques and methods.&lt;/td&gt;
&lt;td&gt;A subset of NLP using deep learning-based models.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data Processing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Uses rule-based, statistical, and machine learning approaches.&lt;/td&gt;
&lt;td&gt;Uses massive datasets and neural networks.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Training&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Can be task-specific or general-purpose.&lt;/td&gt;
&lt;td&gt;Typically pre-trained on large corpora and fine-tuned.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Examples&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;SpaCy, NLTK, BERT, TF-IDF&lt;/td&gt;
&lt;td&gt;GPT-4, LLaMA, Claude, Gemini&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Usage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Sentiment analysis, machine translation, chatbots&lt;/td&gt;
&lt;td&gt;Conversational AI, content generation, reasoning tasks&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  LLM vs NLP Use Cases and Applications
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;When to Use NLP?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Text Preprocessing:&lt;/strong&gt; Cleaning and structuring data for machine learning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Information Retrieval:&lt;/strong&gt; Search engines and document indexing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spam Detection:&lt;/strong&gt; Filtering unwanted messages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Voice Assistants:&lt;/strong&gt; Speech-to-text and text-to-speech applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;When to Use LLMs?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Content Creation:&lt;/strong&gt; Writing articles, emails, and social media posts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chatbots and Virtual Assistants:&lt;/strong&gt; AI-powered customer service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Generation:&lt;/strong&gt; Writing and debugging code using AI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Analysis:&lt;/strong&gt; Summarizing and extracting insights from large text datasets.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  LLM vs NLP: Which is Better?
&lt;/h2&gt;

&lt;p&gt;When asking &lt;strong&gt;which is better, LLM or NLP&lt;/strong&gt;, the answer depends on the use case. Traditional NLP techniques remain valuable for structured tasks such as sentiment analysis, while LLMs excel at open-ended language generation. Businesses that need advanced conversational AI should consider LLMs; NLP remains essential for practical applications that require rule-based processing and computational efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Future of LLMs and NLP
&lt;/h2&gt;

&lt;p&gt;As AI research progresses, NLP and LLMs will continue evolving. Some anticipated trends include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid Models:&lt;/strong&gt; Combining rule-based NLP with deep learning for better accuracy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Personalized AI:&lt;/strong&gt; Models tailored to individual users for improved responses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ethical AI Development:&lt;/strong&gt; Addressing biases and making AI more transparent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal AI:&lt;/strong&gt; Integrating text, images, and voice for seamless interactions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  LLM vs NLP: The Final Verdict
&lt;/h2&gt;

&lt;p&gt;While NLP and LLMs are interconnected, they serve different purposes in AI-driven language processing. NLP encompasses a broader field with various methods, whereas LLMs represent a specialized subset with advanced deep-learning capabilities. Understanding their differences and applications will help businesses and developers choose the right technology for their needs.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>nlp</category>
      <category>llmvsnlp</category>
    </item>
    <item>
      <title>Best Open Source LLM</title>
      <dc:creator>mehmet akar</dc:creator>
      <pubDate>Fri, 14 Mar 2025 07:47:29 +0000</pubDate>
      <link>https://forem.com/mehmetakar/best-open-source-llm-3ig8</link>
      <guid>https://forem.com/mehmetakar/best-open-source-llm-3ig8</guid>
      <description>&lt;p&gt;I want to mention some best open source llm in 2025.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Open Source LLM: Introduction
&lt;/h2&gt;

&lt;p&gt;Open-source Large Language Models (LLMs) are becoming increasingly powerful, offering flexible and cost-effective alternatives to proprietary models. These models enable developers, researchers, and enterprises to integrate AI capabilities into their applications without relying on closed-source platforms. This article provides an overview of the best open-source LLMs in 2025, including their capabilities, use cases, licensing, and performance comparisons.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Top Open-Source LLMs of 2025&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. LLaMA 3.1&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Developer:&lt;/strong&gt; Meta AI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release Date:&lt;/strong&gt; July 23, 2024&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameter Size:&lt;/strong&gt; 405B, 70B, 8B&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Cases:&lt;/strong&gt; General text generation, multilingual processing, code generation, long-form content, enterprise AI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License:&lt;/strong&gt; Llama Community License&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Highlights:&lt;/strong&gt; LLaMA 3.1 is Meta’s latest high-performance model, designed for large-scale enterprise applications and advanced AI research.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. DeepSeek-R1&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Developer:&lt;/strong&gt; DeepSeek&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release Date:&lt;/strong&gt; January 2025&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameter Size:&lt;/strong&gt; 671B&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Cases:&lt;/strong&gt; General-purpose AI, scalable applications, chatbots, education, data analysis.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License:&lt;/strong&gt; DeepSeek License&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Highlights:&lt;/strong&gt; DeepSeek-R1 is one of the largest open-source LLMs, optimized for efficiency and adaptability across various industries.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Qwen 2.5 72B&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Developer:&lt;/strong&gt; Alibaba Cloud&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release Date:&lt;/strong&gt; September 19, 2024&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameter Size:&lt;/strong&gt; 72B&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Cases:&lt;/strong&gt; Multilingual and multimodal tasks, enterprise AI, international team collaboration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License:&lt;/strong&gt; Qwen License&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Highlights:&lt;/strong&gt; This model supports multiple languages and image-text interactions, making it ideal for global AI applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Mistral 7B &amp;amp; Mistral Large 2&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Developer:&lt;/strong&gt; Mistral AI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release Dates:&lt;/strong&gt; Mistral 7B (2023), Mistral Large 2 (July 2024)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameter Sizes:&lt;/strong&gt; 7B, 123B&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Cases:&lt;/strong&gt; Edge AI, personal assistants, scalable AI, research and development.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License:&lt;/strong&gt; Apache 2.0, Mistral Research License&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Highlights:&lt;/strong&gt; Mistral 7B is lightweight for on-device AI, while Mistral Large 2 is optimized for enterprise-scale processing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;5. Falcon 180B&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Developer:&lt;/strong&gt; Technology Innovation Institute (TII)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release Date:&lt;/strong&gt; September 2023&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameter Size:&lt;/strong&gt; 180B&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Cases:&lt;/strong&gt; Financial analysis, legal AI, healthcare applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License:&lt;/strong&gt; TII Falcon License&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Highlights:&lt;/strong&gt; Falcon 180B is one of the most advanced open-source LLMs for deep reasoning and high-performance AI tasks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;6. DeepSeek-MoE 16B&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Developer:&lt;/strong&gt; DeepSeek&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release Date:&lt;/strong&gt; January 9, 2024&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameter Size:&lt;/strong&gt; 16B (2.7B activated per token)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Cases:&lt;/strong&gt; Domain-specific AI, efficient training, custom AI solutions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License:&lt;/strong&gt; DeepSeek License&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Highlights:&lt;/strong&gt; A Mixture of Experts (MoE) model that reduces computational costs while maintaining high adaptability.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;7. PaLM 2&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Developer:&lt;/strong&gt; Google&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release Date:&lt;/strong&gt; May 2023&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameter Size:&lt;/strong&gt; 340B (reported, not officially confirmed)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Cases:&lt;/strong&gt; Multimodal AI, translation, advanced reasoning, research.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License:&lt;/strong&gt; Proprietary (available via Google's API; not an open-weights model)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Highlights:&lt;/strong&gt; PaLM 2 is designed for cross-modal tasks, including text, image, and audio understanding.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;8. Grok-1&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Developer:&lt;/strong&gt; xAI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release Date:&lt;/strong&gt; November 2023 (weights released March 2024)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameter Size:&lt;/strong&gt; 314B&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Cases:&lt;/strong&gt; Creative applications, humor, personalized AI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License:&lt;/strong&gt; Apache 2.0&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Highlights:&lt;/strong&gt; Grok-1 specializes in generating engaging and humorous content for entertainment industries.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;9. Gemma 2&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Developer:&lt;/strong&gt; Google&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release Date:&lt;/strong&gt; June 2024&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameter Size:&lt;/strong&gt; 2B, 9B, 27B&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Cases:&lt;/strong&gt; General text generation, question answering, summarization, code generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License:&lt;/strong&gt; Gemma License&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Highlights:&lt;/strong&gt; Gemma 2 models are lightweight yet powerful, making them ideal for both research and commercial applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;10. Yi&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Developer:&lt;/strong&gt; 01.AI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release Date:&lt;/strong&gt; November 2023 (Yi-1.5: May 2024)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameter Size:&lt;/strong&gt; 6B, 9B, 34B&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Cases:&lt;/strong&gt; Bilingual AI, code generation, mathematical reasoning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License:&lt;/strong&gt; Apache 2.0&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Highlights:&lt;/strong&gt; Yi models are optimized for high-performance bilingual AI and code-related tasks.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Performance &amp;amp; Benchmark Comparisons&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Processing Speed:&lt;/strong&gt; DeepSeek-R1 and Falcon 180B lead in speed for large-scale tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory Efficiency:&lt;/strong&gt; DeepSeek-MoE 16B and Mistral 7B are optimized for resource-constrained environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal Abilities:&lt;/strong&gt; Qwen 2.5 and PaLM 2 excel in text-image interactions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Creative AI:&lt;/strong&gt; Grok-1 is the best performer in creative and entertainment-focused applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Use:&lt;/strong&gt; LLaMA 3.1 (405B) and DeepSeek-R1 (671B) are top-tier choices for high-performance AI deployment.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Choosing the best open-source LLM depends on specific needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;For enterprise-scale AI:&lt;/strong&gt; LLaMA 3.1, DeepSeek-R1, and Falcon 180B.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For edge computing and lightweight AI:&lt;/strong&gt; Mistral 7B and DeepSeek-MoE 16B.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For multimodal tasks:&lt;/strong&gt; Qwen 2.5 and PaLM 2.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For creative and personalized AI:&lt;/strong&gt; Grok-1.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These open-source LLMs provide developers with a broad selection of tools for research, business applications, and personal projects, ensuring innovation continues in the AI ecosystem.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>bestopensourcellm</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Postgresql vs. MongoDB</title>
      <dc:creator>mehmet akar</dc:creator>
      <pubDate>Fri, 14 Mar 2025 07:44:20 +0000</pubDate>
      <link>https://forem.com/mehmetakar/postgresql-vs-mongodb-a0a</link>
      <guid>https://forem.com/mehmetakar/postgresql-vs-mongodb-a0a</guid>
      <description>&lt;p&gt;PostgreSQL vs. MongoDB is the one of the most wondered comparisons in the database world. Let me compare them in a concise way.&lt;/p&gt;

&lt;h2&gt;
  
  
  PostgreSQL vs. MongoDB: A Concise Comparison
&lt;/h2&gt;

&lt;p&gt;PostgreSQL and MongoDB are two of the most popular database management systems. PostgreSQL is an open-source relational database, while MongoDB is a NoSQL document-oriented database. Both have their unique strengths, and choosing between them depends on various factors such as data structure, scalability needs, and performance requirements. This article will compare them based on installation, benchmarks, pricing, and key performance metrics.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Installation Tutorials&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Installing PostgreSQL on Ubuntu&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Update the Package List:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Install PostgreSQL:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;postgresql postgresql-contrib
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Verify Installation:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   psql &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Installing MongoDB on Ubuntu&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Import the Public Key:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   wget &lt;span class="nt"&gt;-qO&lt;/span&gt; - https://www.mongodb.org/static/pgp/server-5.0.asc | &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-key add -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create a List File for MongoDB:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/mongodb-org-5.0.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Update the Package List:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Install MongoDB:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; mongodb-org
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start and Enable MongoDB:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start mongod
   &lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;mongod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  &lt;strong&gt;PostgreSQL vs. MongoDB: Performance Benchmarks&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Transaction Performance&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;According to Airbyte:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL processed over &lt;strong&gt;20,000 transactions per second&lt;/strong&gt;, whereas MongoDB handled &lt;strong&gt;fewer than 2,000 transactions per second&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;OLTP Workloads&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL was found to be &lt;strong&gt;2.7 to 3.2 times faster&lt;/strong&gt; in in-memory tests.&lt;/li&gt;
&lt;li&gt;With larger datasets (2TB), PostgreSQL outperformed MongoDB by &lt;strong&gt;25 to 40 times&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;OLAP Workloads&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL performed &lt;strong&gt;35% to 53% faster&lt;/strong&gt; than MongoDB in JSON-based analytical queries.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;PostgreSQL vs. MongoDB: Pricing Models&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;PostgreSQL&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL is &lt;strong&gt;free&lt;/strong&gt; and open-source. &lt;/li&gt;
&lt;li&gt;Managed PostgreSQL services start at &lt;strong&gt;$15/month&lt;/strong&gt; on platforms like DigitalOcean.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;MongoDB&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;MongoDB provides a &lt;strong&gt;free Community Edition&lt;/strong&gt; and a paid &lt;strong&gt;Enterprise Edition&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Managed MongoDB Atlas services start at &lt;strong&gt;$15.23/month&lt;/strong&gt; on DigitalOcean.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;PostgreSQL vs. MongoDB: Other Performance Metrics&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Scalability&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MongoDB&lt;/strong&gt; supports &lt;strong&gt;horizontal scaling&lt;/strong&gt; with built-in &lt;strong&gt;sharding&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt; primarily scales &lt;strong&gt;vertically&lt;/strong&gt;, though sharding can be manually implemented.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Data Modeling&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MongoDB&lt;/strong&gt; is schema-less and flexible for unstructured data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt; follows a strict schema model for structured data integrity.&lt;/li&gt;
&lt;/ul&gt;
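
&lt;p&gt;The contrast is easy to demonstrate: a relational table rejects rows that do not match its declared columns, while documents of different shapes can coexist in one collection. The sketch below uses Python's built-in &lt;code&gt;sqlite3&lt;/code&gt; as a stand-in for PostgreSQL and plain dicts as stand-ins for MongoDB documents:&lt;/p&gt;

```python
import sqlite3

# Strict schema (relational): columns are declared up front, and rows
# that do not fit are rejected. sqlite3 stands in for PostgreSQL here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
try:
    conn.execute("INSERT INTO products (name, price, color) VALUES (?, ?, ?)",
                 ("shirt", 9.99, "red"))
    rejected = False
except sqlite3.OperationalError:
    rejected = True  # unknown column "color" is rejected by the schema

# Schema-less (document): mixed shapes sit in the same collection.
collection = [
    {"name": "shirt", "price": 9.99, "color": "red"},
    {"name": "ebook", "price": 4.99, "pages": 120},
]
print(rejected, len(collection))  # True 2
```

&lt;p&gt;Strictness is a trade-off: the relational schema catches mistakes early, while the document model lets the data shape evolve without migrations.&lt;/p&gt;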

&lt;h3&gt;
  
  
  &lt;strong&gt;Query Language&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MongoDB&lt;/strong&gt; uses &lt;strong&gt;MongoDB Query Language (MQL)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt; uses &lt;strong&gt;SQL&lt;/strong&gt;, a widely adopted query language.&lt;/li&gt;
&lt;/ul&gt;
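
&lt;p&gt;A side-by-side sketch makes the difference concrete. The snippet below runs the same query ("users older than 30, names only") in both styles, again using the built-in &lt;code&gt;sqlite3&lt;/code&gt; module as a runnable stand-in for PostgreSQL, and a plain dict filter mirroring the shape of pymongo's &lt;code&gt;find()&lt;/code&gt; argument (the data and filter are illustrative):&lt;/p&gt;

```python
import sqlite3

# SQL side, approximated with stdlib sqlite3 so the example is runnable:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("Ada", 36), ("Bob", 28), ("Eve", 41)])
sql_names = [row[0] for row in
             conn.execute("SELECT name FROM users WHERE age > 30")]

# MQL side: this is the filter document you would pass to pymongo's
# collection.find(); here it is evaluated against plain dicts.
mql_filter = {"age": {"$gt": 30}}
docs = [{"name": "Ada", "age": 36}, {"name": "Bob", "age": 28},
        {"name": "Eve", "age": 41}]
mql_names = [d["name"] for d in docs if d["age"] > mql_filter["age"]["$gt"]]

print(sql_names, mql_names)  # ['Ada', 'Eve'] ['Ada', 'Eve']
```

&lt;p&gt;Both express the same predicate; SQL does it declaratively as text, MQL as a structured filter document.&lt;/p&gt;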




&lt;h2&gt;
  
  
  &lt;strong&gt;PostgreSQL vs. MongoDB: Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Both PostgreSQL and MongoDB are powerful databases with different strengths. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Choose PostgreSQL&lt;/strong&gt; if you require &lt;strong&gt;ACID compliance, complex transactions, and structured data&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Choose MongoDB&lt;/strong&gt; if you need &lt;strong&gt;high scalability, flexibility, and quick development&lt;/strong&gt; for unstructured data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Selecting the right database depends on your specific use case, workload requirements, and budget.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>mongodb</category>
      <category>database</category>
      <category>postgresqlvsmongodb</category>
    </item>
    <item>
      <title>RAG Vector Database - Use Cases &amp; Tutorial</title>
      <dc:creator>mehmet akar</dc:creator>
      <pubDate>Thu, 13 Mar 2025 11:05:59 +0000</pubDate>
      <link>https://forem.com/mehmetakar/rag-vector-database-2lb2</link>
      <guid>https://forem.com/mehmetakar/rag-vector-database-2lb2</guid>
      <description>&lt;p&gt;RAG Vector Database is one of the first main terms RAG geeks are looking for. Let me dive in!&lt;/p&gt;

&lt;h2&gt;
  
  
  RAG Vector Database: A Comprehensive Guide: Table of Contents
&lt;/h2&gt;

&lt;h6&gt;
  
  
  - RAG Vector Database: Introduction
&lt;/h6&gt;

&lt;h6&gt;
  
  
  - What is RAG?
&lt;/h6&gt;

&lt;h6&gt;
  
  
  - Vector Databases Explained
&lt;/h6&gt;

&lt;h6&gt;
  
  
  - What is RAG Vector Database?
&lt;/h6&gt;

&lt;h6&gt;
  
  
  - Popular Vector Databases
&lt;/h6&gt;

&lt;h6&gt;
  
  
  - Implementing RAG with Vector Databases
&lt;/h6&gt;

&lt;h6&gt;
  
  
  - Tutorials and Examples
&lt;/h6&gt;

&lt;h6&gt;
  
  
  - Best Practices
&lt;/h6&gt;

&lt;h6&gt;
  
  
  - Future Trends
&lt;/h6&gt;

&lt;h6&gt;
  
  
  - RAG Vector Database: Conclusion
&lt;/h6&gt;

&lt;h2&gt;
  
  
  RAG Vector Database: Introduction
&lt;/h2&gt;

&lt;p&gt;Retrieval Augmented Generation (RAG) with vector databases has revolutionized how AI systems access and utilize information. This comprehensive guide explores the technology, implementation, and best practices for building powerful RAG systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is RAG?
&lt;/h2&gt;

&lt;p&gt;RAG (Retrieval Augmented Generation) is an AI architecture that combines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Information retrieval from a knowledge base&lt;/li&gt;
&lt;li&gt;Large Language Model (LLM) generation capabilities&lt;/li&gt;
&lt;li&gt;Vector embeddings for semantic search&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Improved accuracy with up-to-date information&lt;/li&gt;
&lt;li&gt;Reduced hallucinations&lt;/li&gt;
&lt;li&gt;Better context awareness&lt;/li&gt;
&lt;li&gt;Verifiable responses&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Vector Databases Explained
&lt;/h2&gt;

&lt;p&gt;Vector databases are specialized systems that store and retrieve high-dimensional vectors representing data. Key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vector Embeddings&lt;/strong&gt;: Mathematical representations of data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Similarity Search&lt;/strong&gt;: Fast nearest neighbor search&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Efficient handling of millions of vectors&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Index Types&lt;/strong&gt;: HNSW, IVF, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is RAG Vector Database?
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;RAG (Retrieval-Augmented Generation) vector database&lt;/strong&gt; is a specialized type of database used in AI applications, particularly in &lt;strong&gt;Large Language Models (LLMs)&lt;/strong&gt;, to enhance their knowledge retrieval and response generation capabilities. It combines &lt;strong&gt;vector databases&lt;/strong&gt; with &lt;strong&gt;retrieval-augmented generation (RAG)&lt;/strong&gt; techniques to improve the quality, accuracy, and relevance of responses.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How It Works&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data Storage in Vector Format:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Documents, text, and other information are converted into &lt;strong&gt;vector embeddings&lt;/strong&gt; using &lt;strong&gt;embedding models&lt;/strong&gt; (e.g., OpenAI's Ada, Cohere, Sentence Transformers).&lt;/li&gt;
&lt;li&gt;These embeddings capture &lt;strong&gt;semantic meaning&lt;/strong&gt; and are stored in a &lt;strong&gt;vector database&lt;/strong&gt; like &lt;strong&gt;Pinecone, Weaviate, Milvus, Qdrant, or PostgreSQL with pgvector&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Retrieval (R in RAG):&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When a user inputs a query, the system &lt;strong&gt;converts the query into a vector&lt;/strong&gt; and searches for the most &lt;strong&gt;similar embeddings&lt;/strong&gt; in the database using &lt;strong&gt;approximate nearest neighbors (ANN) search&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Augmenting the LLM (A in RAG):&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The retrieved text is fed into a &lt;strong&gt;language model (e.g., GPT-4, LLaMA, Claude)&lt;/strong&gt; as &lt;strong&gt;context&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The LLM &lt;strong&gt;generates a response&lt;/strong&gt; by leveraging both the retrieved data and its own pre-trained knowledge.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Final Response Generation (G in RAG):&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The model &lt;strong&gt;integrates the retrieved information&lt;/strong&gt; and produces a more &lt;strong&gt;context-aware and accurate response&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
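
&lt;p&gt;The retrieval step (2) can be sketched in a few lines of plain Python. The 3-dimensional "embeddings" below are toy values; a real system would use an embedding model for the vectors and an ANN index rather than brute-force cosine similarity:&lt;/p&gt;

```python
import math

# Retrieval in miniature: documents and the query are represented as
# vectors, and cosine similarity picks the closest document, which would
# then be handed to the LLM as context.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

docs = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "privacy notice": [0.0, 0.2, 0.9],
}
# Pretend embedding of the query "how do I get a refund?"
query_vec = [0.85, 0.15, 0.05]

best = max(docs, key=lambda name: cosine(docs[name], query_vec))
print(best)  # refund policy
```

&lt;p&gt;Everything else in the pipeline (chunking, indexing, prompt assembly) is built around this one nearest-neighbor lookup.&lt;/p&gt;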

&lt;h3&gt;
  
  
  &lt;strong&gt;Benefits of RAG with Vector Databases&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Better Accuracy&lt;/strong&gt;: Helps reduce hallucinations by grounding responses in real data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficient Information Retrieval&lt;/strong&gt;: Uses vector-based similarity instead of keyword-based search.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Can handle large-scale unstructured data (documents, PDFs, research papers, etc.).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customization&lt;/strong&gt;: Allows businesses to fine-tune AI models with their proprietary data without retraining LLMs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Use Cases&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chatbots &amp;amp; Virtual Assistants&lt;/strong&gt;: Providing better responses by integrating domain-specific knowledge.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Search&lt;/strong&gt;: Improving search efficiency in company databases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Legal &amp;amp; Financial AI&lt;/strong&gt;: Summarizing documents with high precision.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medical AI&lt;/strong&gt;: Ensuring responses are based on verified medical sources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;E-commerce &amp;amp; Recommendations&lt;/strong&gt;: Enhancing product search and recommendations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Popular RAG Vector Databases
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Pinecone
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pinecone&lt;/span&gt;

&lt;span class="n"&gt;pinecone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-api-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pinecone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;example-index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Insert vectors
&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;upsert&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;id1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;metadata&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;example&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}),&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Weaviate
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;weaviate&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;weaviate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:8080&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Create schema
&lt;/span&gt;&lt;span class="n"&gt;class_obj&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;class&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Document&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vectorizer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text2vec-transformers&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_class&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;class_obj&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Milvus
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pymilvus&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Collection&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;connections&lt;/span&gt;

&lt;span class="n"&gt;connections&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;collection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;documents&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Search vectors
&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;query_vectors&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;embeddings&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;param&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;metric_type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;L2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Chroma
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;chromadb&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;chromadb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;collection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;documents&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Add documents
&lt;/span&gt;&lt;span class="n"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;documents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[[&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;]],&lt;/span&gt;
    &lt;span class="n"&gt;ids&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;id1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Implementing RAG with Vector Databases
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Basic RAG Pipeline
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.embeddings&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAIEmbeddings&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.vectorstores&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Chroma&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.llms&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.chains&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;RetrievalQA&lt;/span&gt;

&lt;span class="c1"&gt;# Initialize components
&lt;/span&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAIEmbeddings&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;vectorstore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Chroma&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;embedding_function&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Create RAG chain
&lt;/span&gt;&lt;span class="n"&gt;qa_chain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;RetrievalQA&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_chain_type&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;chain_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stuff&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;retriever&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;vectorstore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;as_retriever&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Tutorials and Examples
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Basic RAG Implementation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# 1. Prepare documents
&lt;/span&gt;&lt;span class="n"&gt;documents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Document 1 content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Document 2 content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# 2. Create embeddings
&lt;/span&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAIEmbeddings&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;doc_embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;embed_documents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;documents&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# 3. Store in vector database
&lt;/span&gt;&lt;span class="n"&gt;vectorstore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_documents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;documents&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# 4. Query
&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;qa_chain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Your question here&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Advanced RAG with Hybrid Search
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Combine keyword and semantic search
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.retrievers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BM25Retriever&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.retrievers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;EnsembleRetriever&lt;/span&gt;

&lt;span class="c1"&gt;# Create retrievers
&lt;/span&gt;&lt;span class="n"&gt;bm25_retriever&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;BM25Retriever&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_documents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;documents&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;vector_retriever&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vectorstore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;as_retriever&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Combine retrievers
&lt;/span&gt;&lt;span class="n"&gt;ensemble_retriever&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;EnsembleRetriever&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;retrievers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;bm25_retriever&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;vector_retriever&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;weights&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Best Practices
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data Preparation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean and preprocess text&lt;/li&gt;
&lt;li&gt;Split documents appropriately&lt;/li&gt;
&lt;li&gt;Remove duplicates&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Vector Database Selection&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consider scale requirements&lt;/li&gt;
&lt;li&gt;Evaluate hosting options&lt;/li&gt;
&lt;li&gt;Compare performance metrics&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Optimization&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use appropriate chunk sizes&lt;/li&gt;
&lt;li&gt;Implement caching&lt;/li&gt;
&lt;li&gt;Monitor and tune performance&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement access controls&lt;/li&gt;
&lt;li&gt;Encrypt sensitive data&lt;/li&gt;
&lt;li&gt;Regular security audits&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
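&lt;p&gt;The chunking advice above ("split documents appropriately", "use appropriate chunk sizes") can be sketched in plain Python. The helper below is a hypothetical illustration, not part of any library: it splits text into fixed-size character chunks with overlap so that neighboring chunks share context. Production pipelines typically use token- or sentence-aware splitters instead.&lt;/p&gt;

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks for vector indexing.

    Hypothetical helper for illustration; real RAG pipelines usually
    split on tokens or sentence boundaries rather than raw characters.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    step = chunk_size - overlap  # how far the window advances per chunk
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

&lt;p&gt;Each chunk would then be embedded and stored exactly like the single-document examples earlier in the article.&lt;/p&gt;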

&lt;h2&gt;
  
  
  Future Trends
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Multi-modal RAG systems&lt;/li&gt;
&lt;li&gt;Improved context compression&lt;/li&gt;
&lt;li&gt;Hybrid search techniques&lt;/li&gt;
&lt;li&gt;Real-time updating capabilities&lt;/li&gt;
&lt;li&gt;Enhanced privacy features&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  RAG Vector Database: Conclusion
&lt;/h2&gt;

&lt;p&gt;RAG vector databases represent a powerful approach to enhancing AI systems with accurate, retrievable knowledge. By following this guide and implementing best practices, you can build robust RAG systems for various applications.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Important Note: I update this article regularly to reflect the latest developments in RAG and vector database technologies.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>rag</category>
      <category>vectordatabase</category>
      <category>ragvectordatabase</category>
      <category>ai</category>
    </item>
    <item>
      <title>Manus AI Invitation Code</title>
      <dc:creator>mehmet akar</dc:creator>
      <pubDate>Thu, 13 Mar 2025 09:20:03 +0000</pubDate>
      <link>https://forem.com/mehmetakar/manus-ai-invitation-code-229n</link>
      <guid>https://forem.com/mehmetakar/manus-ai-invitation-code-229n</guid>
      <description>&lt;p&gt;Manus AI Invitation Code is asked heavily by the AI development geeks, nowadays. As you realize, Manus AI does not accept direct sign-up. It accepts users via giving invitation code after filling a application form.&lt;/p&gt;

&lt;p&gt;I'll guide you through the process of applying for a Manus AI invitation code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manus AI Invitation Code: How to get it?
&lt;/h2&gt;

&lt;p&gt;There are five steps to getting a Manus AI invitation code.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: Click on "Get Started"&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;First, visit the Manus AI home page. As you see below, click the "Get started" button in the upper right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvil9erupk0naqllglhzo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvil9erupk0naqllglhzo.png" alt="Manus AI Invitation Code Step 1" width="800" height="695"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2: Click on "Apply for access"&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;After clicking on "get started", the code activation popup window opens. You should click on "Apply for access" because you have not get Manus AI invitation code yet. See below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uoq244ttwc9nrhusgzt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uoq244ttwc9nrhusgzt.png" alt="Manus AI Invitation Code Step 2" width="370" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3: Filling Out the Manus AI Invitation Code Application Form&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;After clicking on "Apply for access", Manus AI Invitation Code application form page opens. In this page, Manus AI want to know your plans for using Menus AI. See Below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgiy2635xgg46huxii5el.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgiy2635xgg46huxii5el.png" alt="Manus AI Invitation Code Step 3" width="606" height="610"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Be careful at this stage. Manus AI lists several use cases on its website, and you can mention one of them in the application form.&lt;/p&gt;

&lt;p&gt;If you're not sure how you would use Manus AI, I created an example based on the use cases Manus AI provides.&lt;/p&gt;

&lt;p&gt;Be careful: keep the paragraph for one of the use cases and remove the others, as marked below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Manus AI Invitation Code Application Form Content Sample:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subject:&lt;/strong&gt; Invitation Code Request  &lt;/p&gt;

&lt;p&gt;Dear Manus Team,  &lt;/p&gt;

&lt;p&gt;Hello!  &lt;/p&gt;

&lt;p&gt;My name is [Your Name], and I am excited about the potential of integrating Manus AI into my work and daily life. Given the wide-ranging capabilities of Manus, I see immense value in leveraging its AI-driven solutions for my specific needs.  &lt;/p&gt;

&lt;p&gt;One of my primary use cases for Manus involves trip planning and travel research. I have an upcoming trip to Japan in April and would love to use Manus to generate a personalized itinerary, complete with travel insights, cultural recommendations, and an optimized schedule to enhance my experience. A custom travel handbook tailored to my interests would greatly streamline my planning process. &lt;strong&gt;(Keep one, remove others)&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Additionally, I am deeply involved in financial market analysis, particularly focusing on Tesla stock performance. Manus’s ability to provide in-depth, visually interactive dashboards with real-time financial insights would be instrumental in making informed investment decisions. &lt;strong&gt;(Keep one, remove others)&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Another area where I see Manus adding value is in education and content creation. As someone passionate about teaching, I would like to explore how Manus can generate engaging, video-based educational content to explain complex topics like the momentum theorem in a way that is easy to grasp for middle school students. &lt;strong&gt;(Keep one, remove others)&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Moreover, I frequently compare and assess insurance policies and financial products. I believe Manus can streamline this process by generating structured comparison tables, helping me analyze key differences between policies and make well-informed choices. &lt;strong&gt;(Keep one, remove others)&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;On a professional level, I am actively involved in B2B supplier sourcing and industry research, particularly in finding high-quality suppliers and evaluating AI-driven solutions for various industries. With Manus’s expertise in compiling comprehensive research reports and conducting competitive analysis, I am eager to leverage its capabilities to enhance my market insights. &lt;strong&gt;(Keep one, remove others)&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Lastly, I am interested in Manus’s ability to optimize e-commerce operations. As someone who manages an online store, I see Manus as an invaluable tool for analyzing sales data, generating performance reports, and identifying actionable strategies to boost conversion rates. &lt;strong&gt;(Keep one, remove others)&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;I have high expectations for Manus and am eager to explore its AI-driven insights across multiple domains. I am confident that its powerful data-processing capabilities will significantly enhance my decision-making and efficiency.  &lt;/p&gt;

&lt;p&gt;Thank you for considering my application. If I am fortunate enough to receive an invitation code, I will fully utilize Manus to optimize my workflows, research, and personal projects. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Applicant:&lt;/strong&gt; [Your Name]&lt;br&gt;&lt;br&gt;
[Application Date]&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Finishing the Application Process
&lt;/h3&gt;

&lt;p&gt;As you see below, a page opens thanking you for your interest in Manus.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fowmr8d99aq0cqstfvobu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fowmr8d99aq0cqstfvobu.png" alt="Manus AI Invitation Code Step 4" width="677" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It says they will get back to you by email.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Receiving and Submitting the Manus AI Invitation Code
&lt;/h3&gt;

&lt;p&gt;After finishing the application process, you will receive an email from Manus AI containing an invitation code if you are accepted.&lt;/p&gt;

&lt;p&gt;After receiving your Manus AI invitation code, click the "Get started" button on the home page again and submit the code on the code activation page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk20mst2pexpmqejmr88u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk20mst2pexpmqejmr88u.png" alt="Image description" width="349" height="589"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it! You can now experience Manus AI!&lt;/p&gt;

&lt;p&gt;Thanks for reading. See you on more tutorials!&lt;/p&gt;

</description>
      <category>manusai</category>
      <category>manus</category>
      <category>ai</category>
      <category>llm</category>
    </item>
    <item>
      <title>Model Context Protocol (MCP) Tutorial</title>
      <dc:creator>mehmet akar</dc:creator>
      <pubDate>Thu, 13 Mar 2025 06:36:14 +0000</pubDate>
      <link>https://forem.com/mehmetakar/model-context-protocol-mcp-tutorial-3nda</link>
      <guid>https://forem.com/mehmetakar/model-context-protocol-mcp-tutorial-3nda</guid>
      <description>&lt;p&gt;Model Context Protocol (MCP) is actually a concept that is a few months old, but getting so hot nowadays as AI development evolves day by day.&lt;br&gt;
Let me look at what is Model Context Protocol (MCP) and deep dive into Model Context Protocol (MCP) with tutorial with implementation code examples.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;What is Model Context Protocol (MCP)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Model Context Protocol (MCP), an open source project run by &lt;a href="https://www.linkedin.com/company/anthropicresearch/" rel="noopener noreferrer"&gt;Anthropic&lt;/a&gt;, is an emerging framework that enables structured and dynamic context management for machine learning models, particularly in applications involving conversational AI, large language models (LLMs), and multi-modal systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qwtcawq7nk98nmiaa8c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qwtcawq7nk98nmiaa8c.png" alt="Model Context Protocol (MCP)" width="701" height="610"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;This protocol is designed to provide &lt;strong&gt;better control, efficiency, and adaptability&lt;/strong&gt; in real-time AI applications by managing the context dynamically based on user interactions and data flows.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Key Features of Model Context Protocol&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic Context Switching&lt;/strong&gt; – Enables AI models to switch between different contexts based on predefined rules or user interactions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory Management&lt;/strong&gt; – Efficiently handles long-term and short-term memory for contextual conversations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimized for LLMs&lt;/strong&gt; – Reduces token usage by maintaining only relevant context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Agent Coordination&lt;/strong&gt; – Enables collaboration between multiple AI agents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fine-Tuned Context Retrieval&lt;/strong&gt; – Ensures only necessary context is fetched, improving response relevance.&lt;/li&gt;
&lt;/ol&gt;
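&lt;p&gt;Feature 3 above (reducing token usage by keeping only relevant context) can be illustrated with a small sketch. This is a hypothetical helper, not part of MCP itself: it keeps the most recent messages that fit a rough token budget, approximating token counts by word count (a real system would use the model's tokenizer).&lt;/p&gt;

```python
def trim_to_budget(messages: list[str], max_tokens: int = 100) -> list[str]:
    """Keep the newest messages whose approximate token total fits the budget.

    Hypothetical sketch: token cost is approximated by whitespace word
    count, which a real implementation would replace with the model's
    tokenizer.
    """
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = len(msg.split())
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order
```

&lt;p&gt;Trimming by a token budget rather than a fixed message count keeps prompts under the model's context limit even when individual messages vary widely in length.&lt;/p&gt;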
&lt;h2&gt;
  
  
  &lt;strong&gt;Use Cases of Model Context Protocol&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Conversational AI&lt;/strong&gt; – Enhances chatbots and voice assistants by dynamically managing conversation memory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Generation and Assistance&lt;/strong&gt; – Allows AI coding assistants to remember user-specific coding preferences.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customer Support AI&lt;/strong&gt; – Manages ongoing support cases by maintaining relevant conversation history.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medical AI Assistants&lt;/strong&gt; – Helps AI-powered diagnostic tools maintain patient history context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal AI Applications&lt;/strong&gt; – Enables AI models to understand and switch between text, images, and audio contexts.&lt;/li&gt;
&lt;/ol&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;Model Context Protocol in Action&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let's explore MCP through &lt;strong&gt;practical examples&lt;/strong&gt; with Python-based implementations.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;1. Implementing a Basic Model Context Protocol&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here’s how you can implement a simple &lt;strong&gt;context-aware AI model&lt;/strong&gt; using Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ModelContextProtocol&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;update_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  &lt;span class="c1"&gt;# Limit memory to the last 5 messages
&lt;/span&gt;            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;pop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[])&lt;/span&gt;

&lt;span class="c1"&gt;# Example Usage
&lt;/span&gt;&lt;span class="n"&gt;mcp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ModelContextProtocol&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;mcp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello, AI!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;mcp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Can you help me with Python?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mcp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;  &lt;span class="c1"&gt;# Output: Last 5 interactions
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  &lt;strong&gt;2. Implementing Context-Aware LLM using OpenAI GPT&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;MCP can be used to enhance &lt;strong&gt;OpenAI’s GPT models&lt;/strong&gt; by feeding only the relevant conversation history.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ContextAwareGPT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;api_key&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;update_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  &lt;span class="c1"&gt;# Limit history
&lt;/span&gt;            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;pop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;new_message&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;new_message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;context_str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ChatCompletion&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are an AI assistant.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; 
                     &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;choices&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Example Usage
&lt;/span&gt;&lt;span class="n"&gt;gpt_context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ContextAwareGPT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-api-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;gpt_context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate_response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;How do I install NumPy?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  &lt;strong&gt;3. Implementing Multi-Agent Model Context Protocol&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In complex applications, multiple AI agents can collaborate using &lt;strong&gt;shared contexts&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MultiAgentMCP&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contexts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;update_agent_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contexts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contexts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contexts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contexts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  &lt;span class="c1"&gt;# Limit memory
&lt;/span&gt;            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contexts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;pop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_agent_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;contexts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[])&lt;/span&gt;

&lt;span class="c1"&gt;# Example Usage
&lt;/span&gt;&lt;span class="n"&gt;mcp_agents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MultiAgentMCP&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;mcp_agents&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update_agent_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Analyzing data patterns.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;mcp_agents&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update_agent_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Generating a summary.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mcp_agents&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_agent_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;  &lt;span class="c1"&gt;# Output: Context history for Agent1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  &lt;strong&gt;Advanced MCP: Using Vector Databases for Context Storage&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Instead of storing plain text, a &lt;strong&gt;vector database&lt;/strong&gt; like Pinecone or FAISS can be used to store and retrieve context &lt;strong&gt;efficiently&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Example: Using FAISS for Context Retrieval&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;faiss&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;VectorizedContext&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dim&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;faiss&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;IndexFlatL2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dim&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;float32&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context_data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context_data&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;retrieve_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;indices&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;float32&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context_data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;idx&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;idx&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;indices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;idx&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;context_data&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Example Usage
&lt;/span&gt;&lt;span class="n"&gt;context_manager&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;VectorizedContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dim&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;context_manager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;rand&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;  &lt;span class="c1"&gt;# Adding a random vector as context
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context_manager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;retrieve_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;rand&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;  &lt;span class="c1"&gt;# Retrieve similar contexts
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  &lt;strong&gt;Future of Model Context Protocol&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Integration with Blockchain&lt;/strong&gt; – Ensuring secure context storage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time Context Compression&lt;/strong&gt; – Optimizing large-scale AI models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Personalized AI Models&lt;/strong&gt; – Tailoring responses based on user behavior.&lt;/li&gt;
&lt;/ul&gt;
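
&lt;p&gt;As a rough illustration of the &lt;strong&gt;real-time context compression&lt;/strong&gt; idea above, older messages can be folded into a compact rolling summary instead of being dropped outright, so the context window stays small. This is a hypothetical sketch in plain Python, not part of any MCP specification; the naive &lt;code&gt;_summarize&lt;/code&gt; helper is a stand-in for a real summarizer such as an LLM call.&lt;/p&gt;

```python
class CompressedContext:
    """Keep the last few messages verbatim and fold older ones into a
    rolling summary string (illustrative sketch, not an MCP standard)."""

    def __init__(self, max_recent=3):
        self.max_recent = max_recent
        self.recent = []   # newest messages, kept verbatim
        self.summary = ""  # compressed form of everything older

    def _summarize(self, message):
        # Stand-in for a real summarizer (e.g. an LLM call): keep only
        # the first four words of each evicted message.
        return " ".join(message.split()[:4])

    def update(self, message):
        self.recent.append(message)
        # Evict everything beyond the newest max_recent messages and
        # fold it into the rolling summary.
        overflow = self.recent[:-self.max_recent]
        self.recent = self.recent[-self.max_recent:]
        for evicted in overflow:
            self.summary = (self.summary + " | " + self._summarize(evicted)).strip(" |")

    def get_context(self):
        parts = ["[summary] " + self.summary] if self.summary else []
        return parts + self.recent


# Example Usage
ctx = CompressedContext(max_recent=2)
for msg in ["Hello, AI!", "Can you help me with Python?", "How do I install NumPy?"]:
    ctx.update(msg)
print(ctx.get_context())
```

&lt;p&gt;Compared with the plain &lt;code&gt;pop(0)&lt;/code&gt; eviction used earlier, the trade-off is that old information is kept in lossy, compressed form rather than discarded entirely.&lt;/p&gt;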




&lt;h2&gt;
  
  
&lt;strong&gt;Model Context Protocol (MCP): Final Words&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Model Context Protocol is transforming AI by improving how models &lt;strong&gt;understand, store, and retrieve context dynamically&lt;/strong&gt;. Whether for &lt;strong&gt;chatbots, LLMs, multi-agent AI, or vectorized knowledge bases&lt;/strong&gt;, MCP enhances efficiency and responsiveness. &lt;/p&gt;

</description>
      <category>ai</category>
      <category>modelcontextprotocol</category>
      <category>mcp</category>
      <category>llm</category>
    </item>
    <item>
      <title>Openai Agents SDK, Responses Api Tutorial</title>
      <dc:creator>mehmet akar</dc:creator>
      <pubDate>Wed, 12 Mar 2025 07:28:14 +0000</pubDate>
      <link>https://forem.com/mehmetakar/openai-agents-sdk-responses-api-tutorial-o8d</link>
      <guid>https://forem.com/mehmetakar/openai-agents-sdk-responses-api-tutorial-o8d</guid>
<description>&lt;p&gt;The OpenAI Agents SDK and the Responses API are new tools from OpenAI. I want to talk about &lt;strong&gt;OpenAI's New Tools for Building Intelligent Agents&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Openai Agents SDK, Responses API&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;OpenAI has introduced a suite of new tools aimed at empowering developers and enterprises to build more sophisticated and reliable AI agents. These enhancements mark a significant step in making AI-driven applications more adaptable, efficient, and accessible.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Openai Responses API&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Responses API&lt;/strong&gt; is OpenAI’s new API primitive for leveraging built-in tools to build agents. It combines the simplicity of Chat Completions with the tool-use capabilities of the Assistants API. The API provides a more flexible foundation for developers by enabling multi-tool use and multiple model turns within a single API call.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supports &lt;strong&gt;web search, file search, and computer use&lt;/strong&gt; as built-in tools.&lt;/li&gt;
&lt;li&gt;Offers &lt;strong&gt;a unified item-based design&lt;/strong&gt; for easier management.&lt;/li&gt;
&lt;li&gt;Allows &lt;strong&gt;intuitive streaming events&lt;/strong&gt; and simplified SDK access via &lt;code&gt;response.output_text&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Enables &lt;strong&gt;better data storage and evaluation capabilities&lt;/strong&gt; for agent performance analysis.&lt;/li&gt;
&lt;/ul&gt;
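
&lt;p&gt;The request shape implied by the list above can be sketched in Python as a plain payload dictionary. The field names (&lt;code&gt;model&lt;/code&gt;, &lt;code&gt;input&lt;/code&gt;, &lt;code&gt;tools&lt;/code&gt;) mirror the JavaScript example later in this article; treat the helper itself as an illustrative assumption, not an official client API.&lt;/p&gt;

```python
def build_responses_request(model, user_input, tools=None):
    """Assemble the JSON body for a single Responses API call.
    Field names follow the article's later JavaScript example and are
    assumptions here, not an authoritative schema."""
    request = {"model": model, "input": user_input}
    if tools:
        request["tools"] = tools
    return request


# Example Usage: one call that combines a built-in tool with plain input
request = build_responses_request(
    "gpt-4o",
    "What was a positive news story that happened today?",
    tools=[{"type": "web_search_preview"}],
)
print(request)

# With an official client, this body would be sent in one call and the
# reply read via the simplified accessor mentioned above, e.g.:
#   response = client.responses.create(**request)
#   print(response.output_text)
```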

&lt;p&gt;The &lt;strong&gt;Responses API is available today&lt;/strong&gt;, with standard pricing applying to tokens and tools used.&lt;/p&gt;

&lt;p&gt;OpenAI’s Responses API in action:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/vnSFhTiHlDU"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficz13z9vcknkwmoupqt5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficz13z9vcknkwmoupqt5.jpg" alt="Openai Agents SDK" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Openai Agents SDK&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A major highlight is the release of an &lt;strong&gt;advanced agent-building framework&lt;/strong&gt;, which simplifies the integration of AI into complex workflows. The new tools allow for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Seamless orchestration&lt;/strong&gt; of multiple AI models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory capabilities&lt;/strong&gt; for agents to retain information across interactions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better security and compliance&lt;/strong&gt; features to ensure enterprise-grade usage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-agent workflows&lt;/strong&gt; that allow developers to integrate agents that work collaboratively.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configurable handoffs&lt;/strong&gt; between AI agents based on task requirements.&lt;/li&gt;
&lt;/ul&gt;
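
&lt;p&gt;The &lt;strong&gt;configurable handoffs&lt;/strong&gt; idea above can be illustrated with a toy dispatcher that is independent of the real SDK: each agent declares which task types it handles, and a triage step hands the task off to the first match. All class and function names here are hypothetical and for illustration only.&lt;/p&gt;

```python
class ToyAgent:
    """Minimal stand-in for an agent: a name, the task types it
    handles, and a callable that does the work (illustrative only)."""

    def __init__(self, name, handles, run):
        self.name = name
        self.handles = set(handles)
        self.run = run


def triage(task_type, payload, agents, default):
    """Hand the task off to the first agent that handles its type,
    falling back to a generalist agent otherwise."""
    for agent in agents:
        if task_type in agent.handles:
            return agent.name, agent.run(payload)
    return default.name, default.run(payload)


# Example Usage: two specialists plus a generalist fallback
researcher = ToyAgent("Researcher", {"search"}, lambda p: f"found results for {p!r}")
summarizer = ToyAgent("Summarizer", {"summarize"}, lambda p: f"summary of {p!r}")
fallback = ToyAgent("Generalist", set(), lambda p: f"handled {p!r}")

name, result = triage("summarize", "meeting notes", [researcher, summarizer], fallback)
print(name, result)
```

&lt;p&gt;The real Agents SDK layers model calls, memory, and guardrails on top of this kind of routing; the sketch only shows the handoff decision itself.&lt;/p&gt;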

&lt;p&gt;For example, &lt;strong&gt;Coinbase&lt;/strong&gt; used the Agents SDK to prototype and deploy AgentKit, which enables AI agents to interact with &lt;strong&gt;crypto wallets&lt;/strong&gt; and other on-chain activities in just a few hours.&lt;/p&gt;

&lt;p&gt;OpenAI’s Agents SDK in action:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/H3mLQT2WeqI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;New Built-in Tools for Agents&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Web Search&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Developers have already been utilizing web search for a variety of use cases, including shopping assistants, research agents, and travel booking agents—any application that requires timely information from the web.&lt;/p&gt;

&lt;p&gt;One example is &lt;strong&gt;Hebbia&lt;/strong&gt;, which leverages the web search tool to help asset managers, private equity and credit firms, and law practices quickly extract actionable insights from extensive public and private datasets. By integrating real-time search capabilities into their research workflows, Hebbia delivers richer, context-specific market intelligence and continuously improves the precision and relevance of their analyses, outperforming current benchmarks.&lt;/p&gt;

&lt;p&gt;The web search tool in the API is powered by the same model used for ChatGPT search. On &lt;strong&gt;SimpleQA&lt;/strong&gt;, a benchmark that evaluates the accuracy of LLMs in answering short, factual questions, &lt;strong&gt;GPT‑4o search preview&lt;/strong&gt; and &lt;strong&gt;GPT‑4o mini search preview&lt;/strong&gt; score &lt;strong&gt;90% and 88%&lt;/strong&gt; respectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SimpleQA Accuracy Benchmark&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fynqu1bosnbp4entvf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fynqu1bosnbp4entvf5.png" alt="openai-response-api" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Responses generated with web search in the API include links to sources, such as news articles and blog posts, giving users a way to learn more. With these clear, inline citations, users can engage with information in a new way, while content owners gain new opportunities to reach a broader audience.&lt;/p&gt;

&lt;p&gt;Additionally, websites and publishers have the option to appear in web search results within the API, enhancing visibility for their content.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;web search tool&lt;/strong&gt; is available to all developers in preview in the &lt;strong&gt;Responses API&lt;/strong&gt;. Developers also have direct access to OpenAI’s fine-tuned search models via &lt;strong&gt;gpt-4o-search-preview&lt;/strong&gt; and &lt;strong&gt;gpt-4o-mini-search-preview&lt;/strong&gt; in the Chat Completions API. Pricing starts at &lt;strong&gt;$30 per 1,000 queries for GPT‑4o search&lt;/strong&gt; and &lt;strong&gt;$25 per 1,000 queries for GPT‑4o mini search&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides &lt;strong&gt;fast, up-to-date answers&lt;/strong&gt; with citations from the web.&lt;/li&gt;
&lt;li&gt;Available in &lt;strong&gt;GPT-4o&lt;/strong&gt; and &lt;strong&gt;GPT-4o-mini&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Enables &lt;strong&gt;shopping assistants, research agents, and travel booking bots&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;responses&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gpt-4o&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;web_search_preview&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;What was a positive news story that happened today?&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;output_text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;File Search&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Retrieves &lt;strong&gt;relevant information&lt;/strong&gt; from large document volumes.&lt;/li&gt;
&lt;li&gt;Supports &lt;strong&gt;query optimization, metadata filtering, and reranking&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Helps with &lt;strong&gt;customer support, legal research, and technical documentation queries&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;productDocs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vectorStores&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Product Documentation&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;file_ids&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;file1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;file2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;file3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;responses&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gpt-4o-mini&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;file_search&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;vector_store_ids&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;productDocs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;What is deep research by OpenAI?&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;output_text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;Computer Use Automation&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Developers can use the computer use tool to automate browser-based workflows like performing quality assurance on web apps or executing data-entry tasks across legacy systems.&lt;/p&gt;

&lt;p&gt;One example is &lt;strong&gt;Unify&lt;/strong&gt;, a system designed to streamline revenue growth through AI agents. By leveraging OpenAI’s computer use tool, Unify’s agents can access data that was previously &lt;strong&gt;unreachable via APIs&lt;/strong&gt;—such as verifying a business’s real estate expansion through &lt;strong&gt;online maps&lt;/strong&gt;. This analysis serves as a &lt;strong&gt;custom trigger&lt;/strong&gt; for personalized outreach, allowing go-to-market teams to engage buyers with greater accuracy and efficiency.&lt;/p&gt;

&lt;p&gt;Another example is &lt;strong&gt;Luminai&lt;/strong&gt;, which has integrated the computer use tool to &lt;strong&gt;automate operational workflows&lt;/strong&gt; for large enterprises struggling with legacy systems lacking API support. In a pilot project with a major community service organization, &lt;strong&gt;Luminai automated the application processing and user enrollment process in just days&lt;/strong&gt;, something traditional robotic process automation (RPA) systems had failed to achieve in months.&lt;/p&gt;

&lt;p&gt;Before launching &lt;strong&gt;Computer-Using Agent (CUA)&lt;/strong&gt; in Operator earlier this year, OpenAI conducted extensive &lt;strong&gt;safety testing and red teaming&lt;/strong&gt;, addressing three primary risks: &lt;strong&gt;misuse, model errors, and frontier risks&lt;/strong&gt;. With the introduction of CUA in the API, OpenAI has implemented additional safeguards, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Safety checks to guard against prompt injections.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Confirmation prompts for sensitive actions.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tools to help developers isolate execution environments.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Enhanced detection of potential policy violations.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Despite these mitigations, the model is still susceptible to &lt;strong&gt;errors, particularly in non-browser environments&lt;/strong&gt;. Benchmark tests show that &lt;strong&gt;CUA’s performance on OSWorld is 38.1%&lt;/strong&gt;, indicating the need for &lt;strong&gt;human oversight&lt;/strong&gt; in real-world automation scenarios.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benchmark Comparison for OSWorld, WebArena, and WebVoyager&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5t8l51wbaail44st57we.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5t8l51wbaail44st57we.png" alt="openai-agents-sdk-computer-use" width="774" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Starting today, the &lt;strong&gt;computer use tool&lt;/strong&gt; is available as a &lt;strong&gt;research preview&lt;/strong&gt; in the &lt;strong&gt;Responses API&lt;/strong&gt; for select developers in &lt;strong&gt;usage tiers 3-5&lt;/strong&gt;. Pricing is set at &lt;strong&gt;$3 per 1M input tokens&lt;/strong&gt; and &lt;strong&gt;$12 per 1M output tokens&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Developers can now build AI-powered tools that perform &lt;strong&gt;computer automation tasks&lt;/strong&gt;, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Capturing mouse and keyboard actions.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automating workflows across multiple software applications.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Enhancing enterprise productivity through AI-driven task execution.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;responses&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;computer-use-preview&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;computer_use_preview&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;display_width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;display_height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;768&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;browser&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="na"&gt;truncation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;auto&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;I'm looking for a new camera. Help me find the best one.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;output&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
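&lt;p&gt;The snippet above only issues the first request. In practice the model replies with &lt;strong&gt;computer_call&lt;/strong&gt; items describing UI actions that your code must execute locally before sending a screenshot back. The dispatch sketch below is a hedged illustration of that client-side step: the handler map and its string return values are hypothetical stand-ins for a real browser-automation layer such as Playwright, not part of the OpenAI SDK.&lt;/p&gt;

```javascript
// Hedged sketch of the client-side loop for the computer use tool.
// The Responses API returns `computer_call` items describing UI actions
// (click, type, scroll, ...); the developer executes each one locally,
// takes a screenshot, and sends it back as a `computer_call_output` item.
function executeComputerCall(call, handlers) {
  const action = call.action;
  const handler = handlers[action.type];
  if (!handler) {
    throw new Error(`Unsupported action type: ${action.type}`);
  }
  return handler(action);
}

// Hypothetical handlers; a real implementation would drive a browser
// and matter only for its side effects.
const handlers = {
  click: (a) => `clicked at (${a.x}, ${a.y})`,
  type: (a) => `typed "${a.text}"`,
  scroll: (a) => `scrolled by (${a.scroll_x}, ${a.scroll_y})`,
};

console.log(
  executeComputerCall({ action: { type: "click", x: 100, y: 200 } }, handlers)
); // prints "clicked at (100, 200)"
```

&lt;p&gt;After executing the action you would capture a screenshot and call &lt;strong&gt;responses.create&lt;/strong&gt; again with the screenshot attached as a &lt;strong&gt;computer_call_output&lt;/strong&gt; item, repeating until the model stops emitting &lt;strong&gt;computer_call&lt;/strong&gt; items.&lt;/p&gt;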



&lt;h3&gt;
  
  
  &lt;strong&gt;4. AI Safety and Governance Enhancements&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Recognizing the importance of responsible AI use, OpenAI has reinforced its safety measures with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Stronger bias detection algorithms.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;More transparent AI decision-making processes.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Advanced compliance tracking tools for regulatory requirements.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Additional safety evaluations&lt;/strong&gt; for automation-related risks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;OpenAI Agents SDK Tutorial&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;OpenAI has also provided new &lt;strong&gt;code examples&lt;/strong&gt; for integrating AI into applications efficiently:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: Using the Agents SDK to build AI agents&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Runner&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;WebSearchTool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;function_tool&lt;/span&gt;

&lt;span class="nd"&gt;@function_tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;submit_refund_request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;item_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;success&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;support_agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Support &amp;amp; Returns&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a support agent who can submit refunds&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;submit_refund_request&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;shopping_agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Shopping Assistant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a shopping assistant who can search the web&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;WebSearchTool&lt;/span&gt;&lt;span class="p"&gt;()],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;triage_agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Triage Agent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Route the user to the correct agent.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;handoffs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;shopping_agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;support_agent&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run_sync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;starting_agent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;triage_agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What shoes might work best with my outfit so far?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Deprecation of the Assistants API&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;OpenAI has announced that the &lt;strong&gt;Assistants API will be deprecated by mid-2026&lt;/strong&gt; in favor of the &lt;strong&gt;Responses API&lt;/strong&gt;. Developers are encouraged to migrate applications to the Responses API, which offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Full feature parity with Assistants API.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Support for Assistant-like and Thread-like objects.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Better flexibility, speed, and efficiency.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A detailed &lt;strong&gt;migration guide&lt;/strong&gt; will be provided by OpenAI closer to the sunset date.&lt;/p&gt;
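&lt;p&gt;To make the migration concrete, here is a hedged sketch that maps an Assistants-style configuration plus a thread's messages onto a single Responses API request body. The helper function and its input shape are illustrative assumptions, not an official migration tool; the field names on the output follow the Responses API.&lt;/p&gt;

```javascript
// Hedged sketch of an Assistants-to-Responses migration shim.
// Input shape and helper name are hypothetical; output field names
// follow the Responses API.
function toResponsesRequest(assistant, messages) {
  return {
    model: assistant.model,
    // The assistant's system prompt maps to top-level `instructions`.
    instructions: assistant.instructions,
    tools: assistant.tools || [],
    // Thread messages collapse into the `input` array of one request.
    input: messages.map((m) => ({ role: m.role, content: m.content })),
  };
}

const body = toResponsesRequest(
  { model: "gpt-4o-mini", instructions: "You are a helpful assistant." },
  [{ role: "user", content: "Hello!" }]
);
console.log(JSON.stringify(body));
```

&lt;p&gt;The resulting object can be passed to &lt;strong&gt;openai.responses.create&lt;/strong&gt;; multi-turn state that Threads used to hold can instead be carried forward with the &lt;strong&gt;previous_response_id&lt;/strong&gt; parameter.&lt;/p&gt;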

&lt;h2&gt;
  
  
  &lt;strong&gt;OpenAI Agents SDK and Responses API: Final Thoughts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;OpenAI continues to push the boundaries of what’s possible with artificial intelligence. With these new tools, businesses and developers can create &lt;strong&gt;smarter, safer, and more scalable&lt;/strong&gt; AI solutions. As these advancements roll out, the AI landscape is set to become more innovative and impactful than ever before.&lt;/p&gt;

&lt;p&gt;For more details, see OpenAI’s official announcement at openai.com/news.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>openaiagents</category>
      <category>openairesponsesapi</category>
    </item>
  </channel>
</rss>
