<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: PHP CMS Framework</title>
    <description>The latest articles on Forem by PHP CMS Framework (@albert_andrew_24878d43267).</description>
    <link>https://forem.com/albert_andrew_24878d43267</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3597255%2Fba50e1a7-05db-48de-a4b7-7d93eee79ac9.png</url>
      <title>Forem: PHP CMS Framework</title>
      <link>https://forem.com/albert_andrew_24878d43267</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/albert_andrew_24878d43267"/>
    <language>en</language>
    <item>
      <title>Magento 2 - tips and tricks every Developer should know</title>
      <dc:creator>PHP CMS Framework</dc:creator>
      <pubDate>Tue, 07 Apr 2026 04:25:57 +0000</pubDate>
      <link>https://forem.com/albert_andrew_24878d43267/magento-2-tips-and-tricks-every-developer-should-know-bo</link>
      <guid>https://forem.com/albert_andrew_24878d43267/magento-2-tips-and-tricks-every-developer-should-know-bo</guid>
      <description>&lt;p&gt;Magento 2 has a reputation for being complicated, and honestly that reputation is earned. The learning curve is steep, the documentation has gaps, and some of its architectural decisions only make sense once you have been burned by the alternative. I have been working with Magento 2 since its early releases and the tips in this post come directly from real projects, real mistakes, and things I wish someone had told me before I spent hours figuring them out the hard way.&lt;/p&gt;

&lt;p&gt;For more details - &lt;a href="https://www.phpcmsframework.com/2026/04/magento-2-tips-tricks-developers.html" rel="noopener noreferrer"&gt;https://www.phpcmsframework.com/2026/04/magento-2-tips-tricks-developers.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>php</category>
    </item>
    <item>
      <title>Laravel and Prism PHP: The Modern Way to Work with AI Models</title>
      <dc:creator>PHP CMS Framework</dc:creator>
      <pubDate>Sat, 04 Apr 2026 17:15:25 +0000</pubDate>
      <link>https://forem.com/albert_andrew_24878d43267/laravel-and-prism-php-the-modern-way-to-work-with-ai-models-9d3</link>
      <guid>https://forem.com/albert_andrew_24878d43267/laravel-and-prism-php-the-modern-way-to-work-with-ai-models-9d3</guid>
      <description>&lt;p&gt;Every Laravel project that needs AI ends up with a different implementation. One project uses the OpenAI PHP client directly. Another one uses a wrapper someone wrote three years ago that is no longer maintained. A third one is tightly coupled to a specific model, so switching from GPT-4o to Claude requires rewriting half the service layer.&lt;/p&gt;

&lt;p&gt;Prism PHP solves this properly. It is a Laravel package that gives you a single, consistent API for working with multiple AI providers: OpenAI, Anthropic Claude, Ollama for local models, Mistral, Gemini, and more, all through the same fluent interface. You switch providers by changing one line. Your application code does not care which model is behind it.&lt;/p&gt;

&lt;p&gt;This post covers the full picture. All the supported providers and when to use each one, text generation with structured output, tool calling so your AI can actually interact with your application, and embeddings for semantic search. I will tie all three together with a real-world example at the end so you can see how they work as a system rather than isolated features.&lt;/p&gt;

&lt;p&gt;What you need:&lt;/p&gt;

&lt;p&gt;Laravel 10 or 11&lt;br&gt;
PHP 8.1+&lt;br&gt;
Composer&lt;br&gt;
API keys for whichever providers you plan to use.&lt;br&gt;
Ollama requires a local install but is free to run.&lt;/p&gt;
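
&lt;p&gt;To make the one-line provider switch concrete, here is a rough sketch of the kind of call Prism's fluent interface encourages. Treat the exact namespaces and method names as indicative and check the Prism docs for your version:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;use Prism\Prism\Prism;
use Prism\Prism\Enums\Provider;

// Ask one provider...
$response = Prism::text()
    -&amp;gt;using(Provider::OpenAI, 'gpt-4o')
    -&amp;gt;withPrompt('Summarise this refund policy in two sentences.')
    -&amp;gt;asText();

// ...switching to Claude is just the using() line:
//     -&amp;gt;using(Provider::Anthropic, 'claude-3-5-sonnet-latest')

echo $response-&amp;gt;text;&lt;/code&gt;&lt;/pre&gt;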

&lt;p&gt;For more details - &lt;a href="https://www.phpcmsframework.com/2026/04/laravel-prism-php-ai-models.html" rel="noopener noreferrer"&gt;https://www.phpcmsframework.com/2026/04/laravel-prism-php-ai-models.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>laravel</category>
      <category>php</category>
    </item>
    <item>
      <title>Build a RAG Pipeline Inside Joomla for Intelligent Site Search</title>
      <dc:creator>PHP CMS Framework</dc:creator>
      <pubDate>Tue, 31 Mar 2026 18:12:08 +0000</pubDate>
      <link>https://forem.com/albert_andrew_24878d43267/build-a-rag-pipeline-inside-joomla-for-intelligent-site-search-le</link>
      <guid>https://forem.com/albert_andrew_24878d43267/build-a-rag-pipeline-inside-joomla-for-intelligent-site-search-le</guid>
      <description>&lt;p&gt;Joomla's built-in search has always had the same fundamental limitation. It is keyword-based. A visitor types "how do I reset my account" and the search engine looks for articles containing those exact words. If your article uses the phrase "recover your login credentials" instead, it does not show up. The visitor gets no results, concludes your site does not have the answer, and leaves.&lt;/p&gt;

&lt;p&gt;This is not a Joomla problem specifically. It is what keyword search does. It matches strings, not meaning. RAG (Retrieval-Augmented Generation) solves this at the architecture level. Instead of matching keywords, it converts both your content and the search query into vector embeddings, finds content that is semantically similar, and uses an LLM to generate a direct answer from that content. A visitor asking "how do I reset my account" gets a proper answer even if none of your articles use those exact words.&lt;/p&gt;

&lt;p&gt;I will walk through the full implementation. We will cover the three main vector storage options honestly so you can make the right choice for your setup, then go deep on building the complete RAG pipeline inside a custom Joomla component using PostgreSQL with pgvector and OpenAI.&lt;/p&gt;

&lt;p&gt;What you need: Joomla 4 or 5, PHP 8.1+, Composer, PostgreSQL with the pgvector extension installed, and an OpenAI API key.&lt;/p&gt;
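
&lt;p&gt;To give a feel for the retrieval step before the full walkthrough: with pgvector, finding the closest content is a single SQL query using the cosine-distance operator. A minimal sketch, with illustrative table and column names:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// pgvector's &amp;lt;=&amp;gt; operator is cosine distance: smaller means more similar
$sql = "SELECT id, title
        FROM article_embeddings
        ORDER BY embedding &amp;lt;=&amp;gt; :query_embedding
        LIMIT 5";&lt;/code&gt;&lt;/pre&gt;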

&lt;p&gt;For more details - &lt;a href="https://www.phpcmsframework.com/2026/03/joomla-rag-pipeline-intelligent-site-search.html" rel="noopener noreferrer"&gt;https://www.phpcmsframework.com/2026/03/joomla-rag-pipeline-intelligent-site-search.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>joomla</category>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Drupal and LangChain: Building Multi-Step AI Pipelines for Enterprise CMS</title>
      <dc:creator>PHP CMS Framework</dc:creator>
      <pubDate>Sat, 28 Mar 2026 13:52:47 +0000</pubDate>
      <link>https://forem.com/albert_andrew_24878d43267/drupal-and-langchain-building-multi-step-ai-pipelines-for-enterprise-cms-11l4</link>
      <guid>https://forem.com/albert_andrew_24878d43267/drupal-and-langchain-building-multi-step-ai-pipelines-for-enterprise-cms-11l4</guid>
      <description>&lt;p&gt;Enterprise content teams have a problem that does not get talked about enough. It is not producing content, most large organisations have plenty of that. The problem is what happens to content before it gets published. Review queues that stretch for days, moderation bottlenecks where one editor is the single point of failure, policy checks that get skipped under deadline pressure, and taxonomy tagging that is inconsistent across a team of twenty people all making their own judgment calls.&lt;/p&gt;

&lt;p&gt;I worked with an enterprise Drupal site last year that had over 3,000 pieces of content sitting in a moderation queue at any given time. Four editors, no automation, no triage. Good content was getting buried under low-quality submissions and the editors were spending most of their time on mechanical checks rather than actual editorial judgment.&lt;/p&gt;

&lt;p&gt;What they needed was a multi-step AI pipeline sitting between content submission and human review. Something that could screen content automatically, flag policy violations, suggest taxonomy terms, score quality, and route content to the right reviewer based on what it found. That is what this post is about.&lt;/p&gt;

&lt;p&gt;We will cover how LangChain fits into a Drupal architecture, the honest tradeoffs between the Python, JavaScript, and PHP approaches, and then go deep on building the full pipeline in PHP inside a custom Drupal module.&lt;/p&gt;
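
&lt;p&gt;Conceptually, the pipeline described above is a sequence of stages that each enrich a shared context. A plain-PHP sketch of that shape (the stage class names here are hypothetical, not LangChain's actual API):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Hypothetical stage classes, one per pipeline step
$stages = [
    new PolicyScreenStage(),      // flag policy violations
    new TaxonomySuggestStage(),   // suggest taxonomy terms
    new QualityScoreStage(),      // score content quality
    new ReviewerRoutingStage(),   // pick a reviewer from the accumulated results
];

$context = ['node' =&amp;gt; $node, 'results' =&amp;gt; []];

foreach ($stages as $stage) {
    $context = $stage-&amp;gt;run($context); // each stage adds its findings to results
}&lt;/code&gt;&lt;/pre&gt;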

&lt;p&gt;For more info - &lt;a href="https://www.phpcmsframework.com/2026/03/drupal-langchain-ai-pipeline-enterprise-cms.html" rel="noopener noreferrer"&gt;https://www.phpcmsframework.com/2026/03/drupal-langchain-ai-pipeline-enterprise-cms.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>drupal</category>
      <category>programming</category>
    </item>
    <item>
      <title>Build a WhatsApp AI Assistant Using Laravel, Twilio and OpenAI</title>
      <dc:creator>PHP CMS Framework</dc:creator>
      <pubDate>Wed, 25 Mar 2026 15:37:52 +0000</pubDate>
      <link>https://forem.com/albert_andrew_24878d43267/build-a-whatsapp-ai-assistant-using-laravel-twilio-and-openai-p0k</link>
      <guid>https://forem.com/albert_andrew_24878d43267/build-a-whatsapp-ai-assistant-using-laravel-twilio-and-openai-p0k</guid>
      <description>&lt;p&gt;A few months ago a client came to us with a pretty common problem. Their support team was spending most of the day answering the same twenty questions over and over. Shipping times, return policies, order status, payment methods. The questions were predictable. The answers were documented. But every single one still needed a human to respond.&lt;/p&gt;

&lt;p&gt;They were already using WhatsApp for customer communication, so the ask was simple: can we put something intelligent on that channel so the team can focus on the cases that actually need them? That is how we ended up building a WhatsApp AI assistant using Laravel, Twilio, and OpenAI, and it is exactly what this post covers.&lt;/p&gt;

&lt;p&gt;By the end you will have a working bot that receives WhatsApp messages through a Twilio webhook, maintains conversation memory per customer so context carries across messages, and uses OpenAI to generate replies that sound like a real support agent. The whole thing runs on standard Laravel, no exotic packages.&lt;/p&gt;
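
&lt;p&gt;The webhook side is smaller than it sounds. Twilio POSTs the sender in &lt;code&gt;From&lt;/code&gt; and the message text in &lt;code&gt;Body&lt;/code&gt;, and a TwiML response sends the reply back. A simplified sketch of the controller shape; &lt;code&gt;$this-&amp;gt;assistant-&amp;gt;answer()&lt;/code&gt; is a placeholder for the memory-plus-OpenAI service the post builds:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public function incoming(Request $request)
{
    $from = $request-&amp;gt;input('From'); // e.g. "whatsapp:+15551234567"
    $text = $request-&amp;gt;input('Body');

    // Placeholder: per-customer history lookup + OpenAI completion
    $reply = $this-&amp;gt;assistant-&amp;gt;answer($from, $text);

    $twiml = '&amp;lt;Response&amp;gt;&amp;lt;Message&amp;gt;'
        . htmlspecialchars($reply)
        . '&amp;lt;/Message&amp;gt;&amp;lt;/Response&amp;gt;';

    return response($twiml, 200)-&amp;gt;header('Content-Type', 'text/xml');
}&lt;/code&gt;&lt;/pre&gt;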

&lt;p&gt;For more details - &lt;a href="https://www.phpcmsframework.com/2026/03/whatsapp-ai-assistant-laravel-twilio-openai.html" rel="noopener noreferrer"&gt;https://www.phpcmsframework.com/2026/03/whatsapp-ai-assistant-laravel-twilio-openai.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>laravel</category>
      <category>php</category>
    </item>
    <item>
      <title>Build an AI Code Review Bot with Laravel — Real-World Use Case</title>
      <dc:creator>PHP CMS Framework</dc:creator>
      <pubDate>Sat, 21 Mar 2026 20:41:19 +0000</pubDate>
      <link>https://forem.com/albert_andrew_24878d43267/build-an-ai-code-review-bot-with-laravel-real-world-use-case-54cj</link>
      <guid>https://forem.com/albert_andrew_24878d43267/build-an-ai-code-review-bot-with-laravel-real-world-use-case-54cj</guid>
      <description>&lt;p&gt;
Let me tell you how this idea actually started. A few months back, our team was doing PR reviews and I kept writing the same comment over and over, something like "this will cause an N+1 issue, please use eager loading." Different developer, different PR, same problem. The third time in two weeks that I typed that comment, I thought: there has to be a smarter way to handle this first pass.
&lt;/p&gt;

&lt;p&gt;
That is what this is. Not some fancy AI product. Just a practical Laravel tool that takes a PHP code snippet, sends it to OpenAI, and gives back structured feedback before a human reviewer even opens the PR. The idea is simple: catch the obvious stuff automatically so your senior devs can spend their review time on things that actually need a human brain.
&lt;/p&gt;

&lt;p&gt;
I will walk through the full build. By the end you will have a working Laravel app that accepts code, returns severity-tagged issues, security flags, suggestions, and a quality score. We will also hook it up to a queue so the UI does not freeze waiting on the API.
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you need before starting:&lt;/strong&gt; Laravel 10 or 11, PHP 8.1+, Composer, and an OpenAI API key. That is it.&lt;/p&gt;

&lt;h2&gt;Why not PHPStan or CodeSniffer?&lt;/h2&gt;

&lt;p&gt;
Because they are rule-based. They catch what they have been told to catch, nothing more.
&lt;/p&gt;

&lt;p&gt;
PHPStan at max level is genuinely good. I use it. But here is the thing: some of the worst bugs in production do not violate a single linting rule. An N+1 query loop is syntactically perfect. A function that silently returns null on failure will not trigger any warning. A missing authorization check on a route will not show up in static analysis at all.
&lt;/p&gt;

&lt;p&gt;
An LLM understands context. It can look at code and say "this will fall apart under load" or "this validation will silently pass null." That is a different category of feedback altogether. Use both, they are not competing with each other.
&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What Gets Checked&lt;/th&gt;
&lt;th&gt;PHPStan / PHPCS&lt;/th&gt;
&lt;th&gt;AI Reviewer&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Syntax and type errors&lt;/td&gt;
&lt;td&gt;Strong&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Coding standards&lt;/td&gt;
&lt;td&gt;Strong&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;N+1 / query logic problems&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security patterns&lt;/td&gt;
&lt;td&gt;Partial&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Architecture suggestions&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Explains why something is wrong&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;How everything fits together&lt;/h2&gt;

&lt;p&gt;
Before touching any code, here is the flow:
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Developer submits PHP code via a form
        ↓
Laravel controller validates it
        ↓
CodeReviewService builds a structured prompt
        ↓
OpenAI GPT-4o analyses the code
        ↓
JSON response gets parsed
        ↓
Feedback renders back to the developer&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;
No complex abstractions, no unnecessary packages beyond the OpenAI client. The structure is clean enough that adding features later (review history storage, GitHub webhook triggers, Slack notifications) is straightforward.
&lt;/p&gt;

&lt;h2&gt;Step 1: Install Laravel and the OpenAI Package&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;composer create-project laravel/laravel ai-code-reviewer
cd ai-code-reviewer
composer require openai-php/laravel&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Publish the config file:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;php artisan vendor:publish --provider="OpenAI\Laravel\ServiceProvider"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then open your &lt;code&gt;.env&lt;/code&gt; and add your key:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;OPENAI_API_KEY=sk-your-key-here
OPENAI_ORGANIZATION=&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;
One thing I will say plainly. I have seen API keys committed to git repos more times than I would like. Double check that &lt;code&gt;.env&lt;/code&gt; is in your &lt;code&gt;.gitignore&lt;/code&gt; before anything else.
&lt;/p&gt;

&lt;h2&gt;Step 2: Create a Service - CodeReviewService&lt;/h2&gt;

&lt;p&gt;
Third-party API calls belong in a service class. Not in a controller, not in a model. This keeps things testable and means when you want to swap GPT-4o for a different model down the line, you change exactly one file.
&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;app/Services/CodeReviewService.php&lt;/code&gt; manually:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;?php

namespace App\Services;

use OpenAI\Laravel\Facades\OpenAI;

class CodeReviewService
{
    public function review(string $code): array
    {
        $response = OpenAI::chat()-&amp;gt;create([
            'model'       =&amp;gt; 'gpt-4o',
            'temperature' =&amp;gt; 0.3,
            'messages'    =&amp;gt; [
                [
                    'role'    =&amp;gt; 'system',
                    'content' =&amp;gt; 'You are a senior PHP developer and Laravel architect.
                                  Review PHP code and return feedback as valid JSON only.
                                  No markdown. No explanation outside the JSON object.',
                ],
                [
                    'role'    =&amp;gt; 'user',
                    'content' =&amp;gt; $this-&amp;gt;buildPrompt($code),
                ],
            ],
        ]);

        return $this-&amp;gt;parse($response-&amp;gt;choices[0]-&amp;gt;message-&amp;gt;content);
    }

    private function buildPrompt(string $code): string
    {
        return &amp;lt;&amp;lt;&amp;lt;PROMPT
Review the PHP/Laravel code below. Return a JSON object with these keys:

- "summary": 1-2 sentence overall assessment.
- "score": integer 1 to 10 for code quality.
- "issues": array of objects with:
    - "severity": "critical", "warning", or "info"
    - "line_hint": function name or rough location
    - "message": clear explanation of the problem
- "suggestions": array of improvement suggestions as strings.
- "security_flags": array of security concerns, or empty array.

Code:

```php
{$code}
```
PROMPT;
    }

    private function parse(string $raw): array
    {
        $clean = preg_replace('/^```json\s*/i', '', trim($raw));
        $clean = preg_replace('/```$/', '', trim($clean));

        $data = json_decode(trim($clean), true);

        if (json_last_error() !== JSON_ERROR_NONE) {
            return [
                'summary'        =&amp;gt; 'Response could not be parsed. Try submitting again.',
                'score'          =&amp;gt; null,
                'issues'         =&amp;gt; [],
                'suggestions'    =&amp;gt; [],
                'security_flags' =&amp;gt; [],
            ];
        }

        return $data;
    }
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;
The &lt;code&gt;temperature: 0.3&lt;/code&gt; is intentional. Lower temperature means less randomness, the model stays focused and gives consistent output. For creative writing you would push that higher. For structured technical analysis, you want predictable not creative.
&lt;/p&gt;

&lt;p&gt;
Also notice the parse method strips markdown fences. GPT-4o usually returns clean JSON when you ask for it, but it occasionally wraps the output in backtick fences anyway. This handles that without breaking anything.
&lt;/p&gt;

&lt;h2&gt;Step 3: Controller and Routes&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;php artisan make:controller CodeReviewController&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Services\CodeReviewService;

class CodeReviewController extends Controller
{
    public function __construct(
        private CodeReviewService $reviewService
    ) {}

    public function index()
    {
        return view('code-review.index');
    }

    public function review(Request $request)
    {
        $request-&amp;gt;validate([
            'code' =&amp;gt; 'required|string|min:10|max:5000',
        ]);

        $feedback = $this-&amp;gt;reviewService-&amp;gt;review($request-&amp;gt;input('code'));

        return view('code-review.result', compact('feedback'));
    }
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Add the routes in &lt;code&gt;routes/web.php&lt;/code&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;use App\Http\Controllers\CodeReviewController;

Route::get('/code-review', [CodeReviewController::class, 'index'])
    -&amp;gt;name('code-review.index');

Route::post('/code-review', [CodeReviewController::class, 'review'])
    -&amp;gt;name('code-review.review');&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Step 4: Blade Views&lt;/h2&gt;

&lt;p&gt;
Keeping these minimal. The styling comes from your existing setup, no need to add anything extra here.
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;resources/views/code-review/index.blade.php&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html lang="en"&amp;gt;
&amp;lt;head&amp;gt;
    &amp;lt;meta charset="UTF-8"&amp;gt;
    &amp;lt;title&amp;gt;AI Code Reviewer&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;

&amp;lt;h1&amp;gt;AI Code Reviewer&amp;lt;/h1&amp;gt;
&amp;lt;p&amp;gt;Paste PHP or Laravel code below and get structured feedback instantly.&amp;lt;/p&amp;gt;

&amp;lt;form method="POST" action="{{ route('code-review.review') }}"&amp;gt;
    @csrf
    &amp;lt;textarea name="code" rows="15" cols="80"
              placeholder="Paste your PHP code here..."&amp;gt;{{ old('code') }}&amp;lt;/textarea&amp;gt;

    @error('code')
        &amp;lt;p&amp;gt;{{ $message }}&amp;lt;/p&amp;gt;
    @enderror

    &amp;lt;br&amp;gt;
    &amp;lt;button type="submit"&amp;gt;Review Code&amp;lt;/button&amp;gt;
&amp;lt;/form&amp;gt;

&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;resources/views/code-review/result.blade.php&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html lang="en"&amp;gt;
&amp;lt;head&amp;gt;
    &amp;lt;meta charset="UTF-8"&amp;gt;
    &amp;lt;title&amp;gt;Review Result&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;

&amp;lt;h1&amp;gt;Code Review Result&amp;lt;/h1&amp;gt;

&amp;lt;p&amp;gt;{{ $feedback['summary'] ?? '' }}&amp;lt;/p&amp;gt;

@isset($feedback['score'])
    &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Quality Score: {{ $feedback['score'] }} / 10&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
@endisset

@if(!empty($feedback['issues']))
    &amp;lt;h2&amp;gt;Issues Found&amp;lt;/h2&amp;gt;
    @foreach($feedback['issues'] as $issue)
        &amp;lt;div&amp;gt;
            &amp;lt;strong&amp;gt;[{{ strtoupper($issue['severity']) }}]&amp;lt;/strong&amp;gt;
            @if(!empty($issue['line_hint']))
                , {{ $issue['line_hint'] }}
            @endif
            &amp;lt;p&amp;gt;{{ $issue['message'] }}&amp;lt;/p&amp;gt;
        &amp;lt;/div&amp;gt;
        &amp;lt;hr&amp;gt;
    @endforeach
@else
    &amp;lt;p&amp;gt;No major issues found.&amp;lt;/p&amp;gt;
@endif

@if(!empty($feedback['security_flags']))
    &amp;lt;h2&amp;gt;Security Flags&amp;lt;/h2&amp;gt;
    &amp;lt;ul&amp;gt;
        @foreach($feedback['security_flags'] as $flag)
            &amp;lt;li&amp;gt;{{ $flag }}&amp;lt;/li&amp;gt;
        @endforeach
    &amp;lt;/ul&amp;gt;
@endif

@if(!empty($feedback['suggestions']))
    &amp;lt;h2&amp;gt;Suggestions&amp;lt;/h2&amp;gt;
    &amp;lt;ul&amp;gt;
        @foreach($feedback['suggestions'] as $s)
            &amp;lt;li&amp;gt;{{ $s }}&amp;lt;/li&amp;gt;
        @endforeach
    &amp;lt;/ul&amp;gt;
@endif

&amp;lt;p&amp;gt;&amp;lt;a href="{{ route('code-review.index') }}"&amp;gt;Review another snippet&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;

&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Step 5: Queue the API Call, Don't Block the UI&lt;/h2&gt;

&lt;p&gt;
GPT-4o usually responds in 2 to 4 seconds for short snippets, sometimes longer. That is not great for a synchronous web request, and on some server configs it will hit a timeout before the response comes back. For any production setup, queue it.
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;php artisan make:job ProcessCodeReview&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;?php

namespace App\Jobs;

use App\Services\CodeReviewService;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Cache;

class ProcessCodeReview implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $timeout = 60;
    public int $tries   = 2;

    public function __construct(
        private string $code,
        private string $cacheKey
    ) {}

    public function handle(CodeReviewService $service): void
    {
        $result = $service-&amp;gt;review($this-&amp;gt;code);
        Cache::put($this-&amp;gt;cacheKey, $result, now()-&amp;gt;addMinutes(10));
    }
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Update the controller to dispatch the job and add a polling method:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public function review(Request $request)
{
    $request-&amp;gt;validate(['code' =&amp;gt; 'required|string|min:10|max:5000']);

    $key = 'review_' . md5($request-&amp;gt;input('code') . uniqid());

    ProcessCodeReview::dispatch($request-&amp;gt;input('code'), $key);

    return view('code-review.waiting', ['cacheKey' =&amp;gt; $key]);
}

public function poll(string $key)
{
    $feedback = Cache::get($key);

    if (!$feedback) {
        return response()-&amp;gt;json(['status' =&amp;gt; 'pending']);
    }

    return response()-&amp;gt;json(['status' =&amp;gt; 'done', 'feedback' =&amp;gt; $feedback]);
}&lt;/code&gt;&lt;/pre&gt;
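
&lt;p&gt;The &lt;code&gt;poll&lt;/code&gt; method needs a route as well. Something like this in &lt;code&gt;routes/web.php&lt;/code&gt; (the URI and route name here are my choice, adjust to taste):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Route::get('/code-review/poll/{key}', [CodeReviewController::class, 'poll'])
    -&amp;gt;name('code-review.poll');&lt;/code&gt;&lt;/pre&gt;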

&lt;p&gt;
For local development, set &lt;code&gt;QUEUE_CONNECTION=sync&lt;/code&gt; in your &lt;code&gt;.env&lt;/code&gt; and jobs will run immediately without needing a worker. In production use &lt;code&gt;redis&lt;/code&gt; or &lt;code&gt;database&lt;/code&gt;.
&lt;/p&gt;

&lt;h2&gt;What the Bot Actually Catches: A Real Example&lt;/h2&gt;

&lt;p&gt;
Here is a piece of code I have seen in various forms across different projects. It works. On a test database with ten orders nobody notices anything wrong with it.
&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;?php

public function getUserOrders($userId)
{
    $orders = DB::table('orders')-&amp;gt;where('user_id', $userId)-&amp;gt;get();

    foreach ($orders as $order) {
        $items = DB::table('order_items')-&amp;gt;where('order_id', $order-&amp;gt;id)-&amp;gt;get();
        echo $order-&amp;gt;id . ': ' . count($items) . ' items&amp;lt;br&amp;gt;';
    }
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;
Put this in front of a customer with 400 orders and watch what happens to your database.
&lt;/p&gt;

&lt;p&gt;Paste that into the reviewer and here is what comes back:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "summary": "Code is functional but has a critical N+1 query problem and bypasses Eloquent entirely. Direct output with echo also breaks MVC separation and makes this code untestable.",
  "score": 3,
  "issues": [
    {
      "severity": "critical",
      "line_hint": "foreach loop, DB::table order_items",
      "message": "N+1 query problem. One database query fires per order inside the loop. With 400 orders that becomes 401 queries. Use Eloquent with eager loading: Order::with('items')-&amp;gt;where('user_id', $userId)-&amp;gt;get()"
    },
    {
      "severity": "warning",
      "line_hint": "DB::table()",
      "message": "Raw query builder bypasses Eloquent model logic, accessors, and relationships. Switching to Eloquent models makes the code significantly easier to maintain and test."
    },
    {
      "severity": "info",
      "line_hint": "echo statement",
      "message": "Direct output inside a controller or service method violates MVC. Return structured data and handle rendering in the view layer."
    }
  ],
  "suggestions": [
    "Define a hasMany relationship on Order pointing to OrderItem.",
    "Replace DB::table calls with Order::with('items')-&amp;gt;where('user_id', $userId)-&amp;gt;get()",
    "Return a collection and let Blade handle the output, do not echo from service methods."
  ],
  "security_flags": [
    "$userId passes into a query with no type check or validation. Confirm this is an authenticated, validated integer before it reaches any DB call."
  ]
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;
Score of 3, one critical issue, one warning, one info note, and a security flag. All accurate, all actionable. That took under four seconds and it is exactly the kind of feedback that usually takes a few minutes of a senior developer's time to write out properly.
&lt;/p&gt;

&lt;h2&gt;Where This Fits in an Actual Workflow&lt;/h2&gt;

&lt;p&gt;
I want to be direct about this because I have seen people set up tools like this and then either over-rely on them or drop them after two weeks. The right use here is as a first-pass gate, not a replacement for peer review.
&lt;/p&gt;

&lt;p&gt;
The workflow that actually makes sense: developer opens a PR, the bot triggers via a GitHub webhook, posts its feedback as a comment on the PR, and the human reviewer knows the basics have already been handled. They skip straight to the parts that need real judgment: design decisions, edge cases, and whether the approach fits the broader architecture.
&lt;/p&gt;

&lt;p&gt;
That is where this earns its place. Not by replacing review. By removing the repetitive first ten minutes of it.
&lt;/p&gt;

&lt;h2&gt;A Few Things to Know Before Building This&lt;/h2&gt;

&lt;p&gt;
The prompt structure matters more than anything else in this whole build. Early versions I tried came back as freeform text, which is hard to work with in a UI. Asking the model to return only JSON with field names you define upfront makes parsing reliable every time. Do not skip that part.
&lt;/p&gt;

&lt;p&gt;
GPT-4o is noticeably better than GPT-3.5 for this kind of task, not just in accuracy but in how it explains problems. "Use eager loading" is less useful than "this fires one query per iteration, here is the exact fix." The difference in API cost is worth it if you are using this on a real codebase.
&lt;/p&gt;

&lt;p&gt;
One more thing. Do not feed entire files in at once, at least not to start. Keep the input focused: a single method, one class, a specific feature. Smaller focused reviews produce better feedback. You can extend the input limit later once you are happy with the output quality.
&lt;/p&gt;

&lt;p&gt;
From here the natural extensions to build are a GitHub webhook integration to trigger reviews on every PR automatically, a review history table to track quality trends over time, custom system prompts per project so the bot reviews against your team's conventions specifically, and Slack notifications when a review completes. None of that is complicated to add on top of what we have built here.
&lt;/p&gt;

&lt;p&gt;
If you found this useful, drop a comment below.
&lt;/p&gt;

</description>
      <category>ai</category>
      <category>codereview</category>
      <category>laravel</category>
      <category>php</category>
    </item>
    <item>
      <title>Building a RAG System in Laravel from Scratch</title>
      <dc:creator>PHP CMS Framework</dc:creator>
      <pubDate>Thu, 19 Mar 2026 06:45:29 +0000</pubDate>
      <link>https://forem.com/albert_andrew_24878d43267/building-a-rag-system-in-laravel-from-scratch-3ko0</link>
      <guid>https://forem.com/albert_andrew_24878d43267/building-a-rag-system-in-laravel-from-scratch-3ko0</guid>
      <description>&lt;p&gt;Most RAG tutorials start with "first, sign up for Pinecone." I'm going to skip that entirely. For the majority of Laravel applications, a dedicated vector database is overkill. You already have MySQL. You already have Laravel's queue system. That's enough to build a fully functional retrieval augmented generation pipeline that works well into the tens of thousands of documents.&lt;/p&gt;

&lt;p&gt;RAG solves a specific problem. LLMs are trained on general data up to a cutoff date. They know nothing about your application's content, your internal docs, your product knowledge base, or anything else specific to your domain. RAG fixes this by retrieving relevant content from your own data and injecting it into the prompt as context before asking the model to answer. The model stops guessing and starts answering based on what you actually have.&lt;/p&gt;

&lt;p&gt;Here is how to build it properly in Laravel.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Are Building
&lt;/h2&gt;

&lt;p&gt;A pipeline that does four things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Accepts documents (articles, pages, PDFs, anything text-based) and stores them with their embeddings&lt;/li&gt;
&lt;li&gt;  When a user asks a question, converts that question into an embedding&lt;/li&gt;
&lt;li&gt;  Finds the most semantically similar documents using cosine similarity against your stored embeddings&lt;/li&gt;
&lt;li&gt;  Feeds those documents as context to GPT and returns a grounded answer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No external services beyond OpenAI. No Docker containers for a vector DB. Just Laravel, MySQL, and two API calls per query.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  Laravel 10 or 11&lt;/li&gt;
&lt;li&gt;  PHP 8.1+&lt;/li&gt;
&lt;li&gt;  MySQL 8.0+&lt;/li&gt;
&lt;li&gt;  OpenAI API key&lt;/li&gt;
&lt;li&gt;  Guzzle (ships with Laravel)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: The Documents Table
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`php artisan make:migration create_documents_table`


`public function up(): void         {             Schema::create('documents', function (Blueprint $table) {                 $table-&amp;gt;id();                 $table-&amp;gt;string('title');                 $table-&amp;gt;longText('content');                 $table-&amp;gt;longText('embedding')-&amp;gt;nullable(); // JSON float array                 $table-&amp;gt;string('source')-&amp;gt;nullable(); // URL, filename, etc.                 $table-&amp;gt;timestamps();             });         }`


`php artisan migrate`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The embedding column stores a JSON-encoded array of 1536 floats (for text-embedding-3-small). Yes, it's a text column, not a native vector type. MySQL 9 adds a native VECTOR type, but for now JSON in a longText column works fine for most use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: The Document Model
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`php artisan make:model Document`


`namespace App\Models;          use Illuminate\Database\Eloquent\Model;          class Document extends Model         {             protected $fillable = ['title', 'content', 'embedding', 'source'];              protected $casts = [                 'embedding' =&amp;gt; 'array',             ];         }`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;strong&gt;embedding&lt;/strong&gt; cast handles the JSON encoding and decoding automatically. When you set &lt;strong&gt;$document-&amp;gt;embedding = $vectorArray&lt;/strong&gt;, Laravel serializes it. When you read it back, you get a PHP array of floats.&lt;/p&gt;
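&lt;p&gt;Under the hood, that cast is just JSON encoding on write and decoding on read. A plain-PHP sketch of the round trip (illustration only, not Laravel code):&lt;/p&gt;

```php
<?php
// What the 'array' cast does, roughly: JSON-encode the float array
// on write, decode it back into a PHP array on read.
$vector  = [0.0123, -0.4567, 0.891];    // tiny stand-in for 1536 floats
$stored  = json_encode($vector);        // what lands in the longText column
$decoded = json_decode($stored, true);  // what $document->embedding returns

assert($decoded === $vector);
```

With PHP's default serialize_precision of -1, floats survive this round trip exactly.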

&lt;h2&gt;
  
  
  Step 3: The Embedding Service
&lt;/h2&gt;

&lt;p&gt;Keep all OpenAI communication in one place. This makes it easy to swap providers later.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`php artisan make:service EmbeddingService`


`namespace App\Services;          use Illuminate\Support\Facades\Http;          class EmbeddingService         {             private string $apiKey;             private string $model = 'text-embedding-3-small';              public function __construct()             {                 $this-&amp;gt;apiKey = config('services.openai.key');             }              public function embed(string $text): array             {                 // Trim to ~8000 tokens to stay within model limits                 $text = mb_substr(strip_tags($text), 0, 32000);                  $response = Http::withToken($this-&amp;gt;apiKey)                     -&amp;gt;post('https://api.openai.com/v1/embeddings', [                         'model' =&amp;gt; $this-&amp;gt;model,                         'input' =&amp;gt; $text,                     ]);                  if ($response-&amp;gt;failed()) {                     throw new \RuntimeException('OpenAI embedding request failed: ' . $response-&amp;gt;body());                 }                  return $response-&amp;gt;json('data.0.embedding');             }              public function cosineSimilarity(array $a, array $b): float             {                 $dot = 0.0;                 $magA = 0.0;                 $magB = 0.0;                  foreach ($a as $i =&amp;gt; $val) {                     $dot += $val * $b[$i];                     $magA += $val ** 2;                     $magB += $b[$i] ** 2;                 }                  $denominator = sqrt($magA) * sqrt($magB);                  return $denominator &amp;gt; 0 ? $dot / $denominator : 0.0;             }         }`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Register the API key in &lt;strong&gt;config/services.php&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`'openai' =&amp;gt; [             'key' =&amp;gt; env('OPENAI_API_KEY'),         ],`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
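&lt;p&gt;And add the key to your &lt;strong&gt;.env&lt;/strong&gt; file (the value below is a placeholder):&lt;/p&gt;

```shell
OPENAI_API_KEY=sk-your-key-here
```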
&lt;h2&gt;
  
  
  Step 4: Indexing Documents
&lt;/h2&gt;

&lt;p&gt;A command to process documents and store their embeddings. You run this once on existing content, then hook it into your document creation flow going forward.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`php artisan make:command IndexDocuments`


`namespace App\Console\Commands;          use App\Models\Document;         use App\Services\EmbeddingService;         use Illuminate\Console\Command;          class IndexDocuments extends Command         {             protected $signature = 'rag:index {--fresh : Re-index all documents}';             protected $description = 'Generate and store embeddings for all documents';              public function handle(EmbeddingService $embedder): int             {                 $query = Document::query();                  if (!$this-&amp;gt;option('fresh')) {                     $query-&amp;gt;whereNull('embedding');                 }                  $documents = $query-&amp;gt;get();                 $bar = $this-&amp;gt;output-&amp;gt;createProgressBar($documents-&amp;gt;count());                  foreach ($documents as $doc) {                     try {                         $doc-&amp;gt;embedding = $embedder-&amp;gt;embed($doc-&amp;gt;title . "\n\n" . $doc-&amp;gt;content);                         $doc-&amp;gt;save();                         $bar-&amp;gt;advance();                     } catch (\Exception $e) {                         $this-&amp;gt;error("Failed on document {$doc-&amp;gt;id}: " . $e-&amp;gt;getMessage());                     }                      // Respect OpenAI rate limits                     usleep(200000); // 200ms between requests                 }                  $bar-&amp;gt;finish();                 $this-&amp;gt;newLine();                 $this-&amp;gt;info('Indexing complete.');                  return self::SUCCESS;             }         }`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`php artisan rag:index`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notice I'm concatenating title and content before embedding. The title carries a lot of semantic weight and including it improves retrieval accuracy noticeably.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: The Retrieval Logic
&lt;/h2&gt;

&lt;p&gt;This is the core of RAG. Given a query, find the most relevant documents.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`namespace App\Services;          use App\Models\Document;          class RetrievalService         {             public function __construct(private EmbeddingService $embedder) {}              public function retrieve(string $query, int $topK = 5, float $threshold = 0.75): array             {                 $queryVector = $this-&amp;gt;embedder-&amp;gt;embed($query);                 $documents = Document::whereNotNull('embedding')-&amp;gt;get();                  $scored = $documents-&amp;gt;map(function (Document $doc) use ($queryVector) {                     return [                         'document' =&amp;gt; $doc,                         'score' =&amp;gt; $this-&amp;gt;embedder-&amp;gt;cosineSimilarity($queryVector, $doc-&amp;gt;embedding),                     ];                 })                 -&amp;gt;filter(fn($item) =&amp;gt; $item['score'] &amp;gt;= $threshold)                 -&amp;gt;sortByDesc('score')                 -&amp;gt;take($topK)                 -&amp;gt;values();                  return $scored-&amp;gt;toArray();             }         }`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;strong&gt;$threshold&lt;/strong&gt; of 0.75 filters out loosely related documents. You may need to tune it for your content: lower it if you're getting no results, raise it if you're getting irrelevant ones. Anywhere between 0.70 and 0.85 is usually sensible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: The RAG Query Service
&lt;/h2&gt;

&lt;p&gt;This ties retrieval and generation together.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`namespace App\Services;          use Illuminate\Support\Facades\Http;          class RagService         {             public function __construct(                 private RetrievalService $retriever,                 private string $apiKey             ) {                 $this-&amp;gt;apiKey = config('services.openai.key');             }              public function ask(string $question): array             {                 // Step 1: Retrieve relevant documents                 $results = $this-&amp;gt;retriever-&amp;gt;retrieve($question, topK: 4);                  if (empty($results)) {                     return [                         'answer' =&amp;gt; 'I could not find relevant information to answer this question.',                         'sources' =&amp;gt; [],                     ];                 }                  // Step 2: Build context from retrieved docs                 $context = collect($results)                     -&amp;gt;map(fn($r) =&amp;gt; "### {$r['document']-&amp;gt;title}\n{$r['document']-&amp;gt;content}")                     -&amp;gt;join("\n\n---\n\n");                  // Step 3: Send to GPT with context                 $response = Http::withToken($this-&amp;gt;apiKey)                     -&amp;gt;post('https://api.openai.com/v1/chat/completions', [                         'model' =&amp;gt; 'gpt-4o-mini',                         'temperature' =&amp;gt; 0.2,                         'messages' =&amp;gt; [                             [                                 'role' =&amp;gt; 'system',                                 'content' =&amp;gt; "You are a helpful assistant. Answer questions using only the context provided below. If the answer is not in the context, say so clearly. 
Do not make up information.\n\nContext:\n{$context}"                             ],                             [                                 'role' =&amp;gt; 'user',                                 'content' =&amp;gt; $question,                             ]                         ],                     ]);                  return [                     'answer' =&amp;gt; $response-&amp;gt;json('choices.0.message.content'),                     'sources' =&amp;gt; collect($results)-&amp;gt;map(fn($r) =&amp;gt; [                         'title' =&amp;gt; $r['document']-&amp;gt;title,                         'source' =&amp;gt; $r['document']-&amp;gt;source,                         'score' =&amp;gt; round($r['score'], 3),                     ])-&amp;gt;toArray(),                 ];             }         }`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Two things worth noting here. Temperature is set to 0.2, not the default 0.7. You want deterministic, factual answers when doing RAG, not creative ones. And the system prompt explicitly tells the model to stay within the provided context and admit when it doesn't know. Without that instruction, GPT will hallucinate rather than say "I don't have that information."&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 7: The Controller
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`php artisan make:controller RagController`


`namespace App\Http\Controllers;          use App\Services\RagService;         use Illuminate\Http\Request;          class RagController extends Controller         {             public function __construct(private RagService $rag) {}              public function ask(Request $request)             {                 $request-&amp;gt;validate(['question' =&amp;gt; 'required|string|max:500']);                  $result = $this-&amp;gt;rag-&amp;gt;ask($request-&amp;gt;input('question'));                  return response()-&amp;gt;json($result);             }         }`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Register the route in &lt;strong&gt;routes/api.php&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`Route::post('/ask', [RagController::class, 'ask']);`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Step 8: Test It
&lt;/h2&gt;

&lt;p&gt;Seed a couple of documents first:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`Document::create([             'title' =&amp;gt; 'Laravel Queue Configuration',             'content' =&amp;gt; 'Laravel queues allow you to defer time-consuming tasks...',             'source' =&amp;gt; 'https://laravel.com/docs/queues',         ]);`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run the indexer:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`php artisan rag:index`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then hit the endpoint:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`curl -X POST http://your-app.test/api/ask \             -H "Content-Type: application/json" \             -d '{"question": "How do I configure Laravel queues?"}'`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Response:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&lt;code&gt;{             "answer": "Laravel queues are configured via the config/queue.php file...",             "sources": [                 {                 "title": "Laravel Queue Configuration",                 "source": "https://laravel.com/docs/queues",                 "score": 0.891                 }             ]         }&lt;/code&gt;&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Where This Falls Down at Scale
&lt;/h2&gt;

&lt;p&gt;This setup works well up to roughly 50,000 documents. Beyond that, loading all embeddings into memory for comparison becomes a problem. At that point your options are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Add a MySQL generated column + raw SQL dot product approximation to filter candidates before full cosine comparison&lt;/li&gt;
&lt;li&gt;  Move to pgvector if you can switch to PostgreSQL, which handles this natively and efficiently&lt;/li&gt;
&lt;li&gt;  Then and only then consider Pinecone or Weaviate&lt;/li&gt;
&lt;/ul&gt;
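&lt;p&gt;Before reaching for any of those, the cheapest fix is to stop materializing the whole collection at once: stream the rows and keep only the K best scores in a bounded heap, so memory stays O(K) instead of O(N). A plain-PHP sketch (in Laravel the rows would come from lazyById() rather than an array; the cosine function mirrors the one in EmbeddingService):&lt;/p&gt;

```php
<?php
// Streaming top-K retrieval: only the K best (score, id) pairs are
// held in a min-heap; everything else is discarded as it streams by.

function cosine(array $a, array $b): float
{
    $dot = $magA = $magB = 0.0;
    foreach ($a as $i => $v) {
        $dot  += $v * $b[$i];
        $magA += $v ** 2;
        $magB += $b[$i] ** 2;
    }
    $den = sqrt($magA) * sqrt($magB);
    return $den > 0 ? $dot / $den : 0.0;
}

function topK(iterable $rows, array $queryVector, int $k): array
{
    $heap = new SplMinHeap();                 // smallest score sits on top
    foreach ($rows as [$id, $embedding]) {
        $heap->insert([cosine($queryVector, $embedding), $id]);
        if ($heap->count() > $k) {
            $heap->extract();                 // evict the current worst match
        }
    }

    $out = [];
    foreach ($heap as [$score, $id]) {
        $out[] = ['id' => $id, 'score' => $score];
    }
    usort($out, fn($a, $b) => $b['score'] <=> $a['score']);
    return $out;
}

// Illustrative data: rows are [id, embedding] pairs.
$rows = [[1, [1.0, 0.0]], [2, [0.0, 1.0]], [3, [0.7, 0.7]]];
print_r(topK($rows, [1.0, 0.0], 2));
```

This doesn't reduce the number of similarity computations, only peak memory, so it buys headroom rather than replacing a proper index.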

&lt;p&gt;Most Laravel projects never reach that threshold. Start simple, measure, then scale the storage layer when you actually need to.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to Build on Top of This
&lt;/h2&gt;

&lt;p&gt;Once the core pipeline is working, the useful next steps are: caching query embeddings so repeated questions don't hit the API twice, chunking long documents into 500-token segments before embedding so retrieval is more granular, adding a feedback mechanism so users can flag bad answers and you can track retrieval quality over time, and per-user conversation history so the model has context across multiple turns.&lt;/p&gt;
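&lt;p&gt;Of those, chunking has the biggest impact on retrieval quality. A minimal sketch, using a crude word count in place of a real tokenizer (roughly 0.75 words per token, so 375 words approximates 500 tokens) and snapping chunk boundaries to sentence ends:&lt;/p&gt;

```php
<?php
// Split long text into roughly fixed-size chunks before embedding,
// so retrieval can match a specific passage instead of a whole document.
// Word count stands in for a tokenizer; boundaries snap to sentence ends.

function chunkText(string $text, int $maxWords = 375): array
{
    $sentences = preg_split('/(?<=[.!?])\s+/', trim($text)) ?: [];
    $chunks = [];
    $current = '';
    $count = 0;

    foreach ($sentences as $sentence) {
        $words = str_word_count($sentence);
        if ($count > 0 && $count + $words > $maxWords) {
            $chunks[] = trim($current);   // close the current chunk
            $current = '';
            $count = 0;
        }
        $current .= $sentence . ' ';
        $count += $words;
    }
    if (trim($current) !== '') {
        $chunks[] = trim($current);
    }
    return $chunks;
}
```

Each chunk would then be stored as its own documents row (with a parent-document column, a hypothetical addition to the schema above) and embedded individually.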

&lt;p&gt;That is a production-ready RAG foundation in Laravel with no external vector database. The whole thing is maybe 200 lines of actual PHP spread across three service classes, one console command, and a thin controller.&lt;/p&gt;

&lt;p&gt;Original post available in - &lt;a href="https://www.phpcmsframework.com/2026/03/building-rag-system-in-laravel-from-scratch.html" rel="noopener noreferrer"&gt;https://www.phpcmsframework.com/2026/03/building-rag-system-in-laravel-from-scratch.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>laravel</category>
      <category>php</category>
      <category>webdev</category>
      <category>ai</category>
    </item>
    <item>
      <title>AI SEO Content Quality Analyzer for WordPress Using PHP and OpenAI</title>
      <dc:creator>PHP CMS Framework</dc:creator>
      <pubDate>Tue, 27 Jan 2026 14:00:54 +0000</pubDate>
      <link>https://forem.com/albert_andrew_24878d43267/ai-seo-content-quality-analyzer-for-wordpress-using-php-and-openai-o26</link>
      <guid>https://forem.com/albert_andrew_24878d43267/ai-seo-content-quality-analyzer-for-wordpress-using-php-and-openai-o26</guid>
      <description>&lt;p&gt;Plugins for SEO tell you what needs to be fixed, but not why.&lt;/p&gt;

&lt;p&gt;They look at readability scores, keyword density, and meta length, but they totally ignore search intent, semantic depth, and content quality.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll use PHP and OpenAI to create an AI-powered SEO Content Quality Analyzer for WordPress.&lt;/p&gt;

&lt;p&gt;Rather than applying rigid rules, the AI assesses content the way a search engine would.&lt;/p&gt;

&lt;p&gt;The tool runs inside the WordPress admin and provides real-time post analysis with actionable SEO feedback.&lt;/p&gt;

&lt;p&gt;For more details: &lt;a href="https://www.phpcmsframework.com/2026/01/ai-seo-content-quality-analyzer-for-wordpress-with-php-openai.html" rel="noopener noreferrer"&gt;https://www.phpcmsframework.com/2026/01/ai-seo-content-quality-analyzer-for-wordpress-with-php-openai.html&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AI Duplicate Content Detector for Symfony Using PHP and OpenAI Embeddings</title>
      <dc:creator>PHP CMS Framework</dc:creator>
      <pubDate>Mon, 29 Dec 2025 14:46:28 +0000</pubDate>
      <link>https://forem.com/albert_andrew_24878d43267/ai-duplicate-content-detector-for-symfony-using-php-and-openai-embeddings-5cfh</link>
      <guid>https://forem.com/albert_andrew_24878d43267/ai-duplicate-content-detector-for-symfony-using-php-and-openai-embeddings-5cfh</guid>
      <description>&lt;p&gt;In many Symfony-based CMS and blog applications, duplicate content is a silent issue. Similar articles are rewritten by editors, documentation develops naturally, and eventually you have several pages expressing the same idea in different ways.&lt;/p&gt;

&lt;p&gt;Traditional duplicate detection relies on exact text matching, which fails as soon as the wording changes.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll use OpenAI embeddings to build an AI-powered duplicate content detector in Symfony. Instead of matching keywords, we'll compare semantic meaning, the same approach modern search engines use.&lt;/p&gt;

&lt;p&gt;For more details : &lt;a href="https://www.phpcmsframework.com/2025/12/ai-duplicate-content-detector-for-symfony-with-php-openai.html" rel="noopener noreferrer"&gt;https://www.phpcmsframework.com/2025/12/ai-duplicate-content-detector-for-symfony-with-php-openai.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>symfony</category>
      <category>openai</category>
      <category>php</category>
    </item>
    <item>
      <title>AI Category Recommendation System for Drupal 11 Using PHP and OpenAI</title>
      <dc:creator>PHP CMS Framework</dc:creator>
      <pubDate>Tue, 23 Dec 2025 05:58:48 +0000</pubDate>
      <link>https://forem.com/albert_andrew_24878d43267/ai-category-recommendation-system-for-drupal-11-using-php-and-openai-57fm</link>
      <guid>https://forem.com/albert_andrew_24878d43267/ai-category-recommendation-system-for-drupal-11-using-php-and-openai-57fm</guid>
      <description>&lt;p&gt;In Drupal, choosing the right category for a page or article is very important. However, in real life, people can make mistakes. Editors may pick the wrong category, use different categories for similar content, or publish posts quickly without giving much thought to categorization.&lt;/p&gt;

&lt;p&gt;With AI, Drupal 11 can now recommend the best category for a node, based on the node's actual content rather than keywords alone.&lt;/p&gt;

&lt;p&gt;For more information: &lt;a href="https://www.phpcmsframework.com/2025/12/ai-category-recommendation-system-for-drupal-11.html" rel="noopener noreferrer"&gt;https://www.phpcmsframework.com/2025/12/ai-category-recommendation-system-for-drupal-11.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>drupal</category>
      <category>openai</category>
      <category>php</category>
    </item>
    <item>
      <title>AI Auto-Tagging in Laravel Using OpenAI Embeddings + Cron Jobs</title>
      <dc:creator>PHP CMS Framework</dc:creator>
      <pubDate>Tue, 09 Dec 2025 13:27:19 +0000</pubDate>
      <link>https://forem.com/albert_andrew_24878d43267/ai-auto-tagging-in-laravel-using-openai-embeddings-cron-jobs-3afj</link>
      <guid>https://forem.com/albert_andrew_24878d43267/ai-auto-tagging-in-laravel-using-openai-embeddings-cron-jobs-3afj</guid>
      <description>&lt;p&gt;It is tedious, inconsistent, and frequently incorrect to manually tag blog posts. With AI embeddings, Laravel can automatically determine the topic of a blog post and assign the appropriate tags without the need for human intervention.&lt;/p&gt;

&lt;p&gt;For more details: &lt;a href="https://www.phpcmsframework.com/2025/12/ai-auto-tagging-in-laravel-using-openai.html" rel="noopener noreferrer"&gt;https://www.phpcmsframework.com/2025/12/ai-auto-tagging-in-laravel-using-openai.html&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building an AI-Powered Product Description Generator in Magento 2 Using PHP &amp; OpenAI</title>
      <dc:creator>PHP CMS Framework</dc:creator>
      <pubDate>Tue, 18 Nov 2025 11:49:14 +0000</pubDate>
      <link>https://forem.com/albert_andrew_24878d43267/building-an-ai-powered-product-description-generator-in-magento-2-using-php-openai-3537</link>
      <guid>https://forem.com/albert_andrew_24878d43267/building-an-ai-powered-product-description-generator-in-magento-2-using-php-openai-3537</guid>
      <description>&lt;p&gt;Writing product descriptions manually is slow, repetitive, and expensive — especially if you manage a large Magento store. But what if Magento could auto-generate SEO-friendly product descriptions using AI?&lt;/p&gt;

&lt;p&gt;In this guide, you’ll learn how to integrate OpenAI’s API into Magento 2 to automatically generate product descriptions, short descriptions, and meta tags — directly from the Admin panel.&lt;/p&gt;

&lt;p&gt;For more details: &lt;a href="https://www.phpcmsframework.com/2025/11/building-ai-powered-product-description-in-magento2-with-openai.html" rel="noopener noreferrer"&gt;https://www.phpcmsframework.com/2025/11/building-ai-powered-product-description-in-magento2-with-openai.html&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
