<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Matheus Delgado</title>
    <description>The latest articles on Forem by Matheus Delgado (@matheusdelgado).</description>
    <link>https://forem.com/matheusdelgado</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/matheusdelgado"/>
    <language>en</language>
    <item>
      <title>LLMs Don’t Have a Security Layer — So I Built One</title>
      <dc:creator>Matheus Delgado</dc:creator>
      <pubDate>Sun, 04 Jan 2026 02:19:31 +0000</pubDate>
      <link>https://forem.com/matheusdelgado/llms-dont-have-a-security-layer-so-i-built-one-2gf1</link>
      <guid>https://forem.com/matheusdelgado/llms-dont-have-a-security-layer-so-i-built-one-2gf1</guid>
      <description>&lt;p&gt;Over the last year, companies have started connecting LLMs to real users, real data and real systems. That’s when I realised something important:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLMs don’t come with a security layer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once you expose an LLM to the real world, you also expose yourself to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;prompt injection&lt;/li&gt;
&lt;li&gt;phishing &amp;amp; social-engineering via LLM&lt;/li&gt;
&lt;li&gt;data exfiltration&lt;/li&gt;
&lt;li&gt;PII leakage&lt;/li&gt;
&lt;li&gt;unsafe or non-compliant outputs&lt;/li&gt;
&lt;li&gt;zero auditability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And unlike traditional input validation, LLM attacks are linguistic — meaning you’re not filtering SQL or JSON… you’re filtering natural language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So I built a Zero-Trust Security Gateway for LLMs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of calling the model directly, requests flow through:&lt;/p&gt;

&lt;p&gt;client&lt;br&gt;
 ↓&lt;br&gt;
Firewall &amp;amp; risk detection&lt;br&gt;&lt;br&gt;
 ↓&lt;br&gt;
Prompt normalization &amp;amp; rewriting&lt;br&gt;&lt;br&gt;
 ↓&lt;br&gt;
Policy enforcement&lt;br&gt;&lt;br&gt;
 ↓&lt;br&gt;
Inbound data protection (masking)&lt;br&gt;
 ↓&lt;br&gt;
LLM&lt;br&gt;
 ↓&lt;br&gt;
Outbound protection (redaction)&lt;br&gt;
 ↓&lt;br&gt;
Response governance filter&lt;br&gt;
 ↓&lt;br&gt;
Audit logging&lt;/p&gt;

&lt;p&gt;This runs as a gateway, so teams can deploy it inside their own infrastructure — without sending data to yet another SaaS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-world risks I’m seeing&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Prompt injection&lt;br&gt;
Users convince the model to ignore your instructions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Phishing &amp;amp; social engineering&lt;br&gt;
LLM becomes the attack channel itself.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data leakage&lt;br&gt;
Models happily return internal or sensitive information.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What this gateway actually does&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;detect malicious or risky instructions&lt;/li&gt;
&lt;li&gt;detect phishing &amp;amp; deception&lt;/li&gt;
&lt;li&gt;normalize &amp;amp; rewrite prompts safely&lt;/li&gt;
&lt;li&gt;enforce policy rules&lt;/li&gt;
&lt;li&gt;mask &amp;amp; protect sensitive data&lt;/li&gt;
&lt;li&gt;block unsafe outputs&lt;/li&gt;
&lt;li&gt;log every decision
None of this is a silver bullet — the goal is to reduce risk in production environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example: blocking a phishing-style response&lt;/strong&gt;&lt;br&gt;
If the model attempts to request credentials or send a malicious link, the gateway blocks or rewrites it — and logs the decision so it is auditable later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why self-hosted?&lt;/strong&gt;&lt;br&gt;
Many companies don’t want another SaaS handling sensitive data.&lt;br&gt;
So the gateway runs via Docker in your own infra — the license only validates usage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I’d love feedback&lt;/strong&gt;&lt;br&gt;
This is still evolving. If you’re working with LLMs in a serious environment (security, compliance, healthcare, finance, SaaS) I’d really value your input.&lt;/p&gt;

&lt;p&gt;Demo &amp;amp; docs: &lt;a href="https://llmsafe.cloud" rel="noopener noreferrer"&gt;https://llmsafe.cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy to answer questions or discuss real-world attack scenarios.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi409jxulm9yw9u5qmbcf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi409jxulm9yw9u5qmbcf.png" alt=" " width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0p5uzvi9ebpjblsnfqh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0p5uzvi9ebpjblsnfqh.png" alt=" " width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>llm</category>
      <category>security</category>
    </item>
  </channel>
</rss>
