<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: mario ANTUNES</title>
    <description>The latest articles on Forem by mario ANTUNES (@majpantunes).</description>
    <link>https://forem.com/majpantunes</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/majpantunes"/>
    <language>en</language>
    <item>
      <title>Are Coding LLM Plans About to Die? The Coming Compute Crisis</title>
      <dc:creator>mario ANTUNES</dc:creator>
      <pubDate>Thu, 16 Apr 2026 08:55:57 +0000</pubDate>
      <link>https://forem.com/majpantunes/are-coding-llm-plans-about-to-die-the-coming-compute-crisis-591g</link>
      <guid>https://forem.com/majpantunes/are-coding-llm-plans-about-to-die-the-coming-compute-crisis-591g</guid>
      <description>&lt;p&gt;LLMs have been incredibly useful in boosting productivity, but lately I’ve started to feel that this paradigm may change soon and quite significantly.&lt;/p&gt;

&lt;p&gt;Even though Spec Driven Development has greatly improved productivity, we seem to be reaching a turning point.&lt;/p&gt;

&lt;p&gt;In the past, several platforms offered so-called “coding plans”, where for $10, $50, $100, or $200 you could get a fixed number of requests every 5 hours (or per day/week, depending on the provider). This was an excellent deal for developers.&lt;/p&gt;

&lt;p&gt;However, due to current datacenter limitations, this model may not last. There is investment and hardware available, but there’s a shortage of critical resources like electricity and water to sustain expansion. Spain is already a good example, where even housing development is being impacted due to energy infrastructure limitations.&lt;/p&gt;

&lt;p&gt;We are now facing a real “compute” problem. Coding plans may disappear and shift towards pure pay-per-inference or token-based pricing — which will be significantly more expensive.&lt;/p&gt;

&lt;p&gt;For example, for a freelance developer, using services like Anthropic might become unviable. Around 80 requests to implement a simple feature (roughly 10 minutes of work) can cost about $6. By the end of the day, this can easily exceed $30/day using models like Claude 4.6 Opus.&lt;/p&gt;

&lt;p&gt;At the same time, most platforms (“harnesses”) are removing free tiers (as recently seen with Qwen Code), and when they do offer them, they tend to rely on lower-tier models like GLM, Minimax, Xiaomi, or Qwen.&lt;/p&gt;

&lt;p&gt;For those who can work with these tier-2 models, there are still some alternatives. Minimax offers a $20 plan with around 4500 requests per 5 hours. Ollama claims better performance, but lacks transparency regarding limits. Qwen 3.6 Plus, while decent, costs $50 and often requires waiting for available capacity, likely due to compute constraints.&lt;/p&gt;

&lt;p&gt;Meanwhile, European companies are still far from using AI efficiently, whether in task automation or security. It’s concerning to imagine a future where they become dependent on external compute providers.&lt;/p&gt;

&lt;p&gt;Are we heading back to partial reliance on human labor?&lt;br&gt;
Will we soon return to our old friend… Stack Overflow?&lt;/p&gt;

</description>
      <category>developer</category>
      <category>productivity</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>N8N and its vulnerabilities</title>
      <dc:creator>mario ANTUNES</dc:creator>
      <pubDate>Wed, 24 Sep 2025 09:40:12 +0000</pubDate>
      <link>https://forem.com/majpantunes/n8n-and-its-vulnerabilities-540m</link>
      <guid>https://forem.com/majpantunes/n8n-and-its-vulnerabilities-540m</guid>
      <description>&lt;p&gt;N8N has been growing as a trend in the automation world. Being a self-hosted tool, it’s used both by IT professionals and by people without much experience in servers, programming, or cybersecurity.&lt;br&gt;
The problem is that, while it enables powerful integrations, I see a huge number of potentially fragile automations. The risk is clear: a large-scale security collapse, especially when dealing with flows involving emails, databases, files, and external services. The result? An ecosystem with massive potential for failure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;⚠️ LLM Vulnerabilities in Automation&lt;/strong&gt;&lt;br&gt;
With the rise of Large Language Models (LLMs) inside automation workflows, specific vulnerabilities are emerging. These have already been documented in security reports and by the OWASP Top 10 for LLMs. Some include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt Injection – inserting malicious commands into prompts&lt;/li&gt;
&lt;li&gt;Sensitive Information Disclosure – unintentional leakage of sensitive data&lt;/li&gt;
&lt;li&gt;Training Data Poisoning – manipulating or corrupting training data&lt;/li&gt;
&lt;li&gt;Insecure Output Handling – unsafe handling of model responses&lt;/li&gt;
&lt;li&gt;Model Denial of Service – DoS attacks aimed at exhausting model resources&lt;/li&gt;
&lt;li&gt;Supply Chain Vulnerabilities – weaknesses in dependencies and supply chains&lt;/li&gt;
&lt;li&gt;Insecure Plugin Design – poorly designed extensions or plugins&lt;/li&gt;
&lt;li&gt;Excessive Agency – overly autonomous AI leading to unintended actions&lt;/li&gt;
&lt;li&gt;System Prompt Leakage – leaking system instructions&lt;/li&gt;
&lt;li&gt;Vector and Embedding Weaknesses – flaws in embeddings that support response generation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;🔒 Layered Security in N8N&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To mitigate risks, it’s essential to approach automation with multiple layers of security.&lt;br&gt;
For instance, an initial step could be implementing Prompt Injection Detection — a Node.js node in N8N capable of checking user inputs and detecting potential anomalies before they compromise workflows.&lt;/p&gt;

&lt;p&gt;A practical example:&lt;br&gt;
👉 &lt;a href="https://github.com/marioalexandreantunes/n8n-workflow-promptinjection" rel="noopener noreferrer"&gt;GitHub repository with Prompt Injection Detection for N8N&lt;/a&gt;&lt;br&gt;
It’s only one layer, but already an important step forward!&lt;/p&gt;

&lt;p&gt;N8N Security Guide - Essential Best Practices&lt;br&gt;
👉 &lt;a href="https://github.com/marioalexandreantunes/n8n-workflow-promptinjection/blob/main/n8n_security_guide(EN).md" rel="noopener noreferrer"&gt;GitHub repository Best Practices&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🚀 The Future of AI-Driven Automation&lt;/strong&gt;&lt;br&gt;
As we move deeper into intelligent automation, awareness and responsibility are crucial. We are still building best practices to protect data, privacy, and infrastructure in this new era of artificial intelligence.&lt;br&gt;
Automation is empowering — but automation without security is an open door to disaster.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>automation</category>
      <category>workflow</category>
    </item>
  </channel>
</rss>
