<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Joseph Chebayi</title>
    <description>The latest articles on Forem by Joseph Chebayi (@chessyjoe).</description>
    <link>https://forem.com/chessyjoe</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/chessyjoe"/>
    <language>en</language>
    <item>
      <title>💥 The $10.5 Trillion Oversight in Banking Cybersecurity</title>
      <dc:creator>Joseph Chebayi</dc:creator>
      <pubDate>Wed, 23 Jul 2025 14:37:39 +0000</pubDate>
      <link>https://forem.com/chessyjoe/the-105-trillion-oversight-in-banking-cybersecurity-1f02</link>
      <guid>https://forem.com/chessyjoe/the-105-trillion-oversight-in-banking-cybersecurity-1f02</guid>
      <description>&lt;p&gt;💥 The $10.5 Trillion Oversight in Banking Cybersecurity&lt;br&gt;
Cybercrime is projected to cost the world $10.5 trillion in 2025—larger than Japan’s entire GDP.&lt;br&gt;
And yet, most bank CISOs still overlook a single line of text that can bypass every firewall they’ve built.&lt;br&gt;
That line is called a prompt injection.&lt;/p&gt;

&lt;p&gt;🧠 What’s happening?&lt;br&gt;
Let’s rewind.&lt;/p&gt;

&lt;p&gt;In 2014, JPMorgan was breached—76M households affected.&lt;br&gt;
In 2024, they got hit again—this time by a glitch that leaked banking info of 451,000+ people for 2 years.&lt;/p&gt;

&lt;p&gt;What failed?&lt;br&gt;
Not the network. The human interface.&lt;br&gt;
That same human vulnerability is now AI’s greatest weakness.&lt;/p&gt;

&lt;p&gt;🧨 What is Prompt Injection?&lt;br&gt;
Think SQL injection, but for LLMs.&lt;br&gt;
It’s malicious instructions hidden in plain text—inside a customer message, a file, even a support ticket.&lt;/p&gt;

&lt;p&gt;LLMs are trained to follow language. Hackers just manipulate that.&lt;/p&gt;

&lt;p&gt;Two examples:&lt;/p&gt;

&lt;p&gt;Bing’s “Sydney” persona leak happened via prompt injection. Users tricked it into revealing rules Microsoft never intended to expose.&lt;/p&gt;

&lt;p&gt;In early 2025, DeepSeek R1—a top open-source model—was jailbroken 100% of the time in prompt injection tests. That means every single try was successful.&lt;/p&gt;

&lt;p&gt;😰 The Human Cost&lt;br&gt;
Even before AI, security teams were overwhelmed.&lt;/p&gt;

&lt;p&gt;97% of data pros say their stack is too complex&lt;/p&gt;

&lt;p&gt;88% live in fear of data leaks from user error&lt;/p&gt;

&lt;p&gt;25% would quit after a major breach scare&lt;/p&gt;

&lt;p&gt;Prompt injection won’t burn down your house today—but it can unlock the front door for someone else.&lt;/p&gt;

&lt;p&gt;✅ Want to test your own AI for this?&lt;br&gt;
If you lead AI, digital channels, or risk at a bank, ask your team one question this week:&lt;/p&gt;

&lt;p&gt;“Can our AI assistant or chatbot be told to ignore its own rules?”&lt;/p&gt;

&lt;p&gt;Drop a 👀 if you want the Prompt Injection Checklist we use to test Fortune 500 co-pilots.&lt;/p&gt;

&lt;p&gt;📌 #AISecurity #PromptInjection #CISO #FinServ #ZeroTrust&lt;/p&gt;

</description>
      <category>ai</category>
      <category>promptengineering</category>
    </item>
  </channel>
</rss>
