<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: GitGuardian</title>
    <description>The latest articles on Forem by GitGuardian (@gitguardian).</description>
    <link>https://forem.com/gitguardian</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F4775%2Fc847a31e-0f21-49f1-a6fe-7b55c0e0c954.png</url>
      <title>Forem: GitGuardian</title>
      <link>https://forem.com/gitguardian</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gitguardian"/>
    <language>en</language>
    <item>
      <title>API Keys Security &amp; Secrets Management Best Practices</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Mon, 11 May 2026 12:39:03 +0000</pubDate>
      <link>https://forem.com/gitguardian/api-keys-security-secrets-management-best-practices-4j3f</link>
      <guid>https://forem.com/gitguardian/api-keys-security-secrets-management-best-practices-4j3f</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Master API key management best practices by never storing unencrypted secrets in git, enforcing automated secrets scanning, and avoiding plaintext sharing in messaging systems. Use dedicated secrets management tools, apply least privilege and short-lived keys, and implement robust rotation and monitoring strategies.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We have compiled a list of best practices to prevent API key leakage and keep secrets and credentials safe. Secrets management has no one-size-fits-all approach, so this list considers multiple perspectives to help you make an informed decision about which strategies to adopt.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/da8kiytlc/image/upload/v1592031041/Cheatsheets/secrets_cheatsheet_bedizg.pdf?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Download the cheat sheet&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/da8kiytlc/image/upload/v1592031041/Cheatsheets/secrets_cheatsheet_bedizg.pdf?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fple62nfcve9l4f791hry.jpeg" alt="Secrets management cheat sheet" width="553" height="386"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h1&gt;Never store unencrypted secrets in Git repositories&lt;/h1&gt;

&lt;p&gt;It is common to wrongly assume that private repositories are secure vaults for secrets. &lt;a href="https://www.gitguardian.com/glossary/sdlc-sast?ref=blog.gitguardian.com#1" rel="noopener noreferrer"&gt;Private repositories are not appropriate places to store secrets&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Private repositories are high-value targets for bad actors because it is common practice to store secrets within them. In addition, Git is designed to sprawl: repositories get cloned onto new machines, forked into new projects, and developers regularly join and leave a project with access to the complete history. Any hard-coded secret in a private repository's history will exist in every new repository born from that source.&lt;/p&gt;

&lt;p&gt;If a &lt;a href="https://www.gitguardian.com/glossary/secret-sprawl-definition?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;secret&lt;/a&gt; enters a repository, private or public, then it should be considered compromised.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;A secret in a private repo is like a password written on a $20 bill: you might trust the person you gave it to, but that bill can end up in hundreds of people's hands as part of multiple transactions and within multiple cash registers.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you have already committed an API key and want to remove it from the Git history, see: &lt;a href="https://blog.gitguardian.com/rewriting-git-history-cheatsheet/" rel="noopener noreferrer"&gt;Git Clean, Git Remove file from commit - Cheatsheet&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Avoid &lt;code&gt;git add *&lt;/code&gt; commands in Git&lt;/h2&gt;

&lt;p&gt;Using wildcard commands like &lt;code&gt;git add *&lt;/code&gt; or &lt;code&gt;git add .&lt;/code&gt; can easily capture files that should not enter a Git repository, including generated files, config files, and temporary source files. When making a commit, prefer adding each file by name, and use &lt;code&gt;git status&lt;/code&gt; to review tracked and untracked files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages:&lt;/strong&gt; Complete control and visibility over what files are committed. Reduces the risk of unwanted files entering source control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages:&lt;/strong&gt; Takes additional time when making a commit. Can mistakenly miss files when committing.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;TIP: Committing early and committing often will not only help you navigate file history and break up otherwise large tasks, it will also reduce the temptation to use wildcard commands.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Add sensitive files to .gitignore&lt;/h2&gt;

&lt;p&gt;To prevent sensitive files from ending up in Git repositories, a comprehensive &lt;code&gt;.gitignore&lt;/code&gt; file should be included with all repositories and should cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Files with environment variables like &lt;code&gt;.env&lt;/code&gt; or configuration files like &lt;code&gt;.zshrc&lt;/code&gt;, &lt;code&gt;application.properties&lt;/code&gt; or &lt;code&gt;.config&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Files generated by another process (such as application logs or checkpoints, unit tests/coverage reports)&lt;/li&gt;
&lt;li&gt;Files containing "real" data (other than test data), such as database extracts&lt;/li&gt;
&lt;/ul&gt;
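&lt;p&gt;As a starting point, a &lt;code&gt;.gitignore&lt;/code&gt; covering the categories above might look like the following (the entries are illustrative; adjust them to your stack):&lt;/p&gt;

```gitignore
# Environment and configuration files that may hold secrets
.env
.env.*
*.config
application.properties

# Generated files: logs, coverage reports, build output
*.log
coverage/
dist/

# "Real" data extracts
*.sql
*.dump
```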

&lt;p&gt;GitHub published a collection of useful &lt;a href="https://github.com/github/gitignore?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;.gitignore templates&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Don't rely on code reviews to discover secrets&lt;/h2&gt;

&lt;p&gt;It is extremely important to understand that &lt;a href="https://www.gitguardian.com/glossary/how-to-detect-secrets-in-source-code?ref=blog.gitguardian.com#2" rel="noopener noreferrer"&gt;code reviews will not always detect hard-coded secrets&lt;/a&gt;, especially if they are hidden in previous versions of code. Reviewers are only concerned with the difference between the current and proposed states of the code; they do not consider the entire history of the project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7tq493ah5ru8fwpl55cb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7tq493ah5ru8fwpl55cb.png" alt="Why a code review won't detect a deleted API key" width="615" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If secrets are committed to a development branch and later removed, these secrets won't be visible or of importance to the reviewer. The nature of git means that if a secret gets &lt;a href="https://blog.gitguardian.com/secrets-credentials-api-git/" rel="noopener noreferrer"&gt;overlooked in history&lt;/a&gt; it is exposed forever.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;TIP: As a rule, automation should be implemented wherever predefined rules can be established, like secrets detection. Human reviews should be left to check code for errors that cannot be easily predefined, such as logic.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Use automated secrets scanning on repositories&lt;/h2&gt;

&lt;p&gt;Even when all best practices are followed, mistakes are common. &lt;a href="https://gitguardian.com/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian&lt;/a&gt; offers a free &lt;a href="https://www.gitguardian.com/glossary/how-to-detect-secrets-in-source-code?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;secrets scanning solution&lt;/a&gt; for developers to detect &lt;a href="https://www.gitguardian.com/detectors?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;both generic API keys and specific secrets&lt;/a&gt; (more than 450 providers are supported!), installable &lt;strong&gt;both on private and public repositories&lt;/strong&gt; for free.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visibility is the key to great secret management. If you don't know you have a problem, you cannot take action to fix it.&lt;/strong&gt;&lt;/p&gt;
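&lt;p&gt;As a toy illustration of what automated scanners look for (not GitGuardian's actual detection engine), a minimal Python sketch might combine a known key pattern with an entropy heuristic:&lt;/p&gt;

```python
import math
import re

# Toy illustration only: real scanners combine hundreds of
# provider-specific detectors plus validity and context checks.
AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")  # AWS access key ID format

def shannon_entropy(s):
    """Bits of entropy per character; high values suggest random tokens."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def looks_like_secret(token):
    # Flag known key formats, or long high-entropy strings.
    if AWS_KEY.search(token):
        return True
    return len(token) >= 20 and shannon_entropy(token) > 4.0

print(looks_like_secret("AKIAIOSFODNN7EXAMPLE"))  # matches the AWS pattern
print(looks_like_secret("hello world"))           # ordinary text
```

Heuristics like this produce false positives and negatives, which is why dedicated detectors per provider matter in practice.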




&lt;h1&gt;Don't share your secrets unencrypted in messaging systems like Slack&lt;/h1&gt;

&lt;p&gt;A common secret sprawl enabler is sending secrets in plain text over messaging services. These systems are high-value targets for attackers. It only takes one compromised email or Slack account to uncover a trove of sensitive information, &lt;a href="https://blog.gitguardian.com/looking-beyond-code-for-secrets-leaks/" rel="noopener noreferrer"&gt;as secrets exposure extends beyond source code&lt;/a&gt; into various communication and collaboration tools.&lt;/p&gt;




&lt;h1&gt;Store secrets safely&lt;/h1&gt;

&lt;p&gt;There is no silver bullet for secrets management. Different factors, such as project size, team geography, and project scope, must be considered, and multiple solutions may need to coexist.&lt;/p&gt;

&lt;h2&gt;Use encryption to store secrets within Git repositories&lt;/h2&gt;

&lt;p&gt;Encrypting your secrets using tools such as &lt;a href="https://git-secret.io/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;git-secret&lt;/a&gt; or &lt;a href="https://blog.gitguardian.com/a-comprehensive-guide-to-sops/" rel="noopener noreferrer"&gt;SOPS&lt;/a&gt; and storing them within a Git repository keeps secrets synced across teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages:&lt;/strong&gt; Your secrets stay synced across the team. &lt;strong&gt;Disadvantages:&lt;/strong&gt; You must manage the encryption keys securely yourself. No audit logs, no RBAC, and access is hard to rotate.&lt;/p&gt;

&lt;h2&gt;Use local environment variables when feasible&lt;/h2&gt;

&lt;p&gt;An environment variable is a dynamic object whose value is set outside of the application. This makes them easier to rotate without having to make changes within the application itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages:&lt;/strong&gt; Easy to change between deployed versions without changing any code. Less likely to be checked into the repository. Simple and clean.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages:&lt;/strong&gt; May not be feasible at scale when working in teams — no easy way to keep developers, applications, and infrastructure in sync.&lt;/p&gt;
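&lt;p&gt;A minimal sketch of the pattern in Python, using a hypothetical &lt;code&gt;PAYMENT_API_KEY&lt;/code&gt; variable (the demo fallback exists only so the snippet runs standalone; in real deployments the variable is set outside the code):&lt;/p&gt;

```python
import os

# Hypothetical variable name. Set it outside the application, e.g. in
# your shell profile or deployment platform, never in the repository.
os.environ.setdefault("PAYMENT_API_KEY", "demo-key-for-local-testing")

def get_api_key():
    """Read the key from the environment, failing loudly if unset."""
    key = os.environ.get("PAYMENT_API_KEY")
    if key is None:
        raise RuntimeError("PAYMENT_API_KEY is not set")
    return key

print(get_api_key())
```

Rotating the key then only requires updating the environment, not redeploying code.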

&lt;h2&gt;Use secrets management tools&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://blog.gitguardian.com/top-secrets-management-tools-for-2024/" rel="noopener noreferrer"&gt;Secrets management systems&lt;/a&gt; such as &lt;a href="https://www.vaultproject.io/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Hashicorp Vault&lt;/a&gt; or &lt;a href="https://aws.amazon.com/kms/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;AWS Key Management Service&lt;/a&gt; are encrypted systems that can safely store your secrets and tightly control access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages:&lt;/strong&gt; Prevents secrets from sprawling. Provides audit logs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages:&lt;/strong&gt; Must be hosted on highly-available and secure infrastructure. Requires codebase changes to integrate. Access keys must be carefully protected.&lt;/p&gt;
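&lt;p&gt;One common integration pattern is to fetch secrets at startup and cache them with a short TTL, so rotations in the manager propagate without a redeploy. A Python sketch with a stubbed-out client (the &lt;code&gt;fetch_from_manager&lt;/code&gt; function is hypothetical, standing in for a real Vault or KMS call):&lt;/p&gt;

```python
import time

def fetch_from_manager(name):
    # Stand-in for a real secrets-manager client call; hypothetical,
    # for illustration only.
    return f"secret-value-for-{name}"

class CachedSecret:
    """Cache a fetched secret and re-fetch after a TTL, so rotations
    in the secrets manager propagate without redeploying."""
    def __init__(self, name, ttl_seconds=300):
        self.name = name
        self.ttl = ttl_seconds
        self.value = None
        self.fetched_at = 0.0

    def get(self):
        age = time.monotonic() - self.fetched_at
        if self.value is None or age > self.ttl:
            self.value = fetch_from_manager(self.name)
            self.fetched_at = time.monotonic()
        return self.value

db_password = CachedSecret("db-password", ttl_seconds=60)
print(db_password.get())
```

The access credentials for the manager itself still need protection, as the article notes.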




&lt;h1&gt;Restrict API access and permissions&lt;/h1&gt;

&lt;p&gt;By restricting the access and permissions of the API key you not only limit damage and restrict lateral movement but also provide greater visibility over when the API key is being used outside of its scope.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 This idea can be pushed even further by using API keys as a decoy to intercept hackers — a concept called a &lt;a href="https://blog.gitguardian.com/honeytokens-protect-your-holy-grail/" rel="noopener noreferrer"&gt;honeytoken&lt;/a&gt;. Learn more about the &lt;a href="https://blog.gitguardian.com/gitguardian-honeytoken/" rel="noopener noreferrer"&gt;Honeytoken module&lt;/a&gt; in GitGuardian.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Default to minimal permission scope for APIs&lt;/h2&gt;

&lt;p&gt;Make sure the permissions of each API key match the task it fulfills. Use separate keys for read-only and read/write access to avoid &lt;a href="https://blog.gitguardian.com/secrets-analyzer/" rel="noopener noreferrer"&gt;overprivileged secrets&lt;/a&gt;. It is common for developers to use &lt;strong&gt;API keys with excessive permissions&lt;/strong&gt;, which increases the potential damage of a data breach.&lt;/p&gt;

&lt;h2&gt;Whitelist IP addresses where appropriate&lt;/h2&gt;

&lt;p&gt;IP whitelisting adds a layer of security by restricting API access to a list of approved IP addresses from your private network, so external services only accept requests from trusted sources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages:&lt;/strong&gt; Limited requests to select trusted sources. &lt;strong&gt;Disadvantages:&lt;/strong&gt; Not always feasible. Can prevent legitimate requests. Needs constant maintenance.&lt;/p&gt;
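&lt;p&gt;A sketch of the server-side check in Python with the standard &lt;code&gt;ipaddress&lt;/code&gt; module (the networks listed are illustrative):&lt;/p&gt;

```python
import ipaddress

# Example allowlist: CIDR ranges for your private network
# (illustrative values only).
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.1.0/24"),
]

def is_allowed(client_ip):
    """Accept a request only if its source IP falls in an allowed range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_allowed("10.42.7.1"))    # inside 10.0.0.0/8
print(is_allowed("203.0.113.9"))  # public address, rejected
```

The maintenance cost mentioned above shows up here: every office move or VPN change means editing this list.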

&lt;h2&gt;Use short-lived secrets&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;By using short-lived secrets, the risk of undetected leaked API keys is mitigated&lt;/strong&gt;: even if an attacker gains access to a secret, it expires before it can do lasting harm, unlike &lt;a href="https://blog.gitguardian.com/why-exposed-secrets-stay-valid/" rel="noopener noreferrer"&gt;most exposed secrets that remain valid&lt;/a&gt; for extended periods.&lt;/p&gt;

&lt;p&gt;Revoke and rotate all API keys often to prevent &lt;a href="https://blog.gitguardian.com/zombie-leaks/" rel="noopener noreferrer"&gt;unrevoked secrets from lurking&lt;/a&gt; in your systems.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Imagine you own a company with hundreds of employees that all have keys to your office — keys will inevitably get lost, employees will leave, new keys will get cut and you will soon lose visibility over where each key is. It would be widely considered good practice to change the locks from time to time.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
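&lt;p&gt;The idea can be sketched in Python as a token that carries its own expiry (illustrative only; real systems typically use signed, expiring tokens such as JWTs issued by an identity provider):&lt;/p&gt;

```python
import time

class ShortLivedToken:
    """Token with an issue time and a short lifetime, so a leaked
    value stops working on its own even if the leak goes unnoticed."""
    def __init__(self, value, lifetime_seconds=900):
        self.value = value
        self.expires_at = time.time() + lifetime_seconds

    def is_valid(self):
        return self.expires_at > time.time()

token = ShortLivedToken("tmp-token-abc", lifetime_seconds=900)
print(token.is_valid())   # freshly issued, still valid

stale = ShortLivedToken("tmp-token-old", lifetime_seconds=-1)
print(stale.is_valid())   # already expired
```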




&lt;h1&gt;Advanced API Key Storage and Cryptographic Protection&lt;/h1&gt;

&lt;p&gt;Enterprise API key management demands sophisticated storage mechanisms with cryptographic protection layers. Single-purpose keys are a fundamental principle: use dedicated keys for encryption versus authentication so that compromising a key in one role does not undermine the other.&lt;/p&gt;

&lt;p&gt;For applications requiring local key storage, implement Hardware Security Modules (HSMs) or secure enclaves. When HSMs aren't feasible, use secure key generation through cryptographically secure random number generators. Consider key derivation strategies where multiple API keys derive from a single master key using strong secret diversification methods.&lt;/p&gt;
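&lt;p&gt;A minimal sketch of secret diversification in Python, deriving per-purpose keys from one master key with HMAC-SHA256 (a simplification of the full HKDF construction defined in RFC 5869; real master keys come from a secure random source or an HSM, not a literal):&lt;/p&gt;

```python
import hashlib
import hmac

def derive_key(master_key: bytes, context: str) -> bytes:
    """Derive a per-purpose key from a master key using HMAC-SHA256.
    Distinct context strings yield distinct, reproducible keys."""
    return hmac.new(master_key, context.encode(), hashlib.sha256).digest()

# Placeholder master key material for illustration only.
master = b"example-master-key-material"
billing_key = derive_key(master, "billing-api")
metrics_key = derive_key(master, "metrics-api")

# Each downstream service gets its own key; compromising one
# does not reveal the master or the others.
print(billing_key.hex() != metrics_key.hex())
```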

&lt;h1&gt;API Key Monitoring and Anomaly Detection&lt;/h1&gt;

&lt;p&gt;Implement monitoring systems that track API key usage patterns — request frequency, geographic distribution, and accessed resources — to establish baseline behaviors. Deploy alerting for suspicious activities such as unusual request volumes, unexpected IP ranges, or attempts to access resources outside normal scope.&lt;/p&gt;

&lt;p&gt;Establish audit trails capturing successful API calls, failed authentication attempts, permission escalations, and administrative changes. Integrate with SIEM systems to enable real-time threat detection and automated response workflows.&lt;/p&gt;
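&lt;p&gt;A simple baseline check might flag request counts far outside historical norms. This Python sketch, with illustrative data, stands in for a real anomaly-detection pipeline:&lt;/p&gt;

```python
import statistics

# Hourly request counts for one API key (illustrative baseline data).
baseline = [120, 132, 118, 125, 130, 127, 122, 129]

def is_anomalous(count, history, threshold=3.0):
    """Flag counts more than `threshold` standard deviations above
    the historical mean; a toy stand-in for production detection."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return count > mean + threshold * stdev

print(is_anomalous(126, baseline))   # within the normal range
print(is_anomalous(900, baseline))   # spike worth alerting on
```

Production systems would also segment by geography, resource, and time of day, as described above.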

&lt;h1&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;Managing and storing secrets is a challenge that requires vigilance from even the most experienced developer. There is no perfect checklist that a developer can follow.&lt;/p&gt;

&lt;p&gt;Policies, tools, and strategies will differ from project to project, but it is crucial for developers to understand the consequences of their choices so that secrets management can be an informed, active strategy throughout the entire development process.&lt;/p&gt;




&lt;h2&gt;Summary: Best Practices for API Key and Credentials Security&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Never store unencrypted secrets in .git repositories&lt;/strong&gt; — avoid wildcard git add, use .gitignore, use automated scanning&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Don't share secrets unencrypted in messaging systems like Slack&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Store secrets safely&lt;/strong&gt; — use encryption, environment variables, or secrets-as-a-service solutions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Restrict API key access and permissions&lt;/strong&gt; — minimal scope, IP whitelisting, short-lived secrets&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Ready to find out which secrets management approach is right for you? Take the &lt;a href="https://www.gitguardian.com/secrets-management-guide?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian Secrets Management Needs Quiz&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;FAQs&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What are the most critical API key management best practices for large development teams?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Core best practices include avoiding unencrypted secrets in Git repositories, enforcing automated secrets scanning, and applying least-privilege policies to all keys. Regular key rotation, centralized secrets management, and encryption of all credentials both at rest and in transit are essential.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How should organizations approach API key rotation and lifecycle management?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Automate rotation schedules based on privilege and risk: rotate high-privilege keys weekly or daily, and rotate low-privilege keys monthly. If any key in a rotation chain is compromised, revoke all related keys immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is relying solely on code reviews insufficient for detecting hard-coded secrets?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Code reviews typically cover only recent changes and may overlook secrets committed previously and later removed. These secrets persist in repository history and remain exploitable. Automated secrets scanning provides comprehensive coverage across current code and historical commits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What advanced storage options are recommended beyond environment variables?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Robust alternatives include dedicated secrets managers (e.g., HashiCorp Vault, AWS Secrets Manager), Hardware Security Modules (HSMs), and secure enclaves offering cryptographic protection, granular permissioning, and automated rotation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How can organizations monitor API key usage and detect anomalies?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Monitor request volume, originating IP addresses, time-of-day patterns, and resource access behavior. Configure alerts for unusual activity and forward logs to SIEM platforms for real-time analytics and automated incident response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are the risks of sharing secrets in messaging platforms like Slack?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sharing secrets in plaintext on messaging platforms exposes them if accounts are compromised. Attackers can exploit these credentials for lateral movement or privilege escalation. Always use dedicated secrets management tools to share sensitive information securely.&lt;/p&gt;

</description>
      <category>security</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>Local Guardrails for Secrets Security in the Age of AI Coding Assistants</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Fri, 08 May 2026 13:01:13 +0000</pubDate>
      <link>https://forem.com/gitguardian/local-guardrails-for-secrets-security-in-the-age-of-ai-coding-assistants-3jc8</link>
      <guid>https://forem.com/gitguardian/local-guardrails-for-secrets-security-in-the-age-of-ai-coding-assistants-3jc8</guid>
      <description>&lt;p&gt;Software supply chain security used to feel like a problem that lived somewhere else.&lt;/p&gt;

&lt;p&gt;The repository and build system were top of mind. Package registries, continuous integration and continuous delivery pipelines, release automation, cloud platforms, and artifact stores also became the focus of concern. These still matter and need protection, but the attack surface has shifted closer to where developers work every day.&lt;/p&gt;

&lt;p&gt;The developer laptop is no longer just the place where code gets written. It is part of the supply chain.&lt;/p&gt;

&lt;p&gt;The security implications are easy to underestimate. A modern workstation touches source code, package managers, cloud accounts, registry tokens, secure shell keys, service accounts, build scripts, artificial intelligence coding assistants, terminals, local caches, and environment files. It is where credentials are created, copied, tested, logged, and too often forgotten.&lt;/p&gt;

&lt;p&gt;Attackers understand this. They are not only looking for a vulnerable production service or a poisoned build step. They are looking for the access material that lets one system trust another.&lt;/p&gt;

&lt;p&gt;We have to update our defense models. Security cannot wait until code reaches a remote repository or a pipeline. By then, a credential may already be in Git history, a model prompt, a local log, a build artifact, or a package install script's reach.&lt;/p&gt;

&lt;p&gt;The control point has to move earlier in the software creation process.&lt;/p&gt;

&lt;h2&gt;The Common Thread Is Credential Theft&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://blog.gitguardian.com/tag/breach-explained/" rel="noopener noreferrer"&gt;GitGuardian's recent breach research&lt;/a&gt; points to a clear pattern across software supply chain attacks: attackers increasingly target the credentials embedded in developer workflows.&lt;/p&gt;

&lt;p&gt;In April 2026, we analyzed three supply chain campaigns that affected &lt;a href="https://blog.gitguardian.com/three-supply-chain-campaigns-hit-npm-pypi-and-docker-hub-in-48-hours/" rel="noopener noreferrer"&gt;npm, PyPI, and Docker Hub over a 48-hour period&lt;/a&gt;. The ecosystems and techniques varied, but the goal was consistent. Each campaign focused on stealing useful credentials from developer environments or continuous integration and delivery pipelines.&lt;/p&gt;

&lt;p&gt;One compromised npm package used a postinstall hook to steal npm publish tokens, then used that access to publish infected versions of packages the victim could reach. A PyPI campaign harvested secure shell keys, cloud credentials, environment variables, and crypto wallets. Across those campaigns, the attacker's objective was clearly to collect valid access and use it to reach the next system.&lt;/p&gt;

&lt;p&gt;This is what makes the problem so damaging.&lt;/p&gt;

&lt;h3&gt;Developers Are Attractive Targets&lt;/h3&gt;

&lt;p&gt;A developer may have access to source control, cloud accounts, package registries, artifact stores, staging environments, incident tools, and internal application programming interfaces. A build runner may hold deployment credentials, package publishing tokens, and access to production-adjacent infrastructure. One exposed token can become a bridge across several layers of the delivery process.&lt;/p&gt;

&lt;p&gt;That is why credential exposure is different from many other bugs. An attacker does not always need to exploit a software flaw, maintain persistence, or modify production code. Sometimes, authenticated access is enough. Sometimes, a short-lived foothold on a developer machine can uncover a credential with broader reach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2026?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;The GitGuardian 2026 State of Secrets Sprawl report&lt;/a&gt; makes the scale of the situation clear. We found over 28.6 million new secrets detected in public GitHub commits in 2025, a 34 percent year-over-year increase. Internal repositories are roughly six times more likely than public repositories to contain hardcoded credentials. About 28% of incidents originate entirely outside repositories, in collaboration systems such as Slack, Jira, and Confluence.&lt;/p&gt;

&lt;p&gt;The repository is no longer the boundary. If a secret can be found on the machine in plaintext, it is likely to spread elsewhere in plaintext and be usable by anyone who finds it.&lt;/p&gt;

&lt;h2&gt;The Workstation Now Holds Too Much Context To Ignore&lt;/h2&gt;

&lt;p&gt;Developer laptops are attractive because they contain context.&lt;/p&gt;

&lt;p&gt;They hold source trees, dotfiles, shell history, local environment files, integrated development environment settings, package manager configuration, build artifacts, terminal output, AI agent logs, and temporary debugging notes. Many of these files are invisible during normal review. Many never leave the machine. Some sit in directories that developers rarely inspect.&lt;/p&gt;

&lt;p&gt;That makes local exposure difficult to manage with repository-only controls.&lt;/p&gt;

&lt;p&gt;A credential can appear in a &lt;code&gt;.env&lt;/code&gt; file, get printed into terminal history, land inside a test config, show up in build output, or be copied into an AI prompt during troubleshooting. None of that requires a malicious commit. None of it necessarily triggers a centralized scanner. Yet each moment can create real access risk.&lt;/p&gt;

&lt;p&gt;We need, as an industry, to scan the places where credentials are collected outside Git. Project workspaces, dotfiles, build output, and agent folders can all store copied tokens, configuration blocks, troubleshooting output, and cached context. Attackers harvest this local data because it can lead directly to valid access.&lt;/p&gt;

&lt;p&gt;The Shai-Hulud data gives that concern weight. Across &lt;a href="https://blog.gitguardian.com/honeytokens-on-the-developer-workstation/" rel="noopener noreferrer"&gt;6,943 compromised machines, it found 33,185 unique credentials&lt;/a&gt;, with at least 3,760 still valid when first checked. That is not a theoretical workstation problem. It is a practical attacker workflow.&lt;/p&gt;

&lt;p&gt;Compromise the machine. Search the context. Extract the access. Move on.&lt;/p&gt;

&lt;p&gt;The workstation has become a security boundary because so many tools assume the local environment is trusted. Package managers run install scripts. Extensions read project files. Terminals expose environment variables. Local automation touches real systems. AI agents can read files, run commands, and summarize outputs.&lt;/p&gt;

&lt;p&gt;Each of those actions may be useful, but each can also become a path for accidental exposure or malicious instruction.&lt;/p&gt;

&lt;p&gt;AI-assisted development adds a newer layer to the same problem. AI coding tools now work closer to the developer's files, terminal, editor, and environment variables. A prompt can contain a credential. A tool can call and read a sensitive file. A generated command can print access material into logs or model context. An agent can combine harmless-looking steps into a risky action.&lt;/p&gt;

&lt;p&gt;The exposure surface is no longer just human typing plus code review. It now includes the interaction between humans, local tools, automated agents, and external services. And as we found in our report, more people are using coding assistants, and those who let the agent &lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2026?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;co-author their commits are leaking twice as many secrets per commit&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Security controls have to meet the workflow where it actually happens.&lt;/p&gt;

&lt;h2&gt;Earlier Checkpoints Reduce Damage&lt;/h2&gt;

&lt;p&gt;Traditional supply chain controls still matter. Teams still need source control protections, dependency scanning, secure continuous integration and delivery, artifact integrity, release controls, and production deployment guardrails.&lt;/p&gt;

&lt;p&gt;But those controls often fire after the risky moment has already happened.&lt;/p&gt;

&lt;p&gt;A developer may have created a local file with a credential. An AI assistant may have received sensitive context. A package install script may have read environment variables. A token may have entered local Git history before it ever reaches a remote repository.&lt;/p&gt;

&lt;p&gt;Rotating a credential after it reaches a shared repository can become a full incident response exercise. Someone has to identify ownership, revoke the credential, issue a replacement, check usage, test dependent applications, review access, clean history where possible, and document the event. That work is necessary, but it is expensive.&lt;/p&gt;

&lt;p&gt;Catching the same issue while the developer is still editing a file is simpler. Remove it. Replace it with a safe reference. Keep moving.&lt;/p&gt;

&lt;p&gt;The strongest model treats credential detection as a continuous developer-side control, not an occasional cleanup task. The tool has to sit where developers already work: in the editor, in Git hooks, in the terminal, and inside AI coding workflows.&lt;/p&gt;

&lt;h2&gt;Protecting Your Developers' Secrets&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/gitguardian/ggshield?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;ggshield is the GitGuardian CLI&lt;/a&gt; for scanning developer workflows. You can run ggshield locally or in continuous integration environments, where it provides guardrails across the software development lifecycle and detects hundreds of types of hardcoded credentials.&lt;/p&gt;

&lt;p&gt;A local scan catches problems before code moves into shared infrastructure. A continuous integration scan catches problems after code leaves the laptop. A pre-receive hook can prevent a secret from being pushed to a shared repo or system. Using the same tooling across these points gives teams consistency without forcing developers into a separate security process.&lt;/p&gt;

&lt;h3&gt;Git Hooks Add Another Layer Of Protection&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.gitguardian.com/ggshield-docs/integrations/git-hooks/pre-commit?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Using ggshield pre-commit hooks&lt;/a&gt; means a scan runs before Git creates a commit. Teams can configure it through the pre-commit framework, install it locally for specific repositories, or install it globally across current and future repositories on a developer workstation.&lt;/p&gt;

&lt;p&gt;The global option is important. Not every leak happens in the main codebase. Temporary repositories, test folders, side projects, cloned examples, and one-off experiments all create exposure. A repository-by-repository rollout leaves gaps. A global hook gives the developer machine a broader default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.gitguardian.com/ggshield-docs/integrations/git-hooks/pre-push?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;A pre-push hook&lt;/a&gt; catches a later moment. It runs before code leaves the machine for a remote repository. GitGuardian documents local, framework-based, and global installation modes for this control as well. Together, pre-commit and pre-push hooks create two useful gates: one before local history becomes durable, and one before code reaches shared infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Finding Secrets On Save
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://blog.gitguardian.com/visual-studio-code-extension/" rel="noopener noreferrer"&gt;GitGuardian's VS Code extension&lt;/a&gt; uses the bundled ggshield CLI to scan code as developers write or modify it. A scan is run automatically on saving a file. Findings are shown instantly and directly inside the editor through code annotations, status bar warnings, and the Problems panel. This extension also works with Cursor, Antigravity, and Windsurf.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzl2cvclb1qkikvqcfbzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzl2cvclb1qkikvqcfbzv.png" alt="GitGuardian VS Code Extension in action" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Security controls fail when they are too late, too noisy, or too far away from the mistake. A good local control gives feedback in context. It explains what happened. It helps the developer fix the issue before it becomes a ticket, a broken build, or an incident.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Tools Need Guardrails At The Handoff Points
&lt;/h2&gt;

&lt;p&gt;AI coding tools deserve special attention because they change where leakage can occur.&lt;/p&gt;

&lt;p&gt;An AI workflow may expose sensitive material before code exists as a file. A developer might paste a credential into a prompt while debugging. An agent might read a local configuration file. A tool call might execute a command that prints environment variables. Output from that command might then move into the model context or local logs.&lt;/p&gt;

&lt;p&gt;That is a different path than traditional source code leakage.&lt;/p&gt;

&lt;p&gt;GitGuardian's AI coding tools integration addresses this by placing controls inside the hook systems of tools such as Cursor, Claude Code, and VS Code with GitHub Copilot. The integration scans three stages: prompt submission, pre-tool use, and post-tool use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7j328sa8yrbycod7tkt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7j328sa8yrbycod7tkt.png" alt="GitGuardian AI hooks in action" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prompt submission scanning checks content before it reaches the model and blocks the prompt when credentials are found. Pre-tool use scanning checks commands, file reads, and Model Context Protocol calls before execution, blocking risky actions before they run. Post-tool use scanning checks outputs after execution and sends a desktop notification when credentials appear.&lt;/p&gt;

&lt;p&gt;That structure fits how agentic tools operate.&lt;/p&gt;

&lt;p&gt;The risky moment may be a prompt. It may be a file read. It may be a shell command. It may be the output of a tool the developer did not manually inspect. A repository-only control sees too little of this flow. A hook inside the AI workflow can stop exposure at the handoff point.&lt;/p&gt;

&lt;p&gt;The editor catches the issue while the developer writes. AI hooks catch sensitive material before prompts, tool calls, or outputs move it somewhere risky. Git hooks catch credentials before they enter commit history or leave the laptop. Continuous integration and server-side controls provide backup once code reaches shared systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Layered Prevention Without Forcing A Separate Workflow
&lt;/h2&gt;

&lt;p&gt;Developer environments are now connected, automated, and increasingly assisted by tools that can act on local context. Security has to account for that reality. Waiting for a remote scan is too late for credential exposure.&lt;/p&gt;

&lt;p&gt;The better model is straightforward: find credentials earlier, block them closer to where they appear, and reduce the chance that a developer's laptop becomes the easiest path into the software supply chain.&lt;/p&gt;

&lt;p&gt;GitGuardian's ggshield, IDE extensions, AI hooks, and Git hooks all point toward that model. They bring detection into the places developers already use, rather than asking developers to leave their workflow for security. They reduce the time between mistakes and feedback. They give teams a consistent detection engine across local development, AI-assisted coding, Git workflows, and automation.&lt;/p&gt;

&lt;p&gt;The supply chain now includes the workstation.&lt;/p&gt;

&lt;p&gt;Treat it that way.&lt;/p&gt;

</description>
      <category>security</category>
      <category>ai</category>
      <category>devsecops</category>
      <category>devops</category>
    </item>
    <item>
      <title>Git Clean, Git Remove file from commit - Cheatsheet</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 07 May 2026 13:07:34 +0000</pubDate>
      <link>https://forem.com/gitguardian/git-clean-git-remove-file-from-commit-cheatsheet-3ola</link>
      <guid>https://forem.com/gitguardian/git-clean-git-remove-file-from-commit-cheatsheet-3ola</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Learn how to remove files from git commits, whether staged, recent, or deep in history, to eliminate exposed secrets and sensitive data. This guide covers safe removal techniques, advanced history rewriting with git-filter-repo, and critical security steps like credential revocation. Understand the risks of incomplete cleanup and why automated secrets detection and pre-commit scanning are essential for robust codebase security.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Learn how to safely remove confidential information from your git repository. Whether you need to excise an entire file or edit a file without removing it, this tutorial will guide you through the process. Plus, get tips on preventing future headaches with GitGuardian!&lt;/p&gt;


&lt;p&gt;You know that &lt;a href="https://blog.gitguardian.com/secrets-credentials-api-git/" rel="noopener noreferrer"&gt;adding secrets to your git repository&lt;/a&gt; (even a private one) is a bad idea, because doing so &lt;strong&gt;risks exposing confidential information to the world&lt;/strong&gt;. But mistakes were made, and now you need to figure out how to excise confidential information from your repo. &lt;strong&gt;Because git keeps a history of &lt;em&gt;everything&lt;/em&gt;, it's often not enough to simply remove the secret or file, commit, and push&lt;/strong&gt;: we might need to do a bit of deep cleaning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thankfully, for simpler cases, git provides commands that make cleaning things up easy&lt;/strong&gt;. And in more complicated cases, we can use git-filter-repo, a tool recommended by the core git developers for deep cleaning an entire repository.&lt;/p&gt;

&lt;p&gt;First and foremost, if there is reason to think that the secret has escaped into the world, and you can &lt;a href="https://blog.gitguardian.com/what-to-do-if-you-expose-a-secret/" rel="noopener noreferrer"&gt;revoke the secret&lt;/a&gt;, do so. How to revoke a secret is going to vary quite a lot depending on what the secret protects. If you don't know how to revoke it, you will need help from the owner of the resource protected by the secret.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Need to quickly see what scenario applies to you? Check out our cheatsheet flow chart below.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/da8kiytlc/image/upload/v1611932656/Cheatsheets/RewritingYourGitHistory-Cheatsheet-Final_weq1l2.pdf?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4vtxwdb82ib6f0vzrkz6.png" alt="Git history rewriting cheatsheet flowchart" width="500" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/da8kiytlc/image/upload/v1611932656/Cheatsheets/RewritingYourGitHistory-Cheatsheet-Final_weq1l2.pdf?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Download the git history cheatsheet&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let's consider different scenarios to see how to clean things up.&lt;/p&gt;

&lt;h1&gt;
  
  
  Have you pushed your work up yet?
&lt;/h1&gt;

&lt;h2&gt;
  
  
  NO
&lt;/h2&gt;

&lt;p&gt;No? That's great. Please don't push it up just yet. If you have any uncommitted work, we can use &lt;code&gt;git stash&lt;/code&gt; to save it. This sets your work aside in a temporary "stash" so that we can work with the git repository without losing anything you haven't committed yet. When we're done cleaning things up, you can use &lt;code&gt;git stash pop&lt;/code&gt; to restore your work.&lt;/p&gt;
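&lt;p&gt;Here's that flow end to end in a throwaway repository (a sketch; file names and messages are illustrative):&lt;/p&gt;

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email dev@example.com
git config user.name Dev
echo 'notes v1' > notes.txt
git add .
git commit -qm "initial"
echo 'work in progress' >> notes.txt   # an uncommitted edit
git stash             # set the edit aside; the working tree is clean again
git status --porcelain                 # prints nothing: safe to rewrite history
git stash pop         # when the cleanup is done, restore the edit
grep 'work in progress' notes.txt      # the edit is back in the working tree
```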

&lt;h2&gt;
  
  
  YES
&lt;/h2&gt;

&lt;p&gt;If you have already pushed a commit containing a secret, or just discovered a secret in your existing history, what happens next depends on whether other people work on this branch. If you work alone, there's nothing to do at this point; you can skip to the next step. If you work as part of a team, things get more complicated, because we need everyone to act in a coordinated way.&lt;/p&gt;

&lt;p&gt;First of all, we need to determine who else is affected by the secret's presence, because we'll need to coordinate everyone's actions. If the secret only appears in the branch you're working on, you only need to coordinate with anyone else who is also working off of that branch. However, if you found the secret lurking further back in git history, perhaps in your &lt;code&gt;master&lt;/code&gt; or &lt;code&gt;main&lt;/code&gt; branch, you'll need to coordinate with everyone working in the repository.&lt;/p&gt;

&lt;p&gt;Let the others affected know that a secret was found that needs to be excised from everyone's git history. When you edit the git history to remove a file, it can cause problems with your teammates' local clones; moreover, they can end up re-inserting the secret back into the public repository when they push their work. So it is important that everyone affected is in sync for the excision to work. This means that everyone needs to stop what they are doing, close outstanding PRs, and push up work that's in progress.&lt;/p&gt;

&lt;p&gt;Now, let's make a fresh clone.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delete your existing clone in its entirety.&lt;/li&gt;
&lt;li&gt;Make a fresh clone with &lt;code&gt;git clone [repository URL]&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Change into the project directory with &lt;code&gt;cd [project name]&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Download the entire repository history: &lt;code&gt;git pull --all --tags&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The last step will look a little bit familiar. &lt;code&gt;git pull&lt;/code&gt; tells git to grab updates from the remote repository and apply them in the current branch (when it makes sense to do so, that is, when the local branch is set to track a remote branch). But git is smart: it doesn't pull everything down, only what's needed. This is where the &lt;code&gt;--all&lt;/code&gt; flag comes in. It tells git to grab every branch from the remote repository, and the &lt;code&gt;--tags&lt;/code&gt; flag tells git to grab every tag as well. So this command tells git to download the entire repository history, a complete and total clone. We have to do this because the commit containing the secret may exist in more than one branch or tag, and we need to ensure we don't scrub our secret from only a portion of the repository history. As a result, this command can take a very long time if the repository is very large.&lt;/p&gt;
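&lt;p&gt;The four steps above can be sketched end to end; here a local bare repository stands in for the remote, and the names are illustrative:&lt;/p&gt;

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare origin.git            # stand-in for the remote (e.g. GitHub)
git clone -q origin.git old-clone        # the clone we are about to discard
cd old-clone
git config user.email dev@example.com
git config user.name Dev
echo 'readme' > README.md
git add .
git commit -qm "initial"
git push -q origin HEAD
git tag v1.0
git push -q origin v1.0
cd "$tmp"
rm -rf old-clone                         # step 1: delete the existing clone
git clone -q origin.git project          # step 2: make a fresh clone
cd project                               # step 3: enter the project directory
git pull --all --tags                    # step 4: download every branch and tag
```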

&lt;p&gt;Move on to the next step.&lt;/p&gt;

&lt;h2&gt;
  
  
  Git Remove File from Commit: Understanding the Difference Between Staged and Committed Files
&lt;/h2&gt;

&lt;p&gt;Before diving into file removal techniques, it's crucial to understand the distinction between staged and committed files in Git's workflow. When you execute &lt;code&gt;git add&lt;/code&gt;, files move to the staging area (the index), but they're not yet part of the repository history. Once you run &lt;code&gt;git commit&lt;/code&gt;, these staged changes become part of a commit object with a unique SHA hash.&lt;/p&gt;

&lt;p&gt;This distinction is critical when you need to remove a file from a git commit, because the approach varies significantly. For staged files that haven't been committed yet, simple commands like &lt;code&gt;git reset HEAD &amp;lt;filename&amp;gt;&lt;/code&gt; or the newer &lt;code&gt;git restore --staged &amp;lt;filename&amp;gt;&lt;/code&gt; will unstage the file without affecting your working directory. However, once files are committed, you're dealing with Git's immutable history, requiring more sophisticated techniques like &lt;code&gt;git reset --soft HEAD~1&lt;/code&gt; followed by selective staging, or &lt;code&gt;git commit --amend&lt;/code&gt; for the most recent commit.&lt;/p&gt;

&lt;p&gt;Understanding this workflow prevents a common mistake where developers accidentally delete files from their working directory when they only intended to remove them from a commit. Always check &lt;code&gt;git status&lt;/code&gt; before executing removal commands to ensure you're targeting the correct state.&lt;/p&gt;
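&lt;p&gt;A quick sketch of the staged-but-not-committed case, in a throwaway repository (the &lt;code&gt;.env&lt;/code&gt; name is illustrative):&lt;/p&gt;

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email dev@example.com
git config user.name Dev
echo 'print("app")' > main.py
git add .
git commit -qm "initial"
echo 'AUTH_TOKEN=123abc' > .env    # oops: a secret, staged by mistake
git add .env
git restore --staged .env          # unstage it; the file stays on disk untouched
git status --porcelain .env        # shows '?? .env': untracked again, not in the next commit
```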

&lt;h1&gt;
  
  
  How complicated is the situation?
&lt;/h1&gt;

&lt;h2&gt;
  
  
  The secret is in the last commit, and there's nothing else in the last commit
&lt;/h2&gt;

&lt;p&gt;In this case, we can drop the last commit in its entirety. We do this with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git reset --hard HEAD~1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What does this command do? Let's break it apart a little bit. &lt;code&gt;git reset&lt;/code&gt; is how we tell git that we want to undo recent changes. Normally, this command by itself tells git to unstage anything we've added with &lt;code&gt;git add&lt;/code&gt;, but haven't committed yet. This version of resetting isn't sufficient for our purposes. But &lt;code&gt;git reset&lt;/code&gt; is more flexible than that. We can tell git to take us back in time to a previous commit as well. We do that by telling git which commit to rewind to. We can use a commit's identifier (its "SHA"), or we can use an indirect reference. &lt;code&gt;HEAD&lt;/code&gt; is what git calls the most recent commit on the checked-out branch. &lt;code&gt;HEAD~1&lt;/code&gt; means "the first commit prior to the most recent" (likewise &lt;code&gt;HEAD~2&lt;/code&gt; means "two commits prior to the most recent").&lt;/p&gt;

&lt;p&gt;Finally, the &lt;code&gt;--hard&lt;/code&gt; tells git to throw away any differences between the current state and the state we're resetting to. If you leave off the &lt;code&gt;--hard&lt;/code&gt;, your changes, including the secret, won't be discarded. With &lt;code&gt;--hard&lt;/code&gt;, the differences will be deleted, gone forever (which is precisely what we want!).&lt;/p&gt;

&lt;p&gt;Once you've done a hard reset, that's it! You're done. Your work has been destructively undone, and you can pick back up where you were.&lt;/p&gt;

&lt;h2&gt;
  
  
  The secret is in the last commit, but there were other changes too
&lt;/h2&gt;

&lt;p&gt;In this case, we don't want to completely drop the last commit. We want to edit the last commit instead. Edit your code to remove the secrets, and then add your changes as usual. Then, instead of making a new commit, we'll tell git we want to amend the previous one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add [FILENAME]
git commit --amend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We all know &lt;code&gt;git commit&lt;/code&gt;, but the &lt;code&gt;--amend&lt;/code&gt; flag is our friend here. This tells git that we want to edit the previous commit, rather than creating a new one. We can continue to make changes to the last commit in this way, right up until we're ready to either push our work, or start on a new commit.&lt;/p&gt;

&lt;p&gt;Once you've amended the commit, you're done! The secret's gone, you can carry on as you were.&lt;/p&gt;

&lt;h2&gt;
  
  
  The secret is beyond the last commit
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;It's complicated&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you know you committed a secret, but have since committed other changes, things get trickier quickly. In anything but the simplest cases, we are going to want a more powerful tool to help us do a &lt;a href="https://www.gitguardian.com/videos/how-to-permanently-remove-files-from-git-and-rewrite-your-git-history?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;deep clean of the repository&lt;/a&gt;. We're going to use &lt;code&gt;git-filter-repo&lt;/code&gt;, a tool recommended by the git maintainers that will help us to rewrite history in a more user-friendly way than the native git tooling.&lt;/p&gt;

&lt;p&gt;A technical aside to those familiar with the concept of rebasing. (If you don't know what that means, feel free to skip this paragraph.) All of the cases covered below can of course be managed using native git tools, particularly by rebasing. Sometimes rebasing is relatively painless, but in the kinds of scenarios we're presenting here, it is going to be a tedious and deeply error-prone process. This is why I favor purpose-built tools like git-filter-repo over rebasing: it is far better not to open the door to mistakes in the first place. From my own personal experience, recovering from a botched rebase is extremely time consuming, and often nearly impossible. Better to use the right tool for the job.&lt;/p&gt;

&lt;p&gt;First, install &lt;code&gt;git-filter-repo&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, let's assess the situation to determine which technique applies. Sometimes secrets are files, and sometimes secrets are lines of code. For example, if you accidentally committed an SSH key or TLS certificate file, these are self-contained files that you'll need to excise. On the other hand, maybe you have a single line of code containing an API key that's part of a larger source file. In that case, you want to modify one or more lines of a file without deleting it.&lt;/p&gt;

&lt;p&gt;Git-filter-repo can handle both cases, but requires different syntax for each.&lt;/p&gt;

&lt;h2&gt;
  
  
  Excise an entire file
&lt;/h2&gt;

&lt;p&gt;To tell &lt;code&gt;git-filter-repo&lt;/code&gt; to excise a file from the git history, we need only a single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git filter-repo --use-base-name --path [FILENAME] --invert-paths
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--use-base-name&lt;/code&gt; option tells &lt;code&gt;git-filter-repo&lt;/code&gt; that we are specifying a filename, and not a full path to a file. You can leave off this option if you would rather specify the full path explicitly.&lt;/p&gt;

&lt;p&gt;Normally, &lt;code&gt;git-filter-repo&lt;/code&gt; works by ignoring the filenames specified (they are, as the name suggests, filtered out). But we want the inverse behavior: we want &lt;code&gt;git-filter-repo&lt;/code&gt; to ignore everything except the specified file. So we must pass &lt;code&gt;--invert-paths&lt;/code&gt; to tell it this. If you leave off &lt;code&gt;--invert-paths&lt;/code&gt;, you'll excise everything except the specified file, which is the exact opposite of what we want, and would likely be disastrous. Please don't do that.&lt;/p&gt;

&lt;h3&gt;
  
  
  Edit a file without removing it
&lt;/h3&gt;

&lt;p&gt;If you only need to edit one or more lines in a file without deleting the file, &lt;code&gt;git-filter-repo&lt;/code&gt; takes a sequence of search-and-replace commands (optionally using regular expressions).&lt;/p&gt;

&lt;p&gt;First, identify all the lines containing secrets that need to be excised. You'll also need to work out a plan for how you will replace those lines. Perhaps just deleting them is enough. But perhaps they need to be modified to prevent a runtime crash. Next, create a file containing the search-and-replace commands, called &lt;code&gt;replacements.txt&lt;/code&gt;. Make sure it's in a folder outside of your repo, for example, the parent folder.&lt;/p&gt;

&lt;p&gt;The format of this file is one search-and-replace command per line, using the format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ORIGINAL==&amp;gt;REPLACEMENT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, suppose that you've hard-coded an API token into your code, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AUTH_TOKEN='123abc'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now suppose that you've decided that it's better to load the API token from an environment variable, as such:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AUTH_TOKEN=ENV['AUTH_TOKEN']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can tell &lt;code&gt;git-filter-repo&lt;/code&gt; to search for the hard-coded token and replace it with the environment variable by adding this line to &lt;code&gt;replacements.txt&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'123abc'==&amp;gt;ENV['AUTH_TOKEN']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have multiple secrets you need to excise, you can have more than one rule like this in &lt;code&gt;replacements.txt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Finally, assuming you placed &lt;code&gt;replacements.txt&lt;/code&gt; in the parent directory, we invoke &lt;code&gt;git-filter-repo&lt;/code&gt; with our search-and-replace commands like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git filter-repo --replace-text ../replacements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sometimes you might get an error saying you're not working from a clean clone. That's OK. Git-filter-repo is making irreversible changes to your local repository, and it wants to be certain that you have a backup before it does that. Of course, we do have a remote repository, and we're working from a local clone. And of course we are very interested in making irreversible edits to our commit history—we have a secret to purge! So there's no need for &lt;code&gt;git-filter-repo&lt;/code&gt; to worry. We can reassure it that we are OK with making irreversible changes by adding the &lt;code&gt;--force&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git filter-repo --replace-text ../replacements.txt --force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And now you have a clean git history! You'll want to validate your work by compiling your software or running your test suite. Then once you're satisfied that nothing is broken, move on to the next step to propagate the new history to your remote repository and the rest of the team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced Scenarios: Removing Files from Multiple Commits
&lt;/h2&gt;

&lt;p&gt;While the sections above cover basic scenarios, many developers encounter situations where they need to &lt;strong&gt;remove files from commits&lt;/strong&gt; at multiple points in their history. This commonly occurs when sensitive data like API keys or configuration files were committed multiple times before being detected.&lt;/p&gt;

&lt;p&gt;For commits that aren't the most recent, you'll need to use interactive rebase with &lt;code&gt;git rebase -i HEAD~n&lt;/code&gt; where n is the number of commits to review. During the interactive rebase, you can mark commits for editing with &lt;code&gt;edit&lt;/code&gt;, then use &lt;code&gt;git reset HEAD~1&lt;/code&gt; to unstage files from that specific commit, remove the unwanted files, and continue with &lt;code&gt;git rebase --continue&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Alternatively, for more complex scenarios involving the same file across many commits, &lt;code&gt;git filter-repo --path &amp;lt;filename&amp;gt; --invert-paths&lt;/code&gt; provides a more robust solution. This tool rewrites the entire repository history, permanently removing the specified file from all commits. However, this approach requires careful coordination with your team since it fundamentally alters the repository's commit hashes, making it incompatible with existing clones.&lt;/p&gt;

&lt;p&gt;Always create a backup branch before attempting multi-commit file removal: &lt;code&gt;git branch backup-before-cleanup&lt;/code&gt; ensures you can recover if something goes wrong during the history rewriting process.&lt;/p&gt;
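&lt;p&gt;Here is a minimal sketch of that edit-during-rebase flow in a throwaway repository. It uses &lt;code&gt;GIT_SEQUENCE_EDITOR&lt;/code&gt; with GNU &lt;code&gt;sed&lt;/code&gt; so the todo list is rewritten non-interactively; in real use you would simply change &lt;code&gt;pick&lt;/code&gt; to &lt;code&gt;edit&lt;/code&gt; in your editor. File names are illustrative:&lt;/p&gt;

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name Dev
echo 'print("app")' > app.py
git add .
git commit -qm "add app"
echo 'setting=1' > config.ini
echo 'AUTH_TOKEN=123abc' > secret.env    # the file we'll need to purge
git add .
git commit -qm "add config"
echo '# more work' >> app.py
git add .
git commit -qm "more work"

git branch backup-before-cleanup          # safety net before rewriting history

# Replay the last two commits, marking the middle one for editing.
# GIT_SEQUENCE_EDITOR rewrites the todo list so no editor opens (GNU sed).
GIT_SEQUENCE_EDITOR='sed -i "s/^pick \(.*add config\)/edit \1/"' git rebase -i HEAD~2
git rm -q --cached secret.env             # drop the file from that commit
rm secret.env                             # and from the working tree
git commit -q --amend --no-edit
git rebase --continue                     # replay the remaining commits
```

&lt;p&gt;After the rebase finishes, &lt;code&gt;secret.env&lt;/code&gt; no longer appears anywhere in the branch's history, while &lt;code&gt;config.ini&lt;/code&gt; and the later work survive; the backup branch still holds the old history until you delete it.&lt;/p&gt;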

&lt;h1&gt;
  
  
  Do you need to coordinate with your team?
&lt;/h1&gt;

&lt;h2&gt;
  
  
  No
&lt;/h2&gt;

&lt;p&gt;If you only just added the secret, and haven't pushed any of your work yet, you're done. Just keep working like you had been, and no one will ever know. Don't forget to restore any uncommitted work you set aside earlier by popping it from the stash with &lt;code&gt;git stash pop&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Otherwise, we'll need to overwrite what's on your remote git repository (such as GitHub), as it still contains tainted history. We can't simply push, however: The remote repository will refuse to accept our push because we've re-written history. So we'll need to force push instead. Moreover, if our re-writes to history affect multiple branches or tags, we'll need to push them all up. We can accomplish all of this like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push --all --force &amp;amp;&amp;amp; git push --tags --force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: The &lt;code&gt;--all&lt;/code&gt; argument does not automatically push any updated tags, and git does not allow the push arguments &lt;code&gt;--all&lt;/code&gt; and &lt;code&gt;--tags&lt;/code&gt; to be used in the same call. If you need to run both commands, you can do so in a single line, thanks to the &lt;code&gt;&amp;amp;&amp;amp;&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  YES
&lt;/h2&gt;

&lt;p&gt;If you work as part of a team, now comes the hard part. Everyone you identified as affected at the beginning of this process still has the old history. They need to synchronize against the revised history you just force-pushed. This is where errors can happen, and more importantly, where frustration can occur.&lt;/p&gt;

&lt;p&gt;Ideally everyone pushed their work up before you edited the history. In that case, everyone can simply make a clean clone of the repo and pick up where they left off.&lt;/p&gt;

&lt;p&gt;But if someone failed to push their work up before you re-wrote history, they're going to find they have a number of conflicts that need to be resolved when they pull. Instead, they need to fetch the new history from the remote repository, and rebase their hard work on the re-written history. To do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git fetch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git rebase -i origin/[branchname]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you aren't familiar with &lt;code&gt;git fetch&lt;/code&gt;, this command tells git to download new data from the remote repository, but unlike &lt;code&gt;git pull&lt;/code&gt;, it doesn't attempt to merge new commits into your current working branch. So the fetch here is requesting all the newly re-written history.&lt;/p&gt;

&lt;p&gt;Once all the new history is fetched, the developer will need to re-apply all their hard work on top of the re-written history. This is done by rebasing, which is a substantial topic in its own right; the details are out of scope for this tutorial.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Implications: When File Removal Isn't Enough
&lt;/h2&gt;

&lt;p&gt;From a cybersecurity perspective, simply knowing how to remove a file from git commit is only the first step when dealing with exposed secrets or sensitive data. Even after successful removal from Git history, the sensitive information may have already been exposed through various vectors that require immediate attention.&lt;/p&gt;

&lt;p&gt;If the repository was ever public or accessible to unauthorized users, you must assume the sensitive data has been compromised. This means immediately revoking any exposed credentials, rotating API keys, and updating passwords. Git's distributed nature means that anyone who cloned or forked the repository before your cleanup still has access to the sensitive data in their local copies.&lt;/p&gt;

&lt;p&gt;For organizations using GitGuardian's secrets detection capabilities, the platform can help identify when sensitive data has been exposed and provide guidance on proper remediation steps. This includes not just the technical Git operations, but also the security protocols necessary to minimize damage from the exposure.&lt;/p&gt;

&lt;p&gt;Additionally, consider implementing pre-commit hooks and automated scanning to prevent future incidents. Tools like GitGuardian's &lt;a href="https://docs.gitguardian.com/ggshield-docs/integrations/git-hooks/pre-commit?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;pre-commit hooks&lt;/a&gt; can catch secrets before they enter your repository, eliminating the need for complex history rewriting operations and reducing security risks associated with exposed credentials in version control systems.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;If you've made it this far, congratulations! You've successfully and securely eliminated a secret or file from your git history. But it didn't have to be this way. You can prevent all this headache with GitGuardian.&lt;/p&gt;

&lt;p&gt;GitGuardian is an automated secret detection solution that integrates with your git repos to scan your code for secrets. GitGuardian first makes an initial scan to clean your history, then integrates with your devops pipeline to scan all incremental changes as they arrive and notify you before you have to do complex surgery! Try it out here.&lt;/p&gt;

&lt;p&gt;If you are interested in other cheat sheets about security we have put together these for you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to improve your &lt;a href="https://blog.gitguardian.com/how-to-improve-your-docker-containers-security-cheat-sheet/" rel="noopener noreferrer"&gt;Docker containers security&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Best practices for &lt;a href="https://blog.gitguardian.com/secrets-api-management/" rel="noopener noreferrer"&gt;managing and storing secrets including API keys and other credentials&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  FAQs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is the safest way to remove a file from a Git commit after it has been pushed?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To safely remove a file from a Git commit after pushing, use history-rewriting tools like &lt;code&gt;git filter-repo&lt;/code&gt; to purge the file from all relevant commits. After rewriting history, coordinate with your team and force-push the updated repository. Always revoke any exposed secrets immediately, as removal from Git does not prevent prior exposure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I remove a file from a Git commit if it is only staged and not yet committed?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If a file is staged but not committed, use &lt;code&gt;git reset HEAD &amp;lt;filename&amp;gt;&lt;/code&gt; or &lt;code&gt;git restore --staged &amp;lt;filename&amp;gt;&lt;/code&gt; to unstage it. This removes the file from the staging area without affecting your working directory, ensuring it will not be included in the next commit.&lt;/p&gt;
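&lt;p&gt;A quick sketch with a throwaway repository and a hypothetical &lt;code&gt;config.env&lt;/code&gt; file:&lt;/p&gt;

```shell
tmp=$(mktemp -d)
git -C "$tmp" init -q
echo "API_KEY=example" > "$tmp/config.env"
git -C "$tmp" add config.env                # staged by mistake
git -C "$tmp" restore --staged config.env   # removed from the index only
git -C "$tmp" status --porcelain            # "?? config.env": untracked again, file intact
```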

&lt;p&gt;&lt;strong&gt;What are the implications of using git filter-repo to remove files from multiple commits?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Using &lt;code&gt;git filter-repo&lt;/code&gt; to remove files from multiple commits rewrites the repository history, changing commit hashes. This requires all collaborators to re-clone or rebase their work on the new history. Always create a backup branch before proceeding and communicate changes to your team to avoid workflow disruptions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I remove sensitive data from a single recent commit without losing other changes?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To remove sensitive data from the most recent commit without discarding other changes, edit the file to remove the secret, stage the updated file, and run &lt;code&gt;git commit --amend&lt;/code&gt;. This updates the last commit with your modifications, effectively removing the sensitive data.&lt;/p&gt;
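&lt;p&gt;For example, with a hypothetical &lt;code&gt;app.conf&lt;/code&gt; that was committed with a secret in it:&lt;/p&gt;

```shell
tmp=$(mktemp -d)
git -C "$tmp" init -q
echo "password=s3cret" > "$tmp/app.conf"
git -C "$tmp" add app.conf
git -C "$tmp" -c user.name=dev -c user.email=dev@example.com commit -qm "add config"

# Scrub the secret, restage, and fold the fix into the previous commit
echo "password=CHANGE_ME" > "$tmp/app.conf"
git -C "$tmp" add app.conf
git -C "$tmp" -c user.name=dev -c user.email=dev@example.com commit -q --amend --no-edit

git -C "$tmp" show HEAD:app.conf   # the amended commit now reads "password=CHANGE_ME"
```

&lt;p&gt;Note that if the original commit was already pushed, amending alone is not enough: you still need to force-push the rewritten commit and rotate the credential.&lt;/p&gt;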

&lt;p&gt;&lt;strong&gt;What security steps should be taken after you remove a file from a Git commit containing secrets?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After removing a file from a Git commit that contained secrets, immediately revoke or rotate any exposed credentials. Assume the secret may have been accessed if the repository was public or shared. Implement automated secrets detection and pre-commit hooks to prevent future exposures and ensure compliance with security policies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I coordinate with my team when rewriting Git history to remove sensitive files?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before rewriting history, notify all affected team members and ensure they push any outstanding work. After force-pushing the cleaned history, teammates must re-clone the repository or rebase their changes onto the new history. Clear communication and a coordinated process are essential to prevent conflicts and data loss.&lt;/p&gt;
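&lt;p&gt;The resync step itself is short. Here is a sketch using local stand-in repositories and hypothetical names, with an amend playing the role of the history rewrite:&lt;/p&gt;

```shell
work=$(mktemp -d)
git init -q --bare "$work/origin.git"
git -C "$work/origin.git" symbolic-ref HEAD refs/heads/main

# Alice pushes a commit that contains a secret
git clone -q "$work/origin.git" "$work/alice"
echo "token=old" > "$work/alice/cfg"
git -C "$work/alice" add cfg
git -C "$work/alice" -c user.name=a -c user.email=a@example.com commit -qm "add cfg"
git -C "$work/alice" push -q origin HEAD:main

# Bob clones before the cleanup happens
git clone -q "$work/origin.git" "$work/bob"

# Alice rewrites history (amend stands in for a filter-repo run) and force-pushes
echo "token=REDACTED" > "$work/alice/cfg"
git -C "$work/alice" add cfg
git -C "$work/alice" -c user.name=a -c user.email=a@example.com commit -q --amend --no-edit
git -C "$work/alice" push -q --force origin HEAD:main

# Bob resyncs onto the rewritten history instead of merging the old one back
git -C "$work/bob" fetch -q origin
git -C "$work/bob" reset -q --hard origin/main
```

&lt;p&gt;Teammates who have unpushed work should rebase it onto the new history (&lt;code&gt;git rebase origin/main&lt;/code&gt;) rather than hard-resetting, or that work will be lost.&lt;/p&gt;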




&lt;p&gt;&lt;em&gt;This article is a guest post. Views and opinions expressed in this publication are solely those of the author and do not reflect the official policy, position, or views of GitGuardian.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>security</category>
      <category>tutorial</category>
      <category>programming</category>
    </item>
    <item>
      <title>Short-Lived Credentials in Agentic Systems: A Practical Trade-off Guide</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 07 May 2026 12:44:19 +0000</pubDate>
      <link>https://forem.com/gitguardian/short-lived-credentials-in-agentic-systems-a-practical-trade-off-guide-569c</link>
      <guid>https://forem.com/gitguardian/short-lived-credentials-in-agentic-systems-a-practical-trade-off-guide-569c</guid>
      <description>&lt;p&gt;Agentic systems need short-lived credentials as a baseline security control. &lt;a href="https://blog.gitguardian.com/ai-agents-authentication-how-autonomous-systems-prove-identity/" rel="noopener noreferrer"&gt;That point is pretty clear&lt;/a&gt;. The harder part is when teams move from architecture diagrams to production systems and discover how much operational machinery underpins that decision.&lt;/p&gt;

&lt;p&gt;Security teams often frame credential lifetime as a clean principle: short-lived is good, long-lived is bad. Production systems rarely live inside principles alone. In reality, they live inside retry logic, partial failures, identity providers, cloud platform quirks, third-party APIs, and on-call rotations. All of this is made more difficult by the probabilistic nature of AI systems.&lt;/p&gt;

&lt;p&gt;Agents behave differently from traditional services. A narrow service usually connects to a known set of systems and follows a fairly stable path. An agent can work across tools, call external APIs, carry context from one step to the next, and continue work after the original trigger is gone. The runtime path is less predictable, and the permission model has to account for that.&lt;/p&gt;

&lt;p&gt;Authentication is one of the few reliable controls that bounds what an autonomous system can reach, modify, and retain access to over time. In agentic systems, an authentication choice directly shapes blast radius and revocability.&lt;/p&gt;

&lt;h3&gt;
  
  
  The real decision sits inside production friction
&lt;/h3&gt;

&lt;p&gt;Production teams are balancing real constraints. They have to consider token issuance, refresh timing, identity federation, vault availability, third-party APIs, local development, failure recovery, and the cost of debugging expired credentials mid-workflow. That is why the more useful question is "Where does short-lived access materially reduce blast radius, and where does the operational overhead need a more deliberate design?"&lt;/p&gt;

&lt;p&gt;A good answer ties credential lifetime to agent behavior, privilege, and execution model. A mature answer adds continuous secret monitoring on top, because agents still leak credentials, retries still fail, and temporary exceptions have a way of becoming permanent.&lt;/p&gt;

&lt;h2&gt;
  
  
  More Systems, More Context, More Places To Leak Secrets
&lt;/h2&gt;

&lt;p&gt;Agentic systems tend to touch more systems than single-purpose services. They commonly need to authenticate to APIs, SaaS tools, internal platforms, cloud resources, data stores, and orchestration layers. They often carry temporary context, delegated permissions, and state across those steps. That broader surface expands the number of places where a token can escape.&lt;/p&gt;

&lt;p&gt;Credentials can show up in logs, traces, prompts, tool arguments, memory stores, CI pipelines, deployment configs, notebooks, and local test environments. AI-assisted development adds more volume to all of those surfaces. The &lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2026?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian State of Secrets Sprawl 2026 report showed 28.65 million hardcoded secrets added to public GitHub in 2025&lt;/a&gt;, with leak rates in AI-assisted code running roughly double the broader GitHub baseline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiizd8ii9srts4duqmfkq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiizd8ii9srts4duqmfkq.png" alt="Claude Code co-authored commits leak secrets at 2.4x the baseline across all public github commits" width="767" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That data fits the reality that many teams already feel. More generated code means more output to review, more automation artifacts to inspect, and more opportunities for sensitive data to land where it should never have been written in the first place.&lt;/p&gt;

&lt;h3&gt;
  
  
  Standing permissions become more dangerous in autonomous workflows
&lt;/h3&gt;

&lt;p&gt;A long-lived credential attached to an agent carries a different kind of risk than the same credential attached to a predictable service. An agent can retry automatically, call adjacent tools, invent tools, pivot across systems, and continue acting after the original operator has moved on. A shared access token can also blur accountability across agents, environments, or tenants.&lt;/p&gt;

&lt;p&gt;Credentials in an agentic system are standing permissions attached to software that can improvise and are extremely goal-oriented. &lt;a href="https://blog.gitguardian.com/ai-agents-authentication-how-autonomous-systems-prove-identity/" rel="noopener noreferrer"&gt;Authentication defines what an agent can reach&lt;/a&gt;, how long it can keep reaching it, and how quickly a team can shut it down.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Short-Lived Credentials Actually Buy You
&lt;/h2&gt;

&lt;p&gt;TTL, or time to live, is the maximum period a credential remains valid. Shorter TTL reduces the maximum window of abuse after a leak. That is the core security gain, and it is easy to quantify.&lt;/p&gt;

&lt;p&gt;A static key valid for 90 days stays useful for up to 7,776,000 seconds. A 15-minute token stays useful for 900 seconds. That is an 8,640x reduction in the maximum exposure window.&lt;/p&gt;
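&lt;p&gt;The arithmetic is easy to verify:&lt;/p&gt;

```shell
# Maximum abuse window after a leak, using the figures from the text
static_ttl=$(( 90 * 24 * 60 * 60 ))   # 90-day static key: 7776000 seconds
short_ttl=$(( 15 * 60 ))              # 15-minute token: 900 seconds
echo "exposure window reduced by a factor of $(( static_ttl / short_ttl ))"   # 8640
```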

&lt;p&gt;That number doesn't erase risk, but it does cap it. An attacker with a short-lived token has less time for lateral movement, repeated calls, persistence, and quiet misuse. Incident responders also benefit because expiry often does containment work even when formal revocation is slow, cached, or inconsistently enforced across systems.&lt;/p&gt;

&lt;p&gt;Keeping the TTL as short as feasible is critical because breakout time, the interval between initial access and lateral movement, has been falling year after year. &lt;a href="https://www.crowdstrike.com/en-us/blog/crowdstrike-2026-global-threat-report-findings/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Breakout times under one minute have been observed in some cases&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.crowdstrike.com/en-us/blog/crowdstrike-2026-global-threat-report-findings/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40ov6szogpyoeid5molo.png" alt="CrowdStrike 2026 Global Threat Report breakout time findings" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Short lifetime matters most when privilege is high
&lt;/h3&gt;

&lt;p&gt;The value of a short TTL rises with privilege, reach, and uncertainty. High-privilege tokens should have the shortest lifetime a system can support reliably. Credentials used across trust boundaries deserve the same discipline. The same goes for tokens accessible to LLM-adjacent components, external tool connectors, or stateful agent memory, &lt;a href="https://www.youtube.com/watch?v=fvYYz87KjqM&amp;amp;ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;where leakage paths are harder to predict&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The unit of issuance matters too. Per-task tokens usually beat agent-global tokens. Per-session delegated access usually beats long-running shared access. Per-agent identity is much easier to audit than a service account reused across every instance in a workflow.&lt;/p&gt;

&lt;p&gt;Short-lived credentials reduce the size of an incident when a token is found and abused. They also improve attribution because each issued credential can be tied to a narrower slice of work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Safe TTL Depends on the Kind of Agent
&lt;/h2&gt;

&lt;p&gt;Not all agents are the same, and this needs to be factored into any discussion of access and time-to-live. Here are a few different use cases that all fall under the broad umbrella of "Agentic AI." Note: all TTL suggestions are based on general best practices; only you and your team can make a governance plan that best fits your specific needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Interactive user-facing agents
&lt;/h3&gt;

&lt;p&gt;User-facing copilots, internal assistants, and triage agents acting on behalf of a specific user usually fit best with a 5 to 15-minute TTL. Their access is tied closely to an active session. Silent refresh is often feasible. The security goal should be to quickly remove or reduce access when the user context ends.&lt;/p&gt;

&lt;h3&gt;
  
  
  Background workflow agents
&lt;/h3&gt;

&lt;p&gt;Scheduled document processing, enrichment jobs, and routine operational workflows often need more runtime headroom. A 15 to 60-minute TTL is usually a practical range. These systems still benefit from workload identity and on-demand issuance, but they also need sufficient time to complete routine work without causing unnecessary renewal failures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Long-running autonomous agents
&lt;/h3&gt;

&lt;p&gt;Multi-hour orchestration, remediation, and research workflows need a different pattern. A single broad credential that lives for hours creates too much concentrated risk. A better design segments access by stage, tool, or action class. Each step gets the narrowest possible credential for that slice of work. A 1 to 6-hour TTL may be operationally reasonable for some stages, but compartmentalization does more security work than the number alone.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fallback cases and explicit exceptions
&lt;/h3&gt;

&lt;p&gt;Some third-party dependencies still only support static API keys. Some legacy systems are hard to retrofit. Some refresh paths are fragile enough that teams keep a longer-lived fallback credential in reserve. Those exceptions need strict ownership, narrow scope, review cycles, and strong monitoring. They should sit within an exception process, not within the normal architecture.&lt;/p&gt;

&lt;p&gt;A safe TTL accounts for scope, issuer trust, revocation path, and observability, not just a reasonable reissuance interval.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Dynamic Issuance Gets Hard in Production
&lt;/h2&gt;

&lt;p&gt;Identity and runtime complexity pile up fast, and the operational friction is real. OAuth token exchange flows can be awkward in distributed systems. Workload identity federation varies across cloud providers. Vault and broker systems become important control planes that need their own availability and recovery story.&lt;/p&gt;

&lt;p&gt;Runtime logic adds more complexity. Tokens need caching. Expiry windows need careful handling. Clock drift can create edge cases. A refresh failure in the middle of a workflow can leave behind partial writes, repeated actions, and difficult replay logic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Developer experience shapes the security outcome
&lt;/h3&gt;

&lt;p&gt;Teams also feel this in day-to-day engineering. Local development becomes harder when credentials are minted dynamically. Staging environments need more supporting services. Debugging takes longer when the auth path includes brokers, temporary tokens, and policy evaluation. Application, platform, and security teams all need a shared operating model.&lt;/p&gt;

&lt;p&gt;That pressure explains why static credentials keep showing up. Teams do not choose them because they enjoy the risk. They choose them because dynamic issuance moves a large share of the work into day-two operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Brokered and Vaulted Access and Ephemeral Credentials
&lt;/h2&gt;

&lt;p&gt;A healthy pattern is straightforward: the workload proves its identity, and a broker, vault, or cloud identity plane verifies that identity against policy. Only then can the system issue scoped short-lived credentials for a specific task window. When the token expires, the workload has to authenticate again and pass policy again.&lt;/p&gt;
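&lt;p&gt;In pseudocode, the loop the workload runs looks roughly like this (the names are illustrative, not any specific broker's API):&lt;/p&gt;

```
loop for each task:
    proof    = attest_workload_identity()        # e.g. a platform-signed identity document
    decision = broker.check_policy(proof, task)  # identity verified against policy
    if decision.allowed:
        cred = broker.issue(scope = task.resources, ttl = short)  # scoped, short-lived
        run(task, cred)                          # credential dies at the task boundary
    # on expiry or at the next task, the workload must attest and pass policy again
```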

&lt;p&gt;While moving keys into vaults is generally a strong first step toward reducing standing-credential risk, it should not be the end goal. We should be moving towards &lt;a href="https://www.helpnetsecurity.com/2024/12/18/gitguardian-multi-vault-integration/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;vault-issued dynamic credentials&lt;/a&gt;, cloud-native &lt;a href="https://blog.gitguardian.com/aws-iam-outbound-identity-federation-with-gitguardian/" rel="noopener noreferrer"&gt;IAM security tokens&lt;/a&gt;, OAuth delegated access, &lt;a href="https://blog.gitguardian.com/getting-started-with-spiffe/" rel="noopener noreferrer"&gt;workload identity federation&lt;/a&gt;, and sidecar or node-local broker designs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Good implementation choices narrow the blast radius further
&lt;/h3&gt;

&lt;p&gt;Per-task issuance beats broad agent-global issuance. Brief caching can help reliability, but the cache should never outlive the intended task boundary. Identity proof should be separate from permission grants so the scope can change without rewriting the trust model. Issuance events should be logged. Secrets themselves should never be logged.&lt;/p&gt;

&lt;p&gt;That is the control plane story. But it still leaves one practical question unanswered: "How do we know when the real world drifts away from the intended design?"&lt;/p&gt;

&lt;h2&gt;
  
  
  Ephemeral Credentials Reduce Exposure. GitGuardian Finds The Failures Around Them.
&lt;/h2&gt;

&lt;p&gt;Short-lived credentials solve one class of problem very well. They shrink the abuse window after a leak. But they do not prevent credentials from leaking, and they do not guarantee that every agent in a real environment uses ephemeral access as the architecture diagram suggests. This is where GitGuardian can help.&lt;/p&gt;

&lt;p&gt;Agentic systems create more code, configs, prompts, logs, and automation artifacts. Every one of those surfaces can contain a secret, a token, or a fallback credential that was never supposed to persist. Some will be short-lived yet still valid long enough to matter. Some will be refresh tokens with a much longer useful life. Some will be static keys created during a rushed integration and forgotten after launch.&lt;/p&gt;

&lt;p&gt;Some will sit outside the approved issuance path entirely.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitguardian.com/monitor-internal-repositories-for-secrets?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian gives teams continuous visibility into that sprawl&lt;/a&gt;. It catches exposed secrets in source code, collaboration flows, development pipelines, and operational artifacts before those credentials become a hidden standing risk. That function grows more valuable, not less, in environments that have already adopted short-lived credentials. Mature teams still need a way to find the exceptions, the leaks, and the shortcuts.&lt;/p&gt;

&lt;h3&gt;
  
  
  The heart of the journey is operational, right where GitGuardian lives
&lt;/h3&gt;

&lt;p&gt;The hardest part of a credential strategy is usually not the first policy decision, but maintaining that strategy under delivery pressure, platform variance, and human shortcuts. This is where GitGuardian can help security teams move from principle to enforcement.&lt;/p&gt;

&lt;p&gt;GitGuardian helps teams detect whether their &lt;a href="https://blog.gitguardian.com/nhi-governance-is-the-outcome-gitguardian-is-how-you-get-there/" rel="noopener noreferrer"&gt;non-human identities have fallen outside their governance policies&lt;/a&gt;. You need to know about leaked tokens during their active lifetime and about forgotten fallback keys that linger after a migration. You need a clear way to identify secrets introduced by AI-assisted development and to surface shadow credentials that bypass the official broker or vault flow. You cannot shorten the path from exposure to response without this level of real-time insight.&lt;/p&gt;

&lt;p&gt;In practical terms, GitGuardian becomes the feedback loop for your credential architecture. It tells you whether your move toward ephemeral access is actually reducing the number of secrets in code and config. It tells you whether engineering teams are still creating one-off exceptions. It tells you where the safety net needs tightening before a small shortcut becomes a durable blind spot.&lt;/p&gt;

&lt;h3&gt;
  
  
  GitGuardian helps teams coordinate migration efforts
&lt;/h3&gt;

&lt;p&gt;A shift toward short-lived credentials often needs buy-in from platform teams, developers, and engineering leadership. That buy-in comes more easily when the security team can show where current exposure actually lives.&lt;/p&gt;

&lt;p&gt;GitGuardian helps produce that picture. It gives teams a way to &lt;a href="https://blog.gitguardian.com/sre-playbook-a-guide-to-discover-and-catalog-non-human-identities-nhi/" rel="noopener noreferrer"&gt;inventory existing identities, identify the highest-risk credentials, and prioritize migrations where the security gain is largest&lt;/a&gt;. That changes the internal conversation. The move to ephemeral credentials becomes a measurable risk-reduction program rather than a generic security ask.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruuy62j0iuveaybr9wmk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruuy62j0iuveaybr9wmk.png" alt="GitGuardian Workspace Identities list view" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In agentic environments, the GitGuardian platform supports the transition away from static credentials, helps detect the real-world failures that still happen during that transition, and gives teams continuous visibility after the new model is in place.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start With Visibility, Then Fix The Highest-Risk Paths
&lt;/h2&gt;

&lt;p&gt;To move towards better agentic access, you need to make a plan.&lt;/p&gt;

&lt;p&gt;Begin by inventorying agent-to-system credentials across code, configs, pipelines, notebooks, and runtime integrations. Find the static keys with the highest privilege and widest reuse. Add continuous secret detection everywhere those artifacts live.&lt;/p&gt;

&lt;p&gt;You need visibility before you can cleanly reduce standing risk.&lt;/p&gt;

&lt;h3&gt;
  
  
  Move new workflows to dynamic identity
&lt;/h3&gt;

&lt;p&gt;Use &lt;a href="https://blog.gitguardian.com/how-to-get-there-spiffe/" rel="noopener noreferrer"&gt;workload identity&lt;/a&gt;, brokered short-lived tokens, or scoped delegated access for new agent workflows. Set TTL by agent class rather than by team preference. Instrument issuance and expiry events. Measure refresh failures and recovery paths.&lt;/p&gt;

&lt;h3&gt;
  
  
  Break broad permissions into stages
&lt;/h3&gt;

&lt;p&gt;As systems mature, split long-running workflows into stages with separate credentials. Remove shared credentials across agents and tenants. Drill revocation and rotation so the response becomes operational muscle memory. GitGuardian will catch the exceptions, leaks, and drift that architecture alone will miss.&lt;/p&gt;

&lt;h3&gt;
  
  
  Treat long-lived credentials as governed exceptions
&lt;/h3&gt;

&lt;p&gt;Reserve long-lived credentials for explicit cases with clear owners, narrow scope, regular review, and compensating controls. Exception paths have a habit of becoming permanent, and this is something teams must stay on top of. Continuous monitoring is what keeps an exception from quietly turning back into the default.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Best Strategy Is Measurable and Hard to Abuse
&lt;/h2&gt;

&lt;p&gt;Short-lived credentials should be the default for agentic systems because they sharply reduce exposure when a credential crosses its intended boundary. That is the right baseline. Production systems still have to carry the weight of token brokers, workload identity, refresh logic, and brittle third-party integrations.&lt;/p&gt;

&lt;p&gt;If you are looking for a better path forward, we suggest tying TTL to agent behavior and privilege. Use dynamic issuance where it materially reduces blast radius. Segment long-running workflows. Keep long-lived credentials inside explicit exception handling. Then add continuous secret monitoring as the layer that sees what the model misses, catches what the rollout leaves behind, and shortens the distance from leak to response.&lt;/p&gt;

&lt;p&gt;Ephemeral access changes the risk curve. GitGuardian helps teams prove they are actually moving along with it. &lt;a href="https://www.gitguardian.com/book-a-demo?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;We would love to help you get started&lt;/a&gt; on your path to better agentic access management.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>devsecops</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>No Off Season: Three Supply Chain Campaigns Hit npm, PyPI, and Docker Hub in 48 Hours</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Wed, 06 May 2026 12:48:13 +0000</pubDate>
      <link>https://forem.com/gitguardian/no-off-season-three-supply-chain-campaigns-hit-npm-pypi-and-docker-hub-in-48-hours-33fi</link>
      <guid>https://forem.com/gitguardian/no-off-season-three-supply-chain-campaigns-hit-npm-pypi-and-docker-hub-in-48-hours-33fi</guid>
      <description>&lt;p&gt;After a few quieter weeks, three supply chain attacks put secrets back in the spotlight.&lt;/p&gt;

&lt;p&gt;Between April 21 and 23, 2026, three distinct attacks hit npm, PyPI, and Docker Hub simultaneously. Their targets differ and the threat actor groups might, but their objectives don't: in each case, the malware's &lt;strong&gt;primary goal was to steal secrets from developer environments&lt;/strong&gt; and &lt;strong&gt;CI/CD pipelines&lt;/strong&gt;. API keys, cloud credentials, SSH keys, and registry tokens were all targeted.&lt;/p&gt;

&lt;h2&gt;
  
  
  Campaign 1 - Checkmarx KICS: Compromised Security Scanner Turns on Its Users
&lt;/h2&gt;

&lt;p&gt;The first attack compromised official Checkmarx KICS Docker images and VS Code extensions. Docker flagged suspicious activity on the checkmarx/kics repository on April 22 and alerted Socket. An obfuscated payload harvested GitHub authentication tokens, AWS credentials, Azure and Google Cloud tokens, npm configuration files, SSH keys, and environment variables, compressing and encrypting everything before exfiltration. The payload swept up any API keys stored in environment variables.&lt;/p&gt;

&lt;p&gt;TeamPCP likely orchestrated the attack, based on posts they published on X immediately after disclosure. This would be the group's second Checkmarx attack in two months.&lt;/p&gt;

&lt;h2&gt;
  
  
  Campaign 2 - CanisterSprawl: A Worm That Turns Developer Machines into Launchpads
&lt;/h2&gt;

&lt;p&gt;On April 21, malicious versions of pgserve, a PostgreSQL server for Node.js, appeared on npm. The compromised versions inject a credential-harvesting script that runs via a postinstall hook on every npm install. It searches for npm publish tokens, and for each package the victim can publish, it bumps the patch version, injects itself, and publishes the trojaned release to npm. If a PyPI token is also found, the worm jumps ecosystems entirely.&lt;/p&gt;

&lt;p&gt;Socket and StepSecurity track this as CanisterSprawl, named after its use of an Internet Computer Protocol (ICP) canister as a resilient, decentralized C2 channel. Socket's follow-up investigation linked compromised Namastex.ai npm packages to the same core methods: install-time execution, credential theft, off-host exfiltration to canister-backed infrastructure, and self-propagation logic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Campaign 3 - xinference: TeamPCP Returns to PyPI
&lt;/h2&gt;

&lt;p&gt;On April 22, three consecutive releases of xinference on PyPI carried a credential-stealing payload. The malware decodes a second-stage collector, harvests SSH keys, cloud credentials, environment variables, and crypto wallets. StepSecurity attributes this to TeamPCP, the same group behind the &lt;a href="https://blog.gitguardian.com/litellm-supply-chain-attack/" rel="noopener noreferrer"&gt;litellm&lt;/a&gt; and telnyx PyPI compromises in March.&lt;/p&gt;

&lt;p&gt;There is one notable technical difference from prior TeamPCP campaigns: the xinference payload sends a plain tar.gz directly to the C2 server. The lack of encryption is why some researchers have suggested a copycat, though the injection pattern and multi-version cadence remain consistent with TeamPCP's established tradecraft.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Common Thread
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Three campaigns, three ecosystems, one objective&lt;/strong&gt;. None of these attacks aimed to disrupt software delivery or corrupt build outputs. Every payload, from the CanisterSprawl worm to the trojanized KICS scanner to the xinference stealer, was engineered to &lt;strong&gt;do one thing: extract credentials from the environments&lt;/strong&gt; where developers and pipelines operate. The question every affected team should be asking right now is not just "did this package run in my environment?" It is: what secrets were accessible if it did, and have they been rotated?&lt;/p&gt;

&lt;p&gt;Answering that requires knowing where your secrets live: across repositories, CI configurations, environment variables, and developer machines. GitGuardian provides continuous detection of exposed secrets across every surface attackers target, so when the next compromised package runs in your pipeline, you're not starting from zero.&lt;/p&gt;

&lt;p&gt;Read more: &lt;a href="https://blog.gitguardian.com/team-pcp-snowball-analysis/" rel="noopener noreferrer"&gt;The Team PCP Snowball Effect: A Quantitative Analysis&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>cybersecurity</category>
      <category>opensource</category>
      <category>devops</category>
    </item>
    <item>
      <title>SnowFROC 2026: Secure Defaults, Real Trust, and a Better Layer on Top</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Tue, 05 May 2026 12:32:16 +0000</pubDate>
      <link>https://forem.com/gitguardian/snowfroc-2026-secure-defaults-real-trust-and-a-better-layer-on-top-npn</link>
      <guid>https://forem.com/gitguardian/snowfroc-2026-secure-defaults-real-trust-and-a-better-layer-on-top-npn</guid>
      <description>&lt;p&gt;Denver likes a good origin story. The city still keeps a marker for &lt;a href="https://visitdenver.com/blog/post/cheeseburger-birthplace/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Louis Ballast and the Humpty Dumpty Barrel, the local spot tied to the cheeseburger's Colorado claim&lt;/a&gt;. That detail felt oddly right for &lt;a href="https://snowfroc.com/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;SnowFROC 2026&lt;/a&gt;. A cheeseburger is a small upgrade that changes the whole meal. This year's conference kept returning to the same ideas in AppSec, such as how meaningful security progress often comes from well-placed layers that make the better choice easier to make.&lt;/p&gt;

&lt;p&gt;The Snow in "SnowFROC" is due to the time of year the event takes place and the good possibility that it will snow, &lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3mjplq47s4m2x?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;which it did this year&lt;/a&gt;. The other half of the name stands for Front Range OWASP Conference. This year, they expanded it into a two-day event in Denver that drew about 400 attendees to see 35 sessions, take part in 8 half-day trainings, tackle a CTF, and join multiple village activities. The room carried that blend of practical curiosity and sharp hallway conversation that makes any security conference worth the trip.&lt;/p&gt;

&lt;p&gt;Throughout the event, the sessions covered how software is actually built now: fast, AI-assisted, dependency-heavy, and spread across more people and systems than any one security team can fully monitor alone. The strongest sessions focused on incentives, workflows, trust boundaries, and the places where attackers keep finding leverage because defenders still leave too much to intent, memory, and good luck.&lt;/p&gt;

&lt;p&gt;Here are just a few notes from SnowFROC 2026.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Human Layer in Secure Defaults
&lt;/h2&gt;

&lt;p&gt;In the keynote from &lt;a href="https://ca.linkedin.com/in/tanya-janca?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Tanya Janca, founder of She Hacks Purple Consulting&lt;/a&gt;, called "Threat Modeling Developer Behavior: The Psychology of Bad Code," she explained that in AppSec, insecure code is rarely just a technical failure. It is usually a human one. Developers work under pressure, chase deadlines, respond to incentives, and fall back on habits, biases, and shortcuts that feel reasonable in the moment. Instead of telling people they are wrong and expecting better outcomes, AppSec teams need to understand why those choices happen in the first place. Psychology helps explain the gap between what teams say they value and what their systems actually reward.&lt;/p&gt;

&lt;p&gt;Tanya talked about intervention and prevention over blame. Secure defaults beat secure intent because they remove friction and make the safer path the easier one. That can look like pre-commit hooks, IDE nudges, secure-by-default templates, and frequent reminders placed where decisions actually happen, not buried in a wiki. The same logic applies to training. Annual compliance sessions and lists of what not to do rarely change behavior. Teaching secure patterns, explaining the why behind them, and reinforcing them in small daily ways is far more likely to stick. The goal is not more nagging. It is better environmental design.&lt;/p&gt;
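&lt;p&gt;A secure default like that can be as small as one config file. As a hypothetical sketch using the open source pre-commit framework with GitGuardian's ggshield hook (the &lt;code&gt;rev&lt;/code&gt; value here is illustrative and should be pinned to a current release tag), a &lt;code&gt;.pre-commit-config.yaml&lt;/code&gt; might look like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# .pre-commit-config.yaml
# Runs a secrets scan on every commit, before the code leaves the laptop.
repos:
  - repo: https://github.com/gitguardian/ggshield
    rev: v1.25.0  # illustrative; pin to a current release tag
    hooks:
      - id: ggshield
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once &lt;code&gt;pre-commit install&lt;/code&gt; has been run, the safer path is the default one: the scan happens at the moment of the decision, not in a wiki.&lt;/p&gt;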

&lt;p&gt;Tanya shared her experiences about AI-assisted coding triggering automation bias, where people trust confident suggestions too quickly. Tight deadlines push present bias, making future breach risk feel abstract next to immediate shipping pressure. Copying code from forums, skipping tests, ignoring warnings, avoiding documentation, or showing off with clever code all follow similar patterns.&lt;/p&gt;

&lt;p&gt;She asked us all to build systems that reward maintainable, tested, secure work and measure what actually matters, including time to fix, adoption of secure patterns, and real vulnerability reduction. If teams want secure coding to be real, they have to make it the path of least resistance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kyie53m3tfvrn8rz33m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kyie53m3tfvrn8rz33m.png" alt="Tanya Janca" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Trust Has Become a Supply Chain Primitive
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/chris-lindsey-39b3915?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Chris Lindsey, Field CTO at OX Security&lt;/a&gt;, started his talk "Inside the Modern Threat Landscape: Attacker Wins, Defender Moves, and Your Priorities," with a reminder that choosing not to act is still a choice. In today's threat landscape, a small set of attack vectors keeps showing up in outsized breaches, including credential theft, session hijacking, phishing, typosquatting, browser extensions, DNS poisoning, and software that appears to come from trusted sources. The common thread is trust. Attackers do not usually break in by brute force alone; instead, they build credibility first through a convincing email, a familiar package name, or a browser extension that looks legitimate on the surface.&lt;/p&gt;

&lt;p&gt;Chris asked us to think in terms of what security leaders are asked by boards all the time and often struggle to answer: what did we actually get for this investment? What we need is a more disciplined framework for evaluating security spending based on risk reduction per dollar. That means asking better questions up front: what threat does this control address, what does it really cost once licensing, implementation, staffing, and maintenance are included, and what measurable reduction in exposure does it create? This is how you get to structured decision-making. When security teams can explain why one control was prioritized over another in terms that leadership understands, the conversation changes from vague reassurance to defensible tradeoffs.&lt;/p&gt;

&lt;p&gt;If software and packages are still being pulled in freely, if extensions get broad permissions without scrutiny, and if reviews stop at surface-level validation, the pipeline stays open to abuse. Chris walked through examples that looked benign at first glance but revealed patterns of Trojan behavior, suspicious permissions, deceptive imports, callback infrastructure, and signs of rushed or obfuscated code. Prioritization is key.&lt;/p&gt;

&lt;p&gt;He gave practical advice we could implement immediately: scan software before use, review open source with stronger technical oversight, pin known-safe package versions, and introduce cooldown periods. We must adopt a posture in which we rotate keys aggressively, sever malicious command-and-control connections urgently, and embrace AI to scale analysis where it adds real value. Attackers are operating in the real world and have no intention of reading your threat model. Your defenses need to be just as practical and reality-based.&lt;/p&gt;
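&lt;p&gt;Some of that advice can live in plain package-manager configuration. As a hedged sketch for npm (both settings are standard &lt;code&gt;.npmrc&lt;/code&gt; options; the cooldown period itself would come from tooling such as a dependency-update bot configured to wait on new releases):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# .npmrc — example hardening fragment
ignore-scripts=true   # do not run package lifecycle scripts on install
save-exact=true       # pin exact versions instead of semver ranges
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Disabling install scripts removes the most common execution path for malicious packages, and exact pinning keeps a freshly compromised release from arriving silently through a version-range match.&lt;/p&gt;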

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flft7ggs4zdwvtgbd2a8j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flft7ggs4zdwvtgbd2a8j.png" alt="Chris Lindsey" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  npm's Crisis Is Really an Operations Story
&lt;/h2&gt;

&lt;p&gt;In the session from &lt;a href="https://www.linkedin.com/in/jenngile?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Jenn Gile, founder of OpenSourceMalware.com&lt;/a&gt;, called "npm's dark side: Preventing the next Shai-Hulud," she presented the last year of npm account takeovers and package compromises as a lesson in how malware now rides normal engineering behavior. Jenn drew a sharp line between two kinds of software risk: accidental vulnerabilities and intentionally malicious packages. A vulnerability is a flaw that can be exploited if an attacker has a viable path. Malicious software is built from the start to cause harm, often by targeting developers and build environments directly, and it does not always need the same kind of runtime path to do damage. Malicious code does rely, though, on abusing trust. When trust is the vector, the usual instinct to stay on the latest version can become part of the problem.&lt;/p&gt;

&lt;p&gt;The heart of the session was account takeover (ATO) and why npm remains such an attractive target. Install scripts still run by default, and provenance is not mandatory. Long-lived publishing tokens remain common. In practice, that means attackers do not always need to break the package ecosystem itself. They can hijack trust that already exists. Jenn walked through a string of compromises from 2025 into 2026, including phishing campaigns, typosquatted domains, spoofed maintainer emails, CI and GitHub Actions token theft, and follow-on attacks that used stolen secrets to widen the blast radius. The throughline across cases like Nx, Qix, &lt;a href="https://blog.gitguardian.com/shai-hulud-2/" rel="noopener noreferrer"&gt;Shai-Hulud&lt;/a&gt;, &lt;a href="https://blog.gitguardian.com/team-pcp-snowball-analysis/" rel="noopener noreferrer"&gt;TeamPCP&lt;/a&gt;, and Axios was not just a technical weakness. It was how easily trusted maintainers, trusted packages, and trusted upgrade habits could be turned against the people relying on them.&lt;/p&gt;

&lt;p&gt;Jenn explained that hardware keys help protect the human authentication path, while trusted publishing helps protect the machine path by tying publication to a specific GitHub Actions identity. Session-based authentication can reduce exposure windows, even if it does not eliminate the risk of phishing. However, strong controls only work if teams actually use them, and right now, friction and bias still get in the way.&lt;/p&gt;

&lt;p&gt;Jenn's advice was to treat malware prevention as a team sport across development, product security, cloud security, and incident response. Use lockfiles, avoid automatic upgrades, scrutinize lifecycle scripts, harden CI, scan for malware earlier, rotate and scope credentials, monitor for misuse, and build supply chain playbooks that account for how malware behaves differently from ordinary vulnerabilities, especially in the JavaScript and Python ecosystems.&lt;/p&gt;
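&lt;p&gt;Several of those items translate directly into CI configuration. A minimal, hypothetical GitHub Actions step (both npm flags shown are standard) might be:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical CI step: install only what the lockfile names,
# skip lifecycle scripts, and verify registry signatures.
- name: Install dependencies
  run: |
    npm ci --ignore-scripts
    npm audit signatures
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;code&gt;npm ci&lt;/code&gt; fails if the lockfile and manifest disagree, which is exactly the behavior you want when an upstream package has just been swapped out from under you.&lt;/p&gt;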

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5hs89a186y45fh02mh8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5hs89a186y45fh02mh8.png" alt="Jenn Gile" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Scale Comes From Systems, Not Heroics
&lt;/h2&gt;

&lt;p&gt;In the final talk of the day, from &lt;a href="https://www.linkedin.com/in/mudita-khurana-87b72442/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Mudita Khurana, an Airbnb staff security engineer&lt;/a&gt;, called "Scaling AppSec through humans &amp;amp; agents," she presented a model for handling a world where code volume is rising fast, AI tools are now common, and meaningful portions of code are being produced outside the old IDE-centered workflow. She explained her company is seeing more code, more contributors, and far more code generated with AI than even a few years ago. Today nearly all pull request authors are using AI coding tools weekly, a meaningful amount of code is now written by non-engineers outside the IDE, and a large share of total code is AI-generated. Mudita explained you cannot keep up by adding manual review alone. Their response is a layered one: unified tooling to create consistency, LLM agents to extend coverage, and a human network to bring judgment and context where automation still falls short.&lt;/p&gt;

&lt;p&gt;A single security CLI acts as the abstraction layer over capabilities like static analysis, software composition analysis, secrets detection, and infrastructure-as-code scanning, with the same experience, exemptions, and metrics no matter where it runs. That lets security checks show up across the developer workflow, from lightweight pre-commit feedback to fuller pull request scans and post-merge coverage.&lt;/p&gt;

&lt;p&gt;On top of that, the team is using AI for security review in a more grounded way than generic prompting. Instead of asking a model for a broad security pass, they feed it security requirements as code, along with internal frameworks, auth models, and known anti-patterns. They also measure prompt changes against a dataset built from real historical vulnerabilities, which gives them a baseline for whether the agents are actually improving.&lt;/p&gt;

&lt;p&gt;The part of their plan that Mudita was the most excited to share was their security champions program. They do not treat this program as volunteer side work. It is tied to the engineering career ladder, backed by real responsibilities, and supported with a two-way flow of data between security and the orgs doing the work. These champions help write custom rules, triage findings, support risk assessments, and drive adoption because they understand the business context in a way central security teams often cannot. They have created a feedback loop where human insight improves the tools, the tools improve the signal, and prevention gradually moves earlier, into the IDE, into AI prompts, and into the default way code gets written.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpl9u2793vocmnr2kyslr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpl9u2793vocmnr2kyslr.png" alt="Mudita Khurana" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Security that lives where decisions happen
&lt;/h2&gt;

&lt;p&gt;One pattern ran through almost every strong session: security works best when it shows up at the point of action. In an IDE. In a pull request. In a package policy. In a browser extension review. In a token issuance flow. In a prompt used by an AI assistant. Teams still lose time when secure guidance lives in a wiki, a yearly training deck, or a control that runs too late to influence the original choice.&lt;/p&gt;

&lt;p&gt;That shift sounds simple, but it changes program design. It favors lightweight friction, contextual signals, paved paths, and small reminders over large annual campaigns. It also favors security teams that can collaborate with developer platforms, identity teams, and cloud teams instead of operating as a separate review function.&lt;/p&gt;

&lt;h3&gt;
  
  
  The new perimeter is made of borrowed trust
&lt;/h3&gt;

&lt;p&gt;Modern software development depends on borrowed trust. Developers trust registries, packages, maintainers, AI suggestions, browser tools, and automation pipelines. Organizations trust tokens, runners, integrations, and service accounts to behave within expected bounds. Attackers know that every one of those relationships can be bent.&lt;/p&gt;

&lt;p&gt;That has direct implications for secrets management and non-human identities. A stolen token, an over-scoped credential, or a poisoned dependency can move through trusted systems much faster than traditional controls were built to handle. The answer is tighter provenance, shorter credential lifetimes, stronger attestation, clearer ownership, and continuous review of the trust assumptions hiding inside delivery pipelines.&lt;/p&gt;

&lt;h3&gt;
  
  
  Maturity now means feedback loops
&lt;/h3&gt;

&lt;p&gt;Another persistent theme was the need to create feedback loops. Behavioral nudges need measurement to know how to improve them. Threat prioritization needs cost and impact models to claim success. AI review needs evaluation against real defects to be meaningful. Supply chain response needs intelligence, containment, and recovery steps that teams can actually execute.&lt;/p&gt;

&lt;p&gt;Mature AppSec programs increasingly look like systems that learn. They collect signals, improve defaults, refine detections, tighten identity boundaries, and push lessons back into the places where code and infrastructure are created. The organizations that do this well will handle AI-generated code, secrets sprawl, and NHI governance with more control because they have already built the habit of turning incidents and friction into better operating models.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mile High City Learnings
&lt;/h2&gt;

&lt;p&gt;SnowFROC 2026, which happens at the highest altitude of any OWASP event, felt grounded in the best way. Talks treated security as daily operating design that focused on how people are rewarded, how trust is granted, how credentials spread, and how teams scale judgment without burning out the humans in the loop. Your author was able to give a talk about how we moved from slow, waterfall-based deployment to a world of DevOps where we have never deployed more, faster. We have a golden opportunity as we adopt AI across our toolchains to rethink authentication in a meaningful way that might just reverberate through all our stacks of non-human identities. That is the feedback loop we can all benefit from.&lt;/p&gt;

&lt;p&gt;For teams thinking about identity risk, secrets exposure, and the governance of machine-driven development, SnowFROC offered a useful path forward. Start with defaults. Reduce silent trust. Treat credentials and dependencies as live operational risk. Then build feedback loops that make the next secure decision easier than the last one. That is a practical agenda, and after a snowy spring day in Denver, it also feels achievable.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>appsec</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>ATLSECCON 2026: Context, Identity, and Restraint in Modern Security</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Mon, 04 May 2026 12:38:30 +0000</pubDate>
      <link>https://forem.com/gitguardian/atlseccon-2026-context-identity-and-restraint-in-modern-security-40lf</link>
      <guid>https://forem.com/gitguardian/atlseccon-2026-context-identity-and-restraint-in-modern-security-40lf</guid>
      <description>&lt;p&gt;Harbor cities understand accumulated risk. Cargo moves in quietly. Weather shifts by degrees. One bad assumption can sit unnoticed until it reaches critical mass. Halifax has lived with that kind of memory for more than a century. On &lt;a href="https://parks.canada.ca/culture/designation/evenement-event/halifax-explosion" rel="noopener noreferrer"&gt;December 6, 1917, a collision in Halifax Harbor triggered the largest man-made explosion prior to the atomic bomb&lt;/a&gt;, a disaster that directly changed the lives of over 11,000 people and permanently altered the city's sense of consequence.&lt;/p&gt;

&lt;p&gt;That history gave this year's &lt;a href="https://www.atlseccon.com/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Atlantic Security Conference, ATLSECCON&lt;/a&gt;, a useful backdrop. Organizers noted that this event started in 2011 with only 45 people in attendance. The conference has grown into a major regional gathering, with more than 1,750 participants this year. This year's volunteer-led Halifax security conference featured over 70 industry leaders and subject-matter experts delivering sessions across seven speaking tracks.&lt;/p&gt;

&lt;p&gt;A lot of events in 2026 are still trying to sort out how to talk about AI without either flattening the subject into product language or inflating it into prophecy. ATLSECCON largely avoided both traps. The strongest sessions kept returning to a more durable concern: the systems around us are getting faster, less legible, and more distributed, which means trust now depends on context, observability, and restraint.&lt;/p&gt;

&lt;p&gt;Here are just a few takeaways from this year's ATLSECCON.&lt;/p&gt;

&lt;h2&gt;
  
  
  All Data Has A Half-Life
&lt;/h2&gt;

&lt;p&gt;In the opening keynote session from the legendary &lt;a href="https://www.linkedin.com/in/wendynather?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Wendy Nather, Senior Research Initiatives Director at 1Password,&lt;/a&gt; called "Dangerous Data," we were presented with a sharp reminder that data security is no longer just a matter of access control or storage hygiene. What words mean changes over time. Context changes the risk of the same records: data that was safe yesterday can become dangerous after something elsewhere is updated. Systems that process data at machine speed can weaponize those changes long before a human review loop catches up.&lt;/p&gt;

&lt;p&gt;Wendy's framing around integrity attacks moved past the familiar story of theft, though stolen data is very much still a problem. Corrupted data, subtly altered records, changed semantics, and AI-assisted manipulation create a harder reality to deal with. Failures from these data changes undermine trust itself, which is usually the thing security teams discover only after everything downstream starts behaving strangely. She made a very strong point about how we are viewing AI with a "toxic anthropomorphism." Teams keep assigning intent and understanding to AI systems as if they are accountable actors rather than pattern engines. That habit creates design mistakes, policy mistakes, and false confidence.&lt;/p&gt;

&lt;p&gt;Wendy explained that security teams need a richer model of data than classification labels and database boundaries. Time, event context, ownership, and downstream use all matter. So does deletion. The old instinct to collect everything because it might become useful later is now dangerous in a world of AI-speed parsing, as data theft and corruption mean a larger blast radius than ever before. There is a direct crossover with secrets sprawl and identity sprawl, which follow the same pattern. Unbounded accumulation feels efficient until the context shifts and the liability becomes the story.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb9afbupph257zrdx1te.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb9afbupph257zrdx1te.png" alt="Wendy Nather" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Exposure Needs a Business Compass
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/tara-jaques-959994b7/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Tara Jaques, Technical Director at Tenable&lt;/a&gt;, presented "Beyond the Silos: Operationalizing Exposure Management in a Fragmented Landscape," a practical case for treating exposure management as an operating model rather than a dashboard category. That distinction mattered. Tara explained that fragmented tools and fragmented teams do not just slow response; they distort judgment.&lt;/p&gt;

&lt;p&gt;She defined exposure narrowly enough to be useful: A risk only becomes an exposure when it is preventable, exploitable, and capable of meaningful impact. That actually cuts through a lot of noise in a time when we have never had more signal. Security teams are drowning in findings, and many programs still reward "motion" over "consequence." Tara argued for context over volume, with prioritization tied to how weaknesses combine in real environments and how those combinations map to actual business harm.&lt;/p&gt;

&lt;p&gt;Organizations are dealing with sprawling SaaS estates, AI-driven change, and identities acting as the new perimeter. The maturity model she outlined was helpful because it describes progress as operational rather than aspirational. Unified visibility, cross-functional alignment, and continuous prioritization might not be glamorous goals, but they are the difference between a security program that can explain its choices and one that can only describe its backlog. For teams working on identity risk or non-human identity governance, that is the same journey. You cannot govern what you cannot inventory. You cannot prioritize what you cannot place in context.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6e4g8d2757kpp7pbvvx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6e4g8d2757kpp7pbvvx.png" alt="Tara Jaques" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Agent Problem Is a Runtime Problem
&lt;/h2&gt;

&lt;p&gt;In the session by &lt;a href="https://ca.linkedin.com/in/jasonkeirstead?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Jason Keirstead, Founding CTO at LangGuard.AI&lt;/a&gt;, "Your AI Agents Are Lying To You," he presented one of the conference's clearest explanations for why agentic systems appear convincing in tests but become dangerous in production. In short, we can test under the best conditions, but the real world has far more nuance, data, and adversaries than we can ever replicate in the lab.&lt;/p&gt;

&lt;p&gt;Jason walked through incidents where agents hallucinated, acted, and then concealed what happened. That sequence breaks the habits that many teams still carry over from traditional application monitoring. A conventional system crashes, throws an error, or fails a control in a way that looks familiar. An agent can proceed confidently, misuse legitimate tool access, and generate a plausible explanation while doing damage. The problem is not just model quality; it is a runtime reality. Prompt injection, model substitution, credential drift, tool sprawl, and silent changes in access surface are not things teams normally test their internal agentic systems against.&lt;/p&gt;

&lt;p&gt;The strongest part of the talk was how concrete Jason's advice was: inventory your agents, trace what they do at runtime, and enforce policy as code. We must map credentials back to accountable humans and build agent-aware incident response. That is exactly what a control plane should account for. An AI agent with broad tool access and long-lived credentials is functionally a privileged non-human identity. Treating it as a novelty creates the gap. Treating it as a governed identity gives teams a chance to close it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep67gzxwfsmm35csvh38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep67gzxwfsmm35csvh38.png" alt="Jason Keirstead" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Identity Became The Fastest Path To Breach
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://ca.linkedin.com/in/fortinpascal?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Pascal Fortin, CEO at Cybereco&lt;/a&gt;, presented "When AI Broke Your Security Model: What Still Works, What's Dead, and What to Fix First." He explained that attack chains that once gave defenders hours now compress into seconds. AI did not change what attackers want. It changed how fast and how cheaply they can get it.&lt;/p&gt;

&lt;p&gt;The old assumptions that shaped many security programs have quietly died: MFA alone does not reliably protect accounts, help desks cannot safely verify callers with basic personal information, 90 days of logs is not enough, and IOC-driven detection misses attackers who steal identities and live off legitimate tools. The speed shift is the real story that Pascal delivered. What once took hours now happens in minutes, and in some cases in 22 seconds.&lt;/p&gt;

&lt;p&gt;The infostealer-to-identity pipeline makes that possible by turning stolen passwords, browser sessions, and tokens into full account access without malware, while techniques like adversary-in-the-middle phishing and voice-based social engineering make MFA bypass and account takeover scalable. That is why the legacy SOC model no longer holds up. Human-led alert triage, manual case building, and slow approval chains cannot keep pace with machine-speed attacks.&lt;/p&gt;

&lt;p&gt;What still works is phishing-resistant MFA like FIDO2 and passkeys, short-lived credentials, control-plane separation, immutable backups, behavioral detection, and cross-domain sequence analysis. The first priority is identity: harden the help desk, remove weak MFA for privileged roles, extend log retention, inventory AI agents and OAuth apps, and build preapproved containment paths that can act in seconds. The analysts' role is moving up the stack, away from routine triage and toward workflow design, threat hunting, detection tuning, and high-severity judgment. Your controls are not broken. Your assumptions are.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm884rdz39th1qo6x1s8s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm884rdz39th1qo6x1s8s.png" alt="Pascal Fortin" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Context is now part of the control
&lt;/h2&gt;

&lt;p&gt;Across the conference, talks shared a common premise: controls without context have limited value. That showed up in data integrity, exposure management, resilience planning, and AI security. Teams have plenty of telemetry. The harder problem is understanding what matters now, what changed recently, and what combinations of facts create real consequences.&lt;/p&gt;

&lt;p&gt;That is as much a maturity issue as a technical one. We need sharper signals, tied to business conditions and operational reality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Trust is being pushed down the stack
&lt;/h3&gt;

&lt;p&gt;In the past, teams often assumed trust sat higher up. You trusted the user, the analyst, the admin, the workflow, or the approval process. Now, a lot of the failure happens lower down, inside the machinery. Logs can be incomplete. An AI agent can take actions too fast for a human to review. A SaaS integration can quietly gain more access over time. A user interface can steer people toward unsafe choices. By the time a human notices, the system has already made the trust decision for them.&lt;/p&gt;

&lt;p&gt;You have to express trust in technical controls, not just in intentions. Systems need to show who did what, when, with what authority, and what changed. Identity has to be tightly governed, especially for service accounts, OAuth apps, and AI agents. Recovery has to be designed in advance because prevention will, in some way, ultimately fail.&lt;/p&gt;

&lt;p&gt;Identity systems, tokens, logs, APIs, permissions, provenance, telemetry, and containment controls now carry more of the burden of proving whether something is legitimate. Security teams can no longer rely on people saying, "Trust us, this is fine." The system itself has to make trust visible and enforceable.&lt;/p&gt;

&lt;h3&gt;
  
  
  The winning move is disciplined reduction
&lt;/h3&gt;

&lt;p&gt;One of the quiet patterns across the two days was restraint. Collect less. Expose less. Grant less. Assume less. Several speakers, from different angles, arrived at the same conclusion: complexity is feeding the adversary. Sprawl creates ambiguity, and ambiguity creates time for attackers.&lt;/p&gt;

&lt;p&gt;That is why exposure management, identity governance, and secrets hygiene belong in the same conversation. Each discipline is trying to reduce unnecessary pathways before they become incidents. That is less dramatic than breach rhetoric, but it is how mature programs get built.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Halifax Makes Easy to See
&lt;/h2&gt;

&lt;p&gt;ATLSECCON did a good job resisting easy stories about novelty and disruption. Sessions focused on data accumulation, consequences, and the need to build organizations that can still make sound decisions when speed increases and visibility drops. Your author spoke about the cost of holding onto old authentication methods while the world evolved at a dizzying pace. We must embrace identity-centric designs and be pragmatic about visibility into the reality of the systems we have built so far.&lt;/p&gt;

&lt;p&gt;We must treat AI agents as governed identities, the same as we should have been doing for all non-human identities deployed in our environments. We need to reduce secrets and credential sprawl before they become operational debt. Teams need to prioritize exposures that matter to the business, not just the scanner, and invest in observability that helps them reconstruct intent and sequence rather than simply collect artifacts. Organizational trust now depends on architecture and operations moving together.&lt;/p&gt;

&lt;p&gt;Security still has plenty of room for cleverness, but this moment rewards discipline more. Halifax has a long memory for what happens when dangerous material, busy systems, and thin margins of error meet in the same place. ATLSECCON 2026 turned that history into something useful: a conference full of reminders that resilience starts well before the blast.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>ai</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>What the Mythos-Ready Briefing Says About Credentials</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Fri, 01 May 2026 12:53:05 +0000</pubDate>
      <link>https://forem.com/gitguardian/what-the-mythos-ready-briefing-says-about-credentials-2ik3</link>
      <guid>https://forem.com/gitguardian/what-the-mythos-ready-briefing-says-about-credentials-2ik3</guid>
      <description>&lt;p&gt;The &lt;a href="https://labs.cloudsecurityalliance.org/mythos-ciso/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Mythos-ready briefing&lt;/a&gt; landed last week, co-signed by Jen Easterly, Bruce Schneier, Heather Adkins, Rob Joyce, Chris Inglis, Phil Venables, and 60+ other CISOs from Google, Snowflake, Atlassian, and organizations like the NFL and TransUnion. Among the controls they named as critical for the AI vulnerability era were secrets rotation, non-human identity governance, early detection of compromise, and honeytoken-based deception. If you've been pushing for more budget for better secrets security, this is the document to put in front of your CISO.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the paper says about credentials
&lt;/h2&gt;

&lt;p&gt;The briefing is a response to Anthropic's &lt;a href="https://red.anthropic.com/2026/mythos-preview/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Claude Mythos Preview&lt;/a&gt; announcement, which reported autonomous discovery of thousands of zero-days across every major operating system and browser with a 72% exploit success rate. The paper lays out 11 priority actions, a risk register, and a 90-day plan for CISOs. Credentials underpin nearly every control it calls out.&lt;/p&gt;

&lt;p&gt;In the Key Takeaways, the authors name secrets rotation alongside segmentation, egress filtering, Zero Trust, and phishing-resistant MFA as mitigating controls that limit blast radius when exploitation occurs. The risk register tags "Unmanaged AI Agent Attack Surface" as CRITICAL, pointing to privileged agents operating outside existing control frameworks.&lt;/p&gt;

&lt;p&gt;Priority Action 8 ("Harden Your Environment") mandates phishing-resistant MFA for all privileged accounts and locking down the dependency chain. Priority Action 9 ("Build a Deception Capability") calls for deploying canaries and honeytokens, layered with behavioral monitoring and pre-authorized containment. The executive briefing section frames early detection of compromise as a metric boards should be tracking. This briefing is a rare alignment of industry leadership putting credential security squarely on the critical-controls list.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why a Mythos world makes credentials matter more
&lt;/h3&gt;

&lt;p&gt;There's a common misreading of the AI vulnerability story, which is that zero-days become the dominant threat and everything else fades. The paper's own Appendix A pushes back on that. The authors note that the historical collapse in time-to-exploit has not produced a proportional rise in exploitation impact, and that most consequential recent breaches came from credential abuse, social engineering, or supply chain compromise rather than zero-day exploitation.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://blog.gitguardian.com/verizon-dbir-2025/" rel="noopener noreferrer"&gt;2025 Verizon DBIR&lt;/a&gt; backs this up. Stolen credentials remain the leading initial access vector at 22% of all breaches, and 88% for basic web application attacks. Machine identities are now involved in some stage of &lt;a href="https://www.cyberark.com/resources/product-insights-blog/unified-security-bridging-the-gaps-with-a-defense-in-depth-approach?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;68% of IT security incidents&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Layer Mythos-class capability on top of that, and valid credentials become the fastest way in. When zero-days are cheap, they accelerate lateral movement after initial access. They don't replace credentials as the entry point. That's how the &lt;a href="https://cloud.google.com/blog/topics/threat-intelligence/unc5537-snowflake-data-theft-extortion?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Snowflake breach in 2024&lt;/a&gt; hit 165 organizations from credentials that had been sitting in infostealer logs, some since 2020. MFA wasn't enforced, rotation hadn't happened, and old credentials were still valid — no novel exploit needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI is accelerating the credential sprawl that underlies all of this
&lt;/h2&gt;

&lt;p&gt;That risk is accelerating. AI drives credential exposure on two fronts: volume and surface area. As the paper notes, higher code output with less consistent review increases the number of vulnerabilities that ship. The same velocity drives a parallel explosion in credential creation.&lt;/p&gt;

&lt;p&gt;Our &lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2025?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;State of Secrets Sprawl 2026 report&lt;/a&gt; found 29 million new hardcoded secrets exposed on public GitHub in 2025, a 34% year-over-year increase and the largest single-year jump on record. Credentials tied specifically to AI services surged 81% year-over-year.&lt;/p&gt;

&lt;p&gt;And 28% of secrets-related incidents in our 2026 data originated entirely outside source code. They showed up in CI/CD systems like &lt;a href="https://docs.gitguardian.com/ggshield-docs/integrations/cicd-integrations/github-actions?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitHub Actions and GitLab runners&lt;/a&gt;, in &lt;a href="https://blog.gitguardian.com/secrets-leaked-outside-the-codebase/" rel="noopener noreferrer"&gt;collaboration surfaces like Slack, Jira, and Confluence&lt;/a&gt;, and on developer machines.&lt;/p&gt;

&lt;p&gt;Those are now the same surfaces AI agents read, summarize, and act on as part of day-to-day workflows. The paper's "10 Questions" diagnostic asks whether organizations have disciplined control of their agentic supply chain, including MCP servers, plugins, and skills. The credential question sits directly underneath: what secrets do those systems hold, where do they live, who owns them, and how fast can they be rotated when something goes wrong?&lt;/p&gt;

&lt;p&gt;In most enterprise environments, non-human identities already outnumber human users by a ratio of roughly &lt;a href="https://nhimg.org/nhi-challenges?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;25-50x&lt;/a&gt;. Very few organizations have an inventory of the ones they already have, let alone the ones AI agents are creating at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  What security teams actually need
&lt;/h2&gt;

&lt;p&gt;Security teams need visibility everywhere credentials actually sprawl: repos, CI logs, container layers, tickets, chat threads. That's a solvable problem. The harder part is connecting each exposed secret to the non-human identity behind it and figuring out which services, workloads, or automations depend on it. Without that context, triage stalls, and an exposed credential gets used before anyone can act on it.&lt;/p&gt;

&lt;p&gt;Ownership is where most of this work breaks down. When a credential is exposed, the question "who owns this?" usually doesn't have a clean answer. The developer who committed it may have left the team. Often, the service it authenticates runs in a different group's infrastructure entirely. The rotation path may cross three systems that were never designed to coordinate with each other. In practice, that means the incident sits in a queue while three teams figure out whether it's theirs. Every hour in that queue is an hour the credential is live and usable. That's the exposure window.&lt;/p&gt;

&lt;p&gt;Non-human identities compound the problem. A service account created for a CI pipeline two years ago may have no human owner on record. No one's inbox to land in, no runbook to follow.&lt;/p&gt;

&lt;p&gt;Most security programs already struggle to detect exposed credentials. They don't even touch ownership and response, which is the gap GitGuardian was built to close. &lt;a href="https://www.gitguardian.com/monitor-internal-repositories-for-secrets?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian gives teams continuous secrets detection&lt;/a&gt; across source code and other places where secrets appear. That includes CI/CD systems like GitHub Actions and GitLab task runners, collaboration platforms like Slack and Jira, and developer environments down to the laptop. It surfaces exposed credentials where modern work actually happens, not just where security teams wish it did. From there, &lt;a href="https://www.gitguardian.com/nhi-governance?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;NHI discovery and ownership mapping&lt;/a&gt; connect exposed secrets to the service accounts, API keys, and machine identities that power agentic systems and automation at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  A case for moving credential hygiene up the priority list
&lt;/h2&gt;

&lt;p&gt;Containment is the whole game once time-to-exploit collapses to hours. You can't afford to find credential exposure days or weeks after the fact. A secret sitting in Slack or a build log doesn't show up in a vulnerability scan. An API key tied to an agent workflow still expands the attack surface. A service credential without an owner still slows every remediation step that follows.&lt;/p&gt;

&lt;p&gt;The paper draws a clear line through its 11 priority actions. With exploitation becoming both faster and more automated, response speed and blast-radius reduction move to the center. Secrets rotation, non-human identity governance, phishing-resistant MFA, and honeytoken-based detection belong at the front of the list as core resilience controls. They shape how quickly an organization can contain misuse once an attacker gets in, or once an agentic workflow is abused.&lt;/p&gt;

&lt;p&gt;Given what the data shows, those controls deserve to be on the 45-day track alongside environment hardening, not grouped underneath it. In our longitudinal dataset, &lt;a href="https://www.gitguardian.com/whitepapers/non-human-identity-whitepaper?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;64% of secrets leaked in 2022&lt;/a&gt; still hadn't been revoked as of 2026. The paper warns that time-to-exploit has collapsed to hours. Those two numbers don't coexist safely in the same environment.&lt;/p&gt;

&lt;p&gt;GitGuardian directly supports that shift. Secrets detection helps teams find exposed credentials before attackers do. Rotation signals and remediation workflows push incidents toward closure instead of letting them linger.&lt;/p&gt;

&lt;p&gt;NHI discovery and control help organizations understand which machine identities exist, what they can access, and who's responsible for them. &lt;a href="https://www.gitguardian.com/honeytoken?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian Honeytokens&lt;/a&gt; add an early warning layer that surfaces credential misuse before a broader incident unfolds. That maps directly to Priority Action 9 in the paper, which calls for honeytoken deployment, behavioral monitoring, and pre-authorized containment. The goal is a response that executes at machine speed.&lt;/p&gt;

&lt;p&gt;If you're building your 90-day plan from the Mythos briefing, credential security deserves to move up the list. Hardening, detection, and response all come down to the same question: when something moves, how fast can you contain it? The organizations that come through this well will be the ones that had that answer before they needed it. Our 2026 State of Secrets Sprawl report has the full picture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2026?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;strong&gt;Read the 2026 report&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>cybersecurity</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>AI Agents Authentication: How Autonomous Systems Prove Identity</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 30 Apr 2026 14:15:56 +0000</pubDate>
      <link>https://forem.com/gitguardian/ai-agents-authentication-how-autonomous-systems-prove-identity-4j0n</link>
      <guid>https://forem.com/gitguardian/ai-agents-authentication-how-autonomous-systems-prove-identity-4j0n</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; AI agents are autonomous actors granted ever-greater latitude to execute tasks, write code, and integrate with SaaS tools. To do so, they need to authenticate with numerous systems, making AI authentication a crucial security boundary that determines blast radius, revocability, and long-term governance risk.&lt;/p&gt;

&lt;p&gt;AI agents inherit and amplify credential risks. Organizations that treat AI agents as governed non-human identities, with scoped access, short-lived credentials, continuous secret monitoring, and lifecycle controls, will enable safe autonomy. Those that rely on static API keys and ad hoc credential management will accumulate invisible systemic risk.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why AI Agents Authentication Is Now a Security-Critical Control
&lt;/h2&gt;

&lt;p&gt;There is a fundamental asymmetry in how machine authentication fails compared to human authentication. With humans, the threat is impersonation: someone posing as someone else. With machines, the threat is subversion. If an attacker compromises the runtime itself, every authentication factor on that system becomes accessible simultaneously.&lt;/p&gt;

&lt;p&gt;Agentic AI compounds this because these systems are wired directly into the infrastructure that traditional controls were built to protect: internal APIs, cloud environments, SaaS platforms, regulated data stores. Most agents do not have a distinct, auditable identity of their own. &lt;strong&gt;They inherit the user's credentials&lt;/strong&gt;, assume a shared service account, or act under broad delegated permissions, which means there is no clean way to scope, review, or revoke their access independently of the humans and systems they work alongside.&lt;/p&gt;

&lt;p&gt;In July 2025, &lt;a href="https://incidentdatabase.ai/cite/1152/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Replit's agent deleted a production database&lt;/a&gt; holding data for over 1,200 real companies, then generated 4,000 fake accounts to conceal it because nothing separated the agent's credentials from production write access.&lt;/p&gt;

&lt;p&gt;Prompt injection sharpens this further: because language models cannot reliably distinguish a legitimate instruction from a malicious one embedded in data they process, an agent with broad access can be redirected against the same systems it was built to serve.&lt;/p&gt;

&lt;p&gt;You cannot read the code of an LLM-backed agent and predict its behavior in production. With a deterministic microservice, static analysis and code review could partially substitute for rigorous authorization boundaries. With probabilistic autonomous agents, that option is gone. An agent provisioned with access to a sensitive API will eventually find a reason to use it, not because it is malicious, but because it is designed to explore available options to fulfill its goal. &lt;strong&gt;Authentication design is therefore the only reliable enforcement layer between AI autonomy and enterprise infrastructure.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When AI agents act, authentication determines what they can reach, what they can modify, how long they retain access, and how quickly you can shut them down.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do AI Agents Handle Authentication Today?
&lt;/h2&gt;

&lt;p&gt;Most AI agents authenticate using the same primitives as traditional machine identities: API keys, OAuth tokens, service accounts, IAM roles, or vault-issued credentials.&lt;/p&gt;

&lt;p&gt;While these mechanisms are well-understood, there is a real governance gap today: authentication decisions for AI agents are made quickly during development and rarely revisited as agents scale in capability or scope. The result is a pattern security teams have seen before: convenience wins at deployment, risk accumulates invisibly over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Static API Keys and Personal Access Tokens
&lt;/h3&gt;

&lt;p&gt;Developers (and other users) frequently provision API keys or reuse personal access tokens for rapid integration. These credentials are easy to deploy, often long-lived, rarely rotated, and commonly shared across environments. Most are bearer credentials (possession alone grants access).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In agentic systems, the exposure surface is broader than with traditional integrations.&lt;/strong&gt; Agents generate logs, produce configuration files, persist state, and replicate workflows automatically.&lt;/p&gt;

&lt;p&gt;GitGuardian's &lt;a href="https://blog.gitguardian.com/the-state-of-secrets-sprawl-2026/" rel="noopener noreferrer"&gt;State of Secrets Sprawl 2026&lt;/a&gt; report found &lt;strong&gt;28.65 million hardcoded secrets&lt;/strong&gt; added to public GitHub in 2025 alone, a 34% year-over-year increase and the largest single-year jump on record. More relevant to the agentic context: secret leak rates in AI-assisted code ran &lt;strong&gt;roughly double&lt;/strong&gt; the GitHub-wide baseline throughout the year. As code generation velocity increases and human supervision thins, secrets sprawl scales with agentic systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  OAuth Tokens
&lt;/h3&gt;

&lt;p&gt;OAuth is generally more secure, but only when properly implemented. Risks arise when scopes are overly broad, refresh tokens are long-lived, or tokens are shared across multiple agents without distinct ownership.&lt;/p&gt;

&lt;p&gt;There is also a structural limitation that goes beyond implementation quality. OAuth validates individual requests; agents create &lt;strong&gt;sequences of requests&lt;/strong&gt;. Each individual call might be authorized, but the combined action across a chain of tool invocations can produce an unauthorized outcome that no single token check ever catches. The gap between request-level authorization and sequence-level behavior is where agentic AI authentication risk actually lives.&lt;/p&gt;

&lt;p&gt;The Model Context Protocol's attempt to standardize OAuth for AI agents has made this gap visible in a concrete way. The current MCP authorization spec relies on anonymous Dynamic Client Registration, meaning any client can register as a valid OAuth client without identifying itself. Enterprises cannot accept this because it makes monitoring, auditing, and revocation nearly impossible, and it opens denial-of-service vectors. It also forces MCP servers to maintain stateful token mappings, which breaks horizontal scaling. This is a live example of how agentic systems expose problems that traditional OAuth was never designed to address.&lt;/p&gt;

&lt;p&gt;There is a harder problem that even well-implemented OAuth cannot solve today: &lt;strong&gt;cross-domain trust&lt;/strong&gt;. When an agent registered in one organization calls a service operated by another, neither OAuth 2.1 nor OIDC provides a standard mechanism to carry the agent's scoped permissions across that boundary. The receiving service has no reliable way to verify who provisioned the agent, under what constraints it operates, or whether the delegated scope is attenuated from the originating principal. For agentic AI architectures that span SaaS ecosystems or partner integrations, this is the current frontier. Organizations building cross-domain agent workflows should treat external service calls as untrusted by default and require explicit scope declaration at the API gateway layer until standards for cross-domain agent federation mature.&lt;/p&gt;

&lt;h3&gt;
  
  
  Service Accounts and IAM Roles
&lt;/h3&gt;

&lt;p&gt;Cloud-native AI agents often rely on AWS IAM roles, GCP service accounts, or Azure Managed Identities. These eliminate static secret storage, which is a genuine security improvement, but they shift the risk rather than eliminate it. Overprivileged role assignments, cross-environment credential reuse, and poor visibility into which agent assumes which role create a different class of exposure.&lt;/p&gt;

&lt;p&gt;NIST SP 800-207A is direct on this: "Each service should present a short-lived cryptographically verifiable identity credential to other services that are authenticated per connection and reauthenticated regularly." Meeting the short-lived requirement while neglecting scope granularity leaves you with a short-lived token attached to an admin-level role, which is still an excessive blast radius.&lt;/p&gt;

&lt;h3&gt;
  
  
  Vault-Issued Dynamic Credentials
&lt;/h3&gt;

&lt;p&gt;Short-lived, identity-bound credentials issued dynamically by vault systems represent a stronger architectural pattern. They reduce standing credential risk, shrink the exposure window, and eliminate manual rotation burden.&lt;/p&gt;

&lt;p&gt;The operational reality is worth acknowledging: encryption is easy, key management is hard. Dynamic secrets are only secure when each AI agent has a unique registered identity, roles are tightly scoped, secret exposure monitoring is active, and revocation workflows are tested in advance. Many organizations are not yet mature enough to manage the overhead of dynamic secrets for ephemeral workloads. Dynamic issuance alone does not guarantee security.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Takeaway
&lt;/h3&gt;

&lt;p&gt;The authentication mechanism alone does not determine security. Security is defined by scope boundaries, token lifetime, attribution clarity, revocability, and lifecycle governance. In autonomous systems, an authentication decision is a blast radius decision.&lt;/p&gt;




&lt;h2&gt;
  
  
  AI Agents Authentication Defines Blast Radius
&lt;/h2&gt;

&lt;p&gt;Authentication design is incident response planning in advance. The difference between a contained breach and a systemic compromise often comes down to a single architectural choice made months earlier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario A&lt;/strong&gt;: An AI DevOps agent is provisioned with a long-lived, admin-level API key to reduce integration friction. The key has organization-wide SaaS scopes and no expiration policy. The agent runtime is compromised through a prompt injection attack. The attacker inherits enterprise-scale access. Revocation requires manually hunting down a credential embedded across multiple environments and generated artifacts, a process measured in hours, sometimes days.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario B&lt;/strong&gt;: The same agent uses a short-lived, scoped OAuth token issued specifically for its deployment task. Compromise of the runtime yields access to one integration surface for the token's remaining lifetime, minutes rather than months. Formal revocation via RFC 7009 exists, but most resource servers cache token validation for the token's full lifetime rather than querying the IdP on every request — so in practice, short token lifetime &lt;em&gt;is&lt;/em&gt; the revocation mechanism. The shorter the TTL, the smaller the window an attacker has regardless of whether revocation is called.&lt;/p&gt;

&lt;p&gt;There is a second dimension that mechanism selection alone cannot address: the distinction between impersonation and delegation. When an agent acts using the user's identity, the audit trail records "Sarah performed action X," even when Sarah never approved it, never saw the reasoning behind it, and had no control over the decision. Delegation, where the agent maintains its own identity and acts on the user's behalf within bounded scope, preserves accountability. This distinction matters most when something goes wrong and incident responders need to reconstruct who did what, why, and under whose authority. Agentic AI authentication best practices require delegation as the default model, not impersonation.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Evaluate AI Authentication Methods
&lt;/h2&gt;

&lt;p&gt;Before selecting a mechanism, security architects need consistent decision criteria. Five dimensions apply across all runtime environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blast radius containment&lt;/strong&gt;: Does the method enforce granular scope? Can it isolate environments? Does it prevent privilege escalation? For autonomous systems this is the primary criterion, because agents can probe and expand their operational surface in ways that static code cannot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Revocability&lt;/strong&gt;: How fast can access be terminated? Can revocation be centralized and traced to a specific agent? The practical question during an incident is whether containment takes minutes or hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exposure resistance&lt;/strong&gt;: AI systems generate logs, code, configuration files, and prompt responses, all surfaces where credentials can appear. Authentication methods must minimize the impact of token leakage across all of them. This is where AI authentication intersects directly with secret detection: the most secure mechanism is undermined if its credentials routinely surface in CI logs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Security models must support hundreds of AI agents across multi-cloud architecture, SaaS ecosystems, and continuous deployment cycles. Any process requiring manual intervention at scale is a liability, not a control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer friction&lt;/strong&gt;: If secure methods are operationally complex, teams revert to API keys. The best AI agent authentication approaches balance strong security guarantees with operational simplicity. This constraint is real and should be designed for, not dismissed.&lt;/p&gt;




&lt;h2&gt;
  
  
  AI Agent Authentication Methods Compared
&lt;/h2&gt;

&lt;p&gt;Not all authentication mechanisms provide equivalent containment, revocability, or governance maturity. The hierarchy below is organized by enterprise risk reduction.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tier 1: OAuth 2.1 / OIDC with Short-Lived, Scoped Tokens
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Best for&lt;/strong&gt;: SaaS integrations, cross-organization APIs, federated enterprise services.&lt;/p&gt;

&lt;p&gt;An identity provider issues short-lived access tokens with defined scopes. The AI agent receives delegated authorization rather than static credentials. When properly scoped and short-lived, compromise is typically contained to a defined integration surface.&lt;/p&gt;

&lt;p&gt;Primary risks include scope misconfiguration, long-lived refresh tokens stored insecurely, and tokens shared across multiple agents. Most OAuth access tokens remain bearer credentials, so risk is mitigated through short lifetimes and contextual access policies. Where a SaaS provider supports OAuth with scoped access, it should be the default; API keys should not substitute when delegated &lt;a href="https://blog.gitguardian.com/oauth-for-mcp-emerging-enterprise-patterns-for-agent-authorization/" rel="noopener noreferrer"&gt;OAuth for MCP&lt;/a&gt; is available.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tier 2: Workload Identity Federation / Managed Identities
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Best for&lt;/strong&gt;: Trusted cloud runtimes, containerized workloads, internal APIs.&lt;/p&gt;

&lt;p&gt;The cloud platform issues short-lived credentials tied to the workload's runtime identity. No static secret is stored in the application, eliminating an entire class of exposure risk.&lt;/p&gt;

&lt;p&gt;The principal risk is overbroad IAM role assignments and role reuse across environments. Cloud-native identity reduces secret exposure risk but not privilege risk. A unique identity per AI agent is not optional; shared roles create attribution blind spots that collapse forensic traceability after a breach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tier 3: mTLS / X.509 Certificate-Based Authentication
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Best for&lt;/strong&gt;: Zero-trust microservices, internal service-to-service communication in high-security environments.&lt;/p&gt;

&lt;p&gt;Both the AI agent and the target service present certificates and verify each other's identity via &lt;a href="https://blog.gitguardian.com/mutual-tls-mtls-authentication/" rel="noopener noreferrer"&gt;mTLS Authentication&lt;/a&gt;. Unlike bearer tokens, this is a proof-of-possession model. A stolen certificate is useless without the corresponding private key, which mitigates the replay attacks that plague token-based approaches.&lt;/p&gt;
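
&lt;p&gt;In Python's standard library, for example, the server side of that requirement is a one-line policy on the TLS context. This is a minimal sketch; the certificate paths in the comments are placeholders:&lt;/p&gt;

```python
import ssl

# Sketch: a server-side TLS context that refuses any client that cannot
# present a certificate signed by a trusted CA. File paths are placeholders.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED        # reject clients with no certificate
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
# ctx.load_cert_chain("server.crt", "server.key")   # the server's own identity
# ctx.load_verify_locations("agent-ca.pem")         # CA that signs agent certs
```

&lt;p&gt;With &lt;code&gt;CERT_REQUIRED&lt;/code&gt; set, possession of the private key is proven during the handshake itself, which is what makes a stolen certificate alone useless.&lt;/p&gt;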

&lt;p&gt;The operational complexity is real: PKI management, certificate issuance and renewal, and CRL/OCSP dependencies require infrastructure maturity. There is also a gap specific to AI agents that traditional SPIFFE/SPIRE workload identity deployments do not address. Current Kubernetes-based implementations assign identity at the service account level, meaning all replicas of a workload share the same identity. For deterministic APIs and stateless services this is acceptable. For AI agents, which are non-deterministic and context-driven, two instances of the "same" agent will behave differently based on inputs. Per-instance identity is required for real accountability and compliance traceability. &lt;a href="https://blog.gitguardian.com/getting-started-with-spiffe/" rel="noopener noreferrer"&gt;SPIFFE&lt;/a&gt; can provide it, but organizations extending existing workload identity infrastructure to AI agents without this adjustment inherit an attribution gap.&lt;/p&gt;

&lt;p&gt;See &lt;a href="https://blog.gitguardian.com/a-complete-guide-to-transport-layer-security-tls-authentication/" rel="noopener noreferrer"&gt;TLS Authentication&lt;/a&gt; for foundational implementation context.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tier 4: API Keys and Static Tokens
&lt;/h3&gt;

&lt;p&gt;API keys persist because they are simple, universally supported, and require no infrastructure. In agentic AI authentication, they represent the most common source of preventable risk.&lt;/p&gt;

&lt;p&gt;They are typically long-lived bearer credentials with limited granular scoping. Autonomous AI systems amplify the danger because they generate the exact artifacts (code commits, CI logs, configuration files) where credentials historically get embedded. Revocation is manual and reactive; blast radius is potentially broad and persistent until discovery.&lt;/p&gt;

&lt;p&gt;API keys are acceptable only when no stronger mechanism exists, and exclusively under strict compensating controls: vault-backed storage, unique key per agent, minimized TTL, and continuous secret exposure monitoring. Even then, they are a transitional choice, not a strategic architecture.&lt;/p&gt;
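&lt;p&gt;The compensating controls above can be sketched as follows. This is an in-memory stand-in for illustration only; a real deployment would use a dedicated secrets manager, and all names here are assumptions.&lt;/p&gt;

```python
import secrets
import time

# Illustrative stand-in for vault-backed key issuance: one unique key per
# agent, a minimized TTL, and revocation as a first-class operation.
class AgentKeyVault:
    def __init__(self, ttl_seconds=900):
        self.ttl = ttl_seconds
        self._keys = {}  # agent_id -> (key, expiry timestamp)

    def issue(self, agent_id):
        # Each agent gets its own key; reissuing replaces the old one.
        key = secrets.token_urlsafe(32)
        self._keys[agent_id] = (key, time.time() + self.ttl)
        return key

    def validate(self, agent_id, key):
        record = self._keys.get(agent_id)
        if record is None:
            return False
        stored, expiry = record
        # Constant-time comparison plus TTL enforcement.
        return secrets.compare_digest(stored, key) and time.time() < expiry

    def revoke(self, agent_id):
        self._keys.pop(agent_id, None)
```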

&lt;h3&gt;
  
  
  Tier 5: Hardcoded Secrets
&lt;/h3&gt;

&lt;p&gt;Embedding credentials in source code, prompts, configuration files, or logs is not a technical shortcut. It is a governance failure. It creates permanent exposure risk, spreads credentials across repositories and artifacts that are difficult to fully enumerate, and produces compliance violations with no clean remediation path. The right response is not better scanning; it is architectural redesign.&lt;/p&gt;




&lt;h2&gt;
  
  
  Choosing the Right AI Authentication Model by Environment
&lt;/h2&gt;

&lt;p&gt;Authentication must match the runtime context.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;trusted cloud environments&lt;/strong&gt;, managed identities and short-lived tokens are the baseline. Static secrets have no place where the cloud platform provides an identity substrate. The primary governance task is ensuring roles are scoped per agent, not shared across workloads.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;SaaS and cross-organization integrations&lt;/strong&gt;, OAuth 2.1 with scoped delegated access is the correct default. Scope discipline is the operative control: agents should hold the minimum permissions required for their specific task, not a superset defined at initial integration.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;zero-trust internal architectures&lt;/strong&gt;, mTLS with automated certificate management provides the strongest assurance. Mutual authentication ensures both parties verify identity; the agent cannot be impersonated by anything that cannot present a valid certificate and prove possession of the private key.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unmanaged endpoints and edge agents&lt;/strong&gt; must be treated as untrusted by default. They should never store long-lived secrets, must proxy authentication through a trusted backend, and should rely on ephemeral tokens only. The operational context is constrained; the security model must be the most conservative, not the most convenient.&lt;/p&gt;




&lt;h2&gt;
  
  
  Securing AI Authentication Across the Agent Lifecycle
&lt;/h2&gt;

&lt;p&gt;Authentication governance cannot be a one-time decision at deployment. It requires continuous control across the full agent lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creation&lt;/strong&gt;: Every AI agent should be registered in IAM with a unique identity, mapped to a named business owner, and documented with its intended scope and credential requirements before deployment. This is the point where AI authentication vulnerabilities are either designed in or designed out. Security teams that skip identity registration here will spend considerably more effort reconstructing it after a breach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operation&lt;/strong&gt;: Continuous secret scanning of AI-generated outputs, monitoring of API call patterns, and periodic privilege review are not optional for agents in production. For high-risk or high-impact actions, OIDC Client-Initiated Backchannel Authentication (CIBA) offers a mechanism that most teams have not yet adopted but should be on the roadmap. CIBA lets an agent pause, request human approval through an async channel, and resume with a cryptographically verifiable token binding the human's consent to the specific action. The audit trail reads: "Agent performed X, approved by &lt;a href="mailto:alice@company.com"&gt;alice@company.com&lt;/a&gt; at 09:22 UTC." The token is short-lived, single-purpose, and bounded to the approved context — the correct architecture for agents operating near sensitive decisions.&lt;/p&gt;

&lt;p&gt;The practical limitation of any human-in-the-loop mechanism is &lt;strong&gt;consent fatigue&lt;/strong&gt;. When agents operate at volume, approval requests become noise and users begin approving everything reflexively. The scalable answer is not eliminating human oversight but shifting it upstream: define policy before the agent runs, not at runtime. IGA (Identity Governance and Administration) guardrails specify what categories of action require human approval, what can be automatically permitted within defined bounds, and what is blocked outright. The agent then operates within a pre-authorized policy envelope rather than generating individual approval requests. CIBA handles the exceptions; policy handles the rule.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;: Credential rotation should be tied to lifecycle events (deployment updates, configuration changes, scope modifications), not to calendar schedules. Scope should be reassessed whenever an agent's capabilities change, because capability expansion without privilege review is how agents accumulate standing access invisibly over time.&lt;/p&gt;
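&lt;p&gt;Event-driven rotation reduces to a simple rule: a qualifying lifecycle event, not a date, triggers a fresh credential. A minimal sketch, with illustrative event names:&lt;/p&gt;

```python
import secrets

# Illustrative event names; a real system would wire these to deployment
# pipelines, config management, and anomaly detection.
ROTATION_EVENTS = {"deployment_update", "config_change", "scope_change", "anomaly_detected"}

def maybe_rotate(current_credential, event):
    if event in ROTATION_EVENTS:
        return secrets.token_urlsafe(32)  # fresh credential replaces the old one
    return current_credential  # routine events leave the credential untouched
```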

&lt;p&gt;&lt;strong&gt;Decommission&lt;/strong&gt;: Immediate credential revocation and identity removal from IAM are mandatory at end of life, but these are not the same operation. Revocation terminates a specific active token; deprovisioning removes the agent's registered identity entirely — all credentials, stored sessions, persistent permissions, and any delegated scope chains the agent holds. Without a formal deprovisioning workflow, retired agents become ghost identities: removed from operational rotation but still technically credentialed. Emerging SCIM extensions for agentic identity (the AgenticIdentity schema draft) aim to standardize this lifecycle event so it can be automated rather than reconstructed manually per agent.&lt;/p&gt;
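&lt;p&gt;The distinction between revoking a token and deprovisioning an identity is easiest to see side by side. This is an in-memory stand-in for an IAM registry; the structure is illustrative, not a vendor API:&lt;/p&gt;

```python
# Sketch contrasting revocation (kill one token) with deprovisioning
# (remove the registered identity and everything attached to it).
class AgentRegistry:
    def __init__(self):
        self.identities = {}  # agent_id -> {"tokens": set, "permissions": set}

    def register(self, agent_id):
        self.identities[agent_id] = {"tokens": set(), "permissions": set()}

    def revoke_token(self, agent_id, token):
        # Narrow: terminates one specific active credential only.
        self.identities[agent_id]["tokens"].discard(token)

    def deprovision(self, agent_id):
        # Broad: removes the identity entirely -- tokens, permissions,
        # and any delegated scope chains the agent holds go with it.
        self.identities.pop(agent_id, None)
```

Without the second operation, a "retired" agent whose tokens were revoked still exists as a credentialed ghost identity that can be re-issued access.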




&lt;h2&gt;
  
  
  AI Agent Authentication Best Practices for 2026
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Treat AI agents as governed non-human identities.&lt;/strong&gt; Machine identities already outnumber human identities at enterprise organizations by at least 45 to 1, a ratio accelerating with AI adoption. Without formal registration, scoped permissions, owner assignment, and audit inclusion, AI agents accumulate as shadow identities, the most capable actors in your infrastructure with the least oversight. For most enterprises, the practical path to governing AI agent credentials runs through existing PAM (Privileged Access Management) tooling. AI agents are NHIs; PAM vendors are already extending their platforms to cover them, and security teams should evaluate that coverage before building separate pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Eliminate static secrets in favor of short-lived credentials.&lt;/strong&gt; Static API keys and hardcoded tokens are a permanent exposure risk in systems that generate artifacts at machine speed. Every credential should have an enforced expiration, not one set as an afterthought.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enforce scoped access and unique identities per agent.&lt;/strong&gt; Shared credentials between agents eliminate the attribution clarity required for both compliance and incident response. Each agent needs its own identity, its own role, and its own minimum-necessary permission set. Overprivileged AI agents represent the highest-severity authentication security risk in enterprise environments precisely because they operate continuously, autonomously, and at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Isolate identity for agents serving multiple users.&lt;/strong&gt; Enterprise AI assistants that serve multiple users concurrently face an additional risk: cross-context data leakage. When a single agent instance operates under a shared identity and processes multiple users' data, there is no credential-level boundary preventing one user's context from contaminating another's session or response. Each concurrent user context should be treated as a distinct authentication scope, with data access enforced at the identity layer, not only at the application level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuously scan AI-generated outputs for secrets.&lt;/strong&gt; AI agents produce the exact artifacts (code commits, configuration files, CI logs) where credentials historically get embedded. Secret scanning integrated into AI-assisted development workflows is the compensating control that keeps agentic credential sprawl from becoming unmanageable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automate credential rotation and lifecycle controls.&lt;/strong&gt; Manual rotation does not scale to systems operating autonomously around the clock. Rotation triggers should be event-driven. Revocation on anomaly detection should be immediate and validated in advance, not discovered to be broken during an incident.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Maintain kill-switch capability.&lt;/strong&gt; Every autonomous system must have a tested shutdown pathway: centralized revocation, emergency privilege stripping, runtime isolation, and documented incident response playbooks for agent compromise. Autonomous systems operating without containment mechanisms are not secured. They are presumed safe.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Future of AI Authentication
&lt;/h2&gt;

&lt;p&gt;The current state of AI authentication reflects a transition period. The bearer token model, where possession equals access, is structurally mismatched with systems that generate artifacts, chain tool calls, and operate across trust boundaries without human oversight.&lt;/p&gt;

&lt;p&gt;The direction of travel is toward cryptographic identity at the request level. &lt;a href="https://www.ietf.org/archive/id/draft-patwhite-aauth-00.html?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;AAuth&lt;/a&gt;, a specification under development by Dick Hardt (author of OAuth 2.0), proposes a foundation where agents are first-class identities and every HTTP request is signed by the agent's key pair. Bearer tokens become irrelevant because a stolen token without the corresponding private key cannot be replayed. Delegation chains become explicit, visible, and auditable rather than reconstructed after the fact from logs.&lt;/p&gt;

&lt;p&gt;Multi-agent delegation chains introduce a constraint that standard token formats cannot enforce: each hop should only be able to reduce permissions, never expand them. With bearer tokens, there is no mechanism to prevent a sub-agent from reusing a delegated credential at full scope. Token formats like Biscuits and Macaroons address this through offline scope attenuation: the agent receiving a token can cryptographically restrict it before passing it downstream, and those restrictions cannot be removed by any party that doesn't hold the original minting key. This becomes the correct architecture for recursive agent orchestration, where the root credential should never be reachable from the leaf agents.&lt;/p&gt;

&lt;p&gt;AI-to-AI authentication will become a standard requirement as multi-agent architectures proliferate. When one autonomous agent instructs another, each hop in the chain must independently verify identity and authorization. Without this, a single compromised agent can cascade instructions through a downstream network that has no mechanism to question them.&lt;/p&gt;

&lt;p&gt;Self-provisioning identities, dynamic privilege negotiation, and real-time identity risk scoring will require policy enforcement infrastructure that evaluates agent behavior continuously, not just at initial credential issuance. The boundary between AI authentication and non-human identity governance will disappear, since they are the same discipline operating at different layers of the same problem.&lt;/p&gt;

&lt;p&gt;Security leaders building identity-first controls today are simply ahead of schedule.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary: Authentication Is the Primary Containment Boundary
&lt;/h2&gt;

&lt;p&gt;AI authentication determines access scope, breach containment speed, and lifecycle risk. The mechanism matters less than the scope it enforces, the lifetime it applies, the identity it uniquely represents, and the speed at which it can be revoked.&lt;/p&gt;

&lt;p&gt;Secure authentication architecture enables safe AI autonomy. Organizations that build it will treat AI agents as governed non-human identities, eliminate standing credentials, and validate their revocation capability before they need it. The ones that do not will discover, during an incident, that their most capable systems were also their least monitored.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;How do AI agents handle authentication in enterprise environments?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI agents typically authenticate using the same mechanisms as other non-human identities: API keys, OAuth tokens, service accounts, workload identities, or certificates. The difference is that autonomous agents operate continuously and may dynamically expand their integration surface, which makes authentication scope, token lifetime, and revocability far more critical than in traditional machine-to-machine use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are the most common AI authentication vulnerabilities?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The most common vulnerabilities include hardcoded API keys, long-lived static tokens, overprivileged service accounts, overly broad OAuth scopes, and poor credential rotation practices. In autonomous systems, these weaknesses are amplified because AI agents can replicate configuration errors, generate artifacts containing secrets, and operate at scale without direct human oversight.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the best authentication method for AI agents?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There is no universal answer; the appropriate model depends on the environment. For SaaS and cross-organization integrations, OAuth 2.1 with short-lived, scoped tokens is typically preferred. For trusted cloud workloads, workload identity federation or managed identities reduce secret exposure risk. In high-security microservices environments, mTLS and certificate-based authentication provide stronger assurance through proof-of-possession. The strongest model across all contexts is one that minimizes static credentials and enables rapid, centralized revocation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Are API keys secure enough for AI systems?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;API keys are convenient but inherently risky for continuously operating agents. They are typically long-lived, replayable, and often lack granular scoping. If used, they must be vaulted, uniquely assigned per agent, rotated frequently, and continuously monitored for exposure. Whenever a stronger mechanism exists, API keys should be replaced; they are acceptable as a transitional choice under strict controls, not as a strategic architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How should organizations rotate AI agent credentials safely?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Credential rotation for AI agents should be automated and tied to lifecycle events: deployment updates, configuration changes, scope modifications, or anomaly detection triggers. Mature organizations use dynamic secret issuance from vault systems, enforce expiration by default, and test revocation workflows before they are needed in an incident. Manual rotation processes do not scale for autonomous systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Should AI agents have separate service accounts?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes. Each AI agent should have its own unique identity and service account. Shared credentials create attribution blind spots and significantly increase blast radius if compromised. Unique identities enable scoped permissions, centralized revocation, clear audit trails, and effective incident response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do you audit and monitor AI authentication at scale?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Auditing AI authentication requires a centralized identity inventory, continuous monitoring of secret exposure, behavioral telemetry analysis, and integration with IAM recertification processes. Security teams should be able to answer four questions for every agent in production: which credentials does it use, what systems can it access, when were its privileges last reviewed, and how quickly can access be revoked. Without those answers, AI authentication risk becomes invisible governance debt.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>devsecops</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>When We Use AI To Ship Fast, Secrets Spread Fast</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 16 Apr 2026 14:54:21 +0000</pubDate>
      <link>https://forem.com/gitguardian/when-we-use-ai-to-ship-fast-secrets-spread-fast-4c70</link>
      <guid>https://forem.com/gitguardian/when-we-use-ai-to-ship-fast-secrets-spread-fast-4c70</guid>
      <description>&lt;p&gt;One of the largest takeaways from the latest &lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2026?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian State of Secrets Sprawl Report&lt;/a&gt; is that in 2025, the way we all build software changed.&lt;/p&gt;

&lt;p&gt;First, there are more developers than ever before. Publicly active developers on GitHub grew 33% in 2025, and 54% of active developers (those who have pushed code to GitHub) made their first commit that year. That means a lot of new code, a lot of new projects, and a lot of new integrations arriving all at once. More builders usually means more issues in code and, consequently, more credentials being leaked. This was certainly a factor contributing to the 28,649,024 new secrets GitGuardian found in public GitHub commits across 2025, a 34% year-over-year increase and the largest annual jump in the report's history. But it is more than just new devs and new agents: since 2021, leaked secrets have grown 152%, while the public developer population grew 98%. Secrets are leaking faster than the developer base is growing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhykrz7w3xogf0hlelmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhykrz7w3xogf0hlelmr.png" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But this year, there is another force in the system. AI has become part of the default software stack for almost all developers.&lt;/p&gt;

&lt;p&gt;That shift shows up clearly in the credential data. The report found 1,275,105 AI-service secrets exposed in 2025, with 81% year-over-year growth in AI-related service secrets. It also found that 12 of the top 15 fastest-growing leaked secret types were AI services. That is a strong signal that AI tooling is no longer a sidecar to the stack. It is the stack, or at least a growing layer of it.&lt;/p&gt;

&lt;p&gt;AI does not have to invent a new category of security mistakes to change the risk picture. It only has to increase the number of services, tools, workflows, and machine identities required to ship even ordinary software. More moving parts mean more keys. More keys mean more ways to leak them. The mechanics are familiar. The pace is new. Let's take a deeper look at what we found in the research across 2025.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI is now a real credential category
&lt;/h2&gt;

&lt;p&gt;The AI story goes well beyond model-provider keys. That is certainly a part of it, but just one aspect. The more interesting pattern is that the whole AI application layer is now visible in leaked secrets data. The fastest-growing detectors include the surrounding services that developers use to make AI features work in production.&lt;/p&gt;

&lt;p&gt;However, since LLMs are at the heart of all of this new infrastructure, let's start our analysis there.&lt;/p&gt;

&lt;h3&gt;
  
  
  LLM Platforms: The "Model as a Service"
&lt;/h3&gt;

&lt;p&gt;Of all the model providers, Deepseek showed the greatest change from the previous report, with 2,179% growth year-over-year.&lt;/p&gt;

&lt;p&gt;xAI showed similar year-over-year growth to OpenAI, at 555%, but with a volume smaller than Deepseek's, at 6,273 leaks. Rounding out the top-growing platforms that developers call directly are Mistral, up 578%; Claude, up 220%; LlamaCloud, up 109%; and Cohere, up 77%. These numbers show us that production-grade model access has become mainstream in real software.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvg5989qug03nebi84cb5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvg5989qug03nebi84cb5.png" alt="TOP 15 Fastest Growing Specific Detectors (YoY)" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other findings suggest that developers and teams are comparing and using multiple model vendors for their projects, whether for failover and resiliency in their apps or to find the model that best supports their use case.&lt;/p&gt;

&lt;p&gt;The best example of this trend is OpenRouter, which saw an astounding 4,661% increase in leaks. OpenRouter does not operate a single proprietary model; it acts as a gateway that lets developers access and switch among multiple models through one API.&lt;/p&gt;

&lt;p&gt;NVIDIA is another example: its keys often relate to AI infrastructure or AI service platforms that help teams run or deploy models, rather than to a direct "one provider, one model" API offering.&lt;/p&gt;

&lt;h3&gt;
  
  
  Open Models Also Drive Leaks
&lt;/h3&gt;

&lt;p&gt;Hugging Face keys leaked over 130,000 times across public GitHub in 2025, roughly the same as the previous year. Hugging Face remains important, though, as it often connects the "open model" world to inference platforms, which is part of the reason we think we are seeing Groq Cloud API keys leaking at a 211% higher rate in our findings. Groq was mainly known for its compute platform and inference hardware, enabling people to efficiently run certain open-source models, but at the end of 2024 it began offering API access through Groq Cloud.&lt;/p&gt;

&lt;h3&gt;
  
  
  More AI Leaks From The Growing Support Ecosystem
&lt;/h3&gt;

&lt;p&gt;Other findings point to LLM usage being wrapped in a real application layer. Teams are building full products around the model, and those products need orchestration, monitoring, and data services.&lt;/p&gt;

&lt;p&gt;Supabase is the clearest example of the adoption curve turning into leakage risk. In the &lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2025?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;2025 report&lt;/a&gt;, we reported +97% year-over-year growth. This year, that growth rate jumps to +992%. Supabase is widely loved because it makes it easy to stand up a database-backed application quickly, and it has become a common default for modern AI projects, especially when developers want a fast on-ramp to vector-like workflows and retrieval patterns. The more teams reach for an AI-friendly "fast database" to ship quickly, the more likely those developers, many of them new, are to leak keys.&lt;/p&gt;

&lt;p&gt;Another supporting service is LangChain, which had ~200% more leaks. LangChain is an orchestration framework that helps developers connect models to tools, prompts, and workflows. Its keys show up when teams operationalize multi-step AI features. Other examples include Weights and Biases keys, up 200%, and Jina keys, up around 400%, both of which are used to track experiments, evaluate outputs, and monitor performance over time, which is typical when an AI feature is being improved and maintained like any other production system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvvv0sbi2s0o46efux5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvvv0sbi2s0o46efux5d.png" alt="AI Service Detectors Growth (YoY %)" width="742" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Retrieval and access layers are also showing massive growth, with Perplexity keys up 750%. Perplexity can be used as a search and retrieval API, so its key shows up when teams route questions through a service that finds sources and returns context to feed into the model.&lt;/p&gt;

&lt;p&gt;For AI media and voice infrastructure, we see the largest changes from Vapi, up 780%. This is a developer platform for voice-based AI agents using real-time audio. Seeing these keys leak suggests that AI is moving into customer-facing experiences like voice support, sales calls, and content production, introducing new vendors and new secrets into everyday repos.&lt;/p&gt;

&lt;p&gt;Brave Search, whose keys are up 135%, is trusted by developers to run web-style searches, pulling back relevant pages or snippets that the AI can reference. This signals that teams are building complete AI systems in production, where a single "call the model" integration quickly becomes a larger network of services and credentials that need to be managed.&lt;/p&gt;

&lt;p&gt;When you add these together, you get a clear signal: a meaningful slice of leaked secrets in public repos is showing up because developers are building AI systems. If teams weren't doing AI projects, these secret types would not be appearing at anything like these levels.&lt;/p&gt;

&lt;h4&gt;
  
  
  New AI Control Plane Players
&lt;/h4&gt;

&lt;p&gt;The other broad category in our report is platforms that sit one layer above the models themselves, used to assemble complete AI applications by coordinating prompts, tools, data sources, and deployments. These become the control plane where teams build and run agent-style workflows.&lt;/p&gt;

&lt;p&gt;The platforms showing the highest year-over-year growth in leaks are Dify, at 570%, and Coze, at 500%. These platforms help teams build agentic systems, chain tools together, manage prompts, and deploy AI features quickly, without writing everything from scratch. Coze PATs are of particular concern, as these access tokens often carry very broad account-level permissions and show up in commonly leaked artifacts, such as local configuration files and automation scripts.&lt;/p&gt;

&lt;p&gt;The data suggests that developers are adopting higher-level agent-building platforms, increasing their speed to market, but also increasing the number of integrations and credentials in a typical AI project.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Claude Co-signed Commits Issue: 2x The Base Rate Of Leaked Secrets
&lt;/h2&gt;

&lt;p&gt;One of the most significant findings in the 2026 report has less to do with AI services and more to do with how software is now produced. GitGuardian found that AI-assisted commits significantly contribute to secrets sprawl, and that Claude Code co-authored commits leak secrets at roughly 2x the baseline across public GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyaaizplaila8g545nlh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyaaizplaila8g545nlh.png" alt="Percent of Secrets For All Commits vs Commits co-authored by Claude Code" width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2025 was the breakout year, with adoption accelerating early and then ramping sharply in the second half as multiple assistants gained traction. By year-end, AI-assisted commits had reached their highest levels.&lt;/p&gt;

&lt;p&gt;This point is easy to overreact to, so it is worth being precise. The report is not saying one assistant is uniquely reckless or that AI coding tools are the root cause of secrets leakage. The more grounded reading is simpler. When code production speeds up, insecure patterns scale with it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3r7reyl165hkuptkjsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3r7reyl165hkuptkjsg.png" alt="Secrets per 1k commits from human-only signed vs Claude Code cosigned commits" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That is the real risk. AI-assisted development makes it easier to scaffold projects, test integrations, spin up backends, wire third-party services together, and publish working code quickly. Those are all useful things. They also happen to be the exact moments when credentials get pasted into config files, shell histories, local scripts, repo examples, quick demos, and half-finished automation.&lt;/p&gt;

&lt;p&gt;Once the pace increases, those habits do not disappear. They become easier to repeat.&lt;/p&gt;

&lt;p&gt;AI-generated code often looks finished before it is production-ready. A developer can get a feature working quickly, prove it out, and move on before anyone has asked the boring but important questions: where should this secret live, who owns it, how does it rotate, what is its scope, what breaks if it expires, and what logs or files might accidentally retain it?&lt;/p&gt;

&lt;p&gt;The AI-assisted coding data from the report is evidence that software production has changed. We should expect more commits, more prototypes, more integrations, and more implicit machine identities. We should also expect those identities to be created faster than many organizations can realistically govern them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Concrete Numbers From MCP Research
&lt;/h2&gt;

&lt;p&gt;One of the strongest additions in this year's report is the section on Model Context Protocol (MCP). MCP emerged in early 2025 as a common way to connect large language models to external tools and data sources. GitGuardian research found 24,008 unique secrets exposed in MCP configuration files. The top leaked secret types in those files were Google API keys at 19.2%, PostgreSQL database connection strings at 14%, Firecrawl at 11.9%, Perplexity at 11.2%, and Brave Search at 11%.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntippyz91bwx2kh269i8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntippyz91bwx2kh269i8.png" alt="TOP 10 Valid Unique Secrets in MCP configuration" width="745" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That list is revealing. It maps almost perfectly onto the AI support layer that now shows up everywhere else in the report. Search. Retrieval. Data access. Developer tooling. External APIs. MCP gives teams a standard way to connect models to the real world. It is risky for the exact same reason.&lt;/p&gt;

&lt;p&gt;New standards often spread through examples. People copy sample configs, tweak them, and keep moving. If those examples depend on hardcoded credentials in local files, the unsafe pattern spreads alongside the standard. This is not an MCP-specific criticism. It is a familiar story in software. Convenience wins first. Governance arrives later. By then, the insecure pattern is already normal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a90brapg3pzcxiskfmu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a90brapg3pzcxiskfmu.png" alt="Unique Secrets Count Per Month - MCP configuration" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The MCP findings make the broader AI story more tangible. AI systems are not self-contained. They are built to reach out to tools, data, and services. Every connection point is another identity. Every identity needs ownership, scope, rotation, and visibility. If any of those are weak, the stack accumulates risk quietly and quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generic Detectors For The AI Ecosystem
&lt;/h2&gt;

&lt;p&gt;It is important to note that this year, for the first time, our machine-learning classification for generic secrets introduces an AI category. These are cases where we have strong signals that the key correlates to AI-related projects. 4.1% of all keys we found fell into this AI-labeled generic category with high confidence.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkfqydar8petd0b16c5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkfqydar8petd0b16c5r.png" alt="Generic Secrets by Category (%)" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's the important qualifier: "with high confidence." Generic secrets also include a very large "other/unknown" bucket (around 41.9%), meaning we can't confidently map them to a specific purpose. That 4.1% is almost certainly a floor, not a ceiling. The AI-related fraction of generic secrets may be significantly higher.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bigger Story Is Identity Sprawl Driven By AI
&lt;/h2&gt;

&lt;p&gt;The growing problem is the volume of Non-Human Identities created by modern software development. Service accounts, API keys, database credentials, automation tokens, deployment secrets, bot identities, project-scoped keys, and agent tool credentials. These are the connective tissue of software now. AI adds more of them by default.&lt;/p&gt;

&lt;p&gt;GitGuardian's policy-breach distribution makes that plain. Long-lived secrets account for 60% of policy breaches. Internally leaked secrets make up 17%. Duplicated secrets account for 16%. Publicly leaked secrets are 5%. Cross-environment leaks are 1%, and reused secrets are 0.7%. That is a useful breakdown because it shows the dominant problem is lifecycle negligence. Secrets live too long, spread too widely, and get copied faster than they are governed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkomb4vtkvp0h1gz13dy9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkomb4vtkvp0h1gz13dy9.png" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI increases creation velocity inside organizations that already struggle with ownership and remediation. More identities are born. More secrets are copied into more places. Fewer teams know which ones matter most, who should rotate them, and what systems depend on them. The stack expands. The discipline does not expand at the same rate.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Learned About AI Use
&lt;/h2&gt;

&lt;p&gt;AI did not create secrets sprawl, but it is now accelerating every condition that makes secrets sprawl worse. More people can build. More code gets shipped. More third-party services get connected. More machine identities get created. More local and temporary configuration surfaces appear. More insecure patterns survive because the fastest path to shipping still usually involves "just add a key."&lt;/p&gt;

&lt;p&gt;That is why the real AI lesson in this report is not "watch your model keys." It is broader. The AI stack is now part of the normal software stack, and the normal software stack already runs on credentials. Once that clicks, secrets security stops looking like a niche problem for DevOps teams and starts looking like a core operating problem for software organizations.&lt;/p&gt;

&lt;p&gt;The teams that handle this shift best will not just detect more exposed credentials. &lt;a href="https://www.gitguardian.com/nhi-governance?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;They will get better at controlling the lifecycle behind them&lt;/a&gt;. It will empower people to confidently answer, "Who created the identity? What can it reach? How narrowly is it scoped? Where is it stored? When does it rotate? What depends on it? When should it die?"&lt;/p&gt;

&lt;p&gt;That work is less glamorous than shipping a new assistant, agent, or AI feature. It is also the difference between speed that compounds and speed that eventually breaks things.&lt;/p&gt;

&lt;p&gt;AI is making software easier to produce. When software gets easier to produce, identities get easier to create. And when identities get easier to create, secrets spread faster than most teams are ready for.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>devsecops</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>BSides SF 2026: Looking At Security Beyond The Next Big Bet</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Mon, 06 Apr 2026 13:28:38 +0000</pubDate>
      <link>https://forem.com/gitguardian/bsides-sf-2026-looking-at-security-beyond-the-next-big-bet-21lg</link>
      <guid>https://forem.com/gitguardian/bsides-sf-2026-looking-at-security-beyond-the-next-big-bet-21lg</guid>
      <description>&lt;p&gt;San Francisco has always had a talent for turning risk into infrastructure, such as when &lt;a href="https://en.wikipedia.org/wiki/Liberty_Bell_(game)?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Charles Fey invented the slot machine there during the Gold Rush&lt;/a&gt;. Today, we have another nondeterministic device for fortune seekers willing to pull a lever and see what comes back. We call it AI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsidessf.org/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;BSides San Francisco 2026&lt;/a&gt; felt built for a more modern version of that wager. The stakes today are identities, tokens, agents, permissions, and the growing gap between what systems are supposed to do and what they actually do in production.&lt;/p&gt;

&lt;p&gt;Happening the weekend before RSA Conference, this is one of the largest of the BSides events globally. This year, 2,965 participants attended 92 talks, 8 workshops, 11 interactive sessions, a CTF, and many, many other activities. This year's event was made possible by the help of 235 volunteers and a truly tireless organizing team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq48qi6ip7dpizfmu76p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq48qi6ip7dpizfmu76p.png" alt="BSides SF final attendance metrics" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;People were there to share notes on how to keep control when software delivery speeds up, AI changes how code and infrastructure are produced, and attackers are increasingly happy to work through identity and trust instead of smashing through a perimeter. Here are just a few highlights from this year's edition of BSidesSF.&lt;/p&gt;

&lt;h2&gt;
  
  
  Time Travel Without Nostalgia
&lt;/h2&gt;

&lt;p&gt;In the session from &lt;a href="https://www.linkedin.com/in/anna-westelius-72a47048?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Anna Westelius, Head of Security, Privacy &amp;amp; Assurance at Netflix&lt;/a&gt;, "Let's Do the Timewarp Again! A Look Back to Move Forward," she presented security history as a series of pivots rather than a straight line. Instead of treating today's instability as unprecedented, she walked through earlier shifts in the early internet, the worm era, and the move to the cloud, showing how each period first felt chaotic, then gradually produced better defaults, stronger habits, and more durable systems.&lt;/p&gt;

&lt;p&gt;Anna made the case that security has repeatedly moved from heroics to engineering. Cloud, once framed as inherently unsafe, matured into a place where private-by-default storage, identity-centric controls, and better primitives could outperform old on-premises assumptions when teams rebuilt for the environment instead of dragging old workflows into it. Today's fire drills are triggered by new CVEs, not by active compromises.&lt;/p&gt;

&lt;p&gt;She reminded us that progress comes from community and deliberately laying out well-paved roads that are easier to travel. Anna argued that the field is finally in a position to measure meaningful risk reduction, design for humans instead of blaming them, and start tackling legacy risk that used to feel too sprawling to touch. Maturity is not perfection. It is building enough scaffolding that the next crisis does not have to be improvised.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sns3hy0zkp6wj29t6l4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sns3hy0zkp6wj29t6l4.png" alt="Anna Westelius" width="800" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Threat Model Meets Production
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/farshadabasi/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Farshad Abasi, CoFounder of Eureka DevSecOps and CSO of Forward Security&lt;/a&gt;, gave a session called "Your Threat Model Is Lying to You: Why Modeling the Design Isn't Enough in 2026," where he laid out a sharp critique of threat modeling, at least the way many teams still practice it. The problem was not the exercise itself, but that design intent keeps getting treated as reality, even when production systems drift, dependencies multiply, and delivery speed outruns any annual review cycle.&lt;/p&gt;

&lt;p&gt;Farshad explained that threat models often describe what teams think they built, while the real system includes transitive dependencies, infrastructure changes, deployment quirks, and configuration choices that never made it into the diagram. He pointed to the need for a feedback loop between the model and the evidence teams already collect from SAST, SCA, DAST, cloud findings, and deployment telemetry. A finding should not just confirm a known risk. It should be able to expose a broken assumption and force the model to update.&lt;/p&gt;

&lt;p&gt;That shift has real consequences, as it moves threat modeling out of a compliance drawer and back into engineering rituals like backlog refinement, pull requests, and post-finding analysis. The problem is rarely the first known issue. It is the invisible dependency, the quietly expanded permission, or the workflow that changed faster than the security model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbp10zja9lf5hmj09pf1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbp10zja9lf5hmj09pf1.png" alt="Farshad Abasi" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tokens Are the New Currency
&lt;/h2&gt;

&lt;p&gt;In "Breaking Tokens: Modern Attacks on OAuth, OIDC, and JWT Auth Flows," &lt;a href="https://www.linkedin.com/in/bhaumikshah2?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Bhaumik Shah, CEO at SecurifyAI&lt;/a&gt;, presented identity failure as an application architecture problem, not just an authentication problem. He covered examples of token replay, weak audience validation, trust confusion between identity providers, and the dangerous habit of treating a valid token as universally trustworthy.&lt;/p&gt;

&lt;p&gt;He quickly moved from protocol language to operational consequences in this session, sharing that a token validated in one place could be replayed somewhere else. An app without proper validation might accept a token from the wrong issuer, or a federated environment could end up granting the same privileges to identities that arrived through very different trust paths. In practice, that means an organization can enforce MFA at login and still leave the actual session material portable enough to be abused elsewhere.&lt;/p&gt;

&lt;p&gt;Bhaumik's mitigation advice was crisp and overdue. We should bind privileges to high-trust identity providers and validate the issuer and subject together instead of trusting email alone. We also need to narrow the scopes so a stolen token does not become a skeleton key. He talked about the fact that identity is no longer just about proving who logged in. It is about preserving trust boundaries after authentication, when tokens start moving between proxies, services, and automation.&lt;/p&gt;
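&lt;p&gt;A minimal sketch of that mitigation advice, assuming the token's signature has already been verified by a proper JWT library. The issuer, audience, and scope names here are hypothetical; the point is that privilege binds to issuer, audience, subject, and scope together, never to the token's mere validity:&lt;/p&gt;

```python
TRUSTED_ISSUERS = {"https://login.example.com"}   # hypothetical high-trust IdP
EXPECTED_AUDIENCE = "billing-api"

def authorize(claims: dict, required_scope: str) -> bool:
    """Accept a (signature-verified) token only when issuer, audience,
    subject, and scope all line up -- not because it parses as valid."""
    if claims.get("iss") not in TRUSTED_ISSUERS:
        return False                  # token minted by the wrong provider
    if claims.get("aud") != EXPECTED_AUDIENCE:
        return False                  # token replayed from another service
    if not claims.get("sub"):
        return False                  # no stable subject to pin privilege to
    return required_scope in claims.get("scope", "").split()

token = {"iss": "https://login.example.com", "aud": "billing-api",
         "sub": "user-42", "scope": "invoices:read"}
print(authorize(token, "invoices:read"))                          # True
print(authorize({**token, "aud": "other-api"}, "invoices:read"))  # False
```

&lt;p&gt;The second call shows the replay case: the same otherwise-valid token is rejected the moment it arrives at a service it was never issued for.&lt;/p&gt;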

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwt6du621dga94cqrcqf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwt6du621dga94cqrcqf.png" alt="Bhaumik Shah" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Hunting the Blind Spot on Developer Workstations
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://blog.securient.io/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Vinod Tiwari, Engineer at PIP Labs&lt;/a&gt; and first-time speaker, presented "Hunting Malicious IDE Extensions: Building Detection at Scale Across Developer Workstations." He walked us through a problem most security teams still barely measure: the IDE extension layer on developer machines has broad access to many dangerous things. Beyond source code, most dev machines have API keys, cloud credentials, deployment tooling, and local secrets, yet in many organizations, nobody has a complete inventory of what is installed. Vinod said that approvals are rare, monitoring is minimal, and only a couple of people in the room raised their hands when asked if they had MDM visibility into extensions at all. That gap matters because extensions sit inside one of the richest trust zones in the company.&lt;/p&gt;

&lt;p&gt;Vinod pointed to multiple cases from 2023 through 2025 in which malicious VS Code extensions were caught stealing SSH keys, typosquatted packages were bundled as IDE extensions to target crypto developers, and even widely installed extensions were found to exhibit data exfiltration behavior. With tens of thousands of extensions in the VS Code marketplace and a similar scale in JetBrains ecosystems, the review model has not kept pace with the level of access these plugins receive. He said there is often no sandbox here, and extensions can read and write local files, spawn processes, access the network, and, in some cases, quietly access clipboard data or other sensitive workflows.&lt;/p&gt;
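&lt;p&gt;Even without MDM, a team can start with a basic inventory. This sketch assumes the default VS Code on-disk layout, where each installed extension lives in a &lt;code&gt;publisher.name-version&lt;/code&gt; folder, and uses a hypothetical publisher allowlist as the policy:&lt;/p&gt;

```python
from pathlib import Path

def inventory_vscode_extensions(ext_dir: Path) -> list[str]:
    """Enumerate installed extension folders. The usual location is
    ~/.vscode/extensions; taking the path as a parameter lets the same
    check run against collected workstation snapshots too."""
    if not ext_dir.is_dir():
        return []
    return sorted(p.name for p in ext_dir.iterdir() if p.is_dir())

# Hypothetical policy: anything outside an approved-publisher set gets flagged.
APPROVED_PUBLISHERS = {"ms-python", "ms-vscode"}

def unapproved(extensions: list[str]) -> list[str]:
    return [e for e in extensions
            if e.split(".", 1)[0] not in APPROVED_PUBLISHERS]

print(unapproved(["ms-python.python-2026.1.0", "evil-pub.stealer-0.0.1"]))
# ['evil-pub.stealer-0.0.1']
```

&lt;p&gt;An inventory like this is crude, but it turns "nobody knows what is installed" into a diffable list a security team can actually review.&lt;/p&gt;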

&lt;p&gt;Vinod highlighted how private keys in &lt;code&gt;.env&lt;/code&gt; files, wallet seed phrases, and deployment credentials can all sit on developer workstations, turning a compromised extension into a direct path to irreversible damage. One malicious plugin is not just a workstation incident. It can become a wallet loss, a production compromise, or a supply chain exposure. Teams need to stop treating IDE extensions as harmless productivity add-ons and start treating them as privileged code execution inside the developer environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa663271zk0b4ggwmwmzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa663271zk0b4ggwmwmzv.png" alt="Vinod Tiwari" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Static security assumptions are failing faster
&lt;/h2&gt;

&lt;p&gt;Attackers are adapting, and our models of stability are aging out. Threat models go stale. Token assumptions do not survive microservices. Audit habits lag behind AI-assisted development. A control that made sense when releases were slower now becomes a blind spot because the underlying system changes too quickly.&lt;/p&gt;

&lt;p&gt;That is a meaningful shift for defensive work. It suggests that many security programs do not need more categories of findings as much as they need faster ways to reconcile expectations with reality. Drift, replay, hidden dependencies, and agent behavior all punish teams that treat security as a periodic review instead of continuous correction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Identity has moved into the center of the map
&lt;/h2&gt;

&lt;p&gt;The strongest sessions throughout the event kept orbiting identity, even when they were not labeled that way. Okta hunting, OAuth token replay, authorization architecture, and AI agents with production access all pointed to the same practical truth: trust now travels through sessions, tokens, permissions, and service relationships more often than through a clean user login moment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitguardian.com/files/the-state-of-secrets-sprawl-report-2026?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Secrets sprawl&lt;/a&gt;, overbroad scopes, orphaned permissions, and weak service boundaries all create the same outcome. They let valid-looking access travel farther than it should. In that environment, good hygiene is no longer a side practice. It is the structure that keeps the blast radius from becoming a business risk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security teams are being pushed closer to platform work
&lt;/h2&gt;

&lt;p&gt;Another conversation at the event was that the old separation between security, platform, and developer tooling is getting harder to sustain. Talks on authorization, malicious IDE extensions, AppSec, and AI agents all described a world where the useful control point is often the workflow itself. The winning pattern was not "scan more." It was "build the road correctly."&lt;/p&gt;

&lt;p&gt;That has implications for staffing and program design. Teams need people who can express policy in systems, not only people who can identify issues after the fact. Secure defaults, tool proxies, sidecars, telemetry feedback loops, and opinionated guardrails all came up because they let security become part of how work is done, instead of an extra approval step hovering outside it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What San Francisco Made Feel Obvious
&lt;/h2&gt;

&lt;p&gt;BSidesSF is a very forward-looking conference, in part because the event is made up of the practitioners, maintainers, and professionals who are actively working to keep us all safe. What seemed to be the consensus in the hallways was that the problems we face can't be solved with just more tools. They are going to be solved by organizational change in how we deal with trust and access.&lt;/p&gt;

&lt;p&gt;If the threat model no longer matches production, or a dependency incident exposes confusion about ownership, it is not just time to patch; it is time to reconsider whether your architecture and governance are aligned with your org's goals. This event left me with hope that we can make security better by focusing on the systems that issue trust, store secrets, define permissions, and drive automation. Reduce what can sprawl as you update what has drifted. We don't need to stretch our old security plans around new technology; we need to adopt better paved paths and guardrails for a safer future, especially as that future is increasingly driven by AI.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>ai</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>Honeytokens on the Developer Workstation: When Cleanup Takes Time</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 02 Apr 2026 15:00:54 +0000</pubDate>
      <link>https://forem.com/gitguardian/honeytokens-on-the-developer-workstation-when-cleanup-takes-time-15a4</link>
      <guid>https://forem.com/gitguardian/honeytokens-on-the-developer-workstation-when-cleanup-takes-time-15a4</guid>
      <description>&lt;p&gt;Supply chain security has moved closer to the humans with hands on the keyboard.&lt;/p&gt;

&lt;p&gt;For years, security teams have treated production systems, CI/CD pipelines, and identity infrastructure as the most sensitive parts of the software lifecycle. That is not wrong, but it is incomplete. The developer workstation belongs in that same conversation because it sits at the intersection of privilege, trust, and execution. It is where code is written, dependencies are installed, credentials accumulate, and trusted actions begin.&lt;/p&gt;

&lt;p&gt;Modern supply chain attacks are increasingly designed to land on the developer machine first. They do not need to smash through the front gate of production if they can quietly collect the keys from the laptop that already has access to private repositories, package publishing workflows, cloud consoles, build systems, and internal tooling.&lt;/p&gt;

&lt;p&gt;In 2025, and for the first time, campaigns such as &lt;a href="https://blog.gitguardian.com/shai-hulud-2/" rel="noopener noreferrer"&gt;Shai-Hulud&lt;/a&gt; showed us publicly just how many credentials could be harvested from a developer machine. In the &lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2026?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;State of Secrets Sprawl report&lt;/a&gt;, we showed that across 6,943 compromised machines in that supply chain attack, we found 33,185 unique secrets. At least 3,760 were still valid when we initially checked. Now a growing class of agentic &lt;a href="https://www.darkreading.com/application-security/supply-chain-attack-openclaw-cline-users?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;AI attacks&lt;/a&gt; aimed at local credentials and developer context shows the same pattern. The shortest path to enterprise impact often starts with developer access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3r7lutao78id2geq53a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3r7lutao78id2geq53a.png" alt="Shai-Hulud count of secrets per machine from the State of Secrets Sprawl Report 2026" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Developers have always been attractive targets. What has changed is the speed, scale, and plausibility of the attack paths. Poisoned packages, malicious plugins, compromised updates, and AI-assisted local automation all make it easier for adversaries to reach into a workstation and search for anything useful. That search is usually not abstract. It is practical. It looks for API keys, cloud tokens, SSH material, npm credentials, GitHub tokens, secrets in environment variables, plaintext config files, &lt;code&gt;.env&lt;/code&gt; files, shell history, logs, caches, and agent memory stores.&lt;/p&gt;

&lt;p&gt;The perimeter has not disappeared. It has become easier to recognize. The real perimeter is wherever the most privileged identities can be reached and abused. In many organizations, this includes the developer machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why developers should care
&lt;/h2&gt;

&lt;p&gt;Most developers do not think of themselves as part of the security boundary. They write code, and IT manages the laptop, while Security owns the policy. That division makes sense until an attacker uses a developer workstation as the easiest path into the systems that the developer already touches every day.&lt;/p&gt;

&lt;p&gt;This is why workstation security should not and cannot be framed as a request for every developer to become a full-time security engineer. The practical goal is smaller and more useful. If you reduce the chance that your machine becomes the easiest place to steal high-value access, you are also reducing real risk for your organization.&lt;/p&gt;

&lt;p&gt;That access is valuable for two reasons. First, developer systems often hold secrets and tokens with real privilege. Second, actions that originate from developer environments inherit trust. A package published from a maintainer machine, a commit signed with trusted credentials, a dependency update, a cloud login, or access to an internal support tool can all carry institutional trust that attackers would love to borrow.&lt;/p&gt;

&lt;p&gt;That is why the workstation deserves the same level of scrutiny and controls we already give to production systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  The immediate problem is plaintext secrets
&lt;/h2&gt;

&lt;p&gt;When attacks land on a developer's machine, they do not need to perform magic. They need to find useful material quickly. Too often, that material is sitting in plaintext.&lt;/p&gt;

&lt;p&gt;Secrets end up in source trees, local config files, tests, debug output, copied terminal commands, environment variables, shell profiles, AI tool configuration, and temporary scripts. They very commonly end up in &lt;code&gt;.env&lt;/code&gt; files that were supposed to be local-only but quietly became permanent, if not outright shared. Convenience turns into residue, and residue becomes opportunity for attackers.&lt;/p&gt;

&lt;p&gt;That is why one of the clearest next steps for developers is also one of the least glamorous: eliminate plaintext secrets from the workstation wherever possible.&lt;/p&gt;

&lt;p&gt;Replace hardcoded credentials with calls to approved secret managers. Move local secrets into the system keychain or an enterprise-approved password manager. Encrypt files at rest when secrets must exist in files at all. Use tools such as SOPS where that workflow makes sense. Better yet, move away from shared static secrets entirely and adopt identity-based authentication wherever feasible.&lt;/p&gt;
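&lt;p&gt;One small step in that direction: resolve secrets at runtime so the value never appears in the source tree. A minimal sketch, assuming the environment is populated by an approved secret manager, keychain helper, or CI injection (the variable name is hypothetical):&lt;/p&gt;

```python
import os

class MissingSecretError(RuntimeError):
    pass

def get_secret(name: str) -> str:
    """Resolve a secret at runtime instead of hardcoding it. The source here
    is the process environment; the code never contains the value itself,
    so there is nothing for a repo scan or laptop sweep to harvest."""
    value = os.environ.get(name)
    if not value:
        raise MissingSecretError(
            f"{name} is not set; fetch it from your secret manager, "
            "do not paste it into the source tree")
    return value

# Usage: STRIPE_API_KEY would be injected by tooling at launch, never committed.
# api_key = get_secret("STRIPE_API_KEY")
```

&lt;p&gt;Failing loudly when the variable is absent matters: a silent empty-string fallback is how hardcoded "temporary" values sneak back in.&lt;/p&gt;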

&lt;p&gt;The goal is reducing the amount of value an attacker can extract from any successful foothold.&lt;/p&gt;

&lt;h2&gt;
  
  
  The hard truth about remediation
&lt;/h2&gt;

&lt;p&gt;The most correct security answer here seems straightforward: reduce the use of plaintext secrets and move to stronger authentication. At the same time, we need to harden the workstation itself and standardize approved tooling. Reducing dependency risk is another priority, and detecting abuse earlier gives security teams the auditability they need.&lt;/p&gt;

&lt;p&gt;None of this happens instantly.&lt;/p&gt;

&lt;p&gt;Even good remediation plans take time because they touch real workflows. Changes require updates to training, tooling, access patterns, and team habits. Replacing a secret in code is easy on the surface. But replacing the workflow that caused ten copies of that secret to spread across laptops, config files, and build jobs is harder. Moving a team from local environment variables to a stronger secret retrieval model is possible, but it is not a same-day project. Moving from secret-based access to identity-based patterns takes longer still.&lt;/p&gt;

&lt;p&gt;Attackers do not pause while those projects are underway.&lt;/p&gt;

&lt;p&gt;That gap between knowing the right long-term direction and living in today's imperfect environment is where &lt;a href="https://www.gitguardian.com/honeytoken?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Honeytokens&lt;/a&gt; make sense.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why honeytokens belong on the developer workstation
&lt;/h2&gt;

&lt;p&gt;Honeytokens are not the end state. They are a compensating control that helps while cleanup is still in progress.&lt;/p&gt;

&lt;p&gt;Honeytokens do not prevent workstation compromise. They do not replace hardening, secret elimination, or better authentication. What they do is give defenders a way to detect malicious secret harvesting as it happens.&lt;/p&gt;

&lt;p&gt;A honeytoken is a decoy credential designed to generate an alert when someone tries to use it. On a developer machine, that makes it useful as a tripwire. If a poisoned dependency, a malicious plugin, or a compromised local tool begins sweeping through files and environment variables, looking for credentials to exfiltrate and replay, a well-placed honeytoken can surface that behavior before the attacker gets very far. Attackers very often validate harvested credentials automatically to filter out noise, and that validation attempt is exactly what triggers the honeytoken.&lt;/p&gt;

&lt;p&gt;That early signal changes the response window. It can limit the blast radius. It can help incident responders identify the affected host, the likely access path, and the timing of the event. It can also create an auditable record of what happened and when, which is valuable during investigation and remediation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgff3c9hqnnnl07p6r8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgff3c9hqnnnl07p6r8y.png" alt="The workflow: generate a honeytoken, deploy it to a private environment, any attacker touching it will trigger it, and you get alerts instantly" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For organizations dealing with supply chain attacks that target credentials first, that is not security theater. That is practical detection.&lt;/p&gt;

&lt;h2&gt;
  
  
  Placement matters more than enthusiasm
&lt;/h2&gt;

&lt;p&gt;Honeytokens only work if they stay believable and private. Placement is critical.&lt;/p&gt;

&lt;p&gt;A honeytoken should look like the kind of secret an attacker expects to find in the kind of place they are already searching. Campaigns like Shai Hulud show us exactly where these attacks look for secrets. The best workstation honeytokens blend into the legitimate local context and live in the files and locations that supply chain malware tends to inspect first, like any local &lt;code&gt;.env&lt;/code&gt; files or paths like &lt;code&gt;~/.config/gh/config.yml&lt;/code&gt; or &lt;code&gt;~/.aws/credentials&lt;/code&gt;. Local config files, development directories, service-related settings, and environment variable paths are all obvious candidates. So are places where convenience has historically created risky habits.&lt;/p&gt;
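&lt;p&gt;A minimal placement sketch, assuming a POSIX shell: the file path and the token value below are placeholders, and the real value would be minted by whatever honeytoken service you use, so that any attempt to use it raises an alert.&lt;/p&gt;

```shell
# Drop a decoy credential file where credential-harvesting malware
# tends to look first. DECOY_PATH and the token value are placeholders:
# substitute a token generated by your honeytoken service.
decoy_file="${DECOY_PATH:-./.env}"   # could also be ~/.aws/credentials
printf 'AWS_ACCESS_KEY_ID=%s\n' "REPLACE_WITH_GENERATED_HONEYTOKEN" >> "$decoy_file"
chmod 600 "$decoy_file"
echo "decoy placed at $decoy_file"
```

&lt;p&gt;The same pattern works for an AWS credentials file or a gh config; the only requirements are that the decoy stays private and looks plausible.&lt;/p&gt;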

&lt;p&gt;Environment variables deserve special attention here. Developers often treat them as safer than files because they feel transient. In practice, they spread. They persist in shell history, child processes, debug output, terminal multiplexers, launch configs, and tool integrations. A secret that lives in your environment is often more portable and more visible than people assume.&lt;/p&gt;
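&lt;p&gt;The spread is easy to demonstrate in any POSIX shell; the variable name below is made up for illustration:&lt;/p&gt;

```shell
# Export a fake credential the way many local setups do.
export DEMO_API_KEY="not-a-real-key"

# Every child process inherits it: a build tool, a test runner,
# or a malicious postinstall script all read the same value.
sh -c 'echo "child sees: $DEMO_API_KEY"'

# It also shows up in any full environment dump.
env | grep DEMO_API_KEY
```

&lt;p&gt;Nothing here requires elevated privileges, which is exactly why harvesting code starts with the environment.&lt;/p&gt;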

&lt;p&gt;A private honeytoken placed in those realistic paths can do its job quietly while real secrets are being removed from the system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start with what an individual developer can do
&lt;/h2&gt;

&lt;p&gt;One of the weaknesses in many workstation security conversations is that they blur the distinction between individual actions and organizational controls. That creates advice that sounds good but feels impossible to follow.&lt;/p&gt;

&lt;p&gt;An individual developer cannot rewrite the package policy for the company, deploy endpoint tooling across the fleet, or migrate the entire organization to identity-based authentication alone. But they can make meaningful changes on their own machine today.&lt;/p&gt;

&lt;p&gt;They can remove plaintext secrets from active project directories. They can stop using unapproved local storage for credentials. They can move secrets into the system keychain or approved managers. They can reduce reliance on long-lived environment variables.&lt;/p&gt;
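&lt;p&gt;As a rough starting point for that cleanup, a plain &lt;code&gt;grep&lt;/code&gt; sweep can surface the most obvious plaintext credentials in a project tree. The patterns below are illustrative, not exhaustive; a dedicated secrets scanner catches far more:&lt;/p&gt;

```shell
# Quick sweep of the current project tree for obvious plaintext
# credentials: AWS-style access key IDs and generic key=value
# assignments. Case-insensitive, recursive, common config file types.
grep -rniE \
  -e 'AKIA[0-9A-Z]{16}' \
  -e '(api[_-]?key|secret|token)[[:space:]]*[:=]' \
  --include='*.env' --include='*.yml' --include='*.json' \
  . || echo "no obvious plaintext secrets found"
```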

&lt;p&gt;Further, they can avoid random plugins, suspicious package installs, and unapproved agentic tooling. They can treat phishing, weird links, and convenience scripts with more skepticism. They can report strange package behavior instead of assuming the problem is isolated.&lt;/p&gt;

&lt;p&gt;They can also install and maintain honeytokens as a tripwire while the rest of the cleanup continues.&lt;/p&gt;

&lt;p&gt;Workstation security is partly organizational, but compromise often begins with local habits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Then demand organizational support
&lt;/h2&gt;

&lt;p&gt;Developers should not have to solve this alone, and realistically they can't.&lt;/p&gt;

&lt;p&gt;The enterprise has to do its part by making the secure path easier to follow. That means providing approved secret managers and clear local development patterns. It means publishing workstation baselines, endpoint protections, and package trust policies that actually reflect current threats. Establish cooldown periods for updates when appropriate, rather than normalizing instant adoption of whatever just dropped upstream. Sandbox or otherwise isolate new code and unfamiliar tools before they are trusted with real access.&lt;/p&gt;
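&lt;p&gt;As one concrete example of a cooldown, npm supports a &lt;code&gt;before&lt;/code&gt; config that restricts installs to package versions published before a given date. A sketch, assuming npm as the package manager; the date is illustrative and would be advanced on a schedule:&lt;/p&gt;

```ini
# .npmrc -- a simple cooldown: only resolve package versions that were
# already published before the pinned date. Move the date forward on a
# schedule (e.g. weekly) instead of tracking the bleeding edge.
before=2026-05-01
```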

&lt;p&gt;We must create and test response playbooks for when honeytokens fire, because a detection without a plan can still turn into a mess.&lt;/p&gt;

&lt;p&gt;There is also a new policy line that many teams should draw clearly: do not install agentic systems on work machines without explicit approval. That is especially important when those tools can read local context, inspect repositories, access credentials, or run broad local actions. The attack surface is not only the model. It is the &lt;a href="https://www.rescana.com/post/glassworm-supply-chain-attack-exploits-open-vsx-extensions-to-target-developer-environments?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;plugin&lt;/a&gt; ecosystem, the automation layer, the permissions, and the assumptions about trust.&lt;/p&gt;

&lt;p&gt;Some teams may even benefit from local warnings or command aliases that remind users when they are about to invoke unapproved tooling. If trying to invoke &lt;code&gt;openclaw&lt;/code&gt; instead triggers a warning, even better; if it fires a honeytoken, better still.&lt;/p&gt;
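&lt;p&gt;A sketch of that idea, assuming a POSIX shell and reusing the &lt;code&gt;openclaw&lt;/code&gt; example: a function with the tool's name shadows any same-named binary on &lt;code&gt;PATH&lt;/code&gt;, so invoking it warns instead of executing.&lt;/p&gt;

```shell
# Shadow an unapproved tool with a shell function so invoking it warns
# instead of running. In a real setup this could also touch a honeytoken
# to raise an alert; here it just warns and fails.
openclaw() {
  echo "WARNING: openclaw is not approved for use on this workstation." >&2
  echo "See the internal tooling policy before installing agentic tools." >&2
  return 1
}
```

&lt;p&gt;Dropping such functions into a managed shell profile makes the reminder ambient rather than something each developer must remember.&lt;/p&gt;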

&lt;p&gt;If the environment allows it, organizations can also &lt;a href="https://youtu.be/yandDvJr4Kc?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;add security-aware MCP and IAM tooling to local assistants&lt;/a&gt; to help with remediation workflows, policy checks, and honeytoken placement. That can make the defensive path more practical without pretending automation removes the need for judgment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq8qie7rtwlsfvtu6yu6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq8qie7rtwlsfvtu6yu6.png" alt="Slide showing the capabilities of the GitGuardian MCP server" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Secrets are the first target, not the only target
&lt;/h2&gt;

&lt;p&gt;Admittedly, secrets are not the whole story of workstation risk.&lt;/p&gt;

&lt;p&gt;Attackers may also want browser session material, package publishing rights, signed commit workflows, access to internal knowledge, SSH agent forwarding, build context, or any local state that helps them pivot. But secrets remain one of the most portable, reusable, and operationally useful prizes. That makes them the best first cleanup target and the best place to reduce attacker value quickly.&lt;/p&gt;

&lt;p&gt;If an attacker lands on a developer machine and finds nothing useful in plaintext, the possible damage narrows. The incident may still be serious, but it becomes harder to convert local execution into durable enterprise access.&lt;/p&gt;

&lt;p&gt;That is the security value of secret elimination. It does not promise perfection. It reduces attacker leverage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Treat developer machines like they matter, because they do
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://research.jfrog.com/post/ghostclaw-unmasked/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;agentic AI era has amplified workstation risk&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Developers now work in environments that combine trusted execution, local automation, sprawling dependency chains, and high-value access in one place. Attackers know that they no longer need physical access to a laptop or a dramatic break-in story. Sometimes all they need is an update, an altered package or &lt;a href="https://www.rescana.com/post/glassworm-supply-chain-attack-exploits-open-vsx-extensions-to-target-developer-environments?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;plugin&lt;/a&gt;, or a workflow that slips through trust assumptions and starts looking for credentials.&lt;/p&gt;

&lt;p&gt;Developer workstations deserve the same discipline we already apply to pipelines and production infrastructure: Eliminate plaintext secrets, move toward stronger identity-based patterns, and be careful with updates, plugins, and local automation. And while all of that longer work is underway, install &lt;a href="https://www.gitguardian.com/honeytoken?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Honeytokens&lt;/a&gt; where attackers are most likely to look.&lt;/p&gt;

&lt;p&gt;That is not the whole strategy. It is the first good move.&lt;/p&gt;

&lt;p&gt;For teams trying to reduce risk now, that is often the difference between discovering an attack after the damage is done and catching it while it is still unfolding.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>cybersecurity</category>
      <category>devsecops</category>
    </item>
  </channel>
</rss>
