<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: albe_sf</title>
    <description>The latest articles on Forem by albe_sf (@albertomontagnese).</description>
    <link>https://forem.com/albertomontagnese</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3928059%2F8788e7f6-c941-4959-b1cf-18686efc9034.jpg</url>
      <title>Forem: albe_sf</title>
      <link>https://forem.com/albertomontagnese</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/albertomontagnese"/>
    <language>en</language>
    <item>
      <title>Google's I/O 2024 announcements just reset the AI developer stack</title>
      <dc:creator>albe_sf</dc:creator>
      <pubDate>Thu, 14 May 2026 06:56:29 +0000</pubDate>
      <link>https://forem.com/albertomontagnese/googles-io-2024-announcements-just-reset-the-ai-developer-stack-51id</link>
      <guid>https://forem.com/albertomontagnese/googles-io-2024-announcements-just-reset-the-ai-developer-stack-51id</guid>
      <description>&lt;p&gt;Google's I/O 2024 developer keynote just laid out a new, more powerful, and integrated stack for building AI products. The key takeaway isn't just one model or tool, but a cohesive set of components—from a frontier model with a massive context window to a production-ready open source model and a backend framework to wire it all together. For builders, this means it's time to re-evaluate your stack.&lt;/p&gt;

&lt;h2&gt;a 2m token context window changes the game&lt;/h2&gt;

&lt;p&gt;The headline feature for many will be Gemini 1.5 Pro's context window expanding to 2 million tokens, available in private preview for developers. This isn't an incremental update. A context window of this size allows an application to reason over entire codebases, multiple large documents, or long videos in a single pass. This fundamentally changes the architecture for context-aware applications, potentially simplifying or even replacing complex retrieval-augmented generation (RAG) pipelines that shuttle context in and out of a smaller window.&lt;/p&gt;

&lt;p&gt;For high-frequency or latency-sensitive tasks where the full context isn't needed, Google also introduced Gemini 1.5 Flash, a lighter-weight variant optimized for speed and efficiency. The combination provides two distinct options for developers: a massive-context model for deep, complex reasoning and a faster model for more common, high-volume tasks.&lt;/p&gt;
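&lt;p&gt;In practice, this split invites a simple routing layer: send everyday, high-volume calls to Flash and reserve Pro for work that actually needs the huge window. A minimal sketch of the idea; the token threshold and the character-based estimator are arbitrary illustrative assumptions, not an official pattern:&lt;/p&gt;

```typescript
// Hypothetical router: pick a Gemini variant based on estimated context size.
// The 100k-token budget is an arbitrary assumption for illustration only.
type GeminiModel = 'gemini-1.5-pro-latest' | 'gemini-1.5-flash-latest';

const FLASH_CONTEXT_BUDGET = 100_000; // tokens; tune for your workload

// Rough token estimate: about 4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function pickModel(context: string): GeminiModel {
  if (estimateTokens(context) > FLASH_CONTEXT_BUDGET) {
    return 'gemini-1.5-pro-latest'; // deep reasoning over a huge context
  }
  return 'gemini-1.5-flash-latest'; // fast path for common, high-volume calls
}
```

&lt;p&gt;A real router would also weigh latency budgets and per-token cost, but the shape stays the same.&lt;/p&gt;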

&lt;h2&gt;open source gets a real contender with gemma 2&lt;/h2&gt;

&lt;p&gt;On the open-source front, the announcement of Gemma 2 is a significant development. The new family includes 2B, 9B, and 27B parameter models. The 27-billion parameter variant is particularly notable, delivering performance competitive with models more than twice its size. This makes it a compelling choice for teams that want to self-host or fine-tune a powerful model without the infrastructure overhead of much larger models.&lt;/p&gt;

&lt;p&gt;Gemma 2 introduces a new architecture designed for performance and efficiency, using Grouped Query Attention (GQA) for faster inference. For developers building specialized applications, the ability to fine-tune a capable open model like Gemma 2 on proprietary data is a critical advantage.&lt;/p&gt;

&lt;h2&gt;firebase genkit: a new backend for your ai stack&lt;/h2&gt;

&lt;p&gt;Perhaps the most practical announcement for day-to-day builders is Firebase Genkit, a new open-source framework for building AI-powered features in Node.js backends (with Go support coming soon). Genkit provides the plumbing to orchestrate multi-step AI workflows, manage prompts, call models, and integrate with services like vector databases.&lt;/p&gt;

&lt;p&gt;It's designed to be model-agnostic, with integrations for Gemini, open-source models via Ollama, and vector stores like Pinecone and Chroma. This addresses a common pain point for developers: the significant amount of boilerplate code required to build production-ready AI features. Genkit also includes a local developer UI for testing, debugging, and inspecting execution traces.&lt;/p&gt;

&lt;p&gt;Here's what a simple flow might look like in Genkit:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;configureGenkit&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;defineFlow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;genkit&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@genkit-ai/core&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;googleAI&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;genkitx-googleai&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;zod&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nf"&gt;configureGenkit&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nf"&gt;googleAI&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;logLevel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;debug&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;enableTracingAndMetrics&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;menuSuggestionFlow&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;defineFlow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;menuSuggestionFlow&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;inputSchema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;object&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;dish&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="na"&gt;outputSchema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;object&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;suggestion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;dish&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;llmResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;genkit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;gemini-1.5-pro-latest&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Suggest a creative and appealing menu description for a dish called: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;dish&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;suggestion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;llmResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;the so-what for builders&lt;/h2&gt;

&lt;p&gt;The announcements from Google I/O provide a more complete and accessible AI stack. You now have a top-tier proprietary model with a uniquely large context window, a competitive open-source model for custom deployments, and a dedicated backend framework to manage the complexity of building and deploying AI features. This combination lowers the barrier to entry for creating sophisticated, context-aware applications and provides the tooling to do it in a structured, production-ready way.&lt;/p&gt;

&lt;h2&gt;Sources&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://blog.google/" rel="noopener noreferrer"&gt;100 things we announced at I/O 2024&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://firebase.blog/posts/2024/05/introducing-firebase-genkit" rel="noopener noreferrer"&gt;Introducing Firebase Genkit&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>machinelearning</category>
      <category>devtools</category>
    </item>
    <item>
      <title>Your AI Agents Are Probably Accessing Data They Shouldn't</title>
      <dc:creator>albe_sf</dc:creator>
      <pubDate>Thu, 14 May 2026 06:44:25 +0000</pubDate>
      <link>https://forem.com/albertomontagnese/your-ai-agents-are-probably-accessing-data-they-shouldnt-463</link>
      <guid>https://forem.com/albertomontagnese/your-ai-agents-are-probably-accessing-data-they-shouldnt-463</guid>
      <description>&lt;p&gt;A new report on AI agent security confirms what many of us in the trenches have suspected: we are shipping agents with credentials and permissions that are fundamentally insecure. According to a global study, two-thirds of organizations using AI agents believe they have already accessed data beyond their intended scope. The core takeaway is that the identity and access management patterns we built for humans are failing for autonomous, millisecond-speed agents.&lt;/p&gt;

&lt;h2&gt;the detection-to-execution speed mismatch&lt;/h2&gt;

&lt;p&gt;The fundamental problem is a mismatch of timescales. The study found that it takes organizations an average of 14 hours to detect a compromised AI agent. An agent, however, operates in milliseconds. That massive gap between machine execution speed and human detection speed creates a critical window of vulnerability. A misconfigured or compromised agent can move laterally across multiple core systems using valid credentials long before a human security team even receives an alert.&lt;/p&gt;

&lt;p&gt;This isn't a hypothetical risk. The same report indicates that 61% of organizations have had to revoke or rotate AI agent credentials due to a suspected exposure. The issue isn't that agents are 'breaking in' through novel exploits; they are being given keys to the front door. The problem is one of authorized access that isn't, and cannot be, governed effectively on a human timeline.&lt;/p&gt;

&lt;h2&gt;static credentials are a ticking time bomb&lt;/h2&gt;

&lt;p&gt;The root of this vulnerability lies in our continued reliance on static, long-lived credentials. We're treating agents like we treat a monolithic application server from 2015, handing them an API key that lives for months or years and often has broad permissions. More than four out of five organizations surveyed stated that a single compromised credential could impact multiple major systems.&lt;/p&gt;

&lt;p&gt;This pattern is familiar to any of us who have shipped a system under pressure. You create a service account, generate a key, and embed it in a configuration file or environment variable. It looks something like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"production"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"database_url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"billing_api_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sk_live_a1b2c3d4e5f6..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"storage_service_account_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"{&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;type&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;service_account&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, ...}"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"staging"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"database_url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"billing_api_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sk_test_..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"storage_service_account_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a liability. That &lt;code&gt;billing_api_key&lt;/code&gt; is a persistent secret. If the agent's host environment is compromised, or if the agent itself has a flaw that allows it to leak its own environment, that key is now active in the wild until someone manually revokes it. Given the 14-hour average detection time, the potential damage is significant.&lt;/p&gt;

&lt;h2&gt;towards ephemeral, just-in-time identity&lt;/h2&gt;

&lt;p&gt;The report points towards a different model: ephemeral identity. Instead of issuing long-lived keys, agents should be granted credentials that are created just-in-time for a specific task and expire immediately afterward. This approach treats identity not as a static property but as a temporary, dynamically scoped state.&lt;/p&gt;

&lt;p&gt;Implementing this isn't trivial. It requires an infrastructure that can continuously govern agents at runtime, creating and destroying credentials on demand based on the immediate context of the agent's task. But it's the only model that closes the speed gap between machine action and human oversight. If a credential only lives for 500 milliseconds, the window for misuse shrinks dramatically.&lt;/p&gt;
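&lt;p&gt;To make the idea concrete, here is a toy sketch of a just-in-time credential broker: every credential is scoped to a single action and dies when its TTL elapses. All names here are invented for illustration; a production version would sit on top of a secrets manager or an STS-style token service, not an in-process function:&lt;/p&gt;

```typescript
// Hypothetical just-in-time credential broker for an agent task.
// Illustrative only: real systems would mint and verify tokens in a
// separate, audited service, not inside the agent process.
import { randomUUID } from 'node:crypto';

interface EphemeralCredential {
  token: string;
  scope: string;      // the one action this credential authorizes
  expiresAt: number;  // epoch milliseconds
}

function issueCredential(scope: string, ttlMs: number): EphemeralCredential {
  return {
    token: randomUUID(),
    scope,
    expiresAt: Date.now() + ttlMs,
  };
}

function isValid(cred: EphemeralCredential, scope: string): boolean {
  if (cred.scope !== scope) return false;        // scoped to a single task
  if (Date.now() > cred.expiresAt) return false; // dead after the TTL
  return true;
}
```

&lt;p&gt;With a TTL measured in hundreds of milliseconds, a leaked token is very likely already useless by the time anyone can replay it.&lt;/p&gt;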

&lt;p&gt;As builders, we are moving from shipping code to shipping agents. These agents are not just tools; they are autonomous workers integrated into our core business systems. The study's finding that companies are already spending over $1 million on average to manage AI agent security issues shows the financial cost of getting this wrong. We need to stop handing them the equivalent of a master keycard and start building systems that grant access with the precision and speed that these new workers require.&lt;/p&gt;

&lt;h2&gt;Sources&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.prnewswire.com/" rel="noopener noreferrer"&gt;The 2026 State of AI Agent Identity Security&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>devtools</category>
      <category>security</category>
    </item>
    <item>
      <title>GitHub's New Certification Is a Spec For the Modern AI Engineer</title>
      <dc:creator>albe_sf</dc:creator>
      <pubDate>Thu, 14 May 2026 06:15:19 +0000</pubDate>
      <link>https://forem.com/albertomontagnese/githubs-new-certification-is-a-spec-for-the-modern-ai-engineer-25ib</link>
      <guid>https://forem.com/albertomontagnese/githubs-new-certification-is-a-spec-for-the-modern-ai-engineer-25ib</guid>
      <description>&lt;p&gt;GitHub just quietly released a new role-based certification, and it's one of the highest-signal documents I've seen for where our jobs are headed. The 'GitHub Certified: Agentic AI Developer' exam is a spec sheet for the skills required to build and ship AI agents in production. It confirms the shift we've all felt: moving from prompt-level hacking to designing, supervising, and operating complex, stateful systems.&lt;/p&gt;

&lt;h2&gt;from prompt engineering to system integration&lt;/h2&gt;

&lt;p&gt;The skills listed for the new GH-600 exam are not about crafting the perfect prompt. They are about system-level concerns. The exam covers how to "configure tools, permissions, and environments for agents." This is the language of infrastructure and operations, not just conversational design. It signals that the core work is no longer just coaxing a model to produce a good output, but integrating it safely and reliably into a larger software development lifecycle.&lt;/p&gt;

&lt;p&gt;Building a real agent requires you to think about its environment. What tools can it call? What are its permissions? Can it write to the file system? Does it have network access? These aren't model problems; they are application security and architecture problems. The certification's focus here tells you that building a secure, contained environment for your agent is now a baseline competency.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example: Running an agent in a constrained environment&lt;/span&gt;
&lt;span class="c"&gt;# This isn't from the certification, but illustrates the principle.&lt;/span&gt;

podman run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--security-opt&lt;/span&gt; no-new-privileges &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cap-drop&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ALL &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--network&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;none &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; ./agent-workspace:/app/workspace:Z &lt;span class="se"&gt;\&lt;/span&gt;
  my-agent-image:latest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Analyze the data in /app/workspace/input.csv and write a report to /app/workspace/output.md"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Treating agents as tier-one applications that require consistent, governed environments is the new standard. This is a world away from tweaking a prompt in a playground.&lt;/p&gt;

&lt;h2&gt;managing state and long-running execution&lt;/h2&gt;

&lt;p&gt;Another key domain in the certification is the ability to "manage memory, state, and long-running execution." This is the single biggest differentiator between a simple AI-powered feature and a true agent. Agents are not stateless functions. They have goals, they have memory of past actions, and they operate over time. This introduces a host of engineering challenges that are familiar to anyone who has built distributed systems.&lt;/p&gt;

&lt;p&gt;How does your agent persist its state? If the process dies, can it resume its work? How do you handle memory growth in a process that might run for hours or days? These are the questions that separate toy projects from production systems. The fact that GitHub is testing for this shows that the industry expects developers to have answers. You are no longer just a model user; you are the operator of a persistent, autonomous process.&lt;/p&gt;
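&lt;p&gt;The simplest answer to the persistence question is checkpointing: serialize the agent's state after every step so a restarted process resumes where it left off instead of starting over. A toy sketch; the state shape and the file-based storage are illustrative assumptions, not part of the certification material:&lt;/p&gt;

```typescript
// Hypothetical checkpointing for a long-running agent: persist state after
// every step so a restarted process can resume instead of starting over.
import { writeFileSync, readFileSync, existsSync } from 'node:fs';

interface AgentState {
  goal: string;
  completedSteps: string[];
  scratchpad: string;
}

function checkpoint(path: string, state: AgentState): void {
  // Good enough for a sketch; production code would use a
  // temp-file-and-rename dance or a database transaction.
  writeFileSync(path, JSON.stringify(state));
}

function resumeOrStart(path: string, goal: string): AgentState {
  if (existsSync(path)) {
    // A prior run died or was stopped: pick up its state.
    return JSON.parse(readFileSync(path, 'utf8')) as AgentState;
  }
  return { goal, completedSteps: [], scratchpad: '' };
}
```

&lt;p&gt;The same shape works with a database or object store as the backend; the important property is that every step is durable before the next one begins.&lt;/p&gt;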

&lt;h2&gt;evaluation, orchestration, and human oversight&lt;/h2&gt;

&lt;p&gt;The final piece of the puzzle is about reliability and control. The certification requires developers to know how to "evaluate and improve agent performance," "coordinate multi-agent workflows," and "implement guardrails and human-in-the-loop systems."&lt;/p&gt;

&lt;p&gt;This is the senior-level skillset. Evaluating an agent isn't about running a benchmark once. It's about continuous monitoring and creating feedback loops for improvement. Coordinating multi-agent systems is an architecture problem, requiring you to break down complex tasks and manage communication between specialized agents. And most critically, implementing guardrails and HITL systems is an admission that these systems are not perfectly reliable. The most important skill is knowing how to design for failure and ensure a human can intervene when the agent gets lost or goes off the rails.&lt;/p&gt;
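&lt;p&gt;A human-in-the-loop guardrail can start as something very small: a gate that lets low-risk tool calls through and parks anything risky for approval. A toy sketch, with an invented tool list standing in for your real risk policy:&lt;/p&gt;

```typescript
// Hypothetical human-in-the-loop gate: low-risk tool calls run
// automatically; anything on the risky list waits for a human reviewer.
interface ToolCall {
  tool: string;
  argsJson: string; // serialized arguments, shown to the reviewer
}

type Decision = 'auto-approve' | 'needs-human-approval';

// Illustrative policy: in practice this would come from configuration.
const RISKY_TOOLS = ['delete_file', 'send_email', 'execute_shell'];

function gate(call: ToolCall): Decision {
  if (RISKY_TOOLS.includes(call.tool)) {
    return 'needs-human-approval'; // park it in a review queue
  }
  return 'auto-approve';
}
```

&lt;p&gt;The interesting engineering lives around this gate: the review queue, timeouts, and what the agent does while it waits. But the principle that irreversible actions require a human decision is the baseline.&lt;/p&gt;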

&lt;p&gt;The takeaway here is clear. The era of casual experimentation is over. The skills being codified by this certification are about building robust, observable, and controllable AI systems. It's a significant shift in what it means to be a developer in the agentic era. This exam isn't just a way to get a new badge; it's a study guide for staying relevant.&lt;/p&gt;

&lt;h2&gt;Sources&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://techcommunity.microsoft.com/t5/github-community-blog/new-github-certified-agentic-ai-developer/ba-p/4134423" rel="noopener noreferrer"&gt;New GitHub Certified: Agentic AI Developer - Microsoft Community Hub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>devtools</category>
      <category>career</category>
    </item>
    <item>
      <title>Anthropic's 'Dangerous' AI and the Hard Reality of Auditing Code</title>
      <dc:creator>albe_sf</dc:creator>
      <pubDate>Wed, 13 May 2026 19:06:59 +0000</pubDate>
      <link>https://forem.com/albertomontagnese/anthropics-dangerous-ai-and-the-hard-reality-of-auditing-code-2j56</link>
      <guid>https://forem.com/albertomontagnese/anthropics-dangerous-ai-and-the-hard-reality-of-auditing-code-2j56</guid>
      <description>&lt;p&gt;Anthropic's latest model, Claude Mythos, was internally deemed too 'dangerously good' at finding security vulnerabilities for a public release. But when tested against the battle-hardened &lt;code&gt;curl&lt;/code&gt; codebase, it exposed the gap between marketing hype and engineering reality, providing a critical lesson for anyone building with AI security tools. The takeaway is not that these models are useless, but that their output is a signal that still requires rigorous human verification.&lt;/p&gt;

&lt;h2&gt;what is claude mythos&lt;/h2&gt;

&lt;p&gt;Anthropic announced that an internal AI model, Claude Mythos, demonstrated a powerful, emergent capability for discovering and exploiting software vulnerabilities. The capabilities were reportedly so advanced that the company restricted access, providing it only to a select group of organizations to allow them to patch critical flaws before a potential wider release. The model allegedly found thousands of high-severity vulnerabilities across major operating systems and browsers. This raised an immediate question for builders: are we on the verge of fully automated security auditing, or is this another case of over-indexing on a model's potential?&lt;/p&gt;

&lt;h2&gt;the curl test case&lt;/h2&gt;

&lt;p&gt;The answer came from a real-world test. Daniel Stenberg, creator of &lt;code&gt;curl&lt;/code&gt;, was granted indirect access to a Mythos analysis of his project's 176,000 lines of C code. The model returned five 'confirmed security vulnerabilities'.&lt;/p&gt;

&lt;p&gt;The result after human review was less dramatic. Of the five findings, four were false positives. One was a legitimate, low-severity bug. This outcome on a mature, heavily scrutinized project like &lt;code&gt;curl&lt;/code&gt; is telling. It suggests that while AI can parse massive codebases and identify potential issues at scale, its signal-to-noise ratio is a critical variable. An AI's declaration of a 'confirmed' vulnerability is not the end of an investigation; it is the start.&lt;/p&gt;

&lt;h2&gt;ai output is a signal, not a verdict&lt;/h2&gt;

&lt;p&gt;For engineers integrating AI into security pipelines, this is the core lesson. These models are powerful pattern-matchers, but they lack the true context and world model of a seasoned security researcher. They will flag code that looks like a known vulnerability pattern, even when idiomatic usage or surrounding logic renders it harmless. A report from a model like Mythos is not a finished list of CVEs. It's a prioritized list of areas for human experts to investigate.&lt;/p&gt;

&lt;p&gt;Your internal tooling and workflow must reflect this. When an AI flags a potential issue, the process should treat it as an assertion to be validated, not a fact to be remediated. Imagine an automated report from a similar tool:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"vulnerability_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AI-GEN-004-RCE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"file_path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/src/app/utils/parser.c"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"line_number"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;242&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Critical"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"cwe"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CWE-120: Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"confidence"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"High"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The function `parse_user_input` uses `strcpy` to copy a user-provided buffer `input_buffer` to a fixed-size local variable `dest_buffer`. This is a potential buffer overflow vulnerability if the source buffer exceeds the destination size."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"recommendation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Replace `strcpy` with `strncpy` or `snprintf` to prevent buffer overflows by specifying the maximum number of bytes to copy."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This looks plausible. But without a human checking if &lt;code&gt;input_buffer&lt;/code&gt; is sanitized or length-checked upstream, acting on this report alone is premature. The value is not in the AI's conclusion, but in its ability to direct limited human attention to line 242.&lt;/p&gt;
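&lt;p&gt;One way to enforce that discipline is to make the pipeline unable to express 'remediate now': every AI finding, whatever its stated confidence, enters triage as an unverified claim. A toy sketch, with invented field names loosely mirroring such a report:&lt;/p&gt;

```typescript
// Hypothetical triage step: an AI 'confirmed' finding enters the pipeline
// as an unverified claim for a human to investigate, never as an
// actionable vulnerability.
interface AiFinding {
  vulnerabilityId: string;
  severity: string;
  confidence: string;
}

interface TriageTicket {
  findingId: string;
  status: 'needs-human-review'; // the only status the AI can produce
  priority: number;             // lower number means look at it sooner
}

function triage(finding: AiFinding): TriageTicket {
  // AI severity only sets the investigation order, not the remediation.
  const priority = finding.severity === 'Critical' ? 0 : 1;
  return {
    findingId: finding.vulnerabilityId,
    status: 'needs-human-review',
    priority,
  };
}
```

&lt;p&gt;Only a human reviewer, after checking the surrounding code, can promote a ticket past this state.&lt;/p&gt;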

&lt;h2&gt;what this means for builders&lt;/h2&gt;

&lt;p&gt;The Mythos-on-&lt;code&gt;curl&lt;/code&gt; episode is a necessary recalibration. AI will undoubtedly change security auditing, but it will not eliminate the need for human expertise. It transforms the task from finding a needle in a haystack to sorting a pile of needles and pins. For builders, the mandate is clear: build systems that leverage AI for signal generation, but design workflows that depend on human experts for verification. Do not ship a system that blindly trusts an AI's security assessment. The real danger isn't a rogue AI hacker, but an engineering team that outsources its judgment to one.&lt;/p&gt;

&lt;h2&gt;Sources&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.anthropic.com/" rel="noopener noreferrer"&gt;Anthropic&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>agents</category>
      <category>programming</category>
    </item>
    <item>
      <title>Anthropic on AWS is Not What You Think</title>
      <dc:creator>albe_sf</dc:creator>
      <pubDate>Wed, 13 May 2026 19:01:56 +0000</pubDate>
      <link>https://forem.com/albertomontagnese/anthropic-on-aws-is-not-what-you-think-8cm</link>
      <guid>https://forem.com/albertomontagnese/anthropic-on-aws-is-not-what-you-think-8cm</guid>
      <description>&lt;p&gt;Anthropic's release of the Claude Platform on AWS is the most significant infrastructure shift for builders since model-specific SDKs. It’s not another managed model offering via Bedrock; it’s Anthropic’s full, cutting-edge API stack deployed on AWS infrastructure, accessible through native AWS endpoints. This solves the primary enterprise adoption hurdles—security, billing, and procurement—at the source, making Claude a legitimate alternative to Azure OpenAI for serious AWS shops.&lt;/p&gt;

&lt;h2&gt;
  
  
  what actually changed
&lt;/h2&gt;

&lt;p&gt;On May 11, Anthropic announced the Claude Platform on AWS. Unlike the existing Amazon Bedrock integration, which offers specific Claude models as part of a multi-vendor catalog, this is a dedicated, Anthropic-managed environment running on AWS hardware. For builders, this means you get the best of both worlds: direct access to Anthropic's complete, up-to-the-minute feature set—including the full Messages API, the Files API, Managed Agents, and tool use—while operating within your existing AWS environment.&lt;/p&gt;

&lt;p&gt;The key differences are in the plumbing. You interact with it via native AWS endpoints. Authentication is handled by AWS IAM, not by a separate Anthropic API key you have to manage and rotate. Most importantly, billing is consolidated directly into your AWS account. This isn't a minor convenience; it's a fundamental change that removes massive organizational friction.&lt;/p&gt;

&lt;h2&gt;
  
  
  the enterprise integration tax
&lt;/h2&gt;

&lt;p&gt;For any large organization, adopting a new AI vendor is a procurement and security nightmare. It requires new contracts, new security reviews for data handling, and a separate billing pipeline that finance has to approve. While Bedrock partially solved this by putting various models under a single AWS bill, it often lags behind the native provider's API in terms of features and model availability. You get the convenience, but you sacrifice access to the latest capabilities.&lt;/p&gt;

&lt;p&gt;The new platform collapses this trade-off. A team can now use their existing AWS enterprise agreement, leverage pre-approved IAM roles and policies for access control, and have all of their Claude usage appear as a line item on their monthly AWS bill. The CISO is happy because access is governed by the same robust IAM system used for everything else. The finance department is happy because there isn't a new vendor to onboard. And you, the builder, are happy because you get direct access to the latest from Anthropic without fighting a six-month procurement battle.&lt;/p&gt;
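&lt;p&gt;To make the access-control point concrete, here is a hedged sketch of what such a pre-approved policy could look like. The &lt;code&gt;anthropic:InvokeModel&lt;/code&gt; action and the ARN format are assumptions for illustration; the real identifiers would come from the platform's documentation.&lt;/p&gt;

```python
import json

# Hypothetical IAM policy scoping a team role to a single Claude model.
# ASSUMPTION: the "anthropic:InvokeModel" action and the ARN format below
# are illustrative; consult the platform docs for the actual names.
claude_invoke_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowClaudeInvoke",
            "Effect": "Allow",
            "Action": ["anthropic:InvokeModel"],
            "Resource": "arn:aws:anthropic:us-east-1:123456789012:model/claude-opus-4-7",
        }
    ],
}

print(json.dumps(claude_invoke_policy, indent=2))
```

&lt;p&gt;Attach something like this to an existing role, and the security review shrinks to a diff on a document the CISO's team already knows how to read.&lt;/p&gt;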

&lt;p&gt;Here’s what invoking a model on this new platform might look like. Note that you're using an AWS SDK like &lt;code&gt;boto3&lt;/code&gt; to call an Anthropic-specific service endpoint, not the generic Bedrock one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;

&lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Session&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;profile_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;my-aws-profile&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;region_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;us-east-1&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Note the service name is 'anthropic', not 'bedrock-runtime'
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;anthropic&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;modelId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;claude-opus-4-7&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;contentType&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;accept&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;anthropic_version&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bedrock-2023-05-31&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;max_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2048&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;messages&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Explain the difference between Anthropic on AWS and Claude on Bedrock.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response_body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response_body&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This looks familiar, but the &lt;code&gt;service_name&lt;/code&gt; and &lt;code&gt;modelId&lt;/code&gt; strings are doing all the work, routing your request through AWS's front door to Anthropic's dedicated infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  the so-what for builders
&lt;/h2&gt;

&lt;p&gt;This move signals a new phase in the AI platform wars. It’s no longer just about having the best model; it’s about having the most seamless enterprise deployment story. By embedding its native platform inside AWS, Anthropic is meeting enterprise clients where they are, offering a path of least resistance to adopt its latest technology. It’s a direct challenge to the tight integration of OpenAI models within the Azure ecosystem.&lt;/p&gt;

&lt;p&gt;For engineers and technical leads inside companies heavily invested in AWS, the decision of which frontier model to use just got a lot more interesting. The excuse that "it's not integrated with our cloud" is gone. The friction is gone. Now, the choice between Claude and its competitors can be based purely on capability, performance, and cost—as it should be.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.anthropic.com/claude/reference/changelog" rel="noopener noreferrer"&gt;Claude API Docs - Changelog&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>claude</category>
      <category>devops</category>
    </item>
    <item>
      <title>Hello world</title>
      <dc:creator>albe_sf</dc:creator>
      <pubDate>Tue, 12 May 2026 22:27:45 +0000</pubDate>
      <link>https://forem.com/albertomontagnese/hello-world-mhd</link>
      <guid>https://forem.com/albertomontagnese/hello-world-mhd</guid>
      <description></description>
    </item>
  </channel>
</rss>
