<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Google AI</title>
    <description>The latest articles on Forem by Google AI (@googleai).</description>
    <link>https://forem.com/googleai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F11026%2F386b14d3-cc9a-4270-aba0-3e41cdfb9d85.jpg</url>
      <title>Forem: Google AI</title>
      <link>https://forem.com/googleai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/googleai"/>
    <language>en</language>
    <item>
      <title>Agent Factory Recap: Supercharging Agents on GKE with Agent Sandbox and Pod Snapshots</title>
      <dc:creator>Shir Meir Lador</dc:creator>
      <pubDate>Tue, 07 Apr 2026 13:04:00 +0000</pubDate>
      <link>https://forem.com/googleai/agent-factory-recap-supercharging-agents-on-gke-with-agent-sandbox-and-pod-snapshots-3a5e</link>
      <guid>https://forem.com/googleai/agent-factory-recap-supercharging-agents-on-gke-with-agent-sandbox-and-pod-snapshots-3a5e</guid>
      <description>&lt;p&gt;In the latest episode of the &lt;a href="https://www.youtube.com/playlist?list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs" rel="noopener noreferrer"&gt;Agent Factory&lt;/a&gt;, Mofi Rahman and I had the pleasure of hosting, Brandon Royal, the PM working on agentic workloads on GKE. We dove deep into the critical questions around the nuances of choosing the right agent runtime, the power of GKE for agents, and the essential security measures needed for intelligent agents to run code.&lt;/p&gt;

&lt;p&gt;This post guides you through the key ideas from our conversation. Use it to quickly recap topics or dive deeper into specific segments with links and timestamps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why GKE for Agents?
&lt;/h2&gt;

&lt;p&gt;Timestamp: &lt;a href="https://www.youtube.com/watch?v=5_R_Ixk8ENQ&amp;amp;list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs&amp;amp;index=1&amp;amp;t=109s" rel="noopener noreferrer"&gt;01:49&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;We kicked off our discussion by tackling a fundamental question: why choose GKE as your agent runtime when serverless options like Cloud Run or fully managed solutions like Agent Engine exist?&lt;/p&gt;

&lt;p&gt;Brandon explained that the decision often boils down to control versus convenience. While serverless options are perfectly adequate for basic agents, the flexibility and governance capabilities of Kubernetes and GKE become indispensable in high-scale scenarios involving hundreds or thousands of agents. GKE truly shines when you need granular control over your agent deployments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl08gkxy41hseuy3fljpu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl08gkxy41hseuy3fljpu.png" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ADK on GKE
&lt;/h2&gt;

&lt;p&gt;Timestamp: &lt;a href="https://www.youtube.com/watch?v=5_R_Ixk8ENQ&amp;amp;list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs&amp;amp;index=1&amp;amp;t=418s" rel="noopener noreferrer"&gt;06:58&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We've discussed the &lt;a href="https://www.youtube.com/watch?v=aLYrV61rJG4&amp;amp;list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs&amp;amp;index=17" rel="noopener noreferrer"&gt;Agent Development Kit (ADK)&lt;/a&gt; in previous episodes, and Mofi showed us how seamlessly it integrates with GKE, even demoing an agent he built. ADK provides the framework for building the agent's logic, traces, and tools, while GKE provides the robust hosting environment. You can containerize your ADK agent, push it to Google Artifact Registry, and deploy it to GKE in minutes, transforming a local prototype into a globally accessible service.&lt;/p&gt;
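&lt;p&gt;As a rough sketch of that workflow (the names, project ID, and image path below are placeholders, not from the episode), the deploy step is an ordinary Kubernetes manifest applied with &lt;code&gt;kubectl apply&lt;/code&gt;:&lt;/p&gt;

```yaml
# Hypothetical manifest for a containerized ADK agent.
# Swap in your own project, repository, and image tag from Artifact Registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adk-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: adk-agent
  template:
    metadata:
      labels:
        app: adk-agent
    spec:
      containers:
      - name: adk-agent
        image: us-docker.pkg.dev/PROJECT_ID/agents/adk-agent:latest
        ports:
        - containerPort: 8080  # port the agent's web server listens on
```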

&lt;h2&gt;
  
  
  The Sandbox problem
&lt;/h2&gt;

&lt;p&gt;Timestamp: &lt;a href="https://www.youtube.com/watch?v=5_R_Ixk8ENQ&amp;amp;list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs&amp;amp;index=1&amp;amp;t=920s" rel="noopener noreferrer"&gt;15:20&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As agents become more sophisticated and capable of writing and executing code, a critical security concern emerges: the risk of untrusted, LLM-generated code. Brandon emphasized that while code execution is vital for high-performance agents and deterministic behavior, it also introduces significant risks in multi-tenant systems. This led us to the concept of a "sandbox."&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Sandbox?
&lt;/h2&gt;

&lt;p&gt;Timestamp: &lt;a href="https://www.youtube.com/watch?v=5_R_Ixk8ENQ&amp;amp;list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs&amp;amp;index=1&amp;amp;t=1158s" rel="noopener noreferrer"&gt;19:18&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For those less familiar with security engineering, Brandon clarified that a sandbox provides kernel and network isolation. Mofi further elaborated, explaining that agents often need to execute scripts (e.g., Python for data analysis). Without a sandbox, a hallucinating or prompt-injected model could potentially delete databases or steal secrets if allowed to run code directly on the main server. A sandbox creates a safe, isolated environment where such code can run without harming other systems.&lt;/p&gt;
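&lt;p&gt;On GKE, this kind of kernel-level isolation is available through GKE Sandbox, which runs pods under gVisor. As a minimal sketch (the image and names are illustrative), opting a pod into the sandboxed runtime is a one-line change to the pod spec:&lt;/p&gt;

```yaml
# Illustrative pod spec: runtimeClassName: gvisor asks GKE to run this pod
# under gVisor's application kernel instead of directly on the host kernel.
apiVersion: v1
kind: Pod
metadata:
  name: code-executor
spec:
  runtimeClassName: gvisor   # requires a node pool with GKE Sandbox enabled
  containers:
  - name: executor
    image: python:3.12-slim  # placeholder image for running generated scripts
```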

&lt;h2&gt;
  
  
  Agent Sandbox on GKE Demo
&lt;/h2&gt;

&lt;p&gt;Timestamp: &lt;a href="https://www.youtube.com/watch?v=5_R_Ixk8ENQ&amp;amp;list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs&amp;amp;index=1&amp;amp;t=1225s" rel="noopener noreferrer"&gt;20:25&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, how do we build this "high fence" on Kubernetes? Brandon introduced the Agent Sandbox on Kubernetes, which leverages technologies like gVisor, an application kernel sandbox. When an agent needs to execute code, GKE dynamically provisions a completely isolated pod. This pod operates with its own kernel, network, and file system, effectively trapping any malicious code within the gVisor bubble. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexw6cndzjl0w1ybb8mz1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexw6cndzjl0w1ybb8mz1.png" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Mofi walked us through a compelling demo of the Agent Sandbox in action. We observed an ADK agent being given a task requiring code execution. As the agent initiated code execution, GKE dynamically provisioned a new pod, visibly labeled as "sandbox-executor," demonstrating the real-time isolation. Brandon highlighted that this pod is configured with strict network policies, further enhancing security.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feauxfwh9kazbqc32u7kz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feauxfwh9kazbqc32u7kz.png" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future: Pod Snapshots
&lt;/h2&gt;

&lt;p&gt;Timestamp: &lt;a href="https://www.youtube.com/watch?v=5_R_Ixk8ENQ&amp;amp;list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs&amp;amp;index=1&amp;amp;t=1779s" rel="noopener noreferrer"&gt;29:39&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While the Agent Sandbox offers incredible security, the latency of spinning up a new pod for every task is a concern. Mofi demoed the game-changing solution: Pod Snapshots. This technology lets you save the state of a running sandbox and then near-instantly restore it when an agent needs it. Brandon noted that this reduces startup times from minutes to seconds, revolutionizing real-time agentic workflows on GKE.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cfc4k9zczexdby59o0z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cfc4k9zczexdby59o0z.png" width="800" height="743"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It's incredible to see how GKE isn't just hosting agents; it's actively protecting them and making them faster. &lt;/p&gt;

&lt;h2&gt;
  
  
  Your turn to build
&lt;/h2&gt;

&lt;p&gt;Ready to put these concepts into practice? Dive into the full episode to see the demos in action and explore how GKE can supercharge your agentic workloads.&lt;/p&gt;

&lt;p&gt;Learn how to &lt;a href="https://docs.cloud.google.com/kubernetes-engine/docs/tutorials/agentic-adk-vertex?utm_campaign=CDR_0x036db2a4_default&amp;amp;utm_medium=external&amp;amp;utm_source=youtube" rel="noopener noreferrer"&gt;deploy an ADK agent to Google Kubernetes Engine&lt;/a&gt; and how to get your agent to run code safely using the &lt;a href="http://docs.cloud.google.com/kubernetes-engine/docs/how-to/agent-sandbox" rel="noopener noreferrer"&gt;GKE Agent Sandbox&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect with us
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Shir Meir Lador → &lt;a href="https://www.linkedin.com/in/shirmeirlador/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/shirmeir86?lang=en" rel="noopener noreferrer"&gt;X&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mofi Rahman → &lt;a href="https://www.linkedin.com/in/moficodes" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Brandon Royal → &lt;a href="https://www.linkedin.com/in/brandonroyal/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Hacking with multimodal Gemma 4 in AI Studio</title>
      <dc:creator>Paige Bailey</dc:creator>
      <pubDate>Sat, 04 Apr 2026 03:30:29 +0000</pubDate>
      <link>https://forem.com/googleai/hacking-with-multimodal-gemma-4-in-ai-studio-3had</link>
      <guid>https://forem.com/googleai/hacking-with-multimodal-gemma-4-in-ai-studio-3had</guid>
      <description>&lt;p&gt;We’re in an incredibly fun era for building. The friction between "I have a weird idea" and "I have a working prototype" is basically zero, especially with the release of &lt;strong&gt;&lt;a href="https://ai.google.dev/gemma/docs/core/model_card_4" rel="noopener noreferrer"&gt;Gemma 4&lt;/a&gt;&lt;/strong&gt;, which is now available via the Gemini API and Google AI Studio. &lt;/p&gt;

&lt;p&gt;Whether you want to deeply inspect model reasoning or you're just trying to build a pipeline to auto-caption an archive of historical web comics and obscure wiki trivia, you can now hit open-weights models directly from your code without needing to provision a massive GPU rig first. &lt;/p&gt;

&lt;p&gt;Here’s a look at the architecture, how to use it, and how to go from the UI to production code in one click.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Models: Apache 2.0, MoE, and 256k Context
&lt;/h3&gt;

&lt;p&gt;Before we look at the API, the biggest detail about &lt;a href="https://ai.google.dev/gemma/docs/core" rel="noopener noreferrer"&gt;Gemma 4&lt;/a&gt; is the license: it's released under &lt;strong&gt;Apache 2.0&lt;/strong&gt;. This means total developer flexibility and commercial permissiveness. You can prototype with the Gemini API, and eventually run it anywhere from a local rig to your own cloud infrastructure. &lt;/p&gt;

&lt;p&gt;The benchmarks are also genuinely impressive. The 31B model is currently sitting at #3 on the Arena AI text leaderboard, out-competing models massively larger than it. &lt;/p&gt;

&lt;p&gt;When you drop into &lt;a href="https://ai.dev" rel="noopener noreferrer"&gt;Google AI Studio&lt;/a&gt;, you'll see two primary models in the picker:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Gemma 4 31B IT:&lt;/strong&gt; The flagship dense model. It has a massive 256K context window — perfect for dumping in entire codebases, massive log files, or huge JSON datasets. &lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Gemma 4 26B A4B IT:&lt;/strong&gt; A Mixture-of-Experts (MoE) architecture. It's highly efficient, only activating roughly 4 billion parameters per inference. High throughput, lower cost.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;(Note: There are also E2B and E4B "Edge" models meant for local on-device deployment that feature native audio input, but we're focusing on the AI Studio API today. I recommend that you go download and test the smaller models locally, though!)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbdajipmhlqk4r7hugcq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbdajipmhlqk4r7hugcq.png" alt=" " width="800" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Multimodal Inputs + Chain of Thought
&lt;/h3&gt;

&lt;p&gt;Text is great, but Gemma 4 is natively multimodal. Let's say you want to build a pipeline to reverse-engineer prompts from a folder of distinct images. &lt;/p&gt;

&lt;p&gt;In AI Studio, you can drop images directly into the playground alongside your prompt. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Prompt:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Generate descriptions of each of these images, and a prompt that I could give to an image generation model to replicate each one."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvo6oercxm0kkl9pu6mw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvo6oercxm0kkl9pu6mw.png" alt=" " width="800" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because the Gemma models support advanced reasoning, after you click &lt;code&gt;Run&lt;/code&gt;, you can click the &lt;strong&gt;Thoughts&lt;/strong&gt; toggle to literally step through the model's chain-of-thought process &lt;em&gt;before&lt;/em&gt; it generates its final output. &lt;/p&gt;

&lt;p&gt;If you love understanding the "why" behind model logic, or you're trying to debug why an agent went off the rails, this level of transparency is incredibly useful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpai468mw2i2n9eofy34c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpai468mw2i2n9eofy34c.png" alt=" " width="800" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Shipping the code
&lt;/h3&gt;

&lt;p&gt;The bridge between "playing around in a UI" and "writing a script" should be exactly one click. Once you have your prompt, your images, and your reasoning configuration dialed in perfectly, click the &lt;strong&gt;Get Code&lt;/strong&gt; button in the top right corner.&lt;/p&gt;

&lt;p&gt;You can grab the exact payload required for &lt;code&gt;TypeScript&lt;/code&gt;, &lt;code&gt;Python&lt;/code&gt;, &lt;code&gt;Go&lt;/code&gt;, or standard &lt;code&gt;cURL&lt;/code&gt;. Best of all, if you toggle "Include prompt/history", it automatically handles the base64 encoding of your images and explicitly sets the &lt;code&gt;thinkingConfig&lt;/code&gt; parameters in the code for you.&lt;/p&gt;
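&lt;p&gt;If you'd rather assemble that payload by hand, the shape is straightforward: each image becomes an &lt;code&gt;inlineData&lt;/code&gt; part with a MIME type and base64 data, followed by a text part. Here's a small sketch (the &lt;code&gt;buildImageRequest&lt;/code&gt; helper is mine, not SDK output):&lt;/p&gt;

```typescript
// Sketch of the request parts "Get Code" builds for an image + text prompt.
// buildImageRequest is a hypothetical helper; the part shape (inlineData
// with mimeType and base64 data) follows the Gemini API request format.
interface Part {
  text?: string;
  inlineData?: { mimeType: string; data: string };
}

function buildImageRequest(imageBytes: Uint8Array, mimeType: string, prompt: string): Part[] {
  // Base64-encode the raw image bytes, as AI Studio does for you.
  const data = Buffer.from(imageBytes).toString("base64");
  return [{ inlineData: { mimeType, data } }, { text: prompt }];
}

// Example with a tiny fake "image" just to show the encoding:
const parts = buildImageRequest(new Uint8Array([1, 2, 3]), "image/png", "Describe this image.");
console.log(parts[0].inlineData?.data); // "AQID"
```

&lt;p&gt;The resulting array can be passed as &lt;code&gt;contents&lt;/code&gt; to &lt;code&gt;generateContent&lt;/code&gt; alongside your &lt;code&gt;thinkingConfig&lt;/code&gt;.&lt;/p&gt;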

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjmpnx5b33fatifq0z1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjmpnx5b33fatifq0z1e.png" alt=" " width="800" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's what the TypeScript output looks like when you want to use Gemma 4's reasoning capabilities via the SDK:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;GoogleGenAI&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@google/genai&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Initialize the client&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ai&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;GoogleGenAI&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GEMINI_API_KEY&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Configure Gemma 4 reasoning logic&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;thinkingConfig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;thinkingLevel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;HIGH&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generateContent&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;gemma-4-31b-it&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Tell me a fascinating, obscure story from internet history.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Go build open-source things!
&lt;/h3&gt;

&lt;p&gt;Having Apache 2.0 open-weights models accessible via a fast API completely changes the calculus for weekend projects. Whether you're building a script to summarize deeply technical whitepapers, analyze visual data natively, or wire up autonomous multi-step code generation agents—the friction is basically gone.&lt;/p&gt;

&lt;p&gt;I can't wait to see what you build! Let me know in the comments what rabbit hole you're pointing Gemma at first. Happy hacking this weekend. :)&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Agent Factory Recap: Reinforcement Learning and Fine-Tuning on TPUs</title>
      <dc:creator>Shir Meir Lador</dc:creator>
      <pubDate>Tue, 31 Mar 2026 18:56:42 +0000</pubDate>
      <link>https://forem.com/googleai/agent-factory-recap-reinforcement-learning-and-fine-tuning-on-tpus-1o6j</link>
      <guid>https://forem.com/googleai/agent-factory-recap-reinforcement-learning-and-fine-tuning-on-tpus-1o6j</guid>
      <description>&lt;p&gt;In our agent factory holiday special, Don McCasland and I were joined by Kyle Meggs, Senior Product Manager on the TPU Training Team at Google, to dive deep into the world of model fine tuning. We focused specifically on reinforcement learning (RL), and how Google's own infrastructure of TPUs are designed to power these massive workloads at scale.&lt;/p&gt;

&lt;p&gt;This post guides you through the key ideas from our conversation. Use it to quickly recap topics or dive deeper into specific segments with links and timestamps.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Consider Fine-Tuning
&lt;/h2&gt;

&lt;p&gt;Timestamp: &lt;a href="https://www.youtube.com/watch?v=qBOvM7SiDa4&amp;amp;list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs&amp;amp;index=2&amp;amp;t=193s" rel="noopener noreferrer"&gt;3:13&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We started with a fundamental question: with foundational models like Gemini becoming so powerful out of the box, and prompt-based customization often good enough, when should you consider fine-tuning? &lt;/p&gt;

&lt;p&gt;Fine-tuning your own model is relevant when you need high specialization for unique datasets where a generalist model might not excel (such as in the medical domain), or when you have strict privacy restrictions that require hosting your own models trained on your data.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Model Lifecycle: Pre-training and Post-training (SFT and RL)
&lt;/h2&gt;

&lt;p&gt;Timestamp: &lt;a href="https://www.youtube.com/watch?v=qBOvM7SiDa4&amp;amp;list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs&amp;amp;index=1&amp;amp;t=232s" rel="noopener noreferrer"&gt;3:52&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Kyle used a great analogy inspired by Andrej Karpathy to break down the stages of training. He described pre-training as "knowledge acquisition," similar to reading a chemistry textbook to learn how things work. Post-training is further split into Supervised Fine-Tuning (SFT), which is analogous to reading already-solved practice problems within the textbook chapter, and Reinforcement Learning (RL), which is like solving new practice problems without help and then checking your answers in the back of the book to measure yourself against an optimal approach and correct answers. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffc192k921af4wed7698x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffc192k921af4wed7698x.png" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Reinforcement Learning (RL) is Essential
&lt;/h2&gt;

&lt;p&gt;Timestamp: &lt;a href="https://www.youtube.com/watch?v=qBOvM7SiDa4&amp;amp;list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs&amp;amp;index=1&amp;amp;t=350s" rel="noopener noreferrer"&gt;5:50&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;We explored why RL is currently so important for building modern LLMs. Kyle explained that unlike SFT, which is about imitation, RL is about grading actions to drive "alignment." It’s crucial for teaching a model safety (penalizing what not to do), enabling the model to use tools like search and interact with the physical world through trial and error, and for performing verifiable tasks like math or coding by rewarding the entire chain of thought that leads to a correct answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Agent Industry Pulse: Why 2025 is the year of RL
&lt;/h2&gt;

&lt;p&gt;Timestamp: &lt;a href="https://www.youtube.com/watch?v=qBOvM7SiDa4&amp;amp;list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs&amp;amp;index=1&amp;amp;t=513s" rel="noopener noreferrer"&gt;8:33&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;In this segment, we looked at the rapidly evolving landscape of RL. Kyle noted that it is fair to call 2025 the "year of RL," highlighting the massive increase in investment and launches across the industry:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;January:&lt;/strong&gt; DeepSeek-R1 launched, making a huge splash with open-source GRPO.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Summer:&lt;/strong&gt; xAI launched Grok 4, reportedly running a 200k GPU cluster for RL at "pre-training scale."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;October:&lt;/strong&gt; A slew of new tooling launches across Google, Meta, and TML.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;November:&lt;/strong&gt; Gemini 3 launched as a premier thinking model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Recent:&lt;/strong&gt; Google launched MaxText 2.0 for fine-tuning on TPUs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78ud8v71oa92vgbu4iz5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78ud8v71oa92vgbu4iz5.png" alt="alt text" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hurdles of Implementing RL
&lt;/h2&gt;

&lt;p&gt;Timestamp: &lt;a href="https://www.youtube.com/watch?v=qBOvM7SiDa4&amp;amp;list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs&amp;amp;index=1&amp;amp;t=646s" rel="noopener noreferrer"&gt;10:46&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Following the industry trends, we discussed why RL is so difficult to implement. Kyle explained that RL combines the complexities of both training and inference into a single process. He outlined three primary challenges: managing infrastructure at the right balance and scale to avoid bottlenecks; choosing the right code, models, algorithms (like GRPO vs. DPO), and data; and finally, the difficulty of integrating disparate components for training, inference, orchestration, and weight synchronization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjca0lpcpo23s95mzv876.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjca0lpcpo23s95mzv876.png" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To address these dimensions of complexity, Google offers MaxText, a vertically integrated solution for performing RL in a highly scalable and performant fashion. MaxText provides highly optimized models, the latest post-training algorithms, high-performance inference via vLLM, and powerful scalability and flexibility via Pathways. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rch212bej2n6eck8lq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rch212bej2n6eck8lq8.png" alt="alt text" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In contrast to DIY approaches where users assemble their own stack of disparate components from many different providers, Google’s approach offers a single integrated stack of co-designed components, from &lt;strong&gt;silicon&lt;/strong&gt; to &lt;strong&gt;software&lt;/strong&gt; to &lt;strong&gt;solutions&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctihvw4xt9q6ajs1dfdp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fctihvw4xt9q6ajs1dfdp.png" width="800" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Factory Floor
&lt;/h2&gt;

&lt;p&gt;The Factory Floor is our segment for getting hands-on. Here, we moved from high-level concepts to practical code with a live demo.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why TPUs Shine for RL
&lt;/h2&gt;

&lt;p&gt;Timestamp: &lt;a href="https://www.youtube.com/watch?v=qBOvM7SiDa4&amp;amp;list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs&amp;amp;index=1&amp;amp;t=772s" rel="noopener noreferrer"&gt;12:52&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Before diving into the demo, Kyle explained why TPUs are uniquely suited for complex AI workloads like RL. Unlike other hardware, TPUs were designed system-first. A TPU Pod can connect up to 9,216 chips over low-latency interconnects, allowing for massive scale without relying on standard data center networks. This is a huge advantage for overcoming RL bottlenecks like weight synchronization. Furthermore, because they are purpose-built for AI, they offer superior price-performance and thermal efficiency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitkt61wg3qhq2oobmryd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitkt61wg3qhq2oobmryd.png" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo: Reinforcement Learning (GRPO) with TPU
&lt;/h2&gt;

&lt;p&gt;Timestamp: &lt;a href="https://www.youtube.com/watch?v=qBOvM7SiDa4&amp;amp;list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs&amp;amp;index=1&amp;amp;t=953s" rel="noopener noreferrer"&gt;15:53&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Don led a hands-on demonstration showing what RL looks like in action using Google's infrastructure. The demo showcased:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Using &lt;strong&gt;MaxText 2.0&lt;/strong&gt; as an integrated solution for the workload.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leveraging models from MaxText and algorithms from Tunix.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Handling inference using vLLM.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Utilizing &lt;strong&gt;Pathways&lt;/strong&gt; for orchestration and scaling to run GRPO (Group Relative Policy Optimization).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
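
&lt;p&gt;The core idea behind GRPO is easy to see in isolation: instead of training a separate value network the way PPO does, it scores each sampled completion against the mean and spread of its own sampling group. A minimal sketch of that scoring step (illustrative only; not the MaxText/Tunix implementation):&lt;/p&gt;

```python
def grpo_advantages(rewards, eps=1e-6):
    # Group-relative advantage: normalize each completion's reward by the
    # mean and standard deviation of its own sampling group. GRPO uses this
    # in place of PPO's learned value network.
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# One prompt, four sampled completions scored by a reward function:
# above-average completions get a positive advantage, below-average negative.
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))
```

&lt;p&gt;These advantages then weight the policy-gradient update for each completion’s tokens.&lt;/p&gt;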

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4tqmo8zv62i6oufqj8q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4tqmo8zv62i6oufqj8q.png" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This holiday special was a great deep dive into the cutting edge of model fine-tuning. While foundation models are getting better every day, the future of highly specialized, capable agents relies on mastering post-training techniques like RL and on having the right vertically integrated infrastructure, such as TPUs, to run them efficiently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your turn to build
&lt;/h2&gt;

&lt;p&gt;We hope this episode gave you valuable tools and perspectives to think about fine-tuning your own specialized agents. Be sure to check out the resources below to explore MaxText 2.0 and start experimenting with TPUs for your workloads. We'll see you next year for a revamped season of The Agent Factory!&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Post-Training Docs: &lt;a href="https://maxtext.readthedocs.io/en/latest/tutorials/post_training_index.html" rel="noopener noreferrer"&gt;https://maxtext.readthedocs.io/en/latest/tutorials/post_training_index.html&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Google Cloud TPU (Ironwood) Documentation: &lt;a href="https://docs.cloud.google.com/tpu/docs/tpu7x" rel="noopener noreferrer"&gt;https://docs.cloud.google.com/tpu/docs/tpu7x&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Google Cloud open source code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MaxText - &lt;a href="https://github.com/AI-Hypercomputer/maxtext" rel="noopener noreferrer"&gt;https://github.com/AI-Hypercomputer/maxtext&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GPU recipes - &lt;a href="https://github.com/AI-Hypercomputer/gpu-recipes" rel="noopener noreferrer"&gt;https://github.com/AI-Hypercomputer/gpu-recipes&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;TPU recipes - &lt;a href="https://github.com/AI-Hypercomputer/tpu-recipes" rel="noopener noreferrer"&gt;https://github.com/AI-Hypercomputer/tpu-recipes&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Andrej Karpathy - Chemistry Analogy: &lt;a href="https://youtu.be/7xTGNNLPyMI?si=Bubrqz_dPpvuqc1M&amp;amp;t=8069" rel="noopener noreferrer"&gt;Deep Dive into LLMs like ChatGPT&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Paper: "Small Language Models are the Future of Agentic AI" (NVIDIA): &lt;a href="https://arxiv.org/abs/2506.02153" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2506.02153&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Fine-tuning blog: &lt;a href="https://cloud.google.com/blog/topics/developers-practitioners/a-step-by-step-guide-to-fine-tuning-medgemma-for-breast-tumor-classification?e=48754805" rel="noopener noreferrer"&gt;https://cloud.google.com/blog/topics/developers-practitioners/a-step-by-step-guide-to-fine-tuning-medgemma-for-breast-tumor-classification?e=48754805&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Connect with us
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Shir Meir Lador →  &lt;a href="https://www.linkedin.com/in/shirmeirlador/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/shirmeirlador/&lt;/a&gt;, &lt;a href="https://x.com/shirmeir86?lang=en" rel="noopener noreferrer"&gt;X&lt;/a&gt;  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Don McCasland →  &lt;a href="https://www.linkedin.com/in/donald-mccasland/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/donald-mccasland/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kyle Meggs → &lt;a href="https://www.linkedin.com/in/kyle-meggs/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/kyle-meggs/&lt;/a&gt; &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Cloud Run Jobs vs. Cloud Batch: Choosing Your Engine for Run-to-Completion Workloads</title>
      <dc:creator>Maciej Strzelczyk</dc:creator>
      <pubDate>Tue, 31 Mar 2026 12:19:29 +0000</pubDate>
      <link>https://forem.com/googleai/cloud-run-jobs-vs-cloud-batch-choosing-your-engine-for-run-to-completion-workloads-56eo</link>
      <guid>https://forem.com/googleai/cloud-run-jobs-vs-cloud-batch-choosing-your-engine-for-run-to-completion-workloads-56eo</guid>
      <description>&lt;p&gt;Google Cloud offers plenty of different products and services, some of which seem to be covering overlapping needs. There are multiple storage solutions (&lt;a href="https://cloud.google.com/storage?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Cloud Storage&lt;/a&gt;, &lt;a href="https://cloud.google.com/filestore?&amp;amp;utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Filestore&lt;/a&gt;), database products (&lt;a href="https://cloud.google.com/sql?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Cloud SQL&lt;/a&gt;, &lt;a href="https://cloud.google.com/spanner?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Spanner&lt;/a&gt;, &lt;a href="https://cloud.google.com/bigquery?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;BigQuery&lt;/a&gt;) or ways to run containerized applications (&lt;a href="https://cloud.google.com/run?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Cloud Run&lt;/a&gt; and &lt;a href="https://cloud.google.com/kubernetes-engine?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;GKE&lt;/a&gt;). The breadth of options to choose from can be overwhelming and lead to situations where it’s not obvious which way to go to achieve your goal.&lt;/p&gt;

&lt;p&gt;A similar situation applies to offline processing (aka batch processing): you have some data and want to run the same operation on each piece of it. For example: transcoding a big video collection, resizing an image gallery or running inference against a prepared set of prompts. The recommended way to handle such workloads is to use proper tools that automatically scale, handle errors and guarantee that all data has been processed. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/batch?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Cloud Batch&lt;/a&gt; and &lt;a href="https://docs.cloud.google.com/run/docs/create-jobs?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Cloud Run Jobs&lt;/a&gt; are two of the options to consider when you want to handle an offline processing task. In this article, I’ll explain what those two products have in common and what are their main differences. We will finish with a couple of examples showing when to best use each of these products.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Similarities
&lt;/h2&gt;

&lt;p&gt;Cloud Batch and Cloud Run Jobs are fundamentally aligned in their purpose and share many core features, making them both excellent choices for asynchronous, run-to-completion tasks like data conversion, media processing, and offline processing. &lt;/p&gt;

&lt;p&gt;Both services allow you to run your code in standard &lt;a href="https://opencontainers.org/" rel="noopener noreferrer"&gt;Open Container Initiative (OCI)&lt;/a&gt; images, completely abstracting away the operational headache of managing permanent clusters. They share critical ecosystem features: both can be triggered for periodic execution using &lt;a href="https://docs.cloud.google.com/scheduler/docs/overview?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Cloud Scheduler&lt;/a&gt; and orchestrated into complex, multi-step data pipelines via &lt;a href="https://cloud.google.com/workflows?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Cloud Workflows&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Security is standardized, with both offering native integration with &lt;a href="https://cloud.google.com/security/products/secret-manager?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Secret Manager&lt;/a&gt; to keep credentials safe and fully supporting &lt;a href="https://docs.cloud.google.com/vpc-service-controls/docs/overview?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;VPC Service Controls (VPC-SC)&lt;/a&gt; to define security perimeters. &lt;/p&gt;

&lt;p&gt;Furthermore, the services are designed for workload portability through a compatible task indexing system; both inject environment variables like &lt;code&gt;CLOUD_RUN_TASK_INDEX&lt;/code&gt; and &lt;code&gt;BATCH_TASK_INDEX&lt;/code&gt; to partition data across parallel tasks. This engineering choice allows container images optimized for Cloud Run to be seamlessly migrated and executed on Cloud Batch. &lt;/p&gt;
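&lt;p&gt;In practice that partitioning is only a few lines of code. A sketch using the documented environment variables (the strided split is just one reasonable scheme, and the file names are placeholders):&lt;/p&gt;

```python
import os

def assigned_shard(items):
    # Cloud Run Jobs sets CLOUD_RUN_TASK_INDEX / CLOUD_RUN_TASK_COUNT;
    # Cloud Batch sets BATCH_TASK_INDEX / BATCH_TASK_COUNT. Falling back
    # lets the same container image run unchanged on either service.
    index = int(os.environ.get("CLOUD_RUN_TASK_INDEX",
                os.environ.get("BATCH_TASK_INDEX", "0")))
    count = int(os.environ.get("CLOUD_RUN_TASK_COUNT",
                os.environ.get("BATCH_TASK_COUNT", "1")))
    # Strided split: task i processes every count-th item starting at i.
    return items[index::count]

files = [f"video-{n}.mp4" for n in range(10)]  # placeholder inputs
print(assigned_shard(files))
```

&lt;p&gt;Because the code checks both sets of variables, the same image can move between the two services without changes.&lt;/p&gt;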

&lt;p&gt;Finally, both offer native support for mounting Google Cloud Storage buckets (using &lt;a href="https://docs.cloud.google.com/storage/docs/cloud-storage-fuse/overview?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Cloud Storage FUSE&lt;/a&gt;) and NFS network shares to efficiently handle large-scale data ingestion and output.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Differences
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Core Architectural Paradigms&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The fundamental choice between Cloud Run Jobs and Google Cloud Batch often comes down to the desired level of abstraction versus the required level of infrastructure control. Cloud Run Jobs represents the serverless ideal, prioritizing developer velocity and rapid scaling by entirely abstracting the underlying hardware platform. In contrast, Google Cloud Batch operates as a highly configurable orchestration layer sitting directly atop &lt;a href="https://cloud.google.com/products/compute?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Compute Engine&lt;/a&gt;, granting granular control over virtual machine (VM) shapes and deep hardware integrations.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;GPU Ecosystem and Support&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Cloud Run Jobs supports a curated, fully managed GPU experience optimized for inference and video transcoding, though it strictly enforces a limit of one GPU per instance and a 1-hour maximum timeout for GPU-based tasks. Google Cloud Batch unlocks the entire Compute Engine accelerator portfolio, allowing users to attach multiple GPUs (up to 8 per VM) and supporting multi-day training runs with advanced interconnects like &lt;a href="https://en.wikipedia.org/wiki/NVLink" rel="noopener noreferrer"&gt;NVLink&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Task Communication&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The architectural divergence between the two services is further highlighted by their approach to inter-task communication. Cloud Run Jobs operates on a "shared nothing" architecture, where parallel tasks are entirely isolated and possess no native mechanism to communicate with one another directly. This is in stark contrast to Google Cloud Batch, which is specifically engineered to support "tightly coupled" workloads, such as multi-physics simulations or complex weather forecasting. Batch facilitates high-performance communication by supporting &lt;a href="https://en.wikipedia.org/wiki/Message_Passing_Interface" rel="noopener noreferrer"&gt;Message Passing Interface (MPI)&lt;/a&gt; libraries and provisioning compute clusters with &lt;a href="https://docs.cloud.google.com/vpc/docs/rdma-network-profiles?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Cloud RDMA (Remote Direct Memory Access)&lt;/a&gt; technology. This allows nodes to exchange state data with ultra-low latency and high bandwidth, making Batch the requisite choice for sophisticated &lt;a href="https://cloud.google.com/discover/what-is-high-performance-computing?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;high-performance computing (HPC)&lt;/a&gt; scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Financial Models and Billing&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Cloud Run Jobs uses instance-based billing, measured in 100-millisecond increments, with a generous recurring free tier for vCPU and memory. Google Cloud Batch has no base service fee; users are billed strictly for the underlying Compute Engine infrastructure consumed. Batch also offers significant financial leverage through Spot VMs, which provide deep discounts for fault-tolerant workloads.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Constraints, Limits, and Maximum Scalability&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The fundamental difference in architecture directly impacts the scale, concurrency, and duration of workloads each service can handle. Cloud Run Jobs is optimized for relatively bounded workloads, while Google Cloud Batch is engineered for massive, unbounded computational scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Execution and Task Limits&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A single &lt;strong&gt;Cloud Run job&lt;/strong&gt; is limited to a maximum of 10,000 independent tasks per execution. The maximum execution length for a standard CPU-based task is 168 hours (7 days), but any task utilizing a GPU is severely restricted to a 1-hour maximum timeout. Fault tolerance allows up to 10 retries per failed task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Cloud Batch&lt;/strong&gt; is built for a significantly larger scale. A single job definition can encompass up to 100,000 tasks within a task group and supports executing up to 5,000 of these tasks in parallel. Execution duration is highly permissive; a Batch task can remain in the RUNNING state for up to 14 days by default. This extended timeout applies even to GPU-based tasks, making Batch mandatory for multi-day distributed training runs.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Specification&lt;/th&gt;
&lt;th&gt;Cloud Run Jobs&lt;/th&gt;
&lt;th&gt;Google Cloud Batch&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Max Tasks Per Job&lt;/td&gt;
&lt;td&gt;10,000&lt;/td&gt;
&lt;td&gt;100,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max Parallel Tasks&lt;/td&gt;
&lt;td&gt;Regional Quota Dependent&lt;/td&gt;
&lt;td&gt;5,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max CPU Task Timeout&lt;/td&gt;
&lt;td&gt;168 Hours (7 Days)&lt;/td&gt;
&lt;td&gt;14 Days (Default limit)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max GPU Task Timeout&lt;/td&gt;
&lt;td&gt;1 Hour&lt;/td&gt;
&lt;td&gt;14 Days (Default limit)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max Retries Per Task&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;Configurable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max Concurrent VMs&lt;/td&gt;
&lt;td&gt;N/A (Serverless)&lt;/td&gt;
&lt;td&gt;2,000 (single-zone) or 4,000 (multi-zone)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Use Case Examples
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Example 1: Administrative Automation and Nightly ETL&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Recommended Service:&lt;/strong&gt; Cloud Run Jobs&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Scenario:&lt;/em&gt; A SaaS platform must execute a nightly script to migrate localized data into a central BigQuery warehouse, generate daily PDF invoices for thousands of clients, and perform routine database schema migrations.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Justification:&lt;/em&gt; These tasks are typically I/O bound, complete within a few minutes or hours (well under the 168-hour limit), and do not require specialized CPU instruction sets. Cloud Run Jobs excels here because it requires zero infrastructure scaffolding; the team simply containerizes its scripts and schedules them via Cloud Scheduler.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Example 2: Massively Parallel Document and Media Processing&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Recommended Service:&lt;/strong&gt; Cloud Run Jobs (with GPU if visual processing is required)&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Scenario:&lt;/em&gt; A media or e-commerce company must process thousands of user-uploaded videos or images daily, requiring video transcoding via FFmpeg or lightweight AI inference (e.g., YOLO object detection).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Justification:&lt;/em&gt; This is an embarrassingly parallel problem: each file can be processed independently, with the task index used to assign files to workers. Cloud Run can spin up hundreds of L4-backed containers in seconds and scale to zero immediately upon completion.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Example 3: High-Performance Computing (HPC) and Multi-Physics Simulation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Recommended Service:&lt;/strong&gt; Google Cloud Batch&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Scenario:&lt;/em&gt; A climate research institute runs physics-based simulations for weather forecasting, or a pharmaceutical company performs massive simulations for drug discovery.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Justification:&lt;/em&gt; These are "tightly coupled" workloads where parallel processes must exchange state data. Batch is mandatory as it supports MPI configurations and Cloud RDMA for ultra-low latency inter-node communication.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Example 4: Distributed Machine Learning Training&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Recommended Service:&lt;/strong&gt; Google Cloud Batch&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Scenario:&lt;/em&gt; An AI laboratory pre-training a 70-billion parameter model or performing extensive fine-tuning across terabytes of data over several days.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Justification:&lt;/em&gt; Cloud Run Jobs is disqualified due to the 1-hour GPU timeout and 1-GPU-per-instance limit. Batch allows provisioning A3 or A4 machine series with up to 8 GPUs per VM interconnected via NVLink for multi-day training runs.&lt;/p&gt;
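&lt;p&gt;To make the contrast concrete, here is roughly what such a job looks like as a Batch job config. Treat this as a sketch: the field names follow the Batch job schema to the best of my knowledge, and the image URI, machine type and counts are placeholder values to verify against the Batch documentation.&lt;/p&gt;

```python
import json

# Illustrative Batch job definition: multi-GPU VMs, a multi-day timeout
# (far past Cloud Run's 1-hour GPU cap), and Spot provisioning for savings.
job = {
    "taskGroups": [{
        "taskCount": 64,
        "parallelism": 8,
        "taskSpec": {
            "runnables": [{"container": {
                # Placeholder training image.
                "imageUri": "us-docker.pkg.dev/my-project/train/worker:latest",
            }}],
            "maxRetryCount": 2,
            "maxRunDuration": "259200s",  # 3 days
        },
    }],
    "allocationPolicy": {
        "instances": [{"policy": {
            "machineType": "a3-highgpu-8g",  # 8 GPUs per VM
            "provisioningModel": "SPOT",
        }}],
    },
    "logsPolicy": {"destination": "CLOUD_LOGGING"},
}
print(json.dumps(job, indent=2))
```

&lt;p&gt;A config like this would be submitted with &lt;code&gt;gcloud batch jobs submit&lt;/code&gt;; nothing comparable is expressible on Cloud Run Jobs.&lt;/p&gt;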

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fheu9czfozccesvsr4drr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fheu9czfozccesvsr4drr.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Happy Processing!
&lt;/h2&gt;

&lt;p&gt;I hope this article has helped you better understand the difference between Cloud Batch and Cloud Run Jobs - the two products designed for running tasks to completion. Lightweight Cloud Run containers and heavy-duty Cloud Batch machines will cover whatever computational tasks you may have. Try them out by &lt;a href="https://codelabs.developers.google.com/codelabs/cloud-starting-cloudrun-jobs?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;creating a Cloud Run Job (code lab)&lt;/a&gt; or by &lt;a href="https://docs.cloud.google.com/batch/docs/create-run-example-job?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;scheduling a Cloud Batch job&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;To stay up to date with everything happening in the &lt;a href="https://cloud.google.com/?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Google Cloud&lt;/a&gt; world, keep an eye on the &lt;a href="https://cloud.google.com/blog/?utm_campaign=CDR_0x73f0e2c4_default_b496192395&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Google Cloud blog&lt;/a&gt; and the &lt;a href="https://www.youtube.com/googlecloudplatform" rel="noopener noreferrer"&gt;Google Cloud YouTube channel&lt;/a&gt; so you don't miss any updates!&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>gcp</category>
      <category>devops</category>
    </item>
    <item>
      <title>Take your vibe coding to the next level</title>
      <dc:creator>Martin Omander</dc:creator>
      <pubDate>Thu, 26 Mar 2026 16:11:25 +0000</pubDate>
      <link>https://forem.com/googleai/take-your-vibe-coding-to-the-next-level-1ea</link>
      <guid>https://forem.com/googleai/take-your-vibe-coding-to-the-next-level-1ea</guid>
      <description>&lt;p&gt;If you’ve been following this series, you know that Cloud Run makes deployment easy and AI tools make development fast. But there is a new frontier in software engineering that combines both into a single, fluid experience: Vibe coding.&lt;/p&gt;

&lt;p&gt;Vibe coding isn’t about being "lazy"; it’s about operating at a higher level of abstraction. It’s the ability to stay in a "flow state" where you describe your vision, and the AI handles the syntax, the boilerplate, and the deployment architecture on Cloud Run. In this final part, we look at how to master this high-velocity workflow without losing the "pro" edge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Turn your ideas into a PRD
&lt;/h2&gt;

&lt;p&gt;This video demonstrates generating user personas, core features, and success metrics, and then scaffolding an app structure to deploy on Cloud Run.&lt;/p&gt;


&lt;div&gt;
    &lt;iframe src="https://www.youtube.com/embed/WdhmYWIOjTU"&gt;
    &lt;/iframe&gt;
  &lt;/div&gt;


&lt;h2&gt;
  
  
  Architect your startup strategy
&lt;/h2&gt;

&lt;p&gt;Learn how to generate a "best practices.md" file that covers stateful containers, structured logging, and CI/CD rules for your Cloud Run deployments.&lt;/p&gt;


&lt;div&gt;
    &lt;iframe src="https://www.youtube.com/embed/qkAHH4UyTQY"&gt;
    &lt;/iframe&gt;
  &lt;/div&gt;


&lt;h2&gt;
  
  
  Build and deploy production-ready features
&lt;/h2&gt;

&lt;p&gt;Watch a production-ready API feature be built and deployed in under a minute using the Gemini CLI. &lt;/p&gt;


&lt;div&gt;
    &lt;iframe src="https://www.youtube.com/embed/FQWEam2AQWU"&gt;
    &lt;/iframe&gt;
  &lt;/div&gt;


&lt;h2&gt;
  
  
  Vibe debugging
&lt;/h2&gt;

&lt;p&gt;See how the AI can read production logs, identify a null pointer exception, and write a fix so you can get your app back online quickly.&lt;/p&gt;


&lt;div&gt;
    &lt;iframe src="https://www.youtube.com/embed/stmiyFzhnZo"&gt;
    &lt;/iframe&gt;
  &lt;/div&gt;


&lt;h2&gt;
  
  
  Get started today!
&lt;/h2&gt;

&lt;p&gt;The cloud is no longer a place you "send" your code; it’s an extension of your development environment. By mastering these workflows, you aren't just a developer—you're an architect of the future.&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>vibecoding</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Build real-time conversational agents with Gemini 3.1 Flash Live</title>
      <dc:creator>Thor 雷神 Schaeff</dc:creator>
      <pubDate>Thu, 26 Mar 2026 15:25:51 +0000</pubDate>
      <link>https://forem.com/googleai/build-real-time-conversational-agents-with-gemini-31-flash-live-27f6</link>
      <guid>https://forem.com/googleai/build-real-time-conversational-agents-with-gemini-31-flash-live-27f6</guid>
      <description>&lt;p&gt;Today, we’re launching &lt;a href="https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-flash-live" rel="noopener noreferrer"&gt;Gemini 3.1 Flash Live&lt;/a&gt; via the &lt;a href="https://ai.google.dev/gemini-api/docs/live" rel="noopener noreferrer"&gt;Gemini Live API&lt;/a&gt; in Google AI Studio. Gemini 3.1 Flash Live helps enable developers to build real-time voice and vision agents that can not only process the world around them, but also respond at the speed of conversation.&lt;/p&gt;

&lt;p&gt;This is a step change in latency, reliability and natural-sounding dialogue, delivering the quality needed for the next generation of voice-first AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experience enhanced latency, reliability and quality
&lt;/h2&gt;

&lt;p&gt;For real-time interactions, every millisecond of latency strips away the natural flow of the conversation that users expect. The new model better understands tone, emphasis and intent, enabling agents with key improvements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Higher task completion rates in noisy, real-world environments:&lt;/strong&gt; We’ve significantly improved the model’s ability to trigger external tools and deliver information during live conversations. By better discerning relevant speech from environmental sounds like traffic or television, the model more effectively filters out background noise to remain reliable and responsive to instructions.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better instruction-following:&lt;/strong&gt; Adherence to complex system instructions has been boosted significantly. Your agent will stay within its operational guardrails, even when conversations take unexpected turns.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More natural and low-latency dialogue:&lt;/strong&gt; The latest model improves on latency and is even more effective at recognizing acoustic nuances like pitch and pace compared to 2.5 Flash Native Audio, making real-time conversations feel a lot more fluid and natural.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-lingual capabilities:&lt;/strong&gt; The model supports more than 90 languages for real-time multi-modal conversations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Build with an expanding ecosystem of integrations
&lt;/h2&gt;

&lt;p&gt;The Live API is built for production environments, but real-world systems require handling of diverse inputs, from live video streams to on-demand phone calls.&lt;/p&gt;

&lt;p&gt;For systems that require WebRTC scaling or global edge routing, we recommend exploring our partner integrations to streamline the development of real-time voice and video agents.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.livekit.io/agents/models/realtime/plugins/gemini/" rel="noopener noreferrer"&gt;LiveKit&lt;/a&gt; — Use the Gemini Live API with LiveKit Agents.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.pipecat.ai/guides/features/gemini-live" rel="noopener noreferrer"&gt;Pipecat by Daily&lt;/a&gt; — Create a real-time AI chatbot using Gemini Live and Pipecat.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.fishjam.io/tutorials/gemini-live-integration" rel="noopener noreferrer"&gt;Fishjam by Software Mansion&lt;/a&gt; — Create live video and audio streaming applications with Fishjam.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://visionagents.ai/integrations/gemini" rel="noopener noreferrer"&gt;Vision Agents by Stream&lt;/a&gt; — Build real-time voice and video AI applications with Vision Agents.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://voximplant.com/products/gemini-client" rel="noopener noreferrer"&gt;Voximplant&lt;/a&gt; — Connect inbound and outbound calls to Live API with Voximplant.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://firebase.google.com/docs/ai-logic/live-api?api=dev" rel="noopener noreferrer"&gt;Firebase AI SDK&lt;/a&gt; — Get started with the Gemini Live API using Firebase AI Logic.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Get started with the Live API
&lt;/h2&gt;

&lt;p&gt;Gemini 3.1 Flash Live is available starting today via the Gemini API and in Google AI Studio. Developers can use the Gemini &lt;a href="https://ai.google.dev/gemini-api/docs/live" rel="noopener noreferrer"&gt;Live API&lt;/a&gt; to integrate the model into their application. &lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/XV5bhkDpL7U"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Explore our developer documentation to learn how you can build real-time agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gemini &lt;a href="https://ai.google.dev/gemini-api/docs/live?example=mic-stream" rel="noopener noreferrer"&gt;Live API documentation&lt;/a&gt;: Explore features like multilingual support, tool use and function calling, session management (for managing long running conversations) and ephemeral tokens.
&lt;/li&gt;
&lt;li&gt;Gemini &lt;a href="https://github.com/google-gemini/gemini-live-api-examples" rel="noopener noreferrer"&gt;Live API examples&lt;/a&gt;: Get inspiration for the kind of voice experiences you can build today with the model.
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/google-gemini/gemini-skills/tree/main/skills/gemini-live-api-dev" rel="noopener noreferrer"&gt;Gemini Live API Skill&lt;/a&gt;: For coding agents to learn and build with the Live API.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Get started with the &lt;a href="https://ai.google.dev/gemini-api/docs/live-api/get-started-sdk" rel="noopener noreferrer"&gt;Google GenAI SDK&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;genai&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;genai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemini-3.1-flash-live-preview&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;response_modalities&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AUDIO&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;aio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;live&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Session started&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="c1"&gt;# Send content...
&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>ai</category>
      <category>gemini</category>
      <category>voice</category>
      <category>multimodal</category>
    </item>
    <item>
      <title>Vandalizing My Own Wikipedia Experience: A 90s Cyberpunk GeoCities Makeover</title>
      <dc:creator>Paige Bailey</dc:creator>
      <pubDate>Fri, 20 Mar 2026 15:44:30 +0000</pubDate>
      <link>https://forem.com/googleai/vandalizing-my-own-wikipedia-experience-a-90s-cyberpunk-geocities-makeover-13ie</link>
      <guid>https://forem.com/googleai/vandalizing-my-own-wikipedia-experience-a-90s-cyberpunk-geocities-makeover-13ie</guid>
      <description>&lt;p&gt;Wikipedia is a marvel. It is the Library of Alexandria of our time, a meticulously curated repository of human knowledge, wrapped in a user interface so ruthlessly utilitarian it makes a hospital corridor look like a rave. &lt;/p&gt;

&lt;p&gt;But sometimes, when I am deep in a Wikipedia rabbit hole reading about &lt;a href="https://en.wikipedia.org/wiki/List_of_animals_with_fraudulent_diplomas" rel="noopener noreferrer"&gt;List of animals with fraudulent diplomas&lt;/a&gt; at 2:00 AM, the sterile white background feels... insufficient. I don't want brutalist minimalism. I want the web the way the ancients intended: dripping in neon pink, plastered in Comic Sans, and crawling with pixelated cats. &lt;/p&gt;

&lt;p&gt;So, I decided to write a custom Wikipedia &lt;code&gt;User Script&lt;/code&gt; to turn the site into a 1998 GeoCities cyberpunk fever dream. &lt;/p&gt;

&lt;p&gt;Instead of writing this from scratch, I wanted to see how well modern LLMs could handle writing niche MediaWiki API scripts. Here is a field report on how I built this abomination using Gemini 3.1 Pro Preview.&lt;/p&gt;




&lt;h3&gt;
  
  
  Grounding Gemini with Wikipedia-specific syntax
&lt;/h3&gt;

&lt;p&gt;LLMs are great at writing vanilla JavaScript, but Wikipedia user scripts rely on specific, slightly archaic MediaWiki globals (like &lt;code&gt;mw.loader.using&lt;/code&gt; and &lt;code&gt;mw.util.addCSS&lt;/code&gt;). If you just blindly ask an LLM to "make Wikipedia pink," it usually hallucinates browser extensions or generic Tampermonkey scripts. &lt;/p&gt;

&lt;p&gt;To bypass this, I jumped into &lt;strong&gt;Google AI Studio&lt;/strong&gt; and loaded up the &lt;code&gt;Gemini 3.1 Pro Preview&lt;/code&gt; model. &lt;/p&gt;

&lt;p&gt;The secret sauce here was using the &lt;a href="https://ai.google.dev/gemini-api/docs/url-context" rel="noopener noreferrer"&gt;URL Context feature&lt;/a&gt;. I toggled URL Context on and pasted in the URL for Wikipedia's custom scripting documentation: &lt;br&gt;
&lt;code&gt;https://en.wikipedia.org/wiki/Wikipedia:User_scripts&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;My prompt was simple but unhinged: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Using the provided documentation on Wikipedia User Scripts, write a script for my Special:MyPage/common.js that makes my Wikipedia viewing experience look like a 90s cyberpunk GeoCities page. I want a pink/cyan grid background, glowing Comic Sans headers, a massive scrolling &lt;code&gt;&amp;lt;marquee&amp;gt;&lt;/code&gt; for the article title, a giant glowing sparkle mouse trail, and a squad of animated cats walking across the top of my screen."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Because Gemini 3.1 Pro Preview had the actual documentation in its context window, it knew exactly how to inject CSS securely via MediaWiki's utility functions, and it gave me a plug-and-play script.&lt;/p&gt;
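&lt;p&gt;&lt;em&gt;For the curious, the same URL Context tool is available programmatically. The sketch below only assembles a request body; the model id and the exact &lt;code&gt;urlContext&lt;/code&gt; tool shape are assumptions drawn from the URL Context docs linked above, and actually sending the request needs an API key.&lt;/em&gt;&lt;/p&gt;

```javascript
// Sketch only: assemble the request body for a URL Context call.
// The model id and the urlContext tool shape are assumptions drawn
// from the URL Context documentation, not verified API constants.
function buildUrlContextRequest(prompt, docUrl) {
  return {
    model: 'gemini-3.1-pro-preview',
    contents: prompt + '\n\nUse this documentation: ' + docUrl,
    config: {
      tools: [{ urlContext: {} }], // lets the model fetch and read docUrl
    },
  };
}

const request = buildUrlContextRequest(
  'Write a Wikipedia user script that restyles my reading experience.',
  'https://en.wikipedia.org/wiki/Wikipedia:User_scripts'
);
console.log(request.config.tools.length); // 1
```

&lt;p&gt;&lt;em&gt;In AI Studio, toggling URL Context on does the equivalent wiring for you.&lt;/em&gt;&lt;/p&gt;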

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvufd1fm6j5rdpx0d6g34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvufd1fm6j5rdpx0d6g34.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  Breaking down the script
&lt;/h3&gt;

&lt;p&gt;The resulting script is a beautiful combination of modern DOM manipulation and deeply offensive 90s aesthetics.&lt;/p&gt;
&lt;h4&gt;
  
  
  1. The Marquee Title
&lt;/h4&gt;

&lt;p&gt;If we are going to read about the &lt;a href="https://en.wikipedia.org/wiki/Emu_War" rel="noopener noreferrer"&gt;Emu War&lt;/a&gt;, that title needs to &lt;em&gt;move&lt;/em&gt;. The script grabs the &lt;code&gt;#firstHeading&lt;/code&gt; element and violently wraps its inner HTML in a &lt;code&gt;&amp;lt;marquee&amp;gt;&lt;/code&gt; tag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;$title&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#firstHeading&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;titleText&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;$title&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;html&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;$title&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;html&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;marquee scrollamount="15" behavior="alternate"&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;titleText&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;/marquee&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note: The fact that modern browsers in 2026 still parse and execute the &lt;code&gt;&amp;lt;marquee&amp;gt;&lt;/code&gt; tag is a testament to the web’s unbreakable backwards compatibility. It is the digital equivalent of a vestigial tail.&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  2. The Sparkle Trail (A Lesson in Throttling)
&lt;/h4&gt;

&lt;p&gt;To create the mouse trail, the script listens to the &lt;code&gt;mousemove&lt;/code&gt; event and appends absolutely-positioned &lt;code&gt;&amp;lt;span&amp;gt;&lt;/code&gt; elements containing cyberpunk symbols (&lt;code&gt;✦&lt;/code&gt;, &lt;code&gt;★&lt;/code&gt;, &lt;code&gt;✨&lt;/code&gt;) to the DOM. &lt;/p&gt;

&lt;p&gt;To prevent this from immediately melting my GPU (a very real threat when generating hundreds of DOM nodes a second), the model smartly implemented a timestamp throttle:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;now&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;now&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;lastSparkleTime&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Only spawn a sparkle every 40ms&lt;/span&gt;
&lt;span class="nx"&gt;lastSparkleTime&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;now&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It then applies a CSS &lt;code&gt;@keyframes&lt;/code&gt; animation to each sparkle so they slowly drift downward, rotate 180 degrees, and fade to &lt;code&gt;opacity: 0&lt;/code&gt; before being garbage-collected by a &lt;code&gt;setTimeout&lt;/code&gt; a second later. &lt;/p&gt;

&lt;h4&gt;
  
  
  3. The Mathematics of Walking Cats
&lt;/h4&gt;

&lt;p&gt;Instead of using a clunky JS &lt;code&gt;setInterval&lt;/code&gt; to move the cats, Gemini 3.1 leaned into pure, hardware-accelerated CSS animations.&lt;/p&gt;

&lt;p&gt;It created a fixed header container (&lt;code&gt;pointer-events: none&lt;/code&gt; so I can still click the search bar through the cats' ethereal bodies). Then, it applied two separate animations. &lt;/p&gt;

&lt;p&gt;The first animation slides the whole squad from &lt;code&gt;100vw&lt;/code&gt; (off-screen right) to &lt;code&gt;-100%&lt;/code&gt; (off-screen left). &lt;/p&gt;

&lt;p&gt;The second animation creates the "walking" illusion. If you think about the geometry of a walking pixel cat, it's essentially a sine wave. To achieve this, the script applies a 10px vertical bounce to each cat (&lt;code&gt;transform: translateY(-10px)&lt;/code&gt;). &lt;/p&gt;

&lt;p&gt;To make it look like a chaotic squad rather than a synchronized military parade, the script uses the &lt;code&gt;nth-child(even)&lt;/code&gt; pseudo-class to offset the animation delay of every other cat:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;&lt;span class="nc"&gt;.walking-cat&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;animation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;catBounce&lt;/span&gt; &lt;span class="m"&gt;0.4s&lt;/span&gt; &lt;span class="n"&gt;alternate&lt;/span&gt; &lt;span class="n"&gt;infinite&lt;/span&gt; &lt;span class="n"&gt;ease-in-out&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nc"&gt;.walking-cat&lt;/span&gt;&lt;span class="nd"&gt;:nth-child&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nt"&gt;even&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;animation-delay&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.2s&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c"&gt;/* Phase offset for the bounce! */&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are essentially phase-shifting the vertical oscillation of our felines to simulate independent locomotion.&lt;/p&gt;
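&lt;p&gt;&lt;em&gt;To make that concrete, here is a linear toy model of the bounce (the real animation uses ease-in-out easing, which this sketch ignores). With a 0.4s half-period, the 0.2s delay puts even cats a quarter of the full 0.8s cycle behind their neighbors:&lt;/em&gt;&lt;/p&gt;

```javascript
// Toy model of `catBounce 0.4s alternate infinite` with an optional
// animation-delay. Returns the vertical offset in px (0 down to -10).
// Simplification: linear timing instead of the real ease-in-out.
function bounceY(t, delaySec) {
  delaySec = delaySec || 0;
  if (delaySec > t) return 0;                // animation not started yet
  var phase = ((t - delaySec) / 0.4) % 2;    // full cycle = 0.8s (down + up)
  return phase > 1 ? -10 * (2 - phase) : -10 * phase;
}

// While an odd cat is at the top of its bounce, the delayed even cat
// is only halfway up:
console.log(bounceY(0.4));      // -10
console.log(bounceY(0.4, 0.2)); // -5
```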




&lt;h3&gt;
  
  
  The final results
&lt;/h3&gt;

&lt;p&gt;I pasted the code into my &lt;code&gt;Special:MyPage/common.js&lt;/code&gt;, hit publish, and bypassed my cache. &lt;/p&gt;

&lt;p&gt;The result is staggering.&lt;/p&gt;

&lt;p&gt;I am currently reading the deeply serious, heavily cited Wikipedia article for &lt;a href="https://en.wikipedia.org/wiki/Maximilien_Robespierre" rel="noopener noreferrer"&gt;Maximilien Robespierre&lt;/a&gt;. The background is a dark void overlaid with a neon pink laser grid. The header "&lt;strong&gt;MAXIMILIEN ROBESPIERRE&lt;/strong&gt;" is glowing in hot pink Comic Sans, aggressively bouncing off the edges of my monitor. &lt;/p&gt;

&lt;p&gt;

&lt;iframe class="tweet-embed" id="tweet-2035009322531660256-82" src="https://platform.twitter.com/embed/Tweet.html?id=2035009322531660256"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;Every time I move my mouse to hover over a citation, a massive explosion of 45-pixel-wide cyan stars erupts across the text. And above it all, a squad of five neon cats marches endlessly toward the left side of my screen, oblivious to the Reign of Terror occurring in the text below them.&lt;/p&gt;

&lt;p&gt;It is awful. I am never turning it off. &lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you want to ruin your own Wikipedia experience, you can find the complete script in the replies below. Just remember to log in, navigate to &lt;code&gt;Special:MyPage/common.js&lt;/code&gt; and &lt;code&gt;Special:MyPage/common.css&lt;/code&gt;, and let the 90s flow through you.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>javascript</category>
      <category>programming</category>
    </item>
    <item>
      <title>My First Experience Creating Antigravity Skills</title>
      <dc:creator>Shir Meir Lador</dc:creator>
      <pubDate>Fri, 20 Mar 2026 15:23:02 +0000</pubDate>
      <link>https://forem.com/googleai/my-first-experience-creating-antigravity-skills-524b</link>
      <guid>https://forem.com/googleai/my-first-experience-creating-antigravity-skills-524b</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7cvbil990snohnuztk9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7cvbil990snohnuztk9w.png" width="700" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;small&gt;Experimenting with Agent skills for the first time, feeling empowered!&lt;/small&gt;&lt;/center&gt;

&lt;p&gt;Last week, I was at an event where we taught developers how to build &lt;a href="https://goo.gle/aaiwcr-1" rel="noopener noreferrer"&gt;MCP servers&lt;/a&gt; and &lt;a href="http://goo.gle/aaiwcr-2" rel="noopener noreferrer"&gt;agents&lt;/a&gt;, and &lt;a href="http://goo.gle/aaiwcr-3" rel="noopener noreferrer"&gt;deploy open models&lt;/a&gt; to &lt;a href="https://docs.cloud.google.com/run/docs?utm_campaign=CDR_0x91b1edb5_default_b491641592&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;Google Cloud Run&lt;/a&gt;. After the session, one of the developers shared something that really stuck with me: he was already using our content to create specialized &lt;a href="https://antigravity.google/docs/skills" rel="noopener noreferrer"&gt;&lt;strong&gt;Skills&lt;/strong&gt;&lt;/a&gt; to share with his entire team.&lt;/p&gt;

&lt;p&gt;I got inspired and decided it was time to dive into &lt;a href="https://antigravity.google/docs/skills" rel="noopener noreferrer"&gt;Agent Skills&lt;/a&gt;. While building my last project, the dev-signal agent, I learned a lot about how to bring agents and AI applications to production in a robust, scalable manner. I thought, &lt;em&gt;this is a great opportunity to give my favorite coding agent, Google’s &lt;a href="https://www.antigravity.google/" rel="noopener noreferrer"&gt;Antigravity&lt;/a&gt; (an “agent-first” IDE), those skills so that going forward, it will just do it for me!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In this post, I’ll walk through how I built the 13 production skills in this &lt;a href="https://github.com/GoogleCloudPlatform/devrel-demos/tree/main/ai-ml/dev-signal/.agent/skills" rel="noopener noreferrer"&gt;repository&lt;/a&gt; and the patterns behind them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Agent Skills?
&lt;/h2&gt;

&lt;p&gt;As &lt;a href="https://www.linkedin.com/in/iromin/?originalSubdomain=in" rel="noopener noreferrer"&gt;Romin Irani&lt;/a&gt; explains in &lt;a href="https://medium.com/google-cloud/tutorial-getting-started-with-antigravity-skills-864041811e0d" rel="noopener noreferrer"&gt;“Getting Started with Google Antigravity Skills”&lt;/a&gt;, skills represent a shift from monolithic context loading to &lt;strong&gt;Progressive Disclosure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Agents get “overwhelmed” when they are given too many tools at once (a phenomenon known as “&lt;a href="https://www.linkedin.com/posts/smithakolan_your-ai-agent-is-not-bad-at-reasoning-activity-7422342915089178624-awR3?rcm=ACoAAAYeeDsBfJzKJQaDuSjRnUBmKV20OJV2olc" rel="noopener noreferrer"&gt;Tool Bloat&lt;/a&gt;”). To solve for that, Skills let the agent “load” specialist knowledge only when needed. When you ask an agent to “evaluate a shadow revision,” it figures out that it needs to leverage the &lt;strong&gt;Shadow Deployer&lt;/strong&gt; skill as context for this operation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Workspace vs. Global Scope
&lt;/h2&gt;

&lt;p&gt;In Antigravity, you can manage these skills in two distinct ways depending on how you want to use them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Workspace Scope:&lt;/strong&gt; Located in &lt;code&gt;.agent/skills/&lt;/code&gt; within your project root. These are specific to your project and can be committed to GitHub so your entire team can benefit from the same production patterns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global Scope:&lt;/strong&gt; Located in &lt;code&gt;~/.gemini/antigravity/skills/&lt;/code&gt;. These are your personal utilities that stay with you across every project you work on.&lt;/li&gt;
&lt;/ul&gt;
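
&lt;p&gt;In either location, a skill is just a folder with a &lt;code&gt;SKILL.md&lt;/code&gt; file whose frontmatter tells the agent when to load it. Here is a hypothetical sketch for one of the skills in this repository (the field names follow the common Agent Skills convention; check the Antigravity docs for the exact schema):&lt;/p&gt;

```markdown
---
name: gcp-agent-shadow-deployer
description: Use when deploying a new Cloud Run revision as a "dark
  canary" that receives mirrored traffic via revision tags.
---

# Shadow Deployer

1. Deploy the new revision with `--no-traffic` and a named tag.
2. Replay sampled production requests against the tagged revision URL.
3. Compare evaluation metrics before promoting the revision.
```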

&lt;h2&gt;
  
  
  How I built the skills
&lt;/h2&gt;

&lt;p&gt;Following the principles in &lt;a href="https://www.linkedin.com/in/petruzalek/" rel="noopener noreferrer"&gt;Daniela Petruzalek&lt;/a&gt;’s &lt;a href="https://medium.com/google-cloud/building-agent-skills-with-skill-creator-855f18e785cf" rel="noopener noreferrer"&gt;“Building Agent Skills with skill-creator”,&lt;/a&gt; I took a “methodology-first” approach. I used the existing dev-signal blog series I’ve been working on and the &lt;a href="https://github.com/GoogleCloudPlatform/devrel-demos/tree/main/ai-ml/dev-signal" rel="noopener noreferrer"&gt;codebase&lt;/a&gt; itself as core context, asking Antigravity to identify and codify the unique skills needed to &lt;strong&gt;build a production agent on Google Cloud.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For some of the more specialized areas, I provided additional context with patterns I’d like to follow, such as the agent evaluation &lt;a href="https://codelabs.developers.google.com/codelabs/production-ready-ai-roadshow/2-evaluating-multi-agent-systems/evaluating-multi-agent-systems#0" rel="noopener noreferrer"&gt;codelab&lt;/a&gt; and &lt;a href="https://cloud.google.com/blog/topics/developers-practitioners/from-vibe-checks-to-continuous-evaluation-engineering-reliable-ai-agents?utm_campaign=CDR_0x91b1edb5_default_b491641592&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;blog&lt;/a&gt; and the agent security &lt;a href="https://codelabs.developers.google.com/codelabs/production-ready-ai-roadshow/3-securing-a-multi-agent-system/securing-a-multi-agent-system#0?utm_campaign=CDR_0x91b1edb5_default_b491641592&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;codelab&lt;/a&gt;, all written by my awesome team.&lt;/p&gt;

&lt;p&gt;These 13 skills give Antigravity (or any developer using them) the crucial toolkit of a Google Cloud production engineer. I’m currently finalizing a detailed, step-by-step walkthrough of the dev-signal agent, which will be published on the &lt;a href="https://cloud.google.com/blog" rel="noopener noreferrer"&gt;&lt;strong&gt;Google Cloud Blog&lt;/strong&gt;&lt;/a&gt; very soon. (Follow me for future updates!)&lt;/p&gt;

&lt;p&gt;In the meantime, you don’t have to wait — the full &lt;a href="https://github.com/GoogleCloudPlatform/devrel-demos/tree/main/ai-ml/dev-signal" rel="noopener noreferrer"&gt;repository&lt;/a&gt; and the &lt;a href="https://github.com/GoogleCloudPlatform/devrel-demos/tree/main/ai-ml/dev-signal/.agent/skills" rel="noopener noreferrer"&gt;skills&lt;/a&gt; are available for you to explore and leverage in your own projects today.&lt;/p&gt;

&lt;p&gt;Here is the full inventory of the skills:&lt;/p&gt;

&lt;h2&gt;
  
  
  🏗️ Production Agent
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;adk-memory-bank-initializer:&lt;/strong&gt; Long-term state logic with Vertex AI Memory Bank.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;agent-containerizer:&lt;/strong&gt; Mixed-runtime Dockerfiles (Python + Node.js).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;cloud-run-agent-architect:&lt;/strong&gt; Least-privilege Terraform for Cloud Run.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;gcp-production-secret-handler:&lt;/strong&gt; In-memory secret fetching pattern (Secret Manager).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mcp-connector-generator:&lt;/strong&gt; Standardized MCP connection logic.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  📊 Evaluation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;gcp-agent-eval-engine-runner:&lt;/strong&gt; Parallel inference and reasoning trace capture.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;gcp-agent-eval-metric-configurator:&lt;/strong&gt; Setup for Grounding and Tool Use rubrics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;gcp-agent-golden-dataset-builder:&lt;/strong&gt; Tools for building datasets with reference trajectories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;gcp-agent-shadow-deployer:&lt;/strong&gt; “Dark Canary” deployment scripts with revision tagging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;gcp-agent-tool-trajectory-evaluator:&lt;/strong&gt; Custom Python metrics for Precision and Recall.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🛡️ Security
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;gcp-agent-model-armor-shield:&lt;/strong&gt; Intelligent firewall (Prompt Injection, RAI, Malicious URL filters).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;gcp-agent-safety-gatekeeper:&lt;/strong&gt; Python integration pattern (safety_util.py) for sanitizing user inputs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;gcp-agent-sdp-template-factory:&lt;/strong&gt; Terraform for Sensitive Data Protection (PII/Secret redaction).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By codifying these patterns into production skills, Antigravity can now leverage them automatically in my day-to-day development. I hope you find these as helpful as I do!&lt;/p&gt;

&lt;h2&gt;
  
  
  Pro tip - self improving skills!
&lt;/h2&gt;

&lt;p&gt;Because these skills were AI-generated, they might not work perfectly for your specific environment on the first try. But that’s actually the best part of working with an agentic IDE: if a skill doesn’t work well for you, don’t just manually fix the code. Let the coding agent figure it out. Once it finds the solution, ask it to update the corresponding &lt;code&gt;SKILL.md&lt;/code&gt; with what it learned. This captures the corrected workflow, ensuring the agent doesn’t repeat the mistake and saving you tokens and time on the next run. Think of these as living documents that actively improve as you build.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ready to get started?&lt;/strong&gt; Clone the &lt;a href="https://github.com/GoogleCloudPlatform/devrel-demos/tree/main/ai-ml/dev-signal" rel="noopener noreferrer"&gt;repository&lt;/a&gt; and add these skills to your Workspace or Global Scope to start building your own production-ready agents. Learn more about &lt;a href="https://antigravity.google/docs/skills" rel="noopener noreferrer"&gt;Agent skills.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow me on &lt;a href="https://www.linkedin.com/in/shirmeirlador/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; and &lt;a href="https://x.com/shirmeir86?lang=en" rel="noopener noreferrer"&gt;X&lt;/a&gt; for updates on my next blogs and videos.&lt;/p&gt;

</description>
      <category>antigravity</category>
      <category>ai</category>
      <category>googlecloud</category>
      <category>agents</category>
    </item>
    <item>
      <title>Unlocking Gemini CLI with Skills, Hooks &amp; Plan Mode</title>
      <dc:creator>Greg Baugues</dc:creator>
      <pubDate>Fri, 20 Mar 2026 13:00:00 +0000</pubDate>
      <link>https://forem.com/googleai/unlocking-gemini-cli-with-skills-hooks-plan-mode-2bgf</link>
      <guid>https://forem.com/googleai/unlocking-gemini-cli-with-skills-hooks-plan-mode-2bgf</guid>
      <description>&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=ZXYuiEMm21s" rel="noopener noreferrer"&gt;In Unlocking Gemini CLI with Skills, Hooks &amp;amp; Plan Mode&lt;/a&gt;, we moved past the basics and into the "power user" features of &lt;a href="https://t.co/Ly0c8zpKnr" rel="noopener noreferrer"&gt;Gemini CLI&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;I was joined by Jack Wotherspoon from the Gemini CLI team to show how developers can exert more control over their AI agents and handle complex, multi-step projects with confidence.&lt;/p&gt;

&lt;p&gt;From a 20-minute app build to the introduction of a "read-only" research mode, this episode was packed with tools designed to bridge the gap between AI autonomy and developer intent.&lt;/p&gt;
&lt;h2&gt;
  
  
  The 20-minute build: From idea to deployment
&lt;/h2&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div&gt;
  &lt;iframe src="https://share.descript.com/embed/X1hyhOf0mkD"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;To set the stage, Jack showcased Memory Wall, a digital bulletin board built using React, Three.js, and Firebase. The kicker? It took only 20 minutes to go from a blank slate to a live-deployed application.&lt;/p&gt;

&lt;p&gt;This served as the playground for the day's deep dives:&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Deterministic control with hooks
&lt;/h2&gt;

&lt;p&gt;One of the biggest hurdles with AI agents is their non-deterministic nature. &lt;a href="https://goo.gle/Gemini-Cli-Hooks" rel="noopener noreferrer"&gt;Hooks&lt;/a&gt; change that. They are scripts that run at specific lifecycle points—like at session start or before a tool call.&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div&gt;
  &lt;iframe src="https://share.descript.com/embed/JfX6yDNTMKY"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;


&lt;br&gt;
 

&lt;ul&gt;
&lt;li&gt;The "dev server" hook: Jack demonstrated a hook that checks if a local dev server is running on startup. If not, Gemini CLI alerts the user and offers to start it.&lt;/li&gt;
&lt;li&gt;Safety first: You can use hooks to run linters or "security guards" that prevent the AI from writing messy code or deleting sensitive files.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Pro tip: Use the new Background Tasks feature (Control + B) to keep your dev servers running in the terminal without blocking your conversation with Gemini.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  2. "Expert hats": Specialized skills that refine agent behavior
&lt;/h2&gt;

&lt;p&gt;If you’ve ever worried about "context bloat"—where an AI gets confused by too much information—skills are your solution. Jack described these as "library books on a shelf."&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div&gt;
  &lt;iframe src="https://share.descript.com/embed/oGcpod2SJaj"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;




&lt;p&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Progressive disclosure:&lt;/strong&gt; Instead of loading every best practice into every prompt, Skills load specialized knowledge (like Three.js expertise or documentation style guides) only when they are triggered.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The skill creator:&lt;/strong&gt; Gemini CLI now has a built-in skill to help you build skills. Just ask: "Create a docs-writer skill for this project," and the CLI will walk you through a setup interview.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  3. The "Ask User" tool
&lt;/h2&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div&gt;
  &lt;iframe src="https://share.descript.com/embed/qxqrxO8etQw"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;


&lt;br&gt;
 

&lt;p&gt;Gone are the days of the CLI just guessing what you want. With the new &lt;strong&gt;Ask User&lt;/strong&gt; tool, Gemini CLI can pause and present interactive dialogues, multiple-choice questions, and yes/no prompts. This ensures the agent is aligned with your vision before it touches a single line of code.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Look before you leap with Plan Mode (preview)
&lt;/h2&gt;

&lt;p&gt;Perhaps the most anticipated feature is Plan Mode, currently in preview. It transforms Gemini CLI into a read-only researcher.&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div&gt;
  &lt;iframe src="https://share.descript.com/embed/WA2F0NqGgwh"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;


&lt;br&gt;
 

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Research first:&lt;/strong&gt; In Plan Mode, the agent explores your codebase and external docs to create a structured "battle plan."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User approval:&lt;/strong&gt; It presents this plan to you for feedback. Only once you give the "green light" does it switch to execution mode to start editing files.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Ready to dive deeper?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Watch:&lt;/strong&gt; Missed the live demo? &lt;a href="https://www.youtube.com/watch?v=ZXYuiEMm21s" rel="noopener noreferrer"&gt;Catch the full replay&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learn:&lt;/strong&gt; Take the free &lt;a href="https://t.co/N82rRy1bPk" rel="noopener noreferrer"&gt;DeepLearning.ai&lt;/a&gt; course for hands-on practice.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contribute:&lt;/strong&gt; Gemini CLI is open-source! Check out the "Help Wanted" labels on &lt;a href="https://geminicli.com/docs/contributing/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>gemini</category>
      <category>ai</category>
      <category>cli</category>
    </item>
    <item>
      <title>Introducing the new full-stack vibe coding experience in Google AI Studio</title>
      <dc:creator>Kat Kampf</dc:creator>
      <pubDate>Thu, 19 Mar 2026 18:12:23 +0000</pubDate>
      <link>https://forem.com/googleai/introducing-the-new-full-stack-vibe-coding-experience-in-google-ai-studio-471g</link>
      <guid>https://forem.com/googleai/introducing-the-new-full-stack-vibe-coding-experience-in-google-ai-studio-471g</guid>
      <description>&lt;p&gt;&lt;em&gt;Start building real apps for the modern web with the Antigravity coding agent along with Firebase backend integrations, now in Google AI Studio.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Today, we are launching a completely upgraded vibe coding experience in &lt;a href="http://aistudio.google.com/apps" rel="noopener noreferrer"&gt;Google AI Studio&lt;/a&gt;, designed to turn your prompts into production-ready applications. From multiplayer experiences and installing external libraries to ways to save your progress and log in securely, you can now build truly functional, AI-native applications without ever leaving the vibe coding experience.&lt;/p&gt;

&lt;p&gt;We’re accelerating the path from prompt to production using the new &lt;a href="https://antigravity.google/" rel="noopener noreferrer"&gt;Google Antigravity&lt;/a&gt; coding agent. To further support modern scalable applications, we are also enabling robust backends that bring secure storage and user authentication to your apps via a &lt;a href="https://firebase.blog/posts/2026/03/announcing-ai-studio-integration" rel="noopener noreferrer"&gt;built-in Firebase integration&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experience the difference, from prototypes to production apps
&lt;/h2&gt;

&lt;p&gt;Here’s how the new updates help you build real apps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Build multiplayer experiences:&lt;/strong&gt; Create real-time multiplayer games, collaborative workspaces and shared tools that can connect users instantly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add databases and authentication:&lt;/strong&gt; The agent now proactively detects when your app needs a database or login. After you approve a Firebase integration, it provisions Cloud Firestore for databases and Firebase Authentication for a secure sign-in with Google.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create for the modern web:&lt;/strong&gt; The agent now uses the vast ecosystem of modern web tools. If you want smooth animations or professional icons, the agent automatically figures out the right solution — like installing Framer Motion or Shadcn — to bring your vision to life.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connect to real-world services:&lt;/strong&gt; Turn prototypes into production-grade software by connecting to the services you already use. You can now bring your own API credentials to securely integrate services like databases, payment processors or Google services like Maps. The agent detects when a key is required and safely stores it in the new Secrets Manager located in the Settings tab.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pick up where you left off:&lt;/strong&gt; Access your data across devices and sessions. Close the browser tab and the app remembers where you left off so you can continue whenever you’re ready to come back.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access a more powerful agent:&lt;/strong&gt; Build complex apps using simpler prompts. The agent now maintains a deeper understanding of your entire project structure and chat history, enabling faster iteration and more precise multi-step code edits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build with Next.js:&lt;/strong&gt; In addition to React and Angular, we now support &lt;a href="https://nextjs.org/" rel="noopener noreferrer"&gt;Next.js&lt;/a&gt; apps out of the box. Select your framework in the updated “Settings” panel.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  See the new agent in action with Build mode
&lt;/h2&gt;

&lt;p&gt;Here are a few examples of what you can build today:&lt;/p&gt;



&lt;center&gt;&lt;small&gt;Real-time multiplayer games: You can now create a massive multiplayer first-person laser tag game in a retro style from just a prompt. Tag real-life opponents or beat the AI bots to earn points on the leaderboard before time runs out and win.
&lt;br&gt;
Play or Remix &lt;a href="https://aistudio.google.com/apps/bundled/neon_arena_laser_tag" rel="noopener noreferrer"&gt;Neon Arena&lt;/a&gt; in Google AI Studio.
&lt;/small&gt;&lt;/center&gt;



&lt;center&gt;&lt;small&gt;Real-time collaboration: Imagine prompting for a "multiplayer experience using 3D particles." The agent automatically sets up the real-time syncing logic, imports Three.js and creates a shared space where each person's cursor spawns 3D particles that flow with curl noise.
&lt;br&gt;
Play or Remix &lt;a href="https://aistudio.google.com/apps/bundled/cosmic_flow?" rel="noopener noreferrer"&gt;Cosmic Flow&lt;/a&gt; in Google AI Studio.
&lt;/small&gt;&lt;/center&gt;



&lt;center&gt;&lt;small&gt;Physics and game design: Create complex 3D interactions that adhere to real-world mechanics with ease. The new agent integrates claw-machine physics, timers and a leaderboard, importing Three.js for animated, interactive 3D elements, just by asking.
&lt;br&gt;
Play or Remix &lt;a href="https://aistudio.google.com/apps/bundled/neon_claw?showPreview=true&amp;amp;showAssistant=true" rel="noopener noreferrer"&gt;Neon Claw&lt;/a&gt; in Google AI Studio.
&lt;/small&gt;&lt;/center&gt;



&lt;center&gt;&lt;small&gt;Connect to the real world: Build apps that talk to the real world. Securely store your API credentials to fetch live data from Google Maps or send updates to a database, turning a concept into a utility.
&lt;br&gt;
Play or Remix &lt;a href="https://aistudio.google.com/apps/bundled/geoseeker" rel="noopener noreferrer"&gt;GeoSeeker&lt;/a&gt; in Google AI Studio.
&lt;/small&gt;&lt;/center&gt;



&lt;center&gt;&lt;small&gt;Generate and catalog your recipes: Organize and import recipes or generate new ones with Gemini. Collaborate with your friends and family to keep your culinary traditions alive.
&lt;br&gt;
Try or Remix &lt;a href="https://aistudio.google.com/apps/bundled/heirloom_recipes" rel="noopener noreferrer"&gt;Heirloom Recipes&lt;/a&gt; in Google AI Studio.&lt;/small&gt;&lt;/center&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Start building today
&lt;/h2&gt;

&lt;p&gt;This new experience in Google AI Studio has already been used internally to build hundreds of thousands of apps over the last few months. We’re working on more integrations, like Workspace to connect Drive and Sheets to your apps, plus the ability to take your app from Google AI Studio to Google Antigravity with a single button click.&lt;/p&gt;

&lt;p&gt;Whether you are building your first app or have agents building while you do other things, we hope these updates help accelerate your path from idea to deployed, production-ready app. Head over to &lt;a href="https://aistudio.google.com/apps" rel="noopener noreferrer"&gt;Google AI Studio&lt;/a&gt; to try the new experience today.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>antigravity</category>
      <category>agents</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>Vibe-coding in Google AI Studio: my tips to prompt better and create amazing apps</title>
      <dc:creator>Guillaume Vernade</dc:creator>
      <pubDate>Thu, 19 Mar 2026 17:48:19 +0000</pubDate>
      <link>https://forem.com/googleai/vibe-coding-in-google-ai-studio-my-tips-to-prompt-better-and-create-amazing-apps-3kcp</link>
      <guid>https://forem.com/googleai/vibe-coding-in-google-ai-studio-my-tips-to-prompt-better-and-create-amazing-apps-3kcp</guid>
      <description>&lt;p&gt;You might already know &lt;a href="https://ai.studio" rel="noopener noreferrer"&gt;&lt;strong&gt;Google AI Studio&lt;/strong&gt;&lt;/a&gt; as a sandbox to play with the Deepmind models and tinker with all their parameters. But did you know that you can also vibe-code webapps for free and publish them in a few clicks?&lt;/p&gt;

&lt;p&gt;Its &lt;a href="https://ai.studio/build" rel="noopener noreferrer"&gt;&lt;strong&gt;Build&lt;/strong&gt;&lt;/a&gt; section is a game-changer for "vibe coding" and generating functional applications without writing a single line of code. It allows you to rapidly build and iterate on ideas using the power of Gemini models, moving from simple concepts to fully deployed prototypes in minutes.&lt;/p&gt;

&lt;p&gt;Following my own experiments with the platform over the last year, this guide covers the core capabilities of AI Studio, how it compares to other tools, and how to prompt it effectively to build your apps.&lt;/p&gt;




&lt;p&gt;Here's what you'll find in this article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;0. &lt;em&gt;Why use AI Studio?&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;1. &lt;em&gt;The App Gallery &amp;amp; Remixing&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;2. &lt;em&gt;Get started with Vibe Coding&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;3. &lt;em&gt;Create apps with databases&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;4. &lt;em&gt;My tips to better Vibe Code&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;5. &lt;em&gt;Publish your app&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;6. &lt;em&gt;AI Studio vs. Antigravity: When to use which?&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;7. &lt;em&gt;My favorite creations&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  0. Why use AI Studio? (Native Gemini and Privacy)
&lt;/h1&gt;

&lt;p&gt;Before diving into the "how," let's address the most common question: &lt;em&gt;Why use AI Studio over other popular AI app builders on the market?&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;The first reason is AI Studio's &lt;strong&gt;native Gemini&lt;/strong&gt; usage. It can create apps that use the Gemini models and, as long as you stay in AI Studio, there's nothing to set up: you, and the folks you share your app with, can use the free tier and enjoy Gemini-powered apps for free.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: Some advanced models require a paid API key, but there's always an alternative with a free tier.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But the main differentiator is &lt;strong&gt;Privacy&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;On many competing platforms, unless you're paying, all the applications you generate are public by default: anyone can see what you are working on. On AI Studio, your apps remain &lt;strong&gt;strictly private&lt;/strong&gt;. This is a huge advantage when you are prototyping personal ideas, working on sensitive client projects, or just want to experiment freely without worrying about public visibility. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrn59agd19568fe9mwip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrn59agd19568fe9mwip.png" alt="Sharing app" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sharing uses the same system as any Google Drive file, which makes distributing your apps easy and lets people try them without having to create a new account.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Pro tip:&lt;/em&gt; As with any Drive file, you can set your apps to be accessible to whoever has the link. That's what I do when I post on LinkedIn (cf. the last section of this post for examples).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In any case, even if you don't use AI Studio, my tips should still be relevant, as most vibe-coding agents work similarly.&lt;/p&gt;




&lt;h1&gt;
  
  
  1. The App Gallery &amp;amp; Remixing
&lt;/h1&gt;

&lt;p&gt;If you are new to vibe coding, the best way to understand how the code is generated is to explore the &lt;a href="https://aistudio.google.com/apps?source=showcase&amp;amp;showcaseTag=featured" rel="noopener noreferrer"&gt;App Gallery&lt;/a&gt; directly within AI Studio. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbb2o4lbv6rq4fs097pv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbb2o4lbv6rq4fs097pv.png" alt="App Gallery" width="800" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Explore:&lt;/strong&gt; Check out the impressive examples already built by the AI Studio team. Two of my personal favorites are &lt;a href="https://aistudio.google.com/apps/bundled/spatial-understanding" rel="noopener noreferrer"&gt;Spatial understanding&lt;/a&gt; and the &lt;a href="https://aistudio.google.com/apps/bundled/personalized_comics" rel="noopener noreferrer"&gt;Comic Book Creator&lt;/a&gt; (which needs a paid API key to use Nano-Banana Pro, but you can try remixing it to use only Nano-Banana's free tier). &lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Check the code:&lt;/strong&gt; For each app, you can click "code" in the top left corner to access all of the app's code and check how things are done (or more likely copy-paste it to an AI coding agent).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7ybxly8je982mgfw9gv.png" alt="Code" width="742" height="468"&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Remix:&lt;/strong&gt; When you like an app and just want to create your own flavor of it, click "remix" to create a copy of it that you'll own. It's an excellent way to start from an existing, working codebase and make it your own.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fprmyr91zr1i76do2mnfe.png" alt="Remix" width="515" height="78"&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  2. Get started with Vibe Coding
&lt;/h1&gt;

&lt;p&gt;Ready to build your own? The principle is incredibly straightforward: open the &lt;a href="https://aistudio.google.com/apps" rel="noopener noreferrer"&gt;build&lt;/a&gt; page, write what you want the app to do in a prompt, hit enter, and watch the coding agent (similar to the &lt;a href="https://antigravity.google/" rel="noopener noreferrer"&gt;Antigravity&lt;/a&gt; one) generate the UI and logic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fae8r3q0qes69e7iab79i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fae8r3q0qes69e7iab79i.png" alt=" " width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note:&lt;/em&gt; the generation can take quite some time (about 5 mins on average), so go get a coffee or read a blog post and come back after the coding agent has finished its job.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you get a working app, you can start adding new features by continuing to prompt in the code assistant chatbox on the left.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;New:&lt;/em&gt; One of the cool new additions in the past weeks is that the code assistant now works server-side, which means you can close the tab or change devices and it will continue to work for you. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwgqzs5qvx083phjt3w2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwgqzs5qvx083phjt3w2.png" alt="Vibe code with your voice" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Depending on the case, you can also use these two buttons to give the model visual cues by drawing directly on the app, which is very convenient for UI feedback.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5x7f1hjwomvfxv4f8sn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5x7f1hjwomvfxv4f8sn.png" alt="Visual feedback" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another option is to dictate the changes you need. It's very convenient when you want to add a new feature on the fly from your phone, but I would not recommend it for very precise updates. &lt;/p&gt;




&lt;h1&gt;
  
  
  3. Create apps with databases
&lt;/h1&gt;

&lt;p&gt;As of &lt;a href="https://blog.google/innovation-and-ai/technology/developers-tools/full-stack-vibe-coding-google-ai-studio/" rel="noopener noreferrer"&gt;this week&lt;/a&gt;, you can also ask the coding agent to create apps that save things between sessions or users. You just need to ask it specifically to use a database:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq8gre9qxxhmwyizkuzm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq8gre9qxxhmwyizkuzm.png" alt="Create an app with a database" width="512" height="206"&gt;&lt;/a&gt;&lt;br&gt;
(&lt;em&gt;yes, I've been wanting to create my own grocery list app for a very long time&lt;/em&gt;)&lt;/p&gt;

&lt;p&gt;Just click "Enable" when asked and the magic will happen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpc0cur6h9gt2t4xqw3s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpc0cur6h9gt2t4xqw3s.png" alt=" " width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Behind the scenes, it sets up the Firebase integration and a Firestore database to store your data. It also adds Google-account authentication to your app so it knows who's trying to access which data.&lt;/p&gt;

&lt;p&gt;You don't need to know how your database is structured; the coding agent will manage everything for you depending on what your app needs. You want each user to have their own grocery list? Boom, it's done! Now you want them to be able to share lists? That's also done! Add labels to the items? Easy peasy.&lt;/p&gt;

&lt;p&gt;Your imagination is the limit! &lt;/p&gt;


&lt;h1&gt;
  
  
  4. My tips to better Vibe Code
&lt;/h1&gt;

&lt;p&gt;Nowadays, "vibe coding" has become a reflex for me. It is the absolute best way to prototype a user experience before potentially moving to a complex IDE. But if you're not careful, you can easily lose a lot of time trying to make the agent work efficiently.&lt;/p&gt;

&lt;p&gt;So here are my top tricks to get the most out of AI Studio (in no particular order). &lt;/p&gt;
&lt;h3&gt;
  
  
  Design your app before building it
&lt;/h3&gt;

&lt;p&gt;If you have opinions about what your app should look like (personally I usually don't, yolo), a good idea is to iterate on designs using something like &lt;a href="https://stitch.withgoogle.com/" rel="noopener noreferrer"&gt;Stitch&lt;/a&gt; (which uses Nano-Banana) and give the images to the coding agent so it knows what's expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fugatcj3g3fkafegw6ws3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fugatcj3g3fkafegw6ws3.png" alt="Stitch" width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Save your progress so you can revert (and learn when to do it)
&lt;/h3&gt;

&lt;p&gt;AI makes mistakes. It might misunderstand your prompt or write code that breaks a previously working app. When this happens, you can ask it to "fix the error" and most of the time it works, but sometimes it doesn't.&lt;/p&gt;

&lt;p&gt;One very important skill to learn when vibe coding is when to try to fix things using AI, when to start anew, and when to go fix things yourself. &lt;/p&gt;

&lt;p&gt;My personal advice: if the agent can't figure out how to fix something after two rounds, stop insisting and go back to a previous version; otherwise you might end up spending an hour arguing with the AI for nothing. And when you think you're spending as much time explaining what you want as it would take to do it yourself (a good example is "change this time for another"), just do it yourself.&lt;/p&gt;

&lt;p&gt;Thankfully AI Studio makes it easy for you to go back to a previous version:&lt;/p&gt;
&lt;h4&gt;
  
  
  Checkpoints
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Checkpoints&lt;/strong&gt; are AI Studio's built-in version history and the most convenient way to instantly revert to the last working state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyajpt8701dzzkcw1iro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyajpt8701dzzkcw1iro.png" alt="Checkpoint" width="766" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Warning:&lt;/strong&gt; You can revert the code, but not the database changes, so don't load a checkpoint from before a database update. (What I would do is load the checkpoint, copy the code, load the more recent/broken version, and ask the assistant to fix it based on how it was before.)&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;
  
  
  GitHub
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt; is what I recommend for saving milestone versions: use it to snapshot the state of your app whenever you reach a milestone, like finishing a new feature. You can enable it in a few clicks:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6sh13kq469yi4bxebnl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6sh13kq469yi4bxebnl.png" alt="Open settings" width="592" height="174"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5kwfewicnl9i8kl8pq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5kwfewicnl9i8kl8pq1.png" alt="Sign in to GitHub" width="800" height="1082"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9luwpefa3mjf5lp6sep7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9luwpefa3mjf5lp6sep7.png" alt="Create Repo" width="800" height="675"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From then on, it's just a matter of describing your new feature and committing it to GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyw3apclih27rrohygzyv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyw3apclih27rrohygzyv.png" alt="Commit to GitHub" width="800" height="647"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One current limitation is that the sync is one-way: it's a good way to save your progress somewhere you can easily reuse it, but you can't update your code on GitHub and sync it back to AI Studio (yet).&lt;/p&gt;
&lt;h3&gt;
  
  
  Use Multi-Modal Prompting
&lt;/h3&gt;

&lt;p&gt;Stop relying purely on text. As I said before, AI Studio gives you other options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Voice:&lt;/strong&gt; Incredibly practical for iterating quickly, especially if you are tweaking an app from your phone.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;The "Annotate App" Tool:&lt;/strong&gt; This is my absolute favorite feature for UI work. Take a screenshot of your app, draw directly on it ("Move this button here", "Remove this menu"), and send it. &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Pro Tip:&lt;/em&gt; Always combine the annotated image with a clear text explanation to give the model maximum context.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Split Your Files! (Avoid the Monolith)
&lt;/h3&gt;

&lt;p&gt;As your app grows, the model might start to "hallucinate", forget earlier features, or tangle the logic. This is almost always a structural issue.&lt;/p&gt;

&lt;p&gt;By default, the AI tends to cram everything into one massive &lt;code&gt;app.tsx&lt;/code&gt; file. Veto this immediately.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Golden Rule:&lt;/strong&gt; Tell the model from the very beginning to separate features into distinct files and components. &lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Why?&lt;/strong&gt; It drastically reduces errors and makes generation faster. It also allows you to instantly spot if the AI is messing up (e.g., If you ask for a UI color change and it starts rewriting &lt;code&gt;auth-service.js&lt;/code&gt;, you know it lost the plot and you can stop it immediately). It will save you a lot of time when reviewing and at least gives you an &lt;em&gt;at a glance&lt;/em&gt; confidence that the right part of the codebase was updated.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Force the AI to Write Documentation
&lt;/h3&gt;

&lt;p&gt;To help the AI remember what the app is meant to do, have it maintain as much documentation as possible (from micro to macro):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Docstrings:&lt;/strong&gt; Always force the AI to document all functions: what they do, and what their inputs and outputs are. &lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;File documentation:&lt;/strong&gt; Since you're creating a file per feature, tell the AI to maintain some documentation at the top of them to detail what the feature is about, what use cases should be covered, etc.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Design.md:&lt;/strong&gt; Finally, ask it to maintain a design doc of the whole app at the root of it.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Why?&lt;/strong&gt; By having the AI repeat everything multiple times, you help both it (and potentially yourself) find where everything is done and what the expected behavior is. Much like &lt;a href="https://en.wikipedia.org/wiki/Error_correction_code" rel="noopener noreferrer"&gt;error correction codes&lt;/a&gt;, having something written multiple times reduces the chances that it will be deleted by mistake.&lt;/li&gt;
&lt;/ul&gt;
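As a hypothetical sketch of what these conventions look like together (the grocery-list feature, `CONFIG` object and `addItem` function are invented for illustration, not from a real AI Studio app), a feature file might start like this:

```typescript
/**
 * Feature: grocery list management (illustrative example).
 *
 * Use cases covered:
 *  - each user owns a private list of items;
 *  - items can be added up to a configurable limit.
 */

// Configurable items live in one centralized place (a config.ts in a real app).
const CONFIG = { maxItemsPerList: 100 };

/**
 * addItem appends an item to a list.
 * Inputs: the current list and the item label.
 * Output: a new list (the input is not mutated, so dry runs are safe).
 */
function addItem(list: string[], item: string): string[] {
  if (list.length >= CONFIG.maxItemsPerList) {
    throw new Error("list is full");
  }
  // Log the call with its parameters, per the logging guideline below.
  console.info(`addItem(list=${list.length} items, item=${item})`);
  return [...list, item];
}

console.log(addItem(["milk"], "eggs")); // [ 'milk', 'eggs' ]
```

The point is not the code itself but the redundancy: the file header, the docstring, and the centralized config each restate intent, so the agent (and you) can verify behavior from several places.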
&lt;h3&gt;
  
  
  Supercharge with System Instructions
&lt;/h3&gt;

&lt;p&gt;After some time you'll realize that you're always giving the same instructions to the coding agent and will get tired of repeating yourself. That's why AI Studio allows you to customize the underlying "System Instructions." Don't leave this blank! You can define your preferred tech stack, frameworks, coding style, and of course everything I mentioned before!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3g4fsuqp48ize2kxs9zo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3g4fsuqp48ize2kxs9zo.png" alt="Open settings" width="783" height="325"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahf314jvprwwq926cj3x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahf314jvprwwq926cj3x.png" alt="Set System Instructions" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Think of it as the onboarding package for your new junior developer: they need to know how you expect them to work, how to code, document, communicate, and so on. You might not get it right the first time, but it's important to reflect on it and keep improving your package so that the next newcomers are better onboarded and get productive faster.&lt;/p&gt;

&lt;p&gt;Here are the ones I always use, on top of more specialized instructions (like trusting me on model names and not changing them):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Coding/documenting guidelines&lt;/span&gt;
&lt;span class="p"&gt;
*&lt;/span&gt; Create a file per feature or related features, split as much as possible in different files;
&lt;span class="p"&gt;*&lt;/span&gt; add docstrings to all functions to explain what they do;
&lt;span class="p"&gt;*&lt;/span&gt; start each file with a long comment explaining in detail what the feature is about and the different use cases;
&lt;span class="p"&gt;*&lt;/span&gt; maintain a &lt;span class="sb"&gt;`Design.md`&lt;/span&gt; document at the root of the app that documents all the features of the app;
&lt;span class="p"&gt;*&lt;/span&gt; log as info all function calls (with their parameters) and log all genai calls with all their parameters (model used, prompt, config) and their outputs, just strip inline data;
&lt;span class="p"&gt;*&lt;/span&gt; group all configurable items (like model names) in a centralized file;
&lt;span class="p"&gt;*&lt;/span&gt; always create a way to test the scripts without altering the data;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see that I also added some instructions about logging (as it always helps with debugging) and dry runs, as both are good practices, vibe coding or not.&lt;/p&gt;
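&lt;p&gt;To make those two instructions concrete, here is a minimal Python sketch of what the agent might produce from them: a decorator that logs every call with its parameters (stripping inline data), plus a dry-run flag pulled from a centralized config. All names here (&lt;code&gt;generate_summary&lt;/code&gt;, &lt;code&gt;DRY_RUN&lt;/code&gt;, the model string) are hypothetical examples, not part of any real app, and the genai call is a placeholder.&lt;/p&gt;

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("app")

# Hypothetical centralized config, as the guidelines suggest.
MODEL_NAME = "gemini-2.5-flash"  # example value; keep model names in one place
DRY_RUN = False  # when True, skip anything that would alter data

def log_call(func):
    """Log each call with its parameters, as the guidelines request."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Strip bulky inline data (e.g. base64 images) before logging.
        safe_kwargs = {k: ("<inline data stripped>" if k == "inline_data" else v)
                       for k, v in kwargs.items()}
        log.info("call %s args=%s kwargs=%s", func.__name__, args, safe_kwargs)
        result = func(*args, **kwargs)
        log.info("result %s -> %s", func.__name__, result)
        return result
    return wrapper

@log_call
def generate_summary(prompt, model=MODEL_NAME, inline_data=None):
    """Placeholder for a genai call; a real app would call the Gemini API here."""
    if DRY_RUN:
        return "[dry run] no API call made"
    return f"summary of {prompt!r} using {model}"
```

&lt;p&gt;The point is not this exact code but the pattern: every call leaves a trace in the logs, and a single flag lets you test the scripts without touching real data.&lt;/p&gt;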

&lt;p&gt;Try them and tell me if that improved your vibe-coding experience!&lt;/p&gt;




&lt;h1&gt;
  
  
  5. Publish your app
&lt;/h1&gt;

&lt;p&gt;Once you are happy with your app and want to share it with the world (or maybe a subset of it), AI Studio offers you two ways of publishing it:&lt;/p&gt;

&lt;h3&gt;
  
  
  Share it in AI Studio
&lt;/h3&gt;

&lt;p&gt;The easiest way is to use AI Studio's built-in sharing capability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrn59agd19568fe9mwip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrn59agd19568fe9mwip.png" alt="Sharing app" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can either share the app with specific people or make it available to anyone who has the link (that's what I use on LinkedIn, for example).&lt;/p&gt;

&lt;p&gt;One of the key benefits is that recipients also get access to the code and can Remix it if they want. But you can also send your less technical friends a link that opens the app full screen and hides the code agent.&lt;/p&gt;

&lt;p&gt;Another nice benefit is that if your app uses Gemini, your friends will use their own free tier when using the app (or their own API key if using a paid model), which means it won't cost you anything.&lt;/p&gt;

&lt;h3&gt;
  
  
  Publish the app on Cloud Run
&lt;/h3&gt;

&lt;p&gt;This is what you should do if you want to publish the app for real, to actual users. In a few clicks, it creates a Cloud Run container, publishes the app online, and gives you a URL anyone can use to access it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sjpb3pg72pzdrp8qzoa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sjpb3pg72pzdrp8qzoa.png" alt="Publish" width="603" height="193"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8q8lblzfszx5vt9vy3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8q8lblzfszx5vt9vy3k.png" alt="Publish app" width="800" height="726"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6k5lueeaffjmeuhapr3e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6k5lueeaffjmeuhapr3e.png" alt="App published" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll then be able to buy a domain to give it a proper URL, deploy to different regions, scale automatically, and so on. But you'll also be the one paying for usage, as it's your own app now.&lt;/p&gt;




&lt;h1&gt;
  
  
  6. AI Studio vs. Antigravity: When to use which?
&lt;/h1&gt;

&lt;p&gt;Since AI Studio uses an underlying coding agent similar to Google's &lt;a href="https://antigravity.google/" rel="noopener noreferrer"&gt;Antigravity&lt;/a&gt;, you might be wondering when to use which tool. Here is my rule of thumb:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use AI Studio when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  You are prototyping a front-end UI or a lightweight full-stack application.&lt;/li&gt;
&lt;li&gt;  You want to genuinely "vibe code" using multimodal inputs (like drawing directly on the app's UI or using your voice).&lt;/li&gt;
&lt;li&gt;  You want to instantly share a working prototype with stakeholders or friends via a simple link, without managing hosting.&lt;/li&gt;
&lt;li&gt;  You want zero-setup, native access to the Gemini models to build AI features quickly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use Antigravity when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  You are building a production-grade, complex application with deep backend infrastructure requirements.&lt;/li&gt;
&lt;li&gt;  You need fine-grained control over your dependencies, complex build steps, and deployment pipelines.&lt;/li&gt;
&lt;li&gt;  You are integrating the AI coding agent into an &lt;em&gt;existing&lt;/em&gt;, large-scale codebase rather than starting a project from a blank slate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of AI Studio as your creative sketchbook for rapid iteration, and Antigravity as your full-fledged developer workshop.&lt;/p&gt;




&lt;h1&gt;
  
  
  7. My favorite creations
&lt;/h1&gt;

&lt;p&gt;Now that you have mastered the basics of vibe coding, the best way to learn is by doing. I didn't follow all these rules perfectly when I started, but making mistakes is how you refine your workflow!&lt;/p&gt;

&lt;p&gt;To show you what's possible, here are a few applications I vibe-coded entirely from scratch using most of those methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;[AI-powered resume]&lt;/strong&gt;: &lt;em&gt;An AI-powered resume. Don't just read it: ask Gemini questions about me (it knows some anecdotes that aren't written down), tailor it to the role you want to offer me, or even ask for an audio overview.&lt;/em&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fokt00odhpopc0l7v3w2j.png" alt="AI-powered resume" width="800" height="426"&gt;&lt;a href="https://aistudio.google.com/apps/drive/1VRVKZ8qFAG6Rgc1np3u8g5eBgbmI9094" rel="noopener noreferrer"&gt;Check it out here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;[Talk coach]&lt;/strong&gt;: &lt;em&gt;A coach for your talks. Give it a recording or a YouTube link and it will tell you how to get even better.&lt;/em&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2frit2q7sdl3tbcdipck.png" alt="Talk coach" width="800" height="931"&gt;&lt;a href="https://aistudio.google.com/apps/drive/18XuOzEU1zuseoaPtXrdVUeTY3nNbM80u?appParams=value%253DcOp5rklR3jI" rel="noopener noreferrer"&gt;Check it out here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;[FreshList]&lt;/strong&gt;: &lt;em&gt;A copy of the app I'm working on to simplify grocery shopping.&lt;/em&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpl1xipyut59v2k1aqeo.png" alt="FreshList" width="800" height="957"&gt;&lt;a href="https://aistudio.google.com/apps/cee356a2-e448-4807-9d60-4dc6b734b969" rel="noopener noreferrer"&gt;Check it out here&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;See other examples in the &lt;a href="https://github.com/Giom-V/vibe-coding-challenge" rel="noopener noreferrer"&gt;repo&lt;/a&gt; I created when I thought I would have time to vibe code an app per week.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>vibecoding</category>
      <category>gemini</category>
      <category>promptengineering</category>
    </item>
    <item>
      <title>Now anyone can host a global AI challenge on Kaggle</title>
      <dc:creator>Megan Risdal</dc:creator>
      <pubDate>Thu, 19 Mar 2026 15:21:02 +0000</pubDate>
      <link>https://forem.com/googleai/now-anyone-can-host-a-global-ai-challenge-on-kaggle-2hp6</link>
      <guid>https://forem.com/googleai/now-anyone-can-host-a-global-ai-challenge-on-kaggle-2hp6</guid>
<description>&lt;p&gt;Kaggle's community helps the world discover what actually works in AI, and today we're launching Community Hackathons to build on that mission. Community Hackathons are built to help communities, individuals, schools, and businesses create professional-grade AI competitions using Kaggle's infrastructure, at no cost. They are a great way for builders to solve complex problems with AI and hone their professional portfolios.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zrlvldvfsl1n85pybzi.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zrlvldvfsl1n85pybzi.gif" alt="A stylized GIF showing creating a community competition on Kaggle" width="760" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the AI landscape evolves, so too do the ways in which builders showcase their skills: moving beyond traditional predictive models toward building full applications, generating novel data insights, and creatively utilizing large language models (LLMs). The gap between builders and the frontier has never been smaller, and Community Hackathons are an amazing way to bring people together to discover novel results.&lt;/p&gt;

&lt;p&gt;Leading companies and research organizations have already been using Hackathons to tackle unique AI problems by challenging the world to solve them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The NFL has used Kaggle hackathons to create &lt;a href="https://www.kaggle.com/competitions/nfl-big-data-bowl-2026-analytics/hackathon-winners" rel="noopener noreferrer"&gt;new statistics&lt;/a&gt;, hire talent and even make rule changes to improve player safety.&lt;/li&gt;
&lt;li&gt;OpenAI used hackathons to &lt;a href="https://www.kaggle.com/competitions/openai-gpt-oss-20b-red-teaming" rel="noopener noreferrer"&gt;red-team their first open-access model&lt;/a&gt;, and to help identify possibly hidden archeological sites.&lt;/li&gt;
&lt;li&gt;The Google AI Studio team ran two Hackathons with the release of Gemini models. One challenged users to &lt;a href="https://www.kaggle.com/competitions/banana/overview" rel="noopener noreferrer"&gt;get creative with Nano Banana&lt;/a&gt;, and the other &lt;a href="https://www.kaggle.com/competitions/gemini-3/overview" rel="noopener noreferrer"&gt;tested developers' vibe coding sprint abilities&lt;/a&gt; with the release of Gemini 3 Pro. These hackathons shared nearly $1M in combined prizes.&lt;/li&gt;
&lt;li&gt;The Gemma 3n release was accompanied by a challenge to use "AI for global impact" and you’ll want to have tissues on hand when you &lt;a href="https://blog.google/innovation-and-ai/technology/developers-tools/developers-changing-lives-with-gemma-3n/" rel="noopener noreferrer"&gt;review the results&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, Community Hackathons allow you to tap into the AI community to solve problems you care about.&lt;/p&gt;

&lt;h2&gt;
  
  
  Community Hackathon Features
&lt;/h2&gt;

&lt;p&gt;Community Hackathons are built to be flexible and self-service, providing a seamless experience for both hosts and participants. By making the platform available to hosts worldwide, Kaggle enables diverse, custom-built challenges that drive skill development and portfolio enhancement. Hosts gain access to all the necessary tools for running a successful event, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrated tools for data hosting, interactive notebooks and discussion forums.&lt;/li&gt;
&lt;li&gt;Support for writeup submissions and a project gallery to showcase results.&lt;/li&gt;
&lt;li&gt;Flexibility for multiple competition tracks and judge management.&lt;/li&gt;
&lt;li&gt;Prize pools permitted up to $10,000 USD.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to get started today
&lt;/h2&gt;

&lt;p&gt;Whether your goal is to challenge the global AI community to build a world-changing application or to host a private internal skill-building event for your organization, Community Hackathons are ready.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Want to host a Hackathon?&lt;/strong&gt; &lt;a href="https://www.kaggle.com/competitions?new=true&amp;amp;type=hackathon" rel="noopener noreferrer"&gt;Create your own&lt;/a&gt; in minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Want to compete?&lt;/strong&gt; Keep an eye on the &lt;a href="https://www.kaggle.com/competitions" rel="noopener noreferrer"&gt;Kaggle Competitions page&lt;/a&gt; for new Community Hackathons appearing soon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Have questions?&lt;/strong&gt; Connect with other competition hosts in the dedicated &lt;a href="https://www.kaggle.com/discussions/competition-hosting" rel="noopener noreferrer"&gt;Competition Hosting forum&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can’t wait to see what you build and what new skills you hone! &lt;/p&gt;

</description>
      <category>ai</category>
      <category>kaggle</category>
      <category>datascience</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
