<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Davide Mibelli</title>
    <description>The latest articles on Forem by Davide Mibelli (@kharonte).</description>
    <link>https://forem.com/kharonte</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2481326%2F673f4cd1-cfb7-4029-b743-ae0fa7f5dc76.png</url>
      <title>Forem: Davide Mibelli</title>
      <link>https://forem.com/kharonte</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kharonte"/>
    <language>en</language>
    <item>
      <title>Spring Boot 4.0 Migration: What Nobody Tells You About the Breaking Changes</title>
      <dc:creator>Davide Mibelli</dc:creator>
      <pubDate>Wed, 29 Apr 2026 13:14:24 +0000</pubDate>
      <link>https://forem.com/kharonte/spring-boot-40-migration-what-nobody-tells-you-about-the-breaking-changes-1ln2</link>
      <guid>https://forem.com/kharonte/spring-boot-40-migration-what-nobody-tells-you-about-the-breaking-changes-1ln2</guid>
      <description>&lt;p&gt;I upgraded two production applications to Spring Boot 4.0 the week it went GA. I read the migration guide, skimmed the release notes, and thought the hardest part would be the Jackson 3 change everyone was talking about.&lt;/p&gt;

&lt;p&gt;The Jackson 3 change was not the hardest part.&lt;/p&gt;

&lt;p&gt;Two applications, a few days of debugging, and one very confusing test suite later, I have a clear picture of what actually breaks — and what the official guide glosses over.&lt;/p&gt;

&lt;p&gt;This is not a walkthrough of the migration guide. This is the stuff that costs you real time.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is a preview. Read the full article on Medium: &lt;a href="https://medium.com/@davide.mib/spring-boot-4-0-migration-what-nobody-tells-you-about-the-breaking-changes-94482a2f886f" rel="noopener noreferrer"&gt;Spring Boot 4.0 Migration: What Nobody Tells You About the Breaking Changes&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>java</category>
      <category>springboot</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Stop Fighting CSS: The Google Stitch + Antigravity Stack for Solo Developers</title>
      <dc:creator>Davide Mibelli</dc:creator>
      <pubDate>Fri, 24 Apr 2026 09:07:34 +0000</pubDate>
      <link>https://forem.com/kharonte/stop-fighting-css-the-google-stitch-antigravity-stack-for-solo-developers-2l2k</link>
      <guid>https://forem.com/kharonte/stop-fighting-css-the-google-stitch-antigravity-stack-for-solo-developers-2l2k</guid>
      <description>&lt;p&gt;Most developers I know have the same problem: the logic is solid, but the UI looks like it was built in a hurry — because it was.&lt;br&gt;
I spent two days on a login flow for a side project before I gave up and tried a different approach: Google Stitch for the UI, Antigravity as the IDE, and the MCP bridge between them. Here's what the workflow actually looks like.&lt;/p&gt;
&lt;h2&gt;
  
  
  The core idea
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpsp2ahy88das4x58tf9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpsp2ahy88das4x58tf9.png" alt=" " width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Google Stitch gives you pre-configured UI patterns — "Stitches" — that are already accessible and mathematically sound. You pick a functional category (Hero, Card, List), set one primary color and one font, and it handles the rest. The MCP connection to Antigravity means you never manually export manifests: change a button style in the Stitch web UI, and it reflects in your running simulator instantly.&lt;/p&gt;
&lt;h2&gt;
  
  
  The implementation
&lt;/h2&gt;

&lt;p&gt;Once you've connected Stitch to Antigravity (run &lt;code&gt;MCP: Configure Server&lt;/code&gt; from the command palette), your components become requestable by name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useStitch&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@antigravity/react-hooks&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;DashboardCard&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;userId&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Component&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;loading&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useStitch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;UserCard&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;loading&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Placeholder&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Component&lt;/span&gt;
      &lt;span class="nx"&gt;username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="nx"&gt;onAction&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{(&lt;/span&gt;&lt;span class="nx"&gt;ev&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;User tapped:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ev&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;
    &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your React component has no idea what &lt;code&gt;UserCard&lt;/code&gt; looks like. It just requests it. If you decide a List works better than a Card, you change it in Stitch — the code stays the same.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I didn't expect
&lt;/h2&gt;

&lt;p&gt;The MCP connection is bidirectional. You can send performance feedback from Antigravity back to Stitch, so the design tool knows which components are causing frame drops on real devices. The documentation barely mentions this — it took me a while to find it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The full guide
&lt;/h2&gt;

&lt;p&gt;I wrote a more detailed walkthrough on Medium covering the full setup, the Gravity Fields fetch pattern, and the telemetry configuration: &lt;a href="https://medium.com/p/02aa7c97e131" rel="noopener noreferrer"&gt;https://medium.com/p/02aa7c97e131&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What's your approach for UI as a solo dev — do you start from the design or the logic?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>From DALL-E to gpt-image-2: The Architectural Bet That Finally Fixed AI Text</title>
      <dc:creator>Davide Mibelli</dc:creator>
      <pubDate>Thu, 23 Apr 2026 21:04:44 +0000</pubDate>
      <link>https://forem.com/kharonte/from-dall-e-to-gpt-image-2-the-architectural-bet-that-finally-fixed-ai-text-5347</link>
      <guid>https://forem.com/kharonte/from-dall-e-to-gpt-image-2-the-architectural-bet-that-finally-fixed-ai-text-5347</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally published on Medium.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Two years ago, if you asked an AI to design a menu for a Mexican restaurant, you’d get a beautiful layout of “enchuita” and “churiros.” It looked like food, and the font looked like letters, but it was essentially a visual fever dream. The “burrto” became a classic meme in dev circles — a reminder that while AI could paint like Caravaggio, it had the literacy of a toddler.&lt;/p&gt;

&lt;p&gt;Yesterday, OpenAI launched ChatGPT Images 2.0 (gpt-image-2). I ran the same test. The menu was perfect. Not just the spelling, but the hierarchy, the prices, and the specialized diacritics. It is no longer just “generating pixels.” It is communicating.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fph1qz3v3md8n9mjm3pet.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fph1qz3v3md8n9mjm3pet.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This isn’t a minor version bump or a better training set. It’s a total architectural pivot that signals the end of an era. If you’ve spent the last three years building workflows around diffusion models, it’s time to rethink your pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Why text was broken (and how they fixed it)
&lt;/h2&gt;

&lt;p&gt;To understand why gpt-image-2 works, you have to understand why DALL-E 3 failed at spelling. Diffusion models — the tech behind almost every major generator until now — work by denoising. They start with static and try to “find” an image. Because text pixels make up a tiny fraction of a training image, the model learned the texture of text rather than the logic of characters. To a diffusion model, an “A” is just a specific arrangement of lines, not a semantic unit.&lt;/p&gt;

&lt;p&gt;OpenAI has quietly abandoned diffusion. While they won’t officially confirm the guts of the system, the PNG metadata and the model’s behavior tell the story: this is an autoregressive model.&lt;/p&gt;

&lt;p&gt;It generates images the same way GPT-4 generates code — by predicting the next token. By integrating image generation directly into the language model pipeline, the model isn’t “drawing” a word; it’s “writing” an image. When the architecture treats a pixel and a letter as parts of the same conceptual stream, the “enchuita” problem simply vanishes.&lt;/p&gt;
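To make that contrast concrete, here is a toy sketch of the autoregressive loop. This is an illustration of the shape of the technique, not gpt-image-2's real internals; `mockModel` and its methods are invented for the sketch. The key point is that the prompt and the image live in one token stream, and the model just keeps predicting the next unit, the same way an LLM emits text.

```javascript
// Toy sketch of autoregressive image generation (hypothetical API,
// NOT OpenAI's implementation): text and pixel tokens share one stream.
function generateAutoregressive(model, prompt, imageTokenCount) {
  const tokens = model.encode(prompt);
  let remaining = imageTokenCount;
  while (remaining > 0) {
    // Same loop an LLM uses for code: predict next token, append, repeat.
    tokens.push(model.nextToken(tokens));
    remaining -= 1;
  }
  return model.decodeImage(tokens);
}

// Minimal mock so the sketch runs end to end:
const mockModel = {
  encode: (prompt) => prompt.split(""),
  nextToken: (tokens) => `px${tokens.length}`,
  decodeImage: (tokens) => tokens.join("|"),
};

console.log(generateAutoregressive(mockModel, "ab", 2)); // → a|b|px2|px3
```

Because a letter token and a pixel token go through the same prediction loop, spelling stops being a texture problem and becomes a sequence problem, which these models are already good at.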

&lt;h2&gt;
  
  
  2. The end of the CSS overlay hack
&lt;/h2&gt;

&lt;p&gt;For those of us in agency work or product dev, AI images have always been a “background only” tool. If a client wanted a marketing banner with a specific CTA, we’d generate the art, then use a graphics library or CSS to overlay the text. It was the only way to ensure the brand name wasn’t spelled “Gooogle.”&lt;/p&gt;
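For reference, the overlay hack typically looked something like this, expressed as a minimal CSS-in-JS sketch (the names are mine, not from any particular library): the AI art sits underneath, and real, guaranteed-correct text is absolutely positioned on top.

```javascript
// The classic "background only" workaround, as a style object:
// AI-generated art underneath, real text absolutely positioned on top
// so the CTA and brand name are always spelled correctly.
const bannerStyles = {
  wrapper: { position: "relative", width: "800px", height: "450px" },
  art:     { width: "100%", height: "100%", objectFit: "cover" },
  cta: {
    position: "absolute",   // pinned over the image, never part of it
    bottom: "2rem",
    left: "2rem",
    padding: "0.75rem 1.5rem",
    color: "#fff",
    fontFamily: "sans-serif",
  },
};

console.log(bannerStyles.cta.position); // → absolute
```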

&lt;p&gt;Gpt-image-2 changes that calculus. With near-perfect rendering of Latin, Kanji, and Hindi scripts, the “post-processing” stage of the workflow is suddenly on the chopping block. You can now generate multi-paneled assets or social media posts where the text is baked into the composition with proper lighting and perspective.&lt;/p&gt;

&lt;p&gt;But there’s a catch for your budget. At approximately $0.21 per high-quality 1024x1024 render, this is roughly 60% more expensive than the previous generation. If you’re at a high-volume startup, that’s a significant line item.&lt;/p&gt;
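Back-of-the-envelope, using the $0.21 and 60% figures above (the 500-renders-per-day volume is an arbitrary assumption for illustration):

```javascript
// Cost comparison from the figures above: ~$0.21 per 1024x1024 render,
// roughly 60% more than the previous generation (0.21 / 1.6 ≈ $0.13).
const newPricePerRender = 0.21;
const previousPricePerRender = newPricePerRender / 1.6;

function monthlyCost(rendersPerDay, pricePerRender) {
  return rendersPerDay * 30 * pricePerRender;
}

// At an assumed volume of 500 renders/day:
console.log(monthlyCost(500, newPricePerRender).toFixed(2));      // → 3150.00
console.log(monthlyCost(500, previousPricePerRender).toFixed(2)); // → 1968.75
```

At that volume the upgrade adds roughly $1,200/month, which is exactly the kind of line item that shows up in a budget review.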

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1dtwi9rh2x4cnvmw4on.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1dtwi9rh2x4cnvmw4on.png" alt=" " width="800" height="1101"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Thinking before rendering
&lt;/h2&gt;

&lt;p&gt;The most impressive part of the new model isn’t the resolution — it’s the “thinking mode.” Borrowed from reasoning models like o3, the generator now spends compute time planning the layout before it touches a single pixel.&lt;/p&gt;

&lt;p&gt;I watched it handle a prompt for “a grid of six distinct objects, each with a label in a different language.” Previous models would lose count by object four and turn the labels into Sanskrit-flavored gibberish. Gpt-image-2 paused, “thought” (generating reasoning tokens), and then executed. It can count. It can follow layout constraints.&lt;/p&gt;

&lt;p&gt;This moves AI generation from “creative toy” to “reliable infrastructure.” Reliability is what we actually need in production. I’d much rather pay more for a single correct image than spend credits on ten “cheap” re-rolls.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. The DALL-E eulogy
&lt;/h2&gt;

&lt;p&gt;OpenAI is shutting down DALL-E 2 and 3 on May 12, 2026. Not moving them to a legacy tier — shutting them down.&lt;/p&gt;

&lt;p&gt;This is a massive signal. It’s an admission that the diffusion approach hit a ceiling that no amount of fine-tuning could break. By retiring the DALL-E brand in favor of a unified ChatGPT Image model, OpenAI is betting that the future of multimodality is a single, unified architecture.&lt;/p&gt;

&lt;p&gt;The wall between “thinking” and “seeing” is being torn down. We used to have a brain (LLM) that sent instructions to a hand (Diffusion model). Now, the brain is doing the drawing itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. What I’m still worried about
&lt;/h2&gt;

&lt;p&gt;Despite the polish, there are gaps. The knowledge cutoff is December 2025. If you need a render involving a trend or news event from early 2026, you’re reliant on the web search tool, which adds latency and even more cost.&lt;/p&gt;

&lt;p&gt;Furthermore, the pricing model is now “tokenized” for images. Thinking mode adds a variable cost based on how many reasoning tokens the model uses to plan the composition. This makes it incredibly hard to predict API costs for complex apps. You aren’t just paying for an image; you’re paying for the “brain power” required to frame it.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. The 2026 reality check
&lt;/h2&gt;

&lt;p&gt;If you are building a simple placeholder tool, stick to cheaper, older models. But for any workflow where the image is the content — marketing, UI prototyping, or localized assets — the shift to autoregressive generation is a one-way door.&lt;/p&gt;

&lt;p&gt;We’re entering a phase where the term “image model” feels dated. We just have models. They happen to output pixels sometimes and Python code others. The fact that it can finally spell “Burrito” is just the first sign that the gap between human intent and machine execution has finally closed.&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>ai</category>
      <category>productivity</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
