<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ivan Juren</title>
    <description>The latest articles on Forem by Ivan Juren (@ijuren).</description>
    <link>https://forem.com/ijuren</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1645854%2F59cdb86b-fa8a-47a7-97d6-e6eb95b3e9f7.jpg</url>
      <title>Forem: Ivan Juren</title>
      <link>https://forem.com/ijuren</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ijuren"/>
    <language>en</language>
    <item>
      <title>Good Intentions Aren’t Enough — Measure!</title>
      <dc:creator>Ivan Juren</dc:creator>
      <pubDate>Sun, 09 Nov 2025 13:44:50 +0000</pubDate>
      <link>https://forem.com/ijuren/good-intentions-arent-enough-measure-5co8</link>
      <guid>https://forem.com/ijuren/good-intentions-arent-enough-measure-5co8</guid>
      <description>&lt;p&gt;Good thoughts and intuition can lead to good results, but only &lt;strong&gt;benchmarks&lt;/strong&gt; lead to &lt;strong&gt;great results&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Once upon a time, I had the opportunity to build a custom data structure.&lt;br&gt;&lt;br&gt;
We needed something like a queue to act as a cache — a &lt;strong&gt;rolling buffer&lt;/strong&gt; with a constant size, always filled with the latest data as the oldest expires.&lt;/p&gt;

&lt;p&gt;The requirements sounded simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single writer, multiple concurrent readers
&lt;/li&gt;
&lt;li&gt;Readers iterate safely while data is being written
&lt;/li&gt;
&lt;li&gt;As little garbage and synchronization overhead as possible&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Simple, right?&lt;/p&gt;


&lt;h2&gt;
  
  
  The Obvious Candidates
&lt;/h2&gt;

&lt;p&gt;Naturally, the first candidates were Java’s built-in structures:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Structure&lt;/th&gt;
&lt;th&gt;Pros&lt;/th&gt;
&lt;th&gt;Cons&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ArrayList&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Simple, contiguous&lt;/td&gt;
&lt;td&gt;O(n) to remove oldest element&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LinkedList&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fast removals&lt;/td&gt;
&lt;td&gt;Creates garbage, poor cache locality&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ArrayDeque&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Nearly perfect&lt;/td&gt;
&lt;td&gt;Not thread-safe, fails under concurrent modification&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each had a deal-breaking flaw for our case.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Custom Design
&lt;/h2&gt;

&lt;p&gt;So, led by good intentions (and confidence), I decided to build my own —&lt;br&gt;&lt;br&gt;
the &lt;strong&gt;RollingBuffer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core idea:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backed by a plain array → cache locality
&lt;/li&gt;
&lt;li&gt;Preallocate and reuse buckets/objects → improves memory locality and avoids garbage
&lt;/li&gt;
&lt;li&gt;A single &lt;code&gt;volatile&lt;/code&gt; field for synchronization
&lt;/li&gt;
&lt;li&gt;Minimal API surface:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;put&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;long&lt;/span&gt; &lt;span class="n"&gt;timestamp&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="no"&gt;V&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="nc"&gt;Iterator&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;V&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getIterator&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;long&lt;/span&gt; &lt;span class="n"&gt;startTimestamp&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The iterator would snapshot the volatile write index and iterate safely up to that point.&lt;br&gt;&lt;br&gt;
No resizing, no garbage, no allocations during runtime.&lt;/p&gt;
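&lt;p&gt;In code, the core could look roughly like this (a minimal sketch with illustrative names such as &lt;code&gt;buckets&lt;/code&gt; and &lt;code&gt;writeIndex&lt;/code&gt;, not the actual implementation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;// Sketch only: a single writer publishes through one volatile field.
class RollingBuffer&amp;lt;V&amp;gt; {
    private final long[] timestamps;   // preallocated, reused
    private final V[] buckets;         // no allocation after construction
    private volatile int writeIndex;   // the only synchronization point

    void put(long timestamp, V value) {
        int next = writeIndex;         // wrap-around and gap handling elided
        timestamps[next] = timestamp;  // fill the bucket first...
        buckets[next] = value;
        writeIndex = next + 1;         // ...then publish with the volatile write
    }

    Iterator&amp;lt;V&amp;gt; getIterator(long startTimestamp) {
        int limit = writeIndex;        // snapshot once; iterate safely up to it
        // ... locate the start bucket, iterate up to limit ...
        return null;                   // elided
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;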

&lt;p&gt;Perfect in theory.&lt;/p&gt;


&lt;h2&gt;
  
  
  The First Hurdles
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Which bucket to write to?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
What if there’s a gap in timestamps?&lt;br&gt;&lt;br&gt;
Example: with 10 buckets, if timestamp 21 never arrives, the data for timestamps 20–30 could yield something like:&lt;br&gt;&lt;br&gt;
&lt;code&gt;[20, 11, 22, 23, 24, …]&lt;/code&gt; — bucket 1 still holds stale data (11) from the previous lap.&lt;br&gt;&lt;br&gt;
I decided to handle this during iteration — assuming branch prediction would make it cheap.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Synchronization when wrapping around&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
What if an iterator runs while the writer wraps around the array?&lt;br&gt;&lt;br&gt;
Solution: introduce two parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;maxBuckets&lt;/code&gt; — total array capacity
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;exposedBuckets&lt;/code&gt; — visible portion to readers&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This created a small grace zone to prevent corruption.&lt;/p&gt;
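&lt;p&gt;Roughly, with those two parameters (field names here are hypothetical), the reader’s view is computed like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;// Sketch: readers are only handed the newest `exposedBuckets` entries.
// The remaining (maxBuckets - exposedBuckets) slots form a grace zone
// the writer may overwrite while a slow reader is still iterating.
int newest = writeIndex;                      // volatile snapshot
int oldestVisible = newest - exposedBuckets;  // logical position
// physical slot = logicalPosition % maxBuckets (wrap-around)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;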


&lt;h2&gt;
  
  
  First Benchmarks — Great News?
&lt;/h2&gt;

&lt;p&gt;I wrote the first version and ran JMH benchmarks.&lt;/p&gt;
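&lt;p&gt;Something along these lines (a minimal JMH sketch, not the actual benchmark code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;@State(Scope.Benchmark)
public class RollingBufferBench {
    RollingBuffer&amp;lt;Object&amp;gt; buffer;  // hypothetical; setup elided
    Object payload = new Object();

    @Benchmark
    public void write() {
        buffer.put(System.nanoTime(), payload);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;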

&lt;p&gt;Results looked &lt;strong&gt;amazing&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Iteration over &lt;em&gt;N&lt;/em&gt; elements: &lt;strong&gt;tiny&lt;/strong&gt; latency
&lt;/li&gt;
&lt;li&gt;Write throughput: &lt;strong&gt;tens of millions&lt;/strong&gt; ops/sec&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Until I compared it against the real collections…&lt;/p&gt;


&lt;h2&gt;
  
  
  Reality Hits
&lt;/h2&gt;

&lt;p&gt;Against the standard structures, my RollingBuffer was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;2.5× slower&lt;/strong&gt; at iteration than &lt;code&gt;ArrayDeque&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;~50% slower&lt;/strong&gt; than &lt;code&gt;LinkedList&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplinx734t641ncx4wk56.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplinx734t641ncx4wk56.png" alt=" " width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not what I expected.&lt;/p&gt;

&lt;p&gt;I suspected the iteration-time gap check was the bottleneck — so I reworked it.&lt;/p&gt;


&lt;h2&gt;
  
  
  The New Approach
&lt;/h2&gt;

&lt;p&gt;Instead of trying to write to the “correct” bucket (skipping gaps),&lt;br&gt;&lt;br&gt;
I wrote to the &lt;strong&gt;next available bucket&lt;/strong&gt; every time.&lt;/p&gt;

&lt;p&gt;That simplified iteration — no runtime checks, just a linear scan.&lt;/p&gt;

&lt;p&gt;But now, given a &lt;code&gt;startTimestamp&lt;/code&gt;, I needed to find the right bucket efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enter binary search.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Even with wrap-around, modular arithmetic made it work beautifully.&lt;/p&gt;
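&lt;p&gt;A sketch of the wrapped search (names are illustrative): logical positions grow monotonically, and only the array access wraps.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;// Binary search over logical positions [oldestLogical, newestLogical);
// each probe maps its logical position to a physical slot via modulo.
int lo = oldestLogical, hi = newestLogical;
while (lo &amp;lt; hi) {
    int mid = (lo + hi) &amp;gt;&amp;gt;&amp;gt; 1;                 // overflow-safe midpoint
    long ts = timestamps[mid % totalBuckets];  // wrap to the physical slot
    if (ts &amp;lt; startTimestamp) lo = mid + 1;
    else hi = mid;
}
// lo is now the first logical position with timestamp &amp;gt;= startTimestamp
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;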

&lt;p&gt;Result: &lt;strong&gt;~15% faster&lt;/strong&gt; iteration.&lt;br&gt;&lt;br&gt;
Better, but still not stellar.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Real Culprit
&lt;/h2&gt;

&lt;p&gt;Then I found the real performance killer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="n"&gt;nextWriteIndex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;nextWriteIndex&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="n"&gt;totalBuckets&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and in the iterator:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="n"&gt;nextIndex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;nextIndex&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="n"&gt;totalBuckets&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looked harmless.&lt;br&gt;&lt;br&gt;
But &lt;code&gt;%&lt;/code&gt; compiles down to an integer division, which is &lt;strong&gt;slow&lt;/strong&gt; compared to a well-predicted conditional branch.&lt;/p&gt;

&lt;p&gt;I replaced it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="n"&gt;nextWriteIndex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;nextWriteIndex&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;totalBuckets&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;writnextWriteIndexeIndex&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;nextIndex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;nextIndex&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;totalBuckets&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;nextIndex&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;3× faster iteration.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7ud0igv63kyjj6mfnxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7ud0igv63kyjj6mfnxf.png" alt=" " width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yes — &lt;em&gt;just by removing a single &lt;code&gt;%&lt;/code&gt; operator.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Results
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Write throughput:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
≈ &lt;strong&gt;78 million ops/sec&lt;/strong&gt; — about half the speed of &lt;code&gt;ArrayDeque&lt;/code&gt; or &lt;code&gt;LinkedList&lt;/code&gt;,&lt;br&gt;&lt;br&gt;
but &lt;strong&gt;thread-safe&lt;/strong&gt;, and comparable to &lt;strong&gt;LMAX Disruptor&lt;/strong&gt; performance in the single-writer case (the design ended up similar, too).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Read throughput:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Comparison&lt;/th&gt;
&lt;th&gt;RollingBuffer Speed&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;vs &lt;code&gt;ArrayList&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.3× faster&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;vs &lt;code&gt;ArrayDeque&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.4× faster&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;vs &lt;code&gt;LinkedList&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2.3× faster&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Iteration:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Faster than both &lt;code&gt;ArrayList&lt;/code&gt; and &lt;code&gt;ArrayDeque&lt;/code&gt;,&lt;br&gt;&lt;br&gt;
fully thread-safe, and supports concurrent modification.&lt;/p&gt;

&lt;p&gt;Not bad at all. 😎&lt;/p&gt;

&lt;p&gt;Btw. check out this &lt;a href="https://claude.ai/public/artifacts/a994eb78-2c7d-4048-8491-66978dab0611" rel="noopener noreferrer"&gt;interactive graph I've made with Claude AI&lt;/a&gt; &lt;/p&gt;




&lt;h2&gt;
  
  
  The Lesson
&lt;/h2&gt;

&lt;p&gt;Would I have figured this out without measuring?&lt;br&gt;&lt;br&gt;
&lt;strong&gt;No chance.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Is this level of optimization worth it for most code?&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Absolutely not.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
For 99.9% of use cases, built-ins are fine.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;Intuition gets you close.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Benchmarks get you the truth.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Is this level of performance optimization really worth it?&lt;br&gt;&lt;br&gt;
For most people, jobs, and tasks — easily 95%+ of them — it’ll drown in the sea of other inefficiencies.&lt;br&gt;&lt;br&gt;
But on the &lt;em&gt;hot path&lt;/em&gt; of a system — the core where performance truly matters — that’s where you’ll find excellence.&lt;/p&gt;

&lt;p&gt;This process was invaluable.&lt;br&gt;&lt;br&gt;
I learned more about &lt;strong&gt;how the CPU, Java Memory Model, and JVM&lt;/strong&gt; actually behave than any book or course could have taught me.&lt;/p&gt;

&lt;p&gt;GitHub repo: &lt;a href="https://github.com/JurenIvan/rolling-buffer" rel="noopener noreferrer"&gt;https://github.com/JurenIvan/rolling-buffer&lt;/a&gt; &lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;You can reason your way to correctness.&lt;br&gt;&lt;br&gt;
But you can only measure your way to performance.&lt;/p&gt;
&lt;/blockquote&gt;




</description>
      <category>java</category>
      <category>performance</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Good Things (Compression) Take Time!</title>
      <dc:creator>Ivan Juren</dc:creator>
      <pubDate>Wed, 29 Oct 2025 08:53:49 +0000</pubDate>
      <link>https://forem.com/ijuren/good-things-compression-take-time-1aed</link>
      <guid>https://forem.com/ijuren/good-things-compression-take-time-1aed</guid>
      <description>&lt;p&gt;Inspired by &lt;a href="https://www.youtube.com/@AntonPutra" rel="noopener noreferrer"&gt;Anton Putra&lt;/a&gt; style of benchmarks (if you love graphs half as much as I do, you gotta check this guy out!), I decided to dig deeper into one of Kafka producer's most underrated yet powerful settings — &lt;strong&gt;&lt;code&gt;linger.ms&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this post, we'll explore what &lt;code&gt;linger.ms&lt;/code&gt; does and how it impacts &lt;strong&gt;end-to-end latency&lt;/strong&gt;, &lt;strong&gt;CPU usage&lt;/strong&gt;, &lt;strong&gt;compression efficiency&lt;/strong&gt;, and &lt;strong&gt;produce/fetch rates&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 What Is &lt;code&gt;linger.ms&lt;/code&gt; and Why Is It Important?
&lt;/h2&gt;

&lt;p&gt;Every message sent to Kafka isn't just the payload — it's wrapped with metadata (topic name, partition, key, timestamp, checksums, headers, etc.).&lt;br&gt;
Sending messages one by one would therefore carry significant network and serialisation overhead.&lt;/p&gt;

&lt;p&gt;To optimise this, Kafka producers can bundle multiple records into a single batch, reducing the per-message overhead and improving throughput.&lt;br&gt;
Now, the key question becomes: &lt;strong&gt;how big should that batch be&lt;/strong&gt;, and &lt;strong&gt;when should it be sent&lt;/strong&gt;?&lt;br&gt;
That's exactly where &lt;code&gt;linger.ms&lt;/code&gt; comes into play.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;linger.ms&lt;/code&gt; defines how long the Kafka producer should wait before sending a batch of records to the broker.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If &lt;strong&gt;&lt;code&gt;linger.ms&amp;gt;0&lt;/code&gt;&lt;/strong&gt;, the producer &lt;strong&gt;waits up to that duration&lt;/strong&gt; to fill the batch — allowing more messages to accumulate and be sent together.&lt;/li&gt;
&lt;li&gt;If &lt;strong&gt;&lt;code&gt;linger.ms=0&lt;/code&gt;&lt;/strong&gt;, records are sent &lt;strong&gt;as soon as possible&lt;/strong&gt; — batching only happens incidentally, when messages arrive faster than the sender can dispatch them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result? Fewer network calls, better compression efficiency, and lower CPU usage — at the cost of slightly higher latency.&lt;/p&gt;
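&lt;p&gt;For reference, &lt;code&gt;linger.ms&lt;/code&gt; is just a producer property; the values below are illustrative, not a recommendation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.LINGER_MS_CONFIG, 20);          // wait up to 20 ms to fill a batch
props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);  // ...or send once the batch hits 32 KiB
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;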

&lt;h3&gt;
  
  
  The Tradeoff
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lower &lt;code&gt;linger.ms&lt;/code&gt;&lt;/strong&gt; → lower latency, more overhead, higher CPU cost, worse compression.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Higher &lt;code&gt;linger.ms&lt;/code&gt;&lt;/strong&gt; → higher latency, less overhead, lower CPU cost, better compression.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's all about balancing throughput and latency depending on your use case.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔍 Methodology
&lt;/h2&gt;

&lt;p&gt;To measure the impact, I set up a &lt;strong&gt;3-broker Kafka cluster&lt;/strong&gt; using Docker Compose with the following stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kafka 4.0.0&lt;/strong&gt; (KRaft mode)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JMX Exporter&lt;/strong&gt; → Prometheus → &lt;strong&gt;Grafana&lt;/strong&gt; for metrics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spring Boot&lt;/strong&gt; producer and consumer&lt;/li&gt;
&lt;li&gt;The producer sends JSON payloads (~400B each) to a topic (replication:3, min ISR:2, acks: all) with adjustable &lt;code&gt;linger.ms&lt;/code&gt;. &lt;/li&gt;
&lt;li&gt;The consumer stores messages in memory and calculates the difference in time → &lt;strong&gt;end-to-end latency&lt;/strong&gt; that's reported separately.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The latency is measured using timestamps on both sides (&lt;code&gt;System.currentTimeMillis()&lt;/code&gt;), so it reflects &lt;strong&gt;true client-perceived delay&lt;/strong&gt; — not just broker-side performance.&lt;/p&gt;
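&lt;p&gt;Concretely, the measurement is as simple as embedding the send time in the message and subtracting on receipt (a sketch):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;// Producer side: stamp the payload just before sending.
long sentAt = System.currentTimeMillis();

// Consumer side, on receipt of that payload:
long e2eLatencyMs = System.currentTimeMillis() - sentAt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;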

&lt;p&gt;Disclaimers: I'm doing all of this on one machine. In production, you'd want brokers on separate racks to reduce the chance of them all going down at the same time. To keep things simple, there's only one producer and one consumer. Since usual Kafka workloads have multiple consumers interested in the same stream, producer-side optimisations this way might &lt;strong&gt;exaggerate&lt;/strong&gt; the impact you'd expect in production.&lt;/p&gt;

&lt;p&gt;Stay tuned for follow-ups where I'll cover impactful &lt;strong&gt;consumer-side properties&lt;/strong&gt;!&lt;/p&gt;




&lt;h2&gt;
  
  
  📊 Results and Analysis
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1️⃣ End-to-End Latency
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff71rqo3fjip4iyzfzv2u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff71rqo3fjip4iyzfzv2u.png" alt="Latency Heatmap" width="800" height="303"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f71rqo3fjip4iyzfzv2u.png" rel="noopener noreferrer"&gt;Higer resolution picture&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice how decreasing &lt;code&gt;linger.ms&lt;/code&gt; seems like a big win — the median latency drops from around 30 ms to 10 ms, perfectly&lt;br&gt;
matching the 20 ms &lt;code&gt;linger.ms&lt;/code&gt; we removed.&lt;/p&gt;

&lt;p&gt;At first glance, that feels like a major improvement, but the long tail remains. The distribution still stretches into higher latencies, meaning not everything got faster. Usually, when deployed to the cloud with more distance between brokers, replication takes a bit longer. Since consumers are only served replicated messages, the effect on e2e latency isn't that big.&lt;/p&gt;

&lt;p&gt;By the way, in Kafka clients 4.0, the default &lt;code&gt;linger.ms&lt;/code&gt; is set to 5ms (from the old default of 0ms). Let's see why that might be a good idea.&lt;/p&gt;




&lt;h3&gt;
  
  
  2️⃣ CPU Usage
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnydwbrpsh3lqattjbiuq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnydwbrpsh3lqattjbiuq.png" alt="CPU Usage per Broker" width="800" height="411"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nydwbrpsh3lqattjbiuq.png" rel="noopener noreferrer"&gt;Higer resolution picture&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's the CPU impact — and it's quite telling.&lt;/p&gt;

&lt;p&gt;After decreasing &lt;code&gt;linger.ms&lt;/code&gt; from 20 ms to 0, both the producer and brokers suddenly had to deal with many more, smaller batches. That means more frequent network calls, more compression operations, and generally more CPU churn.&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;linger.ms=20&lt;/code&gt;, batching allowed Kafka to work smarter — fewer system calls, fewer packets, fewer yet better compressions per second.&lt;br&gt;
At &lt;code&gt;linger.ms=0&lt;/code&gt;, the producer becomes a bit of a machine gun, firing messages as soon as they arrive.&lt;/p&gt;

&lt;p&gt;We could counter some of the consumer-side impact with settings like &lt;code&gt;fetch.min.bytes&lt;/code&gt;, but let's leave that deep dive for another time. ⏳&lt;/p&gt;




&lt;h3&gt;
  
  
  3️⃣ Produce/Fetch Count
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0s3c0xr00apidny9rce5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0s3c0xr00apidny9rce5.png" alt="Produce rate" width="800" height="408"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0s3c0xr00apidny9rce5.png" rel="noopener noreferrer"&gt;Higer resolution picture&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffx6rgq7bm1af5b7pooge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffx6rgq7bm1af5b7pooge.png" alt="Fetch rate" width="800" height="410"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fx6rgq7bm1af5b7pooge.png" rel="noopener noreferrer"&gt;Higer resolution picture&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The produce rate going up is the direct consequence of reducing &lt;code&gt;linger.ms&lt;/code&gt;. Instead of batching to amortise the overhead, almost every event is a produce request. This is why CPU churn is so much higher.&lt;/p&gt;

&lt;p&gt;On the other side, the fetch rate also rises. By default, the consumer waits only until a single byte is ready (the broker returns data as soon as there is an event to return), and since something now lands all the time instead of once every 20 ms, the consumer has a lot more work as well.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;fetch.min.bytes&lt;/code&gt; and &lt;code&gt;fetch.max.wait.ms&lt;/code&gt; can be real lifesavers here. (Teasing the sequel once again 😉)&lt;/p&gt;
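&lt;p&gt;As a preview, the consumer-side counterpart looks something like this (illustrative values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 16 * 1024);  // wait for at least 16 KiB...
props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 100);      // ...or at most 100 ms
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;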




&lt;h3&gt;
  
  
  4️⃣ Bytes In Total
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphrwt02th4rax4ke4oaj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphrwt02th4rax4ke4oaj.png" alt="Bytes In Total" width="800" height="412"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/phrwt02th4rax4ke4oaj.png" rel="noopener noreferrer"&gt;Higer resolution picture&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(Grand finale for the title metaphor 🎺🎺)&lt;/p&gt;

&lt;p&gt;You can see an increase in &lt;strong&gt;bytes in&lt;/strong&gt; across brokers after raising &lt;code&gt;linger.ms&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Bigger batches mean better compression efficiency, &lt;strong&gt;but&lt;/strong&gt; to have nice batches, you have to wait a little bit.&lt;br&gt;
Compression algorithms work better with larger data blocks. Since data is stored and served as it arrives (check out Kafka zero copy), increasing &lt;code&gt;linger.ms&lt;/code&gt; can reduce your storage and networking requirements. Pretty neat!&lt;/p&gt;




&lt;h2&gt;
  
  
  🧩 Summary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Lower &lt;code&gt;linger.ms&lt;/code&gt;
&lt;/th&gt;
&lt;th&gt;Higher &lt;code&gt;linger.ms&lt;/code&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Latency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Lower&lt;/td&gt;
&lt;td&gt;🚫 Higher&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Broker CPU Load&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;🚫 Higher&lt;/td&gt;
&lt;td&gt;✅ Lower&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Client CPU Load&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;🚫 Higher&lt;/td&gt;
&lt;td&gt;✅ Lower&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Compression Ratio&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;🚫 Worse&lt;/td&gt;
&lt;td&gt;✅ Better&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Network Usage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;🚫 Higher&lt;/td&gt;
&lt;td&gt;✅ Lower&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🔥 Turning Up the Heat
&lt;/h2&gt;

&lt;p&gt;So far, we've been working with a relatively modest message rate. But what happens when we really stress the system?&lt;/p&gt;

&lt;p&gt;I ramped up the message volume &lt;strong&gt;5x&lt;/strong&gt; and ran a progressive test: starting with &lt;code&gt;linger.ms=0&lt;/code&gt;, then after ~5 minutes switching to &lt;code&gt;linger.ms=5&lt;/code&gt;, another 5 minutes at &lt;code&gt;linger.ms=10&lt;/code&gt;, and finally 5 minutes at &lt;code&gt;linger.ms=20&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxvwdjddn0jd8c2nv1zu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxvwdjddn0jd8c2nv1zu.png" alt="Broker cpu at 5x messages" width="800" height="410"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nxvwdjddn0jd8c2nv1zu.png" rel="noopener noreferrer"&gt;Higer resolution picture&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1dxj4h0acokfnyf5rwm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1dxj4h0acokfnyf5rwm.png" alt="Clients cpu at 5x messages" width="800" height="410"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l1dxj4h0acokfnyf5rwm.png" rel="noopener noreferrer"&gt;Higer resolution picture&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under high load, we can observe the following CPU gains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Producer CPU&lt;/strong&gt; drops by ~25% as you move from &lt;code&gt;linger.ms=0&lt;/code&gt; to &lt;code&gt;linger.ms=20&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Broker CPU&lt;/strong&gt; drops by ~40% across the same range&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Given that we're pushing 5x the messages, CPU usage appears to be a function of the number of produce requests more than of the number of messages itself.&lt;/p&gt;

&lt;p&gt;But here's the really interesting finding: &lt;strong&gt;end-to-end latency doesn't improve with lower &lt;code&gt;linger.ms&lt;/code&gt; values under high load&lt;/strong&gt;. In fact, it gets worse.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxx4tjs7pft9w5zxrnri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxx4tjs7pft9w5zxrnri.png" alt="E2E latency at 5x messages" width="800" height="410"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jxx4tjs7pft9w5zxrnri.png" rel="noopener noreferrer"&gt;Higer resolution picture&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When &lt;code&gt;linger.ms=0&lt;/code&gt;, the e2e latency is actually &lt;strong&gt;higher&lt;/strong&gt; than when &lt;code&gt;linger.ms=5&lt;/code&gt; or &lt;code&gt;linger.ms=10&lt;/code&gt;. It's as bad or even worse than &lt;code&gt;linger.ms=20&lt;/code&gt;. Why? Because all those network calls, compression operations, and context switches create so much overhead that the system itself becomes the bottleneck. You're not getting faster responses — you're just thrashing the CPU while simultaneously degrading throughput.&lt;/p&gt;

&lt;p&gt;This is a crucial insight: &lt;strong&gt;at scale, lower &lt;code&gt;linger.ms&lt;/code&gt; doesn't mean lower latency&lt;/strong&gt;. The overhead of frequent small batches actually increases observed latency through system congestion, while also burning CPU cycles that could be spent on actual message processing.&lt;/p&gt;

&lt;p&gt;For completeness, note how compression improves, resulting in more compact data.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpv4mxxcl8t9opx6gy139.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpv4mxxcl8t9opx6gy139.png" alt="Bytes-in at 5x messages" width="800" height="407"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pv4mxxcl8t9opx6gy139.png" rel="noopener noreferrer"&gt;Higher-resolution picture&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;linger.ms&lt;/code&gt; is a powerful lever&lt;/strong&gt; for balancing throughput vs latency.&lt;/li&gt;
&lt;li&gt;Setting it too low wastes CPU and bandwidth on small batches.&lt;/li&gt;
&lt;li&gt;Setting it too high hurts responsiveness — especially for latency-sensitive workloads.&lt;/li&gt;
&lt;li&gt;Tune it alongside &lt;code&gt;batch.size&lt;/code&gt; for your message volume and rate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check your setup.&lt;/strong&gt; The old default (before Kafka clients 4.0) was &lt;code&gt;linger.ms=0&lt;/code&gt;; the new default is 5 ms. This change alone can noticeably reduce your Kafka bill.&lt;/li&gt;
&lt;li&gt;If you have high load with lots of messages, &lt;strong&gt;adding some &lt;code&gt;linger.ms&lt;/code&gt; might lower your e2e latency&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
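&lt;p&gt;To make these takeaways concrete, here is a minimal sketch of a tuned producer configuration. The property keys (&lt;code&gt;linger.ms&lt;/code&gt;, &lt;code&gt;batch.size&lt;/code&gt;, &lt;code&gt;compression.type&lt;/code&gt;) are standard Kafka producer settings; the concrete values are illustrative assumptions, so benchmark them against your own workload.&lt;/p&gt;

```java
import java.util.Properties;

// Minimal sketch of producer batching settings; the values below are
// illustrative assumptions, not recommendations -- benchmark your own workload.
public class ProducerTuning {

    public static Properties tunedProducerProps() {
        Properties props = new Properties();
        // Wait up to 5 ms to fill a batch (the new default since Kafka clients 4.0):
        // fewer, larger produce requests mean less CPU and better compression.
        props.setProperty("linger.ms", "5");
        // Upper bound on a batch, in bytes; tune together with linger.ms.
        props.setProperty("batch.size", "65536");
        // Batching amplifies compression gains on similar payloads.
        props.setProperty("compression.type", "lz4");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(tunedProducerProps());
    }
}
```

&lt;p&gt;These properties would be passed to the producer's constructor alongside your serializers and bootstrap servers.&lt;/p&gt;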

&lt;p&gt;In other words — &lt;strong&gt;good things take time&lt;/strong&gt; 😉&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>java</category>
      <category>performance</category>
    </item>
    <item>
      <title>The Art of Knowing When to Upgrade</title>
      <dc:creator>Ivan Juren</dc:creator>
      <pubDate>Fri, 24 Oct 2025 07:08:42 +0000</pubDate>
      <link>https://forem.com/ijuren/the-art-of-knowing-when-to-upgrade-59jg</link>
      <guid>https://forem.com/ijuren/the-art-of-knowing-when-to-upgrade-59jg</guid>
      <description>&lt;p&gt;I recently jumped on the JSpecify hype train. It promises to finally give Java a unified approach to null-safety. The dream!&lt;br&gt;
It’s been around for a while, but apart from conference mentions, I haven’t really heard of companies jumping into it. I think that's a shame, given how much potential it has to transform Java (and even its reputation), but that got me thinking...&lt;/p&gt;




&lt;h2&gt;
  
  
  When do I decide to adopt something new in my stack?
&lt;/h2&gt;

&lt;p&gt;Over time, I realized that every “should we upgrade?” question boils down to timing, risk, and expected value. Here’s my quick mental model.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Area&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;Decision Frequency&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Production/Business Approach&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Experimental/Personal Approach&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;1. New Java Versions&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Once every 6 months&lt;/td&gt;
&lt;td&gt;Wait for LTS releases or notable performance/feature gains in final (non-preview) features.&lt;/td&gt;
&lt;td&gt;Upgrade immediately to understand new features and see whether improvements matter.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2. New Framework Versions&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Once every 2-3 months&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Major&lt;/strong&gt; version upgrade after the first patch release (e.g., 4.0.0 to 4.0.1) to avoid early rough edges.&lt;br&gt; &lt;strong&gt;Minor&lt;/strong&gt; in the next sprint (in about a month).&lt;/td&gt;
&lt;td&gt;Upgrade immediately to familiarize yourself.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;3. Tools like JSpecify&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;More than once a month&lt;/td&gt;
&lt;td&gt;Consider community momentum, prioritize and validate your gains from it. Be pragmatic!&lt;/td&gt;
&lt;td&gt;Trust your professional instinct; adopt if you believe in the future of the project. Don't jump on every train; let others smooth out the rough edges.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  The Risk and the Reward
&lt;/h2&gt;

&lt;p&gt;JSpecify is still in its infancy, and it’s not the first attempt to solve nullability in Java.&lt;br&gt;&lt;br&gt;
If you want a reminder of how many such efforts have faded, just look at this StackOverflow thread — it’s a graveyard of well-intentioned efforts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvms2legu74yb8lnxsa5t.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvms2legu74yb8lnxsa5t.jpg" alt="maybe it's gonna be different this time" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And I think it actually might!&lt;br&gt;
There’s a rare consensus between the community, library authors, and toolmakers (JetBrains, looking at you 👀) to push&lt;br&gt;
for a unified standard.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Best- and Worst-Case Scenarios
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;🚀 Best case:&lt;/strong&gt;&lt;br&gt;
The ecosystem embraces JSpecify, tooling evolves around it, and Java becomes a much safer, more expressive language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📉 Worst case:&lt;/strong&gt;&lt;br&gt;
The initiative loses momentum — but you still end up with well-defined annotations that serve as clear hints to&lt;br&gt;
developers (and maybe future tooling).&lt;/p&gt;

&lt;p&gt;So even in the “bad” case, you gain something. In the good case, you gain a lot. 📈&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For me, that’s enough.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How Deep Should You Dive In?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Closed for Modification, Open for Extension!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Recently, I wrote a post on &lt;em&gt;Rethinking Optional&amp;lt;?&amp;gt;&lt;/em&gt; where I suggested structural changes in how we handle&lt;br&gt;
nullability.&lt;br&gt;&lt;br&gt;
The gist: There’s no need to hide nullable fields behind Optional &lt;strong&gt;if we can ensure compile-time checks&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That’s a brave (maybe even reckless) stance — because if the tooling doesn’t evolve alongside the language, you could&lt;br&gt;
end up with code that’s worse than before.&lt;/p&gt;

&lt;p&gt;We’re effectively writing a subset of Java here.&lt;br&gt;&lt;br&gt;
If this initiative dies off and the tooling stops verifying, those “nullable” annotations won’t give the hard guarantees that my blog post relies on.&lt;br&gt;
So my final verdict: &lt;strong&gt;don’t throw away three decades of hard-learned safety habits just yet&lt;/strong&gt;, but do use the added safety!&lt;/p&gt;




&lt;h2&gt;
  
  
  My Pointers
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;✅ &lt;strong&gt;Use JSpecify&lt;/strong&gt; -&amp;gt;   Every compile-time warning is one less NPE in production.&lt;/li&gt;
&lt;li&gt;⚠️ &lt;strong&gt;Don’t ditch proven patterns&lt;/strong&gt;  -&amp;gt; Keep using validation, good API design, and clear contracts. JSpecify enhances them — it doesn’t replace them (yet).&lt;/li&gt;
&lt;li&gt;🔮 &lt;strong&gt;Look ahead&lt;/strong&gt; -&amp;gt; Java language architects are already considering nullability in Java itself. To be clear, these are very, very early drafts (see the &lt;a href="https://mail.openjdk.org/pipermail/valhalla-spec-observers/2023-February/002181.html" rel="noopener noreferrer"&gt;mailing list&lt;/a&gt;), but JSpecify is a great stepping stone for your project to be ready when Valhalla and in-language nullability actually land.
When that happens, tools like OpenRewrite will make migrating from annotations to native constructs a breeze. &lt;strong&gt;You’ll already be halfway there — and safer along the way.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Adopting new tech is about balancing enthusiasm and pragmatism.&lt;br&gt;&lt;br&gt;
Sometimes you wait. Sometimes you jump. But if the cost of trying is low and the upside is huge — like with JSpecify — then experimenting isn’t just reasonable, it's responsible engineering.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Experiment, but with both eyes open.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🔗 Related Reads
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/ijuren/rethinking-optional-2lj"&gt;Rethinking Optional&amp;lt;?&amp;gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/ijuren/hey-i-did-a-thing-you-might-like-it-java-jspecify-exercise-1gam"&gt;JSpecify Exercise Repo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Would you adopt JSpecify in your production code today or would you wait for IDEs and compilers to catch up? Curious to hear your take 👇&lt;/p&gt;

</description>
      <category>java</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Hey, I did a thing, you might like it! Java JSpecify Exercise</title>
      <dc:creator>Ivan Juren</dc:creator>
      <pubDate>Thu, 23 Oct 2025 08:35:29 +0000</pubDate>
      <link>https://forem.com/ijuren/hey-i-did-a-thing-you-might-like-it-java-jspecify-exercise-1gam</link>
      <guid>https://forem.com/ijuren/hey-i-did-a-thing-you-might-like-it-java-jspecify-exercise-1gam</guid>
      <description>&lt;blockquote&gt;
&lt;h2&gt;
  
  
  Learning by doing is the most durable kind of learning.
&lt;/h2&gt;
&lt;/blockquote&gt;

&lt;p&gt;I’ve put together a small repo with ~20 classes, organised into &lt;strong&gt;easy&lt;/strong&gt;, &lt;strong&gt;medium&lt;/strong&gt;, and &lt;strong&gt;hard&lt;/strong&gt; levels to introduce you to JSpecify.   &lt;/p&gt;

&lt;p&gt;So after reading the &lt;a href="https://jspecify.dev/" rel="noopener noreferrer"&gt;JSpecify docs&lt;/a&gt;, try to annotate the given classes, specify the nulls, and get some real hands-on JSpecify experience.&lt;/p&gt;

&lt;p&gt;I believe it’s &lt;em&gt;really&lt;/em&gt; important to understand the &lt;strong&gt;“type” semantics shift&lt;/strong&gt; — what &lt;code&gt;@Nullable&lt;/code&gt; actually means. The generics really piqued my interest. &lt;/p&gt;
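&lt;p&gt;Since the exercises center on that semantics shift, here is a compile-and-run sketch of the generics distinction. Note that the &lt;code&gt;@Nullable&lt;/code&gt; below is a local stand-in annotation so the snippet works without the JSpecify dependency; in a real project you would import &lt;code&gt;org.jspecify.annotations.Nullable&lt;/code&gt; instead.&lt;/p&gt;

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;
import java.util.List;

public class NullableGenerics {

    // Local stand-in for org.jspecify.annotations.Nullable (a TYPE_USE
    // annotation), so this sketch compiles without the JSpecify jar.
    @Target(ElementType.TYPE_USE)
    @interface Nullable {}

    // A non-null list whose *elements* may be null:
    static List<@Nullable String> emails = List.of("a@example.com");

    // A reference that may *itself* be null, with non-null elements:
    static @Nullable List<String> maybeList = null;

    public static void main(String[] args) {
        System.out.println(emails.size());
    }
}
```

&lt;p&gt;With a JSpecify-aware checker, the first declaration rejects a null list but allows null elements, while the second allows a null list but not null elements. That placement difference is exactly the “type” semantics shift the exercises drill.&lt;/p&gt;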

&lt;p&gt;If you want a sneak peek at the kind of reasoning you’ll be doing, check out my other short read where I question one of my favourite patterns for dealing with nulls: &lt;a href="https://dev.to/ijuren/rethinking-optional-2lj"&gt;Rethinking Optional&amp;lt;?&amp;gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Going from this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcan9aegzk8w24y4jo18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcan9aegzk8w24y4jo18.png" alt="Optionals way of handling nullability" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bet50n9fm27newp194o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bet50n9fm27newp194o.png" alt="JSpecify way of handling nullability" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;More concise and safer. &lt;/p&gt;




&lt;p&gt;🔗 &lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/JurenIvan/jspecify-exercise" rel="noopener noreferrer"&gt;https://github.com/JurenIvan/jspecify-exercise&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Can you see yourself using JSpecify in your projects?&lt;br&gt;&lt;br&gt;
Would love to hear what you think 👇&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>java</category>
    </item>
    <item>
      <title>Rethinking Optional&lt;?&gt;!</title>
      <dc:creator>Ivan Juren</dc:creator>
      <pubDate>Wed, 22 Oct 2025 10:14:33 +0000</pubDate>
      <link>https://forem.com/ijuren/rethinking-optional-2lj</link>
      <guid>https://forem.com/ijuren/rethinking-optional-2lj</guid>
      <description>&lt;h1&gt;
  
  
  Compile-Time Safety vs Runtime Safety: JSpecify in Practice
&lt;/h1&gt;

&lt;p&gt;The classic defense mechanism against the billion-dollar mistake in Java?&lt;br&gt;
Common wisdom says: use Optional, runtime checks, and Objects.requireNonNull.&lt;br&gt;&lt;br&gt;
But none of these has eradicated the problem.&lt;/p&gt;

&lt;p&gt;Let’s look at a simple case — a UserProfile with an optional email and last login date — and see how the code changes when using JSpecify.&lt;/p&gt;


&lt;h2&gt;
  
  
  Without JSpecify: Optional All the Way
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;UserProfile&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;Instant&lt;/span&gt; &lt;span class="n"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;UserProfile&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                       &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                       &lt;span class="nc"&gt;Instant&lt;/span&gt; &lt;span class="n"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;Objects&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;requireNonNull&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Username must not be null!"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// must check the value to ensure correctness&lt;/span&gt;
        &lt;span class="c1"&gt;//alternatively, check nullness on each usage of username... &lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;username&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;email&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;lastLogin&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;   &lt;span class="c1"&gt;// do you test this kind of code? How do you ensure invariants don't break when changes happen?&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;getUsername&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;Optional&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getEmail&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;    &lt;span class="c1"&gt;// returns optional so caller is forced to consider Optional.empty()&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Optional&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ofNullable&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;  &lt;span class="c1"&gt;// exists due to developers experience, but has runtime costs(instantiation, GC) &lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;Optional&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Instant&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getLastLogin&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Optional&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ofNullable&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;  
    &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="c1"&gt;// ... rest of the impl&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And the consumer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MailSenderOptional&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;doSomething&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;UserProfile&lt;/span&gt; &lt;span class="n"&gt;userProfile&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;someOptional&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;isPresent&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;userProfile&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getLastLogin&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;isPresent&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;doSomeLogic&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userProfile&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getUsername&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
                        &lt;span class="n"&gt;userProfile&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getEmail&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
                        &lt;span class="n"&gt;userProfile&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getLastLogin&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;doSomeLogic&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Instant&lt;/span&gt; &lt;span class="n"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;Objects&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;requireNonNull&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;   &lt;span class="c1"&gt;// given method is private, these checks might be excessive, but had it been &lt;/span&gt;
        &lt;span class="nc"&gt;Objects&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;requireNonNull&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;      &lt;span class="c1"&gt;// public, why/when would caller care about invariants of this class?&lt;/span&gt;
        &lt;span class="nc"&gt;Objects&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;requireNonNull&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;  &lt;span class="c1"&gt;// If this call is deep down, developer will notice only at runtime.&lt;/span&gt;
        &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"runtime safe: "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;" "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;" "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, everything compiles fine, even if you accidentally pass &lt;code&gt;null&lt;/code&gt; as the email.&lt;br&gt;&lt;br&gt;
The compiler won’t warn you — only a &lt;strong&gt;runtime&lt;/strong&gt; exception can catch that.&lt;/p&gt;

&lt;p&gt;Optional makes things safer at runtime, but it's verbose, hard to chain, and doesn’t truly make your API null-safe.&lt;/p&gt;

&lt;p&gt;By the way, have you ever run into the following scenario?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt; &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;unlucky&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;UserProfile&lt;/span&gt; &lt;span class="n"&gt;userProfile&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;Optional&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;userProfile&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getEmail&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;isPresent&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="c1"&gt;// ❌ NullPointerException! variable email is null! &lt;/span&gt;
        &lt;span class="c1"&gt;// impl...&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Optionals give you a certain degree of safety, but not a guarantee. To be honest, I've run into this just once in my life, so it's more of a theoretical issue.&lt;/p&gt;




&lt;h2&gt;
  
  
  With JSpecify: Compile-Time Null Awareness
&lt;/h2&gt;

&lt;p&gt;Now, let’s bring in JSpecify and its &lt;code&gt;@NullMarked&lt;/code&gt; and &lt;code&gt;@Nullable&lt;/code&gt; annotations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@NullMarked&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;UserProfile&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nd"&gt;@Nullable&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nd"&gt;@Nullable&lt;/span&gt; &lt;span class="nc"&gt;Instant&lt;/span&gt; &lt;span class="n"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;UserProfile&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                       &lt;span class="nd"&gt;@Nullable&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                       &lt;span class="nd"&gt;@Nullable&lt;/span&gt; &lt;span class="nc"&gt;Instant&lt;/span&gt; &lt;span class="n"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// No need for requireNonNull — compiler checks this.&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;username&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;email&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;lastLogin&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;getUsername&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nd"&gt;@Nullable&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;getEmail&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;    &lt;span class="c1"&gt;// no need for Optional here, caller is forced to consider nullability&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nd"&gt;@Nullable&lt;/span&gt; &lt;span class="nc"&gt;Instant&lt;/span&gt; &lt;span class="nf"&gt;getLastLogin&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="c1"&gt;// if you're a fan -&amp;gt; these can be generated by lombok &lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that we no longer have any custom code or checks, and the class now fits a record very nicely, so it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@NullMarked&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt; &lt;span class="nf"&gt;UserProfile&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nd"&gt;@Nullable&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nd"&gt;@Nullable&lt;/span&gt; &lt;span class="nc"&gt;Instant&lt;/span&gt; &lt;span class="n"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the consumer...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@NullMarked&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MailSenderJSpecify&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;doSomething&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;UserProfile&lt;/span&gt; &lt;span class="n"&gt;userProfile&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// ❌ Compile error! email and lastLogin are nullable strings&lt;/span&gt;
        &lt;span class="c1"&gt;// doSomeLogic(userProfile.username(), userProfile.email(), userProfile.lastLogin());  &lt;/span&gt;
        &lt;span class="c1"&gt;// developer has to tackle it right away! &lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userProfile&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;userProfile&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// ✅ compiler knows they email and lastLogin are non-null within this block.&lt;/span&gt;
            &lt;span class="n"&gt;doSomeLogic&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userProfile&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
                        &lt;span class="n"&gt;userProfile&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
                        &lt;span class="n"&gt;userProfile&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;doSomeLogic&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Instant&lt;/span&gt; &lt;span class="n"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// no need for runtime checks (or writting tests for it)&lt;/span&gt;
        &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"compile safe: "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;" "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s"&gt;" "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;lastLogin&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;...is more concise and safer than ever before. You gotta love modern Java! 🥰&lt;/p&gt;




&lt;h2&gt;
  
  
  The Differences
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Without JSpecify&lt;/th&gt;
&lt;th&gt;With JSpecify&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Compiler is unaware of nullability&lt;/td&gt;
&lt;td&gt;Compiler knows what can be null&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Optional used to signal nullability&lt;/td&gt;
&lt;td&gt;Plain nullable fields, checked at compile time&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Safety at runtime only&lt;/td&gt;
&lt;td&gt;Safety enforced at compile time&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Verbose API (&lt;code&gt;Optional.ofNullable(...)&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;Natural Java API with null-awareness&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code compiles even if nulls passed incorrectly&lt;/td&gt;
&lt;td&gt;Code fails to compile if nulls aren’t handled&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
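&lt;p&gt;To make the left column concrete, here is a hypothetical &lt;code&gt;Optional&lt;/code&gt;-based variant of the same consumer (the names are made up for illustration): every access goes through the wrapper API, and nothing stops a caller from passing a raw &lt;code&gt;null&lt;/code&gt; anyway.&lt;/p&gt;

```java
import java.time.Instant;
import java.util.Optional;

// Hypothetical pre-JSpecify version of the consumer: nullability is signalled
// by wrapping fields in Optional instead of annotating them.
public class MailSenderOptional {

    public record UserProfile(String username, Optional<String> email, Optional<Instant> lastLogin) {}

    public static String describe(UserProfile profile) {
        // Verbose wrapper API; the compiler still can't prevent a caller
        // from passing null instead of Optional.empty().
        return profile.email()
                .flatMap(email -> profile.lastLogin()
                        .map(login -> profile.username() + " " + email + " " + login))
                .orElse(profile.username() + " (incomplete profile)");
    }

    public static void main(String[] args) {
        var profile = new UserProfile("ivan", Optional.of("ivan@example.com"), Optional.empty());
        System.out.println(describe(profile)); // prints "ivan (incomplete profile)"
    }
}
```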




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Null-related bugs are a common cause of production incidents and put a lot of strain on the mental context of the developer building stuff.&lt;br&gt;&lt;br&gt;
JSpecify brings the type system closer to what we intuitively mean in code:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“This might be null” vs “This can never be null.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And it integrates cleanly — no new types, no wrapper objects, just annotations and compiler support.&lt;/p&gt;




&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Before JSpecify: You defend against null at runtime.
&lt;/li&gt;
&lt;li&gt;With JSpecify: You prevent null at compile time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Compile-time safety beats runtime surprises.  📈&lt;/p&gt;




&lt;p&gt;Btw, check out my repo where I've created ~20 exercises (easy to hard) so you can practice JSpecify usage and get the hang of it.&lt;br&gt;&lt;br&gt;
Github repo: &lt;a href="https://github.com/JurenIvan/jspecify-exercise" rel="noopener noreferrer"&gt;https://github.com/JurenIvan/jspecify-exercise&lt;/a&gt;&lt;/p&gt;

</description>
      <category>java</category>
      <category>webdev</category>
      <category>compiling</category>
      <category>programming</category>
    </item>
    <item>
      <title>High-Availability Schema Changes: Rolling Updates with Flyway and Spring Boot</title>
      <dc:creator>Ivan Juren</dc:creator>
      <pubDate>Mon, 20 Oct 2025 17:11:03 +0000</pubDate>
      <link>https://forem.com/ijuren/high-availability-schema-changes-rolling-updates-with-flyway-and-spring-boot-4gpa</link>
      <guid>https://forem.com/ijuren/high-availability-schema-changes-rolling-updates-with-flyway-and-spring-boot-4gpa</guid>
      <description>&lt;p&gt;Supporting multiple live instances of an application that interact with a single relational database where the schema and data must remain consistent throughout deployments is significantly more complex than migrating with just one instance, or in a simple "stop all instances, migrate, deploy" scenario.&lt;/p&gt;

&lt;p&gt;Let's explore how to perform seamless rolling migrations with Flyway across multiple instances while ensuring high availability!&lt;/p&gt;

&lt;h2&gt;
  
  
  Issue at Hand
&lt;/h2&gt;

&lt;p&gt;Our original deployment strategy was straightforward: anchored by &lt;strong&gt;Flyway&lt;/strong&gt; for database migrations and JPA with &lt;code&gt;ddl-auto=validate&lt;/code&gt;, we would stop all instances, run migrations on the first instance at startup, and then start the remaining instances. This “stop the world” approach worked well under previous requirements—downtime during off-peak hours was acceptable, and availability targets were met.&lt;/p&gt;
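&lt;p&gt;For reference, the relevant Spring Boot settings for this kind of setup (Flyway running migrations at startup, Hibernate only validating the schema) look roughly like this; treat it as a sketch rather than our exact configuration:&lt;/p&gt;

```yaml
spring:
  flyway:
    enabled: true                      # run pending migrations at application startup
    locations: classpath:db/migration  # default location for versioned SQL scripts
  jpa:
    hibernate:
      ddl-auto: validate               # Hibernate validates, never alters, the schema
```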

&lt;p&gt;Over time, business requirements evolved. Clients began expecting constant access, making any downtime, even for maintenance, count against our SLA (more on that in &lt;a href="https://dev.to/ijuren/thoughts-on-sla-5dkd"&gt;Thoughts on SLA&lt;/a&gt;). While not a crisis, this change highlighted that our old approach was insufficient for a high-availability environment.&lt;/p&gt;

&lt;p&gt;In other words, we needed a strategy that allowed database migrations and application updates to occur &lt;strong&gt;on the fly&lt;/strong&gt;, without interrupting service or breaking running instances. This evolution of requirements prompted us to rethink how deployments and migrations should be handled in a multi-instance setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Did We Address It?
&lt;/h2&gt;

&lt;p&gt;We adopted a &lt;strong&gt;rolling update&lt;/strong&gt; strategy, updating application instances gradually while keeping others running. The key challenge was &lt;strong&gt;interoperability&lt;/strong&gt;: old and new versions must operate concurrently with the database without causing downtime or errors.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This way, we have several instances running the old version and one instance running the updated version. This indicates that we must be able to support different versions at the same time if we would like to use this deployment strategy. Otherwise, the old or the new instances might not work as expected. &lt;strong&gt;In other words, interoperability is key.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.baeldung.com/ops/deployment-strategies#:~:text=Otherwise%2C%20the%20old%20or%20the%20new%20instances%20might%20not%20work%20as%20expected.%20In%20other%20words%2C%20interoperability%20is%20key." rel="noopener noreferrer"&gt;Deployment Strategies breakdown on Baeldung&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Since Flyway triggers our migrations, we needed to verify that it is up to the task. We addressed this with a small POC, which I covered in a previous &lt;a href="https://dev.to/ijuren/navigating-flyways-unexpected-behavior-in-database-evolution-12cf"&gt;blog post&lt;/a&gt;.  &lt;/p&gt;

&lt;p&gt;In short, Flyway is forward-compatible: older application versions can start even if newer migrations (based on the database's migration history table) are detected. &lt;/p&gt;

&lt;p&gt;Beyond Flyway's schema/migration validation, we still have to make sure that the actual schema in the DB works for the queries the old version will still be executing, and that it passes JPA's startup schema validation. Simply deleting a column from the DB will result in the old version of the service failing at query time. Also, if an old instance is restarted, JPA database schema validation will fail and the instance will not start.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;The solution is a &lt;strong&gt;multi-step&lt;/strong&gt; migration. An &lt;strong&gt;intermediary version&lt;/strong&gt; writes "extra/legacy data" to the DB so that older instances keep functioning properly, while deprecating its own use of the old columns in favour of the new ones. This way, both the old and new versions of the app can work with the DB at the same time. Once all the instances are updated, we deploy yet another version that safely removes the old columns from the DB. Only then is the migration done.&lt;/p&gt;

&lt;h2&gt;
  
  
  Procedures to Handle the Deployment Process
&lt;/h2&gt;

&lt;p&gt;For further examples, assume we have 2 instances of the app running version 1.0.0, and we want to deploy a new version.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding a New Field
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a new version (v1.0.1) of the app with a migration script to &lt;strong&gt;add the new field&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt; the new version to &lt;strong&gt;all the instances&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;(If you want to ensure non-nullability, the following steps are needed:)&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new version (v1.0.2) of the app with a migration script that &lt;strong&gt;adds the non-null constraint to the new column with a default value for all the existing rows&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt; the new version (v1.0.2) to &lt;strong&gt;at least one instance&lt;/strong&gt; (to run the migration).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dw6xz2q48eataxwqrhk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dw6xz2q48eataxwqrhk.png" alt="Standard operating procedure - add" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Deleting a Field
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a new version (v1.0.1) of the app that &lt;strong&gt;does not use the field&lt;/strong&gt; in the application logic but &lt;strong&gt;lacks the migration script that will actually delete the column from the DB&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt; the new version (v1.0.1) to &lt;strong&gt;all the instances&lt;/strong&gt; (no DB migration is run).&lt;/li&gt;
&lt;li&gt;Create a new version (v1.1.0) of the app &lt;strong&gt;with a migration script that deletes the column from the DB&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt; the new version (v1.1.0) to at &lt;strong&gt;least one instance&lt;/strong&gt; (to run the migration).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2mzojz877wpndxt15uey.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2mzojz877wpndxt15uey.png" alt="Standard operating procedure - delete" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Changing the Type of the Field or Any Other Complex Operation
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a new version (v1.0.1) of the app with a migration script to add the new field, copy/map the data from the old column, and ensure that any &lt;strong&gt;new data is written to both the new column and the old column&lt;/strong&gt; (write to both so that old instances can still work with the data, read from the new column to enable the latter step).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt; the new version to &lt;strong&gt;all the instances&lt;/strong&gt; (at this point, the app is ready to stop using the old field).&lt;/li&gt;
&lt;li&gt;Create a new version (v1.0.2) that deprecates the old column by &lt;strong&gt;writing only to the new column&lt;/strong&gt; (final sync of the data).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt; the new version (v1.0.2) to &lt;strong&gt;all instances&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Create a new version (v1.1.0) of the app with a &lt;strong&gt;migration script that copies/maps the data from the old column to the new one&lt;/strong&gt; (final sync of the data) &lt;strong&gt;and a migration script that deletes the old column from the DB&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt; the new version (v1.1.0) &lt;strong&gt;to at least one instance&lt;/strong&gt; (to run the migration).&lt;/li&gt;
&lt;/ol&gt;
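&lt;p&gt;The dual-write behaviour of the intermediary version (step 1) can be sketched in plain Java; the entity and column names here are hypothetical:&lt;/p&gt;

```java
// Hypothetical sketch of the dual-write step: the intermediary version keeps
// the legacy column populated so old instances still find the data they expect.
public class Measurement {

    private String legacyValue;   // old column, still read by v1.0.0 instances
    private long valueMillis;     // new column, the format we migrate towards

    // v1.0.1 behaviour: every write updates both representations.
    public void setValue(long millis) {
        this.valueMillis = millis;
        this.legacyValue = String.valueOf(millis); // keep old column in sync
    }

    // v1.0.1 reads already come from the new column only,
    // so dropping the legacy column later is a write-side-only change.
    public long getValue() {
        return valueMillis;
    }

    public String getLegacyValue() {
        return legacyValue;
    }
}
```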

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtw1xvy3snjqddjygcyg.png" alt="Standard operating procedure - rename" width="800" height="325"&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Ensuring interoperability between different versions of your application during rolling updates is crucial when more than one instance is involved. Flyway's validation defaults play a key role in supporting this process, enabling forward-compatible database migrations that allow older versions of the application to coexist with newer ones.&lt;/p&gt;

&lt;p&gt;Achieving this seamless transition requires careful planning and additional steps. These extra steps, while adding complexity to the deployment process, are essential for ensuring data consistency, supporting rollback, and avoiding breaking changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interoperability is key&lt;/strong&gt;. I hope this blog post provides valuable insights and practical guidance beyond the simplistic approach of merely deploying multiple instances.&lt;/p&gt;

</description>
      <category>springboot</category>
      <category>devops</category>
      <category>architecture</category>
      <category>database</category>
    </item>
    <item>
      <title>Thoughts on SLA</title>
      <dc:creator>Ivan Juren</dc:creator>
      <pubDate>Mon, 20 Oct 2025 16:00:51 +0000</pubDate>
      <link>https://forem.com/ijuren/thoughts-on-sla-5dkd</link>
      <guid>https://forem.com/ijuren/thoughts-on-sla-5dkd</guid>
      <description>&lt;p&gt;When people talk about a “Service Level Agreement” (SLA), it rarely points to a single, clear metric. The term is often vague and open to interpretation. You’ll hear statements like “Our service guarantees 99.9% uptime”, but what does that really mean in practice? After firefighting many incidents, I have some thoughts and insights on it.&lt;/p&gt;

&lt;p&gt;Let's put together an overview of the most common types of SLAs and share some practical examples and what it means for engineers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an SLA?
&lt;/h2&gt;

&lt;p&gt;An SLA is a &lt;strong&gt;promise&lt;/strong&gt; from a service provider to its customers about &lt;strong&gt;quality and reliability&lt;/strong&gt;.&lt;br&gt;
The most common metric and the one that usually takes the spotlight is &lt;strong&gt;availability&lt;/strong&gt;, often expressed as a percentage of uptime.&lt;/p&gt;

&lt;p&gt;Here’s a reference table showing the allowed downtime for various levels of availability, assuming 24/7 operation:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Availability %&lt;/th&gt;
&lt;th&gt;Downtime per year&lt;/th&gt;
&lt;th&gt;Downtime per month&lt;/th&gt;
&lt;th&gt;Downtime per day&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;90% ("one nine")&lt;/td&gt;
&lt;td&gt;36.53 days&lt;/td&gt;
&lt;td&gt;73.05 hours&lt;/td&gt;
&lt;td&gt;2.40 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;99% ("two nines")&lt;/td&gt;
&lt;td&gt;3.65 days&lt;/td&gt;
&lt;td&gt;7.31 hours&lt;/td&gt;
&lt;td&gt;14.40 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;99.9% ("three nines")&lt;/td&gt;
&lt;td&gt;8.77 hours&lt;/td&gt;
&lt;td&gt;43.83 minutes&lt;/td&gt;
&lt;td&gt;1.44 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;99.99% ("four nines")&lt;/td&gt;
&lt;td&gt;52.60 minutes&lt;/td&gt;
&lt;td&gt;4.38 minutes&lt;/td&gt;
&lt;td&gt;8.64 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;99.999% ("five nines")&lt;/td&gt;
&lt;td&gt;5.26 minutes&lt;/td&gt;
&lt;td&gt;26.30 seconds&lt;/td&gt;
&lt;td&gt;864 milliseconds&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
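&lt;p&gt;The figures above follow directly from the percentage. A quick sketch of the arithmetic (assuming a 365.25-day year):&lt;/p&gt;

```java
// Sketch: deriving the downtime-budget table from an availability percentage.
public class DowntimeBudget {

    // Allowed downtime in seconds for a given availability over a period (in seconds).
    static double allowedDowntimeSeconds(double availabilityPercent, double periodSeconds) {
        return periodSeconds * (1.0 - availabilityPercent / 100.0);
    }

    public static void main(String[] args) {
        double year = 365.25 * 24 * 3600;
        // 99.99% of a year leaves ~3156 s, i.e. ~52.6 minutes, matching the table.
        System.out.printf("99.99%% yearly budget: %.1f minutes%n",
                allowedDowntimeSeconds(99.99, year) / 60.0);
    }
}
```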

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/High_availability#:~:text=for%20the%20night.-,Percentage%20calculation,-%5Bedit%5D" rel="noopener noreferrer"&gt;Wikipedia - High Availability&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Alternative SLA Measurements
&lt;/h2&gt;

&lt;p&gt;Uptime is the easiest number to quote, but it doesn’t account for &lt;strong&gt;when&lt;/strong&gt; and &lt;strong&gt;how (badly)&lt;/strong&gt; a system fails. Most services fail under peak load, and five minutes of downtime during that window is usually far more damaging than an hour of scheduled maintenance at midnight. Even worse, a service might appear healthy (responding with &lt;code&gt;200 OK&lt;/code&gt; on &lt;code&gt;/health&lt;/code&gt;) while performing terribly for users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Common peak-load failure patterns include&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Increased latency&lt;/strong&gt; - depending on the business domain, doing things slowly can be worse than not doing them at all.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timeouts&lt;/strong&gt; - if a system has a chain of dependencies, one slow component can snowball into increased load on all downstream services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Partial outages&lt;/strong&gt; - only certain users or actions fail, escaping detection but breaking clients' workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most of these failures wouldn't necessarily count against the typical availability metric, but from a single user's perspective, &lt;strong&gt;they can have the same impact as downtime&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There are other ways to measure service quality that can paint a more accurate picture of the client experience:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Yield:&lt;/strong&gt; the &lt;strong&gt;percentage of successful transactions&lt;/strong&gt;, which reflects actual performance more accurately for systems with variable usage. Related interesting read: &lt;a href="https://users.ece.cmu.edu/~adrian/731-sp04/readings/FB-cap.pdf" rel="noopener noreferrer"&gt;Readings on Yield&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Latency thresholds&lt;/strong&gt;: percentage of requests served within an acceptable latency range (e.g., &lt;code&gt;95% under 200 ms&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Suboptimal result rate:&lt;/strong&gt; how often the system serves degraded or approximate responses to save resources. Useful for performance/accuracy trade-offs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Impact of a single monthly incident on 4-nines availability
&lt;/h2&gt;

&lt;p&gt;“Four nines” (99.99%) availability is often a target for high-reliability systems.&lt;br&gt;
This gives a downtime budget of just &lt;strong&gt;~52 minutes per year&lt;/strong&gt;, or &lt;strong&gt;about five minutes per month&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Consider a scenario with a &lt;strong&gt;single incident per month&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Incident occurs&lt;/li&gt;
&lt;li&gt;Monitoring detects and raises an alert, ~30s&lt;/li&gt;
&lt;li&gt;Engineer receives and answers the page, ~3 minutes&lt;/li&gt;
&lt;li&gt;Engineer logs in and begins investigating, ~1 minute&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this point, &lt;strong&gt;you’ve already burned through almost your entire monthly downtime budget.&lt;/strong&gt;&lt;br&gt;
Good luck diagnosing, fixing, and recovering in the 30 seconds that remain.&lt;/p&gt;
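&lt;p&gt;Summing the timeline above against the roughly five-minute monthly budget makes the point:&lt;/p&gt;

```java
// Rough arithmetic for the incident timeline above, assuming the ~5-minute
// monthly budget quoted in the text (the exact four-nines figure is 4.38 min).
public class IncidentBudget {

    static int remainingSeconds(int budgetSeconds, int... spent) {
        int total = budgetSeconds;
        for (int s : spent) total -= s;
        return total;
    }

    public static void main(String[] args) {
        // detection (30 s) + answering the page (3 min) + logging in (1 min)
        int remaining = remainingSeconds(5 * 60, 30, 3 * 60, 60);
        System.out.println("Seconds left to diagnose, fix and recover: " + remaining); // 30
    }
}
```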

&lt;p&gt;&lt;strong&gt;To make 4-nines uptime even remotely achievable, you need early detection, self-healing mechanisms, and automated recovery long before humans get involved.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Traditional uptime metrics are useful, but they don't tell the whole story. True reliability means being transparent, measuring the right things, and understanding the real impact of failures on users.&lt;/li&gt;
&lt;li&gt;SLAs are a foundational part of service reliability, and they require careful thought and design. Fixing problems in the design phase, way before they have a chance to surface, takes time, but it protects your reputation.&lt;/li&gt;
&lt;li&gt;High availability isn’t achieved through heroics! It’s engineered through prevention, visibility, and automation.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"An SLA is not just a number. It's a commitment to quality and trust."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>sre</category>
      <category>webdev</category>
      <category>agile</category>
    </item>
    <item>
      <title>Navigating Flyway's Unexpected Behavior in Database Evolution</title>
      <dc:creator>Ivan Juren</dc:creator>
      <pubDate>Mon, 20 Oct 2025 11:03:29 +0000</pubDate>
      <link>https://forem.com/ijuren/navigating-flyways-unexpected-behavior-in-database-evolution-12cf</link>
      <guid>https://forem.com/ijuren/navigating-flyways-unexpected-behavior-in-database-evolution-12cf</guid>
      <description>&lt;p&gt;As developers, we often rely on tools like &lt;strong&gt;Flyway&lt;/strong&gt; to streamline the management of database migrations, ensuring that our applications evolve smoothly over time. However, there are subtleties that might be unexpected beneath the surface of this seemingly straightforward process. We came across one and would like to share it with you. &lt;/p&gt;

&lt;p&gt;Let's examine Flyway's response to newer database versions and its implications for migration management!&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Flyway's Startup Checks and the Usual Use Cases
&lt;/h2&gt;

&lt;p&gt;Before diving into the peculiar behaviour that we've uncovered, let's quickly grasp Flyway's startup checks. Upon initialization, Flyway compares the checksums of migration files with those applied to the database, ensuring consistency and integrity. Any discrepancies, be they altered files or missing (executed) scripts, prompt Flyway to halt, warning us of potential issues before they escalate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Try this at home, NOT on prod!
&lt;/h3&gt;

&lt;p&gt;In some scenarios, developers may resort to manual interventions for various reasons. For instance, if there's a need to repeat the execution of the last script, they might opt to delete the corresponding migration entry in the database. This action essentially prompts Flyway to rerun the script during the next migration cycle. Similarly, if a script needs to be amended or fixed directly in the database, developers might execute the necessary changes manually, adjust the script accordingly, and adjust the stored checksum. However, such manual adjustments should be approached with caution to avoid inadvertently introducing inconsistencies or breaking the migration process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When you start doing these things you might expect the following errors in the logs:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Checksum mismatch&lt;/strong&gt; - when the script is altered and the checksum is not updated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flywayInitializer' defined in class path resource
[org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]: Validate failed: Migrations have failed validation
Migration checksum mismatch for migration version 1.00
-&amp;gt; Applied to database : 1993450877
-&amp;gt; Resolved locally    : 1305205389
Either revert the changes to the migration, or run repair to update the schema history.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Missing migration&lt;/strong&gt; - when the script's entry is deleted from the database, or if you're using timestamps as migration versions and scripts merged into master/main after you created your migration file were ignored:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flywayInitializer' defined in class path resource
[org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]: Validate failed: Migrations have failed validation
Detected resolved migration not applied to database: 1.50.
To ignore this migration, set -ignoreMigrationPatterns='*:ignored'. To allow executing this migration, set -outOfOrder=true.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  An Unexpected Twist: Detecting Newer Database Versions
&lt;/h2&gt;

&lt;p&gt;Here's where things get interesting. While exploring Flyway's inner workings, I stumbled upon a behaviour that defied my expectations. Imagine this scenario: our database is humming along with the latest migrations applied, proudly sporting version 2.00. However, upon inspecting our migration files, we find ourselves stuck in the past, lingering at version 1.00. One might expect Flyway to raise a red flag at this apparent mismatch, halting operations until the files catch up to the database. Instead, Flyway surprises us with a mere warning – a gentle (WARN) reminder in the log that a migration file is missing.&lt;/p&gt;

&lt;p&gt;We can see what is happening in the logs (below).&lt;br&gt;
When starting the application multiple times with different versions, we observe the following sequence of events:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqt8dmh24tyu50vc0nydz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqt8dmh24tyu50vc0nydz.png" alt="Sequence of events during schema rollout" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(Instance 1, Version 1):&lt;/strong&gt; The first instance of the application is started with version 1. Flyway validates and applies the migration, initializing the database schema to version 1.00.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INFO 1 --- [flyway-testing] [main] o.f.c.i.c.DbValidate: Successfully validated 1 migration (execution time 00:00.194s)
INFO 1 --- [flyway-testing] [main] o.f.c.i.c.DbMigrate : Current version of schema "public": &amp;lt;&amp;lt; Empty Schema &amp;gt;&amp;gt;
INFO 1 --- [flyway-testing] [main] o.f.c.i.c.DbMigrate : Migrating schema "public" to version "1.00 - init car"
INFO 1 --- [flyway-testing] [main] o.f.c.i.c.DbMigrate : Successfully applied 1 migration to schema "public", now at version v1.00 (execution time 00:00.036s)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;(Instance 2, Version 1):&lt;/strong&gt; A second instance of the application, meant for high availability, is launched also with version 1. Flyway validates the schema, finding it already at version 1.00, thus no migration is necessary.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INFO 1 --- [flyway-testing] [main] o.f.c.i.c.DbValidate: Successfully validated 1 migration (execution time 00:00.296s)
INFO 1 --- [flyway-testing] [main] o.f.c.i.c.DbMigrate : Current version of schema "public": 1.00
INFO 1 --- [flyway-testing] [main] o.f.c.i.c.DbMigrate : Schema "public" is up to date. No migration necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;(Instance 2, Version 2):&lt;/strong&gt; A new version 2 is deployed to the second instance. Flyway validates the schema, still finding it at version 1.00. However, it proceeds to migrate the schema to version 2.00, adding the necessary changes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INFO 1 --- [flyway-testing] [main] o.f.c.i.c.DbValidate: Successfully validated 2 migrations (execution time 00:00.243s)
INFO 1 --- [flyway-testing] [main] o.f.c.i.c.DbMigrate : Current version of schema "public": 1.00
INFO 1 --- [flyway-testing] [main] o.f.c.i.c.DbMigrate : Migrating schema "public" to version "2.00 - add column"
INFO 1 --- [flyway-testing] [main] o.f.c.i.c.DbMigrate : Successfully applied 1 migration to schema "public", now at version v2.00 (execution time 00:00.034s)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;(Instance 1, Version 1):&lt;/strong&gt; The first instance is then redeployed with version 1, effectively restarting it. Flyway validates the schema and detects that it's already at version 2.00, even though the migration files only go up to version 1.00. It issues a &lt;strong&gt;warning&lt;/strong&gt; about the discrepancy but &lt;strong&gt;proceeds without applying any migrations or startup failure&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INFO 1 --- [flyway-testing] [main] o.f.c.i.c.DbValidate: Successfully validated 2 migrations (execution time 00:00.184s)
INFO 1 --- [flyway-testing] [main] o.f.c.i.c.DbMigrate : Current version of schema "public": 2.00
WARN 1 --- [flyway-testing] [main] o.f.c.i.c.DbMigrate : Schema "public" has a version (2.00) that is newer than the latest available migration (1.00) !
INFO 1 --- [flyway-testing] [main] o.f.c.i.c.DbMigrate : Schema "public" is up to date. No migration necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Implications of Flyway's Decision
&lt;/h2&gt;

&lt;p&gt;This seemingly lenient approach raises questions about the trade-offs involved. While it doesn't explicitly grant us the freedom to apply migrations out of sequence, it does allow the application to start even if later migrations have been applied to the database. This can serve as an indicator of potential skipped migrations and it also enables a rolling migration approach where older instances of the application can be restarted even after the newer instance version has been started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;One might say that Flyway is "forward compatible" in this case: it tolerates future database migrations and schema changes gracefully, allowing the application to keep functioning without shipping the complete future code.&lt;/p&gt;

&lt;p&gt;Understanding Flyway's behaviour is key to successful database management. By following best practices, such as keeping migration files current and avoiding manual schema alterations, and by understanding quirks like this one, developers can navigate migrations with confidence. We hope this interesting quirk we've found helps you!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Happy migrating!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>flyway</category>
      <category>java</category>
      <category>database</category>
      <category>spring</category>
    </item>
  </channel>
</rss>
