<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Mariano Barcia</title>
    <description>The latest articles on Forem by Mariano Barcia (@mbarcia).</description>
    <link>https://forem.com/mbarcia</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3186852%2Fc12e3db2-bbe9-47ea-86d4-bfad00f15c5d.JPG</url>
      <title>Forem: Mariano Barcia</title>
      <link>https://forem.com/mbarcia</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mbarcia"/>
    <language>en</language>
    <item>
      <title>The Script That Refused to Stay Small</title>
      <dc:creator>Mariano Barcia</dc:creator>
      <pubDate>Thu, 09 Apr 2026 21:54:41 +0000</pubDate>
      <link>https://forem.com/mbarcia/the-script-that-refused-to-stay-small-1mod</link>
      <guid>https://forem.com/mbarcia/the-script-that-refused-to-stay-small-1mod</guid>
      <description>&lt;p&gt;It started, as these things often do, with a single process running on a single machine in a server room nobody liked visiting. The system took in shipment requests, enriched them with a few heuristics, and spat out routing hints. Nothing fancy, just enough logic to save operations teams a few hours a day, and Marta built it on her own as a side project.&lt;/p&gt;

&lt;p&gt;Marta wasn’t new to Java: she had spent a few months on a Spring microservices project and liked the serverless, functional style of thinking, where small transformations compose together. By now she had enough scar tissue from Spring’s inevitable issues, and she wanted to experiment with Quarkus anyway.&lt;/p&gt;

&lt;p&gt;She picked TPF because, on top of a canvas UI for designing the pipeline, it gave her the complete app’s scaffolding as a pure Quarkus project. She could keep developing in her IDE as usual, with only the Quarkus environment plugin added. And she got to try Dev Services, which she had also been curious about.&lt;/p&gt;

&lt;p&gt;Her choices for the script were deliberately boring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a JVM runtime: a single process&lt;/li&gt;
&lt;li&gt;no high availability&lt;/li&gt;
&lt;li&gt;traditional direct method invocation between the orchestrator and the pipeline steps&lt;/li&gt;
&lt;li&gt;no database, just basic HTML responses to human queries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each step in the pipeline was just a function. Input came in as loosely structured key/value data. No rigid schema, no upfront modeling, no SQL persistence.&lt;/p&gt;
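&lt;p&gt;A minimal sketch of that idea in plain Java (the names &lt;code&gt;enrich&lt;/code&gt;, &lt;code&gt;route&lt;/code&gt;, and the keys are invented here for illustration; they are not TPF APIs):&lt;/p&gt;

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Illustrative only: each pipeline step is a plain function over key/value data.
public class TinyPipeline {
    public static void main(String[] args) {
        UnaryOperator<Map<String, Object>> enrich = in -> {
            Map<String, Object> out = new LinkedHashMap<>(in);
            out.put("priority", "EXPRESS".equals(in.get("service")) ? "high" : "normal");
            return out;
        };
        UnaryOperator<Map<String, Object>> route = in -> {
            Map<String, Object> out = new LinkedHashMap<>(in);
            out.put("hub", "high".equals(in.get("priority")) ? "VLC-1" : "VLC-2");
            return out;
        };

        // The "orchestrator" is just direct method invocation over a list of steps.
        List<UnaryOperator<Map<String, Object>>> steps = List.of(enrich, route);
        Map<String, Object> result = Map.of("id", "S-42", "service", "EXPRESS");
        for (var step : steps) result = step.apply(result);

        System.out.println(result.get("hub")); // the routing hint
    }
}
```

&lt;p&gt;No schema, no persistence: each step copies the map, adds what it knows, and passes it on.&lt;/p&gt;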

&lt;p&gt;She didn’t yet have to think about&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;execution tracking or audits&lt;/li&gt;
&lt;li&gt;fan-out/fan-in logic&lt;/li&gt;
&lt;li&gt;retry semantics&lt;/li&gt;
&lt;li&gt;error handling&lt;/li&gt;
&lt;li&gt;availability&lt;/li&gt;
&lt;li&gt;high throughput&lt;/li&gt;
&lt;li&gt;latency&lt;/li&gt;
&lt;li&gt;integrations with 3rd parties&lt;/li&gt;
&lt;li&gt;monitoring&lt;/li&gt;
&lt;li&gt;security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And because she had started using AI-assisted coding tools, even the tests came almost for free, as TPF’s step isolation made it trivial to generate meaningful unit tests for each transformation.&lt;br&gt;
It was fast. It was understandable. It worked and, crucially, it already had shape—even if nobody called it that yet.&lt;/p&gt;

&lt;p&gt;But one summer day, a Saharan dust storm swept over Valencia and drove temperatures up (a phenomenon known as calima). The A/C unit in the server room failed and, by the next morning, alerts were firing, CPUs were throttling, and someone mentioned, only half jokingly, the possibility of actual fire.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0tquvjm4qn8haoc57at.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0tquvjm4qn8haoc57at.webp" alt="Dust cloud over Gandia, Valencia" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>quarkus</category>
      <category>java</category>
      <category>tpf</category>
      <category>unittest</category>
    </item>
    <item>
      <title>Architectural mobility for stronger software</title>
      <dc:creator>Mariano Barcia</dc:creator>
      <pubDate>Tue, 07 Apr 2026 19:37:36 +0000</pubDate>
      <link>https://forem.com/mbarcia/architectural-mobility-for-stronger-software-2nh4</link>
      <guid>https://forem.com/mbarcia/architectural-mobility-for-stronger-software-2nh4</guid>
      <description>&lt;p&gt;There’s a concept in sports science that’s easy to overlook because it sounds so basic: “mobility”. Mobility is defined as “the ability of a joint to move actively through its full, functional range of motion with control, stability, and strength“.&lt;/p&gt;

&lt;p&gt;If you look at a simplified “athlete performance” pyramid, the foundation isn’t power or strength; it’s mobility. Everything else builds on top of it, yet we spend most of our time discussing the upper layers and pay very little attention to what sits at the base. Take away mobility, and strength falters.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;       / - \
      /     \
     / Power \    Athlete Performance Pyramid
    /_________\
   / Strength  \  
  /_____________\
 /   Mobility    \ 
/_________________\
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, the reasons mobility matters are many and complex, but they basically come down to how well the nervous system can recruit muscle fibres while staying healthy (e.g. through “reciprocal inhibition”).&lt;/p&gt;

&lt;p&gt;Putting aside the inherent complexity of the muscles in the human body, I think software systems behave in a similar way, and we spend most of our time discussing the upper layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;scalability&lt;/li&gt;
&lt;li&gt;throughput&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;whilst paying very little attention to what sits underneath all of that.&lt;/p&gt;

&lt;p&gt;Most software systems miss something fundamental: they cannot change shape without breaking themselves.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You move from containers to functions → everything rewrites&lt;/li&gt;
&lt;li&gt;You switch from REST to gRPC → contracts ripple through the system&lt;/li&gt;
&lt;li&gt;You change how remote work is executed → business logic gets entangled&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The system might be “strong” in its current form, but it lacks mobility. This coupling is rarely stated explicitly: runtime, transport, and protocol get treated as a single decision.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You deploy to containers → you expose REST → you serialise JSON.&lt;/li&gt;
&lt;li&gt;Or you go serverless → you wire HTTP → you pass payloads around.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing looks wrong; in fact, it feels coherent. But still:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;your execution model has dictated your transport&lt;/li&gt;
&lt;li&gt;your transport has dictated your protocol&lt;/li&gt;
&lt;li&gt;and all three together have dictated your system shape&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is what erodes “architectural mobility”:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You cannot change where things run without rewriting how they communicate&lt;/li&gt;
&lt;li&gt;You cannot change how they communicate without rewriting what they mean&lt;/li&gt;
&lt;li&gt;You cannot evolve protocols without locking yourself into a runtime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That mobility is what allows your business logic to evolve regardless of any external infrastructure decision, and will allow you to make these deliberate choices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;do your workloads belong in containers?&lt;/li&gt;
&lt;li&gt;do your workloads belong in serverless functions?&lt;/li&gt;
&lt;li&gt;do they benefit from gRPC, or REST?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In &lt;a href="https://pipelineframework.org" rel="noopener noreferrer"&gt;The Pipeline Framework&lt;/a&gt;, architectural mobility is enabled by decoupling the transport, the execution platform, and the runtime topology from one another, like so:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                     Transport
               (LOCAL / REST / gRPC / Proto-HTTP)
                           ↑
                           │
                           │
                           │
                           │
                           │
                           ●───────────────→ Execution Platform
                          /                (COMPUTE / FUNCTION)
                         /
                        /
                       /
                      ↓
             Runtime Topology
  (Monolith / Pipeline / Modular)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
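&lt;p&gt;To make the decoupling concrete, here is a plain-Java sketch (my own illustration, not TPF’s actual API): the business step never learns which transport carries its calls.&lt;/p&gt;

```java
import java.util.function.Function;

// Illustrative sketch: the business step is a plain function; a transport
// adapter wraps it at wiring time without the step knowing.
public class TransportChoice {
    // Business logic: knows nothing about REST, gRPC, or processes.
    static Function<String, String> step = id -> "routed:" + id;

    // "Transports": LOCAL is direct invocation; a real REST/gRPC adapter
    // would serialise the payload and send it over the network instead.
    static Function<String, String> bind(String transport, Function<String, String> s) {
        return switch (transport) {
            case "LOCAL" -> s;                 // in-process call
            case "REST"  -> id -> s.apply(id); // stand-in for an HTTP adapter
            default      -> throw new IllegalArgumentException(transport);
        };
    }

    public static void main(String[] args) {
        // Swapping the transport changes the wiring, not the business logic.
        System.out.println(bind("LOCAL", step).apply("S-42"));
        System.out.println(bind("REST", step).apply("S-42"));
    }
}
```

&lt;p&gt;The point of the sketch: the same &lt;code&gt;step&lt;/code&gt; is reachable over either transport, so changing where and how it runs is a wiring decision, not a rewrite.&lt;/p&gt;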

&lt;p&gt;Photo credit: &lt;a href="https://unsplash.com/es/@weareambitious" rel="noopener noreferrer"&gt;https://unsplash.com/es/@weareambitious&lt;/a&gt;&lt;/p&gt;

</description>
      <category>quarkus</category>
      <category>java</category>
      <category>software</category>
      <category>tpf</category>
    </item>
    <item>
      <title>From majestic monoliths to runtime topologies</title>
      <dc:creator>Mariano Barcia</dc:creator>
      <pubDate>Wed, 25 Mar 2026 21:50:26 +0000</pubDate>
      <link>https://forem.com/mbarcia/from-majestic-monoliths-to-runtime-topologies-497g</link>
      <guid>https://forem.com/mbarcia/from-majestic-monoliths-to-runtime-topologies-497g</guid>
      <description>&lt;p&gt;In the &lt;a href="https://dev.to/mbarcia/the-old-works-or-the-humble-monolith-5fdl"&gt;previous post&lt;/a&gt;, I used a scene from &lt;em&gt;The Eternaut&lt;/em&gt; — Favalli starting an old car after an EMP — as a way to introduce the “humble monolith” and how that differs from the "majestic monolith". Also, why the monolith vs microservices discussion is the tree in front of the forest: &lt;strong&gt;rigidity&lt;/strong&gt; is the real problem hiding behind it. So, how could we be more flexible?&lt;/p&gt;

&lt;p&gt;What if the same business logic could adopt not one but &lt;em&gt;many&lt;/em&gt; runtime shapes? I realised that the functional and reactive pipelines underpinning &lt;a href="https://pipelineframework.org" rel="noopener noreferrer"&gt;&lt;strong&gt;The Pipeline Framework&lt;/strong&gt;&lt;/a&gt;, paired with TPF’s own architecture, could actually give users a &lt;em&gt;choice&lt;/em&gt;: should I go for a monolith? Or deploy steps as microservices? I thought: why choose at all?&lt;/p&gt;

&lt;p&gt;Introducing: runtime topologies in &lt;strong&gt;The Pipeline Framework&lt;/strong&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The Pipeline Framework&lt;/strong&gt; treats the &lt;em&gt;business flow&lt;/em&gt; as the stable asset, and the &lt;em&gt;runtime topology&lt;/em&gt; as something that can change over time, and even co-exist (local environment vs. production). Let's take a look at the currently supported runtime topology shapes.&lt;/p&gt;

&lt;p&gt;None of these shapes is inherently “better.” They are just different ways of balancing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how much change you can isolate
&lt;/li&gt;
&lt;li&gt;how much infrastructure you want to operate
&lt;/li&gt;
&lt;li&gt;how much latency you can afford
&lt;/li&gt;
&lt;li&gt;where your security boundaries sit
&lt;/li&gt;
&lt;li&gt;how teams are organised
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And yet, in most systems, choosing one of these shapes ends up locking you in. TPF avoids that lock-in by elegantly adapting the inputs and outputs of each step at build time to match the runtime shape of choice.&lt;/p&gt;




&lt;h3&gt;Monolith&lt;/h3&gt;

&lt;p&gt;Everything runs in-process: steps, orchestrator, and plugins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7xdg5h0msy42onwbnjs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7xdg5h0msy42onwbnjs.png" alt="Monolith runtime topology" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the simplest possible setup. No network hops, minimal operational overhead, very direct debugging.&lt;/p&gt;

&lt;p&gt;The trade-off is clear: everything shares the same blast radius.&lt;/p&gt;




&lt;h3&gt;Pipeline Runtime&lt;/h3&gt;

&lt;p&gt;Here, the orchestrator is separated, but the pipeline steps still run in a grouped runtime. Plugins are also externalised as shared services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskr2uap6w7id48lbpsxb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskr2uap6w7id48lbpsxb.png" alt="Pipeline runtime topology" width="800" height="343"&gt;&lt;/a&gt;&lt;br&gt;
This tends to be a very practical middle ground.&lt;/p&gt;

&lt;p&gt;You get a clear ingress point, reduced exposure of internal components, and some separation of concerns — without fully embracing distributed complexity.&lt;/p&gt;




&lt;h3&gt;Modular / Distributed&lt;/h3&gt;

&lt;p&gt;Each step becomes independently deployable, and plugins remain shared services rather than being embedded per step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m83lrf8wyn5cyoulh34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m83lrf8wyn5cyoulh34.png" alt="Modular/distributed runtime topology" width="800" height="354"&gt;&lt;/a&gt;&lt;br&gt;
This gives you strong isolation, independent scaling, and clearer ownership boundaries.&lt;/p&gt;

&lt;p&gt;It also introduces the usual trade-offs: more infrastructure, more network hops, and more operational complexity.&lt;/p&gt;




&lt;p&gt;In the framework, the pipeline itself is defined independently of where it runs. Runtime mapping (&lt;code&gt;PipelineRuntimeMapping&lt;/code&gt; and its resolver) determines &lt;em&gt;placement&lt;/em&gt;, not behavior. In the &lt;a href="https://github.com/The-Pipeline-Framework/pipelineframework/blob/main/examples/csv-payments/README.md" rel="noopener noreferrer"&gt;&lt;code&gt;csv-payments&lt;/code&gt;&lt;/a&gt; example, the same pipeline can run as a monolith, inside a pipeline runtime, or in a more modular layout — without rewriting the business logic.&lt;/p&gt;

&lt;p&gt;That separation shows up elsewhere too. Step contracts define intent, mappers isolate boundaries, and services remain focused on transformation logic. Transport concerns don’t leak into the core, which means the system doesn’t become hostage to how components communicate.&lt;/p&gt;

&lt;p&gt;Even at generation level, there’s a single semantic model that gets projected into different execution modes. Local calls, gRPC, REST, protobuf-over-http — they’re not treated as fundamentally different architectures, but as different ways of expressing the same flow.&lt;/p&gt;

&lt;p&gt;And importantly, this isn’t just theoretical. The same reference system is built and tested in more than one topology, so the idea of switching shapes is exercised, not just claimed.&lt;/p&gt;

&lt;p&gt;While the three topologies above are literal values in a YAML config, none of this means architecture becomes automatic or point-and-click: you still choose your topology deliberately, and you still maintain the build.&lt;/p&gt;
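&lt;p&gt;As a rough sketch of what that configuration might look like (the key names below are hypothetical, purely for illustration; check the TPF docs for the actual schema):&lt;/p&gt;

```yaml
# Hypothetical sketch, not TPF's real schema:
# the topology is a config value, not a code change.
pipeline:
  runtime-topology: monolith   # or: pipeline, modular
```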

&lt;p&gt;But that choice stops being irreversible: if the business flow can outlive the topology, then moving from a monolith to something more distributed (or even back again) stops being a rewrite and becomes a transition.&lt;/p&gt;

&lt;p&gt;And that’s a very different place to be.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>quarkus</category>
      <category>java</category>
    </item>
    <item>
      <title>The old works! (or the humble monolith)</title>
      <dc:creator>Mariano Barcia</dc:creator>
      <pubDate>Mon, 23 Mar 2026 18:47:29 +0000</pubDate>
      <link>https://forem.com/mbarcia/the-old-works-or-the-humble-monolith-5fdl</link>
      <guid>https://forem.com/mbarcia/the-old-works-or-the-humble-monolith-5fdl</guid>
      <description>&lt;p&gt;In the Netflix series The Eternaut, there’s a moment that hits harder than it probably should.&lt;/p&gt;

&lt;p&gt;After the electromagnetic pulse wipes out anything electronic, the world just… stops. Modern cars are useless, cities freeze, everything familiar suddenly becomes fragile.&lt;/p&gt;

&lt;p&gt;Then Favalli finds an old car and tries to start it.&lt;/p&gt;

&lt;p&gt;And it works.&lt;/p&gt;

&lt;p&gt;“Lo viejo funciona, Juan.” &lt;em&gt;("the old works, Juan")&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It’s a simple line, but it lands because it does something subtle: it pulls you back in time. Not just to “older technology,” but to a different world — ten, twenty, forty years ago — when things worked differently, when we were different, when the assumptions we made about the future were completely different.&lt;/p&gt;

&lt;p&gt;That feeling shows up in a lot of stories. A forgotten machine that still works. An old tool that suddenly becomes essential. Not because it’s better, but because it was built for a different context — and somehow fits the present moment again.&lt;/p&gt;

&lt;p&gt;⸻&lt;/p&gt;

&lt;p&gt;Over the past few years, many microservices projects have turned into cautionary tales: systems that looked elegant on paper but became difficult to operate, evolve, or even understand.&lt;/p&gt;

&lt;p&gt;In response, the idea of the “majestic monolith” has made a comeback.&lt;/p&gt;

&lt;p&gt;And to be fair, there are majestic monoliths — just like there are beautifully restored classic cars. Carefully engineered, impressive, and sometimes exactly the right thing.&lt;/p&gt;

&lt;p&gt;But they are also very expensive, time-consuming, and sometimes over-engineered, which is why that framing can be misleading.&lt;/p&gt;

&lt;p&gt;Because most systems don’t need to be majestic. They need to be appropriate for their context, especially over the next one, two, or three years.&lt;/p&gt;

&lt;p&gt;If you’re honest about those constraints, the question becomes less ideological and more practical: what kind of system will let you move, adapt, and operate with the least friction?&lt;/p&gt;

&lt;p&gt;⸻&lt;/p&gt;

&lt;p&gt;That’s where the idea of the “humble monolith” feels more grounded.&lt;/p&gt;

&lt;p&gt;Not as a statement, but as a baseline.&lt;/p&gt;

&lt;p&gt;A system with fewer moving parts, clearer boundaries, and behavior that is easier to reason about when something goes wrong. Something you can understand without needing to reconstruct a distributed narrative across multiple components.&lt;/p&gt;

&lt;p&gt;Of course, monoliths can degrade into tightly coupled, hard-to-change systems. We’ve all seen that.&lt;/p&gt;

&lt;p&gt;But so can distributed architectures — and often in ways that are harder to see and harder to fix.&lt;/p&gt;

&lt;p&gt;⸻&lt;/p&gt;

&lt;p&gt;Which points to the real issue.&lt;/p&gt;

&lt;p&gt;The problem is not monolith vs microservices.&lt;/p&gt;

&lt;p&gt;It’s rigidity.&lt;/p&gt;

&lt;p&gt;We make architectural decisions early, based on assumptions about scale, growth, and future needs. And then those decisions get embedded into the system in ways that are difficult to reverse.&lt;/p&gt;

&lt;p&gt;What started as a good fit becomes a constraint.&lt;/p&gt;

&lt;p&gt;The system needs to evolve, but the architecture doesn’t.&lt;/p&gt;

&lt;p&gt;⸻&lt;/p&gt;

&lt;p&gt;Maybe that’s why moments like “lo viejo funciona” resonate so much.&lt;/p&gt;

&lt;p&gt;Not because the past was better, but because it reminds us that different choices made sense under different conditions — and that those choices can still be valid when circumstances change.&lt;/p&gt;

&lt;p&gt;In software, we rarely give ourselves that flexibility.&lt;/p&gt;

&lt;p&gt;We pick a shape, and we’re stuck with it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9v0yrj3a810a9k9s3w3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9v0yrj3a810a9k9s3w3.webp" alt="Juan and Favalli in The Eternaut" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next post, I’ll explore a different approach: what it looks like when architecture becomes a runtime decision, and how the same system can take on different shapes — from a humble monolith to something more distributed — without rewriting everything.&lt;/p&gt;

&lt;p&gt;Because sometimes, what matters isn’t choosing the right architecture upfront.&lt;/p&gt;

&lt;p&gt;It’s being able to choose again when the world changes.&lt;/p&gt;




&lt;p&gt;Thank you for reading this far; I hope you find these ideas interesting. Now, have you worked on monoliths? What would you call yours? :-) Or perhaps you’ve contributed to projects using microservices or serverless architectures? I’m curious to hear from you in the comments!&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Photo credits:&lt;br&gt;
El Eternauta. César Troncoso As Favalli In El Eternauta. Cr. Marcos Ludevid / Netflix ©2025&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>architecture</category>
      <category>backend</category>
      <category>discuss</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>The Pipeline Framework is out</title>
      <dc:creator>Mariano Barcia</dc:creator>
      <pubDate>Wed, 03 Dec 2025 15:33:55 +0000</pubDate>
      <link>https://forem.com/mbarcia/the-pipeline-framework-is-out-70f</link>
      <guid>https://forem.com/mbarcia/the-pipeline-framework-is-out-70f</guid>
      <description>&lt;p&gt;Every major pipeline or workflow engine today is either heavyweight, cluster-oriented, monolithic, or tied to Python. Nobody has built a microservice-native, Java/Quarkus/JVM, gRPC-first, type-safe pipeline engine, like The Pipeline Framework.&lt;/p&gt;

&lt;p&gt;Happy to announce I've published TPF to Maven Central. Feel free to ask me anything!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pipelineframework.org" rel="noopener noreferrer"&gt;https://pipelineframework.org&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Photo: &lt;a href="https://www.flickr.com/photos/quintanomedia/" rel="noopener noreferrer"&gt;Anthony Quintano Media&lt;/a&gt;&lt;/p&gt;

</description>
      <category>tpf</category>
      <category>quarkus</category>
      <category>microservices</category>
      <category>java</category>
    </item>
    <item>
      <title>Introducing The Pipeline Framework</title>
      <dc:creator>Mariano Barcia</dc:creator>
      <pubDate>Wed, 24 Sep 2025 22:02:09 +0000</pubDate>
      <link>https://forem.com/mbarcia/introducing-the-pipeline-framework-3b45</link>
      <guid>https://forem.com/mbarcia/introducing-the-pipeline-framework-3b45</guid>
      <description>&lt;p&gt;&lt;a href="https://pipelineframework.org" rel="noopener noreferrer"&gt;https://pipelineframework.org&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Pipeline Framework is a powerful tool for building reactive pipeline processing systems. It simplifies the development of distributed systems by providing a consistent way to create, configure, and deploy pipeline steps.&lt;/p&gt;

&lt;p&gt;Key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Reactive programming&lt;/strong&gt;: built on top of Mutiny for non-blocking operations&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Annotation-based configuration&lt;/strong&gt;: simplifies adapter generation with @PipelineStep&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;gRPC and REST support&lt;/strong&gt;: automatically generates adapters for both communication protocols&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Modular design&lt;/strong&gt;: clear separation between runtime and deployment components&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Auto-generation&lt;/strong&gt;: generates the necessary infrastructure at build time&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Observability&lt;/strong&gt;: built-in metrics, tracing, and logging support&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Error handling&lt;/strong&gt;: comprehensive error handling with DLQ support&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Concurrency control&lt;/strong&gt;: virtual threads and backpressure management&lt;/li&gt;
&lt;/ul&gt;
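&lt;p&gt;As a rough illustration of the annotation-based idea (the &lt;code&gt;@PipelineStep&lt;/code&gt; declared below is a local placeholder so the sketch compiles standalone; TPF’s real annotation and its attributes may differ):&lt;/p&gt;

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Illustrative sketch: a placeholder @PipelineStep is declared locally so this
// compiles on its own; it only demonstrates the annotation-driven approach.
public class AnnotatedStepSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface PipelineStep {
        String inputType();
        String outputType();
    }

    @PipelineStep(inputType = "PaymentRecord", outputType = "PaymentStatus")
    static class SendPaymentStep {
        String apply(String paymentRecord) {
            return "SENT:" + paymentRecord; // stand-in for the real transformation
        }
    }

    public static void main(String[] args) {
        // At build time a framework can read this metadata to generate adapters.
        PipelineStep meta = SendPaymentStep.class.getAnnotation(PipelineStep.class);
        System.out.println(meta.inputType() + " -> " + meta.outputType());
        System.out.println(new SendPaymentStep().apply("P-1"));
    }
}
```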

</description>
      <category>quarkus</category>
      <category>java</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Introducing: the pipeline framework</title>
      <dc:creator>Mariano Barcia</dc:creator>
      <pubDate>Wed, 17 Sep 2025 14:58:44 +0000</pubDate>
      <link>https://forem.com/mbarcia/introducing-the-pipeline-framework-44i</link>
      <guid>https://forem.com/mbarcia/introducing-the-pipeline-framework-44i</guid>
      <description>&lt;p&gt;Too often we see development teams struggle to finish a project. Here is a nice open-source framework I wrote to help with your upcoming microservices (or legacy!) project: the pipeline framework. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://pipelineframework.org" rel="noopener noreferrer"&gt;https://pipelineframework.org&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome to the Pipeline Framework - a robust, scalable, and maintainable solution for processing data through a series of steps with built-in benefits for high-throughput, distributed systems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Step-based Processing: Each business logic operation is encapsulated in a step that implements a specific interface.&lt;/li&gt;
&lt;li&gt;Reactive Programming: Steps use Mutiny reactive streams for non-blocking I/O operations.&lt;/li&gt;
&lt;li&gt;Type Safety: Steps are strongly typed with clear input and output types that chain together.&lt;/li&gt;
&lt;li&gt;Configuration Management: Steps can be configured globally or individually for retry logic, concurrency, and more.&lt;/li&gt;
&lt;li&gt;Observability: Built-in metrics, tracing, and logging for monitoring and debugging.&lt;/li&gt;
&lt;/ul&gt;
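&lt;p&gt;The type-safety point can be sketched in plain Java (the record names below are hypothetical, loosely inspired by the CSV payments idea): each step’s output type must match the next step’s input type, so a mis-ordered pipeline simply doesn’t compile.&lt;/p&gt;

```java
import java.util.function.Function;

// Sketch of the type-safety idea (plain Java, not TPF's actual interfaces):
// steps chain because the output type of one is the input type of the next.
public class TypedChain {
    record CsvRow(String account, long cents) {}
    record Payment(String account, long cents, String currency) {}
    record Receipt(String account, String status) {}

    public static void main(String[] args) {
        Function<CsvRow, Payment> parse = r -> new Payment(r.account(), r.cents(), "EUR");
        Function<Payment, Receipt> send = p -> new Receipt(p.account(), "OK");

        // andThen only compiles because the types line up: CsvRow -> Payment -> Receipt.
        Function<CsvRow, Receipt> pipeline = parse.andThen(send);
        System.out.println(pipeline.apply(new CsvRow("ES91-0001", 1250)).status());
    }
}
```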

&lt;p&gt;It doesn't ask for much: only that you translate the business case into a "pipeline" model, with each step having a defined input and output. You &lt;em&gt;only&lt;/em&gt; write the business logic for each step, and the framework does all the Kubernetes heavy lifting for you (messaging, auto-persistence, error handling, etc.)&lt;/p&gt;

&lt;p&gt;I'm in the process of publishing it as an independent set of JARs on Maven Central. At the moment it is included in the "CSV Payments Processing" system, which serves as its reference implementation. Reach out in the comments if you would like to know more. Cheers.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>quarkus</category>
      <category>microservices</category>
      <category>java</category>
    </item>
    <item>
      <title>The pioneering microservice</title>
      <dc:creator>Mariano Barcia</dc:creator>
      <pubDate>Wed, 20 Aug 2025 15:35:40 +0000</pubDate>
      <link>https://forem.com/mbarcia/the-pioneering-microservice-41cd</link>
      <guid>https://forem.com/mbarcia/the-pioneering-microservice-41cd</guid>
      <description>&lt;p&gt;From the perspective of a monolith, handing over ownership of a piece of data to a microservice feels like a hostile takeover. To make matters worse, the monolith doesn’t just lose those features — it has to be refactored to depend on the new service instead. If the whole point of migration was to stop investing in the monolith, this can feel like regression. Strong justification is required.&lt;/p&gt;

&lt;p&gt;This is why the first microservice is always the hardest.&lt;/p&gt;

&lt;h2&gt;Two challenges at once&lt;/h2&gt;

&lt;p&gt;When you launch that first service, you’re not just building a new component. You’re building the platform that supports it — infrastructure, pipelines, processes, observability. At the same time, you’re modifying the monolith to work with it.&lt;/p&gt;

&lt;p&gt;What used to be a straightforward in-memory call now becomes a network request. Data that lived in the same heap must now travel through an API. Latency appears where there was none, and eventual consistency creeps into flows that were once instantaneous.&lt;/p&gt;

&lt;p&gt;The dual effort — building the new while bending the old — makes the first step disproportionately painful.&lt;/p&gt;

&lt;h2&gt;The grind of infrastructure&lt;/h2&gt;

&lt;p&gt;On paper, spinning up microservices sounds simple. In practice, you often need to establish a baseline of tooling before you can ship anything meaningful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Containerization (Dockerfiles, images, registries)&lt;/li&gt;
&lt;li&gt;Orchestration (Kubernetes clusters or alternatives)&lt;/li&gt;
&lt;li&gt;Messaging or event streams (Kafka, RabbitMQ, SQS, …)&lt;/li&gt;
&lt;li&gt;API gateways, authentication, and routing&lt;/li&gt;
&lt;li&gt;Observability (logging, metrics, tracing)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And sometimes, service meshes or other advanced tools sneak in before you’re ready. All of this setup is invisible to business stakeholders, yet it consumes enormous time and energy.&lt;/p&gt;

&lt;h2&gt;Data gravity and adaptation&lt;/h2&gt;

&lt;p&gt;Meanwhile, the monolith doesn’t go quietly. It has strong “data gravity” — everything around it is pulled into its orbit because it already has the data and the logic close at hand. Asking it to delegate that responsibility to a new service means changing well-worn paths: queries become requests, joins become lookups, consistency guarantees weaken.&lt;/p&gt;

&lt;p&gt;This is the moment when a team can feel like progress has slowed to a crawl.&lt;/p&gt;

&lt;h2&gt;The turning point&lt;/h2&gt;

&lt;p&gt;But once the first microservice is truly in production, something important shifts. The monolith no longer owns everything. There is now precedent: data and responsibility can be transferred out.&lt;/p&gt;

&lt;p&gt;You’ve also learned hard lessons — how to replace in-memory function calls with inter-process communication, how to cope with retries and idempotency, how to migrate ownership of data, and how to operate a new service in production.&lt;/p&gt;

&lt;p&gt;The next service won’t require reinventing all of this. The platform is in place, the practices are established, and the team’s confidence has grown.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>cloudnative</category>
      <category>migrations</category>
    </item>
    <item>
      <title>On data sovereignty and the black hole monolith</title>
      <dc:creator>Mariano Barcia</dc:creator>
      <pubDate>Thu, 14 Aug 2025 15:13:27 +0000</pubDate>
      <link>https://forem.com/mbarcia/on-data-sovereignty-and-the-black-hole-monolith-523m</link>
      <guid>https://forem.com/mbarcia/on-data-sovereignty-and-the-black-hole-monolith-523m</guid>
      <description>&lt;p&gt;In most organisations, everything INSIDE the main monolithic system has access to pretty much all the relevant data it needs. It is like second nature and as such, we rarely stop to consider what it’s like to not have access to data.&lt;/p&gt;

&lt;p&gt;But crucially, anything OUTSIDE this main monolithic system has access to... nothing. For a new microservice to be meaningful, it will need to manage a piece of the data pie: it will need access to SOMETHING.&lt;/p&gt;

&lt;p&gt;That is when many will be tempted to access the main database directly, trying to bypass the monolith. However, a shared database between microservices is an anti-pattern. A strong rule in a microservices architecture is: &lt;em&gt;No shared database between services&lt;/em&gt;. Data should be owned and managed by a single service: this is called data sovereignty. Read about this principle in more detail &lt;a href="https://learn.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/data-sovereignty-per-microservice" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j9ygjrl8dv9a3szsusw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j9ygjrl8dv9a3szsusw.png" alt="Data sovereignty per microservice - .NET architecture ebook" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Without significant effort, any new development outside the monolith is a "pariah", isolated from existing data and from any relationship with the relevant areas of the business.&lt;/p&gt;

&lt;p&gt;That is why I say that the monolith has this &lt;em&gt;black hole&lt;/em&gt; effect on the business, pulling in more and more data under its control. This happens because it’s faster and simpler to bolt new features and data onto the monolith rather than build a new service from scratch. Because of that, and because microservices bring their own complexities, there is never a good time to start a microservice.&lt;/p&gt;

&lt;p&gt;Like a black hole bending the space around it, the monolith warps development priorities and architectures. Anything that gets too close is pulled in — data, features, even whole projects — until escape becomes almost impossible. &lt;/p&gt;

&lt;p&gt;A monolith has an extraordinarily influential effect on any new development, just as a black hole will absorb everything in its vicinity by the sheer power of its mass and gravity. Escaping that gravitational pull is hard — but the first service to truly orbit outside its reach will define the future of your architecture.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>antipatterns</category>
    </item>
    <item>
      <title>From Monolith to Microservices: surfing the pipeline</title>
      <dc:creator>Mariano Barcia</dc:creator>
      <pubDate>Thu, 24 Jul 2025 19:49:23 +0000</pubDate>
      <link>https://forem.com/mbarcia/from-monolith-to-microservices-surfing-the-pipeline-nmk</link>
      <guid>https://forem.com/mbarcia/from-monolith-to-microservices-surfing-the-pipeline-nmk</guid>
      <description>&lt;p&gt;In my &lt;a href="https://dev.to/mbarcia/from-monolith-to-microservices-lessons-learned-migrating-the-csv-payments-processing-project-part-3p5m"&gt;previous post&lt;/a&gt; I quickly covered how I did the breakup of the majestic monolith into smaller services. In the end, I went for this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An input streaming service (parse CSV input files)&lt;/li&gt;
&lt;li&gt;An output streaming service (write CSV output files)&lt;/li&gt;
&lt;li&gt;A service for sending payments to a 3rd-party&lt;/li&gt;
&lt;li&gt;A service for polling for responses from that 3rd-party&lt;/li&gt;
&lt;li&gt;A service to produce (decorated) output records.&lt;/li&gt;
&lt;li&gt;An orchestrator service (to read the folder and orchestrate pipeline steps), inspired by the Saga pattern. This service also holds the CLI application that is used to kick off the system.&lt;/li&gt;
&lt;/ol&gt;
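&lt;p&gt;In-process, this step-per-service split looks like ordinary function composition. The sketch below is illustrative only — the step names are invented, and the real services communicate over gRPC rather than via in-memory calls:&lt;/p&gt;

```java
import java.util.function.Function;

// Illustrative only: each microservice owns one single-direction step, so the
// whole flow composes like a chain of functions. Step names are invented.
public class Pipeline {
    static Function<String, String> parseCsv = line -> "parsed:" + line;
    static Function<String, String> sendPayment = rec -> "sent:" + rec;
    static Function<String, String> produceOutput = rec -> "out:" + rec;

    // Adding a step is just one more andThen(); in the real system the steps
    // are remote services, but the shape of the composition is the same.
    static String run(String input) {
        return parseCsv.andThen(sendPayment).andThen(produceOutput).apply(input);
    }

    public static void main(String[] args) {
        System.out.println(run("42,GBP"));
    }
}
```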

&lt;p&gt;However, there is a lot more to this than it might appear at first sight. I first started asking myself the following question.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Is there a tried-and-true methodology for breaking up a monolith into smaller microservices?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Domain-Driven Design
&lt;/h2&gt;

&lt;p&gt;The number one methodology is called Domain-Driven Design (DDD). By carefully analysing your domain, you are able to break up the monolith by business capability, not by technical layers. Each microservice aligns to a so-called "bounded context" (e.g., Billing, Inventory, User Management). &lt;/p&gt;

&lt;h2&gt;
  
  
  Domain logic and I/O
&lt;/h2&gt;

&lt;p&gt;Whilst I had given DDD a shot at Commonplace, I did not see it as a great fit for my PoC project. I remembered watching this &lt;a href="https://www.youtube.com/watch?v=ipceTuJlw-M" rel="noopener noreferrer"&gt;conference talk by Scott Wlaschin&lt;/a&gt; some time ago.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/ipceTuJlw-M"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;I found it fascinating, as I saw so much synergy with what I was doing with the CSV Payments Processing PoC. There is also a similar formulation called "Functional Core, Imperative Shell" (&lt;a href="https://www.youtube.com/watch?v=P1vES9AgfC4" rel="noopener noreferrer"&gt;link to video&lt;/a&gt;), where DDD is also very much at play.&lt;/p&gt;

&lt;h2&gt;
  
  
  It's the pipeline, stupid!
&lt;/h2&gt;

&lt;p&gt;Later on, when I was thinking about this "big breakup", I had a light-bulb moment when I realised that, naturally, the boundaries were right there in front of me. A microservice per step of the pipeline was all I needed, give or take. And it was so much simpler than I thought it would be! I thought that was brilliant and gave myself a good pat on the back.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits
&lt;/h2&gt;

&lt;p&gt;Having a microservice per step of the processing pipeline has several benefits.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extensibility: very easy to add new processing steps as microservices
&lt;/li&gt;
&lt;li&gt;Easy to orchestrate: it plays along with the original design&lt;/li&gt;
&lt;li&gt;Aligned with DDD: as each step in the pipeline takes care of a different business capability&lt;/li&gt;
&lt;li&gt;Simplicity: business logic and data always flow in a single direction&lt;/li&gt;
&lt;li&gt;Loosely coupled: does not create any new dependencies. There are services that need to deal with the file system, whilst others just don't, and that is only dictated by the business logic and not forced by the architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Photo Credit: &lt;a href="https://www.flickr.com/photos/surfingthenations/4094560762" rel="noopener noreferrer"&gt;Pipeline - North Shore&lt;/a&gt;&lt;/p&gt;

</description>
      <category>functional</category>
      <category>ddd</category>
      <category>java</category>
      <category>microservices</category>
    </item>
    <item>
      <title>From Monolith to Microservices: Lessons Learned Migrating the CSV Payments Processing project (Part One)</title>
      <dc:creator>Mariano Barcia</dc:creator>
      <pubDate>Thu, 03 Jul 2025 10:51:32 +0000</pubDate>
      <link>https://forem.com/mbarcia/from-monolith-to-microservices-lessons-learned-migrating-the-csv-payments-processing-project-part-3p5m</link>
      <guid>https://forem.com/mbarcia/from-monolith-to-microservices-lessons-learned-migrating-the-csv-payments-processing-project-part-3p5m</guid>
      <description>&lt;p&gt;I've been building a CSV Payments Processing system, based on a real-world project I've worked on at &lt;a href="https://www.worldfirst.com/eu/" rel="noopener noreferrer"&gt;Worldfirst&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/mbarcia/CSV-Payments-PoC" rel="noopener noreferrer"&gt;https://github.com/mbarcia/CSV-Payments-PoC&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Originally, I set out only to write a better version using a &lt;a href="https://youtu.be/ipceTuJlw-M?si=SJapgerJvfCLIsVE" rel="noopener noreferrer"&gt;"pipeline-oriented" design&lt;/a&gt; based on &lt;a href="https://refactoring.guru/design-patterns/command" rel="noopener noreferrer"&gt;the Command pattern&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqihi3zf2z84lejwqq9i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqihi3zf2z84lejwqq9i.png" alt="Command pipeline for the CSV Payments Processing system" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The idea was to use the project as a sandbox/playground, that could also serve as a proof of concept and to communicate new ideas to other people.&lt;/p&gt;

&lt;p&gt;About the time I had finished what I originally set out to do, I started exploring microservices at &lt;a href="https://www.commonplace.is" rel="noopener noreferrer"&gt;Commonplace&lt;/a&gt;, along with a better automated testing strategy. &lt;/p&gt;

&lt;p&gt;So, although I was happy with how the project achieved its original goals, as I was making progress learning all about microservices, I decided to evolve the project even further. First things first, I was interested in showing how easy automated testing was when the underlying design is good. Hence, I got it to a point of full meaningful testing coverage after a short time, enlisting the help of AI to write unit tests. Yeah I know, TDD right?&lt;/p&gt;

&lt;p&gt;By then, I had also achieved a good understanding of microservices, not least because I have been managing the Kubernetes-based Commonplace platform, and decided to apply these learnings to the CSV project.&lt;/p&gt;

&lt;h2&gt;
  
  
  The big breakup
&lt;/h2&gt;

&lt;p&gt;As a prerequisite, I had to decide how I was going to do the breakup of the system into smaller services. This merits a blog post on its own but, long story short, I decided to break up the application more or less into &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An input streaming service (parse CSV files)&lt;/li&gt;
&lt;li&gt;An output streaming service (write CSV results)&lt;/li&gt;
&lt;li&gt;A number of pipeline steps (send payment, polling, produce output).&lt;/li&gt;
&lt;li&gt;An orchestrator (read folder, orchestrate pipeline steps, handle errors), inspired by &lt;a href="https://microservices.io/patterns/data/saga.html" rel="noopener noreferrer"&gt;the Saga pattern&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Migrating from Spring Boot to Quarkus 3
&lt;/h2&gt;

&lt;p&gt;Originally, the project started as a Spring Boot CLI app, classic. Again enlisting the help of ChatGPT, I managed to refactor a Spring Boot CLI application into a multi-module microservices architecture over the course of a few intense sessions. &lt;/p&gt;

&lt;p&gt;This was no copy-paste exercise. Along the way, I uncovered a treasure trove of lessons in Quarkus configuration, build systems, dependency management, and inter-service communication. A part of me knew I had started my journey into The Rabbit Hole though, so let's start from the beginning: Maven dependencies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build Mayhem: Maven vs. Gradle
&lt;/h3&gt;

&lt;p&gt;I considered switching to Gradle for its composite build capabilities but stuck with Maven for simplicity and IntelliJ IDEA support. Quarkus works well with Maven multi-module projects, and IDEA recognizes and manages them cleanly—even when all modules live in a single Git repo.&lt;/p&gt;

&lt;p&gt;Still, Maven wasn’t always smooth sailing. Issues included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mojo execution errors due to conflicting plugin configurations&lt;/li&gt;
&lt;li&gt;Obsolete Java version warnings (&lt;code&gt;source 8 is obsolete&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Javadoc plugin failures due to undocumented interfaces&lt;/li&gt;
&lt;li&gt;Lombok not being picked up by the Maven compiler (even though it worked in the IDE)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each problem required its own fix—from upgrading plugin versions to tweaking &lt;code&gt;maven-compiler-plugin&lt;/code&gt; settings and suppressing irrelevant warnings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Toward Microservices: Structuring and Sharing Code
&lt;/h3&gt;

&lt;p&gt;The monolith eventually gave way to a structured microservice approach. Early on, I realised it was going to be far easier to share a "common" domain module, as I wasn't going to get things right on the first attempt. So I created such a common/shared module, which housed reusable domain classes and interfaces shared across services.&lt;/p&gt;

&lt;p&gt;Here I faced a critical architectural decision: &lt;strong&gt;how should services communicate?&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Thoughts on synchronous and asynchronous communication
&lt;/h3&gt;

&lt;p&gt;Async communication seems to be the preferred method for communication between microservices. But I see async as a higher-level tier wrapping or adapting the tier of sync APIs: a microservice taking gRPC and REST calls now, and responding to asynchronous events in the future. Naturally then, I decided to go with a synchronous model first.&lt;/p&gt;

&lt;p&gt;The sync vs async topic is quite interesting. Adding a constellation of message queues as the "glue" between microservices needs justification in my opinion. SQS or Kafka are nice, but they are also complex.&lt;/p&gt;

&lt;p&gt;If you take Project Loom's virtual threads (=compute density), structured concurrency, Quarkus, Hibernate reactive, Kubernetes high availability, and Mutiny streams (with back pressure and retry w/backoff), you have pretty much solved the issue of availability of distributed services running remotely on heterogeneous hardware capacity (and therefore, varying availability).&lt;/p&gt;

&lt;p&gt;Is it too foolish to think that async comms might become irrelevant in the future, in cases like a greenfield internal project? If sync comms get much better, it might just become the preferred choice.&lt;/p&gt;

&lt;p&gt;But as I said: the two comms models are not mutually exclusive in my humble opinion. For me, it was absolutely fine to do sync comms at the beginning, and leave async for phase 2 when the services are deemed robust enough. Of course, I expect asynchronous to require some re-factoring, but it should mostly be a "wrapper" of the existing stuff, plus the addition of the feature itself. &lt;/p&gt;

&lt;p&gt;Very curious to hear comments on this topic! &lt;/p&gt;

&lt;h3&gt;
  
  
  Choosing gRPC Over REST
&lt;/h3&gt;

&lt;p&gt;REST felt unnecessarily heavyweight for intra-service communication within the same organization, especially when services live on the same cluster or host. I settled on &lt;strong&gt;gRPC&lt;/strong&gt; for its:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Efficient binary protocol&lt;/li&gt;
&lt;li&gt;Built-in support for streaming&lt;/li&gt;
&lt;li&gt;Strong API contracts via &lt;code&gt;.proto&lt;/code&gt; files&lt;/li&gt;
&lt;li&gt;Mature integration with modern dev stacks and Kubernetes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It was clear that gRPC offered the best mix of performance and developer productivity—especially when paired with Quarkus’s excellent Protobuf tooling.&lt;/p&gt;
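&lt;p&gt;For a flavour of what "contract-first" looks like, here is a hypothetical &lt;code&gt;.proto&lt;/code&gt; contract for one pipeline step. The service, message, and field names are invented for illustration and are not taken from the PoC repository:&lt;/p&gt;

```proto
// Hypothetical contract for the "send payment" step; names are illustrative.
syntax = "proto3";

package payments;

service SendPaymentService {
  // Unary call; a streaming variant could equally be used.
  rpc Send (PaymentRequest) returns (PaymentResponse);
}

message PaymentRequest {
  string payment_id = 1;
  string currency = 2;
  string amount = 3; // decimal carried as a string to avoid float rounding
}

message PaymentResponse {
  string status = 1;
}
```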

&lt;p&gt;Later on, I'd learn about Vert.x and its dual support for gRPC and REST. Again, another non-mutually-exclusive choice! Happy days!&lt;/p&gt;

&lt;h3&gt;
  
  
  Kicking things off: A CLI App with Quarkus and Picocli
&lt;/h3&gt;

&lt;p&gt;The CLI tier shifted to a Quarkus-based CLI tool using &lt;code&gt;@QuarkusMain&lt;/code&gt; and &lt;code&gt;QuarkusApplication&lt;/code&gt; to bootstrap logic. I added some much-needed convenience features along the way. Early challenges included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Passing command-line arguments (e.g., &lt;code&gt;--csv-folder&lt;/code&gt;) via Picocli&lt;/li&gt;
&lt;li&gt;Getting &lt;code&gt;@Inject&lt;/code&gt; to work properly within CLI apps&lt;/li&gt;
&lt;li&gt;Managing configuration overrides for testing without relying solely on &lt;code&gt;@QuarkusTest&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Eventually, I settled on combining Quarkus's DI with &lt;code&gt;@QuarkusMain&lt;/code&gt; and Picocli’s &lt;code&gt;@CommandLine&lt;/code&gt;. It took several iterations to get the CLI runtime and testing environment to coexist without interfering.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Quarkus apps are powerful&lt;/strong&gt;, but require careful setup when combining dependency injection, Picocli, and testing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-module Maven setups scale well&lt;/strong&gt;, especially with good IDE support. But be ready to fight your tooling when things go wrong.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code sharing needs structure&lt;/strong&gt;: create clean interfaces and use tools like Lombok and config mapping to reduce boilerplate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;gRPC is ideal for internal microservice communication&lt;/strong&gt;, offering the speed and contract-first development REST often lacks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Coming up on Part Two
&lt;/h2&gt;

&lt;p&gt;I was much impressed with Quarkus, its capabilities and ease of use. Having broken up the Springboot app into these new foundational modules, I felt excited and wanted to keep on going at a steady pace. &lt;/p&gt;

&lt;p&gt;Next steps involve things like defining proper &lt;code&gt;.proto&lt;/code&gt; APIs, building mapper classes (MapStruct), redefining concurrency processing, and more.&lt;/p&gt;

&lt;p&gt;-&lt;br&gt;
Cover photo by Sincerely Media at Unsplash&lt;/p&gt;

</description>
      <category>quarkus</category>
      <category>springboot</category>
      <category>unittest</category>
      <category>microservices</category>
    </item>
    <item>
      <title>What I've learned about distributed services (part three)</title>
      <dc:creator>Mariano Barcia</dc:creator>
      <pubDate>Wed, 25 Jun 2025 12:27:29 +0000</pubDate>
      <link>https://forem.com/mbarcia/what-ive-learned-about-distributed-services-part-three-2c05</link>
      <guid>https://forem.com/mbarcia/what-ive-learned-about-distributed-services-part-three-2c05</guid>
      <description>&lt;p&gt;This is an anecdote really, but draws a very interesting conclusion.&lt;/p&gt;

&lt;p&gt;As it turns out, when you venture out of the boundaries of a single process/application (a.k.a. the monolith), you find yourself missing tools that used to make your life easier as a (monolith) developer. But it's the paradigm change that calls for a great deal of adaptation.&lt;/p&gt;

&lt;p&gt;One of these things in Java land is the compiler's ability to "check" that exceptions are handled properly, both downstream and upstream of any method. &lt;/p&gt;

&lt;p&gt;But once services become remote, the domain objects and their exceptions no longer live within the confines of a single repository, as is the case with a monolithic application.&lt;/p&gt;

&lt;p&gt;Hence, the libraries you need are now IDL definitions and/or several RPC (or REST) APIs, which are not able to leverage the power of the Java compiler's checked exceptions feature.&lt;/p&gt;

&lt;p&gt;So what I've found in some of the pre-existing code is that the authors started to use and abuse the runtime exceptions, as most services were only able to issue an RPC failure with an error code to the callers.&lt;/p&gt;

&lt;p&gt;So, even when you were not dealing with remote calls, internal helper libraries only used runtime exceptions instead of the regular checked exceptions, effectively bypassing the compiler and freeing developers from doing any sort of exception handling.&lt;/p&gt;

&lt;p&gt;I recall this was a double-edged sword, as in some cases I found myself needing exception handling. But overall (wink wink, Golang) it wasn't too bad!&lt;/p&gt;

&lt;p&gt;The way it worked was: every service had a top-tier exception handling code, so every runtime exception would bubble up and be caught by this code before returning to the caller. This provided for the proper error codes to be relayed to the caller, instead of just a generic runtime failure.&lt;/p&gt;
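&lt;p&gt;A minimal sketch of that top-tier handler in plain Java — the exception types and numeric codes here are invented for illustration, not taken from the actual services:&lt;/p&gt;

```java
// Sketch of the "top-tier" handler described above: runtime exceptions bubble
// up to a single catch site that maps them onto error codes for the caller.
// The exception types and numeric codes are invented for illustration.
public class ErrorBoundary {
    static class NotFoundException extends RuntimeException {}
    static class ValidationException extends RuntimeException {}

    static int handle(Runnable serviceCall) {
        try {
            serviceCall.run();
            return 0;   // success
        } catch (NotFoundException e) {
            return 404; // specific, meaningful code relayed to the caller
        } catch (ValidationException e) {
            return 422;
        } catch (RuntimeException e) {
            return 500; // generic runtime failure: the last resort
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(() -> { throw new NotFoundException(); }));
    }
}
```

The point is that callers see a stable error code instead of a generic RPC failure, even though no checked exceptions cross the service boundary.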

&lt;p&gt;Now that I'm migrating a Spring application to Quarkus (and breaking it into microservices in the process), I'm learning that a few things are different from the SOFA stack. I reckon the stack trace of the runtime exception can actually be sent back to the caller. I'll cover a bit more of this in a future post.&lt;/p&gt;

&lt;p&gt;For now, I think I understand why the creators of a newer language like Golang took the bold decision to leave out checked exceptions altogether. After all, if developers are working in a microservices environment, checked exceptions are less useful.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>exceptions</category>
      <category>java</category>
      <category>grpc</category>
    </item>
  </channel>
</rss>
