<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Robert Goniszewski</title>
    <description>The latest articles on Forem by Robert Goniszewski (@goniszewski).</description>
    <link>https://forem.com/goniszewski</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F202144%2F6a2dd115-3a56-4fc8-8837-46c11a235e26.jpg</url>
      <title>Forem: Robert Goniszewski</title>
      <link>https://forem.com/goniszewski</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/goniszewski"/>
    <language>en</language>
    <item>
      <title>Decoupling a Live App with Domain Events (Part 2)</title>
      <dc:creator>Robert Goniszewski</dc:creator>
      <pubDate>Sat, 21 Mar 2026 10:15:38 +0000</pubDate>
      <link>https://forem.com/goniszewski/decoupling-a-live-app-with-domain-events-part-2-4oio</link>
      <guid>https://forem.com/goniszewski/decoupling-a-live-app-with-domain-events-part-2-4oio</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/goniszewski/from-nextjs-monolith-to-event-driven-architecture-why-we-started-and-what-we-built-167h"&gt;In Part 1 of this series&lt;/a&gt;, we introduced RabbitMQ and built our proof-of-concept: the &lt;code&gt;EventBusService&lt;/code&gt;, &lt;code&gt;RabbitMQProvider&lt;/code&gt;, a DLX retry pattern, and our first emitter (&lt;code&gt;CommentService.createComment()&lt;/code&gt;). By the end of Phase 0, we had one event running reliably in production behind a feature flag.&lt;/p&gt;

&lt;p&gt;In Phase 1, we applied this pattern to every Tier 1 service: we built 14 Zod schemas and 25 queue handlers across 3 consumer classes, and added emit sites to &lt;code&gt;CommentService&lt;/code&gt;, &lt;code&gt;RecordService&lt;/code&gt;, and &lt;code&gt;OccurrenceCrudHelper&lt;/code&gt;. We also wrote 79 new tests. Here is how we executed Phase 1, the design choices we made, and a tricky bug that changed how we guard event bus access.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Dual-Write Strategy
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogcwirzje4bvbv1d3kb8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogcwirzje4bvbv1d3kb8.jpg" alt="obligatory meme tax"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our top priority for Phase 1 was safety. Since IHA has active users, we couldn't risk breaking the app. Switching entirely from direct service calls to event emissions wasn't safe yet.&lt;/p&gt;

&lt;p&gt;Instead, we used a dual-write strategy. Services now emit events &lt;em&gt;and&lt;/em&gt; run their existing inline side effects. If the event broker fails, the inline call still handles the work, and the user notices nothing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eventBus&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;record.created&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;recordId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;record.title, ... })&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;recordId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;record&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[RecordService] Failed to emit record.created event&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The inline side effects below it still run, and the event also fans out to consumers that perform the same side effects redundantly. The &lt;code&gt;.catch()&lt;/code&gt; is crucial: without it, a rejected &lt;code&gt;emit()&lt;/code&gt; promise becomes an unhandled rejection, which is fatal by default in modern Node.js. The &lt;code&gt;void&lt;/code&gt; keyword tells TypeScript we are intentionally not awaiting the promise; this is a fire-and-forget action. In Phase 2, once consumers have proven reliable under production load, we will remove the inline calls.&lt;/p&gt;
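&lt;p&gt;Put together, a dual-write call site looks roughly like this. A minimal sketch: the &lt;code&gt;Bus&lt;/code&gt; type and &lt;code&gt;notifyFollowersInline&lt;/code&gt; are illustrative names, not the actual IHA code. Even with a dead broker, the inline path completes:&lt;/p&gt;

```typescript
// Minimal dual-write sketch; the Bus shape and notifyFollowersInline are
// illustrative assumptions, not the real IHA implementation.
const sideEffectLog: string[] = [];

// Pre-existing inline side effect that Phase 1 deliberately keeps running.
function notifyFollowersInline(recordId: string) {
  sideEffectLog.push('inline:' + recordId);
}

type Bus = { emit: (event: string, payload: object) => any };

function createRecord(eventBus: Bus, recordId: string) {
  // Fire-and-forget: void marks the promise as intentionally unawaited,
  // and .catch() turns a broker failure into a log line instead of an
  // unhandled rejection.
  void eventBus
    .emit('record.created', { recordId })
    .catch((err: unknown) => {
      console.error({ err, recordId }, 'Failed to emit record.created');
    });

  // The inline side effect still runs, so users notice nothing when the
  // broker is down.
  notifyFollowersInline(recordId);
}

// A broker that always fails, to show the inline path is unaffected.
const failingBus: Bus = {
  emit: () => Promise.reject(new Error('broker down')),
};

createRecord(failingBus, 'rec-1');
console.log(sideEffectLog); // the inline side effect ran regardless
```

&lt;p&gt;When both paths succeed, the redundant work is harmless as long as the side effects are idempotent.&lt;/p&gt;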


&lt;h3&gt;
  
  
  Designing the Event Schemas
&lt;/h3&gt;

&lt;p&gt;We added ten new schemas to the four we built in Phase 0. We made a few specific design choices to keep the system robust:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Keep payloads minimal:&lt;/strong&gt; We send only IDs, not full objects. Consumers fetch whatever extra data they need. This keeps payloads light and means consumers read the current state instead of a snapshot frozen at the moment of emission.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Include optional fields for update diffs:&lt;/strong&gt; For events like &lt;code&gt;record.updated&lt;/code&gt;, we include the previous values of changed fields.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;RecordUpdatedPayloadSchema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;object&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;recordId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;uuid&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="na"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;optional&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="c1"&gt;// ...other current fields&lt;/span&gt;
  &lt;span class="na"&gt;previousTitle&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;optional&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="na"&gt;previousVisibility&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;optional&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This lets a consumer check whether something important changed (like visibility) without querying the database for the old state, which by the time the consumer runs may already have been overwritten.&lt;/p&gt;
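&lt;p&gt;A consumer-side check might look like this (hypothetical handler code; the field names mirror the schema above, with &lt;code&gt;visibility&lt;/code&gt; standing in for one of the elided current fields):&lt;/p&gt;

```typescript
// Hypothetical consumer-side diff check. The emitter sets previous* fields
// only for values that actually changed, so their presence is the signal.
interface RecordUpdatedPayload {
  recordId: string;
  title?: string;
  visibility?: string;
  previousTitle?: string;
  previousVisibility?: string;
}

function visibilityChanged(p: RecordUpdatedPayload): boolean {
  // No database read needed, so there is no race with later updates.
  return p.previousVisibility !== undefined && p.previousVisibility !== p.visibility;
}

console.log(visibilityChanged({
  recordId: 'r1', visibility: 'hidden', previousVisibility: 'visible',
})); // true
console.log(visibilityChanged({ recordId: 'r1', title: 'New title' })); // false
```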

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Treat visibility changes as distinct events:&lt;/strong&gt; Instead of bundling visibility changes into regular updates, we created a specific &lt;code&gt;visibility_changed&lt;/code&gt; event. These are moderated actions with different rules and consumers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Only validate UUIDs when guaranteed:&lt;/strong&gt; We didn't use &lt;code&gt;.uuid()&lt;/code&gt; validation for &lt;code&gt;userId&lt;/code&gt; because we use Lucia session IDs, not UUIDs. We only strictly validated actual database UUIDs to avoid rejecting valid events.&lt;/li&gt;
&lt;/ul&gt;
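&lt;p&gt;The distinction, sketched without Zod (a hand-rolled check purely for illustration): the database-generated &lt;code&gt;recordId&lt;/code&gt; gets strict format validation, while the Lucia-issued &lt;code&gt;userId&lt;/code&gt; only needs to be a non-empty string:&lt;/p&gt;

```typescript
// Illustrative stand-in for the schema rule: strict UUID format for database
// IDs, plain non-empty string for Lucia session-derived user IDs.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isValidPayload(p: { recordId: string; userId: string }): boolean {
  return UUID_RE.test(p.recordId) && p.userId.length > 0;
}

// A Lucia session ID would fail .uuid() validation, but passes here.
console.log(isValidPayload({
  recordId: '6a2dd115-3a56-4fc8-8837-46c11a235e26',
  userId: 'lucia_sess_abc123', // not a UUID, and that is fine
})); // true
console.log(isValidPayload({ recordId: 'not-a-uuid', userId: 'lucia_sess_abc123' })); // false
```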


&lt;h3&gt;
  
  
  The Emit Pattern
&lt;/h3&gt;

&lt;p&gt;Every emit site uses a fire-and-forget pattern with &lt;code&gt;.catch()&lt;/code&gt; logging. Because our &lt;code&gt;ServiceRegistry&lt;/code&gt; doesn't support circular constructor injection, we use a lazy getter for the &lt;code&gt;eventBus&lt;/code&gt;. It resolves the service on its first access:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nf"&gt;eventBus&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nx"&gt;EventBusService&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;_eventBus&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;_eventBus&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getService&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;EventBusService&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;SERVICE_NAMES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;EVENT_BUS_SERVICE&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;_eventBus&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;For static helper classes like &lt;code&gt;record-moderation.helper.ts&lt;/code&gt;, we can't use &lt;code&gt;this&lt;/code&gt;. Instead, we use a dynamic import:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;getService&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;_getService&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;SERVICE_NAMES&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;_SERVICE_NAMES&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../ServiceRegistry&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;eventBus&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;_getService&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../../EventBusService&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;EventBusService&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;_SERVICE_NAMES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;EVENT_BUS_SERVICE&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nx"&gt;eventBus&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;record.visibility_changed&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;recordId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;moderatorUserId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;visible&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;hide&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;moderatorNote&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;note&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;unknown&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;recordId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[RecordModerationHelper] Failed to emit record.visibility_changed event&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Three Services, Three Integration Patterns
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;CommentService&lt;/code&gt;&lt;/strong&gt;: Added three new emit sites for updating, deleting, and toggling visibility.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;RecordService&lt;/code&gt;&lt;/strong&gt;: Added the lazy getter and emit sites for creating, updating, and deleting records. The update payload cleverly includes only the previous values for fields that actually changed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;OccurrenceCrudHelper&lt;/code&gt;&lt;/strong&gt;: This helper touches multiple services and isn't registered in the &lt;code&gt;ServiceRegistry&lt;/code&gt;. We made the getter null-safe with a try/catch, which led to an interesting bug.&lt;/li&gt;
&lt;/ul&gt;


&lt;h3&gt;
  
  
  The EventBus Emit Guard Problem
&lt;/h3&gt;

&lt;p&gt;During testing, we hit a tricky bug with the null-safe getter in &lt;code&gt;OccurrenceCrudHelper&lt;/code&gt;. We initially used optional chaining (&lt;code&gt;_eventBus?.emit&lt;/code&gt;) to guard the call.&lt;/p&gt;

&lt;p&gt;In tests, the mocked &lt;code&gt;ServiceRegistry&lt;/code&gt; returns a truthy mock object, but it lacks the &lt;code&gt;.emit&lt;/code&gt; method. Because the object isn't null, optional chaining proceeds, tries to call &lt;code&gt;.emit&lt;/code&gt;, and throws a &lt;code&gt;TypeError: _eventBus.emit is not a function&lt;/code&gt;.&lt;/p&gt;
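&lt;p&gt;The failure mode is easy to reproduce. A sketch with a made-up mock, not our actual test doubles:&lt;/p&gt;

```typescript
// A truthy object from a mocked registry that lacks .emit (illustrative).
const mockBus = {} as { emit?: (event: string, payload: object) => any };

let threw = false;
try {
  // mockBus is not null/undefined, so optional chaining proceeds,
  // and calling the missing method throws a TypeError.
  (mockBus as any)?.emit('occurrence.created', {});
} catch (err) {
  threw = err instanceof TypeError;
}

// The explicit type guard never attempts the call at all:
const safeToEmit = typeof mockBus.emit === 'function';
console.log(threw, safeToEmit); // true false
```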

&lt;p&gt;&lt;strong&gt;The Fix:&lt;/strong&gt; We replaced optional chaining with an explicit type guard.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;_eventBusForCreate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eventBus&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;_eventBusForCreate&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;_eventBusForCreate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;emit&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;function&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nx"&gt;_eventBusForCreate&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;occurrence.created&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;unknown&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;occurrenceId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;occurrence&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[OccurrenceCrudHelper] Failed to emit occurrence.created event&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Checking &lt;code&gt;typeof _eventBusForCreate.emit === 'function'&lt;/code&gt; safely verifies the method exists before calling it.&lt;/p&gt;


&lt;h3&gt;
  
  
  Bounded-Context Consumers
&lt;/h3&gt;

&lt;p&gt;Instead of creating a consumer for every single event, we grouped them by entity context: &lt;code&gt;CommentEventConsumers&lt;/code&gt;, &lt;code&gt;RecordEventConsumers&lt;/code&gt;, and &lt;code&gt;OccurrenceEventConsumers&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Each class registers its queue bindings in the constructor. A single event can fan out to multiple queues simultaneously. If a cache invalidation handler fails, it doesn't break the search indexer handler.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// record.updated -&amp;gt; re-index + notifications + cache&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;eventBus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;RecordUpdatedPayload&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;EVENT_NAMES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;RECORD_UPDATED&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;QUEUES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SEARCH_INDEXER_RECORD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;handleUpdatedIndexing&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;eventBus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;RecordUpdatedPayload&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;EVENT_NAMES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;RECORD_UPDATED&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;QUEUES&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CACHE_INVALIDATION_RECORD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;handleCacheInvalidation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;recordId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Testing Strategy
&lt;/h3&gt;

&lt;p&gt;We wrote 79 new tests broken into two main groups:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Schema Contract Tests (38 tests):&lt;/strong&gt; These catch schema drift. We test valid payloads, missing fields, and invalid UUIDs. &lt;em&gt;Note: Zod v4 validates UUID formats strictly, so your test fixtures must be well-formed v4 strings (with a '4' as the 13th hex digit).&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consumer Unit Tests (41 tests):&lt;/strong&gt; We mocked the dependencies and tested the handler methods directly to ensure they call the right downstream services. No RabbitMQ broker is needed here.&lt;/li&gt;
&lt;/ol&gt;
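&lt;p&gt;The shape of a contract test, sketched with a hand-rolled &lt;code&gt;safeParse&lt;/code&gt; standing in for the Zod schema so it runs without the library (the real tests call the actual schemas, and the fixture values here are invented):&lt;/p&gt;

```typescript
// Hand-rolled stand-in for a Zod schema's safeParse, purely for illustration;
// the real contract tests import and exercise the actual schemas.
const RecordCreatedPayloadSchema = {
  safeParse(input: unknown) {
    if (typeof input !== 'object' || input === null) return { success: false };
    const p: any = input;
    // UUID v4: '4' as the 13th hex digit, variant nibble in [89ab].
    const uuidV4 = /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;
    const success =
      typeof p.recordId === 'string' && uuidV4.test(p.recordId) &&
      typeof p.userId === 'string' && p.userId.length > 0;
    return { success };
  },
};

// The three cases every contract test covers: valid payload, missing field,
// and an invalid UUID.
const valid = RecordCreatedPayloadSchema.safeParse({
  recordId: '9b2d7c44-1f3a-4e21-9c1d-2f6a8b0e5d17', // well-formed v4 fixture
  userId: 'lucia_sess_abc',
});
const missingField = RecordCreatedPayloadSchema.safeParse({ userId: 'lucia_sess_abc' });
const badUuid = RecordCreatedPayloadSchema.safeParse({ recordId: 'not-a-uuid', userId: 'x' });

console.log(valid.success, missingField.success, badUuid.success); // true false false
```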


&lt;h3&gt;
  
  
  What We Learned
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dual-write works:&lt;/strong&gt; Running both paths simultaneously was safe because our redundant side effects were idempotent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explicit emit guards are necessary:&lt;/strong&gt; Optional chaining isn't enough when dealing with test mocks. Always check if the method is a function.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contract tests save time:&lt;/strong&gt; They caught schema drift three times during development, preventing silent runtime failures.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Group consumers by context:&lt;/strong&gt; Routing logic is much easier to manage in 3 entity-based files rather than 14 event-based files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic imports shine in static helpers:&lt;/strong&gt; They solve circular dependency risks cleanly without cluttering constructors.&lt;/li&gt;
&lt;/ul&gt;
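&lt;p&gt;The first point deserves a concrete illustration: with upsert-style side effects, the inline path and the consumer path handling the same event converge to the same state. Here an in-memory map stands in for something like a search indexer (names are illustrative):&lt;/p&gt;

```typescript
// In-memory stand-in for an idempotent side effect such as search indexing.
const searchIndex = new Map();

// Upsert semantics: applying the same event once (inline only) or twice
// (inline + consumer) leaves the index in the same state.
function indexRecord(recordId: string, title: string) {
  searchIndex.set(recordId, title);
}

indexRecord('rec-1', 'Hello'); // inline side effect
indexRecord('rec-1', 'Hello'); // consumer handling the same event
console.log(searchIndex.size); // 1
```

&lt;p&gt;Non-idempotent side effects (say, incrementing a counter) would need deduplication before dual-write becomes safe.&lt;/p&gt;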


&lt;h3&gt;
  
  
  What Comes Next
&lt;/h3&gt;

&lt;p&gt;In Phase 2, we will remove the inline side effects. The consumer will become the only path. We will also extract our tools into a Turborepo monorepo, spinning out the notification and search services into their own isolated processes. Our event boundary work makes this extraction safe and reliable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repository Updates (Phase 1):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;New Files:&lt;/strong&gt; 10 Zod schemas (records/occurrences), 3 consumer classes, 79 unit/contract tests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modified Files:&lt;/strong&gt; Constants, index exports, &lt;code&gt;CommentService&lt;/code&gt;, &lt;code&gt;RecordService&lt;/code&gt;, helper classes, and RabbitMQ initialization.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Previous Post
&lt;/h3&gt;


&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/goniszewski/from-nextjs-monolith-to-event-driven-architecture-why-we-started-and-what-we-built-167h" class="crayons-story__hidden-navigation-link"&gt;From Next.js Monolith to Event-Driven Architecture: Why We Started and What We Built&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/goniszewski" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F202144%2F6a2dd115-3a56-4fc8-8837-46c11a235e26.jpg" alt="goniszewski profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/goniszewski" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Robert Goniszewski
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Robert Goniszewski
                
              
              &lt;div id="story-author-preview-content-3284518" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/goniszewski" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F202144%2F6a2dd115-3a56-4fc8-8837-46c11a235e26.jpg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Robert Goniszewski&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/goniszewski/from-nextjs-monolith-to-event-driven-architecture-why-we-started-and-what-we-built-167h" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Feb 25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/goniszewski/from-nextjs-monolith-to-event-driven-architecture-why-we-started-and-what-we-built-167h" id="article-link-3284518"&gt;
          From Next.js Monolith to Event-Driven Architecture: Why We Started and What We Built
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/nextjs"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;nextjs&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/eventdriven"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;eventdriven&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/architecture"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;architecture&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/refactoring"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;refactoring&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/goniszewski/from-nextjs-monolith-to-event-driven-architecture-why-we-started-and-what-we-built-167h" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/fire-f60e7a582391810302117f987b22a8ef04a2fe0df7e3258a5f49332df1cec71e.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/exploding-head-daceb38d627e6ae9b730f36a1e390fca556a4289d5a41abb2c35068ad3e2c4b5.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;3&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/goniszewski/from-nextjs-monolith-to-event-driven-architecture-why-we-started-and-what-we-built-167h#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            11 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;/div&gt;


</description>
      <category>architecture</category>
      <category>eventdriven</category>
      <category>refactorit</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>From Next.js Monolith to Event-Driven Architecture: Why We Started and What We Built</title>
      <dc:creator>Robert Goniszewski</dc:creator>
      <pubDate>Wed, 25 Feb 2026 12:08:50 +0000</pubDate>
      <link>https://forem.com/goniszewski/from-nextjs-monolith-to-event-driven-architecture-why-we-started-and-what-we-built-167h</link>
      <guid>https://forem.com/goniszewski/from-nextjs-monolith-to-event-driven-architecture-why-we-started-and-what-we-built-167h</guid>
      <description>&lt;p&gt;This is my first post in a short series documenting the migration of &lt;a href="https://ithappenedagain.fyi" rel="noopener noreferrer"&gt;It Happened Again&lt;/a&gt; (IHA) from a Next.js monolith to a distributed, event-driven architecture. This series documents what was built, what broke, and the things I have learned.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is IHA?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0fu7p9gi3ajfdgtxmue.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0fu7p9gi3ajfdgtxmue.jpg" alt="Two Nickels Meme" width="500" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;IHA is a community platform for tracking recurring events - things that keep happening again and again. Users submit and verify occurrences, leave comments, add tags, earn badges, and get notified when patterns they follow are updated. You can think of it as a structured, community-verified record of things that keep repeating themselves.&lt;/p&gt;

&lt;p&gt;The backend handles a few distinct workloads:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Write path&lt;/strong&gt;: comments, occurrences, reactions, verifications - transactional, user-facing, latency-sensitive.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Side effects&lt;/strong&gt;: search indexing, notifications, badge awarding, visit aggregation - can tolerate latency, must not block writes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read path&lt;/strong&gt;: record pages, occurrence timelines, user profiles - heavily cached, served via RSC and Redis.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time&lt;/strong&gt;: SSE streams for live notification delivery to connected users.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the last few months, all of this lived in a single Next.js application. That made sense at the MVP stage: everything was in one place. But as the codebase grew, each change started touching too many moving parts.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Monolith
&lt;/h2&gt;

&lt;p&gt;At the beginning, the backend was simple and clean. A few API routes, a handful of services, a PostgreSQL database, Redis for caching and queues, Meilisearch for full-text search. Standard Next.js setup with some discipline around layering: Presentation -&amp;gt; Application -&amp;gt; Service/Domain -&amp;gt; Infrastructure, dependencies pointing inward.&lt;/p&gt;

&lt;p&gt;Then features piled up and refactors became routine, and the monolith started showing cracks.&lt;/p&gt;

&lt;p&gt;By early 2026, the numbers looked like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Over 130 API routes under &lt;code&gt;api/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;70+ services registered in &lt;code&gt;ServiceRegistry&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;4 infrastructure dependencies: PostgreSQL (via Drizzle ORM), Redis (caching + BullMQ queues), Meilisearch, and our newest addition, RabbitMQ&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;ServiceRegistry&lt;/code&gt; with async initialization, health checks, bidirectional dependency wiring, and HMR-safe global state management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;ServiceRegistry&lt;/code&gt; itself became a significant chunk of infrastructure. It handles registration order, async factory functions, timeout protection on &lt;code&gt;initializeAll()&lt;/code&gt;, HMR re-registration detection via &lt;code&gt;globalThis.__servicesRegistered&lt;/code&gt;, and graceful degradation in production when a service fails to initialize.&lt;/p&gt;

&lt;p&gt;None of that came from bad engineering, but from solving real problems inside one process. At some point, that same complexity became a sign that those parts should be separated.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Three Pain Points
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Tight Coupling Between Write and Side-Effect Logic
&lt;/h3&gt;

&lt;p&gt;The clearest example is &lt;code&gt;CommentService.createComment()&lt;/code&gt;. Before the migration, the method that saved a comment to the database also called &lt;code&gt;SearchService.indexDocumentNow()&lt;/code&gt; and &lt;code&gt;NotificationService.sendCommentNotifications()&lt;/code&gt; synchronously before returning.&lt;/p&gt;

&lt;p&gt;What it meant in practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A sneaky Meilisearch timeout causes the comment creation endpoint to return a 500.&lt;/li&gt;
&lt;li&gt;A notification delivery failure rolls back a successful write.&lt;/li&gt;
&lt;li&gt;Adding a new side effect (say, awarding a badge for a first comment) requires modifying &lt;code&gt;CommentService&lt;/code&gt; and adding another service dependency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The issue was not code quality, but the boundary between services. Saving a comment and indexing a comment have different failure and latency requirements. When they are coupled, the write path inherits failures from side effects.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. SSE Scaling
&lt;/h3&gt;

&lt;p&gt;Our &lt;code&gt;NotificationService&lt;/code&gt; maintains live SSE connections in an in-process data structure. And this works perfectly for a single instance. But when you start to run two instances - for a deploy, for load, for anything - the connections are split across processes. A notification triggered in instance &lt;em&gt;A&lt;/em&gt; cannot reach a user connected to instance &lt;em&gt;B&lt;/em&gt;. Simple as that.&lt;/p&gt;

&lt;p&gt;The usual fix is Redis pub/sub as a connection broker. I could have bolted that into the monolith, but that would add even more state and coupling to an already busy service. A cleaner approach was to move notification delivery into its own process so it owns SSE connections directly and consumes events.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Deploy Coupling
&lt;/h3&gt;

&lt;p&gt;As with the previous point, every backend change - a new field in a payload, a fix to a background queue handler, a tweak to notification logic - triggers a full Next.js rebuild. Turbopack could help in development, but the production build is still a monolithic artifact. You cannot deploy just the notification logic without redeploying the entire application.&lt;/p&gt;

&lt;p&gt;When notification delivery, search indexing, badge jobs, and the API evolve at different speeds, that coupling creates real friction.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why RabbitMQ Instead of Something Simpler?
&lt;/h2&gt;

&lt;p&gt;Good question: IHA already had BullMQ + Redis in the stack. And don't get me wrong, BullMQ is excellent: the app uses it for scheduled jobs, retry queues, and batch processing. The real question was whether to extend BullMQ for all event-driven side effects or introduce RabbitMQ.&lt;/p&gt;

&lt;p&gt;The case for BullMQ alone:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Already in the stack, no new infrastructure&lt;/li&gt;
&lt;li&gt;BullBoard can be used for queue inspection&lt;/li&gt;
&lt;li&gt;First-class TypeScript support&lt;/li&gt;
&lt;li&gt;Solid dead letter queue semantics via &lt;code&gt;removeOnFail&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The case for RabbitMQ:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Topic exchanges with routing key patterns&lt;/strong&gt;: a single events exchange can fan out &lt;code&gt;comment.created&lt;/code&gt; to both &lt;code&gt;search-indexer-comment&lt;/code&gt; and &lt;code&gt;notifications-comment&lt;/code&gt; queues simultaneously, without the publisher knowing about either consumer (BullMQ queues are point-to-point by design).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consumer isolation&lt;/strong&gt;: each consumer service declares its own durable queue bound to the exchange. Adding a new consumer (let's say, a moderation service that wants to review new comments) requires no changes to the publisher.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-process fan-out&lt;/strong&gt;: when the notification service and search service become separate processes, they each connect to RabbitMQ and bind their own queues. The publisher (&lt;code&gt;EventBusService.emit()&lt;/code&gt;) doesn't change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Protocol-level durability&lt;/strong&gt;: messages survive broker restarts when declared persistent. RabbitMQ AMQP acknowledgement semantics give us finer control over retry behavior than polling a BullMQ queue.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The answer: use BullMQ for scheduled and batch work (badge processing, visit aggregation, data retention), and RabbitMQ for domain events that need fan-out to multiple consumers.&lt;/p&gt;

&lt;p&gt;BullMQ is here to stay; the two coexist because they solve different problems.&lt;/p&gt;




&lt;h2&gt;
  
  
  The RabbitMQ POC: What Was Built
&lt;/h2&gt;

&lt;p&gt;The proof-of-concept had basically three goals:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Prove the routing topology works (topic exchange + multiple queue bindings per event)&lt;/li&gt;
&lt;li&gt;Prove that &lt;code&gt;CommentService.createComment()&lt;/code&gt; can emit an event instead of calling services directly&lt;/li&gt;
&lt;li&gt;Prove the consumer can reconstruct the required side effects from the event payload&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The Interface
&lt;/h3&gt;

&lt;p&gt;As usual, I started with an interface so the &lt;code&gt;EventBusService&lt;/code&gt; is not coupled to RabbitMQ:&lt;/p&gt;

&lt;p&gt;The interface defines four methods: &lt;code&gt;connect()&lt;/code&gt;, &lt;code&gt;close()&lt;/code&gt;, &lt;code&gt;publish()&lt;/code&gt;, and &lt;code&gt;subscribe()&lt;/code&gt;. This means we can swap RabbitMQ for any other broker - or for a no-op stub when the feature flag is disabled. The &lt;code&gt;publish&lt;/code&gt; method returns a &lt;code&gt;Promise&amp;lt;boolean&amp;gt;&lt;/code&gt;, where a false return value signals backpressure, and &lt;code&gt;subscribe&lt;/code&gt; is generic over the message type &lt;code&gt;T&lt;/code&gt;.&lt;/p&gt;
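
&lt;p&gt;A minimal TypeScript sketch of that contract, assuming parameter shapes (the article only names the four methods), together with a no-op implementation of the same interface:&lt;/p&gt;

```typescript
// Sketch of the broker-agnostic interface described above.
// Method names match the article; parameter shapes are assumptions.
interface IMessageBrokerProvider {
  connect(): Promise<void>;
  close(): Promise<void>;
  // Resolves to false when the broker signals backpressure.
  publish(routingKey: string, message: unknown): Promise<boolean>;
  subscribe<T>(
    queue: string,
    routingKey: string,
    handler: (msg: T) => Promise<void>,
  ): Promise<void>;
}

// A no-op stub satisfying the contract, useful for unit tests and for
// environments where the event bus is disabled.
class NoopBrokerProvider implements IMessageBrokerProvider {
  async connect(): Promise<void> {}
  async close(): Promise<void> {}
  async publish(): Promise<boolean> {
    return false; // nothing was published, so report "not delivered"
  }
  async subscribe<T>(): Promise<void> {}
}
```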

&lt;h3&gt;
  
  
  The Event Names and Schemas
&lt;/h3&gt;

&lt;p&gt;Events follow a &lt;code&gt;{entity}.{action}&lt;/code&gt; routing key convention. All constants live in &lt;code&gt;src/lib/events/constants.ts&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;EVENT_NAMES&lt;/code&gt; contains &lt;code&gt;COMMENT_CREATED&lt;/code&gt; mapped to the string literal &lt;code&gt;comment.created&lt;/code&gt;, &lt;code&gt;COMMENT_UPDATED&lt;/code&gt; to &lt;code&gt;comment.updated&lt;/code&gt;, &lt;code&gt;RECORD_CREATED&lt;/code&gt; to &lt;code&gt;record.created&lt;/code&gt;, &lt;code&gt;OCCURRENCE_VERIFIED&lt;/code&gt; to &lt;code&gt;occurrence.verified&lt;/code&gt;, and so on.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;QUEUES&lt;/code&gt; maps consumer identifiers to their durable queue names: &lt;code&gt;SEARCH_INDEXER_COMMENT&lt;/code&gt; maps to &lt;code&gt;search-indexer-comment&lt;/code&gt;, &lt;code&gt;NOTIFICATIONS_COMMENT&lt;/code&gt; to &lt;code&gt;notifications-comment&lt;/code&gt;, and so on for each consumer-purpose pair.&lt;/p&gt;

&lt;p&gt;Payloads are validated with Zod schemas at consumer boundaries. &lt;code&gt;CommentCreatedPayloadSchema&lt;/code&gt; requires a &lt;code&gt;commentId&lt;/code&gt; as UUID, a &lt;code&gt;userId&lt;/code&gt; as string, an &lt;code&gt;occurrenceId&lt;/code&gt; as string, a &lt;code&gt;content&lt;/code&gt; string, and an optional &lt;code&gt;parentId&lt;/code&gt; UUID.&lt;/p&gt;
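
&lt;p&gt;To make the shape concrete, here is a dependency-free sketch. The real project validates with Zod; the hand-rolled &lt;code&gt;parseCommentCreated&lt;/code&gt; below only mirrors the schema's contract for illustration, and the constant values follow the article while everything else is assumed:&lt;/p&gt;

```typescript
// Event names follow the {entity}.{action} convention from the article.
const EVENT_NAMES = {
  COMMENT_CREATED: "comment.created",
  COMMENT_UPDATED: "comment.updated",
  RECORD_CREATED: "record.created",
  OCCURRENCE_VERIFIED: "occurrence.verified",
} as const;

// Durable queue names, one per consumer-purpose pair.
const QUEUES = {
  SEARCH_INDEXER_COMMENT: "search-indexer-comment",
  NOTIFICATIONS_COMMENT: "notifications-comment",
} as const;

// Mirrors CommentCreatedPayloadSchema's shape (Zod in the real code).
interface CommentCreatedPayload {
  commentId: string; // UUID
  userId: string;
  occurrenceId: string;
  content: string;
  parentId?: string; // optional UUID for nested comments
}

const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

// Throws on invalid input, like schema.parse() would.
function parseCommentCreated(input: unknown): CommentCreatedPayload {
  const p = input as Partial<CommentCreatedPayload>;
  if (typeof p?.commentId !== "string" || !UUID_RE.test(p.commentId))
    throw new Error("commentId must be a UUID");
  if (typeof p.userId !== "string") throw new Error("userId must be a string");
  if (typeof p.occurrenceId !== "string")
    throw new Error("occurrenceId must be a string");
  if (typeof p.content !== "string") throw new Error("content must be a string");
  if (p.parentId !== undefined && !UUID_RE.test(p.parentId))
    throw new Error("parentId must be a UUID when present");
  return p as CommentCreatedPayload;
}
```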

&lt;h3&gt;
  
  
  The EventBusService
&lt;/h3&gt;

&lt;p&gt;The new &lt;code&gt;EventBusService&lt;/code&gt; is a thin orchestration layer over &lt;code&gt;IMessageBrokerProvider&lt;/code&gt;. Its two public methods are &lt;code&gt;emit()&lt;/code&gt; and &lt;code&gt;on()&lt;/code&gt;. It has message buffering built in: &lt;code&gt;emit()&lt;/code&gt; calls before &lt;code&gt;initialize()&lt;/code&gt; completes are queued in &lt;code&gt;pendingMessages&lt;/code&gt; and flushed once connected, rather than failing silently. The &lt;code&gt;isReady&lt;/code&gt; flag guards all publish paths.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Emit Site
&lt;/h3&gt;

&lt;p&gt;In &lt;code&gt;CommentService&lt;/code&gt;, the change from direct calls to event emission reduces the method to its core function: persist the comment, then emit an event. The event payload carries just enough data for consumers to fetch what they need - the &lt;code&gt;commentId&lt;/code&gt;, &lt;code&gt;userId&lt;/code&gt;, &lt;code&gt;occurrenceId&lt;/code&gt;, &lt;code&gt;content&lt;/code&gt;, and optional &lt;code&gt;parentId&lt;/code&gt; (for nested comments).&lt;/p&gt;

&lt;p&gt;Before: &lt;code&gt;createComment()&lt;/code&gt; awaited both &lt;code&gt;searchService.indexDocumentNow()&lt;/code&gt; and &lt;code&gt;notificationService.sendCommentNotifications()&lt;/code&gt; before returning. A failure in either blocked or failed the entire operation.&lt;/p&gt;

&lt;p&gt;After: &lt;code&gt;createComment()&lt;/code&gt; awaits &lt;code&gt;eventBus.emit(EVENT_NAMES.COMMENT_CREATED, payload)&lt;/code&gt; and returns. Search indexing and notification delivery happen asynchronously in consumer handlers. The comment creation endpoint succeeds or fails on its own merits.&lt;/p&gt;
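
&lt;p&gt;The "after" shape, sketched with assumed repository and service signatures:&lt;/p&gt;

```typescript
// Sketch of the persist-then-emit shape. Payload field names follow the
// article; the repository and bus shapes are assumptions.
async function createComment(
  repo: { insert(c: { content: string }): Promise<{ id: string }> },
  eventBus: { emit(e: string, p: unknown): Promise<void> },
  input: {
    userId: string;
    occurrenceId: string;
    content: string;
    parentId?: string;
  },
) {
  const saved = await repo.insert({ content: input.content });
  // Side effects now live in consumers: a slow search index or a failed
  // notification no longer blocks or fails this write path.
  await eventBus.emit("comment.created", {
    commentId: saved.id,
    userId: input.userId,
    occurrenceId: input.occurrenceId,
    content: input.content,
    parentId: input.parentId,
  });
  return saved;
}
```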

&lt;h3&gt;
  
  
  The Consumer
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;CommentCreatedConsumer&lt;/code&gt; registers two queue bindings for the same event - one for search indexing, and one for notifications. Both subscriptions use &lt;code&gt;eventBus.on()&lt;/code&gt; with the event name &lt;code&gt;EVENT_NAMES.COMMENT_CREATED&lt;/code&gt;, but different queue names (&lt;code&gt;QUEUES.SEARCH_INDEXER_COMMENT&lt;/code&gt; and &lt;code&gt;QUEUES.NOTIFICATIONS_COMMENT&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;RabbitMQ delivers a copy of each published message to both queues. The two handlers then execute independently. A failure in &lt;code&gt;handleIndexing&lt;/code&gt; does not affect &lt;code&gt;handleNotifications&lt;/code&gt;, and vice versa.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;handleIndexing&lt;/code&gt; fetches the full comment object via &lt;code&gt;commentService.getComment(commentId)&lt;/code&gt; - the event payload carries just the ID, not the full object, to keep payloads small and avoid serializing potentially stale data. Then it calls &lt;code&gt;searchService.indexDocumentNow()&lt;/code&gt;. If this throws, the error propagates and &lt;code&gt;RabbitMQProvider&lt;/code&gt; NACKs the message, triggering the DLX retry pattern.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;handleNotifications&lt;/code&gt; delegates to &lt;code&gt;CommentNotificationHelper&lt;/code&gt;, which holds the complex notification rules (notify occurrence subscribers, notify parent comment authors, skip the author of the triggering comment, and more). This helper existed before the event bus - the consumer just instantiates it with its dependencies rather than duplicating the logic.&lt;/p&gt;
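
&lt;p&gt;Sketched, the double binding looks like this (the &lt;code&gt;on()&lt;/code&gt; signature is an assumption and the handler bodies are stand-ins):&lt;/p&gt;

```typescript
// Fan-out sketch: one event name, two durable queues, independent handlers.
async function registerCommentCreatedConsumer(eventBus: {
  on(
    event: string,
    queue: string,
    handler: (p: { commentId: string }) => Promise<void>,
  ): Promise<void>;
}) {
  await eventBus.on("comment.created", "search-indexer-comment", async (p) => {
    // handleIndexing: fetch the full comment via p.commentId, then index it.
  });
  await eventBus.on("comment.created", "notifications-comment", async (p) => {
    // handleNotifications: delegate to CommentNotificationHelper.
  });
}
```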




&lt;h2&gt;
  
  
  POC: The Good
&lt;/h2&gt;

&lt;p&gt;The core design did hold up. The topic exchange fan-out worked as expected (emitting &lt;code&gt;comment.created&lt;/code&gt; once delivers a copy to both &lt;code&gt;search-indexer-comment&lt;/code&gt; and &lt;code&gt;notifications-comment&lt;/code&gt;). Adding a third consumer (e.g. a moderation review queue) requires zero changes to the publisher, which is one of the main reasons to use events in the first place.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;IMessageBrokerProvider&lt;/code&gt; interface paid off immediately. For unit tests and for the &lt;code&gt;ENABLE_EVENT_BUS=false&lt;/code&gt; code path, I registered a no-op stub with &lt;code&gt;connect&lt;/code&gt;, &lt;code&gt;close&lt;/code&gt;, &lt;code&gt;publish&lt;/code&gt;, and &lt;code&gt;subscribe&lt;/code&gt; all returning immediately (or returning false for &lt;code&gt;publish&lt;/code&gt;). No RabbitMQ process is needed for tests or for environments where the event bus is disabled.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;CommentNotificationHelper&lt;/code&gt; was reused as-is. The consumer delegates complex notification rules to the same helper class that the synchronous path used, rather than just duplicating logic. The helper is instantiated with its dependencies passed as constructor arguments, and it is straightforward to test in isolation.&lt;/p&gt;

&lt;p&gt;Typing is solid: &lt;code&gt;CommentCreatedPayloadSchema&lt;/code&gt; provides runtime validation at consumer entry points, and the inferred TypeScript type &lt;code&gt;CommentCreatedPayload&lt;/code&gt; gives compile-time safety on the payload fields. &lt;code&gt;EVENT_NAMES&lt;/code&gt; and &lt;code&gt;QUEUES&lt;/code&gt; as const objects prevent string literal typos at emit and subscribe sites.&lt;/p&gt;




&lt;h2&gt;
  
  
  POC: The Bad - Three Bugs That Had To Be Fixed
&lt;/h2&gt;

&lt;p&gt;The initial POC had a few defects. These three were the most interesting and the most useful to fix early.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bug 1: Single Channel Shared Between Publisher and Consumer
&lt;/h3&gt;

&lt;p&gt;The original &lt;code&gt;RabbitMQProvider&lt;/code&gt; used one AMQP channel for both &lt;code&gt;publish()&lt;/code&gt; and &lt;code&gt;subscribe()&lt;/code&gt;. That looked fine in simple tests, but it caused trouble under higher artificial load. AMQP flow control is channel-scoped, so slow consumers could backpressure the same channel used for publishing.&lt;/p&gt;

&lt;p&gt;The fix was to split responsibilities: &lt;code&gt;publishChannel&lt;/code&gt; for &lt;code&gt;publish()&lt;/code&gt;, &lt;code&gt;consumeChannel&lt;/code&gt; for &lt;code&gt;subscribe()&lt;/code&gt;. Now slow consumption does not stall publishing. I also run &lt;code&gt;setupDLXInfrastructure()&lt;/code&gt; on &lt;code&gt;publishChannel&lt;/code&gt;, since exchange assertions can run on any channel.&lt;/p&gt;
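
&lt;p&gt;A sketch of the split, with a stand-in channel type instead of amqplib's &lt;code&gt;Channel&lt;/code&gt;:&lt;/p&gt;

```typescript
// Stand-in for amqplib's Channel; only identity matters for this sketch.
interface FakeChannel {
  id: string;
}

// One logical connection, two channels: AMQP flow control is channel-scoped,
// so consumer backpressure can no longer stall the publish path.
class RabbitMQProviderSketch {
  publishChannel?: FakeChannel;
  consumeChannel?: FakeChannel;

  async connect(createChannel: () => Promise<FakeChannel>): Promise<void> {
    this.publishChannel = await createChannel();
    this.consumeChannel = await createChannel();
    // Exchange/DLX assertions can run on either channel;
    // the article runs setupDLXInfrastructure() on publishChannel.
  }
}
```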

&lt;h3&gt;
  
  
  Bug 2: No Dead Letter Exchange
&lt;/h3&gt;

&lt;p&gt;The first POC had no proper retry or dead-lettering logic. If a handler threw (for example when Meilisearch was temporarily unavailable), messages could just bounce in unhelpful ways and failed payloads were hard to inspect.&lt;/p&gt;

&lt;p&gt;I implemented a Dead Letter Exchange (DLX) pattern. Now the infrastructure uses three exchanges and three queue variants per consumer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;events.dlx&lt;/code&gt;: the dead letter exchange. Messages go here when NACKed with &lt;code&gt;requeue=false&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;events.retry&lt;/code&gt;: a retry exchange. Failed messages are published here with a TTL and then expire back to the main events exchange via &lt;code&gt;x-dead-letter-exchange&lt;/code&gt; (on the retry queue).&lt;/li&gt;
&lt;li&gt;Per-queue DLQ (e.g. &lt;code&gt;search-indexer-comment.dlq&lt;/code&gt;): the permanent dead letter queue for messages that have exhausted all retries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The retry counter lives in the &lt;code&gt;x-retries&lt;/code&gt; message header. On each failed delivery, the handler increments the counter and republishes to the retry queue. After &lt;code&gt;MAX_RETRY_COUNT&lt;/code&gt; (3) failures, the message is NACKed with &lt;code&gt;requeue=false&lt;/code&gt; and lands in the DLQ, where it can be inspected and replayed from the RabbitMQ Management UI.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RETRY_DELAY_MS&lt;/code&gt; is set to 5000 (5 seconds). It is simple and predictable for now. Exponential backoff can be added later.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bug 3: Consumer Registration Timing
&lt;/h3&gt;

&lt;p&gt;The first draft of &lt;code&gt;initializeServices()&lt;/code&gt; registered consumers before &lt;code&gt;serviceRegistry.initializeAll()&lt;/code&gt;. &lt;code&gt;CommentCreatedConsumer&lt;/code&gt; called &lt;code&gt;registerConsumer()&lt;/code&gt;, which resolved &lt;code&gt;EventBusService&lt;/code&gt; before its async initialization finished. Subscription calls then ran against a bus that was not connected yet.&lt;/p&gt;

&lt;p&gt;We fixed this by deferring consumer registration until after &lt;code&gt;initializeAll()&lt;/code&gt; completes. The &lt;code&gt;initializeServices()&lt;/code&gt; bootstrap sequence now uses this order:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;registerProviders()&lt;/code&gt;, &lt;code&gt;registerEventBusServices()&lt;/code&gt;, &lt;code&gt;registerCoreServices()&lt;/code&gt;, and all other registration calls. These only register service factories with the &lt;code&gt;ServiceRegistry&lt;/code&gt; - they do not instantiate or connect anything.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;serviceRegistry.initializeAll()&lt;/code&gt;. This one awaits each factory in registration order, including the async &lt;code&gt;EventBusService&lt;/code&gt; factory that establishes the RabbitMQ connection.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;setupBidirectionalDependencies()&lt;/code&gt;. Wires up circular references between services that cannot be expressed as constructor dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;initializeRabbitMQConsumers()&lt;/code&gt;, guarded by the &lt;code&gt;ENABLE_EVENT_BUS&lt;/code&gt; flag. By this point, &lt;code&gt;EventBusService&lt;/code&gt; is guaranteed to be connected and ready. Consumers then can safely call &lt;code&gt;eventBus.on()&lt;/code&gt; and the broker will accept the subscription.&lt;/li&gt;
&lt;/ol&gt;
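
&lt;p&gt;The ordering guarantee above can be sketched like this - the step names follow the article, while the bodies just record ordering:&lt;/p&gt;

```typescript
// Bootstrap-order sketch: consumer registration only runs after
// initializeAll() resolves, so subscriptions never race the broker connection.
async function bootstrap(
  registry: { initializeAll(): Promise<void> },
  registerConsumers: () => Promise<void>,
  enableEventBus: boolean,
): Promise<string[]> {
  const order: string[] = [];
  // Steps 1-2: factories are registered, then initializeAll() awaits each
  // async factory, including the one that connects EventBusService.
  await registry.initializeAll();
  order.push("initializeAll");
  // Step 3: setupBidirectionalDependencies() would run here.
  order.push("wireDependencies");
  // Step 4: consumers last, and only when the flag is on; the bus is
  // guaranteed connected by this point.
  if (enableEventBus) {
    await registerConsumers();
    order.push("consumers");
  }
  return order;
}
```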




&lt;h2&gt;
  
  
  The ENABLE_EVENT_BUS Feature Flag
&lt;/h2&gt;

&lt;p&gt;Shipping a fundamental change to how side effects are triggered requires a way to turn it off instantly if something goes wrong. The &lt;code&gt;ENABLE_EVENT_BUS&lt;/code&gt; environment variable controls whether &lt;code&gt;EventBusService&lt;/code&gt; is backed by a real RabbitMQ connection or a no-op stub.&lt;/p&gt;

&lt;p&gt;When &lt;code&gt;ENABLE_EVENT_BUS&lt;/code&gt; is not set or is false:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No RabbitMQ connection is attempted at startup.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;EventBusService&lt;/code&gt; is registered with a no-op provider that implements the full &lt;code&gt;IMessageBrokerProvider&lt;/code&gt; interface but in practice does nothing: &lt;code&gt;connect()&lt;/code&gt; resolves immediately, &lt;code&gt;publish()&lt;/code&gt; returns false, &lt;code&gt;subscribe()&lt;/code&gt; resolves immediately.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;emit()&lt;/code&gt; calls in application code do not throw and do not block.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;CommentService&lt;/code&gt; can safely call &lt;code&gt;getService(SERVICE_NAMES.EVENT_BUS_SERVICE)&lt;/code&gt; and emit events that won't be delivered.&lt;/li&gt;
&lt;li&gt;The legacy synchronous side-effect path remains in place as a fallback.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When &lt;code&gt;ENABLE_EVENT_BUS=true&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RabbitMQProvider connects during &lt;code&gt;initializeAll()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;EventBusService.initialize()&lt;/code&gt; awaits the connection before returning.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;CommentCreatedConsumer&lt;/code&gt; registers its queue bindings against the live broker.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;createComment()&lt;/code&gt; emits events instead of calling services directly.&lt;/li&gt;
&lt;li&gt;The synchronous fallback is bypassed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The no-op path creates a stub that fully satisfies the &lt;code&gt;IMessageBrokerProvider&lt;/code&gt; interface contract. The real path creates a &lt;code&gt;RabbitMQProvider&lt;/code&gt;, wraps it in &lt;code&gt;EventBusService&lt;/code&gt;, and awaits the connection inside an async factory so &lt;code&gt;ServiceRegistry&lt;/code&gt; only marks the service ready once the broker is connected.&lt;/p&gt;
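
&lt;p&gt;A sketch of the flag-gated factory; only the flag behavior here comes from the article, the provider shape is an assumption:&lt;/p&gt;

```typescript
// Minimal provider shape for this sketch.
interface Provider {
  connect(): Promise<void>;
  publish(event: string, payload: unknown): Promise<boolean>;
}

// With the flag off, return a no-op stub and never touch the broker.
// With the flag on, await the connection so the registry only marks the
// service ready once the broker is actually connected.
async function createEventBusProvider(
  enableEventBus: boolean,
  realProvider: Provider,
): Promise<Provider> {
  if (!enableEventBus) {
    return {
      connect: async () => {},
      publish: async () => false, // emit() never throws, nothing is delivered
    };
  }
  await realProvider.connect();
  return realProvider;
}
```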

&lt;p&gt;This means the migration can be shipped behind the flag, enabled in staging, load tested, validated, and enabled in production without a code change. A rollback is a single environment variable update and a process restart.&lt;/p&gt;




&lt;h2&gt;
  
  
  Architecture Diagrams
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Monolith Architecture (Before)
&lt;/h3&gt;

&lt;p&gt;The current state with pain points annotated:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhi5axdfb13f35mg6q26g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhi5axdfb13f35mg6q26g.png" alt="Monolith Architecture Diagram" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Target Event-Driven Architecture
&lt;/h3&gt;

&lt;p&gt;The target state with all extracted services:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmlb3tnzmp0344826iwr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmlb3tnzmp0344826iwr.png" alt="Event-Driven Architecture Diagram" width="800" height="506"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What Comes Next
&lt;/h2&gt;

&lt;p&gt;The POC established that the pattern works and the bugs are fixable. What remains is doing this systematically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1: Systematic decoupling within the monolith
&lt;/h3&gt;

&lt;p&gt;The next services to migrate are &lt;code&gt;RecordService&lt;/code&gt;, &lt;code&gt;OccurrenceService&lt;/code&gt;, &lt;code&gt;BadgeService&lt;/code&gt; (as a consumer), and &lt;code&gt;NotificationService&lt;/code&gt; (as a consumer). Each follows the same pattern:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define the event schema in &lt;code&gt;src/lib/events/schemas/&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Add the &lt;code&gt;EVENT_NAMES&lt;/code&gt; and &lt;code&gt;QUEUES&lt;/code&gt; constants.&lt;/li&gt;
&lt;li&gt;Replace direct side-effect calls in the service with &lt;code&gt;eventBus.emit()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Write a consumer class that handles the side effects.&lt;/li&gt;
&lt;li&gt;Register the consumer in &lt;code&gt;initializeRabbitMQConsumers()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Write unit tests for the consumer handlers in &lt;code&gt;tests/unit/&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By the end of Phase 1, the monolith, although more modular, is still a monolith - but the coupling between write paths and side-effect paths is severed. Every side effect is now a consumer of a message queue, and the synchronous call graph becomes significantly shallower.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 2: Turborepo monorepo and extracting a Hono backend
&lt;/h3&gt;

&lt;p&gt;Once the event boundaries are clean, the next step is process extraction. The plan is to move the ~130 API routes to a standalone Hono application (&lt;code&gt;apps/api&lt;/code&gt;) on the Bun runtime and use Next.js purely as a BFF (Backend for Frontend) that proxies API calls, adding headers where needed, and handles SSR. The notification service becomes its own process (&lt;code&gt;apps/notification-service&lt;/code&gt;) that subscribes to RabbitMQ and manages SSE connections backed by Redis pub/sub. The search indexer and worker jobs move to &lt;code&gt;apps/search-service&lt;/code&gt; and &lt;code&gt;apps/worker-service&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The new Turborepo monorepo structure will share types and schemas through workspace packages (&lt;code&gt;packages/events&lt;/code&gt;, &lt;code&gt;packages/db&lt;/code&gt;, &lt;code&gt;packages/shared&lt;/code&gt;), so the extracted services have compile-time safety without duplicating definitions.&lt;/p&gt;

&lt;p&gt;Part 2 of this series covers Phase 1: the systematic decoupling work, the event schema design decisions, and what I have learned about ordering consumer registration across a service graph with bidirectional dependencies. Stay tuned!&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>eventdriven</category>
      <category>architecture</category>
      <category>refactoring</category>
    </item>
    <item>
      <title>It Happened Again - Building a Platform for Tracking Recurring Events</title>
      <dc:creator>Robert Goniszewski</dc:creator>
      <pubDate>Wed, 18 Feb 2026 10:43:33 +0000</pubDate>
      <link>https://forem.com/goniszewski/it-happened-again-building-a-platform-for-tracking-recurring-events-1e5g</link>
      <guid>https://forem.com/goniszewski/it-happened-again-building-a-platform-for-tracking-recurring-events-1e5g</guid>
      <description>&lt;p&gt;We repeatedly encounter "unprecedented" events that, on inspection, are anything but unprecedented. The problem is rarely lack of information - it's lack of structure.&lt;/p&gt;

&lt;p&gt;Social platforms are chaotic and ephemeral, optimizing for reaction rather than continuity. Media covers isolated incidents. Discussions fragment across threads. Anyone trying to track recurring events - political shifts, service outages, regulatory cycles - ends up maintaining spreadsheets or personal notes that go nowhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It Happened Again (IHA)&lt;/strong&gt; is my attempt to solve that problem structurally. It's a platform for tracking recurring events through source-backed timelines rather than discussion threads - built from scratch as a solo project.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Incidents to Patterns
&lt;/h2&gt;

&lt;p&gt;The core problem is what I call &lt;em&gt;pattern blindness&lt;/em&gt;. We observe events, but we rarely preserve their recurrence in a structured, referenceable form.&lt;/p&gt;

&lt;p&gt;IHA is built around two domain primitives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Record&lt;/strong&gt; - the pattern being tracked (e.g., "Bitcoin Crashes").&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Occurrence&lt;/strong&gt; - a dated, source-backed instance of that pattern (e.g., "The 2022 Crash").&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This separation drives the entire design. A Record is not a post. An Occurrence is not a comment. Each has its own lifecycle, ownership rules, verification logic, and visibility constraints.&lt;/p&gt;

&lt;p&gt;The timeline is not a UI enhancement - it's the primary interface. Chronology is how patterns actually become visible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;The system runs on &lt;strong&gt;Next.js 15 (App Router)&lt;/strong&gt;, &lt;strong&gt;TypeScript (strict mode)&lt;/strong&gt;, &lt;strong&gt;PostgreSQL with Drizzle ORM&lt;/strong&gt;, and &lt;strong&gt;Redis + BullMQ&lt;/strong&gt; for background processing.&lt;/p&gt;

&lt;p&gt;The key architectural decision was enforcing a strict service layer. Route handlers stay thin - validate input, delegate to a service, return a response. Business logic lives in services like &lt;code&gt;RecordService&lt;/code&gt; and &lt;code&gt;ModerationService&lt;/code&gt;, and database access goes through providers so transaction handling never leaks into routing code.&lt;/p&gt;
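
&lt;p&gt;A sketch of what "thin" means here, with assumed input and service shapes:&lt;/p&gt;

```typescript
// Thin-handler sketch: validate input, delegate to a service, return a
// response. All business rules, permissions, and transactions stay in the
// service layer.
type CreateRecordInput = { title: string };

async function postRecordHandler(
  body: unknown,
  recordService: {
    createRecord(i: CreateRecordInput): Promise<{ id: string }>;
  },
): Promise<{ status: number; json: unknown }> {
  const input = body as Partial<CreateRecordInput>;
  if (typeof input?.title !== "string" || input.title.length === 0) {
    return { status: 400, json: { error: "title is required" } };
  }
  const record = await recordService.createRecord({ title: input.title });
  return { status: 201, json: record };
}
```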

&lt;p&gt;This might seem like overhead for a solo project, but it paid off quickly by solving two problems that tend to creep in otherwise:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Circular dependencies between domain areas.&lt;/li&gt;
&lt;li&gt;Permission logic scattered across the UI layer.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With authorization and business rules centralized in services, I could add features like reputation scoring and moderation workflows without destabilizing unrelated parts of the codebase.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dual-Identifier Strategy
&lt;/h2&gt;

&lt;p&gt;Every entity carries two identifiers - each serving a different purpose.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UUID v7&lt;/strong&gt; serves as the primary key: globally unique and time-ordered, well-suited for indexing. For URLs, I use short &lt;strong&gt;NanoIDs&lt;/strong&gt;, giving clean paths like &lt;code&gt;/rec/bitcoin-crash/occ/V1StGXR8&lt;/code&gt; instead of opaque hex strings.&lt;/p&gt;

&lt;p&gt;This keeps URLs readable and shareable while avoiding unnecessary exposure of internal identifiers. A small decision with long-term payoff.&lt;/p&gt;
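
&lt;p&gt;Two simplified, illustrative generators - in production you would reach for the &lt;code&gt;uuid&lt;/code&gt; and &lt;code&gt;nanoid&lt;/code&gt; packages. The point is the shape of each identifier:&lt;/p&gt;

```typescript
// UUID v7 puts a 48-bit millisecond timestamp up front, so keys generated
// later sort later - which is what keeps B-tree indexes happy.
function uuidv7(): string {
  const ts = Date.now().toString(16).padStart(12, "0");
  const hex = (n: number) =>
    Array.from({ length: n }, () => Math.floor(Math.random() * 16).toString(16)).join("");
  const variant = "89ab"[Math.floor(Math.random() * 4)];
  return [
    ts.slice(0, 8),
    ts.slice(8, 12),
    "7" + hex(3),      // version nibble
    variant + hex(3),  // variant nibble
    hex(12),
  ].join("-");
}

// NanoID-style short id for URLs, e.g. /rec/bitcoin-crash/occ/V1StGXR8
const ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
function shortId(size = 8): string {
  let id = "";
  while (id.length !== size) id += ALPHABET[Math.floor(Math.random() * ALPHABET.length)];
  return id;
}
```

&lt;p&gt;The database only ever joins on the UUID; the short id exists purely for the URL layer.&lt;/p&gt;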

&lt;h2&gt;
  
  
  Moderation as Infrastructure
&lt;/h2&gt;

&lt;p&gt;Most platforms treat moderation as an admin overlay. I wanted it to be part of the core architecture from the start.&lt;/p&gt;

&lt;p&gt;IHA distinguishes between global and local moderators (scoped to specific Records). Authorization is enforced at the service layer - not conditionally hidden in the UI. All moderation actions are logged through an audit mechanism, and visibility changes use timestamped fields rather than hard deletes, keeping actions reversible and traceable.&lt;/p&gt;

&lt;p&gt;Enforcing scope at the service level means moderation logic can't be bypassed through inconsistent entry points. Trust is hard to retrofit, so I made it foundational.&lt;/p&gt;
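
&lt;p&gt;A sketch of what scope-aware checks in the service layer can look like - the role shapes and field names here are illustrative, not IHA's actual model:&lt;/p&gt;

```typescript
// A moderator is either global or scoped to one Record.
type ModeratorRole =
  | { kind: "global" }
  | { kind: "local"; recordId: string };

interface Moderator {
  id: string;
  roles: ModeratorRole[];
}

// Enforced in the service layer, so no UI path can skip it.
function canModerate(mod: Moderator, recordId: string): boolean {
  return mod.roles.some((r) =>
    r.kind === "global" ? true : r.recordId === recordId
  );
}

// Visibility changes are timestamped fields, not hard deletes -
// reversible and traceable through the audit log.
interface HideableOccurrence {
  id: string;
  hiddenAt: Date | null;
  hiddenBy: string | null;
}

function hideOccurrence(
  occ: HideableOccurrence,
  mod: Moderator,
  recordId: string
): HideableOccurrence {
  if (!canModerate(mod, recordId)) throw new Error("forbidden");
  return { ...occ, hiddenAt: new Date(), hiddenBy: mod.id };
}
```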

&lt;h2&gt;
  
  
  Background Processing and Reliability
&lt;/h2&gt;

&lt;p&gt;Several processes run asynchronously - email delivery, badge recalculation, search indexing, visit aggregation - all through &lt;strong&gt;BullMQ&lt;/strong&gt; workers.&lt;/p&gt;

&lt;p&gt;Jobs are idempotent with retry policies and exponential backoff. Critical tasks use deterministic job keys to prevent duplicates. Search operations are wrapped with fallbacks so degraded dependencies fail gracefully instead of cascading.&lt;/p&gt;
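
&lt;p&gt;Deterministic job keys can be sketched as a hash over the task's identity - the job names and date bucket below are illustrative, not IHA's actual code:&lt;/p&gt;

```typescript
import { createHash } from "node:crypto";

// The same logical task always hashes to the same id, so enqueueing it
// twice is a no-op instead of a duplicate job.
function jobKey(kind: string, entityId: string, bucket: string): string {
  return createHash("sha256")
    .update(kind + ":" + entityId + ":" + bucket)
    .digest("hex")
    .slice(0, 24);
}

// With BullMQ the key becomes `jobId`, alongside retry and backoff options
// (per BullMQ's documented job options; queue setup omitted here):
//
//   await queue.add("recount-badges", { userId }, {
//     jobId: jobKey("recount-badges", userId, "2026-03-21"),
//     attempts: 5,
//     backoff: { type: "exponential", delay: 1000 },
//   });
```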

&lt;p&gt;Reputation scoring runs inside transactions to avoid race conditions and gaming exploits. Badge rarity stats are cached and periodically recomputed, balancing performance with accuracy.&lt;/p&gt;

&lt;p&gt;Not the most visible features, but they're what separates a prototype from something that actually holds up in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Sovereignty and GDPR
&lt;/h2&gt;

&lt;p&gt;Infrastructure runs on EU-hosted servers (Hetzner) behind Cloudflare. More importantly, privacy requirements shaped the data model from the beginning rather than being added later.&lt;/p&gt;

&lt;p&gt;Soft-deletion has configurable time windows. Audit logs have retention policies. Analytics are consent-based. Account deletion distinguishes between voluntary anonymization and GDPR erasure.&lt;/p&gt;

&lt;p&gt;Aligning legal constraints with the technical design early avoids the painful reality of retrofitting compliance into an already fragile system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deliberate Tradeoffs
&lt;/h2&gt;

&lt;p&gt;I consciously avoided:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;BaaS&lt;/strong&gt; - I wanted full control over transactions and data flow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Microservices&lt;/strong&gt; - the domain is tightly coupled; splitting it would add complexity without real benefit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event sourcing&lt;/strong&gt; - no clear need for replay mechanisms, so it would be unnecessary overhead.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'd rather have a monolith I can reason about than a distributed system that fights me. The codebase stays straightforward to navigate and extend.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;Building IHA meant working across the entire stack - schema design, background workers, moderation workflows, deployment - and keeping it all coherent as a solo developer. It's been a rewarding exercise in domain-first modeling, operational reliability, and making architectural decisions that hold up over time.&lt;/p&gt;

&lt;p&gt;The platform is live at &lt;a href="https://ithappenedagain.fyi" rel="noopener noreferrer"&gt;ithappenedagain.fyi&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>systemdesign</category>
      <category>nextjs</category>
      <category>architecture</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>Grimoire: A Retrospective on Building Open Source Tools</title>
      <dc:creator>Robert Goniszewski</dc:creator>
      <pubDate>Wed, 18 Feb 2026 10:37:47 +0000</pubDate>
      <link>https://forem.com/goniszewski/grimoire-a-retrospective-on-building-open-source-tools-3l3l</link>
      <guid>https://forem.com/goniszewski/grimoire-a-retrospective-on-building-open-source-tools-3l3l</guid>
      <description>&lt;p&gt;I created &lt;strong&gt;Grimoire&lt;/strong&gt; to solve a personal problem: most existing bookmark managers were either too basic, overloaded with features I didn’t need, or architecturally heavy for something that should be simple (I was picky). I wanted a simple yet intuitive, self-hosted solution that combined the user experience of a modern web app powered by a database that was easy to back up, inspect, and maintain - preferably SQLite.&lt;/p&gt;

&lt;p&gt;What started as an excuse to learn &lt;strong&gt;SvelteKit&lt;/strong&gt; turned into a short-lived but surprisingly popular open-source project. While Grimoire development is currently paused, the journey of building it forced me to confront real trade-offs around architecture, marketing, and technical debt.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Evolution of the Stack
&lt;/h2&gt;

&lt;p&gt;Grimoire's architecture was never set in stone; it evolved alongside my understanding of the problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1: The MVP (Velocity First)
&lt;/h3&gt;

&lt;p&gt;In the beginning (v0.1.0), I prioritized speed to market. I chose &lt;strong&gt;PocketBase&lt;/strong&gt; as a backend-as-a-service to handle authentication, the database, user management, and file storage - so I could focus on the frontend and business logic. This allowed me to ship a Dockerized MVP with categories, tags, and basic CRUD operations very quickly. Of course, it also had a dark mode.&lt;/p&gt;

&lt;p&gt;It worked. It was simple. It attracted users.&lt;/p&gt;

&lt;p&gt;But it also set the stage for future constraints and obstacles.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 2: The Great Refactor (Architecture First)
&lt;/h3&gt;

&lt;p&gt;As the feature set grew, I hit a wall. The tight coupling with PocketBase made custom logic difficult, and the codebase felt unmaintainable in the long run.&lt;/p&gt;

&lt;p&gt;In v0.4.0, I made the difficult decision to halt feature development and perform a massive refactor.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Decoupling:&lt;/strong&gt; I moved away from the BaaS dependency to a custom backend.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance:&lt;/strong&gt; I adopted &lt;strong&gt;Bun&lt;/strong&gt; as the runtime and &lt;strong&gt;Drizzle ORM&lt;/strong&gt; for type-safe database interactions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cleanup:&lt;/strong&gt; I deleted nearly as much code as I wrote, streamlining the application logic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This pivot resulted in a swifter UI, faster metadata processing, and a codebase that was significantly easier for contributors to navigate.&lt;/p&gt;

&lt;p&gt;The lesson there was blunt: early convenience becomes long-term rigidity if you outgrow the abstraction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building an Ecosystem
&lt;/h2&gt;

&lt;p&gt;I realized early on that a bookmark manager that requires manual copy-paste is already failing. Grimoire needed to meet users where they already were: in the browser tab.&lt;/p&gt;

&lt;p&gt;To solve this, I developed the &lt;strong&gt;Grimoire Companion&lt;/strong&gt;, a browser extension for Chrome and Firefox.&lt;/p&gt;

&lt;p&gt;That required:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;designing an external API&lt;/li&gt;
&lt;li&gt;documenting it with OpenAPI&lt;/li&gt;
&lt;li&gt;implementing token-based authentication&lt;/li&gt;
&lt;li&gt;handling secure communication between extension and instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I had to buckle up and start properly thinking about stability, security, and versioning.&lt;/p&gt;
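
&lt;p&gt;The token side of that list can be sketched like this - the hashing choice here is an assumption for illustration, not Grimoire's actual implementation:&lt;/p&gt;

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Only a hash of each API token is stored server-side, so a leaked
// database does not leak usable tokens.
function hashToken(token: string): Buffer {
  return createHash("sha256").update(token).digest();
}

// Constant-time comparison avoids leaking prefix matches through timing.
// Both digests are 32 bytes, satisfying timingSafeEqual's equal-length rule.
function verifyToken(presented: string, storedHash: Buffer): boolean {
  return timingSafeEqual(hashToken(presented), storedHash);
}
```

&lt;p&gt;The extension then sends the raw token with each request, and the instance looks up and compares the stored hash.&lt;/p&gt;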

&lt;h2&gt;
  
  
  The Challenges of In-Flight Changes
&lt;/h2&gt;

&lt;p&gt;The most complex technical challenge was not building features, but taking responsibility for existing users' data. When I moved from PocketBase to Drizzle, I could not leave them behind.&lt;/p&gt;

&lt;p&gt;I had to engineer a dedicated &lt;strong&gt;migration tool&lt;/strong&gt; to map data from the old schema to the new structure, ensuring users kept their bookmarks, tags, and stored images through the upgrade. Even though Grimoire was far from stable and a breaking change would have surprised no one, I chose to do it the proper way, even if that meant a lot more work.&lt;/p&gt;

&lt;p&gt;Empathy for the user often manifests as robust migration scripts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Current Status: On the Shelf
&lt;/h2&gt;

&lt;p&gt;After a successful run and multiple releases, I decided that Grimoire had achieved what I needed it to.&lt;/p&gt;

&lt;p&gt;The landscape of web development moves fast. While the current iteration served its purpose well, I have paused active development to focus on other projects. I may resurrect Grimoire in the future, but with a fresh perspective, aiming for different goals and likely leveraging a more modern approach (perhaps focusing more heavily on AI-driven organization or local-first architectures).&lt;/p&gt;

&lt;p&gt;For now, the code remains open and available as a reference implementation for a modern SvelteKit application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Stack Overview
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; SvelteKit 2 (Vite 5)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runtime:&lt;/strong&gt; Bun&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database:&lt;/strong&gt; Drizzle ORM (SQLite/PostgreSQL)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auth:&lt;/strong&gt; Lucia&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure:&lt;/strong&gt; Docker Compose&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/goniszewski/grimoire" rel="noopener noreferrer"&gt;github.com/goniszewski/grimoire&lt;/a&gt;&lt;/p&gt;

</description>
      <category>sveltekit</category>
      <category>bunjs</category>
      <category>drizzleorm</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
