<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Usama Ashraf</title>
    <description>The latest articles on Forem by Usama Ashraf (@usamaashraf).</description>
    <link>https://forem.com/usamaashraf</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F62589%2F2a6f4db7-11d5-4af2-aa79-22e5d9262d4c.jpg</url>
      <title>Forem: Usama Ashraf</title>
      <link>https://forem.com/usamaashraf</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/usamaashraf"/>
    <language>en</language>
    <item>
      <title>Using Events In Node.js The Right Way</title>
      <dc:creator>Usama Ashraf</dc:creator>
      <pubDate>Sat, 03 Nov 2018 18:06:55 +0000</pubDate>
      <link>https://forem.com/usamaashraf/using-events-in-nodejs-the-right-way-449b</link>
      <guid>https://forem.com/usamaashraf/using-events-in-nodejs-the-right-way-449b</guid>
      <description>&lt;p&gt;Before event-driven programming became popular, the standard way to communicate between different parts of an application was pretty straightforward: a component that wanted to send out a message to another one explicitly invoked a method on that component. But event-driven code is written to &lt;em&gt;react&lt;/em&gt; rather than be &lt;em&gt;called&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;The Benefits Of Eventing&lt;/h2&gt;

&lt;p&gt;This approach makes our components much more loosely coupled. Basically, as we continue to write an application we’ll identify events along the way, fire them at the right time and attach one or more listeners to each one. Extending functionality becomes much easier: we can simply add more listeners to a particular event without tampering with the existing listeners or with the part of the application where the event was fired. What we’re describing is essentially the Observer pattern.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkby6thd07mrctlwzkub.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkby6thd07mrctlwzkub.jpeg" width="" height=""&gt;&lt;/a&gt;&lt;br&gt;
&lt;sup&gt;&lt;em&gt;Source: &lt;a href="https://www.dofactory.com/javascript/observer-design-pattern" rel="noopener noreferrer"&gt;https://www.dofactory.com/javascript/observer-design-pattern&lt;/a&gt;&lt;/em&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;h2&gt;Designing An Event-Driven Architecture&lt;/h2&gt;

&lt;p&gt;Identifying events is pretty important since we don’t want to end up having to remove or replace existing events from the system, which might force us to delete or modify any number of listeners attached to them. The general principle I use is to &lt;em&gt;consider firing an event only when a unit of business logic finishes execution&lt;/em&gt;.&lt;br&gt;
Say you want to send out a bunch of different emails after a user’s registration. The registration process itself might involve many complicated steps, queries etc., but from a business point of view it is &lt;em&gt;one step&lt;/em&gt;, and each of the emails to be sent out is an individual step as well. So it makes sense to fire an event as soon as registration finishes and attach multiple listeners to it, each of which is responsible for sending out one type of email.&lt;/p&gt;

&lt;p&gt;Node’s asynchronous, event-driven architecture has certain kinds of objects called “emitters” that emit named events, which cause functions called “listeners” to be invoked. All objects that emit events are instances of the &lt;a href="https://nodejs.org/api/events.html#events_class_eventemitter" rel="noopener noreferrer"&gt;EventEmitter&lt;/a&gt; class, and using it we can create our own events.&lt;/p&gt;
&lt;h2&gt;An Example&lt;/h2&gt;

&lt;p&gt;Let’s use the built-in &lt;a href="https://nodejs.org/api/events.html" rel="noopener noreferrer"&gt;events&lt;/a&gt; module (which I encourage you to check out in detail) to gain access to &lt;code&gt;EventEmitter&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// my_emitter.js&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;EventEmitter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;events&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;myEmitter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;EventEmitter&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;myEmitter&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the part of the application where our server receives an HTTP request, saves a new user and emits an event accordingly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// registration_handler.js&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;myEmitter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./my_emitter&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Perform the registration steps&lt;/span&gt;

&lt;span class="c1"&gt;// Pass the new user object as the message passed through by this event.&lt;/span&gt;
&lt;span class="nx"&gt;myEmitter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user-registered&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And a separate module where we attach a listener:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// listener.js&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;myEmitter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./my_emitter&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;myEmitter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user-registered&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Send an email or whatever.&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s good practice to &lt;em&gt;separate policy from implementation&lt;/em&gt;. In this case, policy means which listeners are subscribed to which events; implementation means the listeners themselves.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// subscriptions.js&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;myEmitter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./my_emitter&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sendEmailOnRegistration&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./send_email_on_registration&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;someOtherListener&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./some_other_listener&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;


&lt;span class="nx"&gt;myEmitter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user-registered&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sendEmailOnRegistration&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;myEmitter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user-registered&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;someOtherListener&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// send_email_on_registration.js&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Send a welcome email or whatever.&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This separation also makes the listener reusable, i.e. it can be attached to other events that send out the same message (a user object). It’s also important to mention that &lt;em&gt;when multiple listeners are attached to a single event, they are executed synchronously and in the order that they were attached&lt;/em&gt;. Hence &lt;code&gt;someOtherListener&lt;/code&gt; will run after &lt;code&gt;sendEmailOnRegistration&lt;/code&gt; finishes execution.&lt;br&gt;
However, if you want a listener’s work to happen asynchronously, you can simply wrap its implementation with &lt;code&gt;setImmediate&lt;/code&gt; (the listener is still invoked synchronously, but its body is deferred to a later turn of the event loop) like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// send_email_on_registration.js&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;setImmediate&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Send a welcome email or whatever.&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Keep Your Listeners Clean&lt;/h2&gt;

&lt;p&gt;Stick to the Single Responsibility Principle when writing listeners: each listener should do one thing only and do it well. Avoid, for instance, writing many conditionals within a listener that decide what to do based on the data (message) transmitted by the event. In that case it would be much more appropriate to use different events:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// registration_handler.js&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;myEmitter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./my_emitter&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Perform the registration steps&lt;/span&gt;

&lt;span class="c1"&gt;// The application should react differently if the new user has been activated instantly.&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;activated&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;myEmitter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user-registered:activated&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;myEmitter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user-registered&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// subscriptions.js&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;myEmitter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./my_emitter&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sendEmailOnRegistration&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./send_email_on_registration&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;someOtherListener&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./some_other_listener&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;doSomethingEntirelyDifferent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./do_something_entirely_different&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;


&lt;span class="nx"&gt;myEmitter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user-registered&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sendEmailOnRegistration&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;myEmitter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user-registered&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;someOtherListener&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;myEmitter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user-registered:activated&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;doSomethingEntirelyDifferent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Detaching Listeners Explicitly When Necessary&lt;/h2&gt;

&lt;p&gt;In the previous example our listeners were totally independent functions. But when a listener is associated with an object (i.e. it’s a method), it has to be manually detached from the events it subscribed to. Otherwise the object will never be garbage-collected, since a part of it (the listener) will continue to be referenced by an external object (the emitter): a classic recipe for a memory leak.&lt;/p&gt;

&lt;p&gt;For example, say we’re building a chat application and we want each user object itself to be responsible for showing a notification whenever a new message arrives in a chat room that the user has connected to. We might do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// chat_user.js&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ChatUser&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

  &lt;span class="nf"&gt;displayNewMessageNotification&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;newMessage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Push an alert message or something.&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// `chatroom` is an instance of EventEmitter.&lt;/span&gt;
  &lt;span class="nf"&gt;connectToChatroom&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chatroom&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;chatroom&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;message-received&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;displayNewMessageNotification&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;disconnectFromChatroom&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chatroom&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;chatroom&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;removeListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;message-received&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;displayNewMessageNotification&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the user closes their tab or loses their internet connection for a while, we’ll naturally want to fire a server-side callback that notifies the other users that one of them just went offline. At this point it no longer makes sense for &lt;code&gt;displayNewMessageNotification&lt;/code&gt; to be invoked for the offline user, but it will continue to be called on new messages unless we remove it explicitly. If we don’t, then aside from the unnecessary calls, the user object will also stay in memory indefinitely. So be sure to call &lt;code&gt;disconnectFromChatroom&lt;/code&gt; in the server-side callback that executes whenever a user goes offline.&lt;/p&gt;

&lt;h2&gt;Beware&lt;/h2&gt;

&lt;p&gt;The loose coupling in event-driven architectures can also lead to increased complexity if we’re not careful. It can be difficult to keep track of dependencies in our system, i.e. which listeners end up executing in response to which events. Our application becomes especially prone to this problem if we start emitting events from within listeners, possibly triggering chains of unexpected events.&lt;/p&gt;

</description>
      <category>node</category>
      <category>events</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Serialization in Node REST APIs</title>
      <dc:creator>Usama Ashraf</dc:creator>
      <pubDate>Thu, 27 Sep 2018 03:00:17 +0000</pubDate>
      <link>https://forem.com/usamaashraf/serialization-in-node-rest-apis-1fkl</link>
      <guid>https://forem.com/usamaashraf/serialization-in-node-rest-apis-1fkl</guid>
      <description>&lt;p&gt;One of the things I learned a while back was that database columns should not be mapped directly to the JSON responses an API serves (columns can change, backward compatibility etc).&lt;br&gt;
There should be some layer of separation between the logic of forming the response and fetching/querying it.&lt;br&gt;
In Rails we have Active Model Serializers and the new fast_jsonapi gem from Netflix. Is there a widely-used analogous package for Node or some best practices that large-scale organizations using Node (like Ebay, Paypal, Netflix etc) employ?&lt;br&gt;
Assuming we're talking about an API built up on Express.&lt;/p&gt;

</description>
      <category>help</category>
      <category>node</category>
    </item>
    <item>
      <title>Juggling Multiple Languages Simultaneously</title>
      <dc:creator>Usama Ashraf</dc:creator>
      <pubDate>Fri, 21 Sep 2018 16:54:29 +0000</pubDate>
      <link>https://forem.com/usamaashraf/juggling-multiple-languages-simultaneously-i4o</link>
      <guid>https://forem.com/usamaashraf/juggling-multiple-languages-simultaneously-i4o</guid>
      <description>&lt;p&gt;I've been coding Javascript, Ruby and Go almost everyday for the past few months and I'm just wondering if in the long run it hurts my chances of learning expertise in either of them. I clearly feel the "switches" I have to make daily.&lt;br&gt;
Plus, is the "jack of all trades, master of none" problem a real and significant one?&lt;/p&gt;

</description>
      <category>discuss</category>
    </item>
    <item>
      <title>Playing With Apache Storm On Docker - Like A Boss</title>
      <dc:creator>Usama Ashraf</dc:creator>
      <pubDate>Tue, 15 May 2018 09:49:46 +0000</pubDate>
      <link>https://forem.com/usamaashraf/playing-with-apache-storm-on-docker---like-a-boss-4bgb</link>
      <guid>https://forem.com/usamaashraf/playing-with-apache-storm-on-docker---like-a-boss-4bgb</guid>
      <description>&lt;p&gt;This article is not the ultimate guide to &lt;a href="https://storm.apache.org/" rel="noopener noreferrer"&gt;Storm&lt;/a&gt; nor is it meant to be. Storm's pretty huge, and just one long-read probably can't do it justice anyways. Of course, any additions, feedback or constructive criticism will be greatly appreciated.&lt;br&gt;
OK, now that that's out of the way, let's see what we'll be covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The necessity of Storm, the 'why' of it, what it is and what it isn't&lt;/li&gt;
&lt;li&gt;A bird's eye view of how it works&lt;/li&gt;
&lt;li&gt;What a Storm topology roughly looks like in code (Java)&lt;/li&gt;
&lt;li&gt;Setting up and playing with a production-worthy Storm cluster on Docker&lt;/li&gt;
&lt;li&gt;A few words on message processing reliability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm also assuming that you're at least somewhat familiar with &lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; and containerization.&lt;/p&gt;

&lt;p&gt;Continuous streams of data are ubiquitous and becoming even more so with the &lt;a href="http://www.businessinsider.com/75-billion-devices-will-be-connected-to-the-internet-by-2020-2013-10#ixzz3Il8nN9oB%20%20%20" rel="noopener noreferrer"&gt;increasing number of IoT devices in use&lt;/a&gt;. Of course this data is stored, processed and analyzed to provide predictive, actionable results. But petabytes take a long time to analyze, even with &lt;a href="http://hadoop.apache.org/" rel="noopener noreferrer"&gt;Hadoop&lt;/a&gt; (as good as MapReduce may be) or &lt;a href="https://spark.apache.org/" rel="noopener noreferrer"&gt;Spark&lt;/a&gt; (a remedy to the limitations of MapReduce). Secondly, very often we don't need to deduce patterns over long periods of time: of the petabytes of incoming data collected over months, &lt;em&gt;at any given moment&lt;/em&gt; we might only need a real-time snapshot. Perhaps we don't need to know the longest-trending hashtag over five years, just the one right now. This is what Storm is built for: to accept tons of data coming in extremely fast, possibly from various sources, analyze it, and publish real-time updates to a UI or some other place, &lt;em&gt;without storing any of it itself&lt;/em&gt;.&lt;/p&gt;
&lt;h2&gt;How It Works&lt;/h2&gt;

&lt;p&gt;The architecture of Storm can be compared to a network of roads connecting a set of checkpoints. Traffic begins at a certain checkpoint (called a &lt;strong&gt;spout&lt;/strong&gt;) and passes through other checkpoints (called &lt;strong&gt;bolts&lt;/strong&gt;). The traffic is, of course, the stream of data that is retrieved by the &lt;strong&gt;spout&lt;/strong&gt; (from a data source, a public API for example) and routed to various &lt;strong&gt;bolts&lt;/strong&gt; where the data is filtered, sanitized, aggregated, analyzed, sent to a UI for people to view, or passed to any other target. The network of spouts and bolts is called a &lt;strong&gt;topology&lt;/strong&gt;, and the data flows in the form of &lt;strong&gt;tuples&lt;/strong&gt; (lists of values that may have different types).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tepohdev1q4nvgacjya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tepohdev1q4nvgacjya.png" width="700" height="371"&gt;&lt;/a&gt;&lt;br&gt;
&lt;sup&gt;&lt;em&gt;Source: &lt;a href="https://dzone.com/articles/apache-storm-architecture" rel="noopener noreferrer"&gt;https://dzone.com/articles/apache-storm-architecture&lt;/a&gt;&lt;/em&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;One important thing to talk about is the direction of the data traffic. Conventionally, we would have one or multiple spouts reading the data from an API, a &lt;a href="https://kafka.apache.org/documentation/#intro_topics" rel="noopener noreferrer"&gt;Kafka topic&lt;/a&gt; or some other queuing system. The data would then flow &lt;em&gt;one-way&lt;/em&gt; to one or multiple bolts which may forward it to other bolts and so on. Bolts may publish the analyzed data to a UI or to another bolt. But &lt;em&gt;the traffic is almost always unidirectional&lt;/em&gt;, like a DAG. Although it is certainly possible to make cycles, we're unlikely to need such a convoluted topology.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.tutorialspoint.com/apache_storm/apache_storm_installation.htm" rel="noopener noreferrer"&gt;Installing a Storm release&lt;/a&gt; involves a number of steps, which you're free to follow on your machine. But later on I'll be using Docker containers for a Storm cluster deployment and the images will take care of setting up everything we need.&lt;/p&gt;
&lt;h2&gt;Some Code&lt;/h2&gt;

&lt;p&gt;While Storm does offer &lt;a href="http://storm.apache.org/about/multi-language.html" rel="noopener noreferrer"&gt;support for other languages&lt;/a&gt;, most topologies are written in Java since it's the most efficient option we have.&lt;/p&gt;

&lt;p&gt;A very basic spout that just emits random digits may look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;RandomDigitSpout&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;BaseRichSpout&lt;/span&gt; 
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// To output tuples from spout to the next stage bolt&lt;/span&gt;
  &lt;span class="nc"&gt;SpoutOutputCollector&lt;/span&gt; &lt;span class="n"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;  

  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;nextTuple&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; 
  &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;randomDigit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ThreadLocalRandom&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;current&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;nextInt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Emit the digit to the next stage bolt&lt;/span&gt;
    &lt;span class="n"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;emit&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Values&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;randomDigit&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;declareOutputFields&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;OutputFieldsDeclarer&lt;/span&gt; &lt;span class="n"&gt;outputFieldsDeclarer&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Tell Storm the schema of the output tuple for this spout.&lt;/span&gt;
    &lt;span class="c1"&gt;// It consists of a single column called 'random-digit'.&lt;/span&gt;
    &lt;span class="n"&gt;outputFieldsDeclarer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;declare&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Fields&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"random-digit"&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And a simple bolt that takes in the stream of random digits and emits only the even ones:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;EvenDigitBolt&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;BaseRichBolt&lt;/span&gt; 
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// To output tuples from this bolt to the next bolt.&lt;/span&gt;
  &lt;span class="nc"&gt;OutputCollector&lt;/span&gt; &lt;span class="n"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Tuple&lt;/span&gt; &lt;span class="n"&gt;tuple&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; 
  &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Get the 1st column 'random-digit' from the tuple&lt;/span&gt;
    &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;randomDigit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tuple&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getInt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;randomDigit&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;emit&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Values&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;randomDigit&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;declareOutputFields&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;OutputFieldsDeclarer&lt;/span&gt; &lt;span class="n"&gt;declarer&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; 
  &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Tell Storm the schema of the output tuple for this bolt.&lt;/span&gt;
    &lt;span class="c1"&gt;// It consists of a single column called 'even-digit'&lt;/span&gt;
    &lt;span class="n"&gt;declarer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;declare&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Fields&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"even-digit"&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another simple bolt that'll receive the filtered stream from &lt;code&gt;EvenDigitBolt&lt;/code&gt;, and just multiply each even digit by 10 and emit it forward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MultiplyByTenBolt&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;BaseRichBolt&lt;/span&gt; 
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nc"&gt;OutputCollector&lt;/span&gt; &lt;span class="n"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Tuple&lt;/span&gt; &lt;span class="n"&gt;tuple&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; 
  &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Get 'even-digit' from the tuple.&lt;/span&gt;
    &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;evenDigit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tuple&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getInt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;collector&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;emit&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Values&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;evenDigit&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;declareOutputFields&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;OutputFieldsDeclarer&lt;/span&gt; &lt;span class="n"&gt;declarer&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; 
  &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;declarer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;declare&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Fields&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"even-digit-multiplied-by-ten"&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
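&lt;p&gt;Outside of Storm, the data flow these two bolts implement is just a filter followed by a map. As a plain-Java sanity check (the class and method names here are made up for illustration):&lt;/p&gt;

```java
import java.util.List;
import java.util.stream.Collectors;

public class PipelineSketch {
    // The same transformation the two bolts perform, as plain Java streams:
    // keep only the even digits, then multiply each by 10.
    static List<Integer> run(List<Integer> digits) {
        return digits.stream()
                     .filter(d -> d % 2 == 0)
                     .map(d -> d * 10)
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(run(List.of(3, 4, 7, 8, 0))); // [40, 80, 0]
    }
}
```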



&lt;p&gt;Putting them together to form our topology:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kn"&gt;package&lt;/span&gt; &lt;span class="nn"&gt;packagename&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;

&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;OurSimpleTopology&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; 

  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="kd"&gt;throws&lt;/span&gt; &lt;span class="nc"&gt;Exception&lt;/span&gt;
  &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Create the topology&lt;/span&gt;
    &lt;span class="nc"&gt;TopologyBuilder&lt;/span&gt; &lt;span class="n"&gt;builder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TopologyBuilder&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

    &lt;span class="c1"&gt;// Attach the random digit spout to the topology.&lt;/span&gt;
    &lt;span class="c1"&gt;// Use just 1 thread for the spout.&lt;/span&gt;
    &lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setSpout&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"random-digit-spout"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RandomDigitSpout&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

    &lt;span class="c1"&gt;// Connect the even digit bolt to our spout. &lt;/span&gt;
    &lt;span class="c1"&gt;// The bolt will use 2 threads and the digits will be randomly&lt;/span&gt;
    &lt;span class="c1"&gt;// shuffled/distributed among the 2 threads.&lt;/span&gt;
    &lt;span class="c1"&gt;// The third parameter is formally called the parallelism hint.&lt;/span&gt;
    &lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setBolt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"even-digit-bolt"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;EvenDigitBolt&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
           &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;shuffleGrouping&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"random-digit-spout"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Connect the multiply-by-10 bolt to our even digit bolt.&lt;/span&gt;
    &lt;span class="c1"&gt;// This bolt will use 4 threads, among which data from the&lt;/span&gt;
    &lt;span class="c1"&gt;// even digit bolt will be shuffled/distributed randomly.&lt;/span&gt;
    &lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setBolt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"multiplied-by-ten-bolt"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MultiplyByTenBolt&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
           &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;shuffleGrouping&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"even-digit-bolt"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Create a configuration object.&lt;/span&gt;
    &lt;span class="nc"&gt;Config&lt;/span&gt; &lt;span class="n"&gt;conf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Config&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

    &lt;span class="c1"&gt;// The number of independent JVM processes this topology will use.&lt;/span&gt;
    &lt;span class="n"&gt;conf&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setNumWorkers&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Submit our topology with the configuration.&lt;/span&gt;
    &lt;span class="nc"&gt;StormSubmitter&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;submitTopology&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"our-simple-topology"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;conf&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;createTopology&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Parallelism In Storm Topologies
&lt;/h3&gt;

&lt;p&gt;Fully understanding parallelism in Storm can be daunting, at least in my experience. A topology requires at least one process to run in (obviously). Within this process, we can parallelize the execution of our spouts and bolts using threads. In our example, &lt;code&gt;RandomDigitSpout&lt;/code&gt; will launch just one thread, and the data spewed from that thread will be distributed among the 2 threads of the &lt;code&gt;EvenDigitBolt&lt;/code&gt;. But the way this distribution happens, referred to as the &lt;strong&gt;stream grouping&lt;/strong&gt;, can be important. For example, you may have a stream of temperature recordings from two cities, where the tuples emitted by the spout look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// City name, temperature, time of recording

("Atlanta",       94, "2018-05-11 23:14")
("New York City", 75, "2018-05-11 23:15")
("New York City", 76, "2018-05-11 23:16")
("Atlanta",       96, "2018-05-11 23:15")
("New York City", 77, "2018-05-11 23:17")
("Atlanta",       95, "2018-05-11 23:16")
("New York City", 76, "2018-05-11 23:18")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Suppose we're attaching just one bolt whose job is to calculate the running average temperature of each city. If we can reasonably expect to get roughly an equal number of tuples from both cities in any given time interval, it would make sense to dedicate 2 threads to our bolt, sending Atlanta's data to one of them and New York's to the other. A &lt;strong&gt;fields grouping&lt;/strong&gt; serves our purpose: it partitions data among the threads by the value of the field specified in the grouping:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="c1"&gt;// The tuples with the same city name will go to the same thread.&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setBolt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"avg-temp-bolt"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AvgTempBolt&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
       &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;fieldsGrouping&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"temp-spout"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Fields&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"city_name"&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
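&lt;p&gt;To make the idea concrete, here's a small, self-contained sketch of how a fields grouping can deterministically map a field value to one of the bolt's threads by hashing it. This is an illustration, not Storm's actual internals; the class and method names are hypothetical:&lt;/p&gt;

```java
public class FieldsGroupingSketch {
    // Hypothetical stand-in for Storm's internal partitioning: hash the
    // grouping field and map it onto one of the bolt's threads, so tuples
    // with the same city name always reach the same thread.
    static int targetThread(String cityName, int numThreads) {
        return Math.floorMod(cityName.hashCode(), numThreads);
    }

    public static void main(String[] args) {
        // Every "Atlanta" tuple lands on one fixed thread, and every
        // "New York City" tuple lands on one fixed thread as well.
        System.out.println("Atlanta -> thread " + targetThread("Atlanta", 2));
        System.out.println("New York City -> thread " + targetThread("New York City", 2));
    }
}
```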



&lt;p&gt;And of course there are &lt;a href="http://www.corejavaguru.com/bigdata/storm/stream-groupings" rel="noopener noreferrer"&gt;other types of groupings as well&lt;/a&gt;. For most cases, though, the grouping probably won't matter much and you can just shuffle the data and throw it among the bolt threads randomly (&lt;strong&gt;shuffle grouping&lt;/strong&gt;).&lt;br&gt;
Now there's another important component: the number of worker processes that our topology will run on. &lt;em&gt;The total number of threads that we specified will then be divided as equally as possible among the worker processes&lt;/em&gt;. So in our example random digit topology, we had 1 spout thread, 2 even-digit bolt threads and 4 multiply-by-ten bolt threads (7 in total). With 2 worker processes, each worker would run 2 multiply-by-ten bolt threads and 1 even-digit bolt thread, and one of the workers would also run the 1 spout thread.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplzbyrad39tpr88lotx1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplzbyrad39tpr88lotx1.jpg" width="770" height="444"&gt;&lt;/a&gt;&lt;br&gt;
Of course, the 2 worker processes will have their main threads, which in turn launch the spout and bolt threads. So all in all we'll have 9 threads. The spout and bolt threads are collectively called &lt;strong&gt;executors&lt;/strong&gt;.&lt;/p&gt;
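&lt;p&gt;The split of executors across workers can be sketched as simple arithmetic. This is a hypothetical helper for illustration, not Storm's actual scheduler:&lt;/p&gt;

```java
public class ExecutorDivisionSketch {
    // Spread totalExecutors across numWorkers as evenly as possible;
    // lower-indexed workers absorb the remainder first.
    static int executorsOnWorker(int totalExecutors, int numWorkers, int workerIndex) {
        int base = totalExecutors / numWorkers;
        int remainder = totalExecutors % numWorkers;
        return base + (workerIndex < remainder ? 1 : 0);
    }

    public static void main(String[] args) {
        int total = 1 + 2 + 4; // 1 spout + 2 even-digit + 4 multiply-by-ten executors
        // With 2 workers: one worker runs 4 executors, the other runs 3.
        System.out.println(executorsOnWorker(total, 2, 0)); // 4
        System.out.println(executorsOnWorker(total, 2, 1)); // 3
    }
}
```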

&lt;p&gt;&lt;em&gt;It's important to realize that if you set a spout's parallelism hint &amp;gt; 1 (i.e. multiple executors), you can end up emitting the same data several times. Say, the spout reads from the public Twitter stream API and uses two executors. That means that the bolts receiving the data from the spout will get the same tweet twice. It is only after the spout emits the tuples that data parallelism comes into play, i.e. the tuples get divided among the bolts according to the specified stream grouping.&lt;/em&gt;&lt;/p&gt;
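&lt;p&gt;A toy illustration of that caveat in plain Java (no Storm involved; the class and method names are made up): two executors reading the same source each emit the full stream, so downstream receives everything twice:&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.List;

public class DuplicateEmissionSketch {
    // Each spout executor independently reads the same source, so the
    // stream reaching the bolts contains every record once per executor.
    static List<String> received(List<String> source, int spoutExecutors) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < spoutExecutors; i++) {
            out.addAll(source); // each executor emits the whole stream
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> tweets = List.of("tweet-1", "tweet-2");
        // 2 executors -> 4 tuples downstream: each tweet arrives twice.
        System.out.println(received(tweets, 2).size()); // 4
    }
}
```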

&lt;p&gt;Running multiple workers on a single node would be fairly pointless. Later, however, we'll use a proper, distributed, multi-node cluster and see how workers are divided on different nodes.&lt;/p&gt;
&lt;h3&gt;
  
  
  Building Our Topology
&lt;/h3&gt;

&lt;p&gt;Here's the directory structure I suggest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yourproject/
            pom.xml  
            src/
                jvm/
                    packagename/
                           RandomDigitSpout.java
                           EvenDigitBolt.java
                           MultiplyByTenBolt.java
                           OurSimpleTopology.java
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://maven.apache.org/" rel="noopener noreferrer"&gt;Maven&lt;/a&gt; is commonly used for building Storm topologies, and it requires a &lt;code&gt;pom.xml&lt;/code&gt; file (The POM) that &lt;a href="https://maven.apache.org/guides/introduction/introduction-to-the-pom.html" rel="noopener noreferrer"&gt;defines various configuration details, project dependencies etc&lt;/a&gt;. Getting into the &lt;a href="https://maven.apache.org/pom.html" rel="noopener noreferrer"&gt;nitty-gritty of the POM&lt;/a&gt; will probably be an overkill here.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, we'll run &lt;strong&gt;&lt;code&gt;mvn clean&lt;/code&gt;&lt;/strong&gt; inside &lt;code&gt;yourproject&lt;/code&gt; to clear any compiled files we may have, making sure to compile each module from scratch.&lt;/li&gt;
&lt;li&gt;And then &lt;strong&gt;&lt;code&gt;mvn package&lt;/code&gt;&lt;/strong&gt; to compile our code and package it in an executable JAR file, inside a newly created &lt;code&gt;target&lt;/code&gt; folder. This might take quite a few minutes the first time, especially if your topology has many dependencies.&lt;/li&gt;
&lt;li&gt;To submit our topology: &lt;strong&gt;&lt;code&gt;storm jar target/packagename-{version number}.jar packagename.OurSimpleTopology&lt;/code&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hopefully, by now the gap between concept and code in Storm has been somewhat bridged. However, no serious Storm deployment will be a single topology instance running on one server.&lt;/p&gt;

&lt;h2&gt;
  
  
  What A Storm Cluster Looks Like
&lt;/h2&gt;

&lt;p&gt;To take full advantage of Storm's &lt;a href="http://storm.apache.org/about/scalable.html" rel="noopener noreferrer"&gt;scalability&lt;/a&gt; and &lt;a href="http://storm.apache.org/releases/current/Fault-tolerance.html" rel="noopener noreferrer"&gt;fault-tolerance&lt;/a&gt;, any production-grade topology would be submitted to a cluster of machines.&lt;/p&gt;

&lt;p&gt;Storm distributions are installed on the master node (Nimbus) and all the slave nodes (Supervisors).&lt;br&gt;
The &lt;em&gt;master&lt;/em&gt; node runs the Storm &lt;a href="https://github.com/apache/storm/blob/exclamation/storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java" rel="noopener noreferrer"&gt;Nimbus&lt;/a&gt; daemon and the Storm UI. The &lt;em&gt;slave&lt;/em&gt; nodes run the Storm &lt;a href="https://github.com/apache/storm/blob/exclamation/storm-server/src/main/java/org/apache/storm/daemon/supervisor/Supervisor.java" rel="noopener noreferrer"&gt;Supervisor&lt;/a&gt; daemons. A &lt;a href="http://zookeeper.apache.org/" rel="noopener noreferrer"&gt;Zookeeper&lt;/a&gt; daemon on a separate node is used for coordination between the master node and the slave nodes. Zookeeper, by the way, is only used for cluster management and never for any kind of message passing; it's not as if spouts and bolts send data to each other through it. The Nimbus daemon finds available Supervisors via Zookeeper, to which the Supervisor daemons register themselves, and it handles other managerial tasks, some of which will become clear shortly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3t68y2nqwxv9av4px8q.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3t68y2nqwxv9av4px8q.jpg" width="714" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;sup&gt;The Storm UI is a web interface used to manage the state of our cluster. We'll get to this later.&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Our topology is submitted to the Nimbus daemon on the master node and then distributed among the worker processes running on the slave/supervisor nodes&lt;/strong&gt;. Because of Zookeeper, it doesn't matter how many slave/supervisor nodes you run initially, as you can always seamlessly add more and Storm will automatically integrate them into the cluster. &lt;/p&gt;

&lt;p&gt;Whenever we start a Supervisor, it allocates a certain (configurable) number of worker processes that submitted topologies can then use. So in the image above there are a total of 5 &lt;em&gt;allocated workers&lt;/em&gt;. Consider this line:&lt;br&gt;
&lt;strong&gt;&lt;code&gt;conf.setNumWorkers(5)&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
It tells Storm that the topology should try to use a total of 5 workers. Since our two Supervisor nodes have 5 &lt;em&gt;allocated workers&lt;/em&gt; between them, each of the 5 allocated worker processes will run a part of the topology. If we had done:&lt;br&gt;
&lt;strong&gt;&lt;code&gt;conf.setNumWorkers(4)&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
then one worker process would have remained idle/unused. And if we had asked for 6 workers with only 5 allocated, only 5 actual topology workers would have been functional.&lt;/p&gt;
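&lt;p&gt;That capping rule boils down to one line of arithmetic. A hypothetical helper for illustration, not Storm's actual scheduler:&lt;/p&gt;

```java
public class WorkerAllocationSketch {
    // A topology gets at most as many workers as the Supervisors have allocated.
    static int actualWorkers(int requestedWorkers, int allocatedSlots) {
        return Math.min(requestedWorkers, allocatedSlots);
    }

    public static void main(String[] args) {
        System.out.println(actualWorkers(5, 5)); // 5: every allocated worker is used
        System.out.println(actualWorkers(4, 5)); // 4: one allocated worker stays idle
        System.out.println(actualWorkers(6, 5)); // 5: capped by the allocation
    }
}
```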

&lt;p&gt;Before we set this all up using Docker, a few important things to keep in mind regarding fault-tolerance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If any worker on any slave node dies, the Supervisor daemon will have it restarted. If restarting repeatedly fails, the worker will be reassigned to another machine.&lt;/li&gt;
&lt;li&gt;If an entire slave node dies, its share of the work will be given to another supervisor/slave node.&lt;/li&gt;
&lt;li&gt;If the Nimbus goes down, the workers will remain unaffected. However, until the Nimbus is restored workers won't be reassigned to other slave nodes if, say, their node crashes.&lt;/li&gt;
&lt;li&gt;The Nimbus &amp;amp; Supervisors are themselves stateless. However, some state information is stored in Zookeeper so that things can pick up where they left off if a node crashes or a daemon dies unexpectedly.&lt;/li&gt;
&lt;li&gt;Nimbus, Supervisor &amp;amp; Zookeeper daemons are all fail-fast. This means that they themselves are not very tolerant to unexpected errors, and will shut down if they encounter one. For this reason &lt;em&gt;they have to be run under supervision using a watchdog program that monitors them constantly and restarts them automatically if they ever crash&lt;/em&gt;. &lt;a href="http://supervisord.org/" rel="noopener noreferrer"&gt;Supervisord&lt;/a&gt; is probably the most popular option for this (not to be confused with the Storm Supervisor daemon).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Note: In most Storm clusters, the Nimbus itself is never deployed as a single instance but as a cluster. If this fault-tolerance is not incorporated and our sole Nimbus goes down, &lt;a href="https://hortonworks.com/blog/fault-tolerant-nimbus-in-apache-storm/" rel="noopener noreferrer"&gt;we'll lose the ability to submit new topologies, gracefully kill running topologies, reassign work to other Supervisor nodes if one crashes etc&lt;/a&gt;. For simplicity, our illustrative cluster will use a single instance. Similarly, the Zookeeper is very often deployed as a cluster but we'll use just one.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Dockerizing The Cluster
&lt;/h3&gt;

&lt;p&gt;Launching individual containers and all that goes along with them can be cumbersome, so I prefer to use &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt;. We'll be going with one Zookeeper node, one Nimbus node and one Supervisor node initially. They'll be defined as Compose services, all corresponding to one container each at the beginning. Later on, I'll use &lt;a href="https://docs.docker.com/compose/reference/scale/" rel="noopener noreferrer"&gt;Compose scaling&lt;/a&gt; to add another Supervisor node (container). Here's our &lt;a href="https://github.com/UsamaAshraf/coincident-hashtags-with-apache-storm/tree/exclamation" rel="noopener noreferrer"&gt;entire code&lt;/a&gt; &amp;amp; the project structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zookeeper/
         Dockerfile
storm-nimbus/
         Dockerfile
         storm.yaml
         code/
             pom.xml
             src/
                 jvm/
                     coincident_hashtags/
                                ExclamationTopology.java  
storm-supervisor/
         Dockerfile
         storm.yaml
docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And our &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.2'&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;zookeeper&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./zookeeper&lt;/span&gt;
        &lt;span class="c1"&gt;# Keep it running.  &lt;/span&gt;
        &lt;span class="na"&gt;tty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

    &lt;span class="na"&gt;storm-nimbus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./storm-nimbus&lt;/span&gt;
        &lt;span class="c1"&gt;# Run this service after 'zookeeper' and make 'zookeeper' reference.&lt;/span&gt;
        &lt;span class="na"&gt;links&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;zookeeper&lt;/span&gt;
        &lt;span class="na"&gt;tty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="c1"&gt;# Map port 8080 of the host machine to 8080 of the container.&lt;/span&gt;
        &lt;span class="c1"&gt;# To access the Storm UI from our host machine.&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;8080:8080&lt;/span&gt;
        &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./storm-nimbus:/theproject'&lt;/span&gt;

    &lt;span class="na"&gt;storm-supervisor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./storm-supervisor&lt;/span&gt;
        &lt;span class="na"&gt;links&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;zookeeper&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;storm-nimbus&lt;/span&gt;
        &lt;span class="na"&gt;tty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="c1"&gt;# Host volume used to store our code on the master node (Nimbus).&lt;/span&gt;
&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;storm-nimbus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Feel free to explore the Dockerfiles. They basically just install the dependencies (Java 8, Storm, Maven, Zookeeper etc) on the relevant containers.&lt;br&gt;
The &lt;code&gt;storm.yaml&lt;/code&gt; files override certain default configurations for the Storm installations. The line &lt;strong&gt;&lt;code&gt;ADD storm.yaml /conf&lt;/code&gt;&lt;/strong&gt; inside the Nimbus and Supervisor Dockerfiles puts them inside the containers where Storm can read them.&lt;br&gt;
&lt;code&gt;storm-nimbus/storm.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# The Nimbus needs to know where the Zookeeper is. This specifies the list of the&lt;/span&gt;
&lt;span class="c1"&gt;# hosts in the Zookeeper cluster. We're using just one node, of course.&lt;/span&gt;
&lt;span class="c1"&gt;# 'zookeeper' is the Docker Compose network reference.&lt;/span&gt;
&lt;span class="na"&gt;storm.zookeeper.servers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;zookeeper"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;storm-supervisor/storm.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Telling the Supervisor where the Zookeeper is.&lt;/span&gt;
&lt;span class="na"&gt;storm.zookeeper.servers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;zookeeper"&lt;/span&gt;

&lt;span class="c1"&gt;# The worker nodes need to know which machine(s) are the candidate of master&lt;/span&gt;
&lt;span class="c1"&gt;# in order to download the topology jars.&lt;/span&gt;
&lt;span class="na"&gt;nimbus.seeds &lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;storm-nimbus"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# For each Supervisor, we configure how many workers run on that machine. &lt;/span&gt;
&lt;span class="c1"&gt;# Each worker uses a single port for receiving messages, and this setting &lt;/span&gt;
&lt;span class="c1"&gt;# defines which ports are open for use. We define four ports here, so Storm will &lt;/span&gt;
&lt;span class="c1"&gt;# allocate up to four workers to run on this node.&lt;/span&gt;
&lt;span class="na"&gt;supervisor.slots.ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="m"&gt;6700&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="m"&gt;6701&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="m"&gt;6702&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="m"&gt;6703&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These options are adequate for our cluster. The more curious can check out all the &lt;a href="https://github.com/apache/storm/blob/exclamation/conf/defaults.yaml" rel="noopener noreferrer"&gt;default configurations here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Run &lt;strong&gt;&lt;code&gt;docker-compose up&lt;/code&gt;&lt;/strong&gt; at the project root.&lt;/p&gt;

&lt;p&gt;After all the images have been built and all the services have started, open a new terminal, type &lt;strong&gt;&lt;code&gt;docker ps&lt;/code&gt;&lt;/strong&gt; and you'll see something like this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pqlmyktwotjoyqilc2x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pqlmyktwotjoyqilc2x.png" width="800" height="119"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Starting The Nimbus
&lt;/h3&gt;

&lt;p&gt;Let's SSH into the Nimbus container using its name:&lt;br&gt;
&lt;strong&gt;&lt;code&gt;docker exec -it coincidenthashtagswithapachestorm_storm-nimbus_1 bash&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
and then start the Nimbus daemon:&lt;br&gt;
&lt;strong&gt;&lt;code&gt;storm nimbus&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzt7f06rbk9csk5s045za.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzt7f06rbk9csk5s045za.png" width="800" height="97"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Starting The Storm UI
&lt;/h3&gt;

&lt;p&gt;Similarly, open another terminal, SSH into the Nimbus again and launch the UI using &lt;strong&gt;&lt;code&gt;storm ui&lt;/code&gt;&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvh53x0puhos7fh3h8r7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvh53x0puhos7fh3h8r7.png" width="800" height="91"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;code&gt;localhost:8080&lt;/code&gt; on your browser and you'll see a nice overview of our cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fip7ffxj02jvlv6meeq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fip7ffxj02jvlv6meeq.png" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Free slots&lt;/strong&gt; in the &lt;strong&gt;Cluster Summary&lt;/strong&gt; indicate how many total workers (on all Supervisor nodes) are available &amp;amp; waiting for a topology to consume them. &lt;strong&gt;Used Slots&lt;/strong&gt; indicate how many of the total are currently busy with a topology. Since we haven't launched any Supervisors yet, they're both zero. We'll get to &lt;em&gt;Executors&lt;/em&gt; and &lt;em&gt;Tasks&lt;/em&gt; later. Also, as we can see, no topologies have been  submitted yet.&lt;/p&gt;
&lt;h3&gt;
  
  
  Starting A Supervisor Node
&lt;/h3&gt;

&lt;p&gt;SSH into our single Supervisor container and launch the Supervisor daemon:&lt;br&gt;
&lt;strong&gt;&lt;code&gt;docker exec -it coincidenthashtagswithapachestorm_storm-supervisor_1 bash&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;code&gt;storm supervisor&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsz0c8e53ackb75n5313.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsz0c8e53ackb75n5313.png" width="800" height="78"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's go refresh our UI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlhlws57kzxq24l6r0f8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlhlws57kzxq24l6r0f8.png" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Any changes in our cluster may take a few seconds to reflect on the UI.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We have a new running Supervisor which comes with four allocated workers. These four workers are the result of specifying four ports in our &lt;code&gt;storm.yaml&lt;/code&gt; for the Supervisor node. Of course, they're all free (four &lt;strong&gt;Free slots&lt;/strong&gt;). Let's submit a topology to the Nimbus and put'em to work.&lt;/p&gt;
&lt;h3&gt;
  
  
  Submitting A Topology To The Nimbus
&lt;/h3&gt;

&lt;p&gt;SSH into the Nimbus in a new terminal. I've written the &lt;a href="https://github.com/UsamaAshraf/coincident-hashtags-with-apache-storm/blob/exclamation/storm-nimbus/Dockerfile#L65" rel="noopener noreferrer"&gt;Dockerfile&lt;/a&gt; so that we land in our working directory &lt;code&gt;/theproject&lt;/code&gt;. Inside this is &lt;code&gt;code&lt;/code&gt;, where our topology resides. &lt;a href="https://github.com/UsamaAshraf/coincident-hashtags-with-apache-storm/blob/exclamation/storm-nimbus/code/src/jvm/coincident_hashtags/ExclamationTopology.java" rel="noopener noreferrer"&gt;Our topology is pretty simple&lt;/a&gt;. It uses a spout that generates random words and a bolt that just appends three exclamation marks (!!!) to the words. Two of these bolts are added back-to-back, so at the end of the stream we'll get words with six exclamation marks. It also specifies that it needs three workers (&lt;a href="https://github.com/UsamaAshraf/coincident-hashtags-with-apache-storm/blob/exclamation/storm-nimbus/code/src/jvm/coincident_hashtags/ExclamationTopology.java#L77" rel="noopener noreferrer"&gt;conf.setNumWorkers(3)&lt;/a&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="kd"&gt;throws&lt;/span&gt; &lt;span class="nc"&gt;Exception&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;TopologyBuilder&lt;/span&gt; &lt;span class="n"&gt;builder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TopologyBuilder&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

    &lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setSpout&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"word"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TestWordSpout&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setBolt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"exclaim1"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ExclamationBolt&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;shuffleGrouping&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"word"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setBolt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"exclaim2"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ExclamationBolt&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;shuffleGrouping&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"exclaim1"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="nc"&gt;Config&lt;/span&gt; &lt;span class="n"&gt;conf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Config&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

    &lt;span class="c1"&gt;// Turn on  debugging mode&lt;/span&gt;
    &lt;span class="n"&gt;conf&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setDebug&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;conf&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setNumWorkers&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="nc"&gt;StormSubmitter&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;submitTopology&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"exclamation-topology"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;conf&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;createTopology&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build and submit the topology from inside the Nimbus container:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;cd code&lt;/code&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;mvn clean&lt;/code&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;mvn package&lt;/code&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;storm jar target/coincident-hashtags-1.2.1.jar coincident_hashtags.ExclamationTopology&lt;/code&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After the topology has been submitted successfully, refresh the UI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fepe3cata5maiua6f447e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fepe3cata5maiua6f447e.png" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As soon as we submitted the topology, the Zookeeper was notified. The Zookeeper in turn notified the Supervisor to download the code from the Nimbus. We now see our topology along with its three occupied workers, leaving just one free. &lt;br&gt;
And 10 word spout threads + 3 exclaim1 bolt threads + 2 exclaim2 bolt threads + the 3 main threads from the workers = &lt;strong&gt;total of 18 executors&lt;/strong&gt;. And you might've noticed something new: &lt;strong&gt;tasks&lt;/strong&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  WTF Are Tasks
&lt;/h4&gt;

&lt;p&gt;Another concept in Storm's parallelism. But don't sweat it: a task is just an instance of a spout or bolt that an executor runs; it's what actually does the processing. By default the number of tasks is equal to the number of executors. In rare cases you might need each executor to instantiate more tasks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Each of the two executors (threads) of this bolt will instantiate&lt;/span&gt;
&lt;span class="c1"&gt;// two objects of this bolt (total 4 bolt objects/tasks).&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setBolt&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"even-digit-bolt"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;EvenDigitBolt&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
       &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setNumTasks&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; 
       &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;shuffleGrouping&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"random-digit-spout"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
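&lt;p&gt;As a rough mental model only (plain Java with made-up names, not Storm's actual internals), one executor thread can service several task instances serially, handing each incoming tuple to one of them:&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.List;

// Rough mental model only, not Storm's actual scheduler: a single executor
// thread owns several task instances and hands each incoming tuple to one
// of them in turn. The tasks still run serially on the one thread.
public class ExecutorSim {
    private final List<StringBuilder> tasks = new ArrayList<>(); // stand-ins for bolt instances
    private int next = 0;

    public ExecutorSim(int numTasks) {
        for (int i = 0; i < numTasks; i++) {
            tasks.add(new StringBuilder());
        }
    }

    // Dispatch a tuple to the next task instance, round-robin style,
    // and return the index of the task that handled it.
    public int dispatch(String tuple) {
        int chosen = next;
        tasks.get(chosen).append(tuple);
        next = (next + 1) % tasks.size();
        return chosen;
    }
}
```

&lt;p&gt;With &lt;code&gt;setNumTasks(4)&lt;/code&gt; and two executors, Storm would assign two such task instances to each executor thread.&lt;/p&gt;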



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dryahu0f3vwbf95gasz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dryahu0f3vwbf95gasz.jpg" width="729" height="276"&gt;&lt;/a&gt;&lt;br&gt;
This is a shortcoming on my part, but I can't think of a good use case where we'd need multiple tasks per executor. Maybe if we were adding some parallelism ourselves, like spawning a new thread within the bolt to handle a long-running operation, then the main executor thread wouldn't block and could continue processing with the other bolt instance. However, this can make our topology hard to understand. If anyone knows of scenarios where the performance gain from multiple tasks outweighs the added complexity, please post a comment.&lt;/p&gt;

&lt;p&gt;Anyways, returning from that slight detour, let's see an overview of our topology. Click on the name under &lt;strong&gt;Topology Summary&lt;/strong&gt; and scroll down to &lt;strong&gt;Worker Resources&lt;/strong&gt;:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqronjr505bkeb64e8rht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqronjr505bkeb64e8rht.png" width="800" height="230"&gt;&lt;/a&gt;&lt;br&gt;
We can clearly see the division of our executors (threads) among the 3 workers. And of course, all 3 workers are on the single Supervisor node we're running. &lt;/p&gt;

&lt;p&gt;Now, let's scale out! &lt;/p&gt;
&lt;h3&gt;
  
  
  Add Another Supervisor
&lt;/h3&gt;

&lt;p&gt;From the project root, let's add another Supervisor node/container:&lt;br&gt;
&lt;strong&gt;&lt;code&gt;docker-compose scale storm-supervisor=2&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SSH into the new container:&lt;br&gt;
&lt;strong&gt;&lt;code&gt;docker exec -it coincidenthashtagswithapachestorm_storm-supervisor_2 bash&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And fire up:&lt;br&gt;
&lt;strong&gt;&lt;code&gt;storm supervisor&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzk5whoy4h2ursjqc4upx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzk5whoy4h2ursjqc4upx.png" width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you refresh the UI you'll see that we've successfully added another Supervisor and four more workers (total of 8 workers/slots). To really take advantage of the new Supervisor, let's increase the topology's workers. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First kill the running one: 
&lt;code&gt;storm kill exclamation-topology&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Change &lt;a href="https://github.com/UsamaAshraf/coincident-hashtags-with-apache-storm/blob/exclamation/storm-nimbus/code/src/jvm/coincident_hashtags/ExclamationTopology.java#L77" rel="noopener noreferrer"&gt;this line&lt;/a&gt; to:
&lt;code&gt;conf.setNumWorkers(6)&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Change the project version number in your &lt;code&gt;pom.xml&lt;/code&gt;. Try using a proper scheme, like semantic versioning. I'll just stick with 1.2.1.&lt;/li&gt;
&lt;li&gt;Rebuild the topology: &lt;code&gt;mvn package&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Resubmit it: &lt;code&gt;storm jar target/coincident-hashtags-1.2.1.jar coincident_hashtags.ExclamationTopology&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Reload the UI:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmwwwnabq5bpbcmym4xhy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmwwwnabq5bpbcmym4xhy.png" width="800" height="420"&gt;&lt;/a&gt;&lt;br&gt;
You can now see the new Supervisor and the 6 busy workers out of a total of 8 available ones. Also important to note is that the 6 busy ones have been equally divided among the two Supervisors. Again, click the topology name and scroll down.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuuc0f6rmhyxmmw1bu9hf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuuc0f6rmhyxmmw1bu9hf.png" width="800" height="364"&gt;&lt;/a&gt;&lt;br&gt;
We see two unique Supervisor IDs, both running on different nodes, and all our executors pretty evenly divided among them. This is great. But Storm comes with another nifty way of doing so &lt;em&gt;while the topology is running&lt;/em&gt;. Something called &lt;em&gt;rebalancing&lt;/em&gt;. On the Nimbus we'd run:&lt;br&gt;
&lt;strong&gt;&lt;code&gt;storm rebalance exclamation-topology -n 6&lt;/code&gt;&lt;/strong&gt; &lt;em&gt;(go from 3 to 6 workers)&lt;/em&gt;&lt;br&gt;
Or to change the number of executors for a particular component:&lt;br&gt;
&lt;strong&gt;&lt;code&gt;storm rebalance exclamation-topology -e even-digit-bolt=3&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Reliable Message Processing
&lt;/h3&gt;

&lt;p&gt;One question we haven't tackled yet: what happens if a bolt fails to process a tuple? Storm provides a mechanism by which the originating spout (specifically the &lt;em&gt;task&lt;/em&gt;) can replay the failed tuple. This processing guarantee doesn't just happen by itself; it's a conscious design choice and does add latency.&lt;br&gt;
Spouts send out tuples to bolts, which emit tuples derived from the input tuples to other bolts, and so on. That one original tuple spurs an entire tree of tuples. If any child tuple, so to speak, of the original one fails, then any remedial steps (rollbacks etc.) may well have to be taken at multiple bolts. That could get pretty hairy, so what Storm does instead is allow the original tuple to be emitted again right from the source (the spout). Consequently, any &lt;strong&gt;operations performed by bolts that are a function of the incoming tuples should be idempotent&lt;/strong&gt;. A tuple is considered "fully processed" when every tuple in its tree has been processed, and every tuple has to be explicitly acknowledged by the bolts. However, that's not all. There's another thing to be done explicitly: maintain a link between the original tuple and its child tuples. Storm can then trace the origin of the child tuples and thus replay the original tuple. This is called &lt;em&gt;anchoring&lt;/em&gt;. &lt;a href="https://github.com/UsamaAshraf/coincident-hashtags-with-apache-storm/blob/exclamation/storm-nimbus/code/src/jvm/coincident_hashtags/ExclamationTopology.java#L44" rel="noopener noreferrer"&gt;And this has been done in our exclamation bolt&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ExclamationBolt&lt;/span&gt;

&lt;span class="c1"&gt;// 'tuple' is the original one received from the test word spout.&lt;/span&gt;
&lt;span class="c1"&gt;// It's been anchored to/with the tuple going out.&lt;/span&gt;
&lt;span class="n"&gt;_collector&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;emit&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tuple&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Values&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exclamatedWord&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toString&lt;/span&gt;&lt;span class="o"&gt;()));&lt;/span&gt;

&lt;span class="c1"&gt;// Explicitly acknowledge that the tuple has been processed.&lt;/span&gt;
&lt;span class="n"&gt;_collector&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ack&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tuple&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;ack&lt;/code&gt; call will result in the &lt;code&gt;ack&lt;/code&gt; method on the spout being called, if it has been implemented. So, say, you're reading the tuple data off some queue and you can only remove it from the queue once the tuple has been fully processed. Well, the &lt;code&gt;ack&lt;/code&gt; method is where you'd do that. You can also emit tuples without anchoring: &lt;code&gt;_collector.emit(new Values(exclamatedWord.toString()))&lt;/code&gt; and forgo reliability.&lt;/p&gt;

&lt;p&gt;A tuple can fail two ways:&lt;br&gt;
i) A bolt dies and a tuple times out. Or it times out for some other reason. The timeout is 30 seconds by default and can be changed using &lt;code&gt;config.put(Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS, 60)&lt;/code&gt;&lt;br&gt;
ii) The &lt;code&gt;fail&lt;/code&gt; method is explicitly called on the tuple in a bolt: &lt;code&gt;_collector.fail(tuple)&lt;/code&gt;. You may do this in case of an exception.&lt;/p&gt;

&lt;p&gt;In both these cases, the &lt;code&gt;fail&lt;/code&gt; method on the spout will be called, if it is implemented. And if we want the tuple to be replayed, it would have to be done explicitly in the &lt;code&gt;fail&lt;/code&gt; method by calling &lt;code&gt;emit&lt;/code&gt;, just like in &lt;code&gt;nextTuple()&lt;/code&gt;. When tracking tuples, every one has to be &lt;code&gt;ack&lt;/code&gt;ed or &lt;code&gt;fail&lt;/code&gt;ed. Otherwise, the topology will eventually run out of memory.&lt;br&gt;
It's also important to know that you have to do all of this yourself when writing custom spouts and bolts. But the Storm core can help. For example, a bolt implementing &lt;a href="https://storm.apache.org/releases/1.2.1/javadocs/org/apache/storm/topology/base/BaseBasicBolt.html" rel="noopener noreferrer"&gt;BaseBasicBolt&lt;/a&gt; does acking automatically. Or built-in spouts for popular data sources like &lt;a href="https://github.com/apache/storm/blob/master/external/storm-kafka/src/jvm/org/apache/storm/kafka/KafkaSpout.java" rel="noopener noreferrer"&gt;Kafka&lt;/a&gt; take care of queuing and replay logic after acknowledgment and failure. &lt;/p&gt;
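&lt;p&gt;To make the replay idea concrete, here's a minimal, plain-Java sketch of the bookkeeping a custom reliable spout typically does: remember every emitted tuple under a unique message ID until it's acked, and re-emit it when it fails. This only illustrates the pattern; the class and method names are made up and none of this is the actual Storm API:&lt;/p&gt;

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified model of the bookkeeping a reliable spout does.
// Not the actual Storm API: class and method names are illustrative only.
public class ReliableSpoutSim {
    private final Map<Long, String> pending = new HashMap<>();
    private long nextId = 0;

    // Emit a tuple under a unique message ID and remember it until acked.
    public long emit(String word) {
        long id = nextId++;
        pending.put(id, word);
        return id;
    }

    // Storm calls ack(msgId) once the whole tuple tree is processed:
    // the tuple can now be forgotten (e.g. removed from a source queue).
    public void ack(long msgId) {
        pending.remove(msgId);
    }

    // Storm calls fail(msgId) on timeout or an explicit fail:
    // replay by re-emitting the original tuple under a fresh ID.
    public long fail(long msgId) {
        String word = pending.remove(msgId);
        if (word == null) {
            return -1; // unknown ID: nothing to replay
        }
        return emit(word);
    }

    public int pendingCount() {
        return pending.size();
    }
}
```

&lt;p&gt;A real spout would do this inside its &lt;code&gt;nextTuple()&lt;/code&gt;, &lt;code&gt;ack()&lt;/code&gt; and &lt;code&gt;fail()&lt;/code&gt; methods, emitting through the collector with the message ID attached.&lt;/p&gt;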

&lt;h2&gt;
  
  
  Parting Shots
&lt;/h2&gt;

&lt;p&gt;Designing a Storm topology or cluster is always about tweaking the various knobs we have and settling where the result seems optimal. There are a few things that'll help in this process, like using a configuration file to read parallelism hints, worker counts, etc., so you don't have to edit and recompile your code repeatedly. Define your bolts logically, one per indivisible task, and keep them light and efficient. Similarly, your spouts' &lt;code&gt;nextTuple()&lt;/code&gt; methods should be optimized. &lt;br&gt;
Use the Storm UI effectively. By default it doesn't show us the complete picture, only 5% of the total tuples emitted. To monitor all of them use &lt;code&gt;config.setStatsSampleRate(1.0d)&lt;/code&gt;. Keep an eye on the &lt;strong&gt;Acks&lt;/strong&gt; and &lt;strong&gt;Latency&lt;/strong&gt; values for individual bolts and topologies via the UI, that's what you want to look at when turning the knobs.&lt;/p&gt;
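&lt;p&gt;For instance, a tiny helper along these lines (the class and key names are hypothetical) lets you keep parallelism hints and worker counts in a properties file and re-tune them without recompiling:&lt;/p&gt;

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Hypothetical helper: read tuning knobs (parallelism hints, worker counts)
// from standard Java properties so the topology code never hardcodes them.
public class TopologyConfigLoader {
    // Parse properties from a string; in practice you'd load a file
    // shipped alongside the topology jar.
    public static Properties load(String text) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(text));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return props;
    }

    // Read an integer knob, falling back to a default when it's absent.
    public static int intOf(Properties props, String key, int fallback) {
        String value = props.getProperty(key);
        return value == null ? fallback : Integer.parseInt(value.trim());
    }
}
```

&lt;p&gt;Then, hypothetically: &lt;code&gt;builder.setBolt("exclaim1", new ExclamationBolt(), TopologyConfigLoader.intOf(props, "exclaim1.parallelism", 3))&lt;/code&gt;.&lt;/p&gt;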

</description>
      <category>storm</category>
      <category>java</category>
      <category>docker</category>
      <category>data</category>
    </item>
    <item>
      <title>Microservices &amp; RabbitMQ On Docker</title>
      <dc:creator>Usama Ashraf</dc:creator>
      <pubDate>Wed, 25 Apr 2018 23:59:44 +0000</pubDate>
      <link>https://forem.com/usamaashraf/microservices--rabbitmq-on-docker-e2f</link>
      <guid>https://forem.com/usamaashraf/microservices--rabbitmq-on-docker-e2f</guid>
      <description>&lt;p&gt;A microservices-based architecture involves decomposing your monolith app into multiple, totally &lt;em&gt;independently deployable and scalable services&lt;/em&gt;. Beyond this base definition, what constitutes a microservice can be somewhat subjective, though there are several battle-tested practices adopted by giants like &lt;a href="https://www.nginx.com/blog/microservices-at-netflix-architectural-best-practices/" rel="noopener noreferrer"&gt;Netflix&lt;/a&gt; and &lt;a href="https://eng.uber.com/building-tincup/" rel="noopener noreferrer"&gt;Uber&lt;/a&gt; that should always be considered. And I'll discuss some of them. Ultimately, we want to &lt;em&gt;divide our app into smaller apps, each of which is a system apart &amp;amp; deals with only one aspect of the whole app and does it really well&lt;/em&gt;. This decomposition is a very consequential step and can be done on the basis of &lt;a href="http://microservices.io/patterns/decomposition/decompose-by-subdomain.html" rel="noopener noreferrer"&gt;subdomains&lt;/a&gt;, which have to be identified correctly. The smaller apps are more &lt;strong&gt;modular &amp;amp; manageable&lt;/strong&gt; with well-defined boundaries, can be written using &lt;strong&gt;different languages/frameworks&lt;/strong&gt;, &lt;strong&gt;fail in isolation&lt;/strong&gt; so that the entire app doesn't go down (no SPOF). Take a Cinema ticketing example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8s84vije71teazhwiyi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8s84vije71teazhwiyi.png" width="800" height="513"&gt;&lt;/a&gt;&lt;br&gt;
&lt;sup&gt;&lt;em&gt;Source: &lt;a href="https://codeburst.io/build-a-nodejs-cinema-api-gateway-and-deploying-it-to-docker-part-4-703c2b0dd269" rel="noopener noreferrer"&gt;https://codeburst.io/build-a-nodejs-cinema-api-gateway-and-deploying-it-to-docker-part-4-703c2b0dd269&lt;/a&gt;&lt;/em&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Let's break down this bird's eye view:&lt;/p&gt;

&lt;p&gt;i) The user app can be a mobile client, SPA etc or any client consuming our backend services.&lt;/p&gt;

&lt;p&gt;ii) &lt;strong&gt;It's considered a bad practice to ask our clients to communicate with each of our services separately&lt;/strong&gt;, for reasons I tried to explain &lt;a href="https://dev.to/djviolin/how-do-you-protecting-your-backend-api-in-your-microservice-if-you-use-a-single-page-application-on-the-frontend-2ie1/comments/2jj9"&gt;here&lt;/a&gt;. This is what the API gateways are for: to receive client requests, call our service(s), return a response. Thus the client only has to talk to one server, giving the illusion of a monolith. Multiple gateways may be used for &lt;em&gt;different kinds of clients&lt;/em&gt; (mobile apps, tablets, browsers etc.). They can and should be responsible for limited functionality like merging/joining responses from services, authentication, ACLs. In large applications, which need to scale and move dynamically, gateways also need access to a &lt;a href="http://microservices.io/patterns/service-registry.html" rel="noopener noreferrer"&gt;Service Registry&lt;/a&gt; which holds the locations of our microservice instances, databases etc.&lt;/p&gt;

&lt;p&gt;iii) &lt;strong&gt;Each service has its own storage&lt;/strong&gt;. This is a key point and ensures loose coupling. Some queries will then need to join data that is owned by multiple services. To avoid this major performance hit, data may be replicated and sharded. This principle is &lt;a href="http://microservices.io/patterns/data/database-per-service.html" rel="noopener noreferrer"&gt;not just tolerated in microservices, but encouraged&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;iv) The &lt;strong&gt;REST calls made to our API gateway&lt;/strong&gt; are passed to the services, which may in turn talk to other services before a result is returned to the gateway, which, perhaps, compiles it and responds to the client. Service-to-service chains like this, triggered by a single client request, should not happen: otherwise we'd be sacrificing performance, on account of extra HTTP round-trips, for the newly introduced modularity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A single request should, ideally, only invoke one service to fetch the response&lt;/strong&gt;. This means synchronous requests between the services should be minimized; when they can't be avoided, mechanisms like &lt;a href="https://grpc.io/" rel="noopener noreferrer"&gt;gRPC&lt;/a&gt;, &lt;a href="https://thrift.apache.org/" rel="noopener noreferrer"&gt;Thrift&lt;/a&gt; or even plain HTTP (as in our example) are commonly employed. As you may have guessed, this implies that data will have to be replicated across our services. Say the &lt;code&gt;GET /catalog/&amp;lt;&amp;lt;cityId&amp;gt;&amp;gt;&lt;/code&gt; endpoint is also supposed to return the premieres at each cinema in the city at that time. With our new strategy, the premieres will have to be stored in the database of the &lt;em&gt;Cinema Catalog&lt;/em&gt; service as well. Hence point &lt;code&gt;iii)&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Communicating Asynchronously Between The Services
&lt;/h2&gt;

&lt;p&gt;So, say, the premieres change as a result of some CRUD operation on the &lt;em&gt;Movies&lt;/em&gt; service. To keep the data in sync, that update event will have to be emitted and applied to the &lt;em&gt;Cinema Catalog&lt;/em&gt; service as well. &lt;strong&gt;Try to picture our microservices as a cluster of state machines where updates in state may have to be communicated across the cluster to achieve eventual consistency&lt;/strong&gt;. Of course, we should never expect our end-users to wait longer for requests to finish, paying with their time for a modularity that benefits us. Therefore, all of this communication has to be non-blocking. And that's where &lt;a href="https://www.rabbitmq.com" rel="noopener noreferrer"&gt;RabbitMQ&lt;/a&gt; comes in.&lt;/p&gt;
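&lt;p&gt;The idea can be sketched with two in-memory "services" and a queue standing in for the broker (a toy model; every name below is made up for illustration):&lt;/p&gt;

```python
from collections import deque

# Each toy "service" owns its own datastore (point iii above).
movies_db  = {1: {"title": "Alien", "premiere": "Fri"}}
catalog_db = {1: {"title": "Alien", "premiere": "Fri"}}

broker = deque()  # stand-in for RabbitMQ

def update_movie(movie_id, premiere):
    movies_db[movie_id]["premiere"] = premiere
    # Emit the event and return immediately: the caller never blocks
    # on the Cinema Catalog service catching up.
    broker.append(("movie.premiere.update", movie_id, premiere))

def run_catalog_consumer():
    # Applies pending events whenever the consumer gets around to them.
    while broker:
        _, movie_id, premiere = broker.popleft()
        catalog_db[movie_id]["premiere"] = premiere

update_movie(1, "Sat")
stale = catalog_db[1]["premiere"]   # still "Fri": briefly inconsistent
run_catalog_consumer()
final = catalog_db[1]["premiere"]   # now "Sat": eventually consistent
```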

&lt;p&gt;RabbitMQ is a very &lt;a href="https://www.rabbitmq.com/#features" rel="noopener noreferrer"&gt;powerful&lt;/a&gt; message broker that implements the &lt;a href="https://www.rabbitmq.com/amqp-0-9-1-quickref.html" rel="noopener noreferrer"&gt;AMQP messaging protocol&lt;/a&gt;. Here's the gist: first, you install a RabbitMQ server instance (broker) on a system. Then a &lt;em&gt;publisher/producer&lt;/em&gt; program connects to this server and sends out a message. RabbitMQ queues that message and siphons it off to one or more &lt;em&gt;subscriber/consumer&lt;/em&gt; programs listening on the RabbitMQ server.&lt;/p&gt;

&lt;p&gt;Before I get to the crux of this article, I want to explicitly declare that microservices are &lt;em&gt;way&lt;/em&gt; more complex than this, and we won't be covering critical topics like &lt;a href="https://blog.risingstack.com/designing-microservices-architecture-for-failure/" rel="noopener noreferrer"&gt;fault tolerance&lt;/a&gt; (a deep subject, given the intricacy of distributed systems), the full &lt;a href="http://microservices.io/patterns/apigateway.html" rel="noopener noreferrer"&gt;role of the API gateway&lt;/a&gt;, &lt;a href="http://microservices.io/patterns/client-side-discovery.html" rel="noopener noreferrer"&gt;Service Discovery&lt;/a&gt;, data consistency patterns like &lt;a href="http://microservices.io/patterns/data/saga.html" rel="noopener noreferrer"&gt;Sagas&lt;/a&gt;, preventing service failures from cascading using &lt;a href="http://microservices.io/patterns/reliability/circuit-breaker.html" rel="noopener noreferrer"&gt;Circuit Breakers&lt;/a&gt;, health checks and architectural patterns like &lt;a href="https://martinfowler.com/bliki/CQRS.html" rel="noopener noreferrer"&gt;CQRS&lt;/a&gt;. Not to mention &lt;a href="https://martinfowler.com/articles/microservice-trade-offs.html" rel="noopener noreferrer"&gt;how to decide whether microservices will work for you or not&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  How RabbitMQ Works
&lt;/h2&gt;

&lt;p&gt;More specifically, messages are &lt;em&gt;published&lt;/em&gt; to an &lt;em&gt;exchange&lt;/em&gt; inside the RabbitMQ broker. The exchange then distributes copies of that message to &lt;em&gt;queues&lt;/em&gt; on the basis of certain developer-defined rules called &lt;em&gt;bindings&lt;/em&gt;. This part of the message's journey is called &lt;em&gt;routing&lt;/em&gt;, and this indirection is, of course, what makes the message transmission non-blocking. &lt;em&gt;Consumers&lt;/em&gt; listening on the &lt;em&gt;queues&lt;/em&gt; that got the message will receive it. Pretty simple, right?&lt;/p&gt;

&lt;p&gt;Not exactly. There are &lt;a href="https://www.rabbitmq.com/tutorials/amqp-concepts.html#exchanges" rel="noopener noreferrer"&gt;four different types of &lt;em&gt;exchanges&lt;/em&gt;&lt;/a&gt; and each, along with the &lt;em&gt;bindings&lt;/em&gt;, defines a routing algorithm. "Routing algorithm" means, essentially, how the messages are distributed among the queues. Going into the details of each type would be overkill here, so I'll just expand on the one we'll be using: the &lt;a href="https://www.rabbitmq.com/tutorials/amqp-concepts.html#exchange-topic" rel="noopener noreferrer"&gt;&lt;em&gt;topic exchange&lt;/em&gt;&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;For an exchange to push a message onto a queue, that queue must be bound to the exchange. We can explicitly create multiple exchanges with unique names. However, RabbitMQ also ships with a default, nameless exchange, and every queue we create is automatically bound to it. To be descriptive, I'll be creating a named exchange manually and then binding a queue to it. The binding is defined by a &lt;em&gt;binding key&lt;/em&gt;, and the exact way a binding key works depends, again, on the type of the exchange. Here's how it works with a topic exchange:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;em&gt;queue&lt;/em&gt; is bound to an &lt;em&gt;exchange&lt;/em&gt; using a &lt;em&gt;string pattern&lt;/em&gt; (the binding key)&lt;/li&gt;
&lt;li&gt;The published message is delivered to the &lt;em&gt;exchange&lt;/em&gt; along with a &lt;em&gt;routing key&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;exchange&lt;/em&gt; checks which &lt;em&gt;queues&lt;/em&gt; match the &lt;em&gt;routing key&lt;/em&gt; based on the &lt;em&gt;binding key&lt;/em&gt; pattern defined before.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;sup&gt;&lt;code&gt;*&lt;/code&gt; can substitute for exactly one word. &lt;code&gt;#&lt;/code&gt; can substitute for zero or more words.&lt;/sup&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwutrh9kc968ldeq8q15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwutrh9kc968ldeq8q15.png" width="424" height="171"&gt;&lt;/a&gt;&lt;br&gt;
&lt;sup&gt;&lt;em&gt;Source: &lt;a href="https://www.rabbitmq.com/tutorials/tutorial-five-python.html" rel="noopener noreferrer"&gt;https://www.rabbitmq.com/tutorials/tutorial-five-python.html&lt;/a&gt;&lt;/em&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Any message with a routing key &lt;code&gt;"quick.orange.rabbit"&lt;/code&gt; will be delivered to both queues. However, messages with &lt;code&gt;"lazy.brown.fox"&lt;/code&gt; will only reach &lt;code&gt;Q2&lt;/code&gt;. Those with a routing key not matching any pattern will be lost.&lt;/p&gt;
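&lt;p&gt;The matching rules can be expressed as a small, pure-Python function (a toy matcher for illustration only, not part of any RabbitMQ client library):&lt;/p&gt;

```python
def topic_match(binding_key, routing_key):
    """Return True if a topic-exchange binding key matches a routing key.

    '*' matches exactly one word; '#' matches zero or more words.
    """
    pattern = binding_key.split(".")
    words = routing_key.split(".")

    def match(i, j):
        if i == len(pattern):
            return j == len(words)
        if pattern[i] == "#":
            # '#' may swallow any number of remaining words, including none.
            return any(match(i + 1, k) for k in range(j, len(words) + 1))
        if j == len(words):
            return False
        return pattern[i] in ("*", words[j]) and match(i + 1, j + 1)

    return match(0, 0)

print(topic_match("*.orange.*", "quick.orange.rabbit"))  # True
print(topic_match("lazy.#", "lazy.brown.fox"))           # True
print(topic_match("*.orange.*", "lazy.brown.fox"))       # False
```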

&lt;p&gt;For some perspective, let's just skim over two other exchange types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fanout exchange&lt;/strong&gt;: Messages sent to this kind of exchange will be sent to ALL the queues bound to it. The routing key, if provided, will be completely ignored. This can be used, for example, for broadcasting global configuration updates across a distributed system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Direct exchange&lt;/strong&gt; (simplest): Sends the message to the queue whose binding key is &lt;em&gt;exactly&lt;/em&gt; equal to the given routing key. If multiple consumers are listening on that queue, the messages will be load-balanced among them, so this exchange is commonly used to distribute tasks among multiple workers in a round-robin manner.&lt;/li&gt;
&lt;/ul&gt;
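&lt;p&gt;To make the difference concrete, here is an in-memory caricature of those two exchange types (a sketch for intuition only; the real broker is, of course, far more involved):&lt;/p&gt;

```python
class ToyExchange:
    """In-memory caricature of RabbitMQ routing (not a real broker)."""

    def __init__(self, kind):
        self.kind = kind      # "fanout" or "direct"
        self.bindings = []    # (binding_key, queue) pairs

    def bind(self, queue, binding_key=""):
        self.bindings.append((binding_key, queue))

    def publish(self, message, routing_key=""):
        for binding_key, queue in self.bindings:
            # Fanout ignores keys entirely; direct needs an exact match.
            if self.kind == "fanout" or binding_key == routing_key:
                queue.append(message)

q1, q2 = [], []

fan = ToyExchange("fanout")
fan.bind(q1)
fan.bind(q2)
fan.publish("config-update")          # copied to every bound queue

direct = ToyExchange("direct")
direct.bind(q1, "error")
direct.bind(q2, "info")
direct.publish("disk full", "error")  # only the "error" binding matches
```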

&lt;p&gt;My illustration will be &lt;em&gt;very&lt;/em&gt; simple: a Python &lt;a href="http://flask.pocoo.org/" rel="noopener noreferrer"&gt;Flask&lt;/a&gt; app with a single POST endpoint which, when called, will purport to update a user's info, emit a message to the RabbitMQ broker (non-blocking, of course) and return a 201. A separate Go service will be listening for the message from the broker and thus get the chance to update its own data accordingly. All three will be hosted in separate containers. &lt;/p&gt;
&lt;h2&gt;
  
  
  Setting Up Our Containerized Microservices &amp;amp; Broker Using Docker Compose
&lt;/h2&gt;

&lt;p&gt;Provisioning a bunch of containers and all that goes along with them can be a pain, so I always rely on &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here's &lt;a href="https://github.com/UsamaAshraf/microservices-using-rabbitmq" rel="noopener noreferrer"&gt;the entire code&lt;/a&gt;. We're declaring three services that will be used for the three containers. The two &lt;a href="https://docs.docker.com/storage/volumes/" rel="noopener noreferrer"&gt;volumes&lt;/a&gt; are needed for putting our code inside the containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# docker-compose.yml&lt;/span&gt;

&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.2"&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;rabbitmq-server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./rabbitmq-server&lt;/span&gt;

    &lt;span class="na"&gt;python-service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./python-service&lt;/span&gt;
        &lt;span class="c1"&gt;# 'rabbitmq-server' will be available as a network reference inside this service &lt;/span&gt;
        &lt;span class="c1"&gt;# and this service will start only after the RabbitMQ service does.&lt;/span&gt;
        &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;rabbitmq-server&lt;/span&gt;
        &lt;span class="c1"&gt;# Keep it running.  &lt;/span&gt;
        &lt;span class="na"&gt;tty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="c1"&gt;# Map port 3000 on the host machine to port 3000 of the container.&lt;/span&gt;
        &lt;span class="c1"&gt;# This will be used to receive HTTP requests made to the service.&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000:3000"&lt;/span&gt;
        &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./python-service:/python-service'&lt;/span&gt;

    &lt;span class="na"&gt;go-service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./go-service&lt;/span&gt;
        &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;rabbitmq-server&lt;/span&gt;
        &lt;span class="na"&gt;tty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./go-service:/go-service'&lt;/span&gt;

&lt;span class="c1"&gt;# Host volumes used to store code.&lt;/span&gt;
&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;python-service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;go-service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Dockerfiles are pretty much the standard ones from &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;Docker Hub&lt;/a&gt;, to which I've added:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/go-service&lt;/code&gt; working directory in the Go service container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/python-service&lt;/code&gt; working directory in the Python service container.&lt;/li&gt;
&lt;li&gt;Go's RabbitMQ client library called &lt;a href="https://github.com/streadway/amqp" rel="noopener noreferrer"&gt;amqp&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Python's RabbitMQ client &lt;a href="https://pypi.org/project/pika/0.11.0/" rel="noopener noreferrer"&gt;Pika&lt;/a&gt; &amp;amp; &lt;a href="http://flask.pocoo.org/" rel="noopener noreferrer"&gt;Flask&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our Flask app has just one endpoint that receives a &lt;code&gt;user_id&lt;/code&gt; and a &lt;code&gt;full_name&lt;/code&gt;, which will be used to update the user's profile. A message indicating this update will then be sent to the RabbitMQ broker.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# main.py
&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;jsonify&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;services.user_event_handler&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;emit_user_profile_update&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/users/&amp;lt;int:user_id&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;methods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POST&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;new_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;form&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;full_name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="c1"&gt;# Update the user in the datastore using a local transaction...
&lt;/span&gt;
    &lt;span class="nf"&gt;emit_user_profile_update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;full_name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;new_name&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;jsonify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;full_name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;new_name&lt;/span&gt;&lt;span class="p"&gt;}),&lt;/span&gt; &lt;span class="mi"&gt;201&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The logic for emitting events to other services should always be kept separate from the rest of the app, so I've extracted it into a module. The &lt;em&gt;exchange&lt;/em&gt; should be explicitly checked for, and created, by both the publisher and the consumer, because we can't know (nor should we rely on) which service starts first. This is a good practice that most RabbitMQ client libraries facilitate seamlessly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# services/user_event_handler.py
&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pika&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;emit_user_profile_update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;new_data&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# 'rabbitmq-server' is the network reference we have to the broker, 
&lt;/span&gt;    &lt;span class="c1"&gt;# thanks to Docker Compose.
&lt;/span&gt;    &lt;span class="n"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pika&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BlockingConnection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pika&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ConnectionParameters&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;rabbitmq-server&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;channel&lt;/span&gt;    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;exchange_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;user_updates&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
    &lt;span class="n"&gt;routing_key&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;user.profile.update&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;

    &lt;span class="c1"&gt;# This will create the exchange if it doesn't already exist.
&lt;/span&gt;    &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exchange_declare&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exchange&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;exchange_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exchange_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;topic&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;durable&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;new_data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;

    &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;basic_publish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exchange&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;exchange_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                          &lt;span class="n"&gt;routing_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;routing_key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                          &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;new_data&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                          &lt;span class="c1"&gt;# Delivery mode 2 makes the broker save the message to disk.
&lt;/span&gt;                          &lt;span class="c1"&gt;# This will ensure that the message be restored on reboot even  
&lt;/span&gt;                          &lt;span class="c1"&gt;# if RabbitMQ crashes before having forwarded the message.
&lt;/span&gt;                          &lt;span class="n"&gt;properties&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pika&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BasicProperties&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                            &lt;span class="n"&gt;delivery_mode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                        &lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%r sent to exchange %r with data: %r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;routing_key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exchange_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;new_data&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don't get confused by &lt;code&gt;channel&lt;/code&gt;. A &lt;em&gt;channel&lt;/em&gt; is just a &lt;a href="https://www.rabbitmq.com/tutorials/amqp-concepts.html#amqp-channels" rel="noopener noreferrer"&gt;virtual, lightweight connection&lt;/a&gt; &lt;em&gt;within&lt;/em&gt; the TCP connection, meant to avoid opening multiple, expensive TCP connections, especially in multithreaded environments.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;durable&lt;/code&gt; parameter ensures that the &lt;em&gt;exchange&lt;/em&gt; is persisted to the disk and can be restored if the broker crashes or goes offline for any reason. The publisher (Python service) creates an &lt;em&gt;exchange&lt;/em&gt; named &lt;code&gt;user_updates&lt;/code&gt; and sends it the user's updated data with &lt;code&gt;user.profile.update&lt;/code&gt; as the &lt;em&gt;routing key&lt;/em&gt;. This will be matched with a &lt;code&gt;user.profile.*&lt;/code&gt; &lt;em&gt;binding key&lt;/em&gt;, which our Go service will define:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// main.go&lt;/span&gt;

&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;
    &lt;span class="s"&gt;"log"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/streadway/amqp"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;failOnError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c"&gt;// Exit the program.&lt;/span&gt;
        &lt;span class="nb"&gt;panic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"%s: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// 'rabbitmq-server' is the network reference we have to the broker, &lt;/span&gt;
    &lt;span class="c"&gt;// thanks to Docker Compose.&lt;/span&gt;
    &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;amqp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Dial&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"amqp://guest:guest@rabbitmq-server:5672/"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;failOnError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Error connecting to the broker"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c"&gt;// Make sure we close the connection whenever the program is about to exit.&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;ch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Channel&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;failOnError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Failed to open a channel"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c"&gt;// Make sure we close the channel whenever the program is about to exit.&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;ch&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;exchangeName&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;"user_updates"&lt;/span&gt;
    &lt;span class="n"&gt;bindingKey&lt;/span&gt;   &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;"user.profile.*"&lt;/span&gt;

    &lt;span class="c"&gt;// Create the exchange if it doesn't already exist.&lt;/span&gt;
    &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ch&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ExchangeDeclare&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;exchangeName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c"&gt;// name&lt;/span&gt;
            &lt;span class="s"&gt;"topic"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;        &lt;span class="c"&gt;// type&lt;/span&gt;
            &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;           &lt;span class="c"&gt;// durable&lt;/span&gt;
            &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;failOnError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Error creating the exchange"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// Create the queue if it doesn't already exist.&lt;/span&gt;
    &lt;span class="c"&gt;// This does not need to be done in the publisher because the&lt;/span&gt;
    &lt;span class="c"&gt;// queue is only relevant to the consumer, which subscribes to it.&lt;/span&gt;
    &lt;span class="c"&gt;// Like the exchange, let's make it durable (saved to disk) too.&lt;/span&gt;
    &lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;ch&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;QueueDeclare&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;    &lt;span class="c"&gt;// name - empty means a random, unique name will be assigned&lt;/span&gt;
            &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c"&gt;// durable&lt;/span&gt;
            &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c"&gt;// delete when the last consumer unsubscribes&lt;/span&gt;
            &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
            &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
            &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;failOnError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Error creating the queue"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// Bind the queue to the exchange based on a string pattern (binding key).&lt;/span&gt;
    &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ch&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;QueueBind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;       &lt;span class="c"&gt;// queue name&lt;/span&gt;
            &lt;span class="n"&gt;bindingKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c"&gt;// binding key&lt;/span&gt;
            &lt;span class="n"&gt;exchangeName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c"&gt;// exchange&lt;/span&gt;
            &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;failOnError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Error binding the queue"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// Subscribe to the queue.&lt;/span&gt;
    &lt;span class="n"&gt;msgs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;ch&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Consume&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c"&gt;// queue&lt;/span&gt;
            &lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;     &lt;span class="c"&gt;// consumer id - empty means a random, unique id will be assigned&lt;/span&gt;
            &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c"&gt;// auto acknowledgement of message delivery&lt;/span&gt;
            &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
            &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
            &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  
            &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;failOnError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"Failed to register as a consumer"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


    &lt;span class="n"&gt;forever&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="kt"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;msgs&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Received message: %s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

            &lt;span class="c"&gt;// Update the user's data on the service's &lt;/span&gt;
            &lt;span class="c"&gt;// associated datastore using a local transaction...&lt;/span&gt;

            &lt;span class="c"&gt;// The 'false' indicates the success of a single delivery, 'true' would&lt;/span&gt;
            &lt;span class="c"&gt;// mean that this delivery and all prior unacknowledged deliveries on this&lt;/span&gt;
            &lt;span class="c"&gt;// channel will be acknowledged, which I find no reason for in this example.&lt;/span&gt;
            &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Ack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}()&lt;/span&gt;

    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Service listening for events..."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// Block until 'forever' receives a value, which will never happen.&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;-&lt;/span&gt;&lt;span class="n"&gt;forever&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;RabbitMQ uses port 5672 by default for non-TLS connections and "guest" as the username &amp;amp; password. You can study the &lt;a href="https://www.rabbitmq.com/configure.html" rel="noopener noreferrer"&gt;plethora of configuration options&lt;/a&gt; available and how to use them with &lt;a href="https://pika.readthedocs.io/en/0.11.2/modules/index.html" rel="noopener noreferrer"&gt;Pika&lt;/a&gt; and &lt;a href="https://godoc.org/github.com/streadway/amqp" rel="noopener noreferrer"&gt;Go amqp&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You might be wondering what this line is for: &lt;strong&gt;&lt;code&gt;d.Ack(false)&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This tells the broker that the message has been delivered and processed successfully, and can now be deleted. By default, these acknowledgments happen automatically, but we specified otherwise when we subscribed to the queue with &lt;strong&gt;&lt;code&gt;ch.Consume&lt;/code&gt;&lt;/strong&gt; (the auto-acknowledgement argument is &lt;code&gt;false&lt;/code&gt;). &lt;/p&gt;

&lt;p&gt;Now, if the Go service crashes for any reason, the acknowledgment won't be sent, and the broker will re-queue the message so that it can be given another chance to be processed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Launching The Microservices
&lt;/h2&gt;

&lt;p&gt;Alright, let's fire 'em up:&lt;/p&gt;

&lt;p&gt;Run &lt;strong&gt;&lt;code&gt;docker-compose up&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the three services have been built (this will take a few minutes the first time), check their names using &lt;strong&gt;&lt;code&gt;docker ps&lt;/code&gt;&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpy084qxgvl19jttkqodx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpy084qxgvl19jttkqodx.png" width="800" height="95"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open two new terminals, shell into the Python and Go containers using the respective container names and start the servers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;docker exec -it microservicesusingrabbitmq_python-service_1 bash&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;code&gt;FLASK_APP=main.py python -m flask run --port 3000 --host 0.0.0.0&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;docker exec -it microservicesusingrabbitmq_go-service_1 bash&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;code&gt;go run main.go&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open a third terminal to send the POST request. I'll use Curl:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;curl -d "full_name=usama" -X POST http://localhost:3000/users/1&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And you'll see the transmission:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezqf3vafb7hxtx2hgrbt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezqf3vafb7hxtx2hgrbt.png" width="800" height="120"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frakomu282ke1o1dgyciz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frakomu282ke1o1dgyciz.png" width="800" height="115"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At any point, you may also shell into the RabbitMQ container and just look around:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;rabbitmqctl list_exchanges&lt;/code&gt; (list all the exchanges on this broker node)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;rabbitmqctl list_queues&lt;/code&gt; (list all the queues on this broker node)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;rabbitmqctl list_bindings&lt;/code&gt; (list all the bindings on this broker node)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;rabbitmqctl list_queues name messages_ready messages_unacknowledged&lt;/code&gt; (list all the queues with the number of messages each has that are &lt;em&gt;ready to be delivered to clients but not yet delivered&lt;/em&gt; and those &lt;em&gt;delivered but not yet acknowledged&lt;/em&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As I mentioned at the start, this is by no means a deep dive into microservices. There are many questions to be asked, and I'll try to answer an important one: how do we make this communication transactional? &lt;strong&gt;What happens if our Go service (the consumer) throws an exception while updating the state on its end, and we need the update event to roll back across all the services it affected?&lt;/strong&gt; Imagine how complicated this could get when we have several microservices and thousands of such update events. &lt;strong&gt;Essentially, we'll need separate, compensating events that perform the rollback.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In our case, if the Go service throws an exception while updating the data, it will have to send a message back to the Python service telling it to roll back the update. &lt;em&gt;Note that for these kinds of errors, the message delivery still has to be acknowledged (even though the processing wasn't successful), so that the message doesn't get re-queued by the broker&lt;/em&gt;. When writing our consumer, we have to decide which errors mean that the message should be re-queued (tried again) and which mean that it should not be re-queued but rolled back instead.&lt;/p&gt;
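
&lt;p&gt;To make that requeue-or-not decision concrete, here's a small, language-agnostic sketch in JavaScript (the error classes and the &lt;code&gt;settleAfterError&lt;/code&gt; helper are hypothetical, not part of any client library):&lt;/p&gt;

```javascript
// Illustrative sketch: decide how to settle a delivery whose processing threw.
// The error classes below are hypothetical stand-ins for your own error types.
class TransientError extends Error {}   // e.g. the datastore was briefly unreachable
class PermanentError extends Error {}   // e.g. malformed payload: retrying won't help

function settleAfterError(err) {
  if (err instanceof TransientError) {
    return 'requeue'; // nack + requeue: let the broker redeliver it later
  }
  // Acknowledge even though processing failed, so the broker doesn't
  // redeliver forever; recovery happens via a compensating (rollback) event.
  return 'ack';
}
```

&lt;p&gt;A real consumer would then call the client library's ack or nack method on the delivery based on what this policy returns.&lt;/p&gt;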

&lt;p&gt;But how do we specify which update event to roll back, and how exactly would the rollback happen? The &lt;a href="https://blog.couchbase.com/saga-pattern-implement-business-transactions-using-microservices-part/" rel="noopener noreferrer"&gt;Saga pattern&lt;/a&gt;, along with &lt;a href="http://microservices.io/patterns/data/event-sourcing.html" rel="noopener noreferrer"&gt;event sourcing&lt;/a&gt;, is widely used to ensure this kind of data consistency.&lt;/p&gt;
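
&lt;p&gt;One common shape for such compensating events: every update event carries a correlation id and the previous state, so a rollback event can be derived from it mechanically. A hypothetical sketch (all field names are illustrative, not from any library):&lt;/p&gt;

```javascript
// Derive a compensating (rollback) event from an update event, in the spirit
// of the Saga pattern. This is a sketch; field names are illustrative.
function compensatingEvent(updateEvent) {
  return {
    routingKey: updateEvent.routingKey + '.rollback',
    // The correlation id ties the rollback to the original update,
    // answering "which update event do we roll back?".
    correlationId: updateEvent.correlationId,
    // The previous state (or enough of it to restore) answers "how?".
    payload: updateEvent.previousState,
  };
}
```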

&lt;h2&gt;
  
  
  A Few Words On Designing The Broker
&lt;/h2&gt;

&lt;p&gt;Consider two things: the &lt;strong&gt;types of exchanges to use&lt;/strong&gt; and &lt;strong&gt;how to group the exchanges&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;If you need to broadcast certain kinds of messages to all the services in your system, have a look at the &lt;a href="https://www.rabbitmq.com/tutorials/amqp-concepts.html#exchange-fanout" rel="noopener noreferrer"&gt;fanout exchange type&lt;/a&gt;. Then, one way to group the exchanges could be on the basis of events e.g. three fanout exchanges named &lt;code&gt;user.profile.updated&lt;/code&gt;, &lt;code&gt;user.profile.deleted&lt;/code&gt;, &lt;code&gt;user.profile.added&lt;/code&gt;. This might not be what you want all the time since you might end up with too many exchanges and won’t be able to filter messages for specific consumers without creating a new &lt;em&gt;exchange&lt;/em&gt;, &lt;em&gt;queue&lt;/em&gt; and &lt;em&gt;binding&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Another way could be to create &lt;em&gt;topic exchanges&lt;/em&gt; in terms of entities in your system. So in our first example, &lt;code&gt;user&lt;/code&gt;, &lt;code&gt;movie&lt;/code&gt;, &lt;code&gt;cinema&lt;/code&gt; etc could be entities and, say, &lt;em&gt;queues&lt;/em&gt; bound to &lt;code&gt;user&lt;/code&gt; could use binding keys like &lt;code&gt;user.created&lt;/code&gt; (get message when a user is created), &lt;code&gt;user.login&lt;/code&gt; (get message when a user has just logged in), &lt;code&gt;user.roles.grant&lt;/code&gt; (get message telling that the user has been given an authorization role), &lt;code&gt;user.notify&lt;/code&gt; (send the user a notification) etc.&lt;/p&gt;
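
&lt;p&gt;To make the binding-key semantics concrete: &lt;code&gt;*&lt;/code&gt; matches exactly one dot-separated word, while &lt;code&gt;#&lt;/code&gt; matches zero or more. The broker does this matching for you; the following JavaScript sketch merely illustrates the behavior:&lt;/p&gt;

```javascript
// Illustration of topic-exchange matching: '*' consumes exactly one
// dot-separated word, '#' consumes zero or more words.
function topicMatches(bindingKey, routingKey) {
  const b = bindingKey.split('.');
  const r = routingKey.split('.');
  function go(bi, ri) {
    if (bi === b.length) return ri === r.length;
    if (b[bi] === '#') {
      // Try letting '#' absorb as many trailing words as possible, then fewer.
      for (let k = r.length; k >= ri; k--) {
        if (go(bi + 1, k)) return true;
      }
      return false;
    }
    if (ri === r.length) return false;
    if (b[bi] !== '*') {
      if (b[bi] !== r[ri]) return false;
    }
    return go(bi + 1, ri + 1);
  }
  return go(0, 0);
}
```

&lt;p&gt;So a queue bound with &lt;code&gt;user.#&lt;/code&gt; receives &lt;code&gt;user.roles.grant&lt;/code&gt;, while one bound with &lt;code&gt;user.*&lt;/code&gt; does not.&lt;/p&gt;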

&lt;p&gt;Always use routing to filter and deliver messages to specific consumers. Writing code to discard certain incoming messages while accepting others is an anti-pattern. Consumers should only receive messages that they require.&lt;/p&gt;

&lt;p&gt;Finally, if your needs are complex and you require that messages be filtered down to certain consumers on the basis of multiple properties, use a &lt;a href="https://www.rabbitmq.com/tutorials/amqp-concepts.html#exchange-headers" rel="noopener noreferrer"&gt;headers exchange&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/UsamaAshraf/microservices-using-rabbitmq" rel="noopener noreferrer"&gt;&lt;em&gt;Enjoy!&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>go</category>
      <category>microservices</category>
      <category>rabbitmq</category>
    </item>
    <item>
      <title>Service Workers &amp; Rails Middleware</title>
      <dc:creator>Usama Ashraf</dc:creator>
      <pubDate>Mon, 09 Apr 2018 11:39:51 +0000</pubDate>
      <link>https://forem.com/usamaashraf/service-workers--rails-middleware-3im3</link>
      <guid>https://forem.com/usamaashraf/service-workers--rails-middleware-3im3</guid>
      <description>

&lt;p&gt;One of the most promising features of the HTML5 API is &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers"&gt;Web Workers&lt;/a&gt;. JavaScript is of course single-threaded and will block its event loop while waiting on any long-running, synchronous operation. In the browser this could mean a frozen interface or worse. Web Workers don't have this problem: they're &lt;strong&gt;isolated browser threads&lt;/strong&gt; running in the background, in an entirely different context from the web page, and therefore have no access to the DOM or to the &lt;code&gt;window&lt;/code&gt; and &lt;code&gt;document&lt;/code&gt; objects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API"&gt;Service Workers&lt;/a&gt; are simply special kinds of Web Workers that are installed by an  origin server, run in the background and &lt;strong&gt;allow requests made to that server to be intercepted&lt;/strong&gt;. So they kind of sit in between the browser and the server. Service Workers are a core part of modern Progressive Web Apps (PWAs).&lt;/p&gt;

&lt;p&gt;Now, say, your internet is down. Since there's a running process that can intercept requests made to your site, &lt;strong&gt;we can write that process to display a custom, previously cached page rather than a miserable, grey page of death.&lt;/strong&gt; It's cosmetic, sure. But trust me, your end-users will notice. Don't believe me? Turn off your Wi-Fi and reload this page. Looks pretty cool.&lt;/p&gt;

&lt;p&gt;Admittedly, this article is meant more to be a leisurely excursion into two relatively advanced topics than a perfect solution to a critical problem (which it isn't). And the reason is...well why not. &lt;/p&gt;

&lt;h2&gt;
  
  
  Adding Our Service Worker To The Browser
&lt;/h2&gt;

&lt;p&gt;On the Rails side I'll follow this directory structure to store the service worker script and the offline assets:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app/
   ...
   public/
       ...
       service-worker.js
       offline/
               offline.html
               offline.css
               offline.jpg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Thus the relevant URIs would be &lt;code&gt;/service-worker.js&lt;/code&gt;, &lt;code&gt;/offline/offline.html&lt;/code&gt; and so on.&lt;/p&gt;

&lt;p&gt;The following script will register a given service worker in your browser. On Rails, just dumping it inside the manifest &lt;code&gt;application.js&lt;/code&gt; (or a custom manifest file at another location) will work, though it'd be much better to use a separate file:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/assets/javascripts/application.js&lt;/span&gt;
&lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;serviceWorker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'/service-worker.js'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;reg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Service worker registration succeeded!'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Service worker registration failed: '&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After which you can see the following in the Chrome DevTools:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cufkOEZp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/z8nv2e8vohm4ieosd36l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cufkOEZp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/z8nv2e8vohm4ieosd36l.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Registration might fail for the following reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your site is not running on HTTPS (security concern). &lt;code&gt;localhost&lt;/code&gt; works as well since it's considered secure, but if you're developing locally on something other than &lt;code&gt;localhost&lt;/code&gt;, say, a Vagrant-provisioned VM, just forward traffic on &lt;code&gt;localhost&lt;/code&gt; to the VM's IP. Here's how to create that proxy on Windows if our dev site is running on &lt;code&gt;192.168.99.100:3000&lt;/code&gt;:&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;netsh interface portproxy add v4tov4 &lt;span class="nv"&gt;listenport&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="nv"&gt;listenaddress&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;localhost &lt;span class="nv"&gt;connectport&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3000 &lt;span class="nv"&gt;connectaddress&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;192.168.99.100
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The service worker file is on a different origin to that of your app (security concern, can't have a service worker mess with requests made to other sites!).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The path to your service worker file is not right. It has to be relative to the origin.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some outdated browsers don't support Service Workers, and since I'll be employing ES6 as well as the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage"&gt;Cache API&lt;/a&gt; (for storing the offline page), let's check for their availability first:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// application.js&lt;/span&gt;

&lt;span class="c1"&gt;// Use feature-detection to check for ES6 support.&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;browserSupportsES6&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="kr"&gt;eval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"var foo = (x)=&amp;gt;x+1"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Use service workers only if the browser supports ES6, &lt;/span&gt;
&lt;span class="c1"&gt;// the Cache API and of course Service Workers themselves.&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;browserSupportsES6&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'caches'&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'serviceWorker'&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;serviceWorker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'/service-worker.js'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;reg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Service worker registration succeeded!'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Service worker registration failed: '&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Alright, let's get to the meat of the story: the service worker itself.&lt;br&gt;
After registration, the browser will attempt to &lt;strong&gt;install&lt;/strong&gt; and then &lt;strong&gt;activate&lt;/strong&gt; the service worker. The difference between these two will become clear shortly. For now, it's enough to know that the &lt;code&gt;install&lt;/code&gt; event only happens once (for the same service worker script) and is a great place to hook into and fetch &amp;amp; cache our offline page.&lt;br&gt;
As I mentioned earlier, the service worker can intercept requests made to the server. This is achieved by hooking into the &lt;code&gt;fetch&lt;/code&gt; event. &lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// service-worker.js&lt;/span&gt;

&lt;span class="c1"&gt;// Path is relative to the origin.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;OFFLINE_PAGE_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'offline/offline.html'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;// We'll add more URIs to this array later.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ASSETS_TO_BE_CACHED&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;OFFLINE_PAGE_URL&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="nb"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'install'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;waitUntil&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="c1"&gt;// The Cache API is domain specific and allows an app to create &amp;amp; name &lt;/span&gt;
        &lt;span class="c1"&gt;// various caches it'll use. This allows for better data organization.&lt;/span&gt;
        &lt;span class="c1"&gt;// Under each named cache, we'll add our key-value pairs.&lt;/span&gt;
        &lt;span class="nx"&gt;caches&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'my-service-worker-cache-name'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// addAll() hits (GET request) all the URIs in the array and caches &lt;/span&gt;
            &lt;span class="c1"&gt;// the results, with the URIs as the keys.&lt;/span&gt;
            &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;addAll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ASSETS_TO_BE_CACHED&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Assets added to cache.'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
                &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Error while fetching assets'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
            &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nb"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'fetch'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// All requests made to the server will pass through here.&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="c1"&gt;// If one fails, return the offline page from the cache.&lt;/span&gt;
        &lt;span class="c1"&gt;// caches.match doesn't require the name of the specific&lt;/span&gt;
        &lt;span class="c1"&gt;// cache in which the key is located. It just traverses all created&lt;/span&gt;
        &lt;span class="c1"&gt;// by the current domain and fetches the first one.&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;caches&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;OFFLINE_PAGE_URL&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

    &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;respondWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The promise passed to &lt;code&gt;event.waitUntil()&lt;/code&gt; lets the browser know when your install completes, and if it was successful.&lt;br&gt;
&lt;em&gt;It's important to know that "failure" of a request passed through the &lt;code&gt;fetch&lt;/code&gt; handler, using the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API"&gt;Fetch API&lt;/a&gt;, doesn't mean a 4xx or 5xx response, unlike other AJAX implementations. That promise &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#Checking_that_the_fetch_was_successful"&gt;only fails in case of a network error or wrong CORS config on the server&lt;/a&gt;. Which is exactly what we want.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So our service worker in its current state will work. Except...&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;What about the assets used by our offline page (CSS, JS, images etc)? Shouldn't they be cached just like the HTML of the offline page?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What if we want to change any of the cached assets or the page itself? How would we update the cache on the browser accordingly?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What if we change our service worker script? How will the update happen on the browser?&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All valid questions. Let's take 'em on one by one:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Yes. Just add the asset URIs to the &lt;code&gt;ASSETS_TO_BE_CACHED&lt;/code&gt; array and they'll be cached. In the &lt;code&gt;fetch&lt;/code&gt; handler add a simple check for these assets, pull them from the cache and only hit the server if they're not found in the cache (ideally this will never happen).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Version your cache. Whenever you change the offline page or its assets, rename the cache created by the service worker and delete all other caches that have a different name (which will include the previously stored cache).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Every time you refresh the page, the browser performs a byte-by-byte comparison between the installed service worker script and the fetched one; if there's any difference, it re-installs the worker immediately. This comparison also happens &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API#Download_install_and_activate"&gt;every 24 hours or so&lt;/a&gt; as part of automatic update checks. Of course, as you may have guessed, &lt;strong&gt;the browser will cache any assets that your Rails app serves by way of the &lt;code&gt;Cache-Control&lt;/code&gt; header. If our service worker script itself remains cached for a long time, that's bad: we want immediate updates.&lt;/strong&gt; That's where we'll use a custom Rails middleware to skip the header for our service worker(s).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Note: The new worker won't be &lt;code&gt;activated&lt;/code&gt;, i.e. in running state, until the previous worker is removed. This requires either a new session (new tab) or a manual update (the button for which is visible in the Chrome console picture above). We don't need to worry about this for the most part.&lt;/em&gt;&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// service-worker.js&lt;/span&gt;

&lt;span class="c1"&gt;// Changing the cache version will cause existing cached resources to be&lt;/span&gt;
&lt;span class="c1"&gt;// deleted the next time the service worker is re-installed and re-activated.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;CACHE_VERSION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;CURRENT_CACHE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`your-app-name-cache-v-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;CACHE_VERSION&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;OFFLINE_PAGE_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'offline/offline.html'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ASSETS_TO_BE_CACHED&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'offline/offline.css'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'offline/offline.jpg'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;OFFLINE_PAGE_URL&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="nb"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'install'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;waitUntil&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nx"&gt;caches&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;CURRENT_CACHE&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// addAll() hits all the URIs in the array and caches &lt;/span&gt;
            &lt;span class="c1"&gt;// the results, with the URIs as the keys.&lt;/span&gt;
            &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;addAll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ASSETS_TO_BE_CACHED&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Assets added to cache'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
                &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Error while fetching assets'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
            &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nb"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'activate'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Delete all caches except for CURRENT_CACHE, thus deleting the previous cache&lt;/span&gt;
    &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;waitUntil&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nx"&gt;caches&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;keys&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cacheNames&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="nx"&gt;cacheNames&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cacheName&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cacheName&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;CURRENT_CACHE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Deleting out of date cache:'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;cacheName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;caches&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cacheName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="p"&gt;})&lt;/span&gt;
            &lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nb"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'fetch'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="c1"&gt;// If it's a request for an asset of the offline page.&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ASSETS_TO_BE_CACHED&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;uri&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;respondWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="nx"&gt;caches&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="c1"&gt;// Pull from cache, otherwise fetch from the server.&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;caches&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;OFFLINE_PAGE_URL&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

    &lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;respondWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You may ask why we purged the previously cached assets in &lt;code&gt;activate&lt;/code&gt; and not &lt;code&gt;install&lt;/code&gt;. &lt;strong&gt;If we purged the outdated cache in &lt;code&gt;install&lt;/code&gt;, the worker running at that moment would still be the old one (the new one hasn't been &lt;code&gt;activated&lt;/code&gt; yet), and it could break if it depends on the previous cache.&lt;/strong&gt; In our case there is no such dependency, but it's generally good practice, so I chose to follow it.&lt;/p&gt;

&lt;h2&gt;
  
  
  OK, But Where Does Rails Middleware Come In? And By The Way, WTF Is Middleware?
&lt;/h2&gt;

&lt;p&gt;As I mentioned in point number 3 above, &lt;em&gt;ideally service workers shouldn't be cacheable, because we may need clients' browsers to update them ASAP. That is our ultimate objective, and middleware is just a means to that end.&lt;/em&gt; There may be better ways of doing this, say, serving assets via Nginx and configuring it not to send caching headers for service workers. But let's do it this way:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Middleware is basically any piece of code that wraps HTTP requests and responses made to and received from a server&lt;/strong&gt;. It's very similar to &lt;a href="https://expressjs.com/en/guide/using-middleware.html"&gt;the concept in Express&lt;/a&gt;. When a request passes through the entire middleware stack, it hits the app and then bounces back up the stack as the complete HTTP response.&lt;br&gt;
As far as we're concerned, it's some code that a response generated by our Rails app has to go through before being returned to the browser. Rails is a &lt;a href="https://rack.github.io/"&gt;Rack-based&lt;/a&gt; framework, and this is what allows us to use this construct. Getting into the details of Rack here would probably be overkill though.&lt;/p&gt;
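To see the mechanics outside of any framework, here's a minimal sketch of a Rack-style stack in plain Ruby; the inner app and the `UpcaseBody` middleware are made-up names, purely for illustration:

```ruby
# The innermost "app": takes a Rack env hash, returns [status, headers, body].
app = ->(env) { [200, { 'Content-Type' => 'text/html' }, ['Hello']] }

# A middleware wraps another app and exposes the same call(env) interface.
class UpcaseBody
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env) # pass the request down the stack...
    [status, headers, body.map(&:upcase)]  # ...and modify the response on the way up
  end
end

stack = UpcaseBody.new(app)
status, headers, body = stack.call({ 'PATH_INFO' => '/' })
puts body.first # => "HELLO"
```

The request "bounces back up" exactly as described: the middleware sees the response after the inner app has produced it.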

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gdjxpwnr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ds47ctkp8hj8pw89pm2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gdjxpwnr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ds47ctkp8hj8pw89pm2q.png" alt=""&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Source: &lt;a href="https://philsturgeon.uk"&gt;https://philsturgeon.uk&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The Rails middleware specification is really simple: a middleware just has to implement a &lt;code&gt;call&lt;/code&gt; method. Let's create a new directory &lt;code&gt;app/middleware&lt;/code&gt; and put a new class in it:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# app/middleware/service_worker_manager.rb&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ServiceWorkerManager&lt;/span&gt;
  &lt;span class="c1"&gt;# We'll pass 'service_workers' when we register this middleware.&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;initialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;service_workers&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="vi"&gt;@app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;
    &lt;span class="vi"&gt;@service_workers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;service_workers&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c1"&gt;# Let the next middleware classes &amp;amp; app do their thing first...&lt;/span&gt;
    &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="vi"&gt;@app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;dont_cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="vi"&gt;@service_workers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;any?&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;worker_name&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'REQUEST_PATH'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;include?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;worker_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;# ...and modify the response if a service worker was fetched.&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;dont_cache&lt;/span&gt;
      &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'Cache-Control'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'no-cache'&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;

    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
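We can sanity-check a middleware like this outside Rails by driving a copy of it with a stub Rack app in plain Ruby. Note that this sketch keys off the Rack-standard `PATH_INFO` env entry, and the stub app's long-lived caching header is just a stand-in for what `ActionDispatch::Static` would send:

```ruby
class ServiceWorkerManager
  def initialize(app, service_workers)
    @app = app
    @service_workers = service_workers
  end

  def call(env)
    status, headers, response = @app.call(env)
    dont_cache = @service_workers.any? { |name| env['PATH_INFO'].include?(name) }
    headers['Cache-Control'] = 'no-cache' if dont_cache
    [status, headers, response]
  end
end

# Stub app that always sends a long-lived caching header.
stub_app = ->(env) { [200, { 'Cache-Control' => 'max-age=31536000' }, ['js']] }
mw = ServiceWorkerManager.new(stub_app, ['service-worker.js'])

_, sw_headers, _    = mw.call('PATH_INFO' => '/service-worker.js')
_, other_headers, _ = mw.call('PATH_INFO' => '/application.js')

puts sw_headers['Cache-Control']    # => "no-cache"
puts other_headers['Cache-Control'] # => "max-age=31536000"
```

Only the service worker loses its caching header; every other asset keeps whatever the app set.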



&lt;p&gt;We'll need to register our middleware:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# config/environments/development.rb&lt;/span&gt;

&lt;span class="c1"&gt;# ...&lt;/span&gt;

&lt;span class="c1"&gt;# Add our own middleware before the ActionDispatch::Static middleware&lt;/span&gt;
&lt;span class="c1"&gt;# and pass it an array of service worker URIs as a parameter.&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;middleware&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert_before&lt;/span&gt; &lt;span class="no"&gt;ActionDispatch&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Static&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;ServiceWorkerManager&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'service-worker.js'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you run &lt;code&gt;rake middleware&lt;/code&gt; you'll get a list of all the middleware being used by your app:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eV7TahyL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/lzt32p9ual9kn865xkw2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eV7TahyL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/lzt32p9ual9kn865xkw2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we specified, &lt;code&gt;ServiceWorkerManager&lt;/code&gt; has been added right before &lt;code&gt;ActionDispatch::Static&lt;/code&gt;. The reason we added it at this particular position is that &lt;code&gt;ActionDispatch::Static&lt;/code&gt; is &lt;a href="http://api.rubyonrails.org/classes/ActionDispatch/Static.html"&gt;used to serve static files from the &lt;code&gt;public&lt;/code&gt; directory&lt;/a&gt; and automatically sets the &lt;code&gt;Cache-Control&lt;/code&gt; header as well. In fact, &lt;a href="https://github.com/rails/rails/blob/efb935a5daf04cf3309453351f87faea4a3a2e6e/actionpack/lib/action_dispatch/middleware/static.rb#L121"&gt;static file requests don't even go through the rest of the middleware stack or the app and are returned immediately&lt;/a&gt;. So by placing our middleware &lt;em&gt;before&lt;/em&gt; it, the &lt;em&gt;outgoing&lt;/em&gt; response still has to pass through ours.&lt;/p&gt;

&lt;p&gt;Here are two alternative ways to use your own middleware classes:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Add our middleware after another one in the stack.&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;middleware&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert_after&lt;/span&gt; &lt;span class="no"&gt;SomeOtherMiddleware&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;MyMiddleware&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;





&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Appending our middleware to the end of the stack.&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;middleware&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt; &lt;span class="no"&gt;MyMiddleware&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And we're done! Load your site, shut down your local server and try refreshing the page to see &lt;code&gt;offline.html&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;I've intentionally left out using the asset pipeline to transpile, minify our worker script and offline assets as that is not straightforward: fingerprinting (dynamic URIs that always change with the asset's content) &amp;amp; long-lived caching headers (which we just got rid of) cause problems with service workers. These are &lt;a href="https://github.com/rossta/serviceworker-rails"&gt;explained &amp;amp; solved very well here&lt;/a&gt;: a much better way to do all that we just did IMO :)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Offline page caching was just sort of a case study I used. Service Workers can be utilized to implement much richer features like &lt;strong&gt;background syncing, push notifications&lt;/strong&gt; &amp;amp; others. And so can middleware: &lt;strong&gt;custom request logging, tracking, throttling&lt;/strong&gt; etc. Hopefully this was a good learning experience and has motivated you to explore the two subjects further.&lt;/p&gt;
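For instance, a request-logging middleware of the kind mentioned above could be sketched like this; the `RequestLogger` class and its log format are hypothetical:

```ruby
require 'stringio'

# A hypothetical request-logging middleware: logs method, path, status
# and elapsed time for every request that passes through it.
class RequestLogger
  def initialize(app, io = $stdout)
    @app = app
    @io = io
  end

  def call(env)
    started = Time.now
    status, headers, response = @app.call(env)
    elapsed_ms = ((Time.now - started) * 1000).round(1)
    @io.puts "#{env['REQUEST_METHOD']} #{env['PATH_INFO']} -> #{status} (#{elapsed_ms}ms)"
    [status, headers, response]
  end
end

# It would be registered like any other middleware, e.g.:
#   config.middleware.use RequestLogger

# Quick standalone check against a stub app, logging into a StringIO.
log = StringIO.new
app = ->(env) { [200, {}, ['ok']] }
RequestLogger.new(app, log).call('REQUEST_METHOD' => 'GET', 'PATH_INFO' => '/hello')
puts log.string # e.g. "GET /hello -> 200 (0.1ms)"
```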


</description>
      <category>rails</category>
      <category>serviceworkers</category>
      <category>middleware</category>
      <category>pwa</category>
    </item>
    <item>
      <title>Scaling Out With Docker and Nginx</title>
      <dc:creator>Usama Ashraf</dc:creator>
      <pubDate>Wed, 28 Mar 2018 14:10:28 +0000</pubDate>
      <link>https://forem.com/usamaashraf/scaling-out-with-docker-andnginx-315g</link>
      <guid>https://forem.com/usamaashraf/scaling-out-with-docker-andnginx-315g</guid>
      <description>&lt;p&gt;&lt;a href="https://medium.com/r/?url=https%3A%2F%2F12factor.net%2Fdev-prod-parity" rel="noopener noreferrer"&gt;Number 10 of the 12 Commandments&lt;/a&gt; states that our development environment should be as similar as possible to what we have in production. Not doing so can lead to many predicaments when debugging critical, live issues.&lt;/p&gt;

&lt;p&gt;One important step in this regard would be to mimic a standard distributed production setup: &lt;em&gt;a load-balancer sitting in front of multiple backend server instances, dividing incoming HTTP traffic among them.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This article is not an introduction to Docker, Compose or Nginx&lt;/strong&gt;, but rather a guide to setting up the distributed system described above, assuming we’ve installed Docker and are familiar with images, containers etc. I’ll try to provide enough information about Compose and Nginx so we can get our hands dirty without succumbing to the noise that normally goes with them.&lt;/p&gt;

&lt;p&gt;Each of our backend server instances (simple Node.js servers) and the Nginx load-balancer will be hosted inside Docker-based Linux containers. If we stick to 3 backend servers and 1 load-balancer we’ll need to manage/provision 4 Docker containers, for which Compose is a great tool.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What We Want&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmm222ygujkszcwm5st0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmm222ygujkszcwm5st0.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s our directory structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-nginx/
       backend/
              src/
                 index.js
                 package-lock.json  
              Dockerfile
       load-balancer/
              nginx.conf
              Dockerfile
       docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;src&lt;/strong&gt; directory will contain our server-side code, in this case a simple Hello World Node (Express) app (of course your backend can be anything):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// index.js&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;  &lt;span class="p"&gt;{&lt;/span&gt;      
   &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;I just received a GET request on port 3000!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Hello World!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;I just connected on port 3000!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://gist.github.com/UsamaAshraf/1c1219fb44b8d044fb6ce8f67ef6a10b#file-package-lock-json" rel="noopener noreferrer"&gt;package-lock.json&lt;/a&gt; has nothing but an &lt;a href="https://expressjs.com" rel="noopener noreferrer"&gt;Express&lt;/a&gt; dependency .&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Setting Up Our Dockerfiles and Compose Config&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Compose will use &lt;code&gt;backend/Dockerfile&lt;/code&gt; to build an image and then provision 3 identical containers from it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use one of the standard Node images from Docker Hub 
&lt;/span&gt;&lt;span class="n"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;node&lt;/span&gt;:&lt;span class="n"&gt;boron&lt;/span&gt;
&lt;span class="c"&gt;# The Dockerfile's author
&lt;/span&gt;&lt;span class="n"&gt;LABEL&lt;/span&gt; &lt;span class="n"&gt;Usama&lt;/span&gt; &lt;span class="n"&gt;Ashraf&lt;/span&gt;
&lt;span class="c"&gt;# Create a directory in the container where the code will be placed
&lt;/span&gt;&lt;span class="n"&gt;RUN&lt;/span&gt; &lt;span class="n"&gt;mkdir&lt;/span&gt; -&lt;span class="n"&gt;p&lt;/span&gt; /&lt;span class="n"&gt;backend&lt;/span&gt;-&lt;span class="n"&gt;dir&lt;/span&gt;-&lt;span class="n"&gt;inside&lt;/span&gt;-&lt;span class="n"&gt;container&lt;/span&gt;
&lt;span class="c"&gt;# Set this as the default working directory.
# We'll land here when we SSH into the container.
&lt;/span&gt;&lt;span class="n"&gt;WORKDIR&lt;/span&gt; /&lt;span class="n"&gt;backend&lt;/span&gt;-&lt;span class="n"&gt;dir&lt;/span&gt;-&lt;span class="n"&gt;inside&lt;/span&gt;-&lt;span class="n"&gt;container&lt;/span&gt;
&lt;span class="c"&gt;# Our Nginx container will forward HTTP traffic to containers of 
# this image via port 3000. For this, 3000 needs to be 'open'.
&lt;/span&gt;&lt;span class="n"&gt;EXPOSE&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;load-balancer/Dockerfile&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use the standard Nginx image from Docker Hub
&lt;/span&gt;&lt;span class="n"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;nginx&lt;/span&gt;
&lt;span class="c"&gt;# The Dockerfile's author
&lt;/span&gt;&lt;span class="n"&gt;LABEL&lt;/span&gt; &lt;span class="n"&gt;Usama&lt;/span&gt; &lt;span class="n"&gt;Ashraf&lt;/span&gt;
&lt;span class="c"&gt;# Copy the configuration file from the current directory and paste 
# it inside the container to use it as Nginx's default config.
&lt;/span&gt;&lt;span class="n"&gt;COPY&lt;/span&gt; &lt;span class="n"&gt;nginx&lt;/span&gt;.&lt;span class="n"&gt;conf&lt;/span&gt; /&lt;span class="n"&gt;etc&lt;/span&gt;/&lt;span class="n"&gt;nginx&lt;/span&gt;/&lt;span class="n"&gt;nginx&lt;/span&gt;.&lt;span class="n"&gt;conf&lt;/span&gt;
&lt;span class="c"&gt;# Port 8080 of the container will be exposed and then mapped to port
# 8080 of our host machine via Compose. This way we'll be able to 
# access the server via localhost:8080 on our host.
&lt;/span&gt;&lt;span class="n"&gt;EXPOSE&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;

&lt;span class="c"&gt;# Start Nginx when the container has provisioned.
&lt;/span&gt;&lt;span class="n"&gt;CMD&lt;/span&gt; [&lt;span class="s2"&gt;"nginx"&lt;/span&gt;, &lt;span class="s2"&gt;"-g"&lt;/span&gt;, &lt;span class="s2"&gt;"daemon off;"&lt;/span&gt;]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;load-balancer/nginx.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;http&lt;/span&gt; {

 &lt;span class="n"&gt;events&lt;/span&gt; { &lt;span class="n"&gt;worker_connections&lt;/span&gt; &lt;span class="m"&gt;1024&lt;/span&gt;; }
 &lt;span class="n"&gt;upstream&lt;/span&gt; &lt;span class="n"&gt;localhost&lt;/span&gt; {
    &lt;span class="c"&gt;# These are references to our backend containers, facilitated by
&lt;/span&gt;    &lt;span class="c"&gt;# Compose, as defined in docker-compose.yml   
&lt;/span&gt;    &lt;span class="n"&gt;server&lt;/span&gt; &lt;span class="n"&gt;backend1&lt;/span&gt;:&lt;span class="m"&gt;3000&lt;/span&gt;;
    &lt;span class="n"&gt;server&lt;/span&gt; &lt;span class="n"&gt;backend2&lt;/span&gt;:&lt;span class="m"&gt;3000&lt;/span&gt;;
    &lt;span class="n"&gt;server&lt;/span&gt; &lt;span class="n"&gt;backend3&lt;/span&gt;:&lt;span class="m"&gt;3000&lt;/span&gt;;
 }
 &lt;span class="n"&gt;server&lt;/span&gt; {
    &lt;span class="n"&gt;listen&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;;
    &lt;span class="n"&gt;server_name&lt;/span&gt; &lt;span class="n"&gt;localhost&lt;/span&gt;;
    &lt;span class="n"&gt;location&lt;/span&gt; / {
       &lt;span class="n"&gt;proxy_pass&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;://&lt;span class="n"&gt;localhost&lt;/span&gt;;
       &lt;span class="n"&gt;proxy_set_header&lt;/span&gt; &lt;span class="n"&gt;Host&lt;/span&gt; $&lt;span class="n"&gt;host&lt;/span&gt;;
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a bare-bones Nginx configuration file. If anyone would like help with more advanced options, please do post a comment and I’ll be happy to assist.&lt;/p&gt;
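&lt;p&gt;For instance, weighting and least-connections balancing only take a couple of extra directives in the same &lt;code&gt;upstream&lt;/code&gt; block (a sketch using standard Nginx directives; tune the values to your own workload):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;upstream localhost {
   # Pick the backend with the fewest active connections...
   least_conn;
   # ...and send proportionally more traffic to backend1.
   server backend1:3000 weight=2;
   server backend2:3000;
   server backend3:3000;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;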

&lt;p&gt;&lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.2'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;backend1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./backend&lt;/span&gt;
      &lt;span class="na"&gt;tty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./backend/src:/backend-dir-inside-container'&lt;/span&gt;
  &lt;span class="na"&gt;backend2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./backend&lt;/span&gt;
      &lt;span class="na"&gt;tty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./backend/src:/backend-dir-inside-container'&lt;/span&gt;
  &lt;span class="na"&gt;backend3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./backend&lt;/span&gt;
      &lt;span class="na"&gt;tty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./backend/src:/backend-dir-inside-container'&lt;/span&gt;
  &lt;span class="na"&gt;loadbalancer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./load-balancer&lt;/span&gt;
      &lt;span class="na"&gt;tty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;links&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;backend1&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;backend2&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;backend3&lt;/span&gt;
      &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;8080:8080'&lt;/span&gt;
&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without going into details, here’s some insight into our Compose config:&lt;/p&gt;

&lt;p&gt;A single Compose &lt;em&gt;service&lt;/em&gt; generally uses one image, defined by a Dockerfile. When we build our services, the images are built and containers are provisioned from them. &lt;br&gt;
&lt;em&gt;If you’re new to Docker but familiar with VMs then maybe this analogy will help: an ISO file for an OS (image) is used by VirtualBox (Compose) to launch a running VM (container). A service is made up of at least one running container.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;build&lt;/code&gt; tells Compose where to look for a Dockerfile to build the image for the service.&lt;br&gt;
&lt;code&gt;tty&lt;/code&gt; just tells the container to keep running even when there’s no daemon specified via CMD in the Dockerfile. Otherwise, it’ll shut down right after provisioning (sounds strange, I know).&lt;br&gt;
&lt;code&gt;volumes&lt;/code&gt; in our case defines where to put the server-side code in the container (oversimplification). Volumes are a storage mechanism within containers and &lt;a href="https://docs.docker.com/storage/volumes/" rel="noopener noreferrer"&gt;not a trivial feature&lt;/a&gt;.&lt;br&gt;
&lt;code&gt;links&lt;/code&gt; does two things: it makes sure the loadbalancer service doesn’t start unless the backend services have started, and it allows backend1, backend2 and backend3 to be used as hostnames within loadbalancer, which is what we did in our nginx.conf.&lt;br&gt;
&lt;code&gt;ports&lt;/code&gt; specifies a mapping between a host port and a container port. 8080 of the container will receive client requests made to localhost:8080 on the host.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Launch&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Run &lt;code&gt;sudo docker-compose up --build&lt;/code&gt; inside &lt;code&gt;docker-nginx&lt;/code&gt; (or whatever your project root is). After all the services have started, run &lt;code&gt;sudo docker ps&lt;/code&gt; and you’ll see something like the following, a list of all the containers just launched:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foe4bcucxhwupfaq8z81l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foe4bcucxhwupfaq8z81l.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s SSH into one of the backend containers&lt;/strong&gt; and &lt;strong&gt;start our Node server&lt;/strong&gt;. After hitting &lt;code&gt;localhost:8080&lt;/code&gt; from our browser 5 times, this is what we get:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwisd1lch70ltp456f4nf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwisd1lch70ltp456f4nf.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Of course &lt;em&gt;the browser is hitting 8080 on the host machine, which has been mapped to 8080 on the Nginx container, which in turn forwards the request to port 3000 of the backend container. Hence the log shows requests on port 3000.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Open two new terminals on your host machine, SSH into the other two backend containers and start the Node servers within both:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo docker exec -it dockernginx_backend1_1 bash&lt;/code&gt;&lt;br&gt;
&lt;code&gt;node index.js&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo docker exec -it dockernginx_backend3_1 bash&lt;/code&gt;&lt;br&gt;
&lt;code&gt;node index.js&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hit &lt;code&gt;localhost:8080&lt;/code&gt; on your browser multiple times (fast) and you’ll see that the requests are being divided among the 3 servers!&lt;/strong&gt;&lt;br&gt;
Now you can simulate things like session and cache persistence across multiple servers, concurrency issues, or even get a rough idea of the throughput increase we can achieve when we scale out (e.g. by using &lt;a href="https://github.com/wg/wrk" rel="noopener noreferrer"&gt;wrk&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/UsamaAshraf/scaling-out-docker-nginx" rel="noopener noreferrer"&gt;Here it is all in one place.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To deploy a similar Compose setup in production I recommend &lt;a href="https://docs.docker.com/compose/production/" rel="noopener noreferrer"&gt;this&lt;/a&gt;; and you might enjoy &lt;a href="https://github.com/kubernetes/kompose" rel="noopener noreferrer"&gt;kompose&lt;/a&gt; if you need to export it to Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: The more experienced might bring up &lt;a href="https://docs.docker.com/compose/reference/scale/" rel="noopener noreferrer"&gt;Compose’s ability to scale a running service&lt;/a&gt; and ask whether it is possible to configure Nginx so as to auto-scale at run-time: sort of. The Nginx daemon will have to be restarted (downtime) and of course we’ll need a way to dynamically edit and add to the upstream servers’ group, which is certainly possible, but a fruitless hassle in this case IMO. If more server instances are needed, add more services and rebuild the images.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>dockercompose</category>
      <category>nginx</category>
      <category>loadbalancing</category>
    </item>
    <item>
      <title>N+1 Queries, Batch Loading &amp; Active Model Serializers in Rails</title>
      <dc:creator>Usama Ashraf</dc:creator>
      <pubDate>Fri, 16 Mar 2018 11:16:15 +0000</pubDate>
      <link>https://forem.com/usamaashraf/n1-queries-batch-loading--active-model-serializers-in-rails--3hkf</link>
      <guid>https://forem.com/usamaashraf/n1-queries-batch-loading--active-model-serializers-in-rails--3hkf</guid>
      <description>

&lt;p&gt;The n+1 query problem is one of the most common scalability bottlenecks. If you’re comfortable with Rails, &lt;a href="https://github.com/rails-api/active_model_serializers"&gt;Active Model Serializers&lt;/a&gt; and already have a good idea about what our problem is going to be, then maybe you can just jump straight into the code &lt;a href="https://gist.github.com/UsamaAshraf/95b0c8d0d64ee193148342a931c0a423"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Say you’re fetching an array of &lt;strong&gt;Post&lt;/strong&gt; objects at a GET endpoint and you also want to load the respective authors of the posts, embedding an author object within each of the post objects. Here’s a naive way of doing it:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PostsController&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ApplicationController&lt;/span&gt;  
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;index&lt;/span&gt;    
    &lt;span class="n"&gt;posts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;     

    &lt;span class="n"&gt;render&lt;/span&gt; &lt;span class="ss"&gt;json: &lt;/span&gt;&lt;span class="n"&gt;posts&lt;/span&gt;  
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Post&lt;/span&gt;
  &lt;span class="n"&gt;belongs_to&lt;/span&gt; &lt;span class="ss"&gt;:author&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;class_name: &lt;/span&gt;&lt;span class="s1"&gt;'User'&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PostSerializer&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ActiveModel&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Serializer&lt;/span&gt;  
  &lt;span class="n"&gt;attributes&lt;/span&gt; &lt;span class="ss"&gt;:id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:details&lt;/span&gt;
  &lt;span class="n"&gt;belongs_to&lt;/span&gt; &lt;span class="ss"&gt;:author&lt;/span&gt; 
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;For each of the n &lt;strong&gt;Post&lt;/strong&gt; objects being rendered a query will run to fetch the corresponding &lt;strong&gt;User&lt;/strong&gt; object, hence we’ll run a total of n+1 queries. This is disastrous. And here’s how you fix it by eager loading the &lt;strong&gt;User&lt;/strong&gt; object:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PostsController&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ApplicationController&lt;/span&gt;  
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;index&lt;/span&gt;    
    &lt;span class="c1"&gt;# Runs a SQL join with the users table.&lt;/span&gt;
    &lt;span class="n"&gt;posts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:author&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;     

    &lt;span class="n"&gt;render&lt;/span&gt; &lt;span class="ss"&gt;json: &lt;/span&gt;&lt;span class="n"&gt;posts&lt;/span&gt;  
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Until now there’s absolutely nothing new for veterans.&lt;/p&gt;

&lt;p&gt;But let’s complicate this. &lt;em&gt;Let’s assume that the site’s users are not being stored in the same RDBMS as the posts are, rather, the users are documents stored in MongoDB (for whatever reason)&lt;/em&gt;. How do we modify our &lt;strong&gt;Post&lt;/strong&gt; serializer to fetch the user now, optimally? This would be going back to square one:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PostSerializer&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ActiveModel&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Serializer&lt;/span&gt;  
  &lt;span class="n"&gt;attributes&lt;/span&gt; &lt;span class="ss"&gt;:id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:details&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:author&lt;/span&gt;

  &lt;span class="c1"&gt;# Will run n Mongo queries for n posts being rendered.&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;author&lt;/span&gt;
    &lt;span class="no"&gt;User&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;author_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="c1"&gt;# This is now a Mongoid document, not an ActiveRecord model.&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;User&lt;/span&gt;  
  &lt;span class="kp"&gt;include&lt;/span&gt; &lt;span class="no"&gt;Mongoid&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Document&lt;/span&gt;  
  &lt;span class="kp"&gt;include&lt;/span&gt; &lt;span class="no"&gt;Mongoid&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Timestamps&lt;/span&gt;  
  &lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Our predicament (the users now residing in MongoDB) could just as well be, say, calling a 3rd-party HTTP service to fetch the users, or storing them in a completely different RDBMS. &lt;em&gt;Our essential problem remains that there’s no way to ‘join’ the users datastore with the posts table and get the response we want in a single query.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Of course, we can do better. We can fetch the entire response in two queries:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fetch all the posts without the &lt;strong&gt;author&lt;/strong&gt; attribute (1 SQL query).&lt;/li&gt;
&lt;li&gt;Fetch all the corresponding authors by running a where-in query with the user IDs plucked from the array of posts (1 Mongo query with an IN clause).&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;posts&lt;/span&gt;      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;
&lt;span class="n"&gt;author_ids&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pluck&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:author_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;authors&lt;/span&gt;    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;User&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;where&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:_id&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;in&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;author_ids&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Somehow pass the author objects to the post serializer and&lt;/span&gt;
&lt;span class="c1"&gt;# map them to the correct post objects. Can't imagine what &lt;/span&gt;
&lt;span class="c1"&gt;# exactly that would look like, but probably not pretty.&lt;/span&gt;
&lt;span class="n"&gt;render&lt;/span&gt; &lt;span class="ss"&gt;json: &lt;/span&gt;&lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;pass_some_parameter_maybe: &lt;/span&gt;&lt;span class="n"&gt;authors&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So our original optimization problem has been reduced to “how do we make this code readable and maintainable”. The folks at &lt;a href="https://www.universe.com/about"&gt;Universe&lt;/a&gt; have come up with an absolute gem (too obvious?). &lt;a href="https://github.com/exAspArk/batch-loader"&gt;Batch Loader&lt;/a&gt; has been incredibly helpful to me recently.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gem 'batch-loader'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bundle install&lt;/code&gt;&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PostSerializer&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ActiveModel&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Serializer&lt;/span&gt;  
  &lt;span class="n"&gt;attributes&lt;/span&gt; &lt;span class="ss"&gt;:id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:details&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:author&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;author&lt;/span&gt;
    &lt;span class="n"&gt;object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_author_lazily&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Post&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_author_lazily&lt;/span&gt;
    &lt;span class="c1"&gt;# The current post object is added to the batch here,&lt;/span&gt;
    &lt;span class="c1"&gt;# which is eventually processed when the block executes.   &lt;/span&gt;
    &lt;span class="no"&gt;BatchLoader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;for&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;self&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;batch&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;batch_loader&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;          
      &lt;span class="n"&gt;author_ids&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pluck&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:author_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  
      &lt;span class="no"&gt;User&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;where&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:_id&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;in&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;author_ids&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
        &lt;span class="n"&gt;post&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detect&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nb"&gt;p&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;author_id&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_s&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="c1"&gt;#'Assign' the user object to the right post.&lt;/span&gt;
        &lt;span class="n"&gt;batch_loader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;      
      &lt;span class="k"&gt;end&lt;/span&gt;    
    &lt;span class="k"&gt;end&lt;/span&gt;  
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
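&lt;p&gt;A small aside on the mapping step: the &lt;code&gt;detect&lt;/code&gt; inside the loop scans the posts array once per user, which turns quadratic for large batches. An id-indexed hash keeps the mapping linear (a generic sketch with stand-in structs, not BatchLoader's actual API):&lt;/p&gt;

```ruby
# Build a lookup hash once, then resolve each post's author in O(1).
# Post and User here are hypothetical stand-ins for illustration only.
Post = Struct.new(:author_id)
User = Struct.new(:id, :name)

posts = [Post.new("a1"), Post.new("a2"), Post.new("a1")]
users = [User.new("a1", "Alice"), User.new("a2", "Bob")]

users_by_id = users.to_h { |u| [u.id, u] }        # one pass over users
authors     = posts.map { |p| users_by_id[p.author_id] }

puts authors.map(&:name).inspect  # ["Alice", "Bob", "Alice"]
```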



&lt;p&gt;If you’re familiar with Javascript Promises, think of the &lt;code&gt;get_author_lazily&lt;/code&gt; method as returning a Promise which is evaluated later. That’s a decent analogy, I think, since &lt;code&gt;BatchLoader&lt;/code&gt; uses &lt;a href="https://ruby-doc.org/core-2.4.1/Enumerable.html#method-i-lazy"&gt;lazy Ruby objects&lt;/a&gt;. &lt;br&gt;
One caveat: &lt;code&gt;BatchLoader&lt;/code&gt; caches the loaded values, so to keep responses up to date you should add this to your &lt;code&gt;config/application.rb&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;config.middleware.use BatchLoader::Middleware&lt;/code&gt;&lt;/p&gt;
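&lt;p&gt;The deferred-evaluation idea behind the Promise analogy can be seen with plain Ruby lazy enumerators (a generic illustration of laziness, not BatchLoader's actual internals):&lt;/p&gt;

```ruby
# Nothing in a lazy chain runs until a value is actually demanded,
# which is what lets a batch be collected before any loading happens.
fetch_log = []

authors = (1..Float::INFINITY).lazy.map do |id|
  fetch_log << id        # record when the 'fetch' really happens
  "user-#{id}"
end

puts fetch_log.size      # 0 -- nothing fetched yet
first_two = authors.first(2)
puts first_two.inspect   # ["user-1", "user-2"]
puts fetch_log.size      # 2 -- evaluated only on demand
```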

&lt;p&gt;That’s basically it! We’ve solved an advanced version of the n+1 queries problem while keeping our code clean and using Active Model Serializers the right way.&lt;/p&gt;

&lt;p&gt;One problem though. If you have a &lt;strong&gt;User&lt;/strong&gt; serializer (Active Model Serializers work with Mongoid as well), that &lt;em&gt;won’t&lt;/em&gt; be called for the lazily loaded &lt;strong&gt;author&lt;/strong&gt; objects, unlike before. To fix this we can use a Ruby block and serialize the &lt;strong&gt;author&lt;/strong&gt; objects before they’re ‘assigned’ to the posts.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PostSerializer&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ActiveModel&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Serializer&lt;/span&gt;  
  &lt;span class="n"&gt;attributes&lt;/span&gt; &lt;span class="ss"&gt;:id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:details&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:author&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;author&lt;/span&gt;
    &lt;span class="n"&gt;object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_author_lazily&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;author&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
      &lt;span class="c1"&gt;# Serialize the author after it has been loaded.     &lt;/span&gt;
      &lt;span class="no"&gt;ActiveModelSerializers&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;SerializableResource&lt;/span&gt;
                             &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;author&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                             &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;as_json&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:user&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Post&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_author_lazily&lt;/span&gt;
    &lt;span class="c1"&gt;# The current post object is added to the batch here,&lt;/span&gt;
    &lt;span class="c1"&gt;# which is eventually processed when the block executes.   &lt;/span&gt;
    &lt;span class="no"&gt;BatchLoader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;for&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;self&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;batch&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;batch_loader&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
      &lt;span class="n"&gt;author_ids&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pluck&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:author_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="no"&gt;User&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;where&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:_id&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;in&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;author_ids&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;each&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
        &lt;span class="n"&gt;modified_user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;block_given?&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="k"&gt;yield&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;
        &lt;span class="n"&gt;post&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;posts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detect&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nb"&gt;p&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="nb"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;author_id&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_s&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;  
        &lt;span class="c1"&gt;# 'Assign' the user object to the right post.&lt;/span&gt;
        &lt;span class="n"&gt;batch_loader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;post&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;modified_user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;      
      &lt;span class="k"&gt;end&lt;/span&gt;    
    &lt;span class="k"&gt;end&lt;/span&gt;  
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://gist.github.com/UsamaAshraf/95b0c8d0d64ee193148342a931c0a423"&gt;Here’s&lt;/a&gt; the entire code. Enjoy!&lt;/p&gt;


</description>
      <category>rails</category>
      <category>nplus1queries</category>
      <category>batchloading</category>
      <category>ams</category>
    </item>
  </channel>
</rss>
