<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Lucas Aguiar</title>
    <description>The latest articles on Forem by Lucas Aguiar (@lusqua).</description>
    <link>https://forem.com/lusqua</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1730864%2F12df57c3-3a10-43a3-a66b-71746341a576.png</url>
      <title>Forem: Lucas Aguiar</title>
      <link>https://forem.com/lusqua</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/lusqua"/>
    <language>en</language>
    <item>
      <title>Understanding MongoDB $lookup performance</title>
      <dc:creator>Lucas Aguiar</dc:creator>
      <pubDate>Fri, 05 Sep 2025 00:30:01 +0000</pubDate>
      <link>https://forem.com/lusqua/understanding-mongodb-lookup-performance-1l15</link>
      <guid>https://forem.com/lusqua/understanding-mongodb-lookup-performance-1l15</guid>
      <description>&lt;p&gt;After running into some issues with $lookup in our banking application, I decided to dive deeper into how it actually works and why it sometimes ends up being so slow.&lt;/p&gt;

&lt;h4&gt;
  
  
  The problem
&lt;/h4&gt;

&lt;p&gt;It was just a normal day. I was working on a new feature when my tech lead mentioned me on Slack, saying something like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Lucas, your query is too slow”&lt;br&gt;
(and then pasted the Explain Plan)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Right after that, he casually added: “&lt;em&gt;This query took 33 minutes to finish.&lt;/em&gt;”&lt;/p&gt;

&lt;p&gt;In my head, I was like: “&lt;em&gt;Wow… how did this slip through? The code has been there for three months already!&lt;/em&gt;”&lt;br&gt;
That’s when I decided to investigate the problem more deeply.&lt;/p&gt;

&lt;p&gt;Here’s the query he sent me:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"$match"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"simple start filtering"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"$lookup"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"from"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Company"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"let"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"company"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"$company"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"pipeline"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"$match"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"removedAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"$expr"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"$and"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                  &lt;/span&gt;&lt;span class="nl"&gt;"$eq"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="s2"&gt;"$_id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="s2"&gt;"$$company"&lt;/span&gt;&lt;span class="w"&gt;
                  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"as"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"company"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"$lookup"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"from"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Customer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"let"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"customer"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"$customer"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"pipeline"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"$match"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"removedAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"$expr"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"$and"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                  &lt;/span&gt;&lt;span class="nl"&gt;"$eq"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="s2"&gt;"$_id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="s2"&gt;"$$customer"&lt;/span&gt;&lt;span class="w"&gt;
                  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"as"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"customer"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"$match"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"more filtering"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Right after reading the query, my first thought was: “Why are we using $expr and pipelines inside the $lookup?”&lt;/p&gt;

&lt;p&gt;So the very first thing I did was paste this aggregation pipeline into MongoDB Compass to run an explain and try to visualize why the query was so slow.&lt;br&gt;
And to my surprise… I just got an infinite loading screen 😅&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87zk8r3l6frtb74i3u26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87zk8r3l6frtb74i3u26.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I promise, I waited a long time hoping to see some output. But in the end, all I got was a timeout error.&lt;/p&gt;

&lt;p&gt;At that point, I decided to break things down step by step, isolating each stage to figure out where the bottleneck was hiding.&lt;/p&gt;
&lt;h4&gt;
  
  
  Testing
&lt;/h4&gt;

&lt;p&gt;First, I ran an explain on just the $match. Everything worked perfectly: no problems at all. The query took only 10ms and used a simple company index.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"$match"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"removedAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"company"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;ObjectId(&lt;/span&gt;&lt;span class="s2"&gt;"some_ID"&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TYPE_FILTERING"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbbi71ws42nd8sc2izu9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbbi71ws42nd8sc2izu9.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So far, so good.&lt;/p&gt;
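
&lt;p&gt;For reference, an index shaped like the one that served this $match can be created like this. This is only a sketch: the base collection name &lt;code&gt;Transaction&lt;/code&gt; is hypothetical, and the field order is an assumption based on the filter above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// A compound index covering the whole filter of the first $match
db.getCollection("Transaction").createIndex({ company: 1, type: 1, removedAt: 1 })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;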

&lt;p&gt;Next, I added the first $lookup and here’s where things started to smell funny. The execution time jumped from 10ms to 314ms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20ziigbdydzvvbeyx0ke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F20ziigbdydzvvbeyx0ke.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before moving on, I decided to test both $lookups together, just to confirm where the problem was. And… boom! 6 minutes to return the result. So clearly, the issue had to be in the $lookups.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vlncp1zlhhjtkc2y6m1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vlncp1zlhhjtkc2y6m1.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Understanding the lookup
&lt;/h4&gt;

&lt;p&gt;Everything here is based on the &lt;a href="https://www.mongodb.com/docs/manual/reference/operator/aggregation/lookup/" rel="noopener noreferrer"&gt;official MongoDB documentation for $lookup&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To start, $lookup has the following syntax and fields:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
   $lookup:
     {
       from: &amp;lt;collection to join&amp;gt;,
       localField: &amp;lt;field from the input documents&amp;gt;,
       foreignField: &amp;lt;field from the documents of the "from" collection&amp;gt;,
       let: { &amp;lt;var_1&amp;gt;: &amp;lt;expression&amp;gt;, …, &amp;lt;var_n&amp;gt;: &amp;lt;expression&amp;gt; },
       pipeline: [ &amp;lt;pipeline to run&amp;gt; ],
       as: &amp;lt;output array field&amp;gt;
     }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At its most basic, the simplest way to use a $lookup is with &lt;strong&gt;localField&lt;/strong&gt; and &lt;strong&gt;foreignField&lt;/strong&gt;, but our query uses &lt;strong&gt;let&lt;/strong&gt; and &lt;strong&gt;pipeline&lt;/strong&gt; to find the results. &lt;br&gt;
But why? The application uses soft deletes to prevent important data from being lost by mistake, so every document stays stored but is marked with a &lt;em&gt;removedAt&lt;/em&gt; flag. But do we really need a pipeline to filter on it?&lt;/p&gt;
&lt;h4&gt;
  
  
  Understanding the problem of &lt;code&gt;$expr&lt;/code&gt;
&lt;/h4&gt;

&lt;p&gt;According to &lt;a href="https://www.mongodb.com/docs/manual/reference/operator/query/expr/" rel="noopener noreferrer"&gt;the official $expr documentation&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When &lt;code&gt;$expr&lt;/code&gt; appears in a &lt;code&gt;$match&lt;/code&gt; stage that is part of a &lt;code&gt;$lookup&lt;/code&gt; subpipeline, &lt;code&gt;$expr&lt;/code&gt; can refer to let variables defined by the &lt;code&gt;$lookup&lt;/code&gt; stage&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The problem here is that when MongoDB applies the filter, it has to evaluate the condition document by document inside the collection. As the collection grows, this operation gets slower and slower, since it can’t rely on indexes the same way a direct match does.&lt;/p&gt;

&lt;p&gt;In other words: &lt;code&gt;$expr&lt;/code&gt; inside a $lookup pipeline forces MongoDB to do more work, which quickly becomes painful at scale.&lt;/p&gt;
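
&lt;p&gt;If you want to measure this yourself, you can run an explain on each variant in mongosh. This is only a sketch: the base collection name &lt;code&gt;Transaction&lt;/code&gt; is hypothetical, and the stages mirror the example above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Explain the $expr-based subpipeline version of the $lookup
db.getCollection("Transaction").explain("executionStats").aggregate([
  { $match: { removedAt: null } },
  {
    $lookup: {
      from: "Company",
      let: { company: "$company" },
      pipeline: [
        { $match: { removedAt: null, $expr: { $eq: ["$_id", "$$company"] } } }
      ],
      as: "company"
    }
  }
])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Comparing &lt;code&gt;totalDocsExamined&lt;/code&gt; and the execution time of this version against the &lt;code&gt;localField&lt;/code&gt;/&lt;code&gt;foreignField&lt;/code&gt; variant makes the extra work visible.&lt;/p&gt;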

&lt;p&gt;The better approach is to use &lt;code&gt;localField&lt;/code&gt; and &lt;code&gt;foreignField&lt;/code&gt; whenever possible. That way, MongoDB can take advantage of indexes efficiently. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  {
    "$lookup": {
      "from": "Company",
      "localField": "company",
      "foreignField": "_id",
      "pipeline": [
        {
          "$match": {
            "removedAt": null,
          }
        }
      ],
      "as": "company"
    }
  },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here we go! 🎉&lt;br&gt;
Now the response time is totally acceptable:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7cfesw5pdd7noi8kewv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7cfesw5pdd7noi8kewv.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Improving the pipeline even more
&lt;/h4&gt;

&lt;p&gt;After some experiments, I discovered that whenever we use a pipeline inside a &lt;code&gt;$lookup&lt;/code&gt;, the query slows down a lot, sometimes becoming 5x to 10x slower.&lt;/p&gt;

&lt;p&gt;So, if your application doesn’t really need extra validations, I strongly recommend avoiding the &lt;code&gt;pipeline&lt;/code&gt;. Instead, stick to &lt;code&gt;localField&lt;/code&gt;/&lt;code&gt;foreignField&lt;/code&gt;: it is much simpler and far more efficient when you’re just joining on &lt;code&gt;_id&lt;/code&gt;s.&lt;/p&gt;

&lt;p&gt;In my case, by removing the unnecessary &lt;code&gt;pipeline&lt;/code&gt;, I improved the query from 259ms down to 36ms. Here’s the simplified version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  {
    "$match": "simple filtering"
  },
  {
    "$lookup": {
      "from": "Company",
      "localField": "company",
      "foreignField": "_id",
      "as": "company"
    }
  },
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;After all the tweaks, the final query ended up looking like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  {
    "$match": "simple filtering"
  },
  {
    "$lookup": {
      "from": "Company",
      "localField": "company",
      "foreignField": "_id",
      "as": "company",
    }
  },
  {
    "$lookup": {
      "from": "Customer",
      "localField": "customer",
      "foreignField": "_id",
      "as": "customer",
    }
  },
  {
    "$match": "more filtering"
  },
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpui0kzc1g2xlvwo8m92x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpui0kzc1g2xlvwo8m92x.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that’s it: a (not so short) story, but with some practical tips on how to improve $lookup performance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Prefer &lt;code&gt;localField&lt;/code&gt; / &lt;code&gt;foreignField&lt;/code&gt; whenever possible&lt;/li&gt;
&lt;li&gt;⚠️ &lt;code&gt;$expr&lt;/code&gt; can slow down queries a lot&lt;/li&gt;
&lt;li&gt;🛠️ For more complex lookups, combine &lt;code&gt;pipeline&lt;/code&gt; with &lt;code&gt;localField&lt;/code&gt;/&lt;code&gt;foreignField&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Understanding IMAP and SMTP Mail Protocols</title>
      <dc:creator>Lucas Aguiar</dc:creator>
      <pubDate>Sat, 26 Jul 2025 01:48:38 +0000</pubDate>
      <link>https://forem.com/woovi/understanding-imap-and-smtp-mail-protocols-1bch</link>
      <guid>https://forem.com/woovi/understanding-imap-and-smtp-mail-protocols-1bch</guid>
      <description>&lt;p&gt;During a recent implementation of a customer service platform, I encountered several configuration challenges related to email communication protocols. This experience prompted me to conduct thorough research into SMTP and IMAP protocols, their interactions, and best implementation practices. The knowledge gained proved invaluable for resolving our technical issues, and I'm sharing these insights to benefit others facing similar challenges in their systems architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  📮 Introduction
&lt;/h2&gt;

&lt;p&gt;In the digital world, sending and receiving emails are fundamental communication processes. During my recent experience, I realized that understanding how these protocols work behind the scenes can make all the difference, especially when configuring or troubleshooting customer service systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  🌐 Overview of Email Protocols
&lt;/h2&gt;

&lt;p&gt;Emails travel between computers using standardized protocols. The most common ones are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SMTP (Simple Mail Transfer Protocol)&lt;/strong&gt;: Used for sending messages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IMAP (Internet Message Access Protocol)&lt;/strong&gt;: Used for accessing and managing messages stored on the server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While POP3 also exists, this article focuses on SMTP and IMAP, which offer advantages for synchronization and email management across multiple devices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvs5286a2ee98i0571sq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvs5286a2ee98i0571sq.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📤 SMTP - Simple Mail Transfer Protocol
&lt;/h2&gt;

&lt;p&gt;SMTP is the "engine" that sends emails. Here's a summary of what I learned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Primary Function&lt;/strong&gt;: SMTP is responsible for sending messages from the sender to the server, and between servers, ensuring the email reaches its recipient.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How It Works&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;The process begins with the email client sending the message to the SMTP server&lt;/li&gt;
&lt;li&gt;Commands like EHLO/HELO, MAIL FROM, RCPT TO, and DATA structure this communication&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Ports and Security&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Common ports include 25 (traditional) and 587 (for authenticated sending)&lt;/li&gt;
&lt;li&gt;Using TLS or SSL encryption is highly recommended to protect data during transmission&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9xv1ytfz4rcg5k9ek7l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9xv1ytfz4rcg5k9ek7l.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;
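
&lt;p&gt;To make those commands concrete, here is what a minimal SMTP exchange looks like on the wire. Addresses and hostnames are placeholders; &lt;code&gt;C:&lt;/code&gt; marks the client, &lt;code&gt;S:&lt;/code&gt; the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;S: 220 mail.example.com ESMTP
C: EHLO client.example.com
S: 250 mail.example.com Hello
C: MAIL FROM:&amp;lt;alice@example.com&amp;gt;
S: 250 2.1.0 Ok
C: RCPT TO:&amp;lt;bob@example.net&amp;gt;
S: 250 2.1.5 Ok
C: DATA
S: 354 End data with &amp;lt;CR&amp;gt;&amp;lt;LF&amp;gt;.&amp;lt;CR&amp;gt;&amp;lt;LF&amp;gt;
C: Subject: Hello
C:
C: Hi Bob!
C: .
S: 250 2.0.0 Ok: queued
C: QUIT
S: 221 2.0.0 Bye
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;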

&lt;h2&gt;
  
  
  📥 IMAP - Internet Message Access Protocol
&lt;/h2&gt;

&lt;p&gt;While SMTP handles sending, IMAP is the "manager" of received emails:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Primary Function&lt;/strong&gt;: IMAP allows you to access and manage emails directly on the server. This means that regardless of which device you use—smartphone, tablet, or computer—your messages remain synchronized.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advantages over POP3&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Messages remain stored on the server, allowing viewing and organization from any device&lt;/li&gt;
&lt;li&gt;You can create, move, and delete folders directly on the server without downloading emails&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;How It Works&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;IMAP synchronizes message status (e.g., read/unread) and organizes emails into folders&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Ports and Security&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Port 143 is used for unencrypted connections, while port 993 is for secure connections via SSL/TLS&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83celu7utrwjfeyjxpm7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83celu7utrwjfeyjxpm7.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;
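
&lt;p&gt;Here is a sketch of a short IMAP session; credentials and message contents are placeholders. Note that each client command carries a tag such as &lt;code&gt;a1&lt;/code&gt;, which the server echoes back in its final reply:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;S: * OK IMAP server ready
C: a1 LOGIN user@example.com secret
S: a1 OK LOGIN completed
C: a2 SELECT INBOX
S: * 3 EXISTS
S: a2 OK [READ-WRITE] SELECT completed
C: a3 FETCH 1 (FLAGS BODY[HEADER.FIELDS (SUBJECT)])
S: * 1 FETCH (FLAGS (\Seen) BODY[HEADER.FIELDS (SUBJECT)] {16}
S: Subject: Hello
S: )
S: a3 OK FETCH completed
C: a4 LOGOUT
S: a4 OK LOGOUT completed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;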

&lt;h2&gt;
  
  
  ⚖️ Comparison Between SMTP and IMAP
&lt;/h2&gt;

&lt;p&gt;Although both protocols are part of email communication, they have distinct, complementary roles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SMTP&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Responsible for sending emails&lt;/li&gt;
&lt;li&gt;Functions as the "digital mail carrier" distributing messages between servers&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;IMAP&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Responsible for accessing and managing stored messages&lt;/li&gt;
&lt;li&gt;Allows you to track your emails in a synchronized manner across multiple devices&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5b2exe0nuam5cu3pnut.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5b2exe0nuam5cu3pnut.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚙️ Technical Aspects and Implementation
&lt;/h2&gt;

&lt;p&gt;During my experience, I found that visualizing the email flow helps better understand how these protocols interact:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sending and Receiving Flow&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;The email leaves the client and goes to the SMTP server&lt;/li&gt;
&lt;li&gt;From there, it may be forwarded to other servers until it reaches the recipient's server&lt;/li&gt;
&lt;li&gt;Finally, the email client uses IMAP to access and manage this message on the server&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Configuration and Authentication&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Correctly configuring ports and authentication methods (such as TLS/SSL) is essential to ensure everything works smoothly&lt;/li&gt;
&lt;li&gt;In many situations, the challenge lies precisely in aligning these configurations for efficient and secure customer service operation&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe94jtlo60lldh614oqaa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe94jtlo60lldh614oqaa.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;
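
&lt;p&gt;As a sketch of what this configuration looks like in application code, here is a minimal Node.js example using nodemailer. The host, credentials, and addresses are placeholders; port 587 with STARTTLS matches the authenticated-submission setup discussed above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const nodemailer = require("nodemailer");

// SMTP submission on port 587: plain connection upgraded via STARTTLS
const transporter = nodemailer.createTransport({
  host: "smtp.example.com",        // placeholder host
  port: 587,
  secure: false,                   // not implicit TLS; upgrade with STARTTLS
  requireTLS: true,                // refuse to send if the upgrade fails
  auth: { user: "user@example.com", pass: "app-password" },
});

transporter.sendMail({
  from: "user@example.com",
  to: "dest@example.net",
  subject: "Hello",
  text: "Sent over authenticated SMTP with STARTTLS",
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;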

&lt;h2&gt;
  
  
  🔒 Security and Best Practices
&lt;/h2&gt;

&lt;p&gt;To avoid headaches (like the one I faced), it's important to pay attention to these points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Common Threats&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Without proper security, data can be intercepted during transmission&lt;/li&gt;
&lt;li&gt;Configuration errors can create vulnerabilities for spam or even more serious attacks&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Security Measures&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Always use encryption (SSL/TLS) in communications&lt;/li&gt;
&lt;li&gt;Regularly verify and update server authentication configurations&lt;/li&gt;
&lt;li&gt;Follow recommended best practices for both administrators and end users&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  ✅ Conclusion
&lt;/h2&gt;

&lt;p&gt;Understanding SMTP and IMAP protocols was essential to solving the problem I encountered when configuring the customer service system. With SMTP handling sending and IMAP enabling access and message management, both complement each other to make email communication fluid and secure.&lt;/p&gt;

&lt;p&gt;I hope this article, written simply and based on my experience, helps you better understand how these protocols work. After all, sharing knowledge is a way to grow and make life easier for those who also need to deal with these configurations daily.&lt;/p&gt;




&lt;p&gt;If you have questions or want to explore any topic in greater depth, feel free to comment. We're all learning together!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Prometheus + Redis: Simple Guide to Monitor Redis Instances</title>
      <dc:creator>Lucas Aguiar</dc:creator>
      <pubDate>Mon, 21 Apr 2025 22:26:33 +0000</pubDate>
      <link>https://forem.com/woovi/prometheus-redis-simple-guide-to-monitor-redis-instances-n1f</link>
      <guid>https://forem.com/woovi/prometheus-redis-simple-guide-to-monitor-redis-instances-n1f</guid>
      <description>&lt;p&gt;Redis is often a critical part of modern infrastructure — whether used as a cache, message broker, or ephemeral store. Monitoring it properly helps you detect issues early and ensure system stability.&lt;/p&gt;

&lt;p&gt;In this guide, you’ll learn how to build a Redis monitoring dashboard with Prometheus and Grafana to quickly identify issues and improve system reliability.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚙️ Installing Prometheus (Quick Setup)
&lt;/h2&gt;

&lt;p&gt;We’ll start by deploying Prometheus using Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm install kube-prometheus prometheus-community/kube-prometheus-stack \
  --namespace prometheus --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;This will install Prometheus, Grafana, and some built-in exporters.&lt;br&gt;
To access Grafana:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/kube-prometheus-grafana -n prometheus 3000:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can access it here -&amp;gt; &lt;a href="http://localhost:3000/login" rel="noopener noreferrer"&gt;http://localhost:3000/login&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Default login: admin / prom-operator&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  🧱 Installing Redis
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

helm install redis bitnami/redis \
  --namespace app-stg \
  --set architecture=standalone \
  --set auth.enabled=false \
  --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;This installs a standalone Redis with no password, perfect for testing.&lt;br&gt;
After this, you can point the Redis Exporter and your application to the correct Redis service. The hostname will be:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;redis-master.app-stg.svc.cluster.local:6379
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  📦 Installing Redis Exporter
&lt;/h2&gt;

&lt;p&gt;Now let’s deploy the Redis Exporter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install redis-exporter prometheus-community/prometheus-redis-exporter \
  --namespace prometheus \
  --set serviceMonitor.enabled=true \
  --set serviceMonitor.namespace=prometheus \
  --set serviceMonitor.interval=15s \
  --set serviceMonitor.labels.release=kube-prometheus \
  --set redisAddress=redis://redis-master.app-stg.svc.cluster.local:6379 \
  --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Redis Exporter exposes metrics at a Prometheus-compatible /metrics endpoint. Port-forward the service to inspect it locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/redis-exporter-prometheus-redis-exporter -n prometheus 9121:9121
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then open: &lt;a href="http://localhost:9121/metrics" rel="noopener noreferrer"&gt;http://localhost:9121/metrics&lt;/a&gt;&lt;br&gt;
You should see a long list of metrics like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc6uk08hc0buxzlke9ans.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc6uk08hc0buxzlke9ans.png" alt="Metrics example print screen" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;✅ redis_up 1 means the exporter is connected to a Redis instance and it’s healthy.&lt;br&gt;
🚫 redis_up 0 means the exporter failed to connect — check the Redis address or service availability.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If everything is working, you’ll also see metrics like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;redis_connected_clients&lt;/li&gt;
&lt;li&gt;redis_memory_used_bytes&lt;/li&gt;
&lt;li&gt;redis_total_commands_processed&lt;/li&gt;
&lt;/ul&gt;
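&lt;p&gt;If you prefer a scripted check over eyeballing the page, you can parse the payload for redis_up. A minimal sketch in shell; the sample payload below is illustrative, not from a live instance (in practice you would feed it from curl -s http://localhost:9121/metrics):&lt;/p&gt;

```shell
# Sample exporter payload (illustrative values, not from a live instance).
# In practice, feed this from: curl -s http://localhost:9121/metrics
sample='redis_up 1
redis_connected_clients 3
redis_memory_used_bytes 1048576'

# Pull out the redis_up value and decide health from it.
up=$(printf '%s\n' "$sample" | awk '$1 == "redis_up" { print $2 }')

if [ "$up" = "1" ]; then
  echo "redis healthy"
else
  echo "redis down"
fi
```

The same one-liner works in a liveness probe or cron-based alert script.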
&lt;h2&gt;
  
  
  🔍 Checking in Prometheus
&lt;/h2&gt;

&lt;p&gt;To validate that Prometheus is collecting Redis metrics:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/kube-prometheus-kube-prome-prometheus -n prometheus 9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;a href="http://localhost:9090" rel="noopener noreferrer"&gt;http://localhost:9090&lt;/a&gt;, go to &lt;a href="http://localhost:9090/targets" rel="noopener noreferrer"&gt;Status &amp;gt; Targets&lt;/a&gt;, and check that Redis Exporter is listed and UP.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7s4xgevvwa2crwfv43b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7s4xgevvwa2crwfv43b.png" alt="Prometheus target debuging" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Everything is working right!&lt;/p&gt;

&lt;h2&gt;
  
  
  📊 Importing the Redis Dashboard on Grafana
&lt;/h2&gt;

&lt;p&gt;To visualize your Redis metrics in a clean and ready-to-use panel, you can import a community-maintained dashboard from Grafana Labs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://grafana.com/grafana/dashboards/14091-redis-dashboard-for-prometheus-redis-exporter-1-x/" rel="noopener noreferrer"&gt;https://grafana.com/grafana/dashboards/14091-redis-dashboard-for-prometheus-redis-exporter-1-x/&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  🧭 How to Import
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Access Grafana by establishing port forwarding and navigating to &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt; in your browser&lt;/li&gt;
&lt;li&gt;Navigate to the "+" icon in the left sidebar and select "Import"&lt;/li&gt;
&lt;li&gt;Enter the dashboard ID or URL in the provided field and click "Load"&lt;/li&gt;
&lt;li&gt;Click "Import" to finalize the installation&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Give it a few moments for the metrics to populate on your dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl46lkebqnsnph5s743dx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl46lkebqnsnph5s743dx.png" alt="Grafana redis dashboard" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it!&lt;br&gt;
⎈Happy Helming!⎈&lt;/p&gt;

</description>
      <category>redis</category>
      <category>prometheus</category>
      <category>grafana</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Exploring a MongoDB Cluster with Terraform</title>
      <dc:creator>Lucas Aguiar</dc:creator>
      <pubDate>Sun, 02 Feb 2025 02:12:41 +0000</pubDate>
      <link>https://forem.com/lusqua/explorando-um-cluster-mongodb-com-terraform-19fm</link>
      <guid>https://forem.com/lusqua/explorando-um-cluster-mongodb-com-terraform-19fm</guid>
      <description>&lt;p&gt;A pouco tempo após algumas conversas me interessei por aprender mais sobre terraform e como provisionar infraestrutura de maneira mais fácil, simples de replicar em outros lugares e manter a longo prazo.&lt;/p&gt;

&lt;p&gt;The idea here is to show how an Infrastructure as Code approach can simplify the creation of a MongoDB cluster made up of one primary node and two secondary nodes, using Terraform with Docker as the provider for local testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step by step
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Preparing the environment
&lt;/h3&gt;

&lt;p&gt;First, let's define Docker as the provider and create a network so the containers can communicate using internal DNS resolution, letting "mongo_primary", "mongo_secondary_1", and "mongo_secondary_2" resolve to the correct internal IPs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;docker&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kreuzwerker/docker"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 3.0.2"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"docker"&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"docker_network"&lt;/span&gt; &lt;span class="s2"&gt;"mongo_network"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"mongo_network"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Generating the internal authentication keyfile
&lt;/h3&gt;

&lt;p&gt;Before starting the containers, we need to generate a keyfile that will be used for internal authentication between the nodes of the MongoDB replica set. This file ensures that only authorized nodes can communicate with each other.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl rand &lt;span class="nt"&gt;-base64&lt;/span&gt; 756 &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; mongo-keyfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the permissions so that only the owner can read the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod &lt;/span&gt;400 mongo-keyfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure the mongo-keyfile is in the same folder as main.tf (or in a folder at the same level), since it is referenced in the container volumes.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Configuring the replica set
&lt;/h3&gt;

&lt;p&gt;We split the containers into two groups: one primary node and two secondaries. In a MongoDB replica set, the &lt;strong&gt;primary node&lt;/strong&gt; accepts write operations and propagates the changes to the &lt;strong&gt;secondaries&lt;/strong&gt;, which keep a copy of the data and can be used to balance read load.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// init-replica.js&lt;/span&gt;

&lt;span class="nx"&gt;conn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Mongo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mongodb://admin:adminpass@localhost:27017&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getDB&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;admin&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;rs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ℹ️ Replica Set já configurado.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;codeName&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;NotYetInitialized&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;🚀 Iniciando Replica Set...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nx"&gt;rs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;initiate&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;rs0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;members&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mongo_primary:27017&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mongo_secondary_1:27017&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mongo_secondary_2:27017&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;✅ Replica Set inicializado com sucesso!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;❌ Erro ao verificar status do Replica Set:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Launching the MongoDB containers
&lt;/h3&gt;

&lt;p&gt;In Terraform, the configuration of the MongoDB containers is similar for every node; however, only the primary node exposes its ports and runs the cluster configuration script.&lt;/p&gt;

&lt;p&gt;We mount two volumes: the authentication keyfile and the replica set configuration script written in JavaScript.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"docker_container"&lt;/span&gt; &lt;span class="s2"&gt;"mongo_primary"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"mongo_primary"&lt;/span&gt;
  &lt;span class="nx"&gt;image&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"mongo:7.0"&lt;/span&gt;
  &lt;span class="nx"&gt;restart&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"always"&lt;/span&gt;

  &lt;span class="nx"&gt;networks_advanced&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;docker_network&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mongo_network&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;ports&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;internal&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;27017&lt;/span&gt;
    &lt;span class="nx"&gt;external&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;27017&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;volumes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;host_path&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;abspath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/mongo-keyfile"&lt;/span&gt;
    &lt;span class="nx"&gt;container_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/data/keyfile"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;volumes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;host_path&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;abspath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/init-replica.js"&lt;/span&gt;
    &lt;span class="nx"&gt;container_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/docker-entrypoint-initdb.d/init-replica.js"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;env&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"MONGO_INITDB_ROOT_USERNAME=admin"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"MONGO_INITDB_ROOT_PASSWORD=adminpass"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"MONGO_REPLICA_SET_NAME=rs0"&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"mongod"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"--replSet"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"rs0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"--keyFile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"/data/keyfile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"--bind_ip_all"&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="k"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"local-exec"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"sleep 10 &amp;amp;&amp;amp; docker exec mongo_primary mongosh /docker-entrypoint-initdb.d/init-replica.js"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"docker_container"&lt;/span&gt; &lt;span class="s2"&gt;"mongo_secondary_1"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"mongo_secondary_1"&lt;/span&gt;
  &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"mongo:7.0"&lt;/span&gt;
  &lt;span class="nx"&gt;restart&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"always"&lt;/span&gt;

  &lt;span class="nx"&gt;networks_advanced&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;docker_network&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mongo_network&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;volumes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;host_path&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;abspath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/mongo-keyfile"&lt;/span&gt;
    &lt;span class="nx"&gt;container_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/data/keyfile"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;env&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"MONGO_INITDB_ROOT_USERNAME=admin"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"MONGO_INITDB_ROOT_PASSWORD=adminpass"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"MONGO_REPLICA_SET_NAME=rs0"&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"mongod"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"--replSet"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"rs0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"--keyFile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"/data/keyfile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"--bind_ip_all"&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"docker_container"&lt;/span&gt; &lt;span class="s2"&gt;"mongo_secondary_2"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"mongo_secondary_2"&lt;/span&gt;
  &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"mongo:7.0"&lt;/span&gt;
  &lt;span class="nx"&gt;restart&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"always"&lt;/span&gt;

  &lt;span class="nx"&gt;networks_advanced&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;docker_network&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mongo_network&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;volumes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;host_path&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;abspath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/mongo-keyfile"&lt;/span&gt;
    &lt;span class="nx"&gt;container_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/data/keyfile"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;env&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"MONGO_INITDB_ROOT_USERNAME=admin"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"MONGO_INITDB_ROOT_PASSWORD=adminpass"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"MONGO_REPLICA_SET_NAME=rs0"&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"mongod"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"--replSet"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"rs0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"--keyFile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"/data/keyfile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"--bind_ip_all"&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Integrating with applications
&lt;/h3&gt;

&lt;p&gt;Once the cluster is configured, you can connect to it with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; mongo_primary mongosh &lt;span class="s2"&gt;"mongodb://admin:adminpass@localhost:27017/?replicaSet=rs0"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For an application to reach the cluster, it must be attached to the "mongo_network" network so that internal DNS resolution works correctly.&lt;/p&gt;
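&lt;p&gt;For example, attaching an application container to the network is a one-liner (the container name "app" and image "my-app-image" below are hypothetical placeholders):&lt;/p&gt;

```shell
# Attach an existing application container (hypothetical name "app") to the
# cluster's network so it can resolve mongo_primary, mongo_secondary_1, and
# mongo_secondary_2 by name:
docker network connect mongo_network app

# Or start it on the network directly (image name is a placeholder):
docker run -d --name app --network mongo_network \
  -e MONGO_URL="mongodb://admin:adminpass@mongo_primary:27017,mongo_secondary_1:27017,mongo_secondary_2:27017/?replicaSet=rs0" \
  my-app-image
```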

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This example shows how Terraform can be a powerful tool for creating consistent, easily replicated environments. Keep in mind that this setup is not recommended for production: each node should run on a separate machine to guarantee high availability.&lt;/p&gt;

&lt;p&gt;If anything was unclear, you can follow the code on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/lusqua/terraform-mongodb-cluster" rel="noopener noreferrer"&gt;https://github.com/lusqua/terraform-mongodb-cluster&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Demystifying AWS VPC: Subnets, Gateways, and Network Configuration</title>
      <dc:creator>Lucas Aguiar</dc:creator>
      <pubDate>Thu, 31 Oct 2024 10:30:00 +0000</pubDate>
      <link>https://forem.com/lusqua/desvendando-a-vpc-na-aws-subnets-gateways-e-configuracao-de-rede-4lb6</link>
      <guid>https://forem.com/lusqua/desvendando-a-vpc-na-aws-subnets-gateways-e-configuracao-de-rede-4lb6</guid>
      <description>&lt;p&gt;Ontem à noite, após o expediente, decidi resolver alguns problemas técnicos em um side project na AWS. O desafio? Criar uma infraestrutura robusta, com subnets públicas e privadas, garantindo que apenas os recursos certos tivessem acesso à internet. A ideia era entender de uma vez por todas como cada peça se encaixa: Internet Gateway, NAT Gateway e os detalhes de roteamento.&lt;/p&gt;

&lt;p&gt;What seemed simple turned out to be more complex. While configuring my Auto Scaling Group, I noticed the EC2 instances could not reach the internet, even when placed in public subnets. That sent me deep into routing configurations, public IP assignment, and the finer points of AWS networking.&lt;/p&gt;

&lt;p&gt;Here are the main lessons and concepts I managed to extract from all that suffering:&lt;/p&gt;

&lt;h3&gt;
  
  
  Public vs. Private Subnets
&lt;/h3&gt;

&lt;p&gt;Put simply, a public subnet gives instances direct internet access through an Internet Gateway (IGW), which is essential for web servers or external APIs. My application needed extra security, though, so I set up a private subnet for internal resources such as databases. Private subnets have no direct internet access, but with a NAT Gateway I could still let those resources pull updates or communicate externally when needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2nvj70oq9pudhxzq9bj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2nvj70oq9pudhxzq9bj.png" alt="Um diagrama retangular com uma subnet pública à esquerda, conectada diretamente à internet via Internet Gateway. À direita, a subnet privada se conecta indiretamente à internet através de um NAT Gateway, exibindo recursos internos protegidos, como bancos de dados." width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring the Internet Gateway and the NAT Gateway
&lt;/h3&gt;

&lt;p&gt;For the public subnets, I set up an Internet Gateway and added a route pointing 0.0.0.0/0 to it, which ensured the instances could send and receive internet traffic. For the private subnets, I placed a NAT Gateway in the public subnet, allowing outbound traffic to be routed indirectly. This avoided direct exposure to the internet while still letting the private instances reach external services for updates.&lt;/p&gt;
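&lt;p&gt;In CloudFormation terms, the two routes I described look roughly like this (logical names such as &lt;code&gt;PublicRouteTable&lt;/code&gt; and &lt;code&gt;NatGateway&lt;/code&gt; are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Public route table: 0.0.0.0/0 -&gt; Internet Gateway
PublicRoute:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref PublicRouteTable
    DestinationCidrBlock: 0.0.0.0/0
    GatewayId: !Ref InternetGateway

# Private route table: 0.0.0.0/0 -&gt; NAT Gateway living in the public subnet
PrivateRoute:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref PrivateRouteTable
    DestinationCidrBlock: 0.0.0.0/0
    NatGatewayId: !Ref NatGateway
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;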

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd910xf58v6mvbp66pm9s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd910xf58v6mvbp66pm9s.png" alt="A imagem mostra uma subnet pública conectada a um Internet Gateway para tráfego direto de internet, enquanto uma subnet privada se conecta indiretamente via NAT Gateway, que reside na subnet pública." width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges with Auto Scaling and Task Distribution
&lt;/h3&gt;

&lt;p&gt;One of the trickiest problems was getting ECS tasks distributed evenly across the EC2 instances. Initially, my tasks were not spreading out as expected, so I experimented with different Task Placement strategies, such as “AZ balanced binpack”, which helped optimize instance utilization and guarantee redundancy across availability zones.&lt;/p&gt;
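&lt;p&gt;For reference, the “AZ balanced binpack” behavior corresponds to combining a spread strategy over availability zones with a binpack strategy, roughly like this JSON fragment in an ECS service definition (shown with the standard field values; adapt it to your own setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"placementStrategy": [
  { "type": "spread", "field": "attribute:ecs.availability-zone" },
  { "type": "binpack", "field": "memory" }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;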

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0iujc8t25266arxo4fns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0iujc8t25266arxo4fns.png" alt="A diagram of several EC2 instances spread across availability zones (AZs) with ECS tasks distributed evenly among them, using the “AZ balanced binpack” strategy for resource optimization and redundancy." width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After a whole night, I finally had a stable architecture, with resources well distributed and properly secured. This side project was an eye-opening experience, giving me not just a grasp of subnets and gateways on AWS, but also a lesson in resilience and attention to detail.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Setting up public and private subnets, an Internet Gateway, and a NAT Gateway can be tricky, but it is essential for secure, scalable architectures. If you are facing the same challenges, I hope this article gives you a practical, encouraging view of what it takes to push through, even if that means a long night of work.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>learning</category>
      <category>network</category>
    </item>
    <item>
      <title>Automatizando deployments com WatchTower e GitHub Container Registry</title>
      <dc:creator>Lucas Aguiar</dc:creator>
      <pubDate>Tue, 08 Oct 2024 17:05:55 +0000</pubDate>
      <link>https://forem.com/lusqua/automatizando-atualizacoes-de-containers-com-watchtower-e-github-container-registry-1l7c</link>
      <guid>https://forem.com/lusqua/automatizando-atualizacoes-de-containers-com-watchtower-e-github-container-registry-1l7c</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2jjiblog4farzqybpa6.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2jjiblog4farzqybpa6.jpeg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keeping applications up to date is crucial for security, performance, and shipping new features. In the Docker world, WatchTower is a powerful tool for automating that process. In this article, we will explore how to integrate WatchTower with the GitHub Container Registry (GHCR), creating a continuous update flow for your containers. We will cover everything from the initial setup to a practical example illustrating the whole process.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 Introduction to WatchTower and GHCR
&lt;/h2&gt;

&lt;p&gt;What is WatchTower?&lt;/p&gt;

&lt;p&gt;WatchTower is an open-source tool that monitors your Docker containers and automatically updates them whenever a new image is detected in the configured registry. This removes the need for manual intervention to keep your applications running the latest versions.&lt;/p&gt;

&lt;p&gt;What is the GitHub Container Registry (GHCR)?&lt;/p&gt;

&lt;p&gt;The GitHub Container Registry is a container registry service integrated with GitHub, letting you store and manage your Docker images directly alongside your GitHub repositories. It offers seamless integration with CI/CD workflows, making it easy to publish and manage images.&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠️ Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we start, make sure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker installed on your machine or server.&lt;/li&gt;
&lt;li&gt;A GitHub account with access to the GitHub Container Registry.&lt;/li&gt;
&lt;li&gt;A GitHub repository to host your application and workflow.&lt;/li&gt;
&lt;li&gt;Basic knowledge of Docker and GitHub Actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🔧 Step by Step: Integrating WatchTower with GHCR
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Setting Up the GitHub Container Registry&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;First, let's configure GHCR to store our Docker images.&lt;/p&gt;

&lt;p&gt;a. Authenticating with GHCR&lt;/p&gt;

&lt;p&gt;To authenticate Docker with GHCR, you need a personal access token with the &lt;code&gt;write:packages&lt;/code&gt; and &lt;code&gt;read:packages&lt;/code&gt; scopes.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;echo $CR_PAT | docker login ghcr.io -u USERNAME --password-stdin&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here, CR_PAT is an environment variable holding your personal access token; replace USERNAME with your GitHub username.&lt;/p&gt;

&lt;p&gt;b. Creating a Docker Image&lt;/p&gt;

&lt;p&gt;Let's create a simple Docker image for this example. Create a Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Dockerfile
FROM node:14-alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And a basic package.json file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "watchtower-example",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also create a simple index.js:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express');
const app = express();
const port = 3000;

app.get('/', (req, res) =&amp;gt; {
  res.send('Hello from WatchTower and GHCR!');
});

app.listen(port, () =&amp;gt; {
  console.log(`App running on port ${port}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;c. Building and Publishing the Image&lt;/p&gt;

&lt;p&gt;Build the Docker image and push it to GHCR.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build -t ghcr.io/USERNAME/watchtower-example:latest .&lt;/code&gt;&lt;br&gt;
&lt;code&gt;docker push ghcr.io/USERNAME/watchtower-example:latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Replace &lt;code&gt;USERNAME&lt;/code&gt; with your GitHub &lt;strong&gt;username&lt;/strong&gt;.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Setting Up WatchTower&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now let's configure WatchTower to monitor GHCR and automatically update our containers.&lt;/p&gt;

&lt;p&gt;a. Creating a WatchTower Configuration File&lt;/p&gt;

&lt;p&gt;Create a &lt;strong&gt;docker-compose.yml&lt;/strong&gt; file to orchestrate WatchTower and your application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3'

services:
  app:
    image: ghcr.io/USERNAME/watchtower-example:latest
    container_name: watchtower-example
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production

  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_POLL_INTERVAL=300
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Explanation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;app: the application service that WatchTower will monitor.&lt;/li&gt;
&lt;li&gt;watchtower: the WatchTower service, configured to poll the registry every 5 minutes (300 seconds).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;b. Environment Variables for Authentication&lt;/p&gt;

&lt;p&gt;For WatchTower to pull from a private GHCR repository, we need to provide credentials. Update the watchtower service in docker-compose.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;environment:
  - WATCHTOWER_CLEANUP=true
  - WATCHTOWER_POLL_INTERVAL=300
  - WATCHTOWER_REGISTRY_AUTH=true
  - WATCHTOWER_REGISTRY_USERNAME=USERNAME
  - WATCHTOWER_REGISTRY_PASSWORD=CR_PAT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;strong&gt;USERNAME&lt;/strong&gt; with &lt;strong&gt;your GitHub username&lt;/strong&gt; and &lt;strong&gt;CR_PAT&lt;/strong&gt; with &lt;strong&gt;your personal access token&lt;/strong&gt;. Alternatively, you can mount your Docker credentials file (for example &lt;code&gt;~/.docker/config.json&lt;/code&gt;) into the WatchTower container and reuse the login from &lt;code&gt;docker login&lt;/code&gt;.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Automating Updates with GitHub Actions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To make sure the Docker image is rebuilt and published to GHCR every time you push to the repository, let's set up a GitHub Actions workflow.&lt;/p&gt;

&lt;p&gt;Create a .github/workflows/docker-publish.yml file in your repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Publish Docker image

on:
  push:
    branches: [ main ]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ghcr.io/USERNAME/watchtower-example:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Explanation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;actions/checkout@v3: checks out the code.&lt;/li&gt;
&lt;li&gt;docker/setup-buildx-action@v3: sets up Docker Buildx.&lt;/li&gt;
&lt;li&gt;docker/login-action@v3: authenticates to GHCR using the GitHub token.&lt;/li&gt;
&lt;li&gt;docker/build-push-action@v4: builds the image and pushes it to GHCR.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Replace &lt;strong&gt;USERNAME&lt;/strong&gt; with your GitHub &lt;strong&gt;username&lt;/strong&gt;.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Deploying and Testing&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;a. Initial Deployment&lt;/p&gt;

&lt;p&gt;Start the services using Docker Compose:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose up -d&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will &lt;strong&gt;start your application&lt;/strong&gt; and WatchTower. WatchTower will then &lt;strong&gt;poll GHCR every 5 minutes&lt;/strong&gt; (300 seconds) to check whether a new image is available.&lt;/p&gt;

&lt;p&gt;b. Updating the Application&lt;/p&gt;

&lt;p&gt;Make a change to your application code, for example, update the message in index.js:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;res.send('Hello from WatchTower, GHCR, and your updated app!');&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Commit and push the changes to your GitHub repository:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git add .&lt;/code&gt;&lt;br&gt;
&lt;code&gt;git commit -m "Update welcome message"&lt;/code&gt;&lt;br&gt;
&lt;code&gt;git push origin main&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;GitHub Actions will build and publish the new image to GHCR. Within up to 5 minutes, WatchTower will detect the new image and automatically recreate the watchtower-example container with it (expect a brief restart while the old container is replaced). You can follow the process with &lt;code&gt;docker logs -f watchtower&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feoaybq1vktiu6jxa9j8r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feoaybq1vktiu6jxa9j8r.jpg" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🎉 Conclusion
&lt;/h2&gt;

&lt;p&gt;Integrating WatchTower with the GitHub Container Registry gives you an efficient way to keep your Docker applications automatically up to date. This automation not only saves time but also helps ensure your applications are always running the latest, most secure versions. Combined with tools like Traefik, you can build a robust, scalable infrastructure ready for the demands of modern production environments.&lt;/p&gt;

&lt;p&gt;Try implementing this integration in your own project and enjoy the benefits of continuous automation!&lt;/p&gt;

&lt;h2&gt;
  
  
  📚 Additional Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/containrrr/watchtower" rel="noopener noreferrer"&gt;WatchTower GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry" rel="noopener noreferrer"&gt;GitHub Container Registry Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;Docker Compose Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/actions" rel="noopener noreferrer"&gt;GitHub Actions Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>watchtower</category>
      <category>github</category>
    </item>
    <item>
      <title>Implementing DataLoader and Understanding Its Advantages Over Lookup</title>
      <dc:creator>Lucas Aguiar</dc:creator>
      <pubDate>Thu, 15 Aug 2024 00:48:20 +0000</pubDate>
      <link>https://forem.com/lusqua/implementing-dataloader-and-understanding-its-advantages-over-lookup-1phb</link>
      <guid>https://forem.com/lusqua/implementing-dataloader-and-understanding-its-advantages-over-lookup-1phb</guid>
      <description>&lt;p&gt;If you’ve been working with databases, especially in a GraphQL environment, you might have encountered the "N+1 problem." This issue occurs when your system makes multiple database calls to fetch related data, which can significantly slow down your application. Enter &lt;strong&gt;DataLoader&lt;/strong&gt;—a tool that’s here to save the day by batching and caching database requests, making your queries much more efficient.&lt;/p&gt;

&lt;p&gt;Let’s dive into how DataLoader works and see it in action with a practical example involving bank transactions and user details.&lt;/p&gt;

&lt;h4&gt;
  
  
  What is DataLoader?
&lt;/h4&gt;

&lt;p&gt;DataLoader is a utility designed to batch and cache requests for data. Instead of making multiple calls to fetch each piece of data separately, DataLoader groups these requests together, sends them in one go, and then caches the results for future use. This approach is particularly useful in reducing the number of database queries, thus speeding up your application.&lt;/p&gt;

&lt;h4&gt;
  
  
  How DataLoader Solves the N+1 Problem
&lt;/h4&gt;

&lt;p&gt;Consider a scenario where you have a list of banking transactions, and each transaction is associated with a user. Without DataLoader, you might end up querying the database for each user separately, leading to multiple (and often redundant) database queries. This is where DataLoader shines—it batches these requests together, so the database is queried only once per user.&lt;/p&gt;
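&lt;p&gt;To make the batching behavior concrete, here is a minimal, self-contained sketch of the idea. This is an illustrative toy, not the real &lt;code&gt;dataloader&lt;/code&gt; package: &lt;code&gt;TinyLoader&lt;/code&gt;, &lt;code&gt;fetchUsers&lt;/code&gt;, and the call counter are all made up for the example.&lt;/p&gt;

```javascript
// Toy illustration of DataLoader-style batching (not the real 'dataloader' package).
// load() queues each key and schedules a single batch call on the microtask queue,
// so N lookups issued in the same tick become one batched fetch.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn;
    this.queue = [];
  }

  load(key) {
    return new Promise((resolve) => {
      if (this.queue.length === 0) {
        // First key of this tick: run the batch once the sync code finishes
        queueMicrotask(() => this.dispatch());
      }
      this.queue.push({ key, resolve });
    });
  }

  async dispatch() {
    const batch = this.queue;
    this.queue = [];
    const results = await this.batchFn(batch.map((item) => item.key));
    batch.forEach((item, i) => item.resolve(results[i]));
  }
}

// Fake "users table" lookup that counts how often it is called
let dbCalls = 0;
const fetchUsers = async (ids) => {
  dbCalls += 1;
  return ids.map((id) => ({ id, name: `user-${id}` }));
};

const loader = new TinyLoader(fetchUsers);

// Three loads in the same tick resolve from a single batched fetch
Promise.all([loader.load(1), loader.load(2), loader.load(3)]).then((users) => {
  console.log(dbCalls); // 1 — one query instead of three
  console.log(users.map((u) => u.name)); // ['user-1', 'user-2', 'user-3']
});
```

&lt;p&gt;The real library adds per-key caching and error handling on top of this, but the core trick is the same: collect every key requested during one tick, then issue a single batched query.&lt;/p&gt;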

&lt;h4&gt;
  
  
  Implementing DataLoader: A Practical Example
&lt;/h4&gt;

&lt;p&gt;Let’s imagine you’re building a banking application, and you need to list transactions along with the details of the users who made those transactions. Here’s how you can implement DataLoader to optimize this process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Setting Up the DataLoader:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;DataLoader&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dataloader&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ObjectId&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mongodb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getCollection&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./mongodbConfig&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;// Your MongoDB configuration file&lt;/span&gt;

   &lt;span class="c1"&gt;// Batch function to fetch users by IDs&lt;/span&gt;
   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;usersInBatch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ids&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;coll&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getCollection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;objIdIds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ids&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ObjectId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;found&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;coll&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;$in&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;objIdIds&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;toArray&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
     &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;found&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;};&lt;/span&gt;

   &lt;span class="c1"&gt;// Create a DataLoader instance for users&lt;/span&gt;
   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userLoader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;DataLoader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;usersInBatch&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, &lt;code&gt;usersInBatch&lt;/code&gt; is the batch loading function that fetches user details for many &lt;code&gt;userId&lt;/code&gt;s at once, and &lt;code&gt;userLoader&lt;/code&gt; is the DataLoader instance built on top of it. Note DataLoader's contract: the batch function must return exactly one result per key, in the same order as the keys it received.&lt;/p&gt;
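&lt;p&gt;One detail that is easy to miss: a MongoDB &lt;code&gt;$in&lt;/code&gt; query returns documents in no guaranteed order, while DataLoader requires the batch function to return one result per key, in key order. Here is a small, self-contained sketch of re-aligning results (plain objects stand in for MongoDB documents; &lt;code&gt;alignToKeys&lt;/code&gt; is an illustrative helper, not part of any library):&lt;/p&gt;

```javascript
// The DataLoader batch-function contract: one result per key, in key order.
// $in queries give no ordering guarantee, so re-map results to the input ids.
const alignToKeys = (ids, found) => {
  const byId = new Map(found.map((doc) => [String(doc._id), doc]));
  // Missing ids map to null so the output stays aligned with the input
  return ids.map((id) => byId.get(String(id)) ?? null);
};

// Simulated query result that came back in a different order than requested
const found = [
  { _id: 'b', name: 'Bia' },
  { _id: 'a', name: 'Ana' },
];

console.log(alignToKeys(['a', 'b', 'c'], found));
// [{ _id: 'a', name: 'Ana' }, { _id: 'b', name: 'Bia' }, null]
```

&lt;p&gt;Without this re-mapping, users can silently end up attached to the wrong transactions whenever the database returns documents out of order.&lt;/p&gt;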

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Using DataLoader in Your Application:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;   &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;userLoader&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./dataLoaderConfig&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;// Your DataLoader configuration file&lt;/span&gt;

   &lt;span class="c1"&gt;// Function to get transaction details&lt;/span&gt;
   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getTransactionDetails&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;transaction&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;userLoader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;transaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
     &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;transaction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="na"&gt;userName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="na"&gt;userEmail&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="p"&gt;};&lt;/span&gt;
   &lt;span class="p"&gt;};&lt;/span&gt;

   &lt;span class="c1"&gt;// Function to list transactions with user details&lt;/span&gt;
   &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;listTransactions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;limit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;coll&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getCollection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;transactions&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;skip&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
     &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rawTransactions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;coll&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;skip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;skip&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toArray&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

     &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rawTransactions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;getTransactionDetails&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
   &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;getTransactionDetails&lt;/code&gt; function uses &lt;code&gt;userLoader&lt;/code&gt; to fetch user details by &lt;code&gt;userId&lt;/code&gt;, ensuring that each user is only fetched once per batch.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;listTransactions&lt;/code&gt; retrieves a page of transactions and maps over them to enrich each transaction with the relevant user details.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why Use DataLoader?
&lt;/h4&gt;

&lt;p&gt;By using DataLoader, you reduce the number of database queries from potentially hundreds (or more) to just a few. This not only optimizes performance but also reduces load on your database, which can be crucial in high-traffic applications. Moreover, DataLoader’s built-in caching ensures that once a user’s data has been fetched, it isn’t fetched again within the same request cycle. One caveat: because the cache lives as long as the loader instance, it is common practice to create a new DataLoader per request rather than sharing a long-lived, module-level instance, so stale data doesn’t leak across requests.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;DataLoader is an invaluable tool when dealing with scenarios that involve repeated database lookups, particularly in applications with complex data relationships like the one we explored here. By batching and caching requests, it dramatically improves the efficiency and performance of your database interactions. Whether you’re dealing with banking transactions, product orders, or any other domain where related data is frequently accessed, DataLoader can help you keep your application running smoothly and efficiently.&lt;/p&gt;

&lt;p&gt;If you're facing performance issues due to repeated database queries, give DataLoader a try—your application (and your users) will thank you!&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
