<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Tiago Rosa da costa</title>
    <description>The latest articles on Forem by Tiago Rosa da costa (@tiago123456789).</description>
    <link>https://forem.com/tiago123456789</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F493541%2F3aa845fc-38cf-4e4b-818b-e39472c3dbf5.jpeg</url>
      <title>Forem: Tiago Rosa da costa</title>
      <link>https://forem.com/tiago123456789</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tiago123456789"/>
    <language>en</language>
    <item>
      <title>Holding the load - Avoiding duplicated Webhook requests</title>
      <dc:creator>Tiago Rosa da costa</dc:creator>
      <pubDate>Wed, 21 Jan 2026 02:54:45 +0000</pubDate>
      <link>https://forem.com/tiago123456789/holding-the-load-avoiding-duplicated-webhook-requests-2hl7</link>
      <guid>https://forem.com/tiago123456789/holding-the-load-avoiding-duplicated-webhook-requests-2hl7</guid>
      <description>&lt;h2&gt;
  
  
  What is the motivation?
&lt;/h2&gt;

&lt;p&gt;Imagine a scenario where you have a webhook integration and the application responsible for notifying your application sends the same webhook request more than once.&lt;/p&gt;

&lt;p&gt;Your application uses the webhook data to create a subscription, so in this scenario it would create 2 subscriptions and charge the user twice instead of once, leaving the user understandably angry.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to resolve it?
&lt;/h2&gt;

&lt;p&gt;The short answer is an &lt;strong&gt;idempotency key&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The applied solution is to convert the data sent in the webhook request to an MD5 hash and use that hash as the id stored in the database. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The webhook request data: { “message”: “test”}

Md5 hash: 42cc32636e077687972862938d538929
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So every time the same webhook request data is sent, it generates the same &lt;strong&gt;id&lt;/strong&gt;. Because the &lt;strong&gt;id&lt;/strong&gt; column is the PRIMARY KEY, and a PRIMARY KEY constraint requires every value in the column to be unique, the first webhook request is saved, but any later request with the same data is not stored because the id already exists.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In summary: converting the webhook request data to an MD5 hash and using it as the &lt;strong&gt;id&lt;/strong&gt;, combined with a PRIMARY KEY constraint on that column to guarantee only unique values, prevents duplicated webhook requests from being processed.&lt;/p&gt;

&lt;p&gt;Project link: &lt;a href="https://github.com/tiago123456789/holding-the-load" rel="noopener noreferrer"&gt;https://github.com/tiago123456789/holding-the-load&lt;/a&gt;&lt;/p&gt;

</description>
      <category>api</category>
      <category>architecture</category>
      <category>backend</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Holding the Load: Handling Webhook Traffic Spikes Without Scaling Your Cheap VPS</title>
      <dc:creator>Tiago Rosa da costa</dc:creator>
      <pubDate>Tue, 13 Jan 2026 11:44:06 +0000</pubDate>
      <link>https://forem.com/tiago123456789/holding-the-load-handling-webhook-traffic-spikes-without-scaling-your-cheap-vps-2hi4</link>
      <guid>https://forem.com/tiago123456789/holding-the-load-handling-webhook-traffic-spikes-without-scaling-your-cheap-vps-2hi4</guid>
      <description>&lt;p&gt;Webhook-based architectures are everywhere.&lt;/p&gt;

&lt;p&gt;From payment providers to automation platforms and SaaS integrations, webhooks are often the primary way systems communicate asynchronously. They work well until traffic spikes hit a self-hosted environment.&lt;/p&gt;

&lt;p&gt;This post explains &lt;strong&gt;Holding the Load&lt;/strong&gt;, a project I built to solve a very specific but common problem:&lt;br&gt;
&lt;strong&gt;how to absorb webhook spikes without vertically scaling a VPS&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Problem: Webhooks Are Bursty by Nature
&lt;/h2&gt;

&lt;p&gt;If you self-host applications or automation tools, you’ve probably seen this pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A VPS handles normal traffic just fine&lt;/li&gt;
&lt;li&gt;Webhooks arrive in short bursts&lt;/li&gt;
&lt;li&gt;A spike happens (campaigns, batch events, retries, provider issues)&lt;/li&gt;
&lt;li&gt;CPU and memory usage explode&lt;/li&gt;
&lt;li&gt;Requests fail or time out&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most webhook providers don’t care about your infrastructure limits. They will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Retry aggressively&lt;/li&gt;
&lt;li&gt;Send large volumes in a short time window&lt;/li&gt;
&lt;li&gt;Assume you can handle it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The usual response is to &lt;strong&gt;scale the VPS&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More CPU&lt;/li&gt;
&lt;li&gt;More memory&lt;/li&gt;
&lt;li&gt;Higher monthly cost&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But here’s the issue:&lt;br&gt;
That extra capacity is often needed &lt;strong&gt;only for minutes or hours&lt;/strong&gt;, not 24/7.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Core Idea Behind Holding the Load
&lt;/h2&gt;

&lt;p&gt;Holding the Load introduces a &lt;strong&gt;decoupling layer&lt;/strong&gt; between webhook ingestion and processing.&lt;/p&gt;

&lt;p&gt;Instead of letting webhooks hit your VPS directly, you place Holding the Load &lt;strong&gt;in front of it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;At a high level:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Webhook Provider
       |
       v
Holding the Load (buffer + control)
       |
       v
Your VPS (consumer)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This separation is the key to stability.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Holding the Load?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Holding the Load&lt;/strong&gt; is a lightweight application designed to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Receive high volumes of webhook requests&lt;/li&gt;
&lt;li&gt;Store them temporarily&lt;/li&gt;
&lt;li&gt;Expose a controlled consumption mechanism for downstream services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your VPS no longer reacts to traffic spikes.&lt;br&gt;
Instead, it &lt;strong&gt;pulls messages at a rate it can safely handle&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works (Technically)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Webhook Ingestion
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Webhook requests are received by Holding the Load&lt;/li&gt;
&lt;li&gt;Requests are acknowledged immediately&lt;/li&gt;
&lt;li&gt;Payloads are persisted (FIFO ordering)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This protects webhook providers from timeouts while isolating your backend.&lt;/p&gt;
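&lt;p&gt;A minimal sketch of the ingestion side (names are illustrative; the real project persists to durable storage rather than an in-memory array):&lt;/p&gt;

```javascript
// Sketch of the ingestion side: persist first, acknowledge immediately.
// `queue` stands in for durable storage (the real project uses a Cloudflare
// Durable Object with SQLite); here it is just an in-memory FIFO array.
function createIngestor(queue) {
  return function handleWebhook(request) {
    queue.push({ receivedAt: Date.now(), body: request.body }); // persist
    return { status: 200, body: "accepted" }; // ack before any processing
  };
}
```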

&lt;h3&gt;
  
  
  2. Storage as a Buffer
&lt;/h3&gt;

&lt;p&gt;Holding the Load acts as a &lt;strong&gt;queue-like buffer&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Incoming webhooks are stored in a Durable Object (a Cloudflare service) backed by SQLite storage, preventing loss of webhook data&lt;/li&gt;
&lt;li&gt;Order is preserved&lt;/li&gt;
&lt;li&gt;No processing happens at ingestion time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is critical: ingestion and processing are completely decoupled.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Controlled Consumption by Your VPS
&lt;/h3&gt;

&lt;p&gt;Your application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pulls messages from Holding the Load&lt;/li&gt;
&lt;li&gt;Defines:

&lt;ul&gt;
&lt;li&gt;Batch size&lt;/li&gt;
&lt;li&gt;Pull interval&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fetch 10 messages every 5 seconds&lt;/li&gt;
&lt;li&gt;Or 50 messages every minute&lt;/li&gt;
&lt;li&gt;Or any strategy that fits your VPS capacity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first webhook received is always the first consumed (FIFO).&lt;/p&gt;
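&lt;p&gt;A minimal pull-consumer sketch (the endpoint path, response shape, and function names are assumptions, not the real Holding the Load API):&lt;/p&gt;

```javascript
// The VPS sets the pace: at most `batchSize` messages per pull, one pull
// every `intervalMs` milliseconds. FIFO order is preserved upstream.
function createConsumer({ fetchImpl, baseUrl, batchSize, handler }) {
  async function pullOnce() {
    const res = await fetchImpl(baseUrl + "/messages?limit=" + batchSize);
    const messages = await res.json();
    for (const message of messages) {
      await handler(message); // process oldest first
    }
    return messages.length;
  }

  function start(intervalMs) {
    return setInterval(() => pullOnce().catch(console.error), intervalMs);
  }

  return { pullOnce, start };
}
```

&lt;p&gt;Fetching 10 messages every 5 seconds is then just &lt;code&gt;batchSize: 10&lt;/code&gt; with &lt;code&gt;start(5000)&lt;/code&gt;.&lt;/p&gt;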

&lt;h2&gt;
  
  
  Why This Architecture Matters
&lt;/h2&gt;

&lt;p&gt;This approach solves multiple problems at once:&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Traffic Spike Absorption
&lt;/h3&gt;

&lt;p&gt;Webhook spikes are handled upstream without affecting your VPS.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Predictable Resource Usage
&lt;/h3&gt;

&lt;p&gt;Your VPS workload becomes stable and predictable.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ No Overprovisioning
&lt;/h3&gt;

&lt;p&gt;You don’t need to pay for peak capacity all month long.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Failure Isolation
&lt;/h3&gt;

&lt;p&gt;Even if your VPS goes down temporarily, webhooks are not lost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless Cost Model
&lt;/h2&gt;

&lt;p&gt;Holding the Load follows a &lt;strong&gt;serverless-style philosophy&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resources scale based on demand&lt;/li&gt;
&lt;li&gt;You pay only for actual usage&lt;/li&gt;
&lt;li&gt;Idle time costs almost nothing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is particularly useful when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spikes are rare but intense&lt;/li&gt;
&lt;li&gt;Traffic patterns are unpredictable&lt;/li&gt;
&lt;li&gt;You want cost efficiency without sacrificing reliability&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Typical Use Cases
&lt;/h2&gt;

&lt;p&gt;Holding the Load works especially well for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automation platforms like N8N&lt;/li&gt;
&lt;li&gt;Self-hosted workflow engines&lt;/li&gt;
&lt;li&gt;APIs&lt;/li&gt;
&lt;li&gt;AI agents that react to webhook events&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why I Built It
&lt;/h2&gt;

&lt;p&gt;I built Holding the Load after noticing a recurring pattern in self-hosted systems:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We scale infrastructure to handle rare peaks, not real workloads.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Holding the Load flips that logic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep the VPS small and cheap&lt;/li&gt;
&lt;li&gt;Scale only the ingestion layer&lt;/li&gt;
&lt;li&gt;Let processing happen at a controlled pace&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Holding the Load is not a replacement for queues, workers, or job schedulers.&lt;/p&gt;

&lt;p&gt;It’s a &lt;strong&gt;protective layer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A buffer that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shields your VPS&lt;/li&gt;
&lt;li&gt;Controls load&lt;/li&gt;
&lt;li&gt;Reduces cost&lt;/li&gt;
&lt;li&gt;Improves reliability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you rely on webhooks and self-host your infrastructure, this tool can simplify your scaling strategy.&lt;/p&gt;


&lt;h2&gt;
  
  
  Project Repository
&lt;/h2&gt;

&lt;p&gt;You can find the full source code, documentation, and examples here:&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;&lt;a href="https://github.com/tiago123456789/holding-the-load" rel="noopener noreferrer"&gt;https://github.com/tiago123456789/holding-the-load&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Feedback, issues, and contributions are welcome.&lt;/p&gt;

&lt;h2&gt;
  
  
  Need Help or Want to Talk?
&lt;/h2&gt;

&lt;p&gt;If you’re facing webhook scaling issues, evaluating this architecture, or need help adapting &lt;strong&gt;Holding the Load&lt;/strong&gt; to your own setup, feel free to reach out.&lt;/p&gt;

&lt;p&gt;I’m happy to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Discuss real-world use cases&lt;/li&gt;
&lt;li&gt;Help with architecture decisions&lt;/li&gt;
&lt;li&gt;Answer questions about the project&lt;/li&gt;
&lt;li&gt;Support integrations or custom scenarios&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;📧 &lt;strong&gt;Email:&lt;/strong&gt; &lt;a href="mailto:tiagorosadacost@gmail.com"&gt;tiagorosadacost@gmail.com&lt;/a&gt;&lt;/p&gt;


</description>
      <category>ai</category>
      <category>webhook</category>
      <category>agents</category>
      <category>selfhosted</category>
    </item>
    <item>
      <title>Chatbot to help improve your English conversation (free)</title>
      <dc:creator>Tiago Rosa da costa</dc:creator>
      <pubDate>Sat, 09 Aug 2025 16:19:49 +0000</pubDate>
      <link>https://forem.com/tiago123456789/chatbot-para-ajudar-a-melhorar-a-conversao-em-inglesgratuito-55dm</link>
      <guid>https://forem.com/tiago123456789/chatbot-para-ajudar-a-melhorar-a-conversao-em-inglesgratuito-55dm</guid>
      <description>&lt;p&gt;Se você está ainda sente que o inglês te trava um pouco — seja pra entender documentações, participar de calls ou aplicar pra vagas fora ou expandir possibilidades de negócios — conheça esse projeto que lancei: &lt;strong&gt;EnglishLanguageTutorBotFree&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is it?
&lt;/h2&gt;

&lt;p&gt;It is a Telegram chatbot that uses artificial intelligence to simulate a conversation similar to one with an English teacher. You send audio or text, talk in English, and at the end receive automatic feedback with points to improve. &lt;strong&gt;Note: this tool is meant to be a complement to your English studies.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No sign-up needed: just install Telegram and open &lt;a href="https://t.me/EnglishLanguageTutorFreeBot" rel="noopener noreferrer"&gt;https://t.me/EnglishLanguageTutorFreeBot&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Existing features:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Conversations in English by text or audio about any topic of interest, as if with a real teacher.&lt;/li&gt;
&lt;li&gt;Automatic transcription of your audio messages.&lt;/li&gt;
&lt;li&gt;Replies also in Portuguese, to help when you forget a word or want to ask something like: "How can I say 'dinheiro' in English?"&lt;/li&gt;
&lt;li&gt;You can also ask it to prepare a lesson or simulate a job interview. For example, you can send an audio or text message saying something like: 'I want to study more about the verb to be, so prepare a quiz about the verb to be using the topic football' or "Let's simulate an interview for a Developer or Project Manager role, so prepare the 10 most common questions and play the HR recruiter while I play the candidate"&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Features coming soon:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Remind you to study every day&lt;/li&gt;
&lt;li&gt;A newsletter with sports/lifestyle/news links to read in English&lt;/li&gt;
&lt;li&gt;Save new words to study later&lt;/li&gt;
&lt;li&gt;Generate an Excel file with those words to import into Anki&lt;/li&gt;
&lt;li&gt;Create personalized tasks based on lesson feedback&lt;/li&gt;
&lt;li&gt;Generate quizzes and challenges based on your interests (e.g., a quiz about the Brasileirão in English to practice the verb to be)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>braziliandevs</category>
      <category>ai</category>
      <category>openai</category>
      <category>english</category>
    </item>
    <item>
      <title>Why Should You Specify a Language Version in Your Dockerfile?</title>
      <dc:creator>Tiago Rosa da costa</dc:creator>
      <pubDate>Sun, 05 Jan 2025 14:42:19 +0000</pubDate>
      <link>https://forem.com/tiago123456789/why-should-you-specify-a-language-version-in-your-dockerfile-5h3c</link>
      <guid>https://forem.com/tiago123456789/why-should-you-specify-a-language-version-in-your-dockerfile-5h3c</guid>
      <description>&lt;p&gt;Let me share a lesson learned the hard way. Once, while working for a company, our application was running flawlessly. However, after deploying a new feature update, the application started displaying strange characters in place of properly accented words.&lt;/p&gt;

&lt;p&gt;The development team spent countless hours investigating potential causes of the problem. After much effort, I noticed something odd in the Dockerfile: we were using Node.js without specifying a version. By default, Docker pulls the latest version if no specific version is defined.&lt;/p&gt;

&lt;p&gt;In our case, Docker had downloaded Node.js version 22.7.0, which happened to have a critical bug affecting UTF-8 encoding. This caused words with accents to render as garbled characters. (For reference, see Node.js issue &lt;a href="https://github.com/nodejs/node/issues/54543" rel="noopener noreferrer"&gt;#54543&lt;/a&gt;.)&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fix
&lt;/h2&gt;

&lt;p&gt;The solution was simple yet crucial: we specified a stable Node.js version (in this case, v20) in the Dockerfile and redeployed the application. Immediately, everything returned to normal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for Your Dockerfile
&lt;/h2&gt;

&lt;p&gt;Always specify the exact version of your base image to avoid such issues. Here's an example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmcgas4c1azp7q47uljx5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmcgas4c1azp7q47uljx5.png" alt="Image description" width="800" height="741"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the first line, we explicitly set the base image to Node.js v20, ensuring stability and avoiding unexpected behavior caused by breaking changes in future versions.&lt;/p&gt;
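&lt;p&gt;The Dockerfile in the image is along these lines (the application paths and commands are illustrative):&lt;/p&gt;

```dockerfile
# Pin the exact base image so a future Node.js release can't change
# runtime behavior underneath you.
FROM node:20-alpine

WORKDIR /app

# Install dependencies first to take advantage of Docker layer caching.
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

CMD ["node", "src/index.js"]
```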

&lt;h2&gt;
  
  
  Bonus Tip: Optimize Your Docker Image
&lt;/h2&gt;

&lt;p&gt;When creating a Docker image, consider using the Alpine version of your base image (e.g., node:20-alpine). Why?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lightweight&lt;/strong&gt;: Alpine images are significantly smaller, saving storage space. This is especially important for private repositories like AWS ECR, where costs are tied to storage usage.&lt;br&gt;
&lt;strong&gt;Faster Builds&lt;/strong&gt;: Smaller base images download and build faster, reducing deployment time.&lt;/p&gt;

&lt;p&gt;By specifying the base image version and using optimized images, you can avoid unexpected bugs and improve your build process efficiency. Don’t let a simple oversight like omitting the language version derail your application’s stability!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>docker</category>
    </item>
    <item>
      <title>React in MVC application</title>
      <dc:creator>Tiago Rosa da costa</dc:creator>
      <pubDate>Sun, 03 Apr 2022 01:47:33 +0000</pubDate>
      <link>https://forem.com/tiago123456789/react-in-mvc-application-e77</link>
      <guid>https://forem.com/tiago123456789/react-in-mvc-application-e77</guid>
      <description>&lt;p&gt;What is it? I believe you are questioning and this question is because it is uncommon to use react.js this way, in majority the application all frontend is built using react.js and consume api REST.&lt;/p&gt;

&lt;p&gt;So, let's talk about React.js in an MVC application. Before adopting any technology, you need to understand its purpose and the situations where it is most suitable.&lt;/p&gt;

&lt;p&gt;Imagine the following situation: the company has an MVC application that is more than 2 years old, with many things already built and many features still to build. jQuery made it easy enough to build features until React.js came along, which makes it far easier to build features with many interactions. You understand the benefits of React.js and present them to your company; everything sounds interesting, but adopting React.js would require changing many things, so the company declines.&lt;/p&gt;

&lt;p&gt;We programmers often get excited about technologies and want to apply them everywhere, but when working at a company it is necessary to analyze points such as: the technology's community, how widely it has been adopted, the impact of adopting it in the application, the financial impact of the adoption, and the benefits its adoption brings to the company.&lt;/p&gt;

&lt;p&gt;I believe this approach of using React.js in MVC applications is interesting because it avoids many changes to the application. The programmer stays happy working with a new technology, and so does the company, because productivity rises.&lt;/p&gt;

&lt;p&gt;Ok, I've talked a lot but shown nothing, so let's go. The image below shows the project structure I created to explain this approach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmi5yeqf8exmdtz9m0i0w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmi5yeqf8exmdtz9m0i0w.png" alt="Image description" width="286" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Details about the folder structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;componentsReact&lt;/strong&gt; directory is where the React.js code lives. Note: the directory name can be anything; it does not affect the setup.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;src&lt;/strong&gt; directory holds the Node.js web application code.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;public&lt;/strong&gt; directory holds the js and css files.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;views&lt;/strong&gt; directory holds the application's pages.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let's dive into the React.js part. The image below shows the webpack settings:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhs1zhc721kph5ggotmb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhs1zhc721kph5ggotmb7.png" alt="Image description" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In summary, the webpack.config.js file works the following way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The entry is the file webpack uses as the starting point to generate bundle.js&lt;/li&gt;
&lt;li&gt;The output section configures where webpack writes bundle.js: into the public directory, to be used later by the pages in the views directory&lt;/li&gt;
&lt;li&gt;The library option defines a global variable to which the code exported from bundle.js is attached&lt;/li&gt;
&lt;li&gt;The module section holds the rules and loaders webpack needs to handle modern JavaScript and the CSS used inside React.js components&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The ./src/index.js file inside the componentsReact directory needs to look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu61hb7uysrms9kojsjw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu61hb7uysrms9kojsjw8.png" alt="Image description" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's explain the image above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There are two functions, renderLogin and renderLogout. Both take parameters: the first, container, receives the id of the HTML element to inject the React component into, while the second receives an object that is passed as props to the React component.&lt;/li&gt;
&lt;li&gt;At the end of the file, the functions are exported so they are included in the bundle.js file.&lt;/li&gt;
&lt;li&gt;You are probably wondering how to render the React component on a page. The image below shows how:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkgdb9cj6ejv5hll44oz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkgdb9cj6ejv5hll44oz.png" alt="Image description" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In summary, what is happening here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Import the bundle.js file in the page.&lt;/li&gt;
&lt;li&gt;Then create a script block that uses the variable defined in webpack.config.js, which exposes the two exported functions. On line 45, the renderLogin function is called with the required information.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp14uoaqqq4dnnj9brisg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp14uoaqqq4dnnj9brisg.png" alt="Image description" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One important point about the HTTP requests made by React.js: there is no need to implement JWT authentication, because when JavaScript makes a request to the same domain the browser automatically sends the cookies, so you can identify the authenticated user through the session.&lt;/p&gt;

&lt;p&gt;If you are interested in the project used here, the repository is at: &lt;a href="https://github.com/tiago123456789/REACT-IN-APPLICATION-MVC"&gt;link on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary:
&lt;/h1&gt;

&lt;p&gt;In this post I wanted to present a different approach to using React.js. Another takeaway, I believe, is that technologies are tools, and you need to analyze the impact each one can have on your application.&lt;/p&gt;

&lt;p&gt;In this example I used a Node.js application, but you can apply this approach to Laravel, Spring, Ruby on Rails, and other frameworks.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Scalability web application</title>
      <dc:creator>Tiago Rosa da costa</dc:creator>
      <pubDate>Sun, 03 Apr 2022 01:34:04 +0000</pubDate>
      <link>https://forem.com/tiago123456789/scalability-web-application-2cfi</link>
      <guid>https://forem.com/tiago123456789/scalability-web-application-2cfi</guid>
      <description>&lt;p&gt;Scalability is the capacity the application handles growth access number, users and handler data volume without affecting user experience.&lt;/p&gt;

&lt;p&gt;If you search for scalability in web applications, you will probably find terms like vertical scalability and horizontal scalability, and at first you may wonder what they mean. Easy, my friend: keep going, and I will explain what I know about this topic.&lt;/p&gt;

&lt;h1&gt;
  
  
  Vertical scalability
&lt;/h1&gt;

&lt;p&gt;This is the type of scalability where you add hardware resources to the machine running your application: increasing resources such as CPU, RAM, and disk to handle growth in accesses, users, and data volume.&lt;/p&gt;

&lt;p&gt;The image below describes vertical scalability well:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshb706di545j8b3f16mq.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshb706di545j8b3f16mq.jpeg" alt="Image description" width="800" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Increasing a machine's hardware resources is relatively easy when you use IaaS (infrastructure as a service) providers like AWS, Azure, and DigitalOcean, because you can access the relevant service and change the machine settings in a few minutes.&lt;/p&gt;

&lt;p&gt;But there are some points you need to watch out for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There is a limit to the hardware resources you can add to a single machine&lt;/li&gt;
&lt;li&gt;If there is a problem with the machine, or with the data center where it is hosted, your application cannot be accessed&lt;/li&gt;
&lt;li&gt;Sometimes you pay for a big machine but don't need all its hardware resources; you only need them during peak-access moments&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Horizontal scalability
&lt;/h1&gt;

&lt;p&gt;This is the type of scalability where you increase the number of machines, each running a copy of the application, to handle growth in accesses, users, and data volume.&lt;/p&gt;

&lt;p&gt;A very important component here is the load balancer, responsible for distributing requests among the machines running the web application.&lt;/p&gt;

&lt;p&gt;The image below describes horizontal scalability well:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjs9j60cfk9f3jqov6c4.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjs9j60cfk9f3jqov6c4.jpeg" alt="Image description" width="800" height="716"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case, you only need to create a new machine with the application, and when it is ready the load balancer starts sending requests to it.&lt;/p&gt;

&lt;p&gt;One very interesting aspect of this scalability type: if you place your machines in different availability zones (an AWS concept), you get fault tolerance, because if one machine or data center has problems, your application keeps running on the machines in the other availability zones.&lt;/p&gt;

&lt;p&gt;Some IaaS providers, such as AWS, offer autoscaling: a service that increases or decreases the number of machines automatically based on settings you define. For example: when the machine group reaches 80% CPU usage, create a new machine, and when it drops to 30% CPU usage, remove one machine.&lt;/p&gt;
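&lt;p&gt;The rule described above can be sketched as a tiny function (a toy model; real autoscaling services also apply cooldown periods and health checks):&lt;/p&gt;

```javascript
// Toy version of the scaling rule: scale out past 80% average CPU,
// scale in below 30%, bounded by min/max machine counts.
function desiredMachineCount(current, avgCpuPercent, min = 1, max = 10) {
  if (avgCpuPercent > 80) return Math.min(current + 1, max); // scale out
  if (avgCpuPercent >= 30) return current; // comfortable band: do nothing
  return Math.max(current - 1, min); // under 30%: scale in
}
```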

&lt;p&gt;But has points you have warning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If your application uses session-based authentication, it is necessary to enable sticky sessions on the load balancer.&lt;/li&gt;
&lt;li&gt;If your application handles file uploads, use a service such as S3 to store the files, because storing them on the machine’s disk causes problems. For example: you have 2 machines; the load balancer sends your upload request to machine 1, which stores the file on its disk. Moments later you need the file again, but the load balancer sends your request to machine 2, which does not have the file on its disk. A service such as S3 resolves this problem.&lt;/li&gt;
&lt;/ul&gt;
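&lt;p&gt;The second point above can be sketched in Node.js. This is only an illustration, not real AWS code: sharedStore stands in for an S3 bucket, and the two machine objects stand in for two instances behind the load balancer.&lt;/p&gt;

```javascript
// Two "machines" behind a load balancer, each with its own local disk,
// versus one shared store (the role S3 plays).
const machineA = { disk: new Map() };
const machineB = { disk: new Map() };
const sharedStore = new Map(); // stands in for an S3 bucket

// The upload request happens to be routed to machine A.
machineA.disk.set("avatar.png", "file-bytes");
sharedStore.set("avatar.png", "file-bytes");

// Later, the load balancer routes the download request to machine B.
const foundOnLocalDisk = machineB.disk.has("avatar.png");  // false: the file is "lost"
const foundOnSharedStore = sharedStore.has("avatar.png");  // true: any machine can serve it

console.log(foundOnLocalDisk, foundOnSharedStore);
```

With a shared store, it does not matter which machine the load balancer picks for any given request.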

&lt;h1&gt;
  
  
  Vertical scalability or horizontal scalability? Which is better?
&lt;/h1&gt;

&lt;p&gt;If your application has a fixed number of users and will not grow, vertical scalability is enough. If you can’t predict the access numbers, and users and data volume may grow, choose horizontal scalability.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Serverless architecture</title>
      <dc:creator>Tiago Rosa da costa</dc:creator>
      <pubDate>Mon, 07 Feb 2022 16:10:37 +0000</pubDate>
      <link>https://forem.com/tiago123456789/serverless-architecture-40cg</link>
      <guid>https://forem.com/tiago123456789/serverless-architecture-40cg</guid>
      <description>&lt;h1&gt;
  
  
  What’s serverless architecture?
&lt;/h1&gt;

&lt;p&gt;Serverless architecture has the following characteristics: you pay only for the resources you use, and the cloud provider is responsible for running the application and scaling it out and in based on demand.&lt;/p&gt;

&lt;p&gt;Another thing: the word serverless suggests "no server", which is not true, because you still need a server to run the code. I believe the name exists to draw attention, and because you do not need to know which server is running your code.&lt;/p&gt;

&lt;p&gt;Many times when people talk about serverless architecture, only the lambda function is referenced, but other services can be considered serverless too, for example: Firebase, S3, FaunaDB and more.&lt;/p&gt;

&lt;h1&gt;
  
  
  Evolution until serverless architecture
&lt;/h1&gt;

&lt;h2&gt;
  
  
  In house
&lt;/h2&gt;

&lt;p&gt;Before the cloud, many companies ran their servers in house, where all the responsibility belonged to the company: you had to take care of the internet link, the electricity, keeping the OS updated, and other duties. One big disadvantage is that when you need to scale your application, you have to add hardware or purchase another server machine, which takes a lot of time. This is a problem when you need to scale your application for a spike at 12pm, for example.&lt;/p&gt;

&lt;h2&gt;
  
  
  IAAS
&lt;/h2&gt;

&lt;p&gt;IAAS is known as infrastructure as a service, a model offered by many cloud providers such as AWS, GCP and Azure. Using IAAS you pay for the resources you need; for example, if you need to run a web application, you use an EC2 instance.&lt;/p&gt;

&lt;p&gt;IAAS allows you to choose or modify the hardware resources quickly and easily. In many cases where you need to scale your application for a spike in access at 12pm, there are ways to create copies of the web application on other machines quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  PAAS
&lt;/h2&gt;

&lt;p&gt;PAAS is known as platform as a service, and Heroku is a good example. PAAS is an evolution of IAAS: you specify the technology and send your code to the PAAS, which is responsible for setting up everything your application needs to run, removing that responsibility from you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless
&lt;/h2&gt;

&lt;p&gt;In the serverless model you pay for what you use, unlike the solutions above, where you pay for the service to keep running even when it receives no interaction at all. Scaling up and down is the responsibility of the cloud provider. For example: with Firebase you pay for what you use, and if the volume grows, Firebase increases the resources for you; S3 is another example, where you pay for the files stored and downloaded, and you don’t need to worry about a full disk because that is the responsibility of the S3 service.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's a lambda function or FAAS?
&lt;/h2&gt;

&lt;p&gt;A lambda function is a function, in some programming language, that executes one specific task.&lt;/p&gt;

&lt;p&gt;I think it is necessary to talk about lambda functions, or FAAS (function as a service), because many times serverless architecture is immediately associated with the lambda function. But if you analyze the aspects of serverless architecture (you pay for what you use and the cloud provider manages your application for you), Firebase and S3 are serverless too.&lt;br&gt;
When and why use a lambda function?&lt;br&gt;
Imagine you have a service responsible for sending emails to anyone. At some moments it receives no interaction at all, while at other moments it has a huge volume of emails to send and needs to scale to deliver all of them in a suitable time. Creating a whole application only to send emails, and paying for an EC2 instance monthly even while it sits idle, does not fit this pattern.&lt;/p&gt;

&lt;p&gt;In this situation the lambda function is a great solution, because you only need to create one function to send the email, you pay only for what you use, and the cloud provider automatically scales the number of lambda function instances up and down as the volume of emails to send grows.&lt;/p&gt;
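&lt;p&gt;A minimal sketch of this email function in Node.js, the style of handler a FAAS platform invokes once per event. The event shape and the sendEmail helper are assumptions for illustration, not a real email service API:&lt;/p&gt;

```javascript
// Stand-in for a real email client (e.g. an SES or SMTP call).
async function sendEmail(message) {
  // A real implementation would call the email service here.
  return { to: message.to, delivered: true };
}

// The platform invokes this handler once per event; running more copies
// in parallel when the email volume grows is the cloud provider's job.
async function handler(event) {
  const results = [];
  for (const record of event.records || []) {
    results.push(await sendEmail(record));
  }
  return { sent: results.length };
}

module.exports = { handler };
```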

&lt;p&gt;Lambda functions are also a good fit when you need to execute tasks that take little time (less than or equal to 15 minutes), tasks that occasionally have execution spikes and at other moments no interaction at all, and when you don’t want the responsibility of scaling your application up and down: you give that responsibility to the cloud provider and focus on the application logic.&lt;/p&gt;

&lt;h2&gt;
  
  
  When not to use the lambda function?
&lt;/h2&gt;

&lt;p&gt;Suppose you need to generate a report that takes a long time to produce. In this case the lambda function is not suitable, because the lambda function’s lifetime limit is 15 minutes on AWS.&lt;/p&gt;

&lt;p&gt;A websocket application is another example: after the execution time is up, the lambda function process is finished, so the websocket clients lose their connection.&lt;/p&gt;

&lt;p&gt;Another case is an API backed by a database that does not scale its resources automatically. The lambda function scales the application layer, but your database does not scale automatically. This is a problem because each request creates one lambda function execution, and each execution creates a new connection to your database; if the database does not scale, you can bring it down. In this case it is interesting to use a database that scales based on demand, like DynamoDB, FaunaDB or Aurora Serverless.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the lambda function cold start and why should you care about it?
&lt;/h2&gt;

&lt;p&gt;Firstly, let me explain what the lambda function cold start is: the cold start is the time the cloud provider (in this case AWS) needs to fetch the lambda function code, put it in a container, and start it before executing what you need. This occurs the first time you interact with your lambda; subsequent interactions have no cold start until the cloud provider kills the lambda function because it has had no interactions. When that happens, the next interaction with the lambda function will have the cold start again.&lt;/p&gt;

&lt;p&gt;Now you may be thinking: what is the problem with the cold start, since it only increases the time a little bit? If it is not a problem for you, fine. But there are situations where you have a specific response time, for example 5 seconds to respond to a request. In this situation, to reduce the cold start, do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use only the necessary dependencies, because the more dependencies you have, the bigger the lambda function code is and the more time it takes to get it running in a container.&lt;/li&gt;
&lt;li&gt;Increase the lambda function’s resources (for example, memory) only when necessary.&lt;/li&gt;
&lt;li&gt;Use an interpreted language like Python or Node.js, whose runtimes tend to start faster, reducing the cold start.&lt;/li&gt;
&lt;li&gt;Warm up the lambda function: interact with it from time to time to keep it warm. This link &lt;a href="https://www.serverless.com/blog/keep-your-lambdas-warm/"&gt;https://www.serverless.com/blog/keep-your-lambdas-warm/&lt;/a&gt; has an example of this approach.&lt;/li&gt;
&lt;/ul&gt;
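&lt;p&gt;The warm-up idea can be sketched like this. The event shape ({ source: "warmup" }) is something you define yourself, for example in a scheduled trigger; it is an assumption here, not a fixed AWS contract:&lt;/p&gt;

```javascript
// Handler that short-circuits scheduled warm-up pings so the container
// stays warm without running any business logic.
async function handler(event) {
  if (event && event.source === "warmup") {
    return { warmed: true, processed: false };
  }
  // Normal request path.
  return { warmed: false, processed: true };
}

module.exports = { handler };
```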

&lt;h2&gt;
  
  
  Disadvantages of the lambda function
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Vendor lock-in.&lt;/li&gt;
&lt;li&gt;Timeout limit of 15 minutes.&lt;/li&gt;
&lt;li&gt;Memory limit of 10GB.&lt;/li&gt;
&lt;li&gt;You can’t rely on storing files on disk, because you will lose the files when the execution environment is recycled.&lt;/li&gt;
&lt;li&gt;A high volume of database connections, because each lambda function execution can create a new connection to the database.&lt;/li&gt;
&lt;li&gt;Implementing tests in the application is more complex.&lt;/li&gt;
&lt;li&gt;You may have problems when your code depends on OS packages, because the lambda function runs in a lightweight container. So install only the necessary packages.&lt;/li&gt;
&lt;li&gt;The development environment needs components from the cloud provider.&lt;/li&gt;
&lt;li&gt;You deploy the code manually, or you need to use CloudFormation or Terraform.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Advantages of the lambda function
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A lambda function is a function isolated from the rest of the application’s code. Changing the code of one lambda function has no impact on another lambda function, unlike what happens in a monolith application.&lt;/li&gt;
&lt;li&gt;The cloud provider is responsible for scaling up and down. After deploying the application, your team does not need to manage it; that is the responsibility of the cloud provider.&lt;/li&gt;
&lt;li&gt;Low cost, because you pay for what you use.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to deal with some disadvantages of the lambda function?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Can’t store files on disk:&lt;/strong&gt; after the execution time is up, the lambda function finishes and you can lose the files stored on disk. To resolve this problem, use a storage solution like S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Development environment needs components of the cloud provider:&lt;/strong&gt; there is a project that simulates the cloud provider services locally, named localstack: &lt;a href="https://github.com/localstack/localstack"&gt;https://github.com/localstack/localstack&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploy the code manually:&lt;/strong&gt; you can resolve this problem using the Serverless Framework. This tool simplifies your deploy because it reads a serverless.yml file containing all the settings needed to deploy the lambda function. Link to the Serverless Framework: &lt;a href="https://www.serverless.com/"&gt;https://www.serverless.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High volume of connections, because each lambda function execution creates a new connection to the database:&lt;/strong&gt; in this situation you need a database that scales its resources on demand, like DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementing tests in the application is more complex:&lt;/strong&gt; the lambda function is triggered by an event, so when an event fires, the lambda function executes the code with the event data. In this case you can implement unit tests that mock the event the lambda function will receive, to avoid using the cloud services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Aspects to take care of before building everything with lambda
&lt;/h2&gt;

&lt;p&gt;Take care with how you use lambda functions, because the cost is the sum of the following things: how many times the lambda function executed, the duration of each execution, and the resources allocated to execute the lambda function.&lt;/p&gt;

&lt;p&gt;Another thing you need to take care of: when you use lambda functions you scale the application layer, but at some moment this scaling capacity can cause a problem in the database layer, because it creates many connections on the database. The same applies when you communicate with a third-party service that has a request limit per time window, or that is not prepared to receive a huge volume of requests; if it is not prepared, you can bring the third-party service down.&lt;/p&gt;

&lt;p&gt;These are things to stay warned about when you think about using lambda functions (serverless architecture).&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;I believe in this post I explained a little bit of what I know about serverless architecture: what it means, and that services like Firebase or S3 are serverless, not only the lambda function (FAAS).&lt;/p&gt;

&lt;p&gt;I talked with more focus about the lambda function: when and where to use it, when not to use it, and its advantages and disadvantages. Keep in mind it is not a silver bullet; there are scenarios where it is correct to use it and scenarios where it is wrong to apply it.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>architecture</category>
      <category>faas</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Quick tip - S3 sync</title>
      <dc:creator>Tiago Rosa da costa</dc:creator>
      <pubDate>Sun, 18 Jul 2021 01:56:06 +0000</pubDate>
      <link>https://forem.com/tiago123456789/quick-tip-s3-sync-2n85</link>
      <guid>https://forem.com/tiago123456789/quick-tip-s3-sync-2n85</guid>
      <description>&lt;p&gt;What’s s3 sync? The s3 sync is one command of the aws-cli which allows synchronize files with a bucket. The complete command is: aws s3 sync ./ s3://bucket_name --delete&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explaining this command in detail:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The part aws s3 sync indicates to the aws-cli that you want to synchronize something with S3.&lt;/li&gt;
&lt;li&gt;The part ./ takes all the files in the current directory to send to S3.&lt;/li&gt;
&lt;li&gt;In the part s3://bucket_name, change bucket_name to the name of a bucket that exists in your AWS account. This bucket receives the synchronized files.&lt;/li&gt;
&lt;li&gt;The part --delete is optional. When present, it deletes files in the destination (the bucket) that no longer exist in the source directory, keeping both sides identical; it does not delete the local files, so if you want to free local disk space you must remove the local files yourself after the sync.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;I believe s3 sync can help you in the following situations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your application runs on a machine and writes log files to disk, but the disk is full and you can’t simply delete your application’s log files.&lt;/li&gt;
&lt;li&gt;You configured your own database and make backups of it yourself because you are not using a managed database service.&lt;/li&gt;
&lt;li&gt;You deploy a React application to S3: you execute the build and upload the files to S3, which hosts the frontend application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s talk about the situations above and explain how s3 sync can help in each one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First situation: your application runs on a machine and writes log files to disk, but the disk is full and you can’t simply delete your application’s log files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this scenario your application writes log files to disk, ok. But it is necessary to move these files elsewhere, because the machine disk is full and you can’t just delete the log files. One solution to resolve this problem:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 sync ./ s3://bucket_name --delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explaining the command above:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Firstly, install the aws-cli and configure your credentials.&lt;/li&gt;
&lt;li&gt;Then execute the command above, which takes the files in &lt;strong&gt;./&lt;/strong&gt; and sends them to bucket_name; after confirming the sync, remove the local log files to free disk space.&lt;/li&gt;
&lt;li&gt;To automate it, you only need to create a cron task on the machine that executes the command above based on your needs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Second situation: you configured your own database and make backups of it because you are not using a managed database service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this scenario you need to generate a daily backup of the database and store it somewhere you can access later, in case you need it. The solution to resolve this problem:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysqldump -u username -p database_name &amp;gt; ./backup_$(date +"%m-%d-%Y").sql
aws s3 sync ./ s3://bucket_name --delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explaining the commands above:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Firstly, install the aws-cli and configure your credentials.&lt;/li&gt;
&lt;li&gt;Then use the first command above to generate a backup of the MySQL database.&lt;/li&gt;
&lt;li&gt;Then use the second command to take the files in &lt;strong&gt;./&lt;/strong&gt; and send them to bucket_name; after the sync, remove the local backup files to prevent the machine disk from filling up.&lt;/li&gt;
&lt;li&gt;To automate it, you only need to create a cron task on the machine to execute the commands above.&lt;/li&gt;
&lt;/ul&gt;
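&lt;p&gt;If you drive this backup from a Node.js script instead of cron, the dated filename produced by $(date +"%m-%d-%Y") can be built like this (a sketch; the function name is my own):&lt;/p&gt;

```javascript
// Builds the same name as backup_$(date +"%m-%d-%Y").sql in the shell command above.
function backupFilename(date) {
  const mm = String(date.getMonth() + 1).padStart(2, "0"); // months are 0-based in JS
  const dd = String(date.getDate()).padStart(2, "0");
  const yyyy = date.getFullYear();
  return `backup_${mm}-${dd}-${yyyy}.sql`;
}

console.log(backupFilename(new Date(2021, 6, 18))); // "backup_07-18-2021.sql"
```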

&lt;p&gt;&lt;strong&gt;Third situation: you deploy a React application to S3, executing the build and uploading the files to S3, which hosts the frontend application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You deploy the React application manually to S3, where S3 hosts the frontend application. The solution for a manual deploy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm run build
cd ./build
aws s3 sync ./ s3://bucket_name 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explaining the commands above:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Firstly, install the aws-cli and configure your credentials.&lt;/li&gt;
&lt;li&gt;Then use the first command to build the React application, generating a &lt;strong&gt;build&lt;/strong&gt; directory that contains the files to deploy.&lt;/li&gt;
&lt;li&gt;Then use the second command to enter the build directory.&lt;/li&gt;
&lt;li&gt;Then use the last command to take the files in &lt;strong&gt;./&lt;/strong&gt; and send them to bucket_name.&lt;/li&gt;
&lt;li&gt;To automate this, you can use GitHub Actions to execute a pipeline: when code is pushed to the master branch of the GitHub repository, the pipeline runs the commands above. In the pipeline, use &lt;a href="https://github.com/aws-actions/configure-aws-credentials"&gt;https://github.com/aws-actions/configure-aws-credentials&lt;/a&gt; to set up the aws-cli and its credentials, and then execute the commands above.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Webhook</title>
      <dc:creator>Tiago Rosa da costa</dc:creator>
      <pubDate>Sun, 25 Apr 2021 01:00:43 +0000</pubDate>
      <link>https://forem.com/tiago123456789/webhook-45l9</link>
      <guid>https://forem.com/tiago123456789/webhook-45l9</guid>
      <description>&lt;h2&gt;
  
  
  What is it?
&lt;/h2&gt;

&lt;p&gt;A webhook is an HTTP request, using the POST verb, that is triggered when an event occurs in one application in order to notify another application.&lt;/p&gt;

&lt;h2&gt;
  
  
  When do you use it?
&lt;/h2&gt;

&lt;p&gt;One example is a payment API that uses webhooks to notify third-party applications about the payment status. Imagine if the payment API had to implement a different form of communication for each client: it would be impossible to implement and maintain.&lt;/p&gt;

&lt;p&gt;So, when your application needs to send data to a third-party application when an event occurs, and needs a simple way to do it, a webhook is a good solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;I will use the payment API example. When a client creates a transaction, it informs a webhook URL, and when the payment API changes the status of the transaction, it sends the data to the informed webhook URL.&lt;/p&gt;

&lt;p&gt;The image below illustrates what I described:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89htwg9cry16rbyddowy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89htwg9cry16rbyddowy.png" alt="Alt Text" width="551" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How do you make a webhook safe?&lt;/p&gt;

&lt;p&gt;The webhook URL is public, so anyone can access it. To make the webhook URL safer, enable HTTPS on it and add a secret to the URL, so that when the webhook is triggered, the third-party application checks whether the secret is valid: if yes, it processes the request; if not, it rejects it. Example of a webhook URL with a secret: &lt;a href="https://domain_url/rota?secret=value"&gt;https://domain_url/rota?secret=value&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  In what situations can you apply webhooks?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The first scenario is using a webhook URL to create a CI/CD pipeline. The flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add the Jenkins webhook URL to be triggered when code is pushed to the repository.&lt;/li&gt;
&lt;li&gt;When the webhook URL is triggered, it executes a job.&lt;/li&gt;
&lt;li&gt;The job runs the project’s tests and afterwards triggers another job.&lt;/li&gt;
&lt;li&gt;The second job deploys the changes to the staging or production server.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;The second scenario is using a webhook URL to automate deploys. The flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add a webhook URL to be triggered when code is pushed to the repository.&lt;/li&gt;
&lt;li&gt;When the webhook URL is triggered, it executes code that updates the project on the staging or production server.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;The third scenario is a course platform. The flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;People buy a course using a credit card.&lt;/li&gt;
&lt;li&gt;The platform sends the payment data to the payment API.&lt;/li&gt;
&lt;li&gt;When the payment is approved, the payment API triggers a webhook URL on the course platform to enable the courses for the buyer.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The example application 
&lt;/h2&gt;

&lt;p&gt;The file index.js contains the example code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4ymqgduqkcu1qsbf64d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4ymqgduqkcu1qsbf64d.png" alt="Alt Text" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Explaining the image above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In this file I use express to create the application and define the route /webhook.&lt;/li&gt;
&lt;li&gt;In lines 9 to 20 I check whether the request has a valid token: if yes, I process the data; if not, I don’t.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On GitHub you set a webhook URL. Image example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1ly9kk5qafwjqbjtgbr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1ly9kk5qafwjqbjtgbr.png" alt="Alt Text" width="800" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Explaining the image above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In Payload URL you set the URL and add the token. Example: webhook_url/route?token=value_token&lt;/li&gt;
&lt;li&gt;In Content type, set application/json so the webhook application receives the data in JSON format.&lt;/li&gt;
&lt;li&gt;In the section “Which events would you like to trigger this webhook?” I set the event that triggers the webhook, in this case when I push code to the repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you have an application that needs to handle many webhooks, it is necessary to think about a strategy to scale and to prevent the webhooks from affecting the performance of the main application, because when your application makes a request to a third-party application, that third party can respond slowly, and this affects your application.&lt;/p&gt;

&lt;p&gt;The image below shows a solution for this problem:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibzpllxr12vztv5jjmna.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibzpllxr12vztv5jjmna.png" alt="Alt Text" width="800" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Explaining the image above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An actor creates a transaction in the payment API.&lt;/li&gt;
&lt;li&gt;When an event occurs that requires triggering a webhook, the API sends the data to a message queue.&lt;/li&gt;
&lt;li&gt;On the other side, a job gets the data and makes the request to notify the third-party application.&lt;/li&gt;
&lt;/ul&gt;
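&lt;p&gt;The three steps above can be sketched with an in-memory queue standing in for a real broker (SQS, RabbitMQ, etc.); the function names are illustrative:&lt;/p&gt;

```javascript
const queue = [];

// Called by the payment API when a transaction event must trigger a webhook:
// it only enqueues, so the API responds fast regardless of the third party.
function publishWebhookEvent(event) {
  queue.push(event);
}

// The job on the other side drains the queue and notifies the third-party
// application; "notify" stands in for an HTTP POST to the webhook URL.
function runWorker(notify) {
  const delivered = [];
  while (queue.length > 0) {
    delivered.push(notify(queue.shift()));
  }
  return delivered;
}

publishWebhookEvent({ transactionId: 1, status: "approved" });
publishWebhookEvent({ transactionId: 2, status: "refused" });
console.log(runWorker((e) => e.transactionId)); // [ 1, 2 ]
```

Because the worker is separate, a slow third-party application only slows the worker, never the payment API itself.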

&lt;p&gt;So, my friends, here I finish one more article. If you have any question about webhooks, comment below and I will try to reply with a solution.&lt;/p&gt;

&lt;p&gt;Link to the project repository on GitHub: &lt;a href="https://github.com/tiago123456789/webhook-article-project"&gt;https://github.com/tiago123456789/webhook-article-project&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Quick tips - Docker, Dockerfile and Docker compose</title>
      <dc:creator>Tiago Rosa da costa</dc:creator>
      <pubDate>Sun, 11 Apr 2021 23:23:17 +0000</pubDate>
      <link>https://forem.com/tiago123456789/quick-tips-docker-dockerfile-and-docker-compose-1jl5</link>
      <guid>https://forem.com/tiago123456789/quick-tips-docker-dockerfile-and-docker-compose-1jl5</guid>
      <description>&lt;h1&gt;
  
  
  Docker
&lt;/h1&gt;

&lt;h2&gt;
  
  
  What is it?
&lt;/h2&gt;

&lt;p&gt;Docker is a platform that allows running containers. Containers are processes that contain the application and the libraries necessary to run it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why use it?
&lt;/h2&gt;

&lt;p&gt;At some moment someone has said “it works on my machine”. This problem occurs because staging or production environments can have different versions of the OS, libs and software used by the application. Docker lets you pin the OS, lib and software versions, preventing the problem mentioned above.&lt;/p&gt;

&lt;h2&gt;
  
  
  Virtualization or docker?
&lt;/h2&gt;

&lt;p&gt;With virtualization, a whole other OS is installed on top of the machine. Docker is different: it uses the Linux kernel to run processes called containers. Because containers include only what is necessary to run the application, they are light compared to virtualization.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to install?
&lt;/h2&gt;

&lt;p&gt;To install docker use this link: &lt;a href="https://docs.docker.com/get-docker/"&gt;https://docs.docker.com/get-docker/&lt;/a&gt;, which has instructions to install on Windows, Mac and Linux. After installing, come back to the article. Let’s begin.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check docker installed:&lt;/strong&gt;&lt;br&gt;
docker --version&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run a docker container with nginx:&lt;/strong&gt;&lt;br&gt;
 docker run nginx&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explaining the command above:&lt;/strong&gt;&lt;br&gt;
The command docker run is used to create a docker container.&lt;br&gt;
The name nginx is a docker image. A docker image is a file containing the instructions to configure nginx and its libraries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run a named docker container with nginx:&lt;/strong&gt;&lt;br&gt;
 docker run --name=value_name nginx&lt;/p&gt;

&lt;p&gt;The parameter --name=value_name is used to set the name of the docker container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run the nginx docker container with port mapping:&lt;/strong&gt;&lt;br&gt;
 docker run --name=value_name -p 8080:80 nginx&lt;/p&gt;

&lt;p&gt;The parameter -p 8080:80 configures the docker container so that access to port 8080 on the host machine is redirected to port 80 inside the docker container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run a docker container in the background:&lt;/strong&gt;&lt;br&gt;
docker run -d -p 8080:80 nginx&lt;/p&gt;

&lt;p&gt;The parameter -d in the command above indicates that the container should run in the background, allowing the same terminal to execute other actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stop a running docker container:&lt;/strong&gt;&lt;br&gt;
 docker stop id_or_name_container&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;List the docker containers currently running:&lt;/strong&gt;&lt;br&gt;
 docker ps&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;List all containers, including stopped ones:&lt;/strong&gt;&lt;br&gt;
 docker ps -a&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remove a stopped docker container:&lt;/strong&gt;&lt;br&gt;
 docker rm id_or_name_container&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remove a running docker container:&lt;/strong&gt;&lt;br&gt;
 docker rm -f id_or_name_container&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access and interact with a running docker container:&lt;/strong&gt;&lt;br&gt;
  docker exec -it id_or_name_container command_execute_container&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explaining the command above:&lt;/strong&gt;&lt;br&gt;
The parameter -it indicates that you want to interact with the docker container.&lt;br&gt;
In place of command_execute_container you can use /bin/bash to open a terminal and interact with the docker container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using an environment variable in a docker container:&lt;/strong&gt;&lt;br&gt;
 docker run -e KEY_NAME=VALUE image&lt;/p&gt;

&lt;p&gt;The flag -e followed by KEY_NAME=VALUE is used when you need to set an environment variable for the docker container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using volumes in a docker container:&lt;/strong&gt;&lt;br&gt;
  docker run -d -v path_local_machine:path_in_container -e KEY_NAME=VALUE image&lt;/p&gt;
&lt;h2&gt;
  
  
  What is a volume?
&lt;/h2&gt;

&lt;p&gt;A volume is a functionality that allows data to persist for applications running in docker. Example: your application writes logs to a file; if the docker container dies, the log file is deleted, but you need to keep it even if the container dies. In this case a volume is the solution: when you create the docker container, you set the volume so that files written to the mapped path inside the container are also written on the local machine. If the container dies, the files stay persisted on the local machine, and when the container restarts it sees the files from the local machine again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;List the docker images on the machine:&lt;/strong&gt;&lt;br&gt;
  docker images&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Download a docker image from docker hub:&lt;/strong&gt;&lt;br&gt;
 docker pull name_image&lt;/p&gt;

&lt;p&gt;Docker hub is a repository of docker images; it is like github, but focused on storing docker images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remove a docker image from the local machine:&lt;/strong&gt;&lt;br&gt;
  docker image rm image_name&lt;/p&gt;
&lt;h1&gt;
  
  
  Dockerfile
&lt;/h1&gt;

&lt;p&gt;When you work on an application you have specific needs, and those needs make you create your own Dockerfile. If you don't know what a Dockerfile is yet, calm down, I'll explain it better.&lt;/p&gt;

&lt;p&gt;Imagine the Dockerfile as a cake recipe, containing the instructions necessary to make the cake; in this analogy, the cake is the docker container.&lt;/p&gt;

&lt;p&gt;I imagine the concept of a Dockerfile is clear now. Let's go to an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:15.13.0-alpine3.10

ENV PORT 4000

WORKDIR app/

COPY . ./

RUN npm i 

EXPOSE $PORT

CMD node server.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Example step by step Dockerfile:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A Dockerfile is a text file.&lt;/li&gt;
&lt;li&gt;The instruction FROM is the starting point of the Dockerfile. In this case I'm using the node image as the base. If we wanted to create an image from absolute zero, we would replace node with the scratch image.&lt;/li&gt;
&lt;li&gt;The instruction ENV is one way to set an environment variable. In this case I'm creating an environment variable PORT with the default value 4000.&lt;/li&gt;
&lt;li&gt;The instruction WORKDIR sets the working directory. In this case it sets the directory app/.&lt;/li&gt;
&lt;li&gt;The instruction COPY copies files from the local machine into the image. In this case I copy all the local files into the directory app/ of the image.&lt;/li&gt;
&lt;li&gt;The instruction RUN executes a command as if you typed it on the command line in a terminal. In this case, I execute npm i to install the project's dependencies.&lt;/li&gt;
&lt;li&gt;The instruction EXPOSE works mostly as documentation, because it does not actually publish the port. In this case it indicates that port 4000 (the value of $PORT) should be exposed. &lt;/li&gt;
&lt;li&gt;The instruction CMD defines the command executed when the docker container starts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Create an image based on a Dockerfile:&lt;/strong&gt;&lt;br&gt;
 docker build -f ./Dockerfile -t  name_image:version  .&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explaining the command above:&lt;/strong&gt;&lt;br&gt;
The parameter -f sets the path to your Dockerfile.&lt;br&gt;
The parameter -t sets the image name and version (the tag).&lt;br&gt;
The ‘.’ at the end of the command specifies the build context, in this case the current directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Send an image to Docker Hub:&lt;/strong&gt;&lt;br&gt;
  docker login --username=username_docker_hub&lt;br&gt;
  docker tag id_image_docker  username/project_name:version&lt;br&gt;
  docker push username/project_name:version&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explaining the commands above:&lt;/strong&gt;&lt;br&gt;
The first command logs you in to Docker Hub (the image repository).&lt;br&gt;
The second command creates a tag for the image.&lt;br&gt;
The third command pushes the image to Docker Hub.&lt;/p&gt;

&lt;p&gt;Link to the project on GitHub: &lt;a href="https://github.com/tiago123456789/project-for-article-docker"&gt;https://github.com/tiago123456789/project-for-article-docker&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Docker compose
&lt;/h1&gt;
&lt;h2&gt;
  
  
  What is it?
&lt;/h2&gt;

&lt;p&gt;Docker Compose is a tool that allows managing docker containers in a simple way, avoiding having to execute a lot of tasks by hand.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to work?
&lt;/h2&gt;

&lt;p&gt;Before continuing with this article, use this link &lt;a href="https://docs.docker.com/compose/install/"&gt;https://docs.docker.com/compose/install/&lt;/a&gt; to install docker compose. After installing it, you need to create a file called docker-compose.yaml, where you will add the instructions to run the containers. The docker-compose.yaml file is a YAML text file, which works using indentation.&lt;/p&gt;

&lt;p&gt;Below is an example of a docker-compose.yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3"

services:

    app: 
        build: ./
        container_name: app_example
        environment:
            - PORT=3000
        ports:
            - 3000:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Example code above:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Line 1: sets the version of the compose file format.&lt;/li&gt;
&lt;li&gt;Line 3: from this line on is the services section, where the docker containers that docker compose will manage are declared.&lt;/li&gt;
&lt;li&gt;Line 5: sets the name of the service.&lt;/li&gt;
&lt;li&gt;Line 6: the word build with the value ./ tells docker compose to use the Dockerfile in that directory as the image of this service.&lt;/li&gt;
&lt;li&gt;Line 7: sets the name of the container. &lt;/li&gt;
&lt;li&gt;Lines 8 and 9: set an environment variable used by the application running inside the container.&lt;/li&gt;
&lt;li&gt;Lines 10 and 11: map a port on your local machine to the port the application listens on inside the container. A request to that port on the local machine is redirected to the container on that port.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, after defining docker-compose.yaml, you need to run the containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Running docker containers using docker-compose:&lt;/strong&gt;&lt;br&gt;
 docker-compose up&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Running docker containers in background using docker-compose:&lt;/strong&gt;&lt;br&gt;
 docker-compose up -d&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remove docker containers created using docker-compose:&lt;/strong&gt;&lt;br&gt;
 docker-compose down&lt;/p&gt;

&lt;p&gt;I think that's enough for an initial article about docker. If you want to learn more about docker, comment below.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Improving performance the web application</title>
      <dc:creator>Tiago Rosa da costa</dc:creator>
      <pubDate>Wed, 03 Mar 2021 15:01:48 +0000</pubDate>
      <link>https://forem.com/tiago123456789/improving-performance-the-web-application-17d8</link>
      <guid>https://forem.com/tiago123456789/improving-performance-the-web-application-17d8</guid>
      <description>&lt;p&gt;Improving performance the web application&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pagination&lt;/strong&gt;: it looks simple, but many applications still don't apply pagination when listing data. Returning all registers generates unnecessary processing; in many situations it is not necessary to return all the data at once.&lt;/p&gt;
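&lt;p&gt;A minimal sketch of offset-based pagination in Node.js. The names &lt;code&gt;paginate&lt;/code&gt;, &lt;code&gt;page&lt;/code&gt; and &lt;code&gt;perPage&lt;/code&gt; are illustrative, not from a specific library; the same offset/limit values would feed a SQL LIMIT/OFFSET query:&lt;/p&gt;

```javascript
// Offset-based pagination sketch: translate (page, perPage) into the
// LIMIT/OFFSET values a SQL query would use, and slice an in-memory list.
function paginate(items, page, perPage) {
  const offset = (page - 1) * perPage; // rows to skip
  const limit = perPage;               // rows to return
  return { data: items.slice(offset, offset + limit), offset, limit };
}

// Example: 25 registers, requesting page 2 with 10 per page.
const registers = Array.from({ length: 25 }, (_, i) => i + 1);
const result = paginate(registers, 2, 10);
// result.data holds registers 11 through 20
```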

&lt;p&gt;&lt;strong&gt;ORM&lt;/strong&gt;: ORMs are tools that abstract communication with the database into an object-oriented form, but there are moments where you need to execute complex queries and the ORM is slow. In those moments, using native SQL for the complex queries is the correct way to gain performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fetch only what is necessary&lt;/strong&gt;: another very simple thing, but when an application grows this small detail makes a difference. Imagine the scenario: you make a request to an API endpoint and it returns a lot of data that is unnecessary for the current context, generating unnecessary processing. In this case you can use a query string to specify exactly what should be returned in the response. Example: /peoples/?fields=username,email.&lt;/p&gt;
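&lt;p&gt;One way this could be implemented on the server, sketched without any framework (the function name &lt;code&gt;pickFields&lt;/code&gt; and the sample user object are illustrative):&lt;/p&gt;

```javascript
// Sparse fieldsets sketch: given ?fields=username,email, return only
// those properties of a register instead of the whole record.
function pickFields(register, fieldsParam) {
  const wanted = fieldsParam.split(',');
  const out = {};
  for (const key of wanted) {
    if (key in register) {
      out[key] = register[key];
    }
  }
  return out;
}

const user = { id: 1, username: 'tiago', email: 'tiago@example.com', bio: 'long text' };
const partial = pickFields(user, 'username,email');
// partial keeps only username and email
```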

&lt;p&gt;&lt;strong&gt;Static files&lt;/strong&gt;: these are files whose content stays the same no matter how many times you request them. CSS, JS and image files are good examples. When we talk about static files, there are some solutions for improving performance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minify CSS and JS files; this reduces the file size and allows a quicker download.&lt;/li&gt;
&lt;li&gt;Use a CDN. What is it? A CDN (content delivery network) caches static content and serves it from a machine geographically closer to the person requesting it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Zip the response&lt;/strong&gt;: when we work with applications using the HTTP protocol, the client makes requests to the server, and the server processes data and returns a response. The response can be zipped, reducing its size and allowing a quicker download.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Asynchronous processing&lt;/strong&gt;: these are processes that can execute at a later moment, unlike synchronous processes that need to execute at that exact moment. Example: imagine that after registering a new user you must send a welcome email. This email does not need to be sent at the exact moment the user registers; it can be sent some time later.&lt;/p&gt;
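&lt;p&gt;A toy in-memory queue makes the idea concrete. This is only a sketch: a real application would use a message broker (RabbitMQ, SQS, Redis, etc.), and the job here is a fake stand-in for sending an email:&lt;/p&gt;

```javascript
// Minimal in-memory job queue sketch illustrating deferred work.
const queue = [];

function enqueue(job) {
  queue.push(job); // the HTTP handler returns immediately after this
}

function processQueue() { // a worker runs this later, outside the request
  const results = [];
  while (queue.length > 0) {
    const job = queue.shift();
    results.push(job.run());
  }
  return results;
}

// Register the user synchronously, then defer the welcome email.
enqueue({ run: () => 'welcome email sent to user@example.com' });
const processed = processQueue();
```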

&lt;p&gt;&lt;strong&gt;Cache&lt;/strong&gt;: a very interesting solution for improving the performance of applications, because the data is stored in memory. Example: a user makes a request to your application; you fetch the data from the database, store it in the cache, and then return the response to the user. On the next request there is no need to query the database again: you just get the data from the cache. This makes the response return more quickly, because getting data from memory is faster than getting data from the database.&lt;/p&gt;
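&lt;p&gt;The pattern described above is usually called cache-aside. A sketch using a Map as a stand-in for Redis or Memcached, where &lt;code&gt;fetchFromDatabase&lt;/code&gt; is a hypothetical placeholder for a real query:&lt;/p&gt;

```javascript
// Cache-aside sketch: check the cache first, fall back to the database.
const cache = new Map();

function fetchFromDatabase(id) {
  return { id, username: 'user' + id }; // pretend this is a slow SQL query
}

function getUser(id) {
  if (cache.has(id)) {
    return cache.get(id);             // cache hit: no database round trip
  }
  const user = fetchFromDatabase(id); // cache miss: query once...
  cache.set(id, user);                // ...then store for next time
  return user;
}

const first = getUser(42);  // miss, goes to the "database"
const second = getUser(42); // hit, served from memory
```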

</description>
    </item>
    <item>
      <title>Essential commands pm2</title>
      <dc:creator>Tiago Rosa da costa</dc:creator>
      <pubDate>Fri, 26 Feb 2021 14:49:36 +0000</pubDate>
      <link>https://forem.com/tiago123456789/essential-commands-pm2-21jk</link>
      <guid>https://forem.com/tiago123456789/essential-commands-pm2-21jk</guid>
      <description>&lt;p&gt;Essential commands pm2&lt;/p&gt;

&lt;p&gt;I think that first we need to understand: what is pm2? Pm2 is a tool that manages processes; if a process stops running, pm2 restarts it. Another interesting thing is that you can look at the logs of a process to understand an error that occurred.&lt;/p&gt;

&lt;p&gt;Command to install pm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; npm i -g pm2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command above installs pm2 globally.&lt;/p&gt;

&lt;p&gt;Command to list the processes running in pm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; pm2 list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Command to start a process with pm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 start path_script_js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Command to start a process with a specific name in pm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 start path_script_js --name=name_process
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Command to start a process in cluster mode with pm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 start path_script_js -i max
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command above starts the script, and the parameter &lt;strong&gt;-i max&lt;/strong&gt; will start as many instances of the process as there are CPU cores on the machine running the application. This saves you from having to implement it manually with the cluster module of Node.js, where the parent process creates child processes. It's a strategy used to get better performance in Node.js applications.&lt;/p&gt;

&lt;p&gt;Command to stop all processes in pm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  pm2 stop all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Command to stop one process in pm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 stop id_process
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Command to delete all processes in pm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 delete all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Command to delete one process in pm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 delete id_process
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Command to see the logs of one process in pm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; pm2 logs id_process
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Command to restart one process in pm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  pm2 restart id_process
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Command to restart all processes in pm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; pm2 restart all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Command to save the process list in pm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 save
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command above is helpful because it creates a backup file with the processes that are running in pm2; if you have to reboot the machine, pm2 will use the backup file to recreate the processes.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
