<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Amplication</title>
    <description>The latest articles on Forem by Amplication (@amplication).</description>
    <link>https://forem.com/amplication</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F4777%2F96ce735e-6864-418f-9ebd-4cd09e115d7d.png</url>
      <title>Forem: Amplication</title>
      <link>https://forem.com/amplication</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/amplication"/>
    <language>en</language>
    <item>
      <title>Creating a Restaurant Finder Application Using ReactJS and Amplication</title>
      <dc:creator>Saurav Jain</dc:creator>
      <pubDate>Mon, 15 Jan 2024 06:16:31 +0000</pubDate>
      <link>https://forem.com/amplication/creating-a-restaurant-finder-application-using-reactjs-and-amplication-56o5</link>
      <guid>https://forem.com/amplication/creating-a-restaurant-finder-application-using-reactjs-and-amplication-56o5</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This tutorial will guide you through building a restaurant finder application. This full-stack web application will allow users to search for restaurants based on location (zip code) and view a list of restaurants that match their criteria. We'll create the backend and frontend components and demonstrate how to connect them to create a fully functional restaurant finder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F0.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;This tutorial covers how to create a backend using Amplication, build a frontend using ReactJS, and connect the Amplication-generated backend to the frontend.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating the backend:
&lt;/h3&gt;

&lt;p&gt;In this part of the tutorial, we will use Amplication to develop the backend of our restaurant finder application. Amplication generates a fully functional, production-ready backend, complete with REST and GraphQL APIs, authentication, database integration, and established best practices, in just a few minutes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to &lt;a href="https://amplication.com" rel="noopener noreferrer"&gt;https://amplication.com&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Log in, or create an account if you don't have one already.&lt;/li&gt;
&lt;li&gt;Create a new app and name it &lt;code&gt;restaurant-finder-backend&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Click on &lt;code&gt;Add Resource&lt;/code&gt;, then &lt;code&gt;Service&lt;/code&gt;, and name it &lt;code&gt;restaurant-finder&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Connect the GitHub account to which you want Amplication to push the backend code, and select or create a repository.&lt;/li&gt;
&lt;li&gt;Choose which APIs you want; we will keep the default, which is both REST and GraphQL.&lt;/li&gt;
&lt;li&gt;Choose &lt;code&gt;monorepo&lt;/code&gt; in this step.&lt;/li&gt;
&lt;li&gt;Choose the database. We will go with &lt;code&gt;PostgreSQL&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;We will create entities from scratch.&lt;/li&gt;
&lt;li&gt;We want to have &lt;code&gt;auth&lt;/code&gt; in the app.&lt;/li&gt;
&lt;li&gt;Click on &lt;code&gt;Create Service&lt;/code&gt;, and the code will now be generated.&lt;/li&gt;
&lt;li&gt;Go to Entities and click on &lt;code&gt;add entity&lt;/code&gt;:

&lt;ul&gt;
&lt;li&gt;Entity Name: &lt;code&gt;Restaurant&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;First field: name [Searchable, Required, Single Line Text]&lt;/li&gt;
&lt;li&gt;Second field: address [Searchable, Required, Single Line Text]&lt;/li&gt;
&lt;li&gt;Third field: phone [Searchable, Required, Single Line Text]&lt;/li&gt;
&lt;li&gt;Fourth field: zipCode [Searchable, Required, Single Line Text]&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
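&lt;p&gt;Once the service is running, each &lt;code&gt;Restaurant&lt;/code&gt; record returned by the API will carry the four fields defined above. As a rough sketch of that shape (the &lt;code&gt;id&lt;/code&gt;, &lt;code&gt;createdAt&lt;/code&gt;, and &lt;code&gt;updatedAt&lt;/code&gt; fields are added automatically by the generated backend; the example values here are purely illustrative):&lt;/p&gt;

```javascript
// Illustrative shape of a Restaurant record as returned by the generated API.
// id/createdAt/updatedAt are assumed backend-managed defaults; the remaining
// four fields are the ones defined in the entity above.
function makeSampleRestaurant() {
  return {
    id: 'clx0000000000000000000000', // assumed cuid-style id
    createdAt: new Date().toISOString(),
    updatedAt: new Date().toISOString(),
    name: "Joe's Diner",
    address: '1 Main St',
    phone: '555-0100',
    zipCode: '10001',
  };
}
```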

&lt;p&gt;Click on &lt;code&gt;Commit Changes and Build&lt;/code&gt;, and in a few minutes, the code will be pushed to the GitHub repository. Go to the repo and merge the Pull Request that Amplication created.&lt;/p&gt;

&lt;p&gt;Now, you have all the backend code generated by Amplication in your GitHub repository.&lt;br&gt;
It will look like this:&lt;/p&gt;


&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F1.png" alt=""&gt;




&lt;p&gt;Backend Code: &lt;a href="https://github.com/souravjain540/restaurant-finder" rel="noopener noreferrer"&gt;https://github.com/souravjain540/restaurant-finder&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now clone your repository, open it in the IDE of your choice, go to &lt;code&gt;restaurant-finder-backend/apps/restaurant-finder&lt;/code&gt;, and run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install
npm run prisma:generate
npm run docker:dev
npm run db:init
npm run start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After all these steps, you will have a backend running on &lt;code&gt;localhost:3000&lt;/code&gt;. Since Amplication also generates an Admin UI and Swagger documentation, you can go to &lt;code&gt;localhost:3000/api&lt;/code&gt; to view all the endpoints of the API that Amplication generated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Let’s expand the &lt;code&gt;auth&lt;/code&gt; section and make a &lt;code&gt;POST&lt;/code&gt; request by clicking &lt;code&gt;Try it out&lt;/code&gt;, using the credentials &lt;code&gt;admin&lt;/code&gt; / &lt;code&gt;admin&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Now click &lt;code&gt;Execute&lt;/code&gt;, copy the &lt;code&gt;accessToken&lt;/code&gt;, and save it for authentication later.&lt;/p&gt;
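&lt;p&gt;If you prefer to script this step instead of clicking through Swagger, the same login call can be made from code. A minimal sketch, assuming the default &lt;code&gt;/api/login&lt;/code&gt; endpoint and the &lt;code&gt;admin&lt;/code&gt;/&lt;code&gt;admin&lt;/code&gt; credentials used above:&lt;/p&gt;

```javascript
// Builds the fetch options for the login request. The /api/login path and
// admin/admin credentials are assumptions matching the Amplication defaults
// described above; adjust them for your own service.
function buildLoginRequest(username, password) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ username, password }),
  };
}

// Usage, against the locally running backend:
// fetch('http://localhost:3000/api/login', buildLoginRequest('admin', 'admin'))
//   .then(function (res) { return res.json(); })
//   .then(function (data) { console.log(data.accessToken); });
```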

&lt;p&gt;Now, we have our backend ready and running on our local system. It is now time to move to the frontend part.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;NOTE&lt;/em&gt;: If you run into any problems while creating your web application with Amplication or during installation, feel free to reach out to the Amplication team on our Discord channel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating the frontend:
&lt;/h3&gt;

&lt;p&gt;In this part, we will build the frontend of our restaurant finder application using React, a popular JavaScript library for building user interfaces. We'll create the user interface for searching for restaurants by zip code, displaying search results, and adding new restaurants.&lt;/p&gt;

&lt;h4&gt;
  
  
  Setting up React Project:
&lt;/h4&gt;

&lt;p&gt;First, ensure your system has Node.js and npm (Node Package Manager) installed. If not, you can download and install them from the official Node.js website. &lt;/p&gt;

&lt;p&gt;Let's create a new React project using Create React App, a popular tool for setting up React applications with a predefined project structure. Open your terminal and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx create-react-app restaurant-finder
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will create a new directory called &lt;code&gt;restaurant-finder&lt;/code&gt; containing all the necessary files and folders for your React project. &lt;/p&gt;

&lt;h4&gt;
  
  
  Designing the User Interface with Components
&lt;/h4&gt;

&lt;p&gt;In React, you build user interfaces by creating components. Let's design the components for our restaurant finder application. &lt;/p&gt;

&lt;p&gt;There will be three main components in our project:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;SearchForm Component: This will be the main page of our project, where the user enters a &lt;code&gt;zipCode&lt;/code&gt; and all restaurants with that &lt;code&gt;zipCode&lt;/code&gt; are shown as a list. It will be served at the root route (&lt;code&gt;/&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;RestaurantForm Component: This page is responsible for adding a new restaurant to the list of restaurants. It will be a form with all the relevant details, served at &lt;code&gt;/restaurants/add&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;RestaurantList Component: This page shows all the restaurants available in our database, served at &lt;code&gt;/restaurants&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Directory Structure:
&lt;/h4&gt;

&lt;p&gt;To avoid any confusion, the file structure will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F4.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  Create a SearchForm Component
&lt;/h3&gt;

&lt;p&gt;Inside the &lt;code&gt;src/components&lt;/code&gt; folder of your project, create a new file named &lt;code&gt;SearchForm.js&lt;/code&gt;. This component will be responsible for the restaurant search form. Let's break down the code snippet of &lt;code&gt;SearchForm.js&lt;/code&gt; step by step:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Import Statements:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   import React, { useState } from 'react';
   import axios from 'axios';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The code begins with importing the necessary modules. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;React&lt;/code&gt; is imported to define React components.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;useState&lt;/code&gt; is a React hook used to manage component-level state.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;axios&lt;/code&gt; is imported to make HTTP requests to the backend API.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;getAuthToken&lt;/code&gt; Function:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   const getAuthToken = () =&amp;gt; {
     // Replace this with your logic to obtain the token
     return 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9....'; // A placeholder token
   };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;getAuthToken&lt;/code&gt; is a function that should be replaced with your actual logic to obtain an authentication token. &lt;/li&gt;
&lt;li&gt;In this code, it returns a placeholder token.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Axios Configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   const api = axios.create({
     baseURL: 'http://localhost:3000/api', // Adjust the base URL to your API endpoint
     headers: {
       'Content-Type': 'application/json',
       Authorization: `Bearer ${getAuthToken()}`, // Attach the token here
     },
   });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;api&lt;/code&gt; is an instance of Axios configured with a base URL for the backend API.&lt;/li&gt;
&lt;li&gt;It sets the &lt;code&gt;Content-Type&lt;/code&gt; header to indicate that the request body is in JSON format.&lt;/li&gt;
&lt;li&gt;It also attaches an authorization header with the token obtained from &lt;code&gt;getAuthToken&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;SearchForm&lt;/code&gt; Component:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   function SearchForm({ onSearch }) {
     // ...
   }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;SearchForm&lt;/code&gt; is a React functional component that takes a prop named &lt;code&gt;onSearch&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Inside this component, we will create the UI for searching restaurants by zip code.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Component State:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   const [zipCode, setZipCode] = useState('');
   const [restaurants, setRestaurants] = useState([]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;useState&lt;/code&gt; is used to define two pieces of component state: &lt;code&gt;zipCode&lt;/code&gt; and &lt;code&gt;restaurants&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;zipCode&lt;/code&gt; stores the user's input for the zip code.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;restaurants&lt;/code&gt; will store the search results.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;handleSearch&lt;/code&gt; Function:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   const handleSearch = () =&amp;gt; {
     if (zipCode.trim() !== '') {
       // API request to search for restaurants based on the provided zip code
       // ...
     }
   };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;handleSearch&lt;/code&gt; is a function that is called when the user clicks the "Search" button.&lt;/li&gt;
&lt;li&gt;It first checks if the &lt;code&gt;zipCode&lt;/code&gt; is not empty.&lt;/li&gt;
&lt;li&gt;If the &lt;code&gt;zipCode&lt;/code&gt; is not empty, it makes an API request to search for restaurants based on the provided zip code.&lt;/li&gt;
&lt;/ul&gt;
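&lt;p&gt;The emptiness check keeps obviously blank input out, but you may also want to validate the format before hitting the API. A small optional helper, assuming 5-digit US zip codes (adjust the pattern for your locale):&lt;/p&gt;

```javascript
// Returns true only for a 5-digit US zip code such as "10001".
// Swap the pattern if your data uses a different postal format.
function isValidZipCode(zipCode) {
  return /^\d{5}$/.test(zipCode.trim());
}
```

&lt;p&gt;&lt;code&gt;handleSearch&lt;/code&gt; could then guard with &lt;code&gt;if (isValidZipCode(zipCode))&lt;/code&gt; instead of the bare emptiness check.&lt;/p&gt;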

&lt;ol&gt;
&lt;li&gt;Making the API Request:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   api.get('/restaurants', {
     params: {
       where: { zipCode }, // Send the zipCode as a query parameter
     },
   })
     .then(response =&amp;gt; {
       // Handle the API response
       // ...
     })
     .catch(error =&amp;gt; console.error('Error searching restaurants:', error));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Axios is used to make a GET request to the &lt;code&gt;/restaurants&lt;/code&gt; endpoint of the backend API.&lt;/li&gt;
&lt;li&gt;It includes a query parameter &lt;code&gt;where&lt;/code&gt; with the specified &lt;code&gt;zipCode&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If the request is successful, the response is processed in the &lt;code&gt;.then&lt;/code&gt; block, and the search results are updated in the &lt;code&gt;restaurants&lt;/code&gt; state.&lt;/li&gt;
&lt;li&gt;If there's an error, it is caught and logged in the &lt;code&gt;.catch&lt;/code&gt; block.&lt;/li&gt;
&lt;/ul&gt;
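&lt;p&gt;The elided &lt;code&gt;.then&lt;/code&gt; block simply stores the payload in state. As a sketch of that handler, written as a standalone function (this assumes the response body is an array of restaurant objects, which is Amplication's default REST output):&lt;/p&gt;

```javascript
// Extracts the restaurant array from the Axios response and hands it to the
// state setter; falls back to an empty list if the body is not an array.
function handleSearchResponse(response, setRestaurants) {
  const results = Array.isArray(response.data) ? response.data : [];
  setRestaurants(results);
  return results;
}
```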

&lt;ol&gt;
&lt;li&gt;Rendering the UI:

&lt;ul&gt;
&lt;li&gt;The component returns a JSX structure for rendering the search form, search button, and search results.&lt;/li&gt;
&lt;li&gt;The search results are displayed as a list of restaurants if there are any.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's a breakdown of the &lt;code&gt;SearchForm.js&lt;/code&gt; code. It defines a React component for searching restaurants by zip code and makes API requests to retrieve restaurant data based on user input.&lt;/p&gt;

&lt;p&gt;You can find the final code of &lt;code&gt;searchForm.js&lt;/code&gt; here: &lt;a href="https://github.com/souravjain540/restaurant-finder-frontend/blob/main/src/components/searchForm.js" rel="noopener noreferrer"&gt;https://github.com/souravjain540/restaurant-finder-frontend/blob/main/src/components/searchForm.js&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a RestaurantForm Component
&lt;/h3&gt;

&lt;p&gt;Now, let's create a component for adding new restaurants. Create a file named &lt;code&gt;RestaurantForm.js&lt;/code&gt; inside the &lt;code&gt;src/components&lt;/code&gt; folder. This component will allow users to input restaurant details and submit them to the backend. Let's break down the code snippet of &lt;code&gt;RestaurantForm.js&lt;/code&gt; step by step:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Import Statements:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   import React, { useState } from 'react';
   import axios from 'axios';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The code begins with importing the necessary modules.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;React&lt;/code&gt; is imported to define React components.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;useState&lt;/code&gt; is a React hook used to manage component-level state.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;axios&lt;/code&gt; is imported to make HTTP requests to the backend API.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;getAuthToken&lt;/code&gt; Function:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   const getAuthToken = () =&amp;gt; {
     // Replace this with your logic to obtain the token
     return 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9....'; // A placeholder token
   };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;getAuthToken&lt;/code&gt; is a function that should be replaced with your actual logic to obtain an authentication token.&lt;/li&gt;
&lt;li&gt;In this code, it returns a placeholder token.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Axios Configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   const api = axios.create({
     baseURL: 'http://localhost:3000/api', // Adjust the base URL to your API endpoint
     headers: {
       'Content-Type': 'application/json',
       Authorization: `Bearer ${getAuthToken()}`, // Attach the token here
     },
   });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;api&lt;/code&gt; is an instance of Axios configured with a base URL for the backend API.&lt;/li&gt;
&lt;li&gt;It sets the &lt;code&gt;Content-Type&lt;/code&gt; header to indicate that the request body is in JSON format.&lt;/li&gt;
&lt;li&gt;It also attaches an authorization header with the token obtained from &lt;code&gt;getAuthToken&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;RestaurantForm&lt;/code&gt; Component:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   function RestaurantForm({ onFormSubmit }) {
     // ...
   }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;RestaurantForm&lt;/code&gt; is a React functional component that takes a prop named &lt;code&gt;onFormSubmit&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Inside this component, we will create the UI for adding a new restaurant.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Component State:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   const [name, setName] = useState('');
   const [address, setAddress] = useState('');
   const [zipCode, setZipCode] = useState('');
   const [phone, setPhone] = useState('');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;useState&lt;/code&gt; is used to define four pieces of component state: &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;address&lt;/code&gt;, &lt;code&gt;zipCode&lt;/code&gt;, and &lt;code&gt;phone&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;These states will store the user's input for the restaurant's name, address, zip code, and phone number.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;handleFormSubmit&lt;/code&gt; Function:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   const handleFormSubmit = (e) =&amp;gt; {
     e.preventDefault();

     const newRestaurant = {
       name,
       address,
       zipCode,
       phone,
     };

     api.post('/restaurants', newRestaurant)
       .then(response =&amp;gt; {
         // Call the onFormSubmit function with the newly created restaurant
         onFormSubmit(response.data);

         // Clear the form input fields
         setName('');
         setAddress('');
         setZipCode('');
         setPhone('');

         // Refresh the page after a successful submission
         window.location.reload();
       })
       .catch(error =&amp;gt; console.error('Error creating restaurant:', error));
   };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;handleFormSubmit&lt;/code&gt; is a function that is called when the user submits the restaurant form.&lt;/li&gt;
&lt;li&gt;It first prevents the default form submission behavior.&lt;/li&gt;
&lt;li&gt;It creates a &lt;code&gt;newRestaurant&lt;/code&gt; object with the values entered in the form fields.&lt;/li&gt;
&lt;li&gt;It makes a POST request to the &lt;code&gt;/restaurants&lt;/code&gt; endpoint of the backend API to create a new restaurant.&lt;/li&gt;
&lt;li&gt;If the request is successful, it calls the &lt;code&gt;onFormSubmit&lt;/code&gt; function with the newly created restaurant data.&lt;/li&gt;
&lt;li&gt;It also clears the form input fields, and then refreshes the page to reflect the updated restaurant list.&lt;/li&gt;
&lt;li&gt;If there's an error, it is caught and logged.&lt;/li&gt;
&lt;/ul&gt;
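&lt;p&gt;Before posting, it can help to normalize the form values. A small optional helper that trims each field and flags any that are still empty (the field names match the entity defined earlier; this helper is not part of the original component):&lt;/p&gt;

```javascript
// Trims all four entity fields and reports which required fields
// are still empty after trimming.
function buildRestaurantPayload(fields) {
  const payload = {
    name: fields.name.trim(),
    address: fields.address.trim(),
    zipCode: fields.zipCode.trim(),
    phone: fields.phone.trim(),
  };
  const missing = Object.keys(payload).filter(function (key) {
    return payload[key] === '';
  });
  return { payload, missing };
}
```

&lt;p&gt;&lt;code&gt;handleFormSubmit&lt;/code&gt; could call this first and skip the POST whenever &lt;code&gt;missing&lt;/code&gt; is non-empty.&lt;/p&gt;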

&lt;ol&gt;
&lt;li&gt;Rendering the UI:

&lt;ul&gt;
&lt;li&gt;The component returns a JSX structure for rendering the restaurant form.&lt;/li&gt;
&lt;li&gt;The form includes fields for entering the restaurant's name, address, zip code, and phone number.&lt;/li&gt;
&lt;li&gt;When the user submits the form, the &lt;code&gt;handleFormSubmit&lt;/code&gt; function is called.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's a breakdown of the &lt;code&gt;RestaurantForm.js&lt;/code&gt; code. It defines a React component for adding a new restaurant to the system, and it makes an API request to create the restaurant on form submission.&lt;/p&gt;

&lt;p&gt;You can view the whole code here: &lt;a href="https://github.com/souravjain540/restaurant-finder-frontend/blob/main/src/components/restaurantForm.js" rel="noopener noreferrer"&gt;https://github.com/souravjain540/restaurant-finder-frontend/blob/main/src/components/restaurantForm.js&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating the RestaurantList component:
&lt;/h3&gt;

&lt;p&gt;Next, create a file named &lt;code&gt;RestaurantList.js&lt;/code&gt; inside the &lt;code&gt;src/components&lt;/code&gt; folder. This component will display the list of restaurants returned by the search. Let's break down the code snippet of &lt;code&gt;RestaurantList.js&lt;/code&gt; step by step:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Import Statements:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   import React, { useState, useEffect } from 'react';
   import axios from 'axios';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The code begins with importing the necessary modules.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;React&lt;/code&gt; is imported to define React components.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;useState&lt;/code&gt; and &lt;code&gt;useEffect&lt;/code&gt; are React hooks used to manage component-level state and side effects.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;axios&lt;/code&gt; is imported to make HTTP requests to the backend API.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;getAuthToken&lt;/code&gt; Function:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   const getAuthToken = () =&amp;gt; {
     // Replace this with your logic to obtain the token
     return 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9....'; // A placeholder token
   };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;getAuthToken&lt;/code&gt; is a function that should be replaced with your actual logic to obtain an authentication token.&lt;/li&gt;
&lt;li&gt;In this code, it returns a placeholder token.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Axios Configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   const api = axios.create({
     baseURL: 'http://localhost:3000/api', // Adjust the base URL to your API endpoint
     headers: {
       'Content-Type': 'application/json',
       Authorization: `Bearer ${getAuthToken()}`, // Attach the token here
     },
   });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;api&lt;/code&gt; is an instance of Axios configured with a base URL for the backend API.&lt;/li&gt;
&lt;li&gt;It sets the &lt;code&gt;Content-Type&lt;/code&gt; header to indicate that the request body is in JSON format.&lt;/li&gt;
&lt;li&gt;It also attaches an authorization header with the token obtained from &lt;code&gt;getAuthToken&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;RestaurantList&lt;/code&gt; Component:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   function RestaurantList() {
     // ...
   }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;RestaurantList&lt;/code&gt; is a React functional component that displays a list of restaurants.&lt;/li&gt;
&lt;li&gt;Inside this component, we will fetch the list of restaurants from the backend and display them.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Component State:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   const [restaurants, setRestaurants] = useState([]);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;useState&lt;/code&gt; is used to define a state variable &lt;code&gt;restaurants&lt;/code&gt; that will store the list of restaurants fetched from the API.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Fetching Restaurants:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   useEffect(() =&amp;gt; {
     api.get('/restaurants')
       .then(response =&amp;gt; {
         setRestaurants(response.data);
       })
       .catch(error =&amp;gt; console.error('Error fetching restaurants:', error));
   }, []);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;useEffect&lt;/code&gt; hook is used to fetch the list of restaurants when the component mounts (i.e., when it first renders).&lt;/li&gt;
&lt;li&gt;It makes a GET request to the &lt;code&gt;/restaurants&lt;/code&gt; endpoint of the backend API.&lt;/li&gt;
&lt;li&gt;When the response is received, it sets the &lt;code&gt;restaurants&lt;/code&gt; state with the data.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Deleting Restaurants:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   const handleDelete = (restaurantId) =&amp;gt; {
     api.delete(`/restaurants/${restaurantId}`)
       .then(() =&amp;gt; {
         // Filter out the deleted restaurant
         setRestaurants(restaurants.filter(restaurant =&amp;gt; restaurant.id !== restaurantId));
       })
       .catch(error =&amp;gt; console.error('Error deleting restaurant:', error));
   };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;handleDelete&lt;/code&gt; function is called when a user clicks the "Delete" button next to a restaurant.&lt;/li&gt;
&lt;li&gt;It makes a DELETE request to the &lt;code&gt;/restaurants/{restaurantId}&lt;/code&gt; endpoint of the backend API to delete the restaurant.&lt;/li&gt;
&lt;li&gt;After successful deletion, it updates the &lt;code&gt;restaurants&lt;/code&gt; state by filtering out the deleted restaurant.&lt;/li&gt;
&lt;/ul&gt;
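&lt;p&gt;The state update after deletion is a plain immutable filter; pulled out as a standalone function, the logic looks like this:&lt;/p&gt;

```javascript
// Returns a new array with the restaurant matching restaurantId removed,
// leaving the original array untouched (as React state updates require).
function removeRestaurantById(restaurants, restaurantId) {
  return restaurants.filter(function (restaurant) {
    return restaurant.id !== restaurantId;
  });
}
```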

&lt;ol&gt;
&lt;li&gt;Rendering the UI:

&lt;ul&gt;
&lt;li&gt;The component returns a JSX structure for rendering the list of restaurants.&lt;/li&gt;
&lt;li&gt;It maps over the &lt;code&gt;restaurants&lt;/code&gt; array and displays each restaurant's name, address, phone number, and zip code.&lt;/li&gt;
&lt;li&gt;A "Delete" button is provided for each restaurant, which triggers the &lt;code&gt;handleDelete&lt;/code&gt; function when clicked.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's a breakdown of the &lt;code&gt;RestaurantList.js&lt;/code&gt; code. It defines a React component for displaying a list of restaurants fetched from the backend and provides the ability to delete restaurants.&lt;/p&gt;

&lt;p&gt;Find the complete code snippet here: &lt;a href="https://github.com/souravjain540/restaurant-finder-frontend/blob/main/src/components/restaurantList.js" rel="noopener noreferrer"&gt;https://github.com/souravjain540/restaurant-finder-frontend/blob/main/src/components/restaurantList.js&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Changing the App.js file:
&lt;/h3&gt;

&lt;p&gt;Let's break down the code snippet of &lt;code&gt;App.js&lt;/code&gt; step by step:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Import Statements:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   import React from 'react';
   import { BrowserRouter as Router, Routes, Route } from 'react-router-dom';
   import RestaurantList from './components/restaurantList.js';
   import RestaurantForm from './components/restaurantForm.js';
   import SearchForm from './components/searchForm.js';
   import './index.css'; // Import the CSS file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The code begins with importing the necessary modules and components.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;React&lt;/code&gt; is imported to define React components.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;BrowserRouter&lt;/code&gt;, &lt;code&gt;Routes&lt;/code&gt;, and &lt;code&gt;Route&lt;/code&gt; are imported from &lt;code&gt;react-router-dom&lt;/code&gt; for defining and handling routes in the application.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RestaurantList&lt;/code&gt;, &lt;code&gt;RestaurantForm&lt;/code&gt;, and &lt;code&gt;SearchForm&lt;/code&gt; are imported as components from their respective file paths.&lt;/li&gt;
&lt;li&gt;The CSS file is imported to apply styles to the application.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;code&gt;App&lt;/code&gt; Component:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   function App() {
     // ...
   }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;App&lt;/code&gt; is a React functional component that serves as the main component for the application.&lt;/li&gt;
&lt;li&gt;Inside this component, you define the routes, layout, and functionality of the app.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Event Handlers:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   const handleSearch = (searchResults) =&amp;gt; {
     // Handle the search results (e.g., update state)
     console.log('Search results:', searchResults);
   };
   const handleFormSubmit = (newRestaurantData) =&amp;gt; {
     // Update the state with the new restaurant data
     console.log('Restaurant data: ', newRestaurantData);
   };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Two event handler functions, &lt;code&gt;handleSearch&lt;/code&gt; and &lt;code&gt;handleFormSubmit&lt;/code&gt;, are defined. These functions are used to handle data received from child components.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;handleSearch&lt;/code&gt; is intended to handle search results data, and &lt;code&gt;handleFormSubmit&lt;/code&gt; is intended to handle new restaurant data.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Router Setup:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   return (
     &amp;lt;Router&amp;gt;
       &amp;lt;div className="App"&amp;gt;
         &amp;lt;h1&amp;gt;Restaurant Finder&amp;lt;/h1&amp;gt;

         &amp;lt;Routes&amp;gt;
           &amp;lt;Route path="/" element={&amp;lt;SearchForm onSearch={handleSearch} /&amp;gt;} /&amp;gt;
           &amp;lt;Route path="/restaurants" element={&amp;lt;RestaurantList /&amp;gt;} /&amp;gt;
           &amp;lt;Route path="/restaurants/add" element={&amp;lt;RestaurantForm onFormSubmit={handleFormSubmit} /&amp;gt;} /&amp;gt;
           &amp;lt;Route path="/restaurants/edit/:id" element={&amp;lt;RestaurantForm /&amp;gt;} /&amp;gt;
         &amp;lt;/Routes&amp;gt;
       &amp;lt;/div&amp;gt;
       &amp;lt;footer&amp;gt;
         {/* Footer content */}
       &amp;lt;/footer&amp;gt;
     &amp;lt;/Router&amp;gt;
   );
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;Router&lt;/code&gt; component is used to wrap the entire application, enabling client-side routing.&lt;/li&gt;
&lt;li&gt;Inside the router, there is a &lt;code&gt;div&lt;/code&gt; with the class name "App" that serves as the main container for the application.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;h1&lt;/code&gt; element displays the title "Restaurant Finder."&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="5"&gt;
&lt;li&gt;
&lt;p&gt;Routes Configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inside the &lt;code&gt;Routes&lt;/code&gt; component, different routes are defined using the &lt;code&gt;Route&lt;/code&gt; component from &lt;code&gt;react-router-dom&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The routes specify which components to render when certain URLs are accessed.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Route Paths and Components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/&lt;/code&gt; path is associated with the &lt;code&gt;SearchForm&lt;/code&gt; component. The &lt;code&gt;onSearch&lt;/code&gt; prop is passed to it, allowing it to handle search results.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/restaurants&lt;/code&gt; path is associated with the &lt;code&gt;RestaurantList&lt;/code&gt; component.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/restaurants/add&lt;/code&gt; path is associated with the &lt;code&gt;RestaurantForm&lt;/code&gt; component. The &lt;code&gt;onFormSubmit&lt;/code&gt; prop is passed to it to handle form submissions.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/restaurants/edit/:id&lt;/code&gt; path is associated with the &lt;code&gt;RestaurantForm&lt;/code&gt; component, presumably for editing restaurant data.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Footer Section:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Below the &lt;code&gt;Router&lt;/code&gt; content, there is a footer section with links to the author's Twitter profile and a mention of "Backend Powered by Amplication."&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is an overview of the &lt;code&gt;App.js&lt;/code&gt; code, which sets up the routing and components for your restaurant finder application. It defines how different components are rendered based on the URL paths and handles events with the defined event handler functions.&lt;/p&gt;

&lt;p&gt;Have a look at the final code snippet of the &lt;code&gt;App.js&lt;/code&gt; file here: &lt;a href="https://github.com/souravjain540/restaurant-finder-frontend/blob/main/src/App.js" rel="noopener noreferrer"&gt;https://github.com/souravjain540/restaurant-finder-frontend/blob/main/src/App.js&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, I encourage everyone to create the frontend by themselves according to their own creativity, but if you want it to look like mine, please copy the &lt;a href="https://github.com/souravjain540/restaurant-finder-frontend/blob/main/public/index.html" rel="noopener noreferrer"&gt;index.html&lt;/a&gt; and &lt;a href="https://github.com/souravjain540/restaurant-finder-frontend/blob/main/src/index.css" rel="noopener noreferrer"&gt;index.css&lt;/a&gt; files as well.&lt;/p&gt;

&lt;p&gt;When finished, your app will look like this:&lt;/p&gt;

&lt;h4&gt;
  
  
  Root directory (searchForm):
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F5.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h4&gt;
  
  
  restaurantList.js:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F6.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h4&gt;
  
  
  restaurantForm.js
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fcreating-a-restaurant-finder-full-stack-web-application-using-reactjs-and-amplication%2F7.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;You can have a look at the complete frontend code here: &lt;a href="https://github.com/souravjain540/restaurant-finder-frontend/tree/main" rel="noopener noreferrer"&gt;https://github.com/souravjain540/restaurant-finder-frontend/tree/main&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have any problem with any part of this tutorial, please feel free to contact me on my &lt;a href="https://twitter.com/Sauain" rel="noopener noreferrer"&gt;Twitter account&lt;/a&gt;. Thanks for giving it a read.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>javascript</category>
      <category>api</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Extending GitOps: Effortless continuous integration and deployment on Kubernetes</title>
      <dc:creator>Levi van Noort</dc:creator>
      <pubDate>Tue, 26 Dec 2023 09:26:49 +0000</pubDate>
      <link>https://forem.com/amplication/extending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes-1oem</link>
      <guid>https://forem.com/amplication/extending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes-1oem</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;Over the last decade, there have been notable shifts in the process of delivering source code. One of the more recent adaptations on the deployment side of this process has been the declarative and version-controlled description of an application's desired infrastructure state and configuration - commonly referred to as 'GitOps'. This approach has gained popularity in the context of cloud-native applications and container orchestration platforms, such as Kubernetes, where managing complex, distributed systems can be challenging.&lt;/p&gt;

&lt;p&gt;As this desired state is of a declarative nature, it points to a specific, static version of the application. This offers significant benefits: changes can be audited before they are made, the application can be rolled back to a previous state, and the setup remains reproducible. But without a pipeline making changes to the application's state/configuration, how can we move to a more recent application version while avoiding manual version adjustments?&lt;/p&gt;

&lt;p&gt;This is where Argo CD Image Updater comes in; it verifies whether a more recent version of a container image is available, and subsequently triggers the necessary updates to the application's Kubernetes resources, or optionally commits these changes to the associated version control repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview:
&lt;/h2&gt;

&lt;p&gt;Prior to diving into the technical implementation, let's establish an overview of the GitOps process and highlight the role of Argo CD Image Updater within this process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Default GitOps
&lt;/h3&gt;

&lt;p&gt;The first part of the process starts with a developer modifying the source code of the application and pushing the changes back to the version control system. This action then initiates a workflow or pipeline that builds and tests the application. The outcome is an artifact in the form of a container image, which is subsequently pushed to an image registry.&lt;/p&gt;

&lt;p&gt;In a second - detached - part of the process, the cluster configuration repository is the single source of truth regarding the &lt;em&gt;desired state&lt;/em&gt; of the application configuration. Argo CD periodically monitors the Kubernetes cluster to see if the &lt;em&gt;live state&lt;/em&gt; differs from the &lt;em&gt;desired state&lt;/em&gt;. When there is a difference, depending on the synchronization strategy, Argo CD tries to revert to the &lt;em&gt;desired state&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F0.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  Extended GitOps
&lt;/h3&gt;

&lt;p&gt;Compared to the default process, in this extended variant another Argo CD component is added to the Kubernetes cluster. The Argo CD Image Updater component verifies whether a more recent version of a container image exists within the image registry. If such a version is identified, the component either directly or indirectly updates the running application. In the next section, we'll delve into the configuration options for the Argo CD Image Updater as well as the implementation of the component.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;
  
  
  Configuration:
&lt;/h2&gt;

&lt;p&gt;Before the technical implementation, we'll familiarize ourselves with the configuration options Argo CD Image Updater provides. This configuration comes down to two concepts: the &lt;code&gt;write back method&lt;/code&gt; and the &lt;code&gt;update strategy&lt;/code&gt;. Both have options tailored to specific situations, so it is good to understand what the options are and how they translate to the technical implementation.&lt;/p&gt;

&lt;p&gt;For this configuration/demonstration the following repositories can be referenced:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/amplication/bookstore-application" rel="noopener noreferrer"&gt;bookstore-application&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/amplication/bookstore-cluster-configuration" rel="noopener noreferrer"&gt;bookstore-cluster-configuration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Write Back Method
&lt;/h3&gt;

&lt;p&gt;At the moment of writing, Argo CD Image Updater supports two methods of propagating new image versions to Argo CD. These methods, also referred to as &lt;em&gt;write back&lt;/em&gt; methods, are &lt;code&gt;argocd&lt;/code&gt; and &lt;code&gt;git&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;argocd&lt;/code&gt;: This default &lt;em&gt;write back&lt;/em&gt; method is pseudo-persistent - when deleting an application or synchronizing the configuration in version control, any changes made to an application by Argo CD Image Updater are lost - making it best suited for imperatively created resources. This default method doesn't require additional configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;git&lt;/code&gt;: The other &lt;em&gt;write back&lt;/em&gt; method is the persistent/declarative option: when a more recent version of a container image is identified, Argo CD Image Updater stores the parameter override alongside the application's resource manifests. It stores the override in a file named &lt;code&gt;.argocd-source-&amp;lt;application-name&amp;gt;.yaml&lt;/code&gt;, reducing the risk of a merge conflict in the application's resource manifests. To change the &lt;em&gt;write back&lt;/em&gt; method, an annotation needs to be set on the Argo CD &lt;code&gt;Application&lt;/code&gt; resource. In addition, the branch to commit back to can optionally be changed from the default value, the application's &lt;code&gt;.spec.source.targetRevision&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;From an audit trail and reproducibility perspective, this is the desired option. It enables automatic continuous deployment while keeping the properties GitOps is known for.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;argocd-image-updater.argoproj.io/write-back-method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;
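&lt;p&gt;As a hedged illustration, the override file written by the &lt;code&gt;git&lt;/code&gt; method could look like the sketch below. The application name (&lt;code&gt;bookstore&lt;/code&gt;), tag value, and Helm parameter name are assumptions for a Helm-based application; the exact parameter names depend on the chart's values layout.&lt;/p&gt;

```yaml
# Hypothetical contents of .argocd-source-bookstore.yaml, written by
# Argo CD Image Updater when using the git write back method.
# The parameter name (image.tag) and value are illustrative.
helm:
  parameters:
    - name: image.tag
      value: v1.0.1
      forcestring: true
```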

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
When using the &lt;code&gt;git&lt;/code&gt; write back method, credentials configured for Argo CD will be re-used. A dedicated set of credentials can be provided, this and more configuration can be found in the &lt;a href="https://argocd-image-updater.readthedocs.io/en/stable/basics/update-methods" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Update Strategies
&lt;/h3&gt;

&lt;p&gt;In addition to choosing a write back method, we need to decide on an update strategy. This strategy defines how Argo CD Image Updater finds new versions of an image that is to be updated. Currently four strategies are supported: &lt;code&gt;semver&lt;/code&gt;, &lt;code&gt;latest&lt;/code&gt;, &lt;code&gt;digest&lt;/code&gt;, and &lt;code&gt;name&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Before looking at their respective differences, we'll need to know what &lt;code&gt;mutable&lt;/code&gt; and &lt;code&gt;immutable&lt;/code&gt; image tags are. A mutable repository has tags that can be overwritten by a newer image, whereas when a repository's configuration states that tags must be immutable, a tag can't be overwritten by a newer image. Each of the options below expects &lt;em&gt;immutable&lt;/em&gt; tags to be used; if a mutable &lt;em&gt;tag&lt;/em&gt; is used, the &lt;em&gt;digest&lt;/em&gt; strategy should be used.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;semver&lt;/code&gt;: Updates the application to the latest version of an image in an image registry while taking semantic versioning constraints into consideration - following the format &lt;code&gt;X.Y.Z&lt;/code&gt;, where &lt;code&gt;X&lt;/code&gt; is the major version, &lt;code&gt;Y&lt;/code&gt; is the minor version and &lt;code&gt;Z&lt;/code&gt; the patch version. The option can be configured to only bump to newer minor or patch versions; it also supports pre-release versions through additional configuration. In the example below, the application would be updated to a newer patch version, but not upgraded when a newer minor or major version is present.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;argocd-image-updater.argoproj.io/&amp;lt;alias&amp;gt;.update-strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;semver&lt;/span&gt;
&lt;span class="na"&gt;argocd-image-updater.argoproj.io/image-list&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;alias&amp;gt;=&amp;lt;repository-name&amp;gt;/&amp;lt;image-name&amp;gt;[:&amp;lt;version_constraint&amp;gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;latest&lt;/code&gt;: Updates the application with the image that has the most recent build date. When a specific build has multiple tags, Argo CD Image Updater picks the last tag from the list sorted in lexically descending order. Optionally, if you want to consider only certain tags, an annotation with a regular expression can be used. Similarly, an annotation can be used to ignore a list of tags.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;argocd-image-updater.argoproj.io/&amp;lt;alias&amp;gt;.update-strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;
&lt;span class="na"&gt;argocd-image-updater.argoproj.io/image-list&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;alias&amp;gt;=&amp;lt;repository-name&amp;gt;/&amp;lt;image-name&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;digest&lt;/code&gt;: Updates the application based on a change of a mutable tag within the registry. When this strategy is used, image digests are used to update the application, so the image on the cluster for &lt;code&gt;&amp;lt;repository-name&amp;gt;/&amp;lt;image-name&amp;gt;:&amp;lt;tag_name&amp;gt;&lt;/code&gt; appears as &lt;code&gt;&amp;lt;repository-name&amp;gt;/&amp;lt;image-name&amp;gt;@sha256:&amp;lt;hash&amp;gt;&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;argocd-image-updater.argoproj.io/&amp;lt;alias&amp;gt;.update-strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;digest&lt;/span&gt;
&lt;span class="na"&gt;argocd-image-updater.argoproj.io/image-list&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;alias&amp;gt;=&amp;lt;repository-name&amp;gt;/&amp;lt;image-name&amp;gt;:&amp;lt;tag_name&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;name&lt;/code&gt;: Updates the application based on a lexical sort of the image tags and uses the last tag in the sorted list, which is useful when images are tagged with a date/time. Similar to the &lt;code&gt;latest&lt;/code&gt; strategy, a regular expression can be used to consider only specific tags.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;argocd-image-updater.argoproj.io/&amp;lt;alias&amp;gt;.update-strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;name&lt;/span&gt;
&lt;span class="na"&gt;argocd-image-updater.argoproj.io/image-list&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;alias&amp;gt;=&amp;lt;repository-name&amp;gt;/&amp;lt;image-name&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;
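&lt;p&gt;Putting the two concepts together, an Argo CD &lt;code&gt;Application&lt;/code&gt; could be annotated as in the sketch below. This is illustrative only: the chart path, image location, and namespaces are assumptions based on the bookstore repositories referenced above.&lt;/p&gt;

```yaml
# Illustrative Application manifest combining the semver update strategy
# with the git write back method. Repository URL, chart path, and image
# location are assumptions for the bookstore example.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bookstore
  namespace: argocd
  annotations:
    argocd-image-updater.argoproj.io/image-list: bookstore=ghcr.io/amplication/bookstore-application
    argocd-image-updater.argoproj.io/bookstore.update-strategy: semver
    argocd-image-updater.argoproj.io/write-back-method: git
spec:
  project: default
  source:
    repoURL: https://github.com/amplication/bookstore-cluster-configuration
    targetRevision: main
    path: charts/bookstore-application
  destination:
    server: https://kubernetes.default.svc
    namespace: bookstore
```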

&lt;h2&gt;
  
  
  Implementation:
&lt;/h2&gt;

&lt;p&gt;We'll start out by creating the two repositories seen in the overview: a &lt;code&gt;source code&lt;/code&gt; and a &lt;code&gt;cluster configuration&lt;/code&gt; repository. Theoretically, both could be housed in the same repository, but a separation of concerns is advised.&lt;/p&gt;

&lt;p&gt;The next step is to set up the continuous integration pipeline that creates the artifact, i.e. the container image, used as the starting point of the continuous deployment process. In this walkthrough we'll use GitHub for our repositories and GitHub Actions for our pipeline. However, this setup can be replicated in most popular version control/pipeline combinations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Continuous Integration Workflow
&lt;/h3&gt;

&lt;p&gt;Within the source code repository, under the &lt;code&gt;.github/workflows/&lt;/code&gt; directory, we'll create a GitHub Actions workflow named &lt;code&gt;continuous-integration.yaml&lt;/code&gt;. This workflow consists of checking out the source code, building the container image and pushing it to the GitHub Packages image registry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;continuous-integration&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;REGISTRY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io&lt;/span&gt;
  &lt;span class="na"&gt;IMAGE_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.repository }}&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build-and-push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build and push container image&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;
      &lt;span class="na"&gt;packages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;checkout source code&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;authenticate with repository&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/login-action@v3&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;registry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io&lt;/span&gt;
          &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.actor }}&lt;/span&gt;
          &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.GITHUB_TOKEN }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image metadata&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/metadata-action@v4&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;meta&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;images&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;env.REGISTRY&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}/${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;env.IMAGE_NAME&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
          &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;type=sha,prefix=sha-&lt;/span&gt;
            &lt;span class="s"&gt;type=ref,event=pr,prefix=pr-&lt;/span&gt;
            &lt;span class="s"&gt;type=ref,event=tag,prefix=tag-&lt;/span&gt;
            &lt;span class="s"&gt;type=raw,value=${{ github.run_id }},prefix=gh-&lt;/span&gt;
            &lt;span class="s"&gt;type=raw,value=${{ github.ref_name }}&lt;/span&gt;
            &lt;span class="s"&gt;type=raw,value=latest,enable=${{ github.ref_name == 'main' }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build and push&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/build-push-action@v5&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
          &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.meta.outputs.tags }}&lt;/span&gt;
          &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.meta.outputs.labels }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For simplicity's sake, the image registry is made public so that additional authentication from within the cluster isn't needed. You can find a detailed tutorial on how to make a GitHub Package public &lt;a href="https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/managing-repository-settings/setting-repository-visibility" rel="noopener noreferrer"&gt;here&lt;/a&gt;. If you prefer using a private registry, refer to &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="noopener noreferrer"&gt;this&lt;/a&gt; guide to enable pulling from the private registry within the cluster.&lt;/p&gt;

&lt;p&gt;We can see that after we commit to our &lt;code&gt;main&lt;/code&gt; branch, images are automatically pushed to our GitHub Packages image registry.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;If we now release everything in the &lt;code&gt;main&lt;/code&gt; branch with a semantic version of &lt;code&gt;v1.0.0&lt;/code&gt;, we can see the newer version of the application image; the &lt;code&gt;sha-&amp;lt;number&amp;gt;&lt;/code&gt; tag is also placed on the newer image, as no new commit was made between the previous push to &lt;code&gt;main&lt;/code&gt; and the tag.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F4.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  Cluster Configuration
&lt;/h3&gt;

&lt;p&gt;For our application's Kubernetes resources, we'll create a Helm chart. In the cluster configuration repository, under the charts directory, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm create &amp;lt;application-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;This scaffolds the following directory structure:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  charts/&amp;lt;application-name&amp;gt;
    ├── .helmignore   # patterns to ignore when packaging Helm charts.
    ├── Chart.yaml    # information about your chart
    ├── values.yaml   # default values for your templates
    ├── charts/       # chart dependencies
    └── templates/    # template files
        └── tests/    # test files
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Argo CD &amp;amp; Argo CD Image Updater Installation
&lt;/h3&gt;

&lt;p&gt;Start by setting up a Kubernetes cluster. For this demonstration a local cluster is used, created through minikube - other tools like &lt;code&gt;kind&lt;/code&gt; or &lt;code&gt;k3s&lt;/code&gt; can also be used. After installing minikube, the following command can be run to start the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to set up Argo CD within the cluster; this can be done by running the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To get access to the running Argo CD instance, we can use port-forwarding to connect to the API server without having to expose the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward svc/argocd-server &lt;span class="nt"&gt;-n&lt;/span&gt; argocd 8080:443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An initial password is generated for the admin account and stored under the &lt;code&gt;password&lt;/code&gt; field in a secret named &lt;code&gt;argocd-initial-admin-secret&lt;/code&gt;. Use it to log in with the username &lt;code&gt;admin&lt;/code&gt;, then change the password under 'User Info'. A safer option would be to use SSO.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; argocd get secret argocd-initial-admin-secret &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.data.password}"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In addition, we install Argo CD Image Updater into the cluster; this could also be done declaratively, as we'll see in the upcoming sections.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/argoproj-labs/argocd-image-updater/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have access to the Argo CD user interface, we'll look at the configuration of Argo CD Applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Argo CD Authentication
&lt;/h3&gt;

&lt;p&gt;Before we can configure Argo CD to start managing the application's Kubernetes resources, we need to make sure that Argo CD can access the cluster configuration repository. Repository details are stored in secret resources. Authentication can be handled in different ways, but for this demonstration we'll use HTTPS.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;syntax&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;repository-name&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;argocd.argoproj.io/secret-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repository&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git&lt;/span&gt;
  &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/&amp;lt;organization-or-username&amp;gt;/&amp;lt;repository-name&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;github-pat&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;github-username&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before we can declaratively create the secret used for authentication, we need to create the GitHub Personal Access Token (PAT) used in the &lt;code&gt;password&lt;/code&gt; field of the secret. Navigate to &lt;code&gt;Settings&lt;/code&gt; via the profile navigation bar, then click &lt;code&gt;Developer settings&lt;/code&gt; &amp;gt; &lt;code&gt;Personal access tokens&lt;/code&gt; &amp;gt; &lt;code&gt;Fine-grained tokens&lt;/code&gt; &amp;gt; &lt;code&gt;Generate new token&lt;/code&gt;. Set a &lt;code&gt;token name&lt;/code&gt;, e.g., &lt;code&gt;argocd-repository-cluster-configuration&lt;/code&gt;, and set an &lt;code&gt;expiration&lt;/code&gt;; a year is a reasonable choice. &lt;/p&gt;

&lt;p&gt;Set the &lt;code&gt;Resource owner&lt;/code&gt; to the user or organization that owns the cluster configuration repository. Set &lt;code&gt;Repository access&lt;/code&gt; to 'Only select repositories' and grant access only to the cluster configuration repository. Lastly, give the token scoped permissions; for the integration to work we need &lt;code&gt;Contents - Access: Read and write&lt;/code&gt; &amp;amp; &lt;code&gt;Metadata - Access: Read-only&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./secret/cluster-configuration-repository.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;blog-cluster-configuration&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;argocd.argoproj.io/secret-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;repository&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git&lt;/span&gt;
  &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/amplication/bookstore-cluster-configuration&lt;/span&gt;
  &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;github-pat&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;levivannoort&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply this secret against the cluster by using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ./secret/cluster-configuration-repository.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looking at the Argo CD user interface, under Settings &amp;gt; Repositories we can see whether authentication against the GitHub repository has succeeded. We should now be able to use the repository definition in our Argo CD application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F5.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  Argo CD Configuration
&lt;/h3&gt;

&lt;p&gt;Now that we can authenticate against GitHub and fetch the contents of the cluster configuration repository, we can start defining our Argo CD applications and managing the application's Kubernetes resources. This can be done in an imperative or declarative manner; for this demonstration we'll configure the Argo CD application declaratively. Let's look at the following manifest:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;syntax&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;application-name&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
  &lt;span class="na"&gt;finalizers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;resources-finalizer.argocd.argoproj.io&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/&amp;lt;organization-name&amp;gt;/&amp;lt;repository-name&amp;gt;.git&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;charts/&amp;lt;application-name&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;application-name&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;allowEmpty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;syncOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CreateNamespace=true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For instrumenting Argo CD Image Updater we'll need to add the previously mentioned annotations. We're going for the &lt;code&gt;semver&lt;/code&gt; update strategy with the &lt;code&gt;argocd&lt;/code&gt; write-back method. As both the chosen update strategy and the write-back method are the defaults, we don't strictly need to specify those annotations.&lt;/p&gt;

&lt;p&gt;Add the following annotations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;argocd-image-updater.argoproj.io/image-list&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bookstore=ghcr.io/amplication/bookstore-application&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;example-application.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bookstore&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;argocd-image-updater.argoproj.io/write-back-method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
    &lt;span class="na"&gt;argocd-image-updater.argoproj.io/bookstore.update-strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;semver&lt;/span&gt;
    &lt;span class="na"&gt;argocd-image-updater.argoproj.io/image-list&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bookstore=ghcr.io/amplication/bookstore-application&lt;/span&gt;
  &lt;span class="na"&gt;finalizers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;resources-finalizer.argocd.argoproj.io&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/amplication/bookstore-cluster-configuration.git&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;charts/bookstore&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bookstore&lt;/span&gt;
  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;allowEmpty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;syncOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CreateNamespace=true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply this configuration against the cluster by using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &amp;lt;manifest-name&amp;gt;.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Demonstration
&lt;/h2&gt;

&lt;p&gt;After creating the Argo CD application, we can see that it is healthy and running within the cluster. As our application needs a database to run, we added a dependency on a PostgreSQL Helm chart so a database runs in the cluster as well - additional resources therefore appear next to the default Helm chart Kubernetes resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F6.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
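Such a database dependency is declared in the chart's Chart.yaml. A minimal sketch, assuming the Bitnami PostgreSQL chart - the version pin below is illustrative, not the one used in the original setup:

```yaml
# charts/bookstore/Chart.yaml (excerpt) - illustrative sketch
# The version below is an assumption; pin to whatever version you validated.
dependencies:
  - name: postgresql
    version: "13.x.x"
    repository: https://charts.bitnami.com/bitnami
```

After adding the dependency, run `helm dependency update charts/bookstore` to fetch the chart into the `charts/` subdirectory.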



&lt;p&gt;If we take an in-depth look at the &lt;code&gt;deployment&lt;/code&gt; object, we'll see the image tag currently used by the deployment, which is the latest release in the repository - &lt;code&gt;v1.0.0&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F7.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;When looking at the Argo CD Image Updater logs, we can see that it has picked up that we want to continuously update to the latest semantic version. By setting &lt;code&gt;log.level&lt;/code&gt; to &lt;code&gt;debug&lt;/code&gt; instead of the default &lt;code&gt;info&lt;/code&gt;, we get more information about which images are being considered and which do not match the constraints.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F8.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
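The log level can be raised through the `argocd-image-updater-config` ConfigMap. A minimal sketch:

```yaml
# Sketch: raising Argo CD Image Updater's log level to debug.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-image-updater-config
  namespace: argocd
data:
  log.level: debug
```

Restart the argocd-image-updater deployment afterwards so the new level is picked up.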



&lt;p&gt;Next, we update the application with some changes and release the component again with the incremented version &lt;code&gt;v1.0.1&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F9.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;After the workflow has concluded this newer version should be present within the image registry:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F10.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Argo CD Image Updater periodically checks the image registry for newer versions matching the constraints and finds the &lt;code&gt;v1.0.1&lt;/code&gt; image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F11.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;For the demonstration I decided to disable the automated synchronization policy. As you can see Argo CD Image Updater changed the image tag from &lt;code&gt;v1.0.0&lt;/code&gt; to &lt;code&gt;v1.0.1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F12.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F13.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fextending-gitops-effortless-continuous-integration-and-deployment-on-kubernetes%2F14.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We managed to successfully configure the extended GitOps setup. Any change made on the application side is reflected by a container image pushed to the artifact registry, completing the Continuous Integration side. The Continuous Deployment process then starts in a detached manner: Argo CD Image Updater finds the newer container image in the registry and updates the declaratively defined image tag for the application, which in turn triggers Argo CD to update the application's Kubernetes resources, serving the newer version of the application by updating the deployment with the new image tag.&lt;/p&gt;

&lt;p&gt;A possible improvement to the setup demonstrated would be to switch to the &lt;code&gt;git&lt;/code&gt; write-back method, making the setup more reproducible and providing a clear audit trail.&lt;/p&gt;
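A sketch of the annotations that enable the git write-back method - the branch value below is illustrative:

```yaml
# Sketch: switching Argo CD Image Updater to the git write-back method.
argocd-image-updater.argoproj.io/write-back-method: git
# Optional: commit to a specific branch instead of the tracked targetRevision.
argocd-image-updater.argoproj.io/git-branch: main
```

With this method the updater commits an override file (named after the application, e.g. `.argocd-source-bookstore.yaml`) to the repository instead of patching the Application resource directly.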

&lt;p&gt;The application used in this demonstration was generated through &lt;a href="https://amplication.com" rel="noopener noreferrer"&gt;Amplication&lt;/a&gt;, which allows you to generate production-ready backend services - reliably, securely, and consistently.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
As of writing, the Argo CD Image Updater project does not honor Argo CD's rollback feature and thus automatically updates the application back to the latest version found in the image registry. A workaround is to temporarily disable Argo CD Image Updater's indexing for the application and set the &lt;code&gt;image.tag&lt;/code&gt; in the Helm chart to the desired version.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>kubernetes</category>
      <category>backend</category>
      <category>microservices</category>
      <category>argocd</category>
    </item>
    <item>
      <title>Auth0 and Amplication: Simplifying Authentication in Your Applications</title>
      <dc:creator>Ashish Padhy</dc:creator>
      <pubDate>Fri, 24 Nov 2023 10:19:36 +0000</pubDate>
      <link>https://forem.com/amplication/auth0-and-amplication-simplifying-authentication-in-your-applications-3al6</link>
      <guid>https://forem.com/amplication/auth0-and-amplication-simplifying-authentication-in-your-applications-3al6</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://auth0.com" rel="noopener noreferrer"&gt;Auth0&lt;/a&gt; is a cloud service that provides a turn-key solution for authentication, authorization and user management. It is a feature-rich, highly customizable service suited to anything from simple web apps to enterprise applications. It lets you add authentication and authorization to your application without having to build them yourself, and offers integrations with services such as Google, Facebook, Twitter, and more. This, along with its passwordless authentication and multi-factor authentication, makes it a great choice for a wide range of applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use Auth0 authentication in your Amplication application
&lt;/h2&gt;

&lt;p&gt;Setting up Auth0 authentication in your &lt;a href="https://amplication.com/" rel="noopener noreferrer"&gt;Amplication&lt;/a&gt; application is easy. You can use the &lt;a href="https://github.com/amplication/plugins/tree/master/plugins/auth-auth0" rel="noopener noreferrer"&gt;Auth0 plugin&lt;/a&gt; to add the required dependencies and configuration files to your application. The steps are as follows:&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a service in Amplication
&lt;/h3&gt;

&lt;p&gt;Start by creating a service within the Amplication platform. Once your service is set up, click on the &lt;code&gt;Commit changes &amp;amp; build&lt;/code&gt; button to initiate the build process. Merge the generated Pull Request to move ahead.&lt;/p&gt;

&lt;h3&gt;
  
  
  Add the NestJS Auth Module
&lt;/h3&gt;

&lt;p&gt;Next, add the NestJS Auth Module to your service. You can do this by navigating to the &lt;strong&gt;Plugins&lt;/strong&gt; section within your service topbar menu. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Also create and set an authentication entity if you have not done so yet. For more information on how to do this, see the &lt;a href="https://docs.amplication.com/how-to/add-delete-user-entity/" rel="noopener noreferrer"&gt;&lt;strong&gt;Authentication&lt;/strong&gt; section of the Amplication documentation&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Add the Auth0 plugin
&lt;/h3&gt;

&lt;p&gt;Next, add the Auth0 plugin to your service. You can do this by navigating to the 'Plugins' section within your service sidebar menu, where you'll see a list of available plugins and installed plugins (see the screenshot below for reference).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: You have to remove any other auth plugins already installed in the service; check the &lt;strong&gt;installed plugins&lt;/strong&gt; tab. (Look out for the default &lt;strong&gt;JWT Auth Provider&lt;/strong&gt; added automatically 😉)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7rrxkuampqenyzejldo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7rrxkuampqenyzejldo.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After installing the plugin, you have to provide its settings. You can do this by clicking the &lt;code&gt;settings&lt;/code&gt; button next to the plugin name.&lt;br&gt;
After this, you can follow the instructions in the &lt;a href="https://github.com/amplication/plugins/blob/master/plugins/auth-auth0/README.md" rel="noopener noreferrer"&gt;&lt;strong&gt;plugin README&lt;/strong&gt;&lt;/a&gt; to configure your Auth0 account.&lt;/p&gt;

&lt;p&gt;To provide a summary of the steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an Auth0 account&lt;/li&gt;
&lt;li&gt;Create an Auth0 application and API&lt;/li&gt;
&lt;li&gt;Configure the Auth0 application&lt;/li&gt;
&lt;li&gt;Configure the Auth0 plugin&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The settings will look something like the following picture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuuju2b5b6tviyjdaju8i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuuju2b5b6tviyjdaju8i.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then click the &lt;code&gt;Save&lt;/code&gt; button to save the settings and commit the changes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Some things to note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;domain&lt;/code&gt;, &lt;code&gt;clientId&lt;/code&gt;, &lt;code&gt;issuerURL&lt;/code&gt;, and &lt;code&gt;audience&lt;/code&gt; are required fields. These are the values you get from your Auth0 account.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;emailFieldName&lt;/code&gt; provided must be present in the authentication entity.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;defaultUser&lt;/code&gt; is used when creating the default user and any new users.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
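&lt;p&gt;As a rough sketch, the settings might look like this (all values are placeholders from a hypothetical Auth0 tenant, and the exact shape of &lt;code&gt;defaultUser&lt;/code&gt; is described in the plugin README):&lt;/p&gt;

```json
{
  "domain": "your-tenant.us.auth0.com",
  "clientId": "YOUR_AUTH0_CLIENT_ID",
  "issuerURL": "https://your-tenant.us.auth0.com/",
  "audience": "https://your-api-identifier",
  "emailFieldName": "email",
  "defaultUser": {
    "email": "user@example.com"
  }
}
```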

&lt;h3&gt;
  
  
  Alternative: Automate setup of Auth0 account
&lt;/h3&gt;

&lt;p&gt;If you'd rather not set everything up manually, or you simply don't have access to the Auth0 dashboard, don't worry: I have you covered. Enter the &lt;a href="https://auth0.com/docs/api/management/v2" rel="noopener noreferrer"&gt;Auth0 Management API&lt;/a&gt;. With it, all the nifty work is done for you; all you have to do is provide an access token with the necessary permissions. You can also customize the names 🤖🚀.&lt;/p&gt;

&lt;p&gt;To learn how to get the access token and which permissions are required, see the &lt;a href="https://github.com/amplication/plugins/blob/master/plugins/auth-auth0/README.md#using-management-api" rel="noopener noreferrer"&gt;&lt;strong&gt;Plugin Docs&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you have the token, add it to the plugin settings as shown below:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6242w9sat7r722p9lyqv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6242w9sat7r722p9lyqv.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then click the &lt;code&gt;Save&lt;/code&gt; button to save the settings and commit the changes. This will trigger a build and the plugin will do the rest for you.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Some things to note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If actions and an API with the given names already exist, the plugin will not create them again.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;As you can see in &lt;a href="https://github.com/Shurtu-gal/auth0-example/pull/3" rel="noopener noreferrer"&gt;this PR&lt;/a&gt; from our example repo, the plugin has created the actions and API for us. 🎉🎉🎉&lt;/p&gt;

&lt;h2&gt;
  
  
  How things work
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Manual method
&lt;/h3&gt;

&lt;p&gt;The plugin will create the following files for you as seen in this &lt;a href="https://github.com/Shurtu-gal/auth0-example/pull/3/files" rel="noopener noreferrer"&gt;&lt;strong&gt;PR&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adds the required dependencies to the &lt;code&gt;package.json&lt;/code&gt; file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fht1e5wuv058v0dddxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fht1e5wuv058v0dddxp.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjnsyuw4zlk9zoi593re3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjnsyuw4zlk9zoi593re3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;@auth0/auth0-spa-js&lt;/code&gt; and &lt;code&gt;jwks-rsa&lt;/code&gt; packages add authentication and authorization to the frontend and backend respectively, while &lt;code&gt;react-router-dom&lt;/code&gt; is used to add routes to the frontend.&lt;br&gt;
&lt;/p&gt;


&lt;ul&gt;
&lt;li&gt;Adds the required environment variables to the &lt;code&gt;.env&lt;/code&gt; files used in the frontend and backend.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxvpbsf9ncfxspmrq7fh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxvpbsf9ncfxspmrq7fh.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxb30w4e4vqjipvuld6iw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxb30w4e4vqjipvuld6iw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
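&lt;p&gt;Taken together, the generated variables look roughly like the following sketch (values and redirect paths are placeholders; the variable names come from the generated code shown later in this post):&lt;/p&gt;

```shell
# Frontend (admin) .env
REACT_APP_AUTH0_DOMAIN=your-tenant.us.auth0.com
REACT_APP_AUTH0_CLIENT_ID=YOUR_AUTH0_CLIENT_ID
REACT_APP_AUTH0_AUDIENCE=https://your-api-identifier
REACT_APP_AUTH0_REDIRECT_URI=http://localhost:3001/auth-callback
REACT_APP_AUTH0_LOGOUT_REDIRECT_URI=http://localhost:3001/login

# Backend (server) .env
AUTH0_AUDIENCE=https://your-api-identifier
AUTH0_ISSUER_URL=https://your-tenant.us.auth0.com/
```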




&lt;ul&gt;
&lt;li&gt;Adds &lt;code&gt;ra-auth0-provider&lt;/code&gt; to the admin app. This sets up Auth0 in the frontend: it supplies the &lt;code&gt;authProvider&lt;/code&gt; prop to the &lt;code&gt;Admin&lt;/code&gt; component, with the requisite &lt;code&gt;login&lt;/code&gt; and &lt;code&gt;logout&lt;/code&gt; functions.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Auth0Client&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@auth0/auth0-spa-js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;AuthProvider&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;UserIdentity&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react-admin&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;PreviousLocationStorageKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@react-admin/nextPathname&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Auth0Client&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REACT_APP_AUTH0_DOMAIN&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;clientId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REACT_APP_AUTH0_CLIENT_ID&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;cacheLocation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;localstorage&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;authorizationParams&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;audience&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REACT_APP_AUTH0_AUDIENCE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;openid profile email&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;useRefreshTokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;auth0AuthProvider&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AuthProvider&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;login&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loginWithPopup&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;authorizationParams&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;redirect_uri&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REACT_APP_AUTH0_REDIRECT_URI&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;

  &lt;span class="na"&gt;logout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;logout&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;logoutParams&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;returnTo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;REACT_APP_AUTH0_LOGOUT_REDIRECT_URI&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;checkAuth&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;isAuthenticated&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isAuthenticated&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isAuthenticated&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;localStorage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;PreviousLocationStorageKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;href&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;

  &lt;span class="na"&gt;checkError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;401&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;403&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Unauthorized&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;

  &lt;span class="na"&gt;getPermissions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isAuthenticated&lt;/span&gt;&lt;span class="p"&gt;()))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getIdTokenClaims&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;

  &lt;span class="na"&gt;getIdentity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isAuthenticated&lt;/span&gt;&lt;span class="p"&gt;()))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;User not authenticated&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;User not found&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;fullName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;avatar&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;picture&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;UserIdentity&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;

  &lt;span class="na"&gt;handleCallback&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;search&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;code=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;state=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;handleRedirectCallback&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Failed to handle login callback: &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Failed to handle login callback.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Adds the logic to get the access token in the GraphQL data provider, as seen in these &lt;a href="https://github.com/Shurtu-gal/auth0-example/blob/2f4b7f69056ffbd8bc89871bb315babb7805cd9e/apps/auth-example-admin/src/data-provider/graphqlDataProvider.ts#L11-#L17" rel="noopener noreferrer"&gt;&lt;strong&gt;lines&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Adds a custom login page to the admin app by changing the default &lt;a href="https://github.com/Shurtu-gal/auth0-example/pull/3/files#diff-ce95c848675c3daf98191c954e40b94a18522c0b56ffb2f980d15179ac45c098" rel="noopener noreferrer"&gt;Login.tsx&lt;/a&gt; file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Adds the &lt;code&gt;JWT Base Strategy&lt;/code&gt; to the backend. This checks the &lt;strong&gt;JWT&lt;/strong&gt; sent in the request header, verifies it against the &lt;strong&gt;JWKS&lt;/strong&gt; keys published by Auth0, and then validates the user against the database using the &lt;code&gt;validateBase&lt;/code&gt; function. This is done by adding the following code to the &lt;code&gt;src/auth/jwt/base/jwt.strategy.base.ts&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ConfigService&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@nestjs/config&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;PassportStrategy&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@nestjs/passport&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;passportJwtSecret&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;jwks-rsa&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ExtractJwt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Strategy&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;passport-jwt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Auth0User&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./User&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;UserInfo&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../../UserInfo&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;UserService&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;src/user/user.service&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;JwtStrategyBase&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;PassportStrategy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Strategy&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;configService&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ConfigService&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;userService&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;UserService&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;jwtFromRequest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ExtractJwt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromAuthHeaderAsBearerToken&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="c1"&gt;// Extract JWT from the Authorization header&lt;/span&gt;
      &lt;span class="na"&gt;audience&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;configService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;AUTH0_AUDIENCE&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="c1"&gt;// The resource server where the JWT is processed&lt;/span&gt;
      &lt;span class="na"&gt;issuer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;configService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;AUTH0_ISSUER_URL&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// The issuing Auth0 server&lt;/span&gt;
      &lt;span class="na"&gt;algorithms&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;RS256&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="c1"&gt;// Asymmetric signing algorithm&lt;/span&gt;

      &lt;span class="na"&gt;secretOrKeyProvider&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;passportJwtSecret&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;rateLimit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;jwksRequestsPerMinute&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;jwksUri&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;configService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
          &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;AUTH0_ISSUER_URL&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;.well-known/jwks.json`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// Validate the received JWT and construct the user object out of the decoded token.&lt;/span&gt;
  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;validateBase&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Auth0User&lt;/span&gt; &lt;span class="p"&gt;}):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;UserInfo&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findOne&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;where&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;roles&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;ul&gt;
&lt;li&gt;Adds the &lt;code&gt;JWT strategy&lt;/code&gt; code, which is editable by users, to the backend. This is done by adding the following code to the &lt;code&gt;src/auth/jwt/jwt.strategy.ts&lt;/code&gt; file; the &lt;code&gt;validate&lt;/code&gt; function then checks the token's user against the database.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Injectable&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@nestjs/common&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;JwtStrategyBase&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./base/jwt.strategy.base&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ConfigService&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@nestjs/config&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Auth0User&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./base/User&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;IAuthStrategy&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../IAuthStrategy&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;UserInfo&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../UserInfo&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;UserService&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;src/user/user.service&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Injectable&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;JwtStrategy&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;JwtStrategyBase&lt;/span&gt; &lt;span class="k"&gt;implements&lt;/span&gt; &lt;span class="nx"&gt;IAuthStrategy&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;configService&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ConfigService&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;userService&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;UserService&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;configService&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;userService&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;validate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Auth0User&lt;/span&gt; &lt;span class="p"&gt;}):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;UserInfo&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;validatedUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;validateBase&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="c1"&gt;// If the entity is valid, return it&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;validatedUser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;validatedUser&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// Otherwise, make a new entity and return it&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userFields&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;defaultData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userFields&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userFields&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userFields&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;admin&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;newUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;defaultData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;newUser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;newUser&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;roles&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code, if a user is not found in the database, a new user is created with the default role of &lt;code&gt;admin&lt;/code&gt;. You can change this default to suit your requirements.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Make sure that the field names in the &lt;code&gt;defaultData&lt;/code&gt; object are present in the authentication entity.&lt;br&gt;
Also double-check the role you wish to assign to new users; in this case it is &lt;code&gt;admin&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Adds the &lt;code&gt;User&lt;/code&gt; type to the &lt;code&gt;src/auth/jwt/base/User.ts&lt;/code&gt; file. This is used to get the user data from the JWT token.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;Auth0User&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;nickname&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;email_verified&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;picture&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fields in this interface can be changed to match your requirements and generally vary from application to application. You can find more information about the fields &lt;a href="https://auth0.com/docs/secure/tokens/json-web-tokens/create-custom-claims" rel="noopener noreferrer"&gt;here&lt;/a&gt;. However, the &lt;code&gt;email&lt;/code&gt; field is required, as it is used to identify the user. If you need additional fields, you may also have to change the &lt;strong&gt;scope&lt;/strong&gt; in &lt;code&gt;src/auth-provider/ra-auth-auth0.ts&lt;/code&gt; on the frontend.&lt;/p&gt;
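The exact contents of the generated frontend file will vary, but the scope change mentioned above typically looks something like the following hypothetical sketch (the `AUTH0_DOMAIN` and `AUTH0_CLIENT_ID` values are placeholders, and the surrounding auth-provider wiring is omitted):

```typescript
import { Auth0Client } from "@auth0/auth0-spa-js";

// Hypothetical sketch: the real generated file may wire this differently.
// Requesting extra token claims (picture, nickname, etc.) usually means
// requesting the matching OIDC scopes here.
export const auth0Client = new Auth0Client({
  domain: "AUTH0_DOMAIN", // placeholder
  clientId: "AUTH0_CLIENT_ID", // placeholder
  authorizationParams: {
    // "openid profile" covers name/nickname/picture; "email" is required
    // because the backend identifies users by their email.
    scope: "openid profile email",
  },
});
```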

&lt;h2&gt;
  
  
  Customization - Add social connections
&lt;/h2&gt;

&lt;p&gt;With &lt;a href="https://auth0.com/" rel="noopener noreferrer"&gt;Auth0&lt;/a&gt; you can add social connections to your application, letting users log in with their social media accounts and giving them a more personalized, secure, and passwordless experience. You can add a social connection by following the steps below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Go to the &lt;a href="https://manage.auth0.com/#/connections/social/create" rel="noopener noreferrer"&gt;&lt;strong&gt;Auth0 Social Connections&lt;/strong&gt;&lt;/a&gt; page. You will see various options out there as can be seen below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffc0om840t4xfok3uwx20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffc0om840t4xfok3uwx20.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Here I am choosing GitHub, but you can choose any of the available options; the steps are very similar for each.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After choosing an option, you will be redirected to its configuration page. Here you can configure the connection as per your requirements, including adding custom scopes. For more information, see the &lt;a href="https://marketplace.auth0.com/integrations/github-social-connection" rel="noopener noreferrer"&gt;&lt;strong&gt;Marketplace documentation&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Make sure to add the &lt;code&gt;email&lt;/code&gt; scope to the connection, as shown in the image. It is required to get the user's email.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fub92tjk6d3itfavkwee0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fub92tjk6d3itfavkwee0.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;




&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;code&gt;Create&lt;/code&gt; button to create the connection. This will redirect you to the &lt;strong&gt;Connection setup&lt;/strong&gt; page, where you can choose which applications should use this connection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Now, the connection should be visible on the login page of your application. You can see the login page of the example application below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7uunrd7mt8xfc3irsjd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7uunrd7mt8xfc3irsjd.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;


&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Future Work
&lt;/h2&gt;

&lt;p&gt;We plan to make the plugin more customizable by adding more options to the plugin settings, so users can tailor it to their requirements. Some of the options that could be added are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authentication using phone number&lt;/li&gt;
&lt;li&gt;Passwordless authentication&lt;/li&gt;
&lt;li&gt;Two-factor authentication&lt;/li&gt;
&lt;li&gt;Adding custom roles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have any other suggestions, please feel free to create an issue in the &lt;a href="https://github.com/amplication/plugins/issues/new" rel="noopener noreferrer"&gt;&lt;strong&gt;Auth0 plugin repo&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Outro
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/amplication/plugins/blob/master/plugins/auth-auth0" rel="noopener noreferrer"&gt;Amplication's Auth0 Plugin&lt;/a&gt; provides a powerful but effortless way to add authentication to your application. It is easy to use and can be configured in a few minutes thus reducing complexity overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I hope this blog post was helpful to you. If you have any questions or suggestions, please feel free to reach out to me on &lt;a href="https://twitter.com/Shurtu_Gal" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://github.com/Shurtu-gal/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/ashish-padhy3023/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;. I would love to hear from you.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>authentication</category>
      <category>backenddevelopment</category>
      <category>nestjs</category>
    </item>
    <item>
      <title>Node.js Worker Threads Vs. Child Processes: Which one should you use?</title>
      <dc:creator>Muly Gottlieb</dc:creator>
      <pubDate>Wed, 25 Oct 2023 09:00:25 +0000</pubDate>
      <link>https://forem.com/amplication/nodejs-worker-threads-vs-child-processes-which-one-should-you-use-178i</link>
      <guid>https://forem.com/amplication/nodejs-worker-threads-vs-child-processes-which-one-should-you-use-178i</guid>
<description>&lt;p&gt;Parallel processing plays a vital role in compute-heavy applications. For example, consider an application that determines whether a given number is prime. If you're familiar with prime numbers, you'll know that you have to check every potential divisor from 2 up to the square root of the number, which is often time-consuming and extremely compute-heavy for large inputs.&lt;/p&gt;
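The primality check described above can be sketched as a simple trial-division function (an illustrative sketch, not taken from the article's later worker example):

```typescript
// Trial division: n is prime if it is at least 2 and no integer d whose
// square does not exceed n divides it (i.e., we only scan up to sqrt(n)).
function isPrime(n: number): boolean {
  if (!(n >= 2)) return false;
  for (let d = 2; n >= d * d; d++) {
    if (n % d === 0) return false;
  }
  return true;
}
```

For large inputs this loop keeps the CPU busy for a long time, which is exactly the kind of work that blocks Node.js's single thread.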

&lt;p&gt;So, if you're building such compute-heavy apps on Node.js, you'll be blocking the running thread for a potentially long time. Due to Node.js's single-threaded nature, compute-heavy operations that do not involve I/O will cause the application to halt until this task is finished.&lt;/p&gt;

&lt;p&gt;Therefore, there's a chance that you'll stay away from Node.js when building software that needs to perform such tasks. However, Node.js has introduced the concept of &lt;a href="https://nodejs.org/api/worker_threads.html" rel="noopener noreferrer"&gt;Worker Threads&lt;/a&gt; and &lt;a href="https://nodejs.org/api/child_process.html" rel="noopener noreferrer"&gt;Child Processes&lt;/a&gt; to help with parallel processing in your Node.js app so that you can execute specific processes in parallel. In this article, we will understand both concepts and discuss when it would be useful to employ each of them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Node.js Worker Threads
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What are worker threads in Node.js?
&lt;/h3&gt;

&lt;p&gt;Node.js is capable of handling I/O operations efficiently. However, when it runs into any compute-heavy operation, it causes the primary event loop to freeze up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fnodejs-worker-threads-vs-child-processes-which-one-should-you-use%2F0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fnodejs-worker-threads-vs-child-processes-which-one-should-you-use%2F0.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Figure: The Node.js event loop&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When Node.js discovers an async operation, it "offloads" it to the thread pool. However, when it needs to run a compute-heavy operation, it performs it on its primary thread, which blocks the app until the operation has finished. To mitigate this, Node.js introduced Worker Threads, which offload CPU-intensive operations from the primary event loop and let developers spawn multiple threads in parallel in a non-blocking manner.&lt;/p&gt;

&lt;p&gt;It does this by spinning up an isolated Node.js context that contains its own Node.js runtime, event loop, and event queue, running in a separate V8 instance. Because this executes in an environment disconnected from the primary event loop, the primary event loop stays free.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fnodejs-worker-threads-vs-child-processes-which-one-should-you-use%2F1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fnodejs-worker-threads-vs-child-processes-which-one-should-you-use%2F1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Figure: Worker threads in Node.js&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As shown above, Node.js creates independent runtimes as Worker Threads, where each thread executes independently of other threads and communicates its process statuses to the parent thread through a messaging channel. This allows the parent thread to continue performing its functions as usual (without being blocked). By doing so, you're able to achieve multi-threading in Node.js.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are the benefits of using Worker Threads in Node.js?
&lt;/h3&gt;

&lt;p&gt;As you can see, using worker threads can be very beneficial for CPU-intensive applications. In fact, it has several advantages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Improved performance: You can offload compute-heavy operations to worker threads, freeing up the primary thread so your app stays responsive and can serve more requests.&lt;/li&gt;
&lt;li&gt; Improved parallelism: If you have a large task that you would like to chunk into subtasks and execute in parallel, you can use worker threads to do so. For example, to determine whether 19,993,241,123 is a prime number, you could use worker threads to check for divisors in different ranges (2 to 100,000 in WT1, 100,001 to 200,000 in WT2, and so on). This would speed up your algorithm and result in faster responses.&lt;/li&gt;
&lt;/ol&gt;
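The range-splitting idea in the second point can be sketched as follows. This is a deliberately single-threaded sketch so the logic is easy to follow; in a real app each chunk would be handed to a separate worker thread:

```typescript
// Returns true if any integer in [start, end] (clamped to start at 2) divides n.
// This is the unit of work one worker thread would perform.
function hasDivisorInRange(n: number, start: number, end: number): boolean {
  for (let d = Math.max(start, 2); end >= d; d++) {
    if (n % d === 0) return true;
  }
  return false;
}

// Splits the divisor range [2, sqrt(n)] into partCount chunks and scans them.
// Here the chunks run sequentially; with worker threads each { start, end }
// pair would be passed via workerData and scanned in parallel.
function isPrimeChunked(n: number, partCount = 4): boolean {
  if (!(n >= 2)) return false;
  const limit = Math.floor(Math.sqrt(n));
  const chunkSize = Math.max(1, Math.ceil(limit / partCount));
  for (let start = 2; limit >= start; start += chunkSize) {
    const end = Math.min(start + chunkSize - 1, limit);
    if (hasDivisorInRange(n, start, end)) return false;
  }
  return true;
}
```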

&lt;h3&gt;
  
  
  When should you use Worker Threads in Node.js?
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;If you think about it, you should only use Worker Threads to run compute-heavy operations in isolation from the parent thread.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It's pointless to run I/O operations in a worker thread, as Node.js already handles them asynchronously through the event loop. So, consider using worker threads when you have a compute-heavy operation that you need to execute in an isolated environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  How can you build a Worker Thread in Node.js?
&lt;/h3&gt;

&lt;p&gt;If all of this sounds appealing to you, let's look at how we can implement a Worker Thread in Node.js. Consider the snippet below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;Worker&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;isMainThread&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;parentPort&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;workerData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;worker_threads&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;generatePrimes&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./prime&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Set&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;999999&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;breakIntoParts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;threadCount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;parts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;chunkSize&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ceil&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kr"&gt;number&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;threadCount&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;chunkSize&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;end&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;chunkSize&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;end&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isMainThread&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;parts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;breakIntoParts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;part&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;threads&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;__filename&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;workerData&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;part&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;start&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;end&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;part&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nx"&gt;threads&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;thread&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;error&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nx"&gt;thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;exit&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;threads&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;thread&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Thread exiting, &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;threads&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;size&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; running...`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nx"&gt;thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;message&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;primes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generatePrimes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;workerData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;start&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;workerData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;parentPort&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;postMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s2"&gt;`Primes from - &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;workerData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;start&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; to &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;workerData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;primes&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The snippet above showcases an ideal scenario in which you can utilize worker threads. To build a worker thread, you'll need to import &lt;code&gt;Worker&lt;/code&gt;, &lt;code&gt;isMainThread&lt;/code&gt;, &lt;code&gt;parentPort&lt;/code&gt;, and &lt;code&gt;workerData&lt;/code&gt; from the &lt;code&gt;worker_threads&lt;/code&gt; module. These definitions are used to create the worker thread and communicate with it.&lt;/p&gt;

&lt;p&gt;I've created an algorithm that finds all the prime numbers in a given range. In the main thread, it splits the range into different parts (five parts in the example above) and then creates a Worker Thread with &lt;code&gt;new Worker()&lt;/code&gt; to handle each part. Each worker thread executes the &lt;code&gt;else&lt;/code&gt; block, which finds the prime numbers in the range assigned to it, and finally sends the result back to the parent (main) thread using &lt;code&gt;parentPort.postMessage()&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Node.js: Child Processes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What are child processes in Node.js?
&lt;/h3&gt;

&lt;p&gt;Child processes are different from worker threads. While worker threads provide an isolated event loop and V8 runtime in the same process, child processes are separate instances of the entire Node.js runtime. Each child process has its own memory space and communicates with the main process through IPC (inter-process communication) techniques like message streaming or piping (or files, Database, TCP/UDP, etc.).&lt;/p&gt;

&lt;h3&gt;
  
  
  What are the benefits of using Child Processes in Node.js?
&lt;/h3&gt;

&lt;p&gt;Using child processes in your Node.js applications brings about a lot of benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Improved isolation: Each child process runs in its own memory space, providing isolation from the main process. This is advantageous for tasks that may have resource conflicts or dependencies that need to be separated.&lt;/li&gt;
&lt;li&gt; Improved scalability: Child processes distribute tasks among multiple processes, which lets you take advantage of multi-core systems and handle more concurrent requests.&lt;/li&gt;
&lt;li&gt; Improved robustness: If the child process crashes for some reason, it will not crash your main process along with it.&lt;/li&gt;
&lt;li&gt; Running external programs: Child processes let you run external programs or scripts as separate processes. This is useful for scenarios where you need to interact with other executables.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  When should you use Child Processes in Node.js?
&lt;/h3&gt;

&lt;p&gt;So, now you know the benefits child processes bring to the picture. It's equally important to understand when you should use them. Based on my experience, I'd recommend using a child process whenever you need to execute an external program from Node.js.&lt;/p&gt;

&lt;p&gt;My recent experience included a scenario where I had to run an external executable from within my Node.js service. You can't run an external binary directly inside the main thread, so I used a child process to execute it.&lt;/p&gt;

&lt;h3&gt;
  
  
  How can you build Child Processes in Node.js?
&lt;/h3&gt;

&lt;p&gt;Well, now the fun part. How do you build a child process? There are several ways to create one in Node.js (using methods like &lt;code&gt;spawn()&lt;/code&gt;, &lt;code&gt;fork()&lt;/code&gt;, &lt;code&gt;exec()&lt;/code&gt;, and &lt;code&gt;execFile()&lt;/code&gt;), and as always, reading the &lt;a href="https://nodejs.org/api/child_process.html" rel="noopener noreferrer"&gt;docs&lt;/a&gt; is advisable to get the full picture. The simplest case, though, can be as short as the script shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;spawn&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;child_process&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;child&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;spawn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;node&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;child.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;

&lt;span class="nx"&gt;child&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stdout&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;data&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Child process stdout: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;child&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;close&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;code&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Child process exited with code &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;code&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All you have to do is import the &lt;code&gt;spawn()&lt;/code&gt; method from the &lt;code&gt;child_process&lt;/code&gt; module and call it with the command to run and an array of its arguments. In our example, we're running a file named &lt;code&gt;child.js&lt;/code&gt; with Node.&lt;/p&gt;

&lt;p&gt;The child's output is received through its &lt;code&gt;stdout&lt;/code&gt; stream, while the &lt;code&gt;close&lt;/code&gt; handler fires when the process terminates.&lt;/p&gt;

&lt;p&gt;Of course, this is a very minimal and contrived example of using child processes, but it is brought here just to illustrate the concept.&lt;/p&gt;

&lt;h1&gt;
  
  
  How to select between worker threads and child processes?
&lt;/h1&gt;

&lt;p&gt;Well, now that you know what child processes and worker threads are, it's important to know when to use either of these techniques. Neither of them is a silver bullet that fits all cases. Both approaches work well for specific conditions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use worker threads when:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; You're running CPU-intensive tasks. If your tasks are CPU-intensive, worker threads are a good choice.&lt;/li&gt;
&lt;li&gt; Your tasks require shared memory and efficient communication between threads. Worker threads have built-in support for shared memory and a messaging system for communication.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Use child processes when:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; You're running tasks that need to be isolated and run independently, especially if they involve external programs or scripts. Each child process runs in its own memory space.&lt;/li&gt;
&lt;li&gt; You need to communicate between processes using IPC mechanisms, such as standard input/output streams, messaging, or events. Child processes are well-suited for this purpose.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;Parallel processing is becoming a vital aspect of modern system design, especially when building applications that deal with very large datasets or compute-intensive tasks. Therefore, it's important to consider Worker Threads and Child Processes when building such apps with Node.js.&lt;/p&gt;

&lt;p&gt;If your system isn't designed with the right parallel processing technique, it can perform poorly and exhaust resources, since spawning threads and processes carries significant overhead of its own.&lt;/p&gt;

&lt;p&gt;Therefore, it's important for software engineers and architects to analyze their requirements carefully and select the right tool based on the information presented in this article.&lt;/p&gt;

&lt;p&gt;Additionally, you can use tools like &lt;a href="https://amplication.com/" rel="noopener noreferrer"&gt;Amplication&lt;/a&gt; to bootstrap your Node.js applications easily and focus on these parallel processing techniques instead of wasting time on (re)building all the boilerplate code for your Node.js services.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>backend</category>
      <category>efficiency</category>
      <category>node</category>
    </item>
    <item>
      <title>Top 6 ORMs for Modern Node.js App Development</title>
      <dc:creator>Muly Gottlieb</dc:creator>
      <pubDate>Wed, 11 Oct 2023 07:22:44 +0000</pubDate>
      <link>https://forem.com/amplication/top-6-orms-for-modern-nodejs-app-development-2fop</link>
      <guid>https://forem.com/amplication/top-6-orms-for-modern-nodejs-app-development-2fop</guid>
<description>&lt;p&gt;In modern web development, building robust and efficient Node.js applications almost always involves database interaction. A pivotal challenge in database-driven applications lies in managing the interplay between the application code and the database.&lt;/p&gt;

&lt;p&gt;This is precisely where Object-Relational Mapping (ORM) libraries assume a crucial role.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an ORM?
&lt;/h2&gt;

&lt;p&gt;ORMs serve as tools that bridge the divide between the object-oriented nature of application code and the relational structure of databases. They streamline database operations, enhance code organization, and boost developer productivity. In this article, I will delve into the significance of ORMs in Node.js app development and examine the top six ORM tools you can employ to enhance your development workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Importance of ORMs in Node.js App Development
&lt;/h2&gt;

&lt;p&gt;ORMs bridge the gap between the object-oriented programming world and relational databases, making it easier for developers to interact with databases using JavaScript. Here are five key benefits of using ORMs in Node.js app development:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Abstraction of Database Operations:&lt;/strong&gt; ORMs provide a higher-level abstraction, allowing developers to work with JavaScript objects and classes rather than writing complex SQL queries. This abstraction simplifies database operations, making code more readable and maintainable.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Database Agnosticism:&lt;/strong&gt; ORMs are often database-agnostic, which supports multiple database systems. This flexibility allows developers to switch between databases (e.g., MySQL, PostgreSQL, SQLite) without major code changes, making it easier to adapt to evolving project requirements.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Code Reusability:&lt;/strong&gt; ORMs encourage code reusability by providing a consistent API for database interactions. Developers can create generic database access code that can be reused across different parts of the application, reducing duplication and minimizing the chances of errors.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Security:&lt;/strong&gt; ORMs help mitigate common security vulnerabilities, such as SQL injection attacks, by automatically sanitizing and parameterizing SQL queries. This helps in building more secure applications by default.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Rapid Development:&lt;/strong&gt; ORMs accelerate development by simplifying database setup and management. Developers can focus on application logic rather than spending excessive time on database-related tasks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's explore the top six ORM tools for modern Node.js app development.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Top 6 ORM tools for modern Node.js app development&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Sequelize&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Ftop-6-orms-for-modern-nodejs-app-development%2F0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Ftop-6-orms-for-modern-nodejs-app-development%2F0.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a href="https://sequelize.org/" rel="noopener noreferrer"&gt;Sequelize&lt;/a&gt; is an extensively employed ORM for Node.js. It supports relational databases, such as MySQL, PostgreSQL, SQLite, and MSSQL. Sequelize boasts a comprehensive array of features for database modeling and querying. It caters to various coding styles by accommodating both Promise and Callback-based APIs. Moreover, it encompasses advanced functionalities such as transactions, migrations, and associations, making it well-suited for intricate database operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Excellent documentation and a large community.&lt;/li&gt;
&lt;li&gt;  Support for multiple database systems.&lt;/li&gt;
&lt;li&gt;  Strong support for migrations and schema changes.&lt;/li&gt;
&lt;li&gt;  Comprehensive query builder.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  It can have a steep learning curve for beginners.&lt;/li&gt;
&lt;li&gt;  Some users find the API complex and lengthy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Sequelize is a good choice when working with projects that require support for multiple database systems and complex relationships between data models.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. TypeORM&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Ftop-6-orms-for-modern-nodejs-app-development%2F1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Ftop-6-orms-for-modern-nodejs-app-development%2F1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a href="https://typeorm.io/" rel="noopener noreferrer"&gt;TypeORM&lt;/a&gt; places its focus on TypeScript and JavaScript (ES7+) development. It offers compatibility with various database systems, including MySQL, PostgreSQL, SQLite, and MongoDB. What sets TypeORM apart is its robust integration with TypeScript. It provides a user-friendly experience with a convenient decorator-based syntax for defining entities and relationships. Additionally, TypeORM supports the &lt;a href="https://www.linkedin.com/pulse/implementing-repository-pattern-nestjs-nadeera-sampath/" rel="noopener noreferrer"&gt;repository pattern&lt;/a&gt; and enables &lt;a href="https://typeorm.io/eager-and-lazy-relations" rel="noopener noreferrer"&gt;eager loading&lt;/a&gt;, enhancing its versatility for developers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Strong TypeScript support with type checking.&lt;/li&gt;
&lt;li&gt;  Intuitive decorator-based syntax.&lt;/li&gt;
&lt;li&gt;  Support for migrations and schema generation.&lt;/li&gt;
&lt;li&gt;  Active development with frequent updates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Limited support for NoSQL databases.&lt;/li&gt;
&lt;li&gt;  It may not be as performant as some other ORMs.&lt;/li&gt;
&lt;li&gt;  Support and maintenance of the project are not always as expected.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; TypeORM is an excellent choice for projects that prioritize TypeScript and prefer a developer-friendly, decorator-based syntax for defining data models.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Prisma
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Ftop-6-orms-for-modern-nodejs-app-development%2F2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Ftop-6-orms-for-modern-nodejs-app-development%2F2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a href="https://www.prisma.io/" rel="noopener noreferrer"&gt;Prisma&lt;/a&gt; is a contemporary database toolkit and ORM, seamlessly compatible with TypeScript, JavaScript, and multiple databases, such as PostgreSQL, MySQL, SQLite, MongoDB, and SQL Server. Prisma's primary focus is ensuring type-safe database access, featuring an auto-generated, robust query builder. Prisma excels in prioritizing type safety and modern tooling, producing a strongly typed database client that effectively minimizes runtime errors associated with database queries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Excellent TypeScript integration with generated types.&lt;/li&gt;
&lt;li&gt;  Powerful query builder with auto-completion.&lt;/li&gt;
&lt;li&gt;  Efficient database migrations.&lt;/li&gt;
&lt;li&gt;  Schema-first design approach.&lt;/li&gt;
&lt;li&gt;  Strong support, community, and maintenance and a growing ecosystem.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Limited support for NoSQL databases.&lt;/li&gt;
&lt;li&gt;  Relatively newer in the ORM ecosystem.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Prisma is an ideal choice for projects that prioritize type safety, modern tooling, and efficient database queries, especially when working with TypeScript.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Objection.js
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Ftop-6-orms-for-modern-nodejs-app-development%2F3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Ftop-6-orms-for-modern-nodejs-app-development%2F3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://vincit.github.io/objection.js/" rel="noopener noreferrer"&gt;Objection.js&lt;/a&gt; is a SQL-friendly ORM for Node.js that supports various relational databases, including PostgreSQL, MySQL, and SQLite. It is known for its flexible, expressive query builder, which lets developers compose complex queries easily, and it supports eager loading, transactions, and migrations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Expressive query builder.&lt;/li&gt;
&lt;li&gt;  Support for complex data relationships.&lt;/li&gt;
&lt;li&gt;  Excellent documentation.&lt;/li&gt;
&lt;li&gt;  Active development and community support.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Limited support for NoSQL databases.&lt;/li&gt;
&lt;li&gt;  It may have a steep learning curve for beginners.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Objection.js is a good choice for developers who prefer an expressive query builder and need to work with SQL databases in their Node.js projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;5. Bookshelf.js&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Ftop-6-orms-for-modern-nodejs-app-development%2F4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Ftop-6-orms-for-modern-nodejs-app-development%2F4.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a href="https://bookshelfjs.org/" rel="noopener noreferrer"&gt;Bookshelf.js&lt;/a&gt; is an uncomplicated and lightweight ORM designed for Node.js, constructed atop the Knex.js query builder. Its primary aim is to support SQL databases, such as PostgreSQL, MySQL, and SQLite. Bookshelf.js focuses on simplicity and user-friendliness, offering a direct method for defining models and relationships through JavaScript classes and prototypal inheritance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  It is lightweight and easy to get started with.&lt;/li&gt;
&lt;li&gt;  Suitable for smaller projects with basic database needs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Limited advanced features compared to other ORMs.&lt;/li&gt;
&lt;li&gt;  It may not be ideal for large and complex applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Bookshelf.js is a good choice for small to medium-sized projects with simple database requirements and developers who prefer a minimalistic approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Mikro-ORM
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Ftop-6-orms-for-modern-nodejs-app-development%2F5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Ftop-6-orms-for-modern-nodejs-app-development%2F5.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a href="https://mikro-orm.io/" rel="noopener noreferrer"&gt;Mikro-ORM&lt;/a&gt; is a TypeScript ORM that focuses on simplicity and efficiency. It supports various SQL databases and MongoDB. Mikro-ORM is known for its simplicity and developer-friendly APIs. It provides a concise syntax for defining data models and relationships, making it easy to use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  TypeScript support with solid typing.&lt;/li&gt;
&lt;li&gt;  Supports SQL and NoSQL databases.&lt;/li&gt;
&lt;li&gt;  Automatic migrations and schema updates.&lt;/li&gt;
&lt;li&gt;  Focus on performance and efficiency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Smaller community compared to some other ORMs.&lt;/li&gt;
&lt;li&gt;  It may not have all the advanced features of larger ORMs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best for:&lt;/strong&gt; Mikro-ORM is an excellent choice for developers who value simplicity and efficiency, especially when working with TypeScript and multiple database types.&lt;/p&gt;

&lt;h1&gt;
  
  
  What's the best ORM for Node.js microservices?
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;My short (subjective) answer is Prisma.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Prisma presents a type-safe and user-friendly approach to database interaction, simplifying intricate database tasks and diminishing the likelihood of runtime errors. It is compatible with various databases, including PostgreSQL, MySQL, MongoDB, and MS SQL Server, making it adaptable to diverse project requirements. The maintenance and support of the project are top-notch, ensuring that bugs are quickly addressed and new features roll out on a competitive cadence.&lt;/p&gt;

&lt;p&gt;In addition, Prisma is supported by microservice code generation tools like &lt;a href="https://amplication.com/" rel="noopener noreferrer"&gt;Amplication&lt;/a&gt;. Prisma plugs directly into the code generated by Amplication. By doing so, you can utilize Prisma as an ORM layer for your databases and generate microservice code with ease in just a few clicks.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Selecting the right ORM for your Node.js project is an important decision.&lt;/p&gt;

&lt;p&gt;The ORMs discussed in this article each bring unique strengths and weaknesses tailored for diverse scenarios. When making your choice, consider critical factors such as type safety, database compatibility, developer-friendliness, community and support, level of maintenance, and the specific demands of your project.&lt;/p&gt;

&lt;p&gt;In a nutshell, ORMs offer many invaluable advantages in modern Node.js app development, including the abstraction of database operations, database agnosticism, code reusability, heightened security, and accelerated development.&lt;/p&gt;

&lt;p&gt;By assessing and opting for the ORM that aligns with your requirements, you can streamline database interactions and craft efficient, sustainable applications poised for success. Your choice of ORM will likely stay with your project for a long time and will impact your project's success, so choose wisely and embark on your journey to a brighter development future.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>backend</category>
      <category>prisma</category>
      <category>node</category>
    </item>
    <item>
      <title>Celebrating Hacktoberfest 2023 with Amplication</title>
      <dc:creator>Saurav Jain</dc:creator>
      <pubDate>Wed, 04 Oct 2023 07:18:50 +0000</pubDate>
      <link>https://forem.com/amplication/celebrating-hacktoberfest-2023-with-amplication-438f</link>
      <guid>https://forem.com/amplication/celebrating-hacktoberfest-2023-with-amplication-438f</guid>
      <description>&lt;h2&gt;
  
  
  A quick look back
&lt;/h2&gt;

&lt;p&gt;Hello everyone! I'm Saurav Jain, your Community Manager at Amplication. In 2018, I took my first step into the dynamic world of Hacktoberfest, and I haven't looked back since. My involvement with Hacktoberfest reached a crescendo last year when I was given the incredible opportunity to organize a major campaign for our organization. The response was overwhelming: we received contributions from every corner of the globe. The diversity and volume of input were amazing, from code adjustments to non-code contributions, bug fixes, feature requests, and even documentation and tutorials. Reflecting on last year, the passion and dedication displayed by developers worldwide is what fuels my enthusiasm for Hacktoberfest. It's heartwarming to see so many individuals dedicate their time and skills to the betterment of the open-source community.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amplication: A proud sponsor of Hacktoberfest 2023
&lt;/h2&gt;

&lt;p&gt;Guess what's even more exciting? This year, we're not only participating; we're sponsoring Hacktoberfest 2023 on its 10th anniversary! With bigger responsibilities come even bigger plans; trust me, we've got some fascinating stuff in store.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LFMPqkDg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/celebrating-hacktoberfest-2023-with-amplication/2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LFMPqkDg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/celebrating-hacktoberfest-2023-with-amplication/2.png" alt="" width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;
  
  
  What's on the agenda?
&lt;/h2&gt;

&lt;p&gt;We've prepped &lt;a href="https://github.com/amplication/amplication/issues"&gt;dozens of issues&lt;/a&gt; spanning various domains like bug fixes, UI/UX improvements, documentation enhancements, and more.&lt;/p&gt;

&lt;p&gt;Regardless of whether you're a beginner or a seasoned developer, there's something for you. Here are the repositories we're mainly focusing on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/amplication/amplication"&gt;amplication/amplication&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/amplication/docs"&gt;amplication/docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/amplication/plugins"&gt;amplication/plugins&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/amplication/amplication-site"&gt;amplication/amplication-site&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can even suggest new issues. Just make sure to get them verified by our team before diving in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beyond code: Diverse contributions
&lt;/h2&gt;

&lt;p&gt;Coding isn't the only way to contribute. We believe that open source is for everyone. That's why we're introducing a wide array of non-code contributions. These include crafting product use cases, creating Amplication tutorials, making instructional videos like 'How to do XYZ using Amplication,' and conducting user interviews.&lt;/p&gt;

&lt;h2&gt;
  
  
  The icing on the cake: Rewards!
&lt;/h2&gt;

&lt;p&gt;Who doesn't love rewards, right? Contribute to one issue and grab some exclusive Amplication stickers. Solve three, and you'll be the proud owner of an Amplication T-shirt. And for those who are up for a real challenge, we have 10+ Premium Issues lined up, each carrying a special prize of up to $500 in gift cards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Premium Issues:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We invite experienced developers who are willing to take on a challenge and build something that can help thousands of developers.&lt;/p&gt;

&lt;p&gt;We've put together 10+ premium issues, including new development for our robust plugin system. Each issue in this category is associated with a gift card worth up to $500 that you can claim, or you can choose to donate a similar amount to an open-source project of your choice.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you are qualified and have the relevant knowledge, select the plugin you want to work on and feel free to share any background information about your skill set.&lt;/li&gt;
&lt;li&gt;Once the issue is assigned to you, we will invite you to a private Discord channel where you can talk with our team members, ask for help, and share your progress on the task.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To keep work moving and avoid making other community members wait for issues, an issue with no activity for three days may be reassigned to another community member.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RcyBUJrx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/celebrating-hacktoberfest-2023-with-amplication/0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RcyBUJrx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/celebrating-hacktoberfest-2023-with-amplication/0.png" alt="" width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;
  
  
  How to get started?
&lt;/h2&gt;

&lt;p&gt;So, how do you dive in? It's simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Visit our &lt;a href="https://github.com/amplication/amplication/issues"&gt;GitHub repo and look for an issue&lt;/a&gt; that interests you.&lt;/li&gt;
&lt;li&gt;Drop a comment asking us to assign it to you.&lt;/li&gt;
&lt;li&gt;Once assigned, get cracking!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Read the guidelines here: &lt;a href="https://github.com/amplication/amplication/issues/7026"&gt;https://github.com/amplication/amplication/issues/7026&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Don’t wait, join the parade!
&lt;/h2&gt;

&lt;p&gt;Getting involved is simple. Our &lt;a href="https://amplication.com/discord"&gt;Discord server&lt;/a&gt; is the place to be. Oh, and don't forget, our Hacktoberfest kickoff event is on October 5th, &lt;a href="https://www.youtube.com/watch?v=hY5cfZxKSxg"&gt;live on our YouTube channel&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Date: Oct 5th, 2023&lt;br&gt;
Time: 5 PM CEST&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eZc7k6oP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/celebrating-hacktoberfest-2023-with-amplication/1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eZc7k6oP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/celebrating-hacktoberfest-2023-with-amplication/1.png" alt="" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;I'm buzzing with excitement, and I hope you are, too. Let's make Hacktoberfest 2023 legendary!&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>github</category>
      <category>node</category>
      <category>hacktoberfest23</category>
    </item>
    <item>
      <title>Distributed Tracing and OpenTelemetry Guide</title>
      <dc:creator>Daniele Iasella</dc:creator>
      <pubDate>Fri, 29 Sep 2023 06:26:07 +0000</pubDate>
      <link>https://forem.com/amplication/distributed-tracing-and-opentelemetry-guide-8b6</link>
      <guid>https://forem.com/amplication/distributed-tracing-and-opentelemetry-guide-8b6</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F0.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Microservices have become popular for modern web applications since they provide many benefits over traditional monolithic architectures. However, microservices are not a silver bullet; they also have a fair share of challenges. For example, debugging and troubleshooting errors in microservices can be challenging since tracking the request flow across multiple services is difficult.&lt;/p&gt;

&lt;p&gt;That's where distributed tracing and &lt;a href="https://opentelemetry.io/docs/what-is-opentelemetry/" rel="noopener noreferrer"&gt;OpenTelemetry&lt;/a&gt; come in. OpenTelemetry is an Observability framework designed to create and manage telemetry data like traces, metrics, and logs from distributed systems. So, in this article, I will take you through the steps of using OpenTelemetry within a Node.js ecosystem to trace your microservices applications effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is distributed tracing?
&lt;/h2&gt;

&lt;p&gt;The complexity of microservices makes it difficult to track a request's path through multiple microservices. Distributed tracing is an observability technique used to track these requests across microservices. Think of it as a flashlight that illuminates the request flow across your system.&lt;/p&gt;

&lt;p&gt;Distributed tracing is beneficial for developers in many scenarios. For example, there can be a single microservice with a slow response time, slowing down the whole application. Tracing data lets you pinpoint the exact origin and easily troubleshoot the issue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of using distributed tracing:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Identifies performance bottlenecks.&lt;/li&gt;
&lt;li&gt;  Provides a comprehensive view of the system.&lt;/li&gt;
&lt;li&gt;  Provides insights into the dependencies between different services.&lt;/li&gt;
&lt;li&gt;  Identifies potential security vulnerabilities.&lt;/li&gt;
&lt;li&gt;  Supports both synchronous (gRPC, REST, GraphQL) and asynchronous (event sourcing, pub-sub) application architectures.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Components of distributed tracing
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;A typical distributed tracing system is built from the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Trace&lt;/strong&gt;: End-to-end path of a single user request as it moves through various services.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Span&lt;/strong&gt;: A single operation or unit of work within a distributed system. It captures information such as the start time, end time, and any metadata or annotations that help explain what is happening.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Context Propagation&lt;/strong&gt;: Passing contextual information between different services within a distributed system. It is essential for connecting spans to construct a complete trace of a request.&lt;/li&gt;
&lt;/ul&gt;
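&lt;p&gt;The relationship between these three components can be illustrated with a small, self-contained sketch. Note that this is &lt;em&gt;not&lt;/em&gt; the OpenTelemetry API; the type and function names below are invented purely to show how spans join a trace and how context travels between services via headers:&lt;/p&gt;

```typescript
// A minimal, self-contained sketch of the three components above.
// These names (SpanData, startSpan, inject, extract) are invented for
// illustration; they are NOT the OpenTelemetry API.

interface SpanData {
  traceId: string;       // shared by every span in one trace
  spanId: string;        // unique per operation
  parentSpanId?: string; // links spans into a tree
  name: string;
  startTime: number;
}

let nextId = 0;
const newId = () => `id-${++nextId}`;

// Start a span; if a parent is given, join its trace.
function startSpan(name: string, parent?: SpanData): SpanData {
  return {
    traceId: parent ? parent.traceId : newId(),
    spanId: newId(),
    parentSpanId: parent ? parent.spanId : undefined,
    name,
    startTime: Date.now(),
  };
}

// Context propagation: serialize the trace context into headers that
// travel with the outgoing request.
function inject(span: SpanData): { [header: string]: string } {
  return { 'x-trace-id': span.traceId, 'x-parent-span-id': span.spanId };
}

// The receiving service reconstructs the context from the headers.
function extract(headers: { [header: string]: string }): SpanData {
  return {
    traceId: headers['x-trace-id'],
    spanId: headers['x-parent-span-id'],
    name: 'remote-parent',
    startTime: 0,
  };
}

// Service A handles a request and calls Service B over the wire...
const requestSpan = startSpan('POST /shipments');
const headers = inject(requestSpan);
// ...and Service B continues the same trace from the incoming headers.
const dbSpan = startSpan('INSERT shipment', extract(headers));

console.log(dbSpan.traceId === requestSpan.traceId); // true: one trace spanning two services
```

&lt;p&gt;Real tracing systems work the same way, except the propagated headers follow the W3C Trace Context standard (the &lt;code&gt;traceparent&lt;/code&gt; header), and instrumentation libraries inject and extract them for you.&lt;/p&gt;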

&lt;p&gt;Since you now have a brief idea of what distributed tracing is, let's see how to implement distributed tracing with Node.js.&lt;/p&gt;

&lt;h2&gt;
  
  
  Instrumenting Node.js app with OpenTelemetry
&lt;/h2&gt;

&lt;p&gt;In this example, I will create 3 Node.js services (shipping, notification, and courier) using &lt;a href="https://amplication.com/" rel="noopener noreferrer"&gt;Amplication&lt;/a&gt;, add traces to all services, and show how to analyze trace data using &lt;a href="https://www.jaegertracing.io/" rel="noopener noreferrer"&gt;Jaeger&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Generating services using Amplication
&lt;/h3&gt;

&lt;p&gt;As the first step, you must create the Node.js services with &lt;a href="https://app.amplication.com/login" rel="noopener noreferrer"&gt;Amplication&lt;/a&gt;. In this example, I will be using three already created Prisma schemas. You can find those schemas in &lt;a href="https://github.com/overbit/otel-workshop" rel="noopener noreferrer"&gt;this GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you are ready with schemas, go to the Amplication dashboard and create a new Project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Then, select the project from the dashboard and connect the GitHub repository with Prisma schemas to that project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Now, you can start creating services. For that, return to the Amplication dashboard and click the &lt;strong&gt;Add Resources&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F4.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Then, enter the necessary information to create the service. In this case, I have named the service "&lt;strong&gt;courier gateway service&lt;/strong&gt;" and used the settings below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F5.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Git Repository&lt;/strong&gt;: I've used the GitHub repo, which I connected earlier.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F6.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;REST or GraphQL&lt;/strong&gt;: I've enabled both options to show the file structure generated by Amplication.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F7.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Repo type&lt;/strong&gt;: Monorepo.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F8.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Database&lt;/strong&gt;: PostgreSQL&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F9.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Authentication&lt;/strong&gt;: Included&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F10.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;It will take a few seconds to generate the service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F11.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;After that, you need to modify a few database settings to avoid collisions between databases sharing the same Docker service. For that, navigate to the &lt;strong&gt;Plugins&lt;/strong&gt; tab, select the &lt;strong&gt;PostgreSQL DB&lt;/strong&gt; plugin, and click the &lt;strong&gt;Settings&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F12.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;There, you will see a JSON file like below, and you need to update the &lt;strong&gt;dbName&lt;/strong&gt; property. Here, I have renamed it to &lt;strong&gt;courier&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F13.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Then, go back to the &lt;strong&gt;Entities&lt;/strong&gt; tab and import the courier Prisma schema to generate the entities related to the courier service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F14.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Once the schema is imported, you will see 2 new entities named &lt;strong&gt;Parcel&lt;/strong&gt; and &lt;strong&gt;Quote&lt;/strong&gt; in the &lt;strong&gt;Entities&lt;/strong&gt; tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F15.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Now, perform the same steps again for the other two services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F16.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F16.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  Step 2: Adding a Kafka integration
&lt;/h3&gt;

&lt;p&gt;In this example, I will use a Message Broker to communicate between these services. You can easily generate a Message Broker through Amplication by clicking the &lt;strong&gt;Add Resource&lt;/strong&gt; button and selecting the &lt;strong&gt;Message Broker&lt;/strong&gt; option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F17.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Then, go back to the shipping service and install the Kafka plugin to allow the shipping service to use the Message Broker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F18.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Then, go to the &lt;strong&gt;Connections&lt;/strong&gt; tab and select the &lt;strong&gt;Message pattern&lt;/strong&gt; as &lt;strong&gt;Send&lt;/strong&gt; to allow the shipping service to send messages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F19.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Similarly, go to the notification service and select &lt;strong&gt;Message pattern&lt;/strong&gt; as &lt;strong&gt;Receive&lt;/strong&gt; to subscribe to the Message Broker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F20.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  Step 3: Building the application
&lt;/h3&gt;

&lt;p&gt;Click the &lt;strong&gt;Commit change &amp;amp; build&lt;/strong&gt; button to finalize the changes. It will start the build process, generate the new files in the Git repo, and create a pull request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F21.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;Make sure to merge the pull request to the main branch to get the latest updates.&lt;/em&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Configuring Docker compose
&lt;/h3&gt;

&lt;p&gt;Each service generated by Amplication contains a separate Docker compose file. But, in this example, I want to share the same database with all services. Hence, I created a &lt;a href="https://github.com/overbit/otel-workshop/blob/main/docker-compose.yml" rel="noopener noreferrer"&gt;new Docker compose file&lt;/a&gt; by copying the content of the docker-compose files generated by Amplication.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

version: "3"
name: otel-workshop
services:
  # Shared DB for all services
  db:
    image: postgres:12
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin
    volumes:
      - postgres:/var/lib/postgresql/data

  # Jaeger
  jaeger-all-in-one:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"
      - "14268"
      - "14250"
  # Collector
  collector-gateway:
    image: otel/opentelemetry-collector:latest
    volumes:
      - ./collector-gateway.yml:/etc/collector-gateway.yaml
    command: ["--config=/etc/collector-gateway.yaml"]
    ports:
      - "1888:1888" # pprof extension
      - "13133:13133" # health_check extension
      - "4317:4317" # OTLP gRPC receiver
      - "4318:4318" # OTLP HTTP receiver
      - "55670:55679" # zpages extension
    depends_on:
      - jaeger-all-in-one

  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - "8080:8080"
    depends_on:
      - zookeeper
      - kafka
    environment:
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092
      KAFKA_CLUSTERS_0_ZOOKEEPER: zookeeper:2181
      KAFKA_CLUSTERS_0_JMXPORT: 9997

  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - "2181:2181"

  kafka:
    image: confluentinc/cp-kafka:7.3.1
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9997:9997"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_MESSAGE_MAX_BYTES: 10485760
      JMX_PORT: 9997
      KAFKA_JMX_OPTS: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka -Dcom.sun.management.jmxremote.rmi.port=9997
    healthcheck:
      test: nc -z localhost 9092 || exit -1
      start_period: 15s
      interval: 30s
      timeout: 10s
      retries: 10

volumes:
  postgres: ~


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I don't need to specify a collector gateway in the above configuration since I'm using &lt;strong&gt;jaeger-all-in-one&lt;/strong&gt;. However, I have specified a &lt;a href="https://github.com/ChameeraD/otel-example/blob/main/collector-gateway.yml" rel="noopener noreferrer"&gt;collector gateway&lt;/a&gt; to highlight the receiver's components and ports.&lt;/p&gt;
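&lt;p&gt;For reference, a minimal &lt;code&gt;collector-gateway.yml&lt;/code&gt; matching the ports above could look like the sketch below. This is an assumed configuration based on the standard OpenTelemetry Collector format; the exporter endpoint points at the &lt;strong&gt;jaeger-all-in-one&lt;/strong&gt; container, and you should check the linked file for the exact version used here.&lt;/p&gt;

```yaml
# collector-gateway.yml (sketch; verify against the linked repository)
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # matches the 4317 port mapping above
      http:
        endpoint: 0.0.0.0:4318   # matches the 4318 port mapping above

exporters:
  jaeger:
    endpoint: jaeger-all-in-one:14250
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [jaeger]
```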

&lt;p&gt;Run the &lt;code&gt;docker-compose up --detach&lt;/code&gt; command to start the containers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Configuring services for local development
&lt;/h3&gt;

&lt;p&gt;Now, you need to set up all 3 services for local development. For that, you just need to follow the instructions given in the &lt;strong&gt;README.md&lt;/strong&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F22.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F22.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;You don't need to run the&lt;/em&gt; &lt;code&gt;npm run docker:dev&lt;/code&gt; &lt;em&gt;command since Docker is already running.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once all the databases are initialized and dependencies are installed, start each service using the &lt;code&gt;npm run start:watch&lt;/code&gt; command and the courier-gateway-service-admin using the &lt;code&gt;npm run start&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note&lt;/em&gt;&lt;/strong&gt;&lt;em&gt;: You need to update the ports of each service in the&lt;/em&gt; &lt;code&gt;.env&lt;/code&gt; &lt;em&gt;files to avoid clashes between the services.&lt;/em&gt;&lt;/p&gt;
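&lt;p&gt;For example, each service's &lt;code&gt;.env&lt;/code&gt; file could assign a distinct port. The variable name and values below are illustrative; check the generated &lt;code&gt;.env&lt;/code&gt; files for the actual keys used by your services.&lt;/p&gt;

```
# shipping-service/.env
PORT=3000

# notification-service/.env
PORT=3001

# courier-gateway-service/.env
PORT=3002
```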

&lt;h3&gt;
  
  
  Step 6: Creating a Parcel through admin view
&lt;/h3&gt;

&lt;p&gt;You can easily create a new Parcel by logging into the admin view.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F23.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F24.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F24.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  Step 7: Connecting the services
&lt;/h3&gt;

&lt;p&gt;First, navigate to the shipping service and install axios using &lt;code&gt;npm install axios&lt;/code&gt; command. Then, add the below code to the &lt;strong&gt;shipping-service/src/shipment/shipment.service.ts&lt;/strong&gt; file to get parcel details. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Injectable&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@nestjs/common&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;PrismaService&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../prisma/prisma.service&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ShipmentServiceBase&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./base/shipment.service.base&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Prisma&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Shipment&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@prisma/client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;axios&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;KafkaProducerService&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../kafka/kafka.producer.service&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ShippingEvent&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./shipping.event&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;MyMessageBrokerTopics&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../kafka/topics&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Injectable&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ShipmentService&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;ShipmentServiceBase&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;prisma&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PrismaService&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;kafkaProducerService&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;KafkaProducerService&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prisma&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;create&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nx"&gt;Prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ShipmentCreateArgs&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SelectSubset&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ShipmentCreateArgs&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Shipment&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;accessToken&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//localhost:3002/api/login , {&lt;/span&gt;
      &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;admin&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;admin&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;parcels&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//localhost:3002/api/parcels ,&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nl"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{},&lt;/span&gt;
        &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nl"&gt;Authorization&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Bearer&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;accessToken&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;randomParcel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;floor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;random&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;parcels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;shipment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;create&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;price&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;parcels&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;randomParcel&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;price&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ShippingEvent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Shipment&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;shipment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;CustomerId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1b2c&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kafkaProducerService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;emitMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nx"&gt;MyMessageBrokerTopics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ShipmentCreateV1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;shipment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;shipment&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Step 8: Creating a client app
&lt;/h3&gt;

&lt;p&gt;Before we start instrumenting, let's create a client application to fetch shipment data. This can be a simple Node.js project with a &lt;strong&gt;main.js&lt;/strong&gt; file containing the code below.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="c1"&gt;// main.js&lt;/span&gt;

&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use strict&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;axios&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:3004/api/shipments&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;numberOfRequests&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;makeRequest&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;requestId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;main&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;numberOfRequests&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;makeRequest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Response&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Step 9: Adding tracing
&lt;/h3&gt;

&lt;p&gt;Create a new file named &lt;strong&gt;tracing.js&lt;/strong&gt; in the same directory as the &lt;strong&gt;main.js&lt;/strong&gt; file. Then, install the OpenTelemetry dependencies using the command below:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

npm install @opentelemetry/sdk-node \
  @opentelemetry/api \
  @opentelemetry/resources \
  @opentelemetry/semantic-conventions \
  @opentelemetry/instrumentation-http \
  @opentelemetry/sdk-trace-base \
  @opentelemetry/exporter-trace-otlp-http \
  @opentelemetry/propagator-b3



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the below code to the &lt;strong&gt;tracing.js&lt;/strong&gt; file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

const {
  BasicTracerProvider,
  SimpleSpanProcessor,
} = require("@opentelemetry/sdk-trace-base");
const { Resource } = require("@opentelemetry/resources");
const {
  SemanticResourceAttributes,
} = require("@opentelemetry/semantic-conventions");
const { trace } = require("@opentelemetry/api");
const {
  OTLPTraceExporter,
} = require("@opentelemetry/exporter-trace-otlp-http");
const { NodeSDK } = require("@opentelemetry/sdk-node");
const { HttpInstrumentation } = require("@opentelemetry/instrumentation-http");
const { B3Propagator } = require("@opentelemetry/propagator-b3");

const exporter = new OTLPTraceExporter({});

const getTracer = () =&amp;gt; {
  return trace.getTracer("default");
};

const sdk = new NodeSDK({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: "fake-client-app",
    [SemanticResourceAttributes.SERVICE_VERSION]: "0.1.0",
  }),
  spanProcessor: new SimpleSpanProcessor(exporter),
  traceExporter: exporter,
  instrumentations: [new HttpInstrumentation()],
  textMapPropagator: new B3Propagator(),
});

sdk.start();

module.exports = { getTracer };



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To start tracing, import the above file into the &lt;strong&gt;main.js&lt;/strong&gt; file and make a few modifications. The updated &lt;strong&gt;main.js&lt;/strong&gt; file will look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"use strict";
const { getTracer } = require("./tracing");
const axios = require("axios");
const { trace } = require("@opentelemetry/api");

const tracer = getTracer("fake-client");

const url = "http://localhost:3004/api/shipments";
const numberOfRequests = 1;

const makeRequest = async (requestId) =&amp;gt; {
  return tracer.startActiveSpan("makeRequests", async (span) =&amp;gt; {
    span.updateName(`makeRequests-${requestId}`);
    const result = await axios.post(url);
    span.end();
    return result;
  });
};

tracer.startActiveSpan("main", async (span) =&amp;gt; {
  for (let i = 0; i &amp;lt; numberOfRequests; i++) {
    const res = await makeRequest(i);
    console.log("Response", res.data);
  }
  span.end();
});


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, you can run the client application with the &lt;code&gt;node main.js&lt;/code&gt; command and monitor the trace data in Jaeger.&lt;/p&gt;
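&lt;p&gt;If you don't already have Jaeger running, one common way to start it locally is the all-in-one Docker image (ports per the Jaeger docs: 16686 serves the UI, and 4318 is the OTLP/HTTP endpoint the exporter above targets by default; adjust to your setup):&lt;/p&gt;

```
docker run -d --name jaeger \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
```

&lt;p&gt;Then open &lt;code&gt;http://localhost:16686&lt;/code&gt; to browse the traces.&lt;/p&gt;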

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F25.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F25.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F26.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;That's it. You have successfully created a Node.js-based microservices application using Amplication, configured tracing, and monitored trace data through Jaeger. You can find the complete code example on &lt;a href="https://github.com/amplication/otel-workshop/tree/main" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, and watch the video below to understand the code used for tracing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 10: Adding tracing to the generated services
&lt;/h3&gt;

&lt;p&gt;As Amplication now supports OpenTelemetry through a plugin, we will leverage the plugin to integrate all the services without much effort.&lt;/p&gt;

&lt;p&gt;Go to each service starting from the shipping service and install the OpenTelemetry plugin.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fdistributed-tracing-and-open-telemetry-guide%2F27.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Click the &lt;strong&gt;Commit change &amp;amp; build&lt;/strong&gt; button to finalize the changes. It will start the build process again, generate the new files and update existing ones in the Git repo, and create/update a pull request.&lt;/p&gt;

&lt;p&gt;Now try to perform new requests as before and observe the tracing data in Jaeger!&lt;/p&gt;

&lt;h2&gt;
  
  
  Watch Webinar
&lt;/h2&gt;

&lt;p&gt;I ran a live workshop on Distributed Tracing and OpenTelemetry a few weeks ago. You can watch it here: &lt;a href="https://www.youtube.com/watch?v=Pu-HiD2QksI" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=Pu-HiD2QksI&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Best practices to follow
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  Prioritize critical paths and high-impact services.&lt;/li&gt;
&lt;li&gt;  Use consistent and meaningful naming conventions for spans and services.&lt;/li&gt;
&lt;li&gt;  Ensure that trace context is propagated across service boundaries. This typically involves adding trace headers to HTTP requests or message headers.&lt;/li&gt;
&lt;li&gt;  Use tags and annotations to &lt;a href="https://youtu.be/Pu-HiD2QksI?si=1WsJNHO5PWUCrB2V&amp;amp;t=3056" rel="noopener noreferrer"&gt;add additional metadata&lt;/a&gt; to spans.&lt;/li&gt;
&lt;li&gt;  Implement adaptive sampling strategies that adjust the sampling rate based on the service's load and error rates.&lt;/li&gt;
&lt;li&gt;  Automatically capture and log errors.&lt;/li&gt;
&lt;li&gt;  Retain trace data for an appropriate period.&lt;/li&gt;
&lt;/ul&gt;
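&lt;p&gt;To illustrate the adaptive-sampling practice above, here is a minimal sketch (not a real OpenTelemetry sampler; the function name and thresholds are invented) that raises the sampling rate when the observed error rate spikes:&lt;/p&gt;

```javascript
// Illustrative adaptive sampler: keeps a low baseline sampling rate,
// but samples everything when the observed error rate rises above a
// threshold. All names and thresholds here are hypothetical.
function createAdaptiveSampler({ baseRate = 0.1, errorThreshold = 0.05 } = {}) {
  let requests = 0;
  let errors = 0;

  return {
    // Record the outcome of each request so the error rate stays current.
    record(isError) {
      requests += 1;
      if (isError) errors += 1;
    },
    // Decide whether the next trace should be sampled.
    shouldSample() {
      const errorRate = requests === 0 ? 0 : errors / requests;
      // Under elevated error rates, sample everything to aid debugging.
      const rate = errorRate > errorThreshold ? 1.0 : baseRate;
      return Math.random() < rate;
    },
  };
}
```

&lt;p&gt;A production version would plug into your tracer's sampler interface instead of being called by hand, but the load- and error-driven rate adjustment is the core idea.&lt;/p&gt;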

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This guide provided an overview of implementing tracing for Node.js-based microservices applications. As you can see, enabling tracing for your application requires little effort, but it can save you a lot of troubleshooting and debugging time. Thank you for reading.&lt;/p&gt;

</description>
      <category>opentelemetry</category>
      <category>backend</category>
      <category>microservices</category>
      <category>node</category>
    </item>
    <item>
      <title>The Complete Microservices Guide</title>
      <dc:creator>Muly Gottlieb</dc:creator>
      <pubDate>Thu, 21 Sep 2023 08:53:59 +0000</pubDate>
      <link>https://forem.com/amplication/the-complete-microservices-guide-5d64</link>
      <guid>https://forem.com/amplication/the-complete-microservices-guide-5d64</guid>
      <description>&lt;h2&gt;
  
  
  Introduction to Microservices
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why Microservices?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://amplication.com/blog/an-introduction-to-microservices" rel="noopener noreferrer"&gt;Microservices&lt;/a&gt; have emerged as a popular architectural approach for designing and building software systems for several compelling reasons and advantages. It is a design approach that involves dividing applications into multiple distinct and independent services called "microservices," which offers several benefits, including the autonomy of each service, making it easier to maintain and test in isolation over monolithic architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F0.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Figure 1: A sample microservice-based architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Figure 1 depicts a simple microservice-based architecture showcasing the services' independent, isolated nature. Each entity belonging to the application is isolated into its own service. For example, the UserService, OrderService, and NotificationService each focus on a different part of the business.&lt;/p&gt;

&lt;p&gt;The overall system is split into services that are driven by independent teams that use their own tech stacks and are even scaled independently.&lt;/p&gt;

&lt;p&gt;In a nutshell, each service handles its specific business domain. Therefore, the question arises: "How do you split an application into microservices?" This is where microservices meet Domain-Driven Design (DDD).&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Domain-Driven Design?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://blog.bitsrc.io/demystifying-domain-driven-design-ddd-in-modern-software-architecture-b57e27c210f7" rel="noopener noreferrer"&gt;Domain-Driven Design (DDD)&lt;/a&gt; is an approach to software development that emphasizes modeling software based on the domain it serves. &lt;/p&gt;

&lt;p&gt;It involves understanding and modeling the domain or problem space of the application, fostering close collaboration between domain experts and software developers. This collaboration creates a shared understanding of the domain and ensures the developed software aligns closely with its intricacies.&lt;/p&gt;

&lt;p&gt;This means microservices are not only about picking a tech stack for your app. Before you build your app, you'll have to understand the domain you are working with. This will let you know the unique business processes being executed in your organization, thus making it easy to split up the application into tiny microservices.&lt;/p&gt;

&lt;p&gt;Doing so creates a distributed architecture where your services no longer have to be deployed together to a single target but instead are deployed separately and can be deployed to multiple targets.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are Distributed Services?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.splunk.com/en_us/blog/learn/distributed-systems.html" rel="noopener noreferrer"&gt;Distributed services&lt;/a&gt; refer to a software architecture and design approach where various application components, modules, or functions are distributed across multiple machines or nodes within a network.&lt;/p&gt;

&lt;p&gt;Modern computing commonly uses this approach to improve scalability, availability, and fault tolerance. As shown in Figure 1, microservices are naturally distributed services as each service is isolated from the others and runs in its own instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Microservices Architecture?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Microservices and Infrastructure
&lt;/h3&gt;

&lt;p&gt;Microservices architecture places a significant focus on infrastructure, as the way microservices are deployed and managed directly impacts the effectiveness and scalability of the system.&lt;/p&gt;

&lt;p&gt;There are several ways in which microservices architecture addresses infrastructure considerations.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Containerization:&lt;/strong&gt; Microservices are often packaged as containers, like &lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt;, that encapsulate an application and its dependencies, ensuring consistency between development, testing, and production environments. Containerization simplifies deployment and makes it easier to manage infrastructure resources.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Orchestration:&lt;/strong&gt; Microservices are typically deployed and managed using container orchestration platforms like &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;. Kubernetes automates the deployment, scaling, and management of containerized applications. It ensures that microservices are distributed across infrastructure nodes efficiently and can recover from failures.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Service Discovery:&lt;/strong&gt; Microservices need to discover and communicate with each other dynamically. &lt;a href="https://devopscube.com/open-source-service-discovery/" rel="noopener noreferrer"&gt;Service discovery&lt;/a&gt; tools like &lt;a href="https://etcd.io/" rel="noopener noreferrer"&gt;etcd&lt;/a&gt;, &lt;a href="https://www.consul.io/" rel="noopener noreferrer"&gt;Consul&lt;/a&gt;, or Kubernetes built-in service discovery mechanisms help locate and connect to microservices running on different nodes within the infrastructure.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Scalability:&lt;/strong&gt; Microservices architecture emphasizes horizontal scaling, where additional microservice instances can be added as needed to handle increased workloads. Infrastructure must support the dynamic allocation and scaling of resources based on demand.&lt;/li&gt;
&lt;/ol&gt;
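&lt;p&gt;To make the service-discovery point concrete, here is a purely illustrative in-memory registry sketch in JavaScript (real deployments would rely on etcd, Consul, or Kubernetes' built-in DNS; all names and addresses below are invented):&lt;/p&gt;

```javascript
// Toy service registry: services register their instance URLs, and
// clients look up a live address before each call. Purely illustrative.
class ServiceRegistry {
  constructor() {
    this.services = new Map(); // service name -> array of instance URLs
  }

  // Called by each service instance when it starts up.
  register(name, url) {
    const instances = this.services.get(name) || [];
    instances.push(url);
    this.services.set(name, instances);
  }

  // Naive client-side load balancing: pick a random registered instance.
  lookup(name) {
    const instances = this.services.get(name) || [];
    if (instances.length === 0) {
      throw new Error(`No instances registered for ${name}`);
    }
    return instances[Math.floor(Math.random() * instances.length)];
  }
}
```

&lt;p&gt;Real discovery systems add what this sketch omits: health checks, automatic deregistration of dead instances, and a replicated store so the registry itself is not a single point of failure.&lt;/p&gt;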

&lt;h3&gt;
  
  
  How to build a microservice?
&lt;/h3&gt;

&lt;p&gt;The first step in building a microservice is breaking down an application into a set of services. Breaking a monolithic application into microservices involves a process of decomposition where you identify discrete functionalities within the monolith and refactor them into separate, independent microservices.&lt;/p&gt;

&lt;p&gt;This process requires careful planning and consideration of various factors, as discussed below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Analyze the Monolith:&lt;/strong&gt; Understand the existing monolithic application thoroughly, including its architecture, dependencies, and functionality.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Identify Business Capabilities:&lt;/strong&gt; Determine the monolith's distinct business capabilities or functionalities. These could be features, modules, or services that can be separated logically.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Define Service Boundaries:&lt;/strong&gt; Establish clear boundaries for each microservice. Identify what each microservice will be responsible for and ensure that these responsibilities are cohesive and well-defined.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Data Decoupling:&lt;/strong&gt; Examine data dependencies and decide how data will be shared between microservices. You may need to introduce data replication, data synchronization, and separate databases for each microservice.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Communication Protocols:&lt;/strong&gt; Define communication protocols and APIs between microservices. RESTful APIs, gRPC, or message queues are commonly used for inter-service communication.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Separate Codebases:&lt;/strong&gt; Create different codebases for each microservice. This may involve extracting relevant code and functionality from the monolith into &lt;a href="https://earthly.dev/blog/monorepo-vs-polyrepo/" rel="noopener noreferrer"&gt;individual repositories or as packages in a monorepo strategy&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Decompose the Database:&lt;/strong&gt; If the monolithic application relies on a single database, you may need to split it into smaller databases, or into separate schemas within one database, one for each microservice.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Implement Service Logic:&lt;/strong&gt; Develop the business logic for each microservice. Ensure that each microservice can function independently and handle its specific responsibilities.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Integration and Testing:&lt;/strong&gt; Create thorough integration tests to ensure that the microservices can communicate and work together as expected. Use continuous integration (CI) and automated testing to maintain code quality.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Documentation:&lt;/strong&gt; Maintain comprehensive documentation for each microservice, including API documentation and usage guidelines for developers who will interact with the services.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After you've broken down your services, it's important to establish clear standards for how your microservices will communicate.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do microservices communicate with each other?
&lt;/h3&gt;

&lt;p&gt;Communication across services is an important aspect to consider when building microservices. Whichever approach you adopt, it's essential to ensure that such &lt;a href="https://amplication.com/blog/communication-in-a-microservice-architecture" rel="noopener noreferrer"&gt;communication is efficient and robust&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There are two main categories of microservices-based communication:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Inter-service communication&lt;/li&gt;
&lt;li&gt; Intra-service communication&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Inter-Service Communication&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Inter-service communication in microservices refers to how individual microservices communicate and interact within a microservices architecture.&lt;/p&gt;

&lt;p&gt;Microservices can employ two fundamental messaging approaches to interact with other microservices in &lt;a href="https://learn.microsoft.com/en-us/azure/architecture/microservices/design/interservice-communication" rel="noopener noreferrer"&gt;inter-service communication&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Synchronous Communication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One approach to inter-service communication is synchronous communication: a service invokes another service through a protocol like HTTP or gRPC and waits until that service responds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Source: &lt;a href="https://www.theserverside.com/answer/Synchronous-vs-asynchronous-microservices-communication-patterns" rel="noopener noreferrer"&gt;https://www.theserverside.com/answer/Synchronous-vs-asynchronous-microservices-communication-patterns&lt;/a&gt;&lt;/p&gt;
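
&lt;p&gt;A minimal sketch of synchronous communication, with a fake injected fetchJson standing in for real HTTP calls (service URLs and fields are hypothetical):&lt;/p&gt;

```javascript
// Synchronous (request/response) communication: the caller blocks (awaits)
// until the other service answers. fetchJson is injected so the sketch is
// self-contained; in practice it would wrap fetch() or an HTTP client.
async function getOrderWithCustomer(fetchJson, orderId) {
  const order = await fetchJson(`http://order-service/orders/${orderId}`);
  // The caller cannot proceed until the customer service responds.
  const customer = await fetchJson(`http://customer-service/customers/${order.customerId}`);
  return { ...order, customer };
}

// Fake in-process "services" standing in for real HTTP endpoints.
const fakeFetchJson = async (url) =>
  url.includes('orders')
    ? { id: 'o1', customerId: 'c1' }
    : { id: 'c1', name: 'Ada' };

getOrderWithCustomer(fakeFetchJson, 'o1').then((o) => console.log(o.customer.name));
```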

&lt;p&gt;&lt;strong&gt;Asynchronous Message Passing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The second approach is asynchronous message passing. Here, a service dispatches a message without waiting for an immediate response; one or more services then process the message asynchronously, at their own pace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Source: &lt;a href="https://www.theserverside.com/answer/Synchronous-vs-asynchronous-microservices-communication-patterns" rel="noopener noreferrer"&gt;https://www.theserverside.com/answer/Synchronous-vs-asynchronous-microservices-communication-patterns&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Intra-Service Communication&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Intra-service communication in microservices refers to the interactions and communication within a single microservice, encompassing the various components, modules, and layers that make up that microservice.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Simply put - unlike inter-service communication, which involves communication between different microservices, intra-service communication focuses on the internal workings of a single microservice.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Whichever approach you adopt, you have to strike the right balance: excessive communication between your microservices can lead to "chatty" microservices.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is chattiness in microservices communication?
&lt;/h3&gt;

&lt;p&gt;"&lt;a href="https://thenewstack.io/are-your-microservices-overly-chatty/" rel="noopener noreferrer"&gt;Chattiness&lt;/a&gt;" refers to a situation where there is excessive or frequent communication between microservices.&lt;/p&gt;

&lt;p&gt;It implies that microservices are making many network requests or API calls to each other, which can have several implications and challenges, such as performance overhead, increased complexity, scalability issues, and network traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Figure: A chatty microservice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As shown above, the UserService communicates excessively with the OrderService and with itself, which could lead to performance and scaling challenges due to the sheer number of network calls.&lt;/p&gt;
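
&lt;p&gt;One common way to reduce chattiness is to batch many fine-grained calls into a single coarse-grained one. A small sketch (service functions and call counters are illustrative):&lt;/p&gt;

```javascript
// Reducing chattiness by batching: instead of one request per user ID,
// the caller issues a single batched request. Names are illustrative.
let networkCalls = 0;

// Chatty version: N round-trips for N users.
function getUsersChatty(ids, fetchUser) {
  return ids.map((id) => fetchUser(id));
}

// Batched version: one round-trip for N users.
function getUsersBatched(ids, fetchUsersBulk) {
  return fetchUsersBulk(ids);
}

const fetchUser = (id) => { networkCalls += 1; return { id }; };
const fetchUsersBulk = (ids) => { networkCalls += 1; return ids.map((id) => ({ id })); };

getUsersChatty(['u1', 'u2', 'u3'], fetchUser);       // 3 network calls
const chattyCalls = networkCalls;
networkCalls = 0;
getUsersBatched(['u1', 'u2', 'u3'], fetchUsersBulk); // 1 network call
console.log(chattyCalls, networkCalls);
```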

&lt;h3&gt;
  
  
  What is the usage of middleware in microservices?
&lt;/h3&gt;

&lt;p&gt;Middleware plays a crucial role in microservices architecture by providing services, tools, and components that facilitate communication, integration, and management of microservices. Let's discuss a few of its uses.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Inter-Service Communication:&lt;/strong&gt; Middleware provides communication channels and protocols that enable microservices to communicate with each other. This can include message brokers like &lt;a href="https://www.rabbitmq.com/" rel="noopener noreferrer"&gt;RabbitMQ&lt;/a&gt; or &lt;a href="https://kafka.apache.org/" rel="noopener noreferrer"&gt;Apache Kafka&lt;/a&gt;, RPC frameworks like &lt;a href="https://grpc.io/" rel="noopener noreferrer"&gt;gRPC&lt;/a&gt;, or RESTful APIs.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Service Discovery:&lt;/strong&gt; Service discovery middleware helps microservices locate and connect to other microservices dynamically, especially in dynamic or containerized environments. Tools like Consul, etcd, or Kubernetes service discovery features aid in this process.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;API Gateway:&lt;/strong&gt; An API gateway is a middleware component that serves as an entry point for external clients to access microservices. It can handle authentication, authorization, request routing, and aggregation of responses from multiple microservices.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Security and Authentication:&lt;/strong&gt; Middleware components often provide security features like authentication, authorization, and encryption to ensure secure communication between microservices. Tools like OAuth2, JWT, and API security gateways are used to enhance security.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Distributed Tracing:&lt;/strong&gt; Middleware for distributed tracing like &lt;a href="https://www.jaegertracing.io/" rel="noopener noreferrer"&gt;Jaeger&lt;/a&gt; and &lt;a href="https://zipkin.io/" rel="noopener noreferrer"&gt;Zipkin&lt;/a&gt; helps monitor and trace requests as they flow through multiple microservices, aiding in debugging, performance optimization, and understanding the system's behavior.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Monitoring and Logging:&lt;/strong&gt; Middleware often includes monitoring and logging components like &lt;a href="https://www.elastic.co/elastic-stack" rel="noopener noreferrer"&gt;ELK Stack&lt;/a&gt;, &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt;, and &lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt; to track the health, performance, and behavior of microservices. This aids in troubleshooting and performance optimization.&lt;/li&gt;
&lt;/ul&gt;
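
&lt;p&gt;To illustrate the API gateway idea from the list above, here is a minimal sketch of a prefix-based routing table. Service names and ports are made up; real gateways add authentication, rate limiting, and response aggregation on top of this:&lt;/p&gt;

```javascript
// A minimal API-gateway routing table: map a path prefix to a backend
// microservice. The targets are hypothetical internal addresses.
const routes = [
  { prefix: '/users', target: 'http://user-service:3001' },
  { prefix: '/orders', target: 'http://order-service:3002' },
];

function route(path) {
  const match = routes.find((r) => path.startsWith(r.prefix));
  if (!match) return null;
  // Strip the prefix and forward the rest of the path to the backend.
  return match.target + path.slice(match.prefix.length);
}

console.log(route('/orders/42')); // http://order-service:3002/42
console.log(route('/unknown'));   // null
```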

&lt;h2&gt;
  
  
  Building Microservices with Node.js
&lt;/h2&gt;

&lt;p&gt;Building microservices with Node.js has become a popular choice due to Node.js's non-blocking, event-driven architecture and extensive ecosystem of libraries and frameworks.&lt;/p&gt;

&lt;p&gt;If you want to build Microservices with Node.js, there is a way to significantly accelerate this process by using &lt;a href="https://www.youtube.com/watch?v=ko4GjiUeJ_w&amp;amp;t=4s" rel="noopener noreferrer"&gt;Amplication&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.amplication.com/" rel="noopener noreferrer"&gt;Amplication&lt;/a&gt; is a free and open-source tool designed for backend development. It expedites the creation of Node.js applications by automatically generating fully functional apps with all the boilerplate code - just add in your own business logic. It simplifies your development workflow and enhances productivity, allowing you to concentrate on your primary goal: crafting outstanding applications. Learn More &lt;a href="https://www.youtube.com/watch?v=f-HsNzPRtqI" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding the basics of REST API
&lt;/h3&gt;

&lt;p&gt;REST (Representational State Transfer) is an architectural style for designing networked applications. &lt;a href="https://www.redhat.com/en/topics/api/what-is-a-rest-api" rel="noopener noreferrer"&gt;REST APIs&lt;/a&gt; (Application Programming Interfaces) are a way to expose the functionality of a system or service to other applications through HTTP requests.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to create a REST API endpoint?
&lt;/h3&gt;

&lt;p&gt;There are many ways to develop REST APIs. Here, we are using Amplication, where it takes just a few clicks.&lt;/p&gt;

&lt;p&gt;The screenshots below walk through the flow of creating REST APIs.&lt;/p&gt;

&lt;p&gt;1. Click on "Add New Project"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F4.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;2. Give your new project a descriptive name&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F5.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;3. Click "Add Resource" and select "Service"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F6.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;4. Name your service&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F7.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;5. Connect to a git repository where Amplication will create a PR with your generated code&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F8.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;6. Select the options you want to generate for your service. In particular, which endpoints to generate - REST and/or GraphQL&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F9.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;7. Choose your microservices repository pattern - monorepo or polyrepo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F10.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;8. Select which database you want for your service&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F11.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;9. Choose if you want to manually create a data model or start from a template (you can also &lt;a href="https://docs.amplication.com/how-to/import-prisma-schema/" rel="noopener noreferrer"&gt;import your existing DB Schema&lt;/a&gt; later on)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F12.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;10. You can select or skip adding authentication for your service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F13.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;11. Yay! We are done with our service creation using REST APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F14.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;12. Next, you will be redirected to the following screen showing you the details and controls for your new service&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F15.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;13. After you click "Commit Changes &amp;amp; Build", a Pull-Request is created in your repository, and you can now see the code that Amplication generated for you:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F16.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F16.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F17.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F18.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F19.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  How can you connect a frontend with a microservice?
&lt;/h3&gt;

&lt;p&gt;Connecting the frontend with the service layer involves making HTTP requests to the API endpoints exposed by the service layer. Those API endpoints will usually be RESTful or GraphQL endpoints.&lt;/p&gt;

&lt;p&gt;This allows the frontend to interact with and retrieve data from the backend service.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://medium.com/mobilepeople/backend-for-frontend-pattern-why-you-need-to-know-it-46f94ce420b0" rel="noopener noreferrer"&gt;BFF&lt;/a&gt; (Backend For Frontend) pattern is an architectural design pattern used to develop microservices-based applications, particularly those with diverse client interfaces such as web, mobile, and other devices. The BFF pattern involves creating a separate backend service for each frontend application or client type.&lt;/p&gt;

&lt;p&gt;Consider the user-facing application as consisting of two components: a client-side application located outside your system's boundaries and a server-side component known as the BFF (Backend For Frontend) within your system's boundaries. The BFF is a variation of the API Gateway pattern but adds an extra layer between microservices and each client type. Instead of a single entry point, it introduces multiple gateways.&lt;/p&gt;

&lt;p&gt;This approach enables you to create custom APIs tailored to the specific requirements of each client type, like mobile, web, desktop, voice assistant, etc. It eliminates the need to consolidate everything in a single location. Moreover, it keeps your backend services "clean" from specific API concerns that are client-type-specific: Your backend services can serve "pure" domain-driven APIs, and all the client-specific translations are located in the BFF(s). The diagram below illustrates this concept.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F20.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Source: &lt;a href="https://medium.com/mobilepeople/backend-for-frontend-pattern-why-you-need-to-know-it-46f94ce420b0" rel="noopener noreferrer"&gt;https://medium.com/mobilepeople/backend-for-frontend-pattern-why-you-need-to-know-it-46f94ce420b0&lt;/a&gt;&lt;/p&gt;
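
&lt;p&gt;A minimal sketch of the BFF idea, with faked service responses and hypothetical field names: the mobile BFF and the web BFF call the same domain services but shape the payload differently per client type.&lt;/p&gt;

```javascript
// BFF sketch: two backends-for-frontends over the same domain services.
// The service responses are faked so the example is self-contained.
const productService = {
  get: (id) => ({ id, name: 'Espresso', description: 'A long description...', price: 3 }),
};
const reviewService = { forProduct: (id) => [{ rating: 5 }, { rating: 4 }] };

// Mobile clients get a trimmed payload to save bandwidth.
function mobileBffProduct(id) {
  const p = productService.get(id);
  return { id: p.id, name: p.name, price: p.price };
}

// Web clients get the full product plus aggregated review data.
function webBffProduct(id) {
  const p = productService.get(id);
  const reviews = reviewService.forProduct(id);
  const avg = reviews.reduce((s, r) => s + r.rating, 0) / reviews.length;
  return { ...p, averageRating: avg };
}

console.log(mobileBffProduct('p1')); // no description field
console.log(webBffProduct('p1').averageRating); // 4.5
```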

&lt;h2&gt;
  
  
  Microservices + Security
&lt;/h2&gt;

&lt;p&gt;Security is a crucial aspect of building microservices: only authorized users should have access to your APIs. So, how can you secure your microservices?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choose an Authentication Mechanism&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Secure your microservices through token-based authentication (JWT or OAuth 2.0), API keys, or session-based authentication, depending on your application's requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Centralized Authentication Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Consider using a centralized authentication service if you have multiple microservices. This allows users to authenticate once and obtain tokens for subsequent requests. If you are using an API Gateway, Authentication and Authorization will usually be centralized there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secure Communication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure that communication between microservices and clients is encrypted using TLS (usually HTTPS) or other secure protocols to prevent eavesdropping and data interception.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implement Authentication Middleware&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each microservice should include authentication middleware to validate incoming requests. Verify tokens or credentials and extract user identity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Token Validation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For token-based authentication, validate JWT or OAuth 2.0 tokens using libraries or frameworks that support token validation, and make sure tokens are checked for expiration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User and Role Management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Implement user and role management within each microservice or use an external identity provider to manage user identities and permissions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role-Based Access Control (RBAC)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Implement RBAC to define roles and permissions. Assign roles to users and use them to control access to specific microservice endpoints or resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authorization Middleware&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Include authorization middleware in each microservice to enforce access control based on user roles and permissions. This middleware should check whether the authenticated user has the necessary permissions to perform the requested action.&lt;/p&gt;
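
&lt;p&gt;A minimal sketch of such authorization middleware in an Express-style shape. Role and permission names are made up, and req.user is assumed to have been set by an earlier authentication middleware:&lt;/p&gt;

```javascript
// RBAC sketch: roles map to permissions, and a middleware factory
// enforces a required permission on each endpoint.
const rolePermissions = {
  admin: ['orders:read', 'orders:write'],
  viewer: ['orders:read'],
};

function hasPermission(user, permission) {
  return user.roles.some((role) =>
    (rolePermissions[role] || []).includes(permission)
  );
}

// Express-style middleware: allow the request through or reply 403.
function requirePermission(permission) {
  return function (req, res, next) {
    if (hasPermission(req.user, permission)) return next();
    res.status(403).json({ error: 'Forbidden' });
  };
}

console.log(hasPermission({ roles: ['viewer'] }, 'orders:read'));  // true
console.log(hasPermission({ roles: ['viewer'] }, 'orders:write')); // false
```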

&lt;p&gt;&lt;strong&gt;Fine-Grained Access Control&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Consider implementing fine-grained access control to control access to individual resources or data records within a microservice based on user attributes, roles, or ownership.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In general, it's essential to consider the &lt;a href="https://owasp.org/API-Security/editions/2023/en/0x11-t10/" rel="noopener noreferrer"&gt;Top 10 OWASP API Security Risks&lt;/a&gt; and implement preventive strategies that help overcome these API Security risks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;💡&lt;strong&gt;Pro Tip:&lt;/strong&gt; When you build your microservices with Amplication, many of the above concerns are already taken care of automatically - each generated service comes with built-in authentication and authorization middleware. You can manage roles and permissions for your APIs easily from within the Amplication interface, and the generated code will already include the relevant middleware decorators (Guards) to enforce the authorization based on what you defined in Amplication.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing Microservices
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Unit testing
&lt;/h3&gt;

&lt;p&gt;Unit testing microservices involves testing individual components or units of a microservice in isolation to ensure they function correctly.&lt;/p&gt;

&lt;p&gt;These tests are designed to verify the behavior of your microservices' smallest testable parts, such as functions, methods, or classes, without external dependencies.&lt;/p&gt;

&lt;p&gt;For example, in the microservice we built earlier, we can unit test the OrderService by mocking its database and external API calls, ensuring that the OrderService is error-free on its own.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integration testing
&lt;/h3&gt;

&lt;p&gt;Integration testing involves verifying that different microservices work together correctly when interacting as part of a larger system.&lt;/p&gt;

&lt;p&gt;These tests ensure that the integrated microservices can exchange data and collaborate effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying Microservices to a Production Environment
&lt;/h2&gt;

&lt;p&gt;Deploying microservices to a production environment requires careful planning and execution to ensure your application's stability, reliability, and scalability. Let's discuss some of the key steps and considerations attached to that.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Containerization and Orchestration:&lt;/strong&gt; First, containerize the microservices using technologies like Docker. Containers provide consistency across development, testing, and production environments. Use container orchestration platforms like Kubernetes to manage and deploy containers at scale.&lt;/li&gt;
&lt;li&gt;  💡 Did you know? Amplication provides a Dockerfile for containerizing your services out of the box and has a &lt;a href="https://github.com/amplication/plugins/tree/master/plugins/deployment-helm-chart" rel="noopener noreferrer"&gt;plugin to create a Helm Chart&lt;/a&gt; for your services to ease container orchestration.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Infrastructure as Code (IaC):&lt;/strong&gt; Define your infrastructure using code (IaC) to automate the provisioning of resources such as virtual machines, load balancers, and databases. Tools like &lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;, &lt;a href="https://www.pulumi.com/" rel="noopener noreferrer"&gt;Pulumi&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/cloudformation/" rel="noopener noreferrer"&gt;AWS CloudFormation&lt;/a&gt; can help.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Continuous Integration and Continuous Deployment (CI/CD):&lt;/strong&gt; Implement a CI/CD pipeline to automate microservices' build, testing, and deployment. This pipeline should include unit tests, integration tests, and automated deployment steps.&lt;/li&gt;
&lt;li&gt;  💡Did you know? Amplication has a &lt;a href="https://github.com/amplication/plugins/tree/master/plugins/ci-github-actions" rel="noopener noreferrer"&gt;plugin for GitHub Actions&lt;/a&gt; that creates an initial CI pipeline for your service!&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Environment Configuration:&lt;/strong&gt; Maintain separate environment configurations like development, staging, and production to ensure consistency and minimize human error during deployments.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Secret Management:&lt;/strong&gt; Securely store sensitive configuration data and secrets using tools like &lt;a href="https://aws.amazon.com/secrets-manager/" rel="noopener noreferrer"&gt;AWS Secrets Manager&lt;/a&gt; or &lt;a href="https://www.vaultproject.io/" rel="noopener noreferrer"&gt;HashiCorp Vault&lt;/a&gt;. Avoid hardcoding secrets in code or configuration files.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Monitoring and Logging:&lt;/strong&gt; Implement monitoring and logging solutions to track the health and performance of your microservices in real time. Tools like &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt;, &lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt;, and ELK Stack (Elasticsearch, Logstash, Kibana) can help.&lt;/li&gt;
&lt;li&gt;  💡 You guessed it! Amplication has a &lt;a href="https://github.com/amplication/plugins/tree/master/plugins/observability-opentelemetry" rel="noopener noreferrer"&gt;plugin for OpenTelemetry&lt;/a&gt; that instruments your generated services with tracing and sends tracing to &lt;a href="https://www.jaegertracing.io/" rel="noopener noreferrer"&gt;Jaeger&lt;/a&gt;!&lt;/li&gt;
&lt;/ul&gt;
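&lt;p&gt;To make the secret-management point concrete, here is a minimal sketch of a config loader that reads secrets from environment variables instead of hardcoding them. The variable names (&lt;code&gt;DB_PASSWORD&lt;/code&gt;, &lt;code&gt;API_KEY&lt;/code&gt;) are illustrative; in production, a tool like AWS Secrets Manager or Vault would inject these values into the environment.&lt;/p&gt;

```javascript
// Hypothetical config loader: secrets come from the environment at startup,
// never from source code or committed config files.
function loadConfig(env = process.env) {
  const required = ["DB_PASSWORD", "API_KEY"]; // illustrative names
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    // Fail fast at boot rather than at first use
    throw new Error(`Missing required secrets: ${missing.join(", ")}`);
  }
  return {
    dbPassword: env.DB_PASSWORD,
    apiKey: env.API_KEY,
    // Non-secret settings can have safe defaults
    port: Number(env.PORT || 3000),
  };
}
```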

&lt;h3&gt;
  
  
  &lt;strong&gt;Scaling microservices&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.opslevel.com/resources/detailed-guide-to-how-to-scale-microservices" rel="noopener noreferrer"&gt;Scaling microservices&lt;/a&gt; involves adjusting the capacity of your microservice-based application to handle increased loads, traffic, or data volume while maintaining performance, reliability, and responsiveness. Scaling can be done vertically (scaling up) and horizontally (scaling out). A key benefit of a microservices architecture, compared to a monolithic one, is the ability to individually scale each microservice - allowing a cost-efficient operation (usually, high-load only affects specific microservices and not the entire application).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vertical Scaling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Vertical scaling refers to upgrading the resources of an individual microservice instance, such as CPU and memory, to manage higher workloads effectively.&lt;/p&gt;

&lt;p&gt;The main upside of this approach is simplicity: there is no need to worry about running multiple instances of the same microservice or how to coordinate and synchronize them, and it does not involve changing your architecture or code. The downsides are: a) vertical scaling is eventually limited (there is only so much RAM and CPU you can provision in a single instance) and gets expensive very quickly; b) it might involve some downtime, since in many cases vertically scaling an instance means provisioning a new, bigger instance and then migrating your microservice to run on it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F21.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Source: &lt;a href="https://data-flair.training/blogs/scaling-in-microsoft-azure/" rel="noopener noreferrer"&gt;https://data-flair.training/blogs/scaling-in-microsoft-azure/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Horizontal Scaling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Horizontal scaling involves adding more microservice instances to distribute the workload and handle increased traffic. This is usually the recommended scaling approach, since it's cheaper (in most cases) and allows near-"infinite" scale. In addition, scaling back down is very easy in this model: just remove some of the instances. It does, however, require some architectural planning to ensure that multiple instances of the same microservice "play nicely" together in terms of data consistency, coordination and synchronization, session stickiness, and not locking shared resources.&lt;/p&gt;
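&lt;p&gt;The idea of spreading load across identical instances can be sketched with a tiny round-robin picker (illustrative only; in practice a load balancer or service mesh does this for you):&lt;/p&gt;

```javascript
// Minimal round-robin sketch: each call returns the next instance in turn,
// spreading requests evenly across replicas of the same microservice.
function createRoundRobin(instances) {
  let next = 0;
  return function pick() {
    const instance = instances[next % instances.length];
    next++;
    return instance;
  };
}
```

Scaling out is then just appending to the instance list; scaling in is removing from it.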

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F22.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F22.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Source: &lt;a href="https://data-flair.training/blogs/scaling-in-microsoft-azure/" rel="noopener noreferrer"&gt;https://data-flair.training/blogs/scaling-in-microsoft-azure/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Challenges and Best Practices
&lt;/h2&gt;

&lt;p&gt;Microservices architecture offers numerous benefits but comes with its own challenges.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Challenge:&lt;/strong&gt; Scaling individual microservices while maintaining overall system performance can be challenging.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Best Practices:&lt;/strong&gt; Implement auto-scaling based on real-time metrics. Use container orchestration platforms like Kubernetes for efficient scaling. Conduct performance testing to identify bottlenecks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Challenge:&lt;/strong&gt; Ensuring security across multiple microservices and managing authentication and authorization can be complex.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Best Practices:&lt;/strong&gt; Implement a zero-trust security model with proper authentication like OAuth 2.0 and authorization like RBAC. Use API gateways for security enforcement. Regularly update and patch dependencies to address security vulnerabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment and DevOps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Challenge:&lt;/strong&gt; Coordinating deployments and managing the CI/CD pipeline for a large number of microservices can be challenging.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Best Practices:&lt;/strong&gt; Implement a robust CI/CD pipeline with automated testing and deployment processes. Use containerization like Docker and container orchestration like Kubernetes for consistency and scalability. Make sure that each microservice is completely independent in terms of deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Versioning and API Management&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Challenge:&lt;/strong&gt; Managing API versions and ensuring backward compatibility is crucial when multiple services depend on APIs.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Best Practices:&lt;/strong&gt; Use versioned APIs and introduce backward-compatible changes whenever possible. Implement API gateways for version management and transformation.&lt;/li&gt;
&lt;/ul&gt;
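&lt;p&gt;A minimal sketch of URL-based versioning with an additive (backward-compatible) v2 might look like this; the handlers and response shapes are purely illustrative:&lt;/p&gt;

```javascript
// Illustrative version table: v2 adds fields without removing v1's,
// keeping the change backward compatible for existing clients.
const handlers = {
  v1: (user) => ({ id: user.id, name: user.name }),
  v2: (user) => ({ id: user.id, name: user.name, links: [] }),
};

// Sketch of URL-based version routing, e.g. "/v2/users/42" -> the v2 handler.
function route(path, user) {
  const match = path.match(/^\/(v\d+)\//);
  const version = match ? match[1] : "v1"; // unversioned clients get v1
  const handler = handlers[version];
  if (!handler) throw new Error(`Unsupported API version: ${version}`);
  return handler(user);
}
```

In a real system an API gateway typically performs this mapping, so individual services don't each reimplement it.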

&lt;p&gt;&lt;strong&gt;Monitoring and Debugging&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Challenge:&lt;/strong&gt; Debugging and monitoring microservices across a distributed system is difficult. It's much easier to follow the flow of a request in a monolith compared to tracking a request that is handled in a distributed manner.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Best Practices:&lt;/strong&gt; Implement centralized logging and use distributed tracing tools like &lt;a href="https://zipkin.io/" rel="noopener noreferrer"&gt;Zipkin&lt;/a&gt; and &lt;a href="https://www.jaegertracing.io/" rel="noopener noreferrer"&gt;Jaeger&lt;/a&gt; for visibility into requests across services. Implement health checks and metrics for monitoring.&lt;/li&gt;
&lt;/ul&gt;
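&lt;p&gt;As a rough sketch of the kind of data a health check or metrics scrape exposes, here is a tiny in-process registry built only on Node's standard &lt;code&gt;process&lt;/code&gt; API (the metric names are illustrative):&lt;/p&gt;

```javascript
// Tiny in-process metrics registry: the sort of data a /health endpoint
// or a Prometheus scrape would expose for each microservice.
const counters = new Map();

function increment(name, by = 1) {
  counters.set(name, (counters.get(name) || 0) + by);
}

function healthSnapshot() {
  return {
    status: "ok",
    uptimeSeconds: Math.floor(process.uptime()),
    memoryMB: Math.round(process.memoryUsage().heapUsed / 1024 / 1024),
    counters: Object.fromEntries(counters),
  };
}
```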

&lt;h2&gt;
  
  
  Handling Database Transactions
&lt;/h2&gt;

&lt;p&gt;Handling database transactions in a microservices architecture can be complex due to the distributed nature of the system.&lt;/p&gt;

&lt;p&gt;Microservices often have their own databases, and ensuring data consistency and maintaining transactional integrity across services requires careful planning and the use of &lt;a href="https://medium.com/nerd-for-tech/transactions-in-distributed-systems-b5ceea869d7d" rel="noopener noreferrer"&gt;appropriate strategies&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-complete-microservices-guide%2F23.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Figure: Database per Microservice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As shown above, giving each microservice its own database lets you model data to fit that service's needs and scale each database in and out independently. This way, you have more flexibility in handling DB-level bottlenecks.&lt;/p&gt;

&lt;p&gt;Therefore, when you're building microservices, having a separate database per service is often recommended. But there are certain areas that you should consider when doing so:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Microservices and Data Isolation:&lt;/strong&gt; Each microservice should have its own database. This isolation allows services to manage data independently without interfering with other services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Distributed Transactions:&lt;/strong&gt; Avoid distributed transactions whenever possible. They can be complex to implement and negatively impact system performance. Use them as a last resort when no other option is viable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Eventual Consistency:&lt;/strong&gt; Embrace the &lt;a href="https://www.keboola.com/blog/eventual-consistency" rel="noopener noreferrer"&gt;eventual consistency model&lt;/a&gt;. In a microservices architecture, it's often acceptable for data to be temporarily inconsistent across services but eventually converge to a consistent state.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Adopt The Saga Pattern:&lt;/strong&gt; Implement the &lt;a href="https://medium.com/design-microservices-architecture-with-patterns/saga-pattern-for-microservices-distributed-transactions-7e95d0613345" rel="noopener noreferrer"&gt;Saga pattern&lt;/a&gt; to manage long-running and multi-step transactions across multiple microservices. Sagas consist of local transactions and compensating actions to maintain consistency.&lt;/p&gt;
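&lt;p&gt;The essence of an orchestrated saga can be sketched as follows (shown synchronously for brevity; real sagas run asynchronous steps across services, and all step names here are hypothetical):&lt;/p&gt;

```javascript
// Hedged sketch of an orchestrated saga: each step pairs a local transaction
// (action) with a compensating action. If a later step fails, the completed
// steps are undone in reverse order so the system converges to a consistent state.
function runSaga(steps, ctx) {
  const completed = [];
  try {
    for (const step of steps) {
      step.action(ctx);
      completed.push(step);
    }
    return { ok: true };
  } catch (err) {
    for (const step of completed.reverse()) {
      step.compensate(ctx); // best-effort rollback of each finished step
    }
    return { ok: false, error: err.message };
  }
}

// Illustrative steps for a hypothetical order flow
const orderSteps = [
  {
    action: (ctx) => { ctx.inventoryReserved = true; },
    compensate: (ctx) => { ctx.inventoryReserved = false; },
  },
  {
    action: (ctx) => {
      if (ctx.paymentFails) throw new Error("payment declined");
      ctx.charged = true;
    },
    compensate: (ctx) => { ctx.charged = false; },
  },
];
```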

&lt;h2&gt;
  
  
  DevOps with Microservices
&lt;/h2&gt;

&lt;p&gt;DevOps practices are essential when working with microservices to ensure seamless collaboration between development and operations teams, automate processes, and maintain the agility and reliability required in a microservices architecture.&lt;/p&gt;

&lt;p&gt;Here are some critical considerations for DevOps with microservices:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Automation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Continuous Integration (CI)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Implement CI pipelines that automatically build, test, and package microservices whenever code changes are pushed to version control repositories.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuous Delivery/Deployment (CD)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Automate the deployment process of new microservice versions to different environments like preview, staging, and production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use IaC tools like Terraform, Pulumi, or AWS CloudFormation to automate the provisioning and configuration of infrastructure resources, including containers, VMs, network resources, storage resources, etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Containerization&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Use containerization technologies like Docker to package microservices and their dependencies consistently. This ensures that microservices can run consistently across different environments. Implement container orchestration platforms like Kubernetes or Docker Swarm to automate containerized microservices' deployment, scaling, and management.&lt;/p&gt;
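&lt;p&gt;As a point of reference, a typical (illustrative) multi-stage Dockerfile for a Node.js microservice looks like this; the build script and &lt;code&gt;dist/main.js&lt;/code&gt; entry point are assumptions about the project layout:&lt;/p&gt;

```dockerfile
# Build stage: install all dependencies and compile the service
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: production dependencies only, smaller and safer image
FROM node:18-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
USER node
CMD ["node", "dist/main.js"]
```

The same image then runs identically in development, staging, and production, which is the consistency benefit described above.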

&lt;h3&gt;
  
  
  &lt;strong&gt;Microservices Monitoring&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Implement monitoring and observability tools to track the health and performance of microservices in real time. Collect metrics, logs, and traces to diagnose issues quickly. Use tools like Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), and distributed tracing like Zipkin or Jaeger for comprehensive monitoring.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Deployment Strategies&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Implement deployment strategies like &lt;a href="https://www.redhat.com/en/topics/devops/what-is-blue-green-deployment" rel="noopener noreferrer"&gt;blue-green deployments&lt;/a&gt; and &lt;a href="https://martinfowler.com/bliki/CanaryRelease.html" rel="noopener noreferrer"&gt;canary releases&lt;/a&gt; to minimize downtime and risks when rolling out new versions of microservices. Automate rollbacks if issues are detected after a deployment, ensuring a fast recovery process.&lt;/p&gt;
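&lt;p&gt;The routing decision behind a canary release can be sketched as a deterministic percentage split (illustrative only; in practice your load balancer, service mesh, or feature-flag system does this):&lt;/p&gt;

```javascript
// Stable 32-bit hash so a given user always lands on the same side of the split.
function hashString(s) {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0;
  }
  return h;
}

// Send `canaryPercent`% of users to the new version, the rest to stable.
function routeVersion(userId, canaryPercent) {
  return hashString(userId) % 100 < canaryPercent ? "canary" : "stable";
}
```

Gradually raising the percentage (and automatically rolling back if error rates rise) is the core of a canary rollout.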

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;In this comprehensive guide, we've delved into the world of microservices, exploring the concepts, architecture, benefits, and challenges of this transformative software development approach. Microservices promise agility, scalability, and improved maintainability, but they also require careful planning, design, and governance to realize their full potential. By breaking down monolithic applications into smaller, independently deployable services, organizations can respond to changing business needs faster and more flexibly.&lt;/p&gt;

&lt;p&gt;We've discussed topics such as building microservices with Node.js, handling security in microservices, testing microservices, and the importance of well-defined APIs. DevOps practices are crucial in successfully implementing microservices, facilitating automation, continuous integration, and continuous delivery. Monitoring and observability tools help maintain system health, while security practices protect sensitive data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As you embark on your microservices journey, remember there is no one-size-fits-all solution. Microservices should be tailored to your organization's specific needs and constraints. When adopting this architecture, consider factors like team culture, skill sets, and existing infrastructure.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Good luck with building your perfect microservices architecture, and I really hope you will find this blog post useful in doing so.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>backend</category>
      <category>microservices</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Understanding and Preventing Memory Leaks in Node.js</title>
      <dc:creator>Muly Gottlieb</dc:creator>
      <pubDate>Fri, 15 Sep 2023 14:48:29 +0000</pubDate>
      <link>https://forem.com/amplication/understanding-and-preventing-memory-leaks-in-nodejs-3ipd</link>
      <guid>https://forem.com/amplication/understanding-and-preventing-memory-leaks-in-nodejs-3ipd</guid>
      <description>&lt;h1&gt;
  
  
  Memory leaks in Node.js???
&lt;/h1&gt;

&lt;p&gt;In my early career, I spent a lot of years writing code in C and C++. Memory management in those languages was a real art, and disasters like memory leaks, dangling pointers, and segmentation faults were no strangers to my life. Then, at some point, the world, along with my career, all moved to memory-managed languages like Java, .NET, Python, and of course - the inevitable JavaScript. At first, coming from C/C++, the concept of automatic memory management and garbage collection seemed too good to be true - can I &lt;em&gt;really&lt;/em&gt; stop worrying about memory leaks?? I'll take two of those, please.&lt;/p&gt;

&lt;p&gt;But as is often the case in life, if something is too good to be true - it might indeed not be (completely) true. Automatic memory management is great, but it's not a foolproof silver bullet, and memory leaks are still lurking out there even when you write code in languages that possess this trait - like JavaScript. This means that for us, the Node.js developers, there are still concerns to be aware of regarding memory leaks.&lt;/p&gt;

&lt;p&gt;Let's dive into memory leaks in Node.js and see how they can occur, how to identify them, and, of course, some tips on how to avoid them.&lt;/p&gt;

&lt;h1&gt;
  
  
  How do memory leaks occur?
&lt;/h1&gt;

&lt;p&gt;Memory leaks occur when blocks of memory that are no longer needed are not released, because the garbage collector still sees them as reachable. Ultimately, this causes the application's overall memory utilization to increase monotonically, even without any demanding workload, which can significantly degrade the application's performance in the long run.&lt;/p&gt;

&lt;p&gt;And, to make things worse, these memory blocks can grow in size, causing your app to run out of memory, which eventually causes your application to crash.&lt;/p&gt;
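&lt;p&gt;One simple way to observe this growth from inside the process is Node's built-in &lt;code&gt;process.memoryUsage()&lt;/code&gt;. The leak heuristic below is a rough illustration, not a definitive detector:&lt;/p&gt;

```javascript
// Sample the current heap usage in megabytes.
function heapUsedMB() {
  return process.memoryUsage().heapUsed / 1024 / 1024;
}

// Hypothetical heuristic: samples taken over time that only ever grow,
// with total growth above a threshold, are a classic memory-leak signal.
function isLikelyLeaking(samples, minGrowthMB = 1) {
  const monotonic = samples.every((v, i) => i === 0 || v > samples[i - 1]);
  return monotonic && samples[samples.length - 1] - samples[0] > minGrowthMB;
}
```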

&lt;p&gt;Therefore, it's essential to understand what memory leaks are and how they can occur in Node.js apps so that you can troubleshoot such issues quickly and fix them before a user experiences a problem in your app.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does Garbage Collection happen in Node.js?
&lt;/h2&gt;

&lt;p&gt;Before diving in any further, it's essential to understand the process of &lt;a href="https://blog.risingstack.com/node-js-at-scale-node-js-garbage-collection/"&gt;Garbage Collection in Node.js&lt;/a&gt;. This is crucial when troubleshooting memory leaks in Node.js.&lt;/p&gt;

&lt;p&gt;Node.js uses Chrome's &lt;a href="https://nodejs.dev/en/learn/the-v8-javascript-engine/#:~:text=V8%20is%20the%20name%20of,are%20provided%20by%20the%20browser."&gt;V8 runtime&lt;/a&gt; to run its JavaScript code. V8 manages the memory your JavaScript code uses in two main places:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Stack: The stack holds static data, method and function frames, primitive values, and pointers to stored objects. As usual with Stacks (and in particular call stacks), they get pushed and popped in a LIFO order, and popping from the stack automatically frees the relevant stack memory. Nothing for us to worry about :)&lt;/li&gt;
&lt;li&gt; Heap: The heap keeps the objects referenced in the stack's pointers. Since everything in JavaScript is an object, all dynamic data, like arrays, closures, sets, and all of your class instances, are stored in the heap. As a result, the heap becomes the biggest block of memory used in your Node.js app, and it’s where Garbage Collection (GC) will ultimately happen.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Why is Garbage Collection Expensive in Node.js?
&lt;/h2&gt;

&lt;p&gt;Node.js needs to periodically run its garbage collector, which must traverse the heap's object graph to identify unreachable (unreferenced) objects. As the heap (and the reference tree) grows, this becomes an expensive computational task.&lt;/p&gt;

&lt;p&gt;Since JavaScript is single-threaded, this will interrupt the application flow until garbage collection is completed. That is the main reason why the GC process runs infrequently.&lt;/p&gt;

&lt;h1&gt;
  
  
  What causes a memory leak in Node.js?
&lt;/h1&gt;

&lt;p&gt;With this information, it's safe to assume that most memory leaks in Node.js happen when expensive objects are stored in the heap but are no longer used. Ultimately, memory leaks are caused by the coding habits you adopt and by your overall understanding of how Node.js works.&lt;/p&gt;

&lt;p&gt;Let's look at four common cases of memory leaks in Node.js so we know what patterns we want to avoid (or minimize).&lt;/p&gt;

&lt;h2&gt;
  
  
  Memory Leak 01 - Use of Global Variables
&lt;/h2&gt;

&lt;p&gt;Global variables are a red flag in Node.js. They heavily contribute to memory leaks in your app if they're not handled correctly. For those of you who don't know, a global variable is a variable that's referenced by the root node. It's the equivalent of the Window object for JavaScript running in the browser.&lt;/p&gt;

&lt;p&gt;So, these global variables never cease to be referenced, and the garbage collector will never clean them up during your app's lifecycle; they keep their allocated memory for as long as the app runs. Therefore, if you're managing highly complex data structures or nested object hierarchies in the root of your app, your app has a high chance of being impacted by memory leaks.&lt;/p&gt;

&lt;p&gt;For example, if you're working with dynamic data structures, as shown below, your app will likely have memory leaks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Global variable holding a large array
global.myArray = [];

function addDataToGlobalArray(data) {
  // Push data into the global array
  global.myArray.push(data);
}

// Function to remove data from the global array
function removeDataFromGlobalArray() {
  // Pop data from the global array
  global.myArray.pop();
}

// Function to do some processing with the global array
function processData() {
  // Use the global array for some computation
  console.log(`Processing data with ${global.myArray.length} elements.`);
}

// Call functions to add and process data
addDataToGlobalArray("Item 1");
processData();

// Call functions to add and remove data
addDataToGlobalArray("Item 2");
removeDataFromGlobalArray();

// Call processData again
processData();

// The global.myArray variable is still in memory, even though it's no longer needed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Memory Leak 02 - Use of Multiple References
&lt;/h2&gt;

&lt;p&gt;The next issue is something we have all done at some point: using multiple references that point to the same object in the heap. Such issues are usually developer mistakes, where several variables end up referencing the same object.&lt;/p&gt;

&lt;p&gt;Clearing one variable is not enough: as long as any other reachable variable still points to the object, the garbage collector must keep it alive. For example, the code shown below is a classic scenario in which you can run into memory leaks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Define two objects with circular references
const obj1 = { name: "Object 1" };
const obj2 = { name: "Object 2" };

// Create circular references between obj1 and obj2
obj1.reference = obj2;
obj2.reference = obj1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that in V8, circular references alone do not prevent garbage collection: V8 uses a mark-and-sweep collector, so once neither &lt;code&gt;obj1&lt;/code&gt; nor &lt;code&gt;obj2&lt;/code&gt; is reachable from a root, the whole cycle is collected. The leak arises when any one reference into the cycle stays reachable (for example, from a global variable or a long-lived cache): that single reference keeps both objects, and everything they reference, in memory.&lt;/p&gt;
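&lt;p&gt;When you need to associate data with objects without keeping them alive, a &lt;code&gt;WeakMap&lt;/code&gt; helps: its entries do not prevent their keys from being garbage collected once nothing else references them. A small sketch:&lt;/p&gt;

```javascript
// Caching metadata about objects without pinning them in memory.
// A regular Map would keep every key alive forever; a WeakMap lets the GC
// reclaim a key (and its entry) once nothing else references that object.
const metadata = new WeakMap();

function tag(obj, info) {
  metadata.set(obj, info);
  return obj;
}

function getTag(obj) {
  return metadata.get(obj); // undefined if the object was never tagged
}
```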

&lt;h2&gt;
  
  
  Memory Leak 03 - Use of Closures
&lt;/h2&gt;

&lt;p&gt;Closures capture their surrounding context. When a closure holds a reference to a large object in the heap, it keeps that object in memory for as long as the closure itself is in use. For example, consider the snippet below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function createClosure() {
  const data = "I'm a variable captured in a closure";

  // Return a function that captures the 'data' variable
  return function() {
    console.log(data);
  };
}

// Create a closure by calling createClosure
const closure = createClosure();

// The closure still references 'data' from its outer scope
// Even though 'createClosure' has finished executing
closure();

// The 'data' variable is not eligible for garbage collection

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As shown above, the variable defined inside &lt;code&gt;createClosure()&lt;/code&gt; is used by the function that &lt;code&gt;createClosure()&lt;/code&gt; returns. And since JavaScript resolves variables through the lexical scope, &lt;code&gt;data&lt;/code&gt; will not be collected by the garbage collector for as long as the returned function is reachable. If you manage more complex or dynamic data inside a closure, this pattern is prone to memory leaks.&lt;/p&gt;
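&lt;p&gt;A simple mitigation is to drop the last reference to the closure once you're done with it, which makes both the function object and its captured variables eligible for collection:&lt;/p&gt;

```javascript
// Same setup as before: the closure captures 'data' from its outer scope.
function createClosure() {
  const data = "I'm a variable captured in a closure";
  return function () {
    return data;
  };
}

let closure = createClosure();
const message = closure(); // uses the captured data
closure = null; // the function object and 'data' can now be garbage collected
```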

&lt;h2&gt;
  
  
  Memory Leak 04 - Unmanaged use of Timers and Intervals
&lt;/h2&gt;

&lt;p&gt;If you're using &lt;code&gt;setTimeout&lt;/code&gt; or &lt;code&gt;setInterval&lt;/code&gt; in Node.js, you should know they are a very common source of memory leaks. Node.js will keep referencing the function object passed to &lt;code&gt;setTimeout&lt;/code&gt; or &lt;code&gt;setInterval&lt;/code&gt; for as long as the timer is not cleared. If you do not store the &lt;code&gt;id&lt;/code&gt; returned from &lt;code&gt;setTimeout&lt;/code&gt; and &lt;code&gt;setInterval&lt;/code&gt; in order to call &lt;code&gt;clearTimeout&lt;/code&gt; / &lt;code&gt;clearInterval&lt;/code&gt;, those function objects will stay referenced and won't get garbage collected. If, on top of that, you don't wisely manage the variables you create inside your function, you are prone to memory leaks.&lt;/p&gt;

&lt;p&gt;Consider this snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function thisWillLeak() {
  let numbers = [];
  return function() {
    numbers.push(Math.random());
  }
}

setInterval(thisWillLeak(), 2000);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the &lt;code&gt;numbers&lt;/code&gt; array will keep growing in memory forever and will not get garbage collected since the interval is never cleared. You should store the returned &lt;code&gt;timeoutId&lt;/code&gt;/&lt;code&gt;intervalId&lt;/code&gt; in a variable and clear it as soon as the timer is no longer needed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const thisWillNoLongerLeak = setInterval(thisWillLeak(), 2000);
// .... do some things with this Interval
clearInterval(thisWillNoLongerLeak);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  How can I identify a memory leak in Node.js?
&lt;/h1&gt;

&lt;p&gt;The snippets I provided in this article might make it seem like memory leaks are pretty easy to diagnose. But your codebase is not as simple as these examples and has a much higher line count. Therefore, if you try to find memory leaks by reviewing your codebase, you'll have to go through an unreasonable number of lines of code to find issues related to global scopes, closures, or any of the other points I've covered.&lt;/p&gt;

&lt;p&gt;Therefore, relying on tools specializing in debugging memory leaks in Node.js apps is best. Here are a few tools to help you detect memory leaks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tool 01 - node-inspector
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Aok3BgA_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/understanding-and-preventing-memory-leaks-in-nodejs/3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Aok3BgA_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/understanding-and-preventing-memory-leaks-in-nodejs/3.png" alt="" width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Figure: Node Inspector&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;node-inspector (&lt;a href="https://github.com/node-inspector/node-inspector"&gt;GitHub&lt;/a&gt; | &lt;a href="https://www.npmjs.com/package/node-inspector"&gt;NPM&lt;/a&gt;) lets you connect to a running app by running the &lt;code&gt;node-debug&lt;/code&gt; command. This command will load Node Inspector in your default browser. Node Inspector supports heap profiling and can be useful for debugging memory leak issues. Note, however, that the project is no longer maintained: since Node.js 6.3, the built-in &lt;code&gt;node --inspect&lt;/code&gt; flag provides the same DevTools-based debugging out of the box.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tool 02 - Chrome DevTools
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ijZiyY1T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/understanding-and-preventing-memory-leaks-in-nodejs/0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ijZiyY1T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/understanding-and-preventing-memory-leaks-in-nodejs/0.png" alt="" width="583" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Figure: Chrome DevTools&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The next option is to use a tool already built into your browser - &lt;a href="https://developer.chrome.com/docs/devtools/"&gt;Chrome DevTools&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Chrome DevTools lets you analyze the application memory in real-time and troubleshoot potential memory leaks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--thE3SjBl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/understanding-and-preventing-memory-leaks-in-nodejs/1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--thE3SjBl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/understanding-and-preventing-memory-leaks-in-nodejs/1.png" alt="" width="504" height="672"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Figure: A sample DevTool inspection&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Wrapping up
&lt;/h1&gt;

&lt;p&gt;In order to make sure your services are robust and won't crash, it's essential to look closely into your codebase and identify potential patterns that might cause memory leaks. If they remain untreated, your app's memory footprint will monotonically increase as the app grows, which could drastically impact app performance for your end users.&lt;/p&gt;

&lt;p&gt;So, do take note of the areas I mentioned above - Closures, Global Variables, Multiple/Circular References, Timeouts, and Intervals as these are the key areas that can cause memory leaks in your app.&lt;/p&gt;

&lt;p&gt;I hope that you will find this article helpful on your journey to make your services robust.&lt;/p&gt;

&lt;p&gt;If you are indeed all about making your Node.js microservices robust and coded to the highest standards, there is one more tool that can help you with that... 😉:&lt;/p&gt;

&lt;h1&gt;
  
  
  How can Amplication Help?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://amplication.com/"&gt;Amplication&lt;/a&gt; lets you auto-generate Node.js code for your microservices, enabling you to build high-quality apps with high-quality code that take extra precautions for the issues discussed above to ensure that your app will not cause any memory leaks (well, at least not in the boilerplate code we generate. The rest... is up to you 😊).&lt;/p&gt;

</description>
      <category>caching</category>
      <category>backend</category>
      <category>microservices</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How to Effectively Use Caching to Improve Microservices Performance</title>
      <dc:creator>Muly Gottlieb</dc:creator>
      <pubDate>Tue, 12 Sep 2023 15:21:12 +0000</pubDate>
      <link>https://forem.com/amplication/how-to-effectively-use-caching-to-improve-microservices-performance-21c1</link>
      <guid>https://forem.com/amplication/how-to-effectively-use-caching-to-improve-microservices-performance-21c1</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;In the dynamic landscape of modern software development, microservices have emerged as a powerful architectural paradigm, offering scalability, flexibility, and agility. However, maintaining optimal performance becomes a crucial challenge as microservices systems grow in complexity and scale. This is where caching becomes a key strategy to enhance microservices' efficiency.&lt;/p&gt;

&lt;p&gt;This article will dive into the art of leveraging caching techniques to their fullest potential and ultimately boost the performance of your microservices.&lt;/p&gt;

&lt;h1&gt;
  
  
  What are Microservices?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.writergate.com/editor/xshmqazeeq1s/6g3f0i7bu3i7" rel="noopener noreferrer"&gt;Microservices&lt;/a&gt; are a distinctive architectural strategy that partitions applications into compact, self-contained services, each tasked with a distinct business function.&lt;/p&gt;

&lt;p&gt;These services are crafted to operate autonomously, enabling simpler development, deployment, and scalability.&lt;/p&gt;

&lt;p&gt;This approach promotes agility, scalability, and effectiveness within software development.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Caching?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/caching/" rel="noopener noreferrer"&gt;Caching&lt;/a&gt; is a technique used in computer systems to store frequently accessed data or computation results in a temporary storage area called a "cache."&lt;/p&gt;

&lt;p&gt;The primary purpose of caching is to speed up data retrieval and improve system performance by reducing the need to repeat time-consuming operations, such as database queries or complex computations.&lt;/p&gt;

&lt;p&gt;Caching is widely used in various computing systems, including web browsers, databases, content delivery networks (CDNs), microservices, and many other applications. &lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;What are the Different Types of Caching Strategies?&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;There are different types of caching strategies. We will explore database caching, edge caching, API caching, and local caching.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Database caching&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.prisma.io/dataguide/managing-databases/introduction-database-caching" rel="noopener noreferrer"&gt;Database caching&lt;/a&gt; involves storing frequently accessed or computationally expensive data from a database in a cache to improve the performance and efficiency of data retrieval operations. Caching reduces the need to repeatedly query the database for the same data, which can be slow and resource-intensive. Instead, cached data is readily available in memory, leading to faster response times and lower load on the database. There are a few different database caching strategies. Let's discuss them.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Cache aside:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In a cache-aside setup, the database cache is positioned adjacent to the database itself. When the application needs specific data, it initially examines the cache. If the cache contains the required data (&lt;strong&gt;a cache hit&lt;/strong&gt;), the data is promptly delivered.&lt;/p&gt;

&lt;p&gt;Alternatively, if the cache lacks the necessary data (&lt;strong&gt;a cache miss&lt;/strong&gt;), the application will proceed to query the database. The application then stores the retrieved data in the cache, making it accessible for future queries. This strategy proves particularly advantageous for applications that heavily prioritize reading tasks. The below image depicts the steps in the cache-aside approach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fhow-to-use-caching-to-improve-microservices-peformance%2F0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fhow-to-use-caching-to-improve-microservices-peformance%2F0.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;(image source: &lt;a href="https://www.prisma.io/dataguide/managing-databases/introduction-database-caching" rel="noopener noreferrer"&gt;prisma.io&lt;/a&gt;)&lt;/p&gt;
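&lt;p&gt;A minimal cache-aside sketch in Node.js (illustrative and synchronous; a real cache such as Redis would be queried asynchronously, and a `Map` stands in for both stores here) looks like this:&lt;/p&gt;

```javascript
// The application checks the cache first and falls back to the
// database on a miss, then populates the cache itself.
const cache = new Map();
const database = new Map([['user:1', { id: 1, name: 'Ada' }]]);

function getUser(key) {
  if (cache.has(key)) {
    return { source: 'cache', value: cache.get(key) }; // cache hit
  }
  const value = database.get(key); // cache miss: query the database
  cache.set(key, value);           // store it for future queries
  return { source: 'database', value };
}

getUser('user:1'); // first call: a miss, served from the database
getUser('user:1'); // second call: a hit, served from the cache
```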

&lt;h3&gt;
  
  
  &lt;strong&gt;Read through:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In a read-through cache configuration, the cache is positioned between the application and the database, forming a linear connection. This approach ensures that the application exclusively communicates with the cache when performing read operations. The data is promptly provided if the cache contains the requested data (cache hit). In instances of cache misses, the cache will retrieve the missing data from the database and then return it to the application. However, the application continues to interact directly with the database for data write operations. The below image depicts the steps in the read-through approach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fhow-to-use-caching-to-improve-microservices-peformance%2F1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fhow-to-use-caching-to-improve-microservices-peformance%2F1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;(image source: &lt;a href="https://www.prisma.io/dataguide/managing-databases/introduction-database-caching" rel="noopener noreferrer"&gt;prisma.io&lt;/a&gt;)&lt;/p&gt;
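&lt;p&gt;The key difference from cache-aside is that the cache itself fetches missing data. A minimal sketch (illustrative and synchronous; the loader function stands in for the database query):&lt;/p&gt;

```javascript
// Read-through: the application only talks to the cache for reads;
// on a miss, the cache loads the data from the database itself.
function createReadThroughCache(loadFromDb) {
  const store = new Map();
  return {
    get(key) {
      if (!store.has(key)) {
        store.set(key, loadFromDb(key)); // cache miss: the cache fetches
      }
      return store.get(key);             // cache hit
    },
  };
}

const database = new Map([['product:7', { id: 7, price: 20 }]]);
const cache = createReadThroughCache((key) => database.get(key));

cache.get('product:7'); // miss: loaded from the database, then cached
cache.get('product:7'); // hit: served straight from the cache
```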

&lt;h3&gt;
  
  
  &lt;strong&gt;Write through:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Unlike the previous strategies we discussed, this strategy involves initially writing data to the cache instead of the database, and the cache promptly mirrors this write to the database. The setup can still be conceptualized similarly to the read-through strategy, forming a linear connection with the cache at the center. The below image depicts the steps in the write-through approach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fhow-to-use-caching-to-improve-microservices-peformance%2F2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fhow-to-use-caching-to-improve-microservices-peformance%2F2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;(image source: &lt;a href="https://www.prisma.io/dataguide/managing-databases/introduction-database-caching" rel="noopener noreferrer"&gt;prisma.io&lt;/a&gt;)&lt;/p&gt;
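&lt;p&gt;Sketched in code (illustrative; `Map`s stand in for the cache and the database), a write-through write updates the cache and immediately mirrors the write to the database, so both stay in sync:&lt;/p&gt;

```javascript
const database = new Map();
const store = new Map(); // the cache

function writeThrough(key, value) {
  store.set(key, value);    // 1. write to the cache first
  database.set(key, value); // 2. the cache promptly mirrors it to the DB
}

writeThrough('order:42', { status: 'paid' });
// Both the cache and the database now hold the same value.
```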

&lt;h3&gt;
  
  
  &lt;strong&gt;Write back:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The write-back approach functions nearly identically to the write-through strategy, with a single crucial distinction. In the write-back strategy, the application initiates the write directly to the cache, as in the write-through case. However, the cache doesn't promptly mirror the write to the database; instead, it performs the database write after a certain delay. The below image depicts the steps in the write-back approach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fhow-to-use-caching-to-improve-microservices-peformance%2F3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fhow-to-use-caching-to-improve-microservices-peformance%2F3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;(image source: &lt;a href="https://www.prisma.io/dataguide/managing-databases/introduction-database-caching" rel="noopener noreferrer"&gt;prisma.io&lt;/a&gt;)&lt;/p&gt;
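&lt;p&gt;A write-back sketch (illustrative; the deferred database write is modelled with an explicit `flush()`, where a real implementation might flush on a timer or once enough dirty entries accumulate):&lt;/p&gt;

```javascript
const database = new Map();
const store = new Map();  // the cache
const dirty = new Set();  // keys written to the cache but not yet the DB

function writeBack(key, value) {
  store.set(key, value);
  dirty.add(key); // the database write is deferred, not performed now
}

function flush() {
  // Later (e.g. on a timer), the cache writes the dirty entries back.
  for (const key of dirty) database.set(key, store.get(key));
  dirty.clear();
}

writeBack('session:9', { user: 9 }); // cached; the DB lags behind
flush();                             // the deferred DB write happens here
```

The delay buys write throughput at the cost of a window in which the database is behind the cache, which is why write-back is usually reserved for data that can tolerate loss on a crash.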

&lt;h3&gt;
  
  
  &lt;strong&gt;Write around:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A write-around caching approach can be integrated with either a cache-aside or a read-through strategy. In this setup, data is consistently written to the database, and retrieved data is directed to the cache. When a cache miss occurs, the application proceeds to access the database for reading and subsequently updates the cache to enhance future access. The below image depicts the steps in the write-around approach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fhow-to-use-caching-to-improve-microservices-peformance%2F4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fhow-to-use-caching-to-improve-microservices-peformance%2F4.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;(image source: &lt;a href="https://www.prisma.io/dataguide/managing-databases/introduction-database-caching" rel="noopener noreferrer"&gt;prisma.io&lt;/a&gt;)&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Edge caching&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://learn.microsoft.com/en-us/iis/media/iis-media-services/edge-caching-for-media-delivery" rel="noopener noreferrer"&gt;Edge caching&lt;/a&gt;, also known as content delivery caching, involves the storage of content and data at geographically distributed edge server locations closer to end users. This technique is used to improve the delivery speed and efficiency of web applications, APIs, and other online content. Edge caching reduces latency by serving content from servers located near the user, minimizing the distance data needs to travel across the internet backbone. This is mostly useful for static content like media, HTML, CSS, etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;API Caching&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://rapidapi.com/guides/api-caching" rel="noopener noreferrer"&gt;API caching&lt;/a&gt; involves the temporary storage of API responses to improve the performance and efficiency of interactions between clients and APIs. Caching API responses can significantly reduce the need for repeated requests to the API server, thereby reducing latency and decreasing the load on both the client and the server. This technique is particularly useful for improving the responsiveness of applications that rely heavily on external data sources through APIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Local caching&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Local caching, also known as client-side caching or browser caching, refers to the practice of storing data, files, or resources on the client's side (such as a user's device or web browser) to enhance the performance of web applications and reduce the need for repeated requests to remote servers. By storing frequently used data locally, local caching minimizes the latency associated with retrieving data from remote servers and contributes to faster page loads and improved user experiences.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;What are the Benefits of using Caching in Microservices?&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Utilizing caching in a microservices architecture can offer a multitude of benefits that contribute to improved performance, scalability, and efficiency. Here are some key advantages of incorporating caching into microservices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Enhanced Performance &amp;amp; Lower Latency:&lt;/strong&gt; Caching reduces the need to repeatedly fetch data from slower data sources, such as databases or external APIs. Cached data can be quickly retrieved from the faster cache memory, leading to reduced latency and faster response times for microservices.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Reduced Load on Data Sources:&lt;/strong&gt; By serving frequently requested data from the cache, microservices can alleviate the load on backend data sources. This ensures that databases and other resources are not overwhelmed with redundant requests, freeing up resources for other critical tasks.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Improved Scalability:&lt;/strong&gt; Caching allows microservices to handle increased traffic and load more effectively. With cached data, microservices can serve a larger number of requests without overloading backend systems, leading to better overall scalability.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Optimized Data Processing:&lt;/strong&gt; Microservices can preprocess and store frequently used data in the cache, allowing for more complex computations or transformations to be performed on cached data. This can result in more efficient data processing pipelines.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Offline Access and Resilience:&lt;/strong&gt; In scenarios where microservices need to operate in offline or disconnected environments, caching can provide access to previously fetched data, ensuring continued functionality.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Key Considerations When Implementing Caching in Microservices&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Implementing caching in a microservices architecture requires careful consideration to ensure that the caching strategy aligns with the specific needs and characteristics of the architecture. Here are some key considerations to keep in mind when implementing caching in microservices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Data Volatility and Freshness:&lt;/strong&gt; Evaluate the volatility of your data. Caching might not be suitable for data that changes frequently, as it could lead to serving stale information. Determine whether data can be cached for a certain period or whether it requires real-time updates.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Data Granularity:&lt;/strong&gt; Identify the appropriate level of granularity for caching. Determine whether to cache individual items, aggregated data, or entire responses. Fine-tuning granularity can impact cache hit rates and efficiency.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cache Invalidation:&lt;/strong&gt; Plan how to invalidate cached data when it becomes outdated. Consider strategies such as time-based expiration, manual invalidation, or event-based invalidation triggered by data changes. This is arguably the most challenging part of implementing caching successfully. I recommend giving this careful thought during system design, particularly if you're not very experienced with caching.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cache Eviction Policies:&lt;/strong&gt; Choose appropriate eviction policies to handle cache capacity limitations. Common strategies include Least Recently Used (LRU), Least Frequently Used (LFU), and Time-To-Live (TTL) based eviction.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cache Consistency:&lt;/strong&gt; Assess whether data consistency across microservices is critical. Depending on the use case, you might need to implement cache synchronization mechanisms to ensure data integrity.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cold Start:&lt;/strong&gt; Consider how to handle cache "cold starts" when a cache is empty or invalidated, and a high volume of requests is received simultaneously. Implement fallback mechanisms to gracefully handle such situations. Consider implementing an artificial cache warm-up when starting the service from a "cold" state.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cache Placement:&lt;/strong&gt; Decide where to place the cache – whether it's inside the microservices themselves, at the API gateway, or in a separate caching layer. Each option has its benefits and trade-offs in terms of ease of management and efficiency.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cache Segmentation:&lt;/strong&gt; Segment your cache based on data access patterns. Different microservices might have distinct data access requirements, and segmenting the cache can lead to better cache utilization and hit rates.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cache Key Design:&lt;/strong&gt; Design cache keys thoughtfully to ensure uniqueness and avoid conflicts. Include relevant identifiers that accurately represent the data being cached. Choose keys that are native to the consuming microservices.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cloud-Based Caching Services:&lt;/strong&gt; Evaluate the use of cloud-based caching services, such as &lt;a href="https://aws.amazon.com/elasticache/" rel="noopener noreferrer"&gt;Amazon ElastiCache&lt;/a&gt; or &lt;a href="https://redis.com/redis-enterprise-cloud/overview/" rel="noopener noreferrer"&gt;Redis Cloud&lt;/a&gt;, for managed caching solutions that offer scalability, resilience, and reduced maintenance overhead.&lt;/li&gt;
&lt;/ul&gt;
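&lt;p&gt;To make the eviction-policy consideration concrete, here is a minimal LRU (Least Recently Used) eviction sketch (illustrative; in production you would typically rely on the eviction policies built into Redis or Memcached rather than hand-rolling one):&lt;/p&gt;

```javascript
// A JavaScript Map preserves insertion order, so re-inserting a key on
// every access keeps the least recently used key first in iteration
// order, ready to be evicted when capacity is exceeded.
function createLruCache(capacity) {
  const entries = new Map();
  return {
    get(key) {
      if (!entries.has(key)) return undefined;
      const value = entries.get(key);
      entries.delete(key);      // move the key to the "most recent" end
      entries.set(key, value);
      return value;
    },
    set(key, value) {
      entries.delete(key);
      entries.set(key, value);
      if (entries.size > capacity) {
        // evict the least recently used key (first in iteration order)
        const oldest = entries.keys().next().value;
        entries.delete(oldest);
      }
    },
    has: (key) => entries.has(key),
  };
}

const lru = createLruCache(2);
lru.set('a', 1);
lru.set('b', 2);
lru.get('a');    // touch 'a', so 'b' becomes the least recently used
lru.set('c', 3); // over capacity: 'b' is evicted
```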

&lt;h2&gt;
  
  
  &lt;strong&gt;Overview of Popular Caching Tools&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Redis&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Redis is an open-source data structure store that functions as a database, cache, messaging system, and stream processor. It supports various data structures like strings, hashes, lists, sets, sorted sets with range queries, bitmaps, &lt;a href="https://redis.io/docs/data-types/probabilistic/hyperloglogs/" rel="noopener noreferrer"&gt;hyperloglogs&lt;/a&gt;, geospatial indexes, and streams. Redis offers built-in features such as replication, scripting in Lua, LRU (Least Recently Used) eviction, transactions, and multiple levels of data persistence. Additionally, it ensures high availability through Redis Sentinel and automatic partitioning via Redis Cluster. The below image depicts how Redis is traditionally used.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fhow-to-use-caching-to-improve-microservices-peformance%2F5.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fhow-to-use-caching-to-improve-microservices-peformance%2F5.jpeg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Redis prioritizes speed by utilizing an in-memory dataset. Depending on your needs, Redis can make your data persistent by periodically saving the dataset to disk or logging each command to disk. You also have the option to disable persistence if your requirement is solely a feature-rich, networked, in-memory cache. Redis can be a valuable tool for improving the performance of microservices architectures. It offers fast data retrieval, caching capabilities, and support for various data structures.&lt;/p&gt;

&lt;p&gt;It's important to note that while Redis can significantly enhance microservices performance, it also introduces some considerations, such as &lt;a href="https://www.designgurus.io/blog/cache-invalidation-strategies" rel="noopener noreferrer"&gt;cache invalidation&lt;/a&gt; strategies, data persistence, and memory management. Proper design and careful consideration of your microservices' data access patterns and requirements are crucial for effectively leveraging Redis to improve performance.&lt;/p&gt;

&lt;p&gt;💡Pro Tip: &lt;a href="https://amplication.com/" rel="noopener noreferrer"&gt;Amplication&lt;/a&gt; now offers a &lt;a href="https://github.com/amplication/plugins/tree/master/plugins/cache-redis" rel="noopener noreferrer"&gt;Redis Plugin&lt;/a&gt; that can help you integrate Redis into your microservices more easily than ever before.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Memcached&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Memcached is another popular in-memory caching system that can be used to improve the performance of microservices. Similar to Redis, Memcached is designed to store and retrieve data quickly from memory, making it well-suited for scenarios where fast data access is crucial. It is a fast, distributed memory-object caching system. While it's versatile, its initial purpose was to enhance the speed of dynamic web applications by reducing the workload on databases. It's like a brief memory boost for your applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fhow-to-use-caching-to-improve-microservices-peformance%2F6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fhow-to-use-caching-to-improve-microservices-peformance%2F6.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Memcached can redistribute memory surplus from certain parts of your system to address shortages in other areas. This optimization aims to enhance memory utilization and efficiency.&lt;/p&gt;

&lt;p&gt;Consider the two deployment scenarios depicted in the diagram:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  In the first scenario (top), each node operates independently. However, this approach is inefficient, with the cache size being a fraction of the web farm's actual capacity. It's also labor-intensive to maintain cache consistency across nodes.&lt;/li&gt;
&lt;li&gt;  With Memcached, all servers share a common memory pool (bottom). This ensures that a specific item is consistently stored and retrieved from the same location across the entire web cluster. As demand and data access requirements increase with your application's expansion, this strategy aligns scalability for both server count and data volume.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Though the illustration shows only two web servers for simplicity, this concept holds as the server count grows. For instance, with fifty servers each contributing 64MB, the first scenario still gives each node only its own 64MB cache, whereas the second scenario yields a substantial 3.2GB shared cache. It's essential to note that you don't have to use your web servers' memory for caching. Many users of Memcached choose dedicated machines specifically designed as Memcached servers.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Amplication for building Microservices&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;If you're eager to explore microservices architecture and seeking an excellent entry point, consider &lt;a href="https://amplication.com/" rel="noopener noreferrer"&gt;Amplication&lt;/a&gt;. Amplication is an open-source, user-friendly backend generation platform that helps you craft resilient and scalable microservices applications up to 20x faster. With a large and growing &lt;a href="https://amplication.com/plugins" rel="noopener noreferrer"&gt;library of plugins&lt;/a&gt;, you have the freedom to use exactly the tools and technologies you need for each of your microservices.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;By incorporating caching intelligently, microservices can transcend limitations, reducing latency, relieving database pressure, and scaling with newfound ease. The journey through the nuances of caching strategies unveils its potential to elevate not only response times but also the overall user experience.&lt;/p&gt;

&lt;p&gt;In conclusion, the marriage of microservices and caching isn't just a technological union – it's a gateway to unlocking huge performance gains. As technology continues to evolve, this synergy will undoubtedly remain a cornerstone in the perpetual quest for optimal microservices performance.&lt;/p&gt;

</description>
      <category>caching</category>
      <category>backend</category>
      <category>microservices</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Picking the Perfect Database for Your Microservices</title>
      <dc:creator>Muly Gottlieb</dc:creator>
      <pubDate>Thu, 07 Sep 2023 08:49:36 +0000</pubDate>
      <link>https://forem.com/amplication/picking-the-perfect-database-for-your-microservices-435j</link>
      <guid>https://forem.com/amplication/picking-the-perfect-database-for-your-microservices-435j</guid>
      <description>&lt;p&gt;Microservices have been the go-to application architecture that many software projects have adopted due to the numerous benefits they offer, ranging from:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Service decoupling&lt;/li&gt;
&lt;li&gt; Faster development times&lt;/li&gt;
&lt;li&gt; Faster release times&lt;/li&gt;
&lt;li&gt; Tailored datastores&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Hence, developers can select the right tools and platforms that help deliver the best performance in each specific microservice. One aspect to consider when doing so is eliminating the use of a monolithic data-store architecture in the application. Microservices favour independent service components where each service can run on its own runtime and connect to its own database.&lt;/p&gt;

&lt;p&gt;This means you're encouraged to share data between microservices through their service interfaces rather than having all your microservices use one extensive, shared database, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fpicking-the-perfect-database-for-your-microservices%2F0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fpicking-the-perfect-database-for-your-microservices%2F0.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Figure: A microservices architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;However, this raises the question: how should you pick the correct (distributed) database for each microservice?&lt;/p&gt;

&lt;h1&gt;
  
  
  How do you choose the best database for a microservice?
&lt;/h1&gt;

&lt;p&gt;To answer this question, you need to understand that different types of databases are made to cater to different purposes and requirements.&lt;/p&gt;

&lt;p&gt;Therefore, you must consider factors such as performance, reliability, and data modelling requirements in your decision-making process to ensure that you select the correct database.&lt;/p&gt;

&lt;h2&gt;
  
  
  The CAP Theorem for Distributed Databases
&lt;/h2&gt;

&lt;p&gt;It's important to understand that when selecting a database, you must consider its Consistency, Availability, and (network) Partition tolerance capability.&lt;/p&gt;

&lt;p&gt;This is also known as the &lt;a href="https://www.geeksforgeeks.org/the-cap-theorem-in-dbms/" rel="noopener noreferrer"&gt;CAP Theorem&lt;/a&gt;, and it's vital to be aware that there are tradeoffs in database design where one of these factors will always be impacted by the other two. In a nutshell, the CAP theorem proposed that any database in a distributed system can have some combination of the following properties:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;(Sequential) Consistency&lt;/strong&gt;: Distributed Databases that satisfy this property will always return the same data (latest committed data) from all DB nodes/shards, which means that all your DB clients will get the latest data regardless of the node they query.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Availability&lt;/strong&gt;: Distributed Databases that satisfy this property guarantee to always respond to read and write requests in a timely manner from every reachable node.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;(Network) Partition Tolerance&lt;/strong&gt;: Distributed Databases that satisfy this property guarantee to function even if there is a network disconnection between the DB nodes (which partitions the DB nodes into two or more network partitions).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These three factors make up modern distributed databases, but the CAP Theorem states that &lt;strong&gt;no database can satisfy all three characteristics.&lt;/strong&gt; Any database implementation can choose two of those characteristics at the expense of the third.&lt;/p&gt;

&lt;p&gt;Distributed Databases therefore fall into one of the following combinations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; CA (Consistency + Availability): Your database can serve the most recent data from all the nodes while remaining highly available.&lt;/li&gt;
&lt;li&gt; CP (Consistency + Partition Tolerance): Your database can serve the most recent data from all the nodes with a high resilience to network errors.&lt;/li&gt;
&lt;li&gt; AP (Availability + Partition Tolerance): Your database nodes always respond in a timely manner, even in the face of network failures. However, they don't guarantee returning the last updated data from every node. These databases adopt a principle known as "Eventual Consistency," where the data is replicated eventually rather than instantly (eventual consistency is a weaker form of consistency than the sequential consistency that is the "C" in the CAP Theorem).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, it's essential to understand the CAP theorem before selecting a database. The table below showcases some popular distributed databases according to their "CAP Theorem preference".&lt;/p&gt;

&lt;p&gt;By evaluating your non-functional requirements, you can use this as a guide to understanding the direction you need to look at.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fpicking-the-perfect-database-for-your-microservices%2F1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fpicking-the-perfect-database-for-your-microservices%2F1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Figure: CAP Theorem preferences in popular databases&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Database vs. Service Requirements&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I've touched on this above, but apart from the CAP Theorem, it's essential to understand that selecting the correct database for your microservice ultimately depends on your service requirements. This approach is known as polyglot persistence: utilizing different databases for different services depending on the requirements of each service.&lt;/p&gt;

&lt;p&gt;For example, your microservice might be read or write-intensive, need rapid scaling, or simply high durability. Therefore, it's essential to understand your requirements clearly before deciding on a database.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance (Read/Write) Requirements
&lt;/h3&gt;

&lt;p&gt;The first aspect you may need to look at is performance.&lt;/p&gt;

&lt;p&gt;If you're building a microservice that needs to be high-performing, you'll likely need a database that can meet that exact demand.&lt;/p&gt;

&lt;p&gt;For example, suppose you're building your microservice using an API Gateway and AWS Lambda. In that case, your service can scale almost without bound, so you'll need a database that can scale as your Lambda functions scale. If you fail to do so, you'll create a bottleneck at the database level, which could lead to inter-service latencies and timeout errors as your system fails to scale.&lt;/p&gt;

&lt;p&gt;So, in such cases, it's essential to consider the number of IOPS (Input/Output Operations Per Second) your service will process. &lt;a href="https://www.linkedin.com/pulse/database-selection-considerations-microservices-kapil-kumar-gupta/" rel="noopener noreferrer"&gt;Here are some typical numbers&lt;/a&gt; for operations per second:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Very high — Greater than one million IOPS&lt;/li&gt;
&lt;li&gt;  High — Between 500,000 and one million IOPS&lt;/li&gt;
&lt;li&gt;  Moderate — Between 10,000 and 500,000 IOPS&lt;/li&gt;
&lt;li&gt;  Low — Less than 10,000 IOPS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, it's essential to consider the IOPS you'll be processing in your service before picking a database.&lt;/p&gt;
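&lt;p&gt;As a rough illustration, the tiers above can be captured in a small helper that classifies a service's expected workload. This is a minimal sketch; the function name is made up for the example, and the thresholds simply mirror the list above:&lt;/p&gt;

```python
def iops_tier(iops: int) -> str:
    """Classify a service's IOPS requirement using the tiers listed above."""
    if iops > 1_000_000:
        return "very high"
    if iops >= 500_000:
        return "high"
    if iops >= 10_000:
        return "moderate"
    return "low"

# e.g. a product-catalog read service expecting ~50k reads/sec:
tier = iops_tier(50_000)
```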

&lt;h3&gt;
  
  
  Latency Requirements
&lt;/h3&gt;

&lt;p&gt;The next requirement to look at is latency. Latency refers to the delay incurred when serving a read/write request.&lt;/p&gt;

&lt;p&gt;For latency, the typical numbers are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Low — Less than one millisecond&lt;/li&gt;
&lt;li&gt;  Moderate — 1 to 10 milliseconds&lt;/li&gt;
&lt;li&gt;  High — Greater than 10 milliseconds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're building microservices that need instant communication, you'll likely need to adopt a low-latency database.&lt;/p&gt;

&lt;p&gt;For example, let's say you're modelling a Search Service:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fpicking-the-perfect-database-for-your-microservices%2F2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fpicking-the-perfect-database-for-your-microservices%2F2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Figure: A Product Searching Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ideally, a search operation cannot take more than a few seconds, regardless of the payload. Therefore, in such cases, you'll need to pick a database that supports delivering responses in the defined period.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Modelling Requirements
&lt;/h3&gt;

&lt;p&gt;One of the most significant advantages of choosing microservices over a monolith is that developers get to define different data models for different services. A typical microservices architecture may consist of data models comprising key-value, graph, time-series, JSON, streams, search engines, and more.&lt;/p&gt;

&lt;p&gt;For example, if you were modelling an e-commerce app with microservices, you could have a data requirement as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fpicking-the-perfect-database-for-your-microservices%2F3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fpicking-the-perfect-database-for-your-microservices%2F3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Figure: Metric requirement for services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some of your services would need very high read performance with low latency, while others can tolerate a moderate level of latency.&lt;/p&gt;

&lt;p&gt;Each of these services could have a data model as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fpicking-the-perfect-database-for-your-microservices%2F4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fpicking-the-perfect-database-for-your-microservices%2F4.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Figure: Modelling microservice data structures&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For example, DynamoDB is a strong candidate for the Cache Server, as that service requires very high read performance with sub-millisecond latency and high write performance.&lt;/p&gt;

&lt;p&gt;You should formalize the performance requirements for your microservices in terms of acceptable latency and IOPS to ensure you're selecting the correct database for your microservice.&lt;/p&gt;

&lt;h1&gt;
  
  
  What are the tips for choosing the correct database for a microservice?
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Tip #1 - Consider the CAP Theorem
&lt;/h2&gt;

&lt;p&gt;When you pick a database, look into how it works and identify where it sits in the CAP trade-off space. Proceed with that database only if its position matches your service's needs, as there will always be trade-offs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tip #2 - Gather all requirements upfront
&lt;/h2&gt;

&lt;p&gt;It's essential to understand the requirements of your microservice before you pick a database for it. If your microservice is write-heavy but not read-heavy, you could consider utilizing two databases (one for reading, one for writing) and communicating with them using Eventual Consistency and the &lt;a href="https://learn.microsoft.com/en-us/azure/architecture/patterns/cqrs" rel="noopener noreferrer"&gt;CQRS (Command Query Responsibility Segregation) pattern&lt;/a&gt;.&lt;/p&gt;
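&lt;p&gt;The read/write split described above can be sketched as follows. This is a minimal, in-memory illustration of CQRS with eventual consistency, not a production pattern; the store and event names are illustrative stand-ins for a durable write database, a fast read store, and a message queue:&lt;/p&gt;

```python
# Commands mutate a write store and emit events; a projector applies those
# events to a separate, read-optimized store (the "query" side of CQRS).

write_store = {}  # source of truth (e.g. a durable SQL database)
read_store = {}   # denormalized view (e.g. a fast key-value store)
event_log = []    # events pending projection (e.g. a message queue)

def handle_create_order(order_id, total):
    """Command side: validate, persist, and emit an event."""
    write_store[order_id] = {"total": total}
    event_log.append(("OrderCreated", order_id, total))

def project_events():
    """Query side: apply pending events to the read model."""
    while event_log:
        kind, order_id, total = event_log.pop(0)
        if kind == "OrderCreated":
            read_store[order_id] = {"total": total, "status": "new"}

handle_create_order("o-1", 42.0)
project_events()  # after projection, the read model reflects the write
```

&lt;p&gt;In a real system the projector consumes events asynchronously, so the read model lags the write model briefly; that lag is exactly the eventual-consistency trade-off mentioned above.&lt;/p&gt;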

&lt;p&gt;Apart from that, gain insight into the acceptable latency and IOPS your database will need to support.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tip #3 - Use Amplication 😁 💜
&lt;/h2&gt;

&lt;p&gt;Consider using tools like &lt;a href="https://www.amplication.com/" rel="noopener noreferrer"&gt;Amplication&lt;/a&gt; to build your microservices. Amplication lets you bootstrap and build microservices in just a few clicks while letting you select a specific database, such as PostgreSQL, MySQL, or MongoDB, for each service, depending on its requirements. Swapping one database for another takes just four clicks, so you can experiment with different databases quickly, which can be a game changer when evaluating several candidates per service until you find the most suitable one.&lt;/p&gt;

&lt;p&gt;Pro Tip 💡 - Database implementations in Amplication come in the form of a &lt;a href="https://docs.amplication.com/getting-started/plugins/" rel="noopener noreferrer"&gt;plugin&lt;/a&gt;, and you can easily &lt;a href="https://docs.amplication.com/plugins/how-to-create-plugin/" rel="noopener noreferrer"&gt;write your own&lt;/a&gt; plugins for other databases if you wish to experiment even more.&lt;/p&gt;

&lt;h1&gt;
  
  
  Wrapping up
&lt;/h1&gt;

&lt;p&gt;Microservices offer a significant advantage over monoliths thanks to their support for loosely coupled services, where each service can be developed, tested, and maintained in isolation while using the separate datastore most suitable for it.&lt;/p&gt;

&lt;p&gt;Hence, it's essential to understand how to pick the most suitable database for each microservice. You need to dive into aspects like IOPS, Latency, and Data Modeling and gain a strong understanding of the CAP Theorem to ensure that you pick the correct database. You should strive to build your services using architectures and platforms that will allow you to easily swap databases in the future.&lt;/p&gt;

&lt;p&gt;By doing so, you're on the right path to building highly scalable and high-performing microservices that can serve requests at optimal capacity.&lt;/p&gt;

&lt;h1&gt;
  
  
  FAQ
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Can microservices use multiple databases?
&lt;/h2&gt;

&lt;p&gt;Yes, you are highly encouraged to use separate databases for your microservices as this helps break down the monolith data store and lets you independently scale your database services up and down based on your requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Can microservices use SQL databases?
&lt;/h2&gt;

&lt;p&gt;You can choose among SQL, key-value, document, graph, and other databases for your microservice; it depends on your requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Should I use a relational or a NoSQL database for my microservice?
&lt;/h2&gt;

&lt;p&gt;There is no "one size fits all" and no silver bullet. It depends on the requirements that you wish to satisfy. Consider using a normalized relational database if consistency is more important than performance. If performance is important, consider using a NoSQL database.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the trade-offs between using a single database for all microservices and multiple databases?
&lt;/h2&gt;

&lt;p&gt;With a single database for all of your microservices, it's challenging to scale parts of your database independently. And, sometimes, different services might have different access patterns and need different data models, which cannot be accommodated if all your microservices share a single database.&lt;/p&gt;

</description>
      <category>database</category>
      <category>backend</category>
      <category>microservices</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Serving Frontends in Microservices Architecture</title>
      <dc:creator>Muly Gottlieb</dc:creator>
      <pubDate>Wed, 30 Aug 2023 08:20:14 +0000</pubDate>
      <link>https://forem.com/amplication/serving-frontends-in-microservices-architecture-4p61</link>
      <guid>https://forem.com/amplication/serving-frontends-in-microservices-architecture-4p61</guid>
      <description>&lt;p&gt;The microservices architecture has emerged as a dominant paradigm in the software development landscape. While much attention has been given to the backend components, the frontend - which serves as the user's gateway to the application - is equally crucial. This article aims to explore the challenges and solutions associated with serving frontends in a microservices environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Storing and Serving Frontend Assets
&lt;/h2&gt;

&lt;p&gt;In traditional monolithic applications, frontend assets such as HTML, JavaScript, and CSS were bundled and served from a single server. However, the distributed nature of microservices necessitates a different approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud Storage&lt;/strong&gt;: Solutions like AWS S3, Google Cloud Storage, and Azure Blob Storage have become the backbone for storing frontend assets in a microservices architecture. These platforms offer high availability, redundancy, and scalability. For instance, consider a global e-commerce platform with distinct microservices for product listings, user profiles, and checkout processes. Each of these could have its frontend assets stored in separate cloud storage buckets, ensuring modularity and ease of management.&lt;/p&gt;
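&lt;p&gt;One common convention when publishing per-service assets to such buckets is to content-hash each file into its object key, so every service writes to its own prefix and a new deploy never clashes with a cached old version. A minimal sketch, with illustrative service and file names:&lt;/p&gt;

```python
import hashlib

def asset_key(service: str, filename: str, content: bytes) -> str:
    """Return a key like 'checkout/main.ab12cd34.js' for cache-busted uploads."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, dot, ext = filename.rpartition(".")
    if not dot:  # filename has no extension
        return f"{service}/{filename}.{digest}"
    return f"{service}/{stem}.{digest}.{ext}"

# Each microservice publishes under its own prefix in the bucket:
key = asset_key("checkout", "main.js", b"console.log('checkout loaded')")
```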

&lt;p&gt;&lt;strong&gt;CDN Integration&lt;/strong&gt;: A Content Delivery Network (CDN) is essential for global applications. CDNs ensure that users worldwide receive data from the nearest point by caching assets in multiple geographical locations, reducing latency. Platforms like Cloudflare, Akamai, and AWS CloudFront have become industry standards. For instance, a user in London accessing a US-based service will retrieve assets from a European server, ensuring faster load times and a smoother user experience. See below for further elaboration regarding CDNs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Considerations
&lt;/h2&gt;

&lt;p&gt;The public nature of frontend assets brings forth unique security challenges.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Headers&lt;/strong&gt;: Implementing security headers, such as Content Security Policy (CSP), can significantly reduce risks associated with cross-site scripting (XSS) attacks. A well-configured CSP ensures that only whitelisted sources can run scripts, thereby preventing potential malicious injections.&lt;/p&gt;
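&lt;p&gt;As a rough illustration, a CSP header value can be assembled from a whitelist like this. The directives and origins shown are examples, not a recommended policy for any particular application:&lt;/p&gt;

```python
def build_csp(script_sources, style_sources):
    """Compose a Content-Security-Policy value that only allows whitelisted sources."""
    directives = [
        "default-src 'self'",
        "script-src " + " ".join(["'self'"] + list(script_sources)),
        "style-src " + " ".join(["'self'"] + list(style_sources)),
        "object-src 'none'",  # block plugins entirely
    ]
    return "; ".join(directives)

# Attach the result to responses, e.g.:
# response.headers["Content-Security-Policy"] = header
header = build_csp(["https://cdn.example.com"], ["https://fonts.googleapis.com"])
```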

&lt;p&gt;&lt;strong&gt;Sensitive Information&lt;/strong&gt;: It's not uncommon for developers to inadvertently leave sensitive information, such as API keys or debug logs, within frontend code. Regular audits, both manual and automated, are essential to ensure that such data is stripped out before deployment. To automate this process, tools like SonarQube or ESLint can be integrated into CI/CD pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Leveraging CDNs for Faster Content Delivery
&lt;/h2&gt;

&lt;p&gt;The role of CDNs in a microservices setup extends beyond just caching.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Global Reach&lt;/strong&gt;: CDNs play a pivotal role in ensuring consistent user experience for applications with a global user base. They achieve this by replicating your frontend assets across global edge locations and directing user requests to the nearest edge location, reducing the round-trip data retrieval time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cache-Control&lt;/strong&gt;: Modern platforms like Netlify and Vercel offer developers granular control over caching policies. This ensures that users always access the most recent version of assets while also benefiting from caching's speed advantages.&lt;/p&gt;
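&lt;p&gt;A typical policy is to cache content-hashed bundles for a long time while forcing HTML entry points to revalidate, so users pick up new deploys immediately. The sketch below encodes that rule using a simple naming heuristic; the values are illustrative defaults, not a universal recommendation:&lt;/p&gt;

```python
def cache_control(path: str) -> str:
    """Return a Cache-Control value based on the asset's naming convention."""
    if path == "/" or path.endswith(".html"):
        return "no-cache"  # always revalidate the entry point
    if path.count(".") >= 2:  # e.g. main.ab12cd34.js (content-hashed)
        return "public, max-age=31536000, immutable"
    return "public, max-age=3600"  # unhashed assets: cache briefly
```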

&lt;h2&gt;
  
  
  Handling CORS and Preflight Requests
&lt;/h2&gt;

&lt;p&gt;Cross-Origin Resource Sharing (CORS) is a security mechanism implemented by web browsers: a page may only make requests to an origin other than its own if that origin explicitly allows it. This can pose challenges in a microservices setup where services might reside on different domains or subdomains.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gateway Implementation&lt;/strong&gt;: Employing a gateway, such as AWS API Gateway or Kong, can centralize and manage CORS policies. This ensures that all microservices adhere to a consistent set of CORS rules, simplifying maintenance and troubleshooting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CDN as a Gateway&lt;/strong&gt;: Some CDNs offer advanced features that allow them to function as gateways. This means they can handle CORS headers and also pass through API requests, offering a unified solution.&lt;/p&gt;
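&lt;p&gt;The centralized policy described above boils down to answering the browser's OPTIONS preflight consistently for every service. A minimal sketch of how a gateway-level check might look; the allowed origins and methods are illustrative configuration, not a real gateway's API:&lt;/p&gt;

```python
ALLOWED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}
ALLOWED_METHODS = "GET, POST, PUT, DELETE"

def preflight_headers(origin: str) -> dict:
    """Answer an OPTIONS preflight: echo the origin only if it is whitelisted."""
    if origin not in ALLOWED_ORIGINS:
        return {}  # no CORS headers; the browser blocks the cross-origin call
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": ALLOWED_METHODS,
        "Access-Control-Allow-Headers": "Content-Type, Authorization",
        "Access-Control-Max-Age": "86400",  # let browsers cache the preflight
    }
```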

&lt;h2&gt;
  
  
  Architectural Alternatives
&lt;/h2&gt;

&lt;p&gt;The microservices architecture offers flexibility in how frontends are served:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Single Service for Frontend&lt;/strong&gt;: All frontend assets are served from a single service. This approach simplifies deployment and management but can become a bottleneck in large-scale applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Micro-frontends&lt;/strong&gt;: This approach aligns the frontend architecture with microservices. Each microservice has its corresponding frontend, allowing for modular development and deployment. For instance, in a modular e-commerce platform, the product listing page, shopping cart, and user profile could each be a separate micro-frontend, developed and deployed independently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;BFF (Backend For Frontend)&lt;/strong&gt;: This pattern introduces an intermediary service layer that sits between the frontend and multiple backend services. The BFF aggregates and transforms data from various backend services, optimizing it for frontend consumption.&lt;/p&gt;
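&lt;p&gt;The aggregation step of a BFF can be sketched as follows: one endpoint calls several backends and reshapes the combined result into exactly what one frontend view needs. The two fetchers below stand in for real service clients, and all names and fields are illustrative:&lt;/p&gt;

```python
def fetch_user(user_id):
    """Stand-in for a call to the user-profile microservice."""
    return {"id": user_id, "name": "Ada", "locale": "en-GB"}

def fetch_orders(user_id):
    """Stand-in for a call to the orders microservice."""
    return [{"id": "o-1", "total": 42.0}, {"id": "o-2", "total": 7.5}]

def profile_page_bff(user_id):
    """Aggregate user and order data into the shape the profile page needs."""
    user = fetch_user(user_id)
    orders = fetch_orders(user_id)
    return {
        "displayName": user["name"],
        "orderCount": len(orders),
        "totalSpent": sum(o["total"] for o in orders),
    }
```

&lt;p&gt;The frontend then makes one request to the BFF instead of two (or more) to the individual services, which also keeps backend response shapes out of the browser.&lt;/p&gt;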

&lt;h2&gt;
  
  
  Release Pipelines
&lt;/h2&gt;

&lt;p&gt;The deployment of frontend assets often differs from backend services, especially in a microservices setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CI/CD Integration&lt;/strong&gt;: Continuous Integration and Continuous Deployment (CI/CD) tools like GitHub Actions, Jenkins, Travis CI, and GitLab CI can automate the build, test, and deployment processes. For instance, a new feature developed for a micro-frontend can be automatically tested and, if tests pass, deployed to the production environment (e.g. the CDN) without manual intervention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unified Deployments&lt;/strong&gt;: In scenarios where the frontend and backend are tightly coupled, deploying them simultaneously ensures consistency across the application. This is especially crucial when a new feature or change spans both the frontend and backend.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tools and Platforms for Frontend Hosting
&lt;/h2&gt;

&lt;p&gt;Several platforms cater specifically to frontend hosting in a microservices environment:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Netlify&lt;/strong&gt;: Renowned for its simplicity, Netlify offers atomic deploys, instant cache invalidation, and integrated CI/CD, making it a favorite among developers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vercel&lt;/strong&gt;: With a focus on frontend frameworks like React and Next.js, Vercel provides out-of-the-box optimizations, ensuring blazing-fast load times.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS CloudFront/S3&lt;/strong&gt;: This combination is a powerhouse for hosting static assets. With S3 providing reliable storage and CloudFront ensuring global content delivery, it's a robust solution for large-scale applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;One promising platform that you should check out is &lt;a href="https://amplication.com"&gt;Amplication&lt;/a&gt;. Amplication focuses on automating the development of business applications and supports various layers, including the backend services and the API layer. By integrating Amplication into the development pipeline, organizations can generate robust, well-designed backend services that effortlessly connect with their modular frontends.&lt;/p&gt;

&lt;p&gt;Serving frontends in a microservices architecture is a complex yet rewarding endeavor. Developers can create scalable, resilient, and user-friendly applications by understanding the challenges and leveraging the right strategies and tools. As the world of software development continues to evolve, staying up-to-date with these practices will be paramount for professionals aiming to deliver excellence in the realm of microservices.&lt;/p&gt;

</description>
      <category>frontend</category>
      <category>backend</category>
      <category>microservices</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
