<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Alfonso Domenech</title>
    <description>The latest articles on Forem by Alfonso Domenech (@aldorea).</description>
    <link>https://forem.com/aldorea</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F281655%2Fe85df64c-197a-410a-8a36-36a1eeaaa172.png</url>
      <title>Forem: Alfonso Domenech</title>
      <link>https://forem.com/aldorea</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/aldorea"/>
    <language>en</language>
    <item>
      <title>Build Faster: Your Guide to a Quick-Start Project Template</title>
      <dc:creator>Alfonso Domenech</dc:creator>
      <pubDate>Mon, 18 Dec 2023 14:20:47 +0000</pubDate>
      <link>https://forem.com/one-beyond/build-faster-your-guide-to-a-quick-start-project-template-2o0p</link>
      <guid>https://forem.com/one-beyond/build-faster-your-guide-to-a-quick-start-project-template-2o0p</guid>
      <description>&lt;p&gt;Every time I've wanted to embark on a personal project - to learn a new technology or develop an idea - I've found myself redoing configurations for code structure and maintenance, not to mention execution and deployment. It can be frustrating, but I've come to realize that these steps are crucial for success. That's why I'm sharing a minimal template with the most common configurations - the essentials that I believe will help you achieve your goals. We are going to introduce and configure the following technologies.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NestJS&lt;/li&gt;
&lt;li&gt;NVM&lt;/li&gt;
&lt;li&gt;Commitlint&lt;/li&gt;
&lt;li&gt;Husky&lt;/li&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;Docker Compose&lt;/li&gt;
&lt;li&gt;GitHub Actions&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want to take a look at it while you read, you can check out the official &lt;a href="https://github.com/aldorea/nestjs-source-template" rel="noopener noreferrer"&gt;repo&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;NestJS&lt;/h3&gt;

&lt;p&gt;NestJS is our backend superhero in the template repo: a framework for building solid server-side applications in TypeScript or JavaScript with minimal effort. Check out more in the &lt;a href="https://docs.nestjs.com/" rel="noopener noreferrer"&gt;official NestJS documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;Creating the .nvmrc file&lt;/h3&gt;

&lt;p&gt;At this point, the whole team must work with the same version of Node.js, so that every change made to the application is built and run against the same runtime. This can also be achieved with Docker, as we will see later.&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;.nvmrc&lt;/strong&gt; file we have to put the version of Node.js we want to use.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

20


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then we have to install &lt;a href="https://github.com/nvm-sh/nvm" rel="noopener noreferrer"&gt;nvm&lt;/a&gt; (Node Version Manager) itself.&lt;/p&gt;
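&lt;p&gt;With the &lt;code&gt;.nvmrc&lt;/code&gt; file in place, anyone on the team can switch to the pinned version with two commands (a sketch; it assumes nvm is already installed in your shell):&lt;/p&gt;

```shell
# Run from the project root; nvm reads the version from .nvmrc automatically
nvm install    # download and install the Node.js version pinned in .nvmrc
nvm use        # switch the current shell session to that version
node --version # should now print the pinned major version
```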

&lt;h3&gt;Configure commitlint&lt;/h3&gt;

&lt;p&gt;If you've worked on projects with a team, you may have noticed that everyone creates different branches and commits changes with random messages. This can make things confusing. That's where commitlint comes in. It helps make sure that commit messages are more consistent and easier to understand. &lt;/p&gt;

&lt;p&gt;💡 A consistent commit convention makes the history easy to scan, and it enables tooling: changelogs can be generated automatically and version bumps can be derived from the commit types. A conventional message has the shape &lt;code&gt;type(scope): subject&lt;/code&gt;, for example &lt;code&gt;feat(auth): add login endpoint&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now we have to set up commitlint in our project following the instructions of the &lt;a href="https://commitlint.js.org/#/guides-local-setup" rel="noopener noreferrer"&gt;commitlint&lt;/a&gt; library.&lt;/p&gt;

&lt;p&gt;First, we must install the library and the convention we want to follow. In our case, we are going to use &lt;a href="https://www.conventionalcommits.org/en/v1.0.0/" rel="noopener noreferrer"&gt;conventional commits&lt;/a&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--save-dev&lt;/span&gt; @commitlint/cli @commitlint/config-conventional


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then we have to configure commitlint to use the conventional config. Let's create our &lt;code&gt;commitlint.config.js&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;extends&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@commitlint/config-conventional&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
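&lt;p&gt;To get a feel for what the convention accepts, a commit header can be checked against a simplified pattern like the one below (a rough sketch of the rules, not the full commitlint rule set):&lt;/p&gt;

```shell
# Simplified check of a conventional-commit header: type(scope)?: subject
msg="feat(api): add health check endpoint"
if echo "$msg" | grep -Eq '^(build|chore|ci|docs|feat|fix|perf|refactor|revert|style|test)(\([a-z0-9-]+\))?!?: .+'; then
  echo "valid"
else
  echo "invalid"
fi
```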
&lt;h3&gt;Set up Husky&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://typicode.github.io/husky/getting-started.html#automatic-recommended" rel="noopener noreferrer"&gt;Husky&lt;/a&gt; is a library which allows developers to execute some commands in the different &lt;a href="https://git-scm.com/docs/githooks" rel="noopener noreferrer"&gt;Git hooks&lt;/a&gt;. Why do we want this? As we said we need to ensure a little bit of homogeneity in our project with husky so we can achieve this. Let´s see how.&lt;br&gt;
So, there's this neat library called husky that developers can use. It helps you run specific commands in various Git hooks(&lt;a href="https://git-scm.com/docs/githooks" rel="noopener noreferrer"&gt;https://git-scm.com/docs/githooks&lt;/a&gt;), which is a fancy way of saying it keeps your project looking consistent. Why is that a good thing? Well, it makes everything easier to understand! Want to know more? Let me break it down.&lt;/p&gt;

&lt;p&gt;First, we need to install Husky in our project. For this, we are going to follow the &lt;a href="https://typicode.github.io/husky/getting-started.html#automatic-recommended" rel="noopener noreferrer"&gt;automatic installation&lt;/a&gt;, which is the recommended approach.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

npx husky-init &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm &lt;span class="nb"&gt;install&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This command performs the following actions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add the &lt;code&gt;prepare&lt;/code&gt; script to &lt;code&gt;package.json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Create a sample &lt;code&gt;pre-commit&lt;/code&gt; hook that you can edit (by default, &lt;code&gt;npm test&lt;/code&gt; will run when you commit).&lt;/li&gt;
&lt;li&gt;Configure Git hooks path.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you prefer a manual installation, or want to customize the Husky configuration, you can read the corresponding &lt;a href="https://typicode.github.io/husky/getting-started.html#manual" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now we have to configure Husky to execute some commands on the Git &lt;a href="https://git-scm.com/docs/githooks" rel="noopener noreferrer"&gt;hooks&lt;/a&gt;. We are going to add the three that, from my point of view, are the most important.&lt;/p&gt;

&lt;p&gt;To add a new hook, we follow this structure:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

npx husky add .husky/&lt;span class="o"&gt;{&lt;/span&gt;gitHook&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="s2"&gt;"{command}"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This command creates shell scripts in the &lt;code&gt;.husky&lt;/code&gt; directory, each named after the corresponding Git hook.&lt;/p&gt;

&lt;p&gt;The first one is the &lt;strong&gt;commit-msg&lt;/strong&gt; hook. Here we are going to ensure that our commit messages follow our commitlint configuration.&lt;/p&gt;

&lt;p&gt;So let's add our new hook, following the command structure mentioned above.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

npx husky add .husky/commit-msg  &lt;span class="s1"&gt;'npx --no -- commitlint --edit ${1}'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The second one is the &lt;strong&gt;pre-commit&lt;/strong&gt; hook. With this hook, we lint and format the files we want to include in our commit. This action is executed before the commit is created.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

npx husky add .husky/pre-commit  &lt;span class="s1"&gt;'npm run lint &amp;amp;&amp;amp; npm run format'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Last but not least is the &lt;strong&gt;pre-push&lt;/strong&gt; hook. This hook executes our tests before pushing our changes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

npx husky add .husky/pre-push  &lt;span class="s1"&gt;'npm run test'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
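&lt;p&gt;For reference, each of these commands just writes a small shell script into &lt;code&gt;.husky&lt;/code&gt;. The generated &lt;code&gt;commit-msg&lt;/code&gt; file, for example, looks roughly like this (the exact header lines depend on your Husky version):&lt;/p&gt;

```shell
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

npx --no -- commitlint --edit ${1}
```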
&lt;h3&gt;Docker&lt;/h3&gt;

&lt;p&gt;Now we want to create the development environment for our NestJS project. In this part of the article we are going to see how to include Docker and Docker Compose.&lt;/p&gt;

&lt;p&gt;First, we need to configure our Docker image by creating our &lt;code&gt;Dockerfile&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Our Dockerfile employs multi-stage builds, which let a single file serve both local development and production. This approach offers a range of advantages: dependencies are installed and the app is compiled in earlier stages, and only the essentials are kept in the final image, which keeps the production image small. To learn more about multi-stage builds you can check the &lt;a href="https://docs.docker.com/build/building/multi-stage/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;

&lt;span class="c"&gt;# BUILD FOR LOCAL DEVELOPMENT&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:20-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;As&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;development&lt;/span&gt;

&lt;span class="c"&gt;# Create app directory&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app&lt;/span&gt;

&lt;span class="c"&gt;# Copy application dependency manifests to the container image.&lt;/span&gt;
&lt;span class="c"&gt;# A wildcard is used to ensure copying both package.json AND package-lock.json (when available).&lt;/span&gt;
&lt;span class="c"&gt;# Copying this first prevents re-running npm install on every code change.&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --chown=node:node package*.json ./&lt;/span&gt;

&lt;span class="c"&gt;# Install app dependencies using the `npm ci` command instead of `npm install`&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci

&lt;span class="c"&gt;# Bundle app source&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --chown=node:node . .&lt;/span&gt;

&lt;span class="c"&gt;# Use the node user from the image (instead of the root user)&lt;/span&gt;
&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; node&lt;/span&gt;

&lt;span class="c"&gt;# BUILD FOR PRODUCTION&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:20-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;As&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --chown=node:node package*.json ./&lt;/span&gt;

&lt;span class="c"&gt;# To run `npm run build` we need access to the Nest CLI which is a dev dependency. In the previous development stage we ran `npm ci` which installed all dependencies, so we can copy over the node_modules directory from the development image&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --chown=node:node --from=development /usr/src/app/node_modules ./node_modules&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --chown=node:node . .&lt;/span&gt;

&lt;span class="c"&gt;# Run the build command which creates the production bundle&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build

&lt;span class="c"&gt;# Set NODE_ENV environment variable&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; NODE_ENV production&lt;/span&gt;

&lt;span class="c"&gt;# Remove husky from the production build and install the production dependencies&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm pkg delete scripts.prepare &lt;span class="se"&gt;\
&lt;/span&gt;    npm ci &lt;span class="nt"&gt;--omit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dev

&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; node&lt;/span&gt;

&lt;span class="c"&gt;# PRODUCTION&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:20-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;As&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;production&lt;/span&gt;

&lt;span class="c"&gt;# Copy the bundled code from the build stage to the production image&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --chown=node:node --from=build /usr/src/app/node_modules ./node_modules&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --chown=node:node --from=build /usr/src/app/dist ./dist&lt;/span&gt;

&lt;span class="c"&gt;# Start the server using the production build&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; [ "node", "dist/main.js" ]&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
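&lt;p&gt;One assumption worth making explicit: since both stages run &lt;code&gt;COPY . .&lt;/code&gt;, a &lt;code&gt;.dockerignore&lt;/code&gt; file at the project root keeps the local &lt;code&gt;node_modules&lt;/code&gt; and other noise out of the build context. A minimal sketch:&lt;/p&gt;

```
node_modules
dist
.git
.env
npm-debug.log
```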
&lt;h3&gt;Docker Compose configuration&lt;/h3&gt;

&lt;p&gt;Docker Compose is our dev sidekick, making local coding a cakewalk. One config file and that's it! No more "it works on my machine" drama. Our &lt;code&gt;docker-compose&lt;/code&gt; file is located at the root level of our project.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;api&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
      &lt;span class="c1"&gt;# Only will build development stage from our dockerfile&lt;/span&gt;
      &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;development&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.:/usr/src/app&lt;/span&gt;
    &lt;span class="na"&gt;env_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.env&lt;/span&gt;
    &lt;span class="c1"&gt;# Run a command against the development stage of the image&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm run start:dev&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;3000:3000&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now it's a piece of cake to develop our app with this container setup: the volume keeps our local code in sync with the container, and the dev server restarts on every change.&lt;/p&gt;
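&lt;p&gt;With the file above in place, the day-to-day loop looks like this (assuming a recent Docker installation with the Compose plugin):&lt;/p&gt;

```shell
docker compose up --build   # build the development stage and start the API on port 3000
docker compose down         # stop and remove the containers when you are done
```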

&lt;h3&gt;GitHub Actions&lt;/h3&gt;

&lt;p&gt;Continuous Integration (CI) is a software development practice that allows developers to automatically build, test, and validate their code changes in a centralized and consistent environment. GitHub Actions is a powerful CI/CD (Continuous Integration/Continuous Deployment) platform integrated directly into GitHub repositories, enabling developers to automate their workflows seamlessly. Check out their &lt;a href="https://docs.github.com/en/actions" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; to learn more!&lt;/p&gt;

&lt;p&gt;In this section, I'll guide you through the process of setting up a basic CI pipeline using GitHub Actions. This pipeline will automatically build your Docker image and push it to your Docker Hub account when a Pull Request is merged.&lt;/p&gt;

&lt;p&gt;Great! So, to push our Docker images, we need to authenticate our GitHub Action. Don't worry, it's quite simple: create a Docker Hub access token, then store it together with your username as secrets in your GitHub repository settings. You can see how to do this in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4lzsrs87twbgejviooe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4lzsrs87twbgejviooe.png" alt="GitHub Privacy settings"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Awesome! Let's create our workflow now. We need to place it in the repository at &lt;strong&gt;.github/workflows/build.yml&lt;/strong&gt;. This file defines our CI steps, and I'll walk you through them so you can easily follow along!&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="c1"&gt;# This workflow is triggered when a pull request is closed (merged or closed without merging) into the main branch.&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;main&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;closed&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;#This defines a job named "build" that runs on the latest version of the Ubuntu environment.&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build Docker image&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="c1"&gt;# This step checks out your Git repository content&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="c1"&gt;# Uses the docker/login-action to log in to Docker Hub using the provided username and token.&lt;/span&gt;
        &lt;span class="c1"&gt;# The credentials are stored as secrets (DOCKERHUB_USERNAME and DOCKERHUB_TOKEN).&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Log in to Docker Hub&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/login-action@v3&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKERHUB_USERNAME }}&lt;/span&gt;
          &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKERHUB_TOKEN }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="c1"&gt;# Extracts the repository name from the GitHub repository full name and sets it as an environment variable (REPO_NAME). &lt;/span&gt;
        &lt;span class="c1"&gt;# This information can be useful for later steps.&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get repository name&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;repo_full_name="$GITHUB_REPOSITORY"&lt;/span&gt;
          &lt;span class="s"&gt;IFS='/' read -ra repo_parts &amp;lt;&amp;lt;&amp;lt; "$repo_full_name"&lt;/span&gt;
          &lt;span class="s"&gt;echo "REPO_NAME=${repo_parts[1]}" &amp;gt;&amp;gt; $GITHUB_ENV&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="c1"&gt;# Uses the docker/metadata-action to extract metadata such as tags and labels for Docker. &lt;/span&gt;
        &lt;span class="c1"&gt;# This metadata can be used for versioning and labelling Docker images.&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Extract metadata (tags, labels) for Docker&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;meta&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/metadata-action@v5&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;images&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;${{ secrets.DOCKERHUB_USERNAME }}/${{env.REPO_NAME}}&lt;/span&gt;
          &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;type=sha,format=short &lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="c1"&gt;# Uses the docker/build-push-action to build and push the Docker image. It specifies the context as the current directory (.), the Dockerfile location (./Dockerfile), tags from the metadata, and labels from the metadata. &lt;/span&gt;
        &lt;span class="c1"&gt;# The push: true indicates that the image should be pushed to the Docker Hub registry.&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and push&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/build-push-action@v5&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
          &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./Dockerfile&lt;/span&gt;
          &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.meta.outputs.tags }}&lt;/span&gt;
          &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.meta.outputs.labels }}&lt;/span&gt;
          &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;




&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
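&lt;p&gt;The "Get repository name" step can be tried locally. The same result can also be obtained with plain POSIX parameter expansion, avoiding the intermediate array (the repository value here is hard-coded only for illustration):&lt;/p&gt;

```shell
# GitHub exposes the repo as owner/name in $GITHUB_REPOSITORY
GITHUB_REPOSITORY="aldorea/nestjs-source-template"
# Strip everything up to and including the first slash
echo "REPO_NAME=${GITHUB_REPOSITORY#*/}"
# → REPO_NAME=nestjs-source-template
```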

&lt;p&gt;Now, each time we merge a PR, the GitHub Action builds a Docker image and pushes it to our registry on Docker Hub, as you can see in the following images.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4msltnwtopxev6g9k36.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4msltnwtopxev6g9k36.png" alt="GitHub action to create a Docker image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7s1rrm5n0flnek36kvp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7s1rrm5n0flnek36kvp.png" alt="Docker images in our Dockerhub registry"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope you found this useful and picked up something new. Stay curious, keep learning, and rock on! Cheers to your next adventure!&lt;/p&gt;

&lt;h3&gt;Bibliography&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/tkssharma/things-you-must-have-in-every-repo-for-javascript-27db"&gt;Things you must have in every Repo for Javascript&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.tomray.dev/nestjs-docker-compose-postgres" rel="noopener noreferrer"&gt;NestJS, Redis and Postgres local development with Docker Compose&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.tomray.dev/nestjs-docker-production" rel="noopener noreferrer"&gt;How to write a NestJS Dockerfile optimized for production&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://earthly.dev/blog/cicd-build-github-action-dockerhub/" rel="noopener noreferrer"&gt;Create Automated CI/CD Builds Using GitHub Actions and DockerHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>javascript</category>
      <category>nestjs</category>
      <category>docker</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>Deploying a Web App in AWS China</title>
      <dc:creator>Alfonso Domenech</dc:creator>
      <pubDate>Wed, 06 Sep 2023 08:02:53 +0000</pubDate>
      <link>https://forem.com/aldorea/deploying-a-web-app-in-aws-china-2kpe</link>
      <guid>https://forem.com/aldorea/deploying-a-web-app-in-aws-china-2kpe</guid>
      <description>&lt;p&gt;The Chinese market is becoming more and more important for companies due to the volume of customers this market represents. Therefore, companies with digital products want to focus their products on the Chinese population. Companies using AWS services have the opportunity to do so, but with a number of limitations to consider when planning to move your services to China.&lt;/p&gt;

&lt;h1&gt;Things to consider when using AWS services in China&lt;/h1&gt;

&lt;h3&gt;Signing up for AWS China Regions&lt;/h3&gt;

&lt;p&gt;There are currently two AWS regions in China, Beijing and Ningxia. Global AWS accounts cannot access services in China, and vice versa; you have to create a separate account to use AWS services in China. But first we have to verify our identity. For this, we have to provide a business license issued by the Bureau of Industry and Commerce of the People's Republic of China (PRC) or a valid government agency. To help with this, Amazon offers the AWS China Gateway Portal, which guides us through the process of creating an account in this region.&lt;/p&gt;

&lt;h3&gt;ICP Filing for Internet Information Services in Mainland China&lt;/h3&gt;

&lt;p&gt;In accordance with Chinese regulations, we have to file for an Internet Content Provider (ICP) permit, of which there are two types: the ICP Recordal, for hosting a website providing non-commercial internet information services, and the ICP License, for hosting a website providing commercial internet information services.&lt;/p&gt;

&lt;h3&gt;Network Connections for AWS China Regions&lt;/h3&gt;

&lt;p&gt;The AWS China and AWS Global regions are not directly connected. To overcome packet loss and latency between the Chinese and global AWS regions, AWS works with local internet service providers such as China Mobile. The Direct Connect service is used to establish this link, which involves signing a contract with the provider and complying with Chinese regulations on the transfer and localization of information.&lt;/p&gt;

&lt;h3&gt;Relevant Compliance and Regulatory Requirements in Mainland China&lt;/h3&gt;

&lt;p&gt;If we plan to move our services to China, it is important to be aware of the legislation; AWS can facilitate this process if we contact their helpdesk.&lt;br&gt;
Apart from the ICP filing, and depending on the type of business, the following regulations must be taken into account: security obligations under the China Cybersecurity Law, Multi-Level Protection Scheme (MLPS) certification, and cryptography-related regulations.&lt;/p&gt;
&lt;h1&gt;Our particular use case&lt;/h1&gt;

&lt;p&gt;In our particular use case, we had to deploy a fairly simple web page where users fill in a series of forms to enter a competition. Their personal data would be stored in a database for further processing. As a fundamental requirement, the application had to be highly scalable, as we expected high traffic peaks at certain times after running promotional campaigns on social media. These projects usually have very short development times (between two weeks and a month), so the work tends to be quite hectic and results-oriented rather than focused on maintainability.&lt;/p&gt;

&lt;p&gt;All the projects we do for this client have very similar requirements, so we always use a very similar architecture and technology stack to optimize the development process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Front-end app using React and the Next.js framework.&lt;/li&gt;
&lt;li&gt;REST API using Node.js and Express. It connects our front-end app to our database.&lt;/li&gt;
&lt;li&gt;PostgreSQL as the database to store user data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We deployed these projects on a Kubernetes cluster using the AWS EKS service. We use the same template to create the necessary resources in the cluster, so the deployment and configuration of a new project is quite straightforward.&lt;/p&gt;

&lt;p&gt;However, in the case of this application we encountered a big problem: under Chinese regulations, no personal data of a Chinese citizen may leave China.&lt;/p&gt;

&lt;p&gt;Neither our Kubernetes cluster nor any of our other resources were located in China (they were in the Frankfurt region). Therefore, due to the Chinese regulatory restrictions, it was not going to be possible to reuse the infrastructure we already had in place for this project.&lt;/p&gt;

&lt;p&gt;Fortunately, AWS has a couple of special regions within China: Ningxia and Beijing. These regions, which make up AWS China, are segregated from the rest of the regions (AWS Global), have a much more limited number of services than the other regions and their interconnectivity with the rest of the AWS regions is practically non-existent.&lt;/p&gt;

&lt;p&gt;We decided to simplify our infrastructure as much as possible. Instead of using EKS, we chose Elastic Beanstalk, an AWS service that we had not used before but that seemed to fit our use case perfectly. Elastic Beanstalk lets you deploy scalable web applications in a very simple way, taking care of creating all the AWS resources needed for a proper deployment (EC2 instances, an Auto Scaling group, a load balancer, etc.). It therefore made deployment and configuration very easy for us. &lt;/p&gt;
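&lt;p&gt;As an illustration (these are not the exact commands we ran, and the application and environment names are placeholders), a typical Elastic Beanstalk deployment with the EB CLI looks roughly like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Initialize an Elastic Beanstalk application in the current project (Node.js platform)
eb init my-app --platform node.js --region cn-north-1
# Create a load-balanced environment that can scale between 1 and 4 instances
eb create my-env --elb-type application --min-instances 1 --max-instances 4
# Deploy the current version of the code
eb deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;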

&lt;p&gt;Once we managed to get it up and running (setting up infrastructure, continuous integration, etc.), we ran into another problem related to the Chinese regulation.&lt;/p&gt;

&lt;p&gt;As discussed in the previous section, any website needs an ICP license. This license, among other things, links our website to an IP address. Right from the start of the project, our client provided us with an Elastic IP (a fixed IP) for which they had already requested and obtained this license. &lt;/p&gt;

&lt;p&gt;At this point we realized one basic thing: how were we going to build a highly scalable application that is only accessible through a single Elastic IP? If we want the application to be highly scalable, we have to be able to scale it horizontally (i.e. increase the number of servers hosting the application). However, an Elastic IP can only be associated with a single EC2 instance, which does not work for us. No problem, let's just use one very large server. The big problem with that is that it is not elastic at all: capacity does not increase or decrease with the processing load at any given time, which leads to unnecessary costs.&lt;/p&gt;
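&lt;p&gt;To make the limitation concrete: with the AWS CLI, an Elastic IP is associated with exactly one instance at a time (the IDs below are placeholders).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Associate an Elastic IP (identified by its allocation ID) with a single EC2 instance.
# Re-running this against a second instance moves the IP; it is never shared.
aws ec2 associate-address --allocation-id eipalloc-0123456789abcdef0 --instance-id i-0123456789abcdef0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;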

&lt;p&gt;Well, but Beanstalk can provide us with a load balancer (ELB), allowing us to have several servers hosting the application behind a single entry point. Why not associate the Elastic IP with the ELB? It turns out, however, that this is not possible: internally, an ELB uses a pool of IPs that lets it scale its processing capacity elastically, so a single fixed IP cannot be assigned to it.&lt;/p&gt;

&lt;p&gt;Therefore, we had to improvise an alternative solution, which would not be optimal but could help us to solve the problem.&lt;/p&gt;
&lt;h1&gt;
  
  
  Quick Solution: Configuring an EC2 Instance as a Reverse Proxy
&lt;/h1&gt;

&lt;p&gt;If you are in a hurry to move your services into China, you can set up an EC2 instance as a reverse proxy as a temporary solution. The first thing to do is to create an EC2 instance from the AWS China console (which requires an AWS China account, created beforehand). Remember to configure the security groups so that you can access the instance via SSH and so that ports 80 and 443 are reachable from anywhere. We installed certbot to obtain SSL certificates for free with Let's Encrypt.&lt;br&gt;
Once you have created the EC2 instance (in our case we selected Amazon Linux 2) and can access it via SSH, you have to install the Nginx web server and certbot. Let's do it!&lt;br&gt;
First, we update the OS (Amazon Linux 2) and install the Nginx web server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;yum update
&lt;span class="nb"&gt;sudo &lt;/span&gt;amazon-linux-extras &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; nginx1
&lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nb"&gt;install &lt;/span&gt;certbot python2-certbot-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
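&lt;p&gt;The security group rules mentioned earlier (SSH for administration, HTTP and HTTPS open to the world) can also be added from the CLI; the group ID below is a placeholder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Allow SSH (ideally restrict the CIDR to your own IP instead of 0.0.0.0/0)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0
# Allow HTTP and HTTPS from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;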



&lt;p&gt;Certbot is able to automatically configure SSL for Nginx, but it needs to find a &lt;code&gt;server_name&lt;/code&gt; directive that matches the domain you are requesting the certificate for. This directive lives inside the server block, so let's modify the nginx.conf file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/nginx/nginx.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;server_name example.com www.example.com&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We save our changes and verify that the Nginx syntax is still valid.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;nginx &lt;span class="nt"&gt;-t&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If no errors occurred, we reload the server for the changes to take effect.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl reload nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we are ready to automatically configure our server to use &lt;strong&gt;HTTPS&lt;/strong&gt;. We run the following command so that certbot updates our configuration file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;certbot &lt;span class="nt"&gt;--nginx&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; example.com &lt;span class="nt"&gt;-d&lt;/span&gt; www.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Certbot will then communicate with the Let's Encrypt server and check that you control the domain for which you are requesting the certificate. If everything is successful, certbot will give you two options for the HTTPS configuration.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Easy - Allow both HTTP and HTTPS access to these sites&lt;/li&gt;
&lt;li&gt;Secure - Make all requests redirect to secure HTTPS access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you have chosen one of these options, certbot will modify the Nginx configuration and reload it. It will also print a message saying that the process was successful and where the certificates are stored.&lt;/p&gt;

&lt;p&gt;Now you just need to make sure that the certificates are renewed automatically, as Let's Encrypt certificates expire after 90 days. To do this, we are going to configure a cron job with the certbot command that renews the certificates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;crontab &lt;span class="nt"&gt;-e&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our editor will open the default crontab, where we add the following task.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;30 5 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; /usr/bin/certbot renew &lt;span class="nt"&gt;--quiet&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This task will run at 5:30 am every day. Certbot's &lt;code&gt;renew&lt;/code&gt; command checks whether any installed certificate is due to expire within 30 days and renews it. The &lt;code&gt;--quiet&lt;/code&gt; option tells certbot to run silently, printing nothing except errors.&lt;/p&gt;
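&lt;p&gt;Before relying on the cron job, it is worth checking that renewal will actually work. Certbot offers a dry-run mode that performs a test renewal against Let's Encrypt's staging servers without touching the real certificates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Simulate a renewal end to end; no real certificate is replaced
sudo certbot renew --dry-run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;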

&lt;p&gt;Let's create the configuration file that makes Nginx act as a reverse proxy, at the path &lt;code&gt;/etc/nginx/default.d/proxy.conf&lt;/code&gt;.&lt;br&gt;
The &lt;code&gt;proxy_pass&lt;/code&gt; directive indicates where the traffic has to be redirected; in our case, it is the URL of our load balancer. The main configuration file includes everything under &lt;code&gt;default.d&lt;/code&gt;, so this file will be loaded automatically. It looks like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;location / {
 proxy_pass http://your-load-balancer-url;
 proxy_pass_header Server;
 proxy_hide_header X-Powered-By;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we check that the syntax is correct and reload the server (&lt;strong&gt;as we have done in the previous steps&lt;/strong&gt;), and our Nginx server is ready, with HTTPS traffic enabled and acting as a reverse proxy.&lt;/p&gt;
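&lt;p&gt;An optional refinement of the minimal location block above: forwarding the original host and client address to the backend, so the application behind the load balancer can see who actually made the request. These headers are conventional Nginx proxying practice, not something our setup strictly required; confirm that your backend reads them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;location / {
 proxy_pass http://your-load-balancer-url;
 proxy_pass_header Server;
 proxy_hide_header X-Powered-By;
 # Forward the original Host header and client address to the backend
 proxy_set_header Host $host;
 proxy_set_header X-Real-IP $remote_addr;
 proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
 proxy_set_header X-Forwarded-Proto $scheme;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;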

&lt;h1&gt;
  
  
  The Stable Solution
&lt;/h1&gt;

&lt;p&gt;Once the problem was solved in time and the day of the application launch arrived, we had the feeling that there should be a better way to solve this problem. It seemed too common a situation; it must have happened to someone before.&lt;/p&gt;

&lt;p&gt;After searching and searching on Google, we found a document from SINNET, the company that manages the Beijing region in AWS China &lt;a href="https://s3.cn-north-1.amazonaws.com.cn/sinnetcloud/%E5%85%AC%E5%AE%89%E5%A4%87%E6%A1%88/Description+of+ICP+Recordal.pdf"&gt;(link)&lt;/a&gt;. This document pointed to another document with information on how to manage ICP licenses when using ELBs &lt;a href="https://s3.cn-north-1.amazonaws.com.cn/sinnetcloud/%E5%85%AC%E5%AE%89%E5%A4%87%E6%A1%88/%E8%8E%B7%E5%8F%96ELB%E7%9A%84IP/ELB+preserved+IP+operation+guide+for+AWS+ICP+Recordal_V2.1(updated).pdf"&gt;(link)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Summarizing the document: when applying for an ICP license, it is possible to request several IPs, not just one. In addition, you can ask AWS China to assign IPs to an ELB only from a fixed set of IPs (the ones requested in the ICP license).&lt;/p&gt;

&lt;p&gt;However, adopting this approach at this stage was no longer possible. A new ICP request can take between one and three weeks to be accepted, so the current system will have to remain in place for at least a month.&lt;/p&gt;

&lt;p&gt;We decided to write this article mainly to save those who run into this problem in the future, so they can solve it in time. But we would also like to draw some conclusions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rushing is never good; it leads to improvisation whenever a problem appears.&lt;/li&gt;
&lt;li&gt;The client should tell you what they want to do, not how, especially if they lack technical knowledge of the domain. That is not to say they cannot contribute, but they should not impose a solution without prior discussion.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>nginx</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
