<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Nicolas El Khoury</title>
    <description>The latest articles on Forem by Nicolas El Khoury (@devopsbeyondlimitslb).</description>
    <link>https://forem.com/devopsbeyondlimitslb</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F229849%2F749a9660-de97-4a44-9068-a133218d3827.jpeg</url>
      <title>Forem: Nicolas El Khoury</title>
      <link>https://forem.com/devopsbeyondlimitslb</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/devopsbeyondlimitslb"/>
    <language>en</language>
    <item>
      <title>Orchestrating Microservices on AWS and Docker Swarm - A Comprehensive Tutorial (2)</title>
      <dc:creator>Nicolas El Khoury</dc:creator>
      <pubDate>Wed, 23 Aug 2023 05:58:49 +0000</pubDate>
      <link>https://forem.com/devopsbeyondlimitslb/orchestrating-microservices-on-aws-and-docker-swarm-a-comprehensive-tutorial-2-3cnn</link>
      <guid>https://forem.com/devopsbeyondlimitslb/orchestrating-microservices-on-aws-and-docker-swarm-a-comprehensive-tutorial-2-3cnn</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Monolithic architecture is a traditional model for designing applications where all components and functions are included in a single element. While monolithic applications are easy to develop and deploy, they become difficult to manage and maintain as the application grows. Microservices architecture, on the other hand, is a collection of smaller, independent, loosely coupled services that communicate with each other over one or more protocols. Microservices are highly scalable, easy to maintain, and extremely well suited to container-based technologies. They complement cloud solutions and provide fault tolerance.&lt;/p&gt;

&lt;p&gt;Container orchestration tools are automation technologies that help manage the lifecycle of application containers, and of Microservices architectures at scale. They automate container deployment, management, scaling, and networking, freeing teams from repetitive manual work, and they can be applied in any environment where containers are used. They help deploy the same application across different environments without redesigning it. Running Microservices in containers makes it easier to orchestrate the surrounding services, including storage, networking, and security. Enterprises that need to deploy and manage hundreds or thousands of Linux containers and hosts benefit the most from container orchestration. &lt;/p&gt;

&lt;p&gt;The &lt;a href=""&gt;first part&lt;/a&gt; of this tutorial dived deeper into the aforementioned concepts, and introduced Docker Swarm through an exercise of orchestrating containerized services on a Docker Swarm made of 2 Virtual Machines, on &lt;a href="https://aws.amazon.com/"&gt;Amazon Web Services&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;This article dives deeper into the orchestration of Microservices, and their deployment on Docker Swarm. &lt;/p&gt;

&lt;h1&gt;
  
  
  NK-Microservices Application
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nWL0ZbL1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/014ty8n3i4fctqae3qw4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nWL0ZbL1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/014ty8n3i4fctqae3qw4.jpg" alt="NK-microservices components" width="399" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/devops-beyond-limits/nk-microservices-deployment"&gt;NK-Microservices project&lt;/a&gt; is a basic, and open source application built using Microservices. It serves as a pilot project and/or a reference to be used by anyone who wishes to write software using the Microservices approach. Indeed, Microservices is a software development methodology that is being adopted widely nowadays, especially with the advancement of technology, and the adoption of cloud computing resources._&lt;br&gt;
The project is made of the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/devops-beyond-limits/nk-gateway-service"&gt;&lt;strong&gt;Gateway Microservice&lt;/strong&gt;&lt;/a&gt;: A REST API Microservice built using SailsJS that serves as a Gateway and request router.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/devops-beyond-limits/nk-backend-service"&gt;&lt;strong&gt;Backend Microservice&lt;/strong&gt;&lt;/a&gt;: A REST API Microservice built using SailsJS that serves as the first of many Microservices that can be incorporated and integrated with the aforementioned Gateway service.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://redis.io/"&gt;&lt;strong&gt;Redis Database&lt;/strong&gt;&lt;/a&gt;: An open-source, in-memory data store, used for caching and for storing ephemeral pieces of information such as JWT tokens.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.arangodb.com/"&gt;&lt;strong&gt;Arango Database&lt;/strong&gt;&lt;/a&gt;: A multi-model database used for storing persistent information.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;(Visit the &lt;a href="https://github.com/devops-beyond-limits/nk-microservices-deployment"&gt;documentation repository&lt;/a&gt; for the complete details of the project)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This tutorial concentrates on the deployment of the backend microservices only (i.e., the Gateway and Backend services). Enabling HA deployments for the databases (i.e., Redis, ArangoDB) requires different strategies, and will be discussed in later articles. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;To get the most value from this tutorial, you should be equipped with a minimum of technical and theoretical knowledge of Containers, Microservices, Cloud Computing, and AWS. Readers who do not possess the knowledge above are encouraged to watch this &lt;a href="https://www.udemy.com/course/intro-fullstack-devops/"&gt;Udemy crash course&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h1&gt;
  
  
  Tutorial 2 - NK-Microservices Application Deployment on Docker Swarm
&lt;/h1&gt;
&lt;h2&gt;
  
  
  Problem Statement
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lxCd3k6f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/41g74u6q0g86uwa4hl0g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lxCd3k6f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/41g74u6q0g86uwa4hl0g.png" alt="NK-Microservices Infrastructure Architecture" width="800" height="641"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This tutorial provides a step-by-step guide to deploying the &lt;strong&gt;&lt;a href="https://github.com/devops-beyond-limits/nk-microservices-deployment/tree/main"&gt;NK Microservices&lt;/a&gt;&lt;/strong&gt; application with the following requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Deployment Mode&lt;/strong&gt;: High Availability - Docker Swarm.

&lt;ul&gt;
&lt;li&gt;A Docker Swarm composed of 4 Virtual Machines, configured as such:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;One Docker Swarm Master&lt;/strong&gt; VM.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Two Docker Swarm Worker&lt;/strong&gt; VMs, on which the Gateway and Backend Services are deployed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One Docker Swarm Worker&lt;/strong&gt; VM, on which the Arango and Redis Databases are deployed.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operating System:&lt;/strong&gt; Ubuntu 22.04.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One Application Load Balancer&lt;/strong&gt; to balance the load across all four VMs.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Expected Output
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The application must be fully deployed and running on port 80 using the
address: &lt;code&gt;http://&amp;lt;load balancer&amp;gt;:80&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;NK-gateway service&lt;/strong&gt; and &lt;strong&gt;NK-backend service&lt;/strong&gt; must be linked as a target group to the Application Load balancer.&lt;/li&gt;
&lt;li&gt;The security group attached to the services machines must enable access on port 80 from the Application Load Balancer.&lt;/li&gt;
&lt;li&gt;The security group attached to the load balancer must enable access to port 80 from the internet.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;NK-gateway service&lt;/strong&gt; and &lt;strong&gt;NK-backend service&lt;/strong&gt; must be deployed as 2 replicas, each on one of the two VMs labeled “&lt;strong&gt;Services&lt;/strong&gt;” strictly.&lt;/li&gt;
&lt;li&gt;The Arango and Redis databases must be deployed as one replica each, on the VM labeled “&lt;strong&gt;Databases&lt;/strong&gt;” strictly.&lt;/li&gt;
&lt;li&gt;The Arango database must have a volume configured. Test the validity of the deployment by deleting the container (not the service). A correct configuration should allow Docker Swarm to re-create a replica of the database, on the machine labeled “&lt;strong&gt;&lt;em&gt;Databases&lt;/em&gt;&lt;/strong&gt;”, and all the data should persist.&lt;/li&gt;
&lt;li&gt;The Docker Swarm Master node must not have any container deployed on it.&lt;/li&gt;
&lt;li&gt;Communication with the Backend service and the databases must be done internally from within the Docker Network only, and not through the VM IP and external port.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;
&lt;h3&gt;
  
  
  AWS Resources
&lt;/h3&gt;
&lt;h4&gt;
  
  
  SSH Keypair
&lt;/h4&gt;


&lt;p&gt;An SSH Keypair is required to SSH to the Virtual Machines. To create a Keypair: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the &lt;a href="https://eu-central-1.console.aws.amazon.com/ec2/home?region=eu-central-1"&gt;&lt;strong&gt;EC2&lt;/strong&gt; service&lt;/a&gt;, &lt;strong&gt;Key Pairs&lt;/strong&gt; option from the left menu.&lt;/li&gt;
&lt;li&gt;Create a Keypair. &lt;/li&gt;
&lt;li&gt;The key will be automatically downloaded. Move it to a hidden directory.&lt;/li&gt;
&lt;li&gt;Modify the permissions to read only: &lt;code&gt;chmod 400 &amp;lt;keyName&amp;gt;.pem&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
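&lt;p&gt;With the Keypair in place, each VM can be accessed over SSH. In the sketch below, &lt;code&gt;PUBLIC_IP&lt;/code&gt; is a placeholder for the VM's public IP address, and &lt;code&gt;ubuntu&lt;/code&gt; is the default user on Ubuntu AMIs:&lt;/p&gt;

```shell
# PUBLIC_IP is a placeholder; substitute the VM's public IP address
ssh -i ~/.ssh/docker-swarm-demo.pem ubuntu@PUBLIC_IP
```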

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7BfmLt4r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkr9fapc4vo3yyh0cpum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7BfmLt4r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkr9fapc4vo3yyh0cpum.png" alt="SSH Keypair Creation" width="800" height="612"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rtvzHz-q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/az7w7fx0vkozuayhlvdr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rtvzHz-q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/az7w7fx0vkozuayhlvdr.png" alt="SSH Keypair locally" width="800" height="114"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Security Group
&lt;/h4&gt;

&lt;p&gt;A &lt;a href="https://eu-central-1.console.aws.amazon.com/ec2/home?region=eu-central-1#SecurityGroups:"&gt;Security Group&lt;/a&gt; is required to control access to the VMs. The ports below will be open to traffic from anywhere:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TCP Port 80: To perform requests against the Httpd service.&lt;/li&gt;
&lt;li&gt;TCP Port 22: To SSH to the machines.&lt;/li&gt;
&lt;li&gt;All TCP and UDP Ports open from within the VPC: Several ports are required for Docker Swarm, which are out of the scope of this article.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5oSANxEc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4hw4iccf9t8u99rhmt2r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5oSANxEc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4hw4iccf9t8u99rhmt2r.png" alt="Security Group" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition, Outbound rules must allow all ports to anywhere. This is essential to provide internet access to the machines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WL6gwHse--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ruslxv6ukyanyhfjqo5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WL6gwHse--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ruslxv6ukyanyhfjqo5w.png" alt="Outbound Rules" width="800" height="102"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  EC2 Machines
&lt;/h4&gt;

&lt;p&gt;Navigate to &lt;a href="https://eu-central-1.console.aws.amazon.com/ec2/home?region=eu-central-1#Instances:"&gt;&lt;strong&gt;AWS EC2&lt;/strong&gt;&lt;/a&gt; —&amp;gt; &lt;strong&gt;Launch instances&lt;/strong&gt;, with the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt;: Docker Swarm Demo&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Number of instances&lt;/strong&gt;: 4&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AMI&lt;/strong&gt;: Ubuntu Server 22.04 LTS (HVM), SSD Volume Type&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instance Type&lt;/strong&gt;: t3.medium (Or any type of your choice)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key pair name&lt;/strong&gt;: docker-swarm-demo&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Settings&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Select existing security group&lt;/strong&gt;: Docker Swarm Demo&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure storage&lt;/strong&gt;: 1 x 25 GiB gp2 Root volume&lt;/li&gt;
&lt;/ul&gt;
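&lt;p&gt;The same launch can be sketched with the AWS CLI. &lt;code&gt;AMI_ID&lt;/code&gt; and &lt;code&gt;SG_ID&lt;/code&gt; are placeholders for the Ubuntu AMI and security group IDs, which must be looked up in your region:&lt;/p&gt;

```shell
# AMI_ID and SG_ID are placeholders; look them up in your region
aws ec2 run-instances \
  --image-id AMI_ID \
  --count 4 \
  --instance-type t3.medium \
  --key-name docker-swarm-demo \
  --security-group-ids SG_ID \
  --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=25,VolumeType=gp2}'
```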

&lt;p&gt;Once the machines are created, rename one of them to &lt;strong&gt;Swarm Master&lt;/strong&gt;, and the others to &lt;strong&gt;Swarm Worker - Backend - 1&lt;/strong&gt;, &lt;strong&gt;Swarm Worker - Backend - 2&lt;/strong&gt;, and &lt;strong&gt;Swarm Worker - Databases&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LzOn0gFm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1mog4f81jo0fu22la8c1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LzOn0gFm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1mog4f81jo0fu22la8c1.png" alt="Swarm Nodes" width="800" height="161"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  ECR Repositories
&lt;/h4&gt;

&lt;p&gt;Two private &lt;a href="https://eu-central-1.console.aws.amazon.com/ecr/repositories?region=eu-central-1"&gt;ECR Repositories&lt;/a&gt; must be created, with the following configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_bfjlcT4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h9b1dc77zoobf7sbakyx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_bfjlcT4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h9b1dc77zoobf7sbakyx.png" alt="ECR Repositories" width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  IAM Roles
&lt;/h4&gt;

&lt;p&gt;The machines will be used to pull and push images from and to the private ECR registries. An IAM role with enough permissions must be attached to the created machines. To create a role, navigate to the &lt;a href="https://us-east-1.console.aws.amazon.com/iamv2/home?region=eu-central-1#/home"&gt;IAM service&lt;/a&gt; --&amp;gt; Roles, and create a role, with the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Trusted entity type&lt;/strong&gt;: AWS Service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use case&lt;/strong&gt;: EC2&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Permission Policy&lt;/strong&gt;: AmazonEC2ContainerRegistryPowerUser&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Role Name&lt;/strong&gt;: docker-swarm-demo&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create the role.&lt;/p&gt;
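&lt;p&gt;As a sketch, the same role can be created from the AWS CLI. Unlike the console, the CLI requires an explicit instance profile, and &lt;code&gt;trust-policy.json&lt;/code&gt; must contain the standard EC2 trust policy (&lt;code&gt;ec2.amazonaws.com&lt;/code&gt; allowed to call &lt;code&gt;sts:AssumeRole&lt;/code&gt;):&lt;/p&gt;

```shell
# Create the role with an EC2 trust policy
aws iam create-role \
  --role-name docker-swarm-demo \
  --assume-role-policy-document file://trust-policy.json

# Attach the managed ECR power-user policy
aws iam attach-role-policy \
  --role-name docker-swarm-demo \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser

# The console creates an instance profile automatically; the CLI does not
aws iam create-instance-profile --instance-profile-name docker-swarm-demo
aws iam add-role-to-instance-profile \
  --instance-profile-name docker-swarm-demo \
  --role-name docker-swarm-demo
```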

&lt;p&gt;Navigate back to the EC2 instances dashboard. Click on every instance, one by one, then select &lt;strong&gt;Actions&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Security&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Modify IAM Role&lt;/strong&gt;. Select the &lt;strong&gt;docker-swarm-demo&lt;/strong&gt; role. Repeat this step for every VM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9E-gaf5C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8gz3xxlxsmcepxgk3o3t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9E-gaf5C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8gz3xxlxsmcepxgk3o3t.png" alt="IAM Role Attached" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Docker Installation
&lt;/h3&gt;

&lt;p&gt;SSH to each of the four machines, and paste the code block below to install Docker on each of them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Update the package index and install the required packages
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release

# Add Docker’s official GPG key:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Set up the repository
echo "deb [arch=$(dpkg --print-architecture) \
signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list \
&amp;gt; /dev/null

# Update the package index again
sudo apt-get update

# Install the latest version of docker
sudo apt-get install -y docker-ce docker-ce-cli containerd.io \
docker-compose-plugin

# Add the current user to the docker group 
#(to run Docker commands without sudo)
sudo usermod -aG docker $USER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To interact with Docker without &lt;code&gt;sudo&lt;/code&gt; privileges, restart the SSH session on each of the four machines. Validate the successful installation of Docker by performing the following three commands on each machine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;docker ps -a&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker images&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker -v&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Docker Swarm Configuration
&lt;/h3&gt;

&lt;p&gt;As specified, the purpose is to create a Docker Swarm made of one Master node and three Worker nodes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSH to the Master node, and initialize a swarm: &lt;code&gt;docker swarm init --advertise-addr &amp;lt;Private IP&amp;gt;&lt;/code&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;advertise-addr&lt;/strong&gt; specifies the address that will be advertised to other members of the swarm for API access and overlay networking. Therefore, it is always better to use the machines' private IPs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ue4s7tm3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c83kubavpfpqpgdp0a7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ue4s7tm3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c83kubavpfpqpgdp0a7j.png" alt="Swarm Initialization" width="800" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;SSH to every Worker node, and join the swarm using the command generated in the Master node: &lt;code&gt;docker swarm join --token SWMTKN-1-210tp2olzm5z0766v71c6e6pmdzrjzz8pnkrw3z4mqj8ocjlbj-5758xx4x3dxib1249tceom6rr &amp;lt;Private IP&amp;gt;:2377&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;List all the nodes available: &lt;code&gt;docker node ls&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
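&lt;p&gt;If the join command generated by &lt;code&gt;docker swarm init&lt;/code&gt; is lost, it can be re-printed at any time from the Master node:&lt;/p&gt;

```shell
# Run on the Master node: prints the full "docker swarm join" command,
# including the worker token
docker swarm join-token worker
```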

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5EhtU-mw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k67qych7nqe136gb5mk4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5EhtU-mw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k67qych7nqe136gb5mk4.png" alt="Image description" width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Node configuration
&lt;/h3&gt;

&lt;p&gt;Part of the project requirements are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Master Node must not host any container.&lt;/li&gt;
&lt;li&gt;The databases (ArangoDB, and Redis) must be placed on the database machine only.&lt;/li&gt;
&lt;li&gt;The backend services must be placed on the backend machines only.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To achieve this, the nodes must first be properly configured. SSH to the Master machine.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.docker.com/engine/swarm/swarm-tutorial/drain-node/"&gt;Draining&lt;/a&gt; the Master node will prevent it from hosting any container. Drain the node: &lt;code&gt;docker node update --availability drain ip-172-31-46-120&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Lb3umixh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/46m5duiy4u1orq67x58y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lb3umixh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/46m5duiy4u1orq67x58y.png" alt="Node Draining" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Label the remaining machines. The services VMs must be labeled &lt;code&gt;workload=service&lt;/code&gt;, and the database VM must be labeled &lt;code&gt;workload=database&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker node update --label-add workload=database ip-172-31-41-113
docker node update --label-add workload=service ip-172-31-40-214
docker node update --label-add workload=service ip-172-31-36-173
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ensure that the labels have been applied correctly by listing the nodes by label:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker node ls -f node.label=workload=database
docker node ls -f node.label=workload=service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wUyaPTOC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r3awrjo16x9b1z09tegv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wUyaPTOC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r3awrjo16x9b1z09tegv.png" alt="Node Labels" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The scenarios below further test the validity of the configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy 5 replicas of the HTTPD service with no constraints. The replicas must be deployed on all the worker VMs, and none on the Master node: &lt;code&gt;docker service create --name myhttpd -p 80:80 --replicas 5 httpd:latest&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iJ1gkTBU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwuxmuuljg6djw5qr1kt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iJ1gkTBU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwuxmuuljg6djw5qr1kt.png" alt="Placement Strategy 1" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The picture above clearly shows that the replicas were distributed across all the worker nodes, and that none of them was placed on the Master node.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the HTTPD service to force the placement of the services on machines labeled &lt;em&gt;&lt;strong&gt;workload=service&lt;/strong&gt;&lt;/em&gt; only: &lt;code&gt;docker service update --constraint-add node.labels.workload==service myhttpd&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--n-1l4ukh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u29ncq17kh06hi7hsep3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n-1l4ukh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u29ncq17kh06hi7hsep3.png" alt="Placement Strategy 2" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above clearly shows how the placement was modified. All the replicas that were on the &lt;strong&gt;database&lt;/strong&gt; node were removed, and replaced by others placed on the &lt;strong&gt;service&lt;/strong&gt; nodes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the HTTPD service to force the placement of the services on machines labeled &lt;em&gt;&lt;strong&gt;workload=database&lt;/strong&gt;&lt;/em&gt; only: &lt;code&gt;docker service update --constraint-rm node.labels.workload==service --constraint-add node.labels.workload==database myhttpd&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--01a8BTST--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ajxk7gyc39bmfonxms5y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--01a8BTST--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ajxk7gyc39bmfonxms5y.png" alt="Placement Strategy 3" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows how all the replicas are now on the &lt;strong&gt;database&lt;/strong&gt; node.&lt;/p&gt;

&lt;p&gt;Remove the service: &lt;code&gt;docker service rm myhttpd&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  NK-Microservices Deployment
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Internal Network
&lt;/h4&gt;

&lt;p&gt;An &lt;a href="https://docs.docker.com/network/drivers/overlay/"&gt;overlay network&lt;/a&gt; is needed to allow services to communicate with each other within the swarm: &lt;code&gt;docker network create --driver overlay nk-microservices&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  ArangoDB
&lt;/h4&gt;

&lt;p&gt;The first component to add is the Arango database. The deployment must respect the following constraints:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The service can run only on the &lt;strong&gt;database&lt;/strong&gt; node.&lt;/li&gt;
&lt;li&gt;The service must be reachable from within the Swarm network only.&lt;/li&gt;
&lt;li&gt;Data must persist and survive container failures.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Deploy the ArangoDB container: &lt;code&gt;docker service create -d --name person-db --network nk-microservices --replicas 1 --constraint node.labels.workload==database --mount src=arango-volume,dst=/var/lib/arangodb3 -e ARANGO_STORAGE_ENGINE=rocksdb -e ARANGO_ROOT_PASSWORD=openSesame arangodb/arangodb:3.6.3&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The command above instructs Docker to create a service of 1 replica, attach it to the &lt;strong&gt;nk-microservices&lt;/strong&gt; overlay network, mount the &lt;strong&gt;/var/lib/arangodb3&lt;/strong&gt; directory on a volume named &lt;strong&gt;arango-volume&lt;/strong&gt;, use &lt;strong&gt;openSesame&lt;/strong&gt; as the root password, use the &lt;strong&gt;arangodb/arangodb:3.6.3&lt;/strong&gt; image, and place the container on the VM labeled &lt;strong&gt;workload=database&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--loXuQdiH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sftl2b6ulbpd9z4f91zz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--loXuQdiH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sftl2b6ulbpd9z4f91zz.png" alt="ArangoDB Service Placement" width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above clearly shows that the container was created on the &lt;strong&gt;database&lt;/strong&gt; node. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K-20R9CZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0r2rhn0tcdifxgv3e3m0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K-20R9CZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0r2rhn0tcdifxgv3e3m0.png" alt="ArangoDB Container Placement" width="800" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above is a screenshot from within the &lt;strong&gt;database&lt;/strong&gt; node. Clearly, the volume and container were created as they should be.&lt;/p&gt;
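&lt;p&gt;The persistence requirement can be tested with the sketch below, run on the &lt;strong&gt;database&lt;/strong&gt; node: force-remove the container (not the service), then verify that Docker Swarm re-creates the replica with the same volume attached:&lt;/p&gt;

```shell
# Force-remove the running ArangoDB container (not the service)
docker rm -f $(docker ps -q --filter name=person-db)

# Docker Swarm should schedule a replacement replica within seconds
docker service ps person-db

# The named volume, and therefore the data, should survive
docker volume ls --filter name=arango-volume
```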

&lt;h4&gt;
  
  
  Redis
&lt;/h4&gt;

&lt;p&gt;The deployment must respect the following constraints:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The service can run only on the &lt;strong&gt;database&lt;/strong&gt; node.&lt;/li&gt;
&lt;li&gt;The service must be reachable from within the Swarm network only.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;docker service create -d --name redis --network nk-microservices --replicas 1 --constraint node.labels.workload==database redis:6.0.5&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The command above instructs Docker to create a service of 1 replica, attach it to the &lt;strong&gt;nk-microservices&lt;/strong&gt; overlay network, use the &lt;strong&gt;redis:6.0.5&lt;/strong&gt; image, and place the container on the VM labeled &lt;strong&gt;workload=database&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9aVsrt92--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4pa1qe6sw8w05iwcgxxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9aVsrt92--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4pa1qe6sw8w05iwcgxxw.png" alt="Redis Service Placement" width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1rmoLLK9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wmwr2ioenbxyxyb3sh05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1rmoLLK9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wmwr2ioenbxyxyb3sh05.png" alt="Redis Container Placement" width="800" height="84"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The images above showcase the proper deployment of Redis.&lt;/p&gt;
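&lt;p&gt;Both constraints can be checked from the master node as well. A minimal sketch; because no &lt;strong&gt;-p&lt;/strong&gt; flag was passed, the service publishes no ports and is therefore reachable only on the overlay network:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The PORTS column should be empty, confirming no exposure outside the Swarm
docker service ls --filter name=redis

# The task should be running on the database node
docker service ps redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;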

&lt;h4&gt;
  
  
  Connection Testing
&lt;/h4&gt;

&lt;p&gt;To test the connectivity between the services, deploy a temporary client container attached to the same overlay network:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker service create -d --name client --network nk-microservices --replicas 1 --constraint node.labels.workload==service alpine sleep 3600&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mz9lCh-E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lhk9qeywhzpdd2m8x2pp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mz9lCh-E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lhk9qeywhzpdd2m8x2pp.png" alt="Client Service Placement" width="800" height="129"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows the correct placement of the client container on one of the &lt;strong&gt;service&lt;/strong&gt; nodes.&lt;/p&gt;

&lt;p&gt;SSH to the machine hosting the container, and then obtain a session into the container: &lt;code&gt;docker exec -it client.1.vjhcn1zo86n1vqvr0iortjhe5 sh&lt;/code&gt; (the task name and ID will differ in your environment).&lt;/p&gt;

&lt;p&gt;Install the &lt;strong&gt;curl&lt;/strong&gt; package: &lt;code&gt;apk add curl&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test the connection to ArangoDB by sending an API request to the ArangoDB hostname (which is the service name in this case): &lt;code&gt;curl http://person-db:8529/_api/version&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The API request returns a 404 error response. Nonetheless, this indicates that the request from the client container has successfully reached the ArangoDB container using the overlay network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2akn00Ku--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nzoof8juir81y1b6ru5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2akn00Ku--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nzoof8juir81y1b6ru5z.png" alt="ArangoDB Connection" width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To test the connection to Redis, install the redis package on the alpine container: &lt;code&gt;apk --update add redis&lt;/code&gt;. Then connect to Redis using its hostname (the service name in this case): &lt;code&gt;redis-cli -h redis&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hxgBLoWh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/84uqklwbt94ulqq8m54y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hxgBLoWh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/84uqklwbt94ulqq8m54y.png" alt="Redis Connection" width="800" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows the successful connection to the &lt;strong&gt;Redis&lt;/strong&gt; service using the overlay network.&lt;/p&gt;

&lt;p&gt;Remove the client service, from the master node: &lt;code&gt;docker service rm client&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Backend Service
&lt;/h4&gt;

&lt;p&gt;As already described, the backend service is a NodeJS service hosted in a public GitHub repository. The &lt;strong&gt;backend service&lt;/strong&gt; must be deployed as 2 replicas, strictly on the &lt;strong&gt;service&lt;/strong&gt; nodes.&lt;/p&gt;

&lt;p&gt;Below are the steps to ensure the proper deployment of the service:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use the &lt;strong&gt;Master&lt;/strong&gt; node to build and push the container images (A separate machine could be used for this operation, but to avoid using additional resources, we will use the &lt;strong&gt;master&lt;/strong&gt; node).&lt;/li&gt;
&lt;li&gt;Create an image for the backend service, using the existing Dockerfile.&lt;/li&gt;
&lt;li&gt;Push the image to the ECR.&lt;/li&gt;
&lt;li&gt;Create the backend service.&lt;/li&gt;
&lt;li&gt;Ensure its proper connectivity to the database.&lt;/li&gt;
&lt;li&gt;Validation Tests&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Create the Image&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Clone the repository on the &lt;strong&gt;Master&lt;/strong&gt; node: &lt;code&gt;git clone https://github.com/devops-beyond-limits/nk-backend-service.git&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate into the downloaded repository. A Dockerfile containing all the build steps exists. No modifications are required for this Dockerfile.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build the image locally: &lt;code&gt;docker build -t backend-service:latest -f Dockerfile .&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W3TLeI6o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q52xbk6drgn60hnrg4ec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W3TLeI6o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q52xbk6drgn60hnrg4ec.png" alt="Local Image" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows the creation of the image locally.&lt;/p&gt;
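&lt;p&gt;The presence of the freshly built image can also be confirmed from the terminal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List the locally built image and its tag
docker image ls backend-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;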

&lt;p&gt;&lt;strong&gt;Push the image to the ECR&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To push the image to the ECR, perform the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Install the AWS CLI: &lt;code&gt;sudo apt install awscli -y&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Login to the ECR: &lt;code&gt;aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 444208416329.dkr.ecr.eu-central-1.amazonaws.com&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tag the local image to reflect the repository name in the ECR: &lt;code&gt;docker tag backend-service:latest 444208416329.dkr.ecr.eu-central-1.amazonaws.com/nk-backend-service:latest&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Push the Docker image to the ECR: &lt;code&gt;docker push 444208416329.dkr.ecr.eu-central-1.amazonaws.com/nk-backend-service:latest&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;(Make sure to modify the region and account ID in the commands above)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_pNn-nLZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4gztmcrz60tnqvoz13ks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_pNn-nLZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4gztmcrz60tnqvoz13ks.png" alt="Backend Service Build and Push" width="800" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X1bKMYJ8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/25adh2oj0bbdx1cc7pko.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X1bKMYJ8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/25adh2oj0bbdx1cc7pko.png" alt="Backend Service ECR Image" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create the Backend Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create the backend service using this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker service create -d --name nk-backend --network nk-microservices --replicas 2 --constraint node.labels.workload==service --with-registry-auth 444208416329.dkr.ecr.eu-central-1.amazonaws.com/nk-backend-service:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command above instructs Docker to create 2 replicas of the backend service image, which is located in the private AWS ECR registry; the &lt;strong&gt;--with-registry-auth&lt;/strong&gt; flag is therefore required, so that the worker machines can authenticate to the ECR. The service is attached to the &lt;strong&gt;nk-microservices&lt;/strong&gt; network, and its replicas are placed on the &lt;strong&gt;service&lt;/strong&gt; machines only.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rPeMxo04--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/97xip5ieta9zmjinjcsm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rPeMxo04--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/97xip5ieta9zmjinjcsm.png" alt="Backend Service Deployed" width="800" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The picture above shows the correct placement of the containers on the machines.&lt;/p&gt;
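&lt;p&gt;The placement can be confirmed from the master node as well. A minimal sketch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# REPLICAS should read 2/2 once both tasks are up
docker service ls --filter name=nk-backend

# Both tasks should be placed on nodes labeled workload=service
docker service ps nk-backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;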

&lt;p&gt;&lt;strong&gt;Validation Tests&lt;/strong&gt;&lt;br&gt;
To validate the correct deployment of the backend service, perform the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure the proper connection to the database, by checking the service logs: &lt;code&gt;docker service logs nk-backend&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yI_yNknX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nx16tusvnpn8bl7ixn5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yI_yNknX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nx16tusvnpn8bl7ixn5d.png" alt="Application Logs" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The logs in the picture above clearly show the successful connection of the application to the database.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure the backend service is reachable from within the overlay network. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To do so, re-create the client service: &lt;code&gt;docker service create -d --name client --network nk-microservices --replicas 1 --constraint node.labels.workload==service alpine sleep 3600&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;SSH into the machine hosting the created container, exec inside it, install the &lt;strong&gt;curl&lt;/strong&gt; package, and perform a health check against the backend service, using its hostname (the service name): &lt;code&gt;curl http://nk-backend:1337/health&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zyUPiP8k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4yuw5kbpxfhhtwli6p29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zyUPiP8k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4yuw5kbpxfhhtwli6p29.png" alt="Connection to the Backend Service" width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A successful response indicates the proper configuration of the network.&lt;/p&gt;

&lt;p&gt;Delete the client service: &lt;code&gt;docker service rm client&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Gateway Service
&lt;/h4&gt;

&lt;p&gt;As already described, the gateway service is a NodeJS service hosted in a public GitHub repository. The &lt;strong&gt;gateway service&lt;/strong&gt; must be deployed as 2 replicas, strictly on the &lt;strong&gt;service&lt;/strong&gt; nodes.&lt;/p&gt;

&lt;p&gt;Below are the steps to ensure the proper deployment of the service:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use the &lt;strong&gt;Master&lt;/strong&gt; node to build and push the container images (A separate machine could be used for this operation, but to avoid using additional resources, we will use the &lt;strong&gt;master&lt;/strong&gt; node).&lt;/li&gt;
&lt;li&gt;Create an image for the gateway service, using the existing Dockerfile.&lt;/li&gt;
&lt;li&gt;Push the image to the ECR.&lt;/li&gt;
&lt;li&gt;Create the gateway service.&lt;/li&gt;
&lt;li&gt;Validation Tests&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Create the Image&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Clone the repository on the &lt;strong&gt;Master&lt;/strong&gt; node: &lt;code&gt;git clone https://github.com/devops-beyond-limits/nk-gateway-service.git&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate into the downloaded repository. A Dockerfile containing all the build steps exists. Modify the &lt;strong&gt;BACKEND_HOST&lt;/strong&gt; environment variable to reflect the correct service name of the backend service (&lt;strong&gt;nk-backend&lt;/strong&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build the image locally: &lt;code&gt;docker build -t gateway-service:latest -f Dockerfile .&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EMIAJgmZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ntg0mlpyf3xq6qdyiv02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EMIAJgmZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ntg0mlpyf3xq6qdyiv02.png" alt="Local Image - Gateway Service" width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows the creation of the image locally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Push the image to the ECR&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To push the image to the ECR, perform the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Login to the ECR: &lt;code&gt;aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 444208416329.dkr.ecr.eu-central-1.amazonaws.com&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tag the local image to reflect the repository name in the ECR: &lt;code&gt;docker tag gateway-service:latest 444208416329.dkr.ecr.eu-central-1.amazonaws.com/nk-gateway-service:latest&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Push the Docker image to the ECR: &lt;code&gt;docker push 444208416329.dkr.ecr.eu-central-1.amazonaws.com/nk-gateway-service:latest&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;(Make sure to modify the region and account ID in the commands above)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PMohS_V---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gycw8glkek6e4nza0qaf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PMohS_V---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gycw8glkek6e4nza0qaf.png" alt="Image Push Steps - Gateway" width="800" height="305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_lW0WUEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jiy9elwidqsk4tx3en6s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_lW0WUEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jiy9elwidqsk4tx3en6s.png" alt="ECR Repository - Gateway" width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create the Gateway Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create the gateway service using this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker service create -d --name nk-gateway --network nk-microservices --replicas 2 --constraint node.labels.workload==service -p 80:1337 --with-registry-auth 444208416329.dkr.ecr.eu-central-1.amazonaws.com/nk-gateway-service:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command above instructs Docker to create 2 replicas of the gateway service image, which is located in the private AWS ECR registry; the &lt;strong&gt;--with-registry-auth&lt;/strong&gt; flag is therefore required, so that the worker machines can authenticate to the ECR. The service is attached to the &lt;strong&gt;nk-microservices&lt;/strong&gt; network, publishes container port 1337 on port 80 of every Swarm node (through the routing mesh), and its replicas are placed on the &lt;strong&gt;service&lt;/strong&gt; machines only.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zYS1aodY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ywuzcd84l24qbzfs42a3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zYS1aodY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ywuzcd84l24qbzfs42a3.png" alt="Gateway Service Deployment" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The picture above shows the correct placement of the containers on the machines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Validation Tests&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ensure that the gateway service is up and running, by checking the service logs: &lt;code&gt;docker service logs nk-gateway&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensure that the gateway service can be reached from the internet, through any of the four VMs. Perform a health check using each public IP.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
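&lt;p&gt;The second check can be scripted from any terminal. A minimal sketch using placeholder addresses (replace them with the public IPs of your four VMs); thanks to the Swarm routing mesh, every node should answer on port 80:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for ip in 203.0.113.10 203.0.113.11 203.0.113.12 203.0.113.13; do
  curl -s -o /dev/null -w "$ip returned HTTP %{http_code}\n" "http://$ip/health"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;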

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wLDRirK4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p8tjeftpt2pjotzy3ow2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wLDRirK4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p8tjeftpt2pjotzy3ow2.png" alt="Internet Reachability" width="716" height="323"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The remaining tests can be validated by performing different API tests against the gateway, and monitoring the behavior of the system. To do so, download an API client (e.g., &lt;a href="https://www.postman.com/"&gt;Postman&lt;/a&gt;), and import the nk-microservices &lt;a href="https://github.com/devops-beyond-limits/nk-microservices-deployment/blob/main/nk-gateway-service.postman_collection.json"&gt;Postman collection&lt;/a&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FxW1kpd0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i706vos9ewn9z0rito7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FxW1kpd0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i706vos9ewn9z0rito7f.png" alt="Postman Setup" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first API to test is the &lt;strong&gt;Create Person&lt;/strong&gt; API, which attempts to create a person in the database. Make sure to modify the GATEWAY_HOST and GATEWAY_PORT variables with the correct values (the public IP of any node, and port 80).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A correct setup should allow the API to traverse the &lt;strong&gt;gateway&lt;/strong&gt; and &lt;strong&gt;backend&lt;/strong&gt; services, and create a person record in the database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c6B0l3lB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkse1urws5ic5wapbk19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c6B0l3lB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkse1urws5ic5wapbk19.png" alt="Create Person API" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image above shows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The request/response of the API in Postman.&lt;/li&gt;
&lt;li&gt;The logs in the &lt;strong&gt;gateway&lt;/strong&gt; service.&lt;/li&gt;
&lt;li&gt;The logs in the &lt;strong&gt;backend&lt;/strong&gt; service.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;The second API to test is the &lt;strong&gt;Get All Persons&lt;/strong&gt; API. This API attempts to fetch all the person records, by performing the following logic:&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Attempt to find the records in the Redis database.&lt;/li&gt;
&lt;li&gt;If the records are found in Redis, return them to the client.&lt;/li&gt;
&lt;li&gt;Else, dispatch the request to the &lt;strong&gt;backend&lt;/strong&gt; service. The backend service will fetch the records, and return them to the &lt;strong&gt;gateway&lt;/strong&gt; service. The &lt;strong&gt;gateway&lt;/strong&gt; service will save the records in Redis, and return the result to the client.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Since this is the first time the API is requested, steps (1) and (3) will be performed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--03_y_4f8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0knddn6rhghzqjshzgsc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--03_y_4f8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0knddn6rhghzqjshzgsc.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hit the API again and monitor the service logs. This time, the request will reach the &lt;strong&gt;gateway&lt;/strong&gt; service only, and fetch the records from Redis.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lGqu_DlC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mflxiqvcnaqy2hveugfu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lGqu_DlC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mflxiqvcnaqy2hveugfu.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Load Balancer
&lt;/h4&gt;

&lt;p&gt;The final step is to create a load balancer, to distribute the load across all VMs. To do so, navigate to the &lt;strong&gt;EC2 Service&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Load Balancers&lt;/strong&gt;, and create a load balancer with the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Load balancer type&lt;/strong&gt;: Application Load Balancer&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load balancer name&lt;/strong&gt;: docker-swarm-demo&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scheme&lt;/strong&gt;: Internet-facing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IP Address Type&lt;/strong&gt;: IPv4&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VPC&lt;/strong&gt;: Default&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mappings&lt;/strong&gt;: Select all Availability Zones (AZs) and subnets per AZ.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Group&lt;/strong&gt;: docker-swarm-demo&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Listener&lt;/strong&gt;: Port 80&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Default Action&lt;/strong&gt;: Create a new Target Group with the following parameters:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Choose a target type&lt;/strong&gt;: Instances&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Target group name&lt;/strong&gt;: docker-swarm-demo&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Protocol&lt;/strong&gt;: HTTP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Port&lt;/strong&gt;: 80&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VPC&lt;/strong&gt;: default&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Health check protocol&lt;/strong&gt;: HTTP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Health check path&lt;/strong&gt;: /health (The health check API of the gateway service)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Targets&lt;/strong&gt;: register all the Swarm nodes (they appear as Pending until the health checks pass)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create the target group. Navigate back to the Load Balancer page, and choose the created target group as the value of the &lt;strong&gt;Default action&lt;/strong&gt; option. Create the load balancer.&lt;/p&gt;

&lt;p&gt;After a few minutes, the load balancer will become active, and the targets will turn healthy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KjvOOD2d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mv975s5an3xk8x5vhncy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KjvOOD2d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mv975s5an3xk8x5vhncy.png" alt="AWS ALB" width="800" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZnjrDkI---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z0fcxyhlnhe9d9uj4srz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZnjrDkI---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z0fcxyhlnhe9d9uj4srz.png" alt="AWS Target Groups" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Replace the VM IP with the load balancer DNS name in Postman, and perform the same API requests. Similar success responses should be returned, indicating the success of the setup.&lt;/p&gt;
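&lt;p&gt;The same health check can be performed from a terminal; the DNS name below is a placeholder to replace with the value shown in the load balancer console:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ALB_DNS is hypothetical; copy your load balancer's DNS name instead
ALB_DNS=docker-swarm-demo-123456789.eu-central-1.elb.amazonaws.com
curl -s "http://$ALB_DNS/health"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;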

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k6T5BYVA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/23gluuaoklxcrbhhxbks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k6T5BYVA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/23gluuaoklxcrbhhxbks.png" alt="Image description" width="800" height="689"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>microservices</category>
      <category>containers</category>
    </item>
    <item>
      <title>Low Cost "Overkill" AWS Infrastructure for a Newborn Startup</title>
      <dc:creator>Nicolas El Khoury</dc:creator>
      <pubDate>Tue, 28 Mar 2023 17:31:39 +0000</pubDate>
      <link>https://forem.com/aws-builders/low-cost-overkill-aws-infrastructure-for-a-newborn-startup-aaf</link>
      <guid>https://forem.com/aws-builders/low-cost-overkill-aws-infrastructure-for-a-newborn-startup-aaf</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;On a cold and dark evening in December 2022, a good friend of mine calls me and says: "Nicolas, I am creating a product that is going to scale massively and revolutionize the market, and I need your help". Now, if I had a dollar for every time I heard this sentence, I would be financing trips to Mars by now. &lt;/p&gt;

&lt;p&gt;Nevertheless, I met with the friend and his technical lead. After long hours of discussions (and daydreaming), the business model was summarized as follows: &lt;em&gt;&lt;strong&gt;"The product is a maintenance management platform designed to help companies and vehicle owners to efficiently manage their vehicles. The product aims to automate the entire maintenance procedure and provide preventive and predictive solutions by connecting vehicles to IoT devices, which allows the monitoring of maintenance parameters in real-time."&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I agreed to help them for many reasons, some of which include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;They actually know what they are doing.&lt;/li&gt;
&lt;li&gt;The technical lead is absolutely intelligent.&lt;/li&gt;
&lt;li&gt;I trust they will make it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;My job, evidently, was to architect and implement the infrastructure, deployment, and maintenance of the application. &lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements and Challenges
&lt;/h2&gt;

&lt;p&gt;At the time of discussion, they had just finished an MVP that was deployed poorly on AWS. In fact, both my friend and the technical lead have very minimal experience in everything related to infrastructure and DevOps. In addition, they had little money to pay my original fees and therefore did not want to be a big burden on me. So at first, they suggested that I implement a very basic infrastructure and deployment strategy that they could use temporarily until they raised more money.&lt;/p&gt;

&lt;p&gt;The first thought I had was: "Those noobs don't even know what they are talking about". From my experience in consulting with more than two dozen companies (from small startups to extremely large multinationals), once you start working with a bad infrastructure, chances are you will keep building on top of it until working on it becomes a living hell, and then possibly run out of business due to bad tech. I was definitely not going to be part of this scenario.&lt;/p&gt;

&lt;p&gt;Therefore, my answer was: "No, I will do it properly". So after countless back-and-forth discussions, below is the summary of the challenges to think about while architecting the solution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There must be at least two environments: Develop and Production.&lt;/li&gt;
&lt;li&gt;The developers must be able to operate the infrastructure without having to become DevOps Engineers.&lt;/li&gt;
&lt;li&gt;Proper observability must be employed to quickly identify and solve issues when they happen (Because they will happen).&lt;/li&gt;
&lt;li&gt;The cost must be as optimized as possible. &lt;/li&gt;
&lt;li&gt;And finally, I set a requirement, for my sake primarily: The solution must be robust enough to minimize the number of headaches I have to suffer from in the future.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Understanding the Application
&lt;/h2&gt;

&lt;p&gt;Before actually coming up with the solution, a good approach would be to first understand the different components of the application. Therefore, as a first step, the technical lead was kind enough to explain to me the different components of the application, and how to run it locally.&lt;/p&gt;

&lt;p&gt;For simplicity purposes, both the backend (NodeJS) and frontend (ReactJS) applications are designed as a mono repository, managed through &lt;a href="https://nx.dev/" rel="noopener noreferrer"&gt;NX&lt;/a&gt;. The application stores its data in a &lt;a href="https://www.postgresql.org/" rel="noopener noreferrer"&gt;PostgreSQL&lt;/a&gt; database. Surprisingly, the application was very well documented, a phenomenon I have rarely seen in my life. Therefore, understanding the behavior and the build steps of the application wasn't so difficult.&lt;/p&gt;

&lt;p&gt;In about three hours, I was able to containerize, deploy, and run all the containerized application components on a single Linux machine. Amazing! First step complete.&lt;/p&gt;
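&lt;p&gt;As a rough illustration of that first step, a minimal Docker Compose file for running such a stack locally might look like the sketch below. Note that the service names, build paths, ports, and credentials are hypothetical placeholders, not the project's actual values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3.8"
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example # Hypothetical credentials, for local use only
    volumes:
      - pgdata:/var/lib/postgresql/data
  backend:
    build: ./apps/backend # Hypothetical path inside the NX mono repository
    environment:
      DATABASE_URL: postgres://postgres:example@postgres:5432/postgres
    depends_on:
      - postgres
    ports:
      - "3000:3000" # Hypothetical backend port
  frontend:
    build: ./apps/frontend # Hypothetical path inside the NX mono repository
    ports:
      - "8080:80"
volumes:
  pgdata:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With a file like this, a single &lt;code&gt;docker compose up&lt;/code&gt; brings up the database, backend, and frontend together on one machine.&lt;/p&gt;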

&lt;h2&gt;
  
  
  Infrastructure Requirements
&lt;/h2&gt;

&lt;p&gt;Now that the application is containerized, and all the steps documented, it is time to architect the infrastructure. Whenever I am architecting a solution, regardless of its complexity and cost, I always make sure to achieve the following characteristics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;: One of the most integral parts of any application is security. Robust software guards against cyber attacks such as SQL injection, password attacks, and cross-site scripting. Integrating security mechanisms into the code is mandatory to ensure the safety of the system in general, and of the data layer in particular. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Availability&lt;/strong&gt;: Refers to the probability that a system is running as required, when required, during the time it is supposed to be running. A good practice to achieve availability would be to replicate the system and application as much as possible (e.g., containers, machines, databases, etc).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: The on-demand provisioning of resources offered by the cloud allows its users to quickly scale resources in and out based on the varying load. This is absolutely important, especially for optimizing cost while serving the traffic consistently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;System Observability&lt;/strong&gt;: One of the most important mechanisms required to achieve a robust application is system visibility:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Logging&lt;/strong&gt;: Aggregating the application logs and displaying them in an organized fashion allows the developers to test, debug, and enhance the application. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tracing&lt;/strong&gt;: Tracing requests is another important practice, making it possible to follow every request flowing in and out of the system and to rapidly find and fix errors and bottlenecks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring&lt;/strong&gt;: It is essential to have accurate and reliable monitoring mechanisms in every aspect of the system. Key metrics that must be monitored include but are not limited to CPU utilization, Memory Utilization, Disk Read/Write Operations, Disk space, etc.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Infrastructure Solution
&lt;/h2&gt;

&lt;p&gt;In light of all the above, and after twisting my imagination for a little bit, I came up with the architecture depicted in the diagram below (Does not display all the components used):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01o898qnump3s4it7ktf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01o898qnump3s4it7ktf.png" alt="Infrastructure Architecture Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Networking
&lt;/h3&gt;

&lt;p&gt;The infrastructure is created in the region of Ireland &lt;strong&gt;(eu-west-1)&lt;/strong&gt;. The following network components are created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Virtual Private Cloud (VPC):&lt;/strong&gt; To isolate the resources in a private network.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internet Gateway&lt;/strong&gt;: To provide internet connectivity to the resources in the public subnets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NAT Gateway&lt;/strong&gt;: To provide outbound connectivity to private resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Public Subnets:&lt;/strong&gt; In each availability zone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private Subnets:&lt;/strong&gt; In each availability zone.&lt;/li&gt;
&lt;/ul&gt;
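&lt;p&gt;As a rough sketch, the core of this network layout could be expressed in a Cloudformation-style fragment like the one below. The CIDR ranges and logical names are purely illustrative, and this is not necessarily how the actual infrastructure was provisioned:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16 # Illustrative CIDR range
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  AttachGateway: # Attaches the Internet Gateway to the VPC
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  PublicSubnetA: # One public subnet per Availability Zone
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: eu-west-1a
      MapPublicIpOnLaunch: true
  PrivateSubnetA: # One private subnet per Availability Zone
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.10.0/24
      AvailabilityZone: eu-west-1a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;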

&lt;h3&gt;
  
  
  VPN
&lt;/h3&gt;

&lt;p&gt;A VPN instance with a free license is deployed to provide secure connectivity for the developers and system administrators to the private resources in the VPC.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS EKS
&lt;/h3&gt;

&lt;p&gt;An AWS EKS cluster is created to orchestrate the backend service of each environment. The cluster is composed of one node pool of two nodes, each in a separate Availability Zone. &lt;/p&gt;

&lt;h3&gt;
  
  
  Application Load Balancer
&lt;/h3&gt;

&lt;p&gt;An Application Load Balancer (Layer 7) is created to expose the endpoints and provide the routing rules required from the internet into the application. The load balancer is configured to serve traffic on ports 80 and 443.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS RDS PostgreSQL
&lt;/h3&gt;

&lt;p&gt;An AWS RDS PostgreSQL database is created to hold and persist the application’s data. Both the develop and production environments are hosted on the same instance but are separated logically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clients VM
&lt;/h3&gt;

&lt;p&gt;A private virtual machine is created, on which client applications (e.g., kubectl, a PostgreSQL client, etc.) are installed to interact with different parts of the infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS ECR
&lt;/h3&gt;

&lt;p&gt;Two ECR repositories are created for the backend service, one for each environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  S3 Bucket
&lt;/h3&gt;

&lt;p&gt;An AWS S3 bucket is created to host the frontend application for each environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Cloudfront
&lt;/h3&gt;

&lt;p&gt;An AWS Cloudfront distribution is created to cache the frontend application hosted on AWS S3 of each environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  ACM
&lt;/h3&gt;

&lt;p&gt;ACM Public certificates are required for the domains. A public certificate must be created in the region of &lt;strong&gt;eu-west-1&lt;/strong&gt; to be used by the load balancer, and another one in the region of &lt;strong&gt;us-east-1&lt;/strong&gt;, to be used by Cloudfront.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloudwatch
&lt;/h3&gt;

&lt;p&gt;The infrastructure metrics and application logs are configured to be displayed on &lt;strong&gt;Cloudwatch&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Deployment
&lt;/h2&gt;

&lt;p&gt;Now that the infrastructure was successfully architected and created, I proceeded to deploy the containerized backend services and ensured their proper connectivity to the databases. Afterward, the frontend application was built and deployed on S3. &lt;/p&gt;

&lt;h2&gt;
  
  
  Continuous Delivery Pipelines
&lt;/h2&gt;

&lt;p&gt;The last step before announcing the good news to the team was to automate the build and delivery steps of all the services. Evidently, none of the developers should have to perform the tedious, time-wasting tasks of building and deploying the application every time there is a change. As a matter of fact, knowing the pace at which the developers work, I expect they push code to develop 276 million times per day.&lt;/p&gt;

&lt;p&gt;Therefore, I used &lt;a href="https://aws.amazon.com/codebuild/" rel="noopener noreferrer"&gt;AWS Codebuild&lt;/a&gt; and &lt;a href="https://aws.amazon.com/codepipeline/" rel="noopener noreferrer"&gt;AWS CodePipeline&lt;/a&gt; to automate the steps of building and deploying the services. The diagram below depicts all the steps required to continuously deliver the frontend and backend applications:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk132uaa8kierevjg4m0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk132uaa8kierevjg4m0q.png" alt="Continuous Delivery Pipelines"&gt;&lt;/a&gt;&lt;/p&gt;
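&lt;p&gt;For illustration, the frontend delivery stage could be sketched in a Codebuild buildspec similar to the one below. The bucket name, the distribution ID variable, and the build commands are hypothetical and would differ in the real pipeline:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 18
  build:
    commands:
      - npm ci
      - npx nx build frontend # Hypothetical NX build target
  post_build:
    commands:
      # Sync the compiled assets to the environment's S3 bucket
      - aws s3 sync dist/apps/frontend s3://my-frontend-bucket --delete
      # Invalidate the Cloudfront cache so users receive the new version
      - aws cloudfront create-invalidation --distribution-id $CF_DISTRIBUTION_ID --paths "/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;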

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Once everything was done, I met with my friend and the technical lead for a handover. They were very pleased with the outcome, stating that the infrastructure was amazing, but overkill and much more than they needed right now. &lt;/p&gt;

&lt;p&gt;But in reality, it is not overkill. As a matter of fact, the product and the team are growing very rapidly. This solution is a skeleton that can be quickly and easily modified and scaled as needed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backend services replicas can be easily modified.&lt;/li&gt;
&lt;li&gt;The EKS nodes can be easily scaled vertically and horizontally.&lt;/li&gt;
&lt;li&gt;The frontend application is on S3, which is automatically scalable.&lt;/li&gt;
&lt;li&gt;The database can be easily scaled vertically and horizontally.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After delivering the solution in mid December 2022:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The developers are happy because of the robustness and ease of use of the infrastructure.&lt;/li&gt;
&lt;li&gt;My friend is happy because his application is live, and is costing him less than $500 per month.&lt;/li&gt;
&lt;li&gt;I am happy because they never called me with a complaint.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everybody is happy :)))) The end!! &lt;/p&gt;

</description>
      <category>aws</category>
      <category>microservices</category>
      <category>kubernetes</category>
      <category>containers</category>
    </item>
    <item>
      <title>AWS Certified DevOps Engineer Professional: Content Summary and Important Notes</title>
      <dc:creator>Nicolas El Khoury</dc:creator>
      <pubDate>Fri, 24 Mar 2023 07:33:01 +0000</pubDate>
      <link>https://forem.com/aws-builders/aws-certified-devops-engineer-professional-content-summary-and-important-notes-3mpn</link>
      <guid>https://forem.com/aws-builders/aws-certified-devops-engineer-professional-content-summary-and-important-notes-3mpn</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://aws.amazon.com/certification/certified-devops-engineer-professional/" rel="noopener noreferrer"&gt;AWS DevOps Engineer Professional Certification&lt;/a&gt; is a certificate offered by &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;Amazon Web Services (AWS)&lt;/a&gt;, designed to test your proficiency in deploying, managing, and maintaining distributed applications on AWS using DevOps principles and practices.&lt;/p&gt;

&lt;p&gt;In addition, it is a great way to verify your knowledge of industry best practices and enhance your profile and competencies in a competitive job market.&lt;/p&gt;

&lt;p&gt;To increase your chances of passing the AWS DevOps exam, it is advised that you have a minimum of two years of experience in designing, setting up, managing, and running AWS environments. In addition, you should have hands-on experience with AWS services such as EC2, S3, RDS, Cloudwatch, CodePipeline, etc, and a good understanding of DevOps principles and practices.&lt;/p&gt;

&lt;p&gt;Professional-level certificates are quite different from Associate-level ones. As a matter of fact, they are more difficult, require more professional experience in AWS, more maturity, and a higher level of thought process. In brief, obtaining the AWS DevOps Professional certificate is not a walk in the park. &lt;/p&gt;

&lt;p&gt;In addition to that, chances are that you are employed, with a busy life. Therefore, preparing for the certificate is not going to be your full-time job.&lt;/p&gt;

&lt;p&gt;In light of all the above, I am writing this article to serve the purposes below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Summarize the requirements needed to pass the exam.&lt;/li&gt;
&lt;li&gt;Summarize &lt;a href="https://www.udemy.com/course/aws-certified-devops-engineer-professional-hands-on/" rel="noopener noreferrer"&gt;Stephane's&lt;/a&gt; amazing Udemy course content.&lt;/li&gt;
&lt;li&gt;List additional preparation steps.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This article certainly does not constitute a single source of truth that will guide you to passing the exam. Nonetheless, it aims to be a summary and a memory refresher.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Certification Requirements
&lt;/h2&gt;

&lt;p&gt;The AWS DevOps Engineer Professional certification exam evaluates your skills and knowledge in essential areas related to deploying, managing, and operating distributed application systems on the AWS platform by implementing DevOps principles and practices. The exam will test your understanding of several key topics:&lt;/p&gt;

&lt;h3&gt;
  
  
  SDLC Automation
&lt;/h3&gt;

&lt;p&gt;Continuous delivery and deployment is a process that automates the building, testing, and deployment of software in a smooth and continuous way. This leads to faster delivery of updates and better customer satisfaction. The exam may test your knowledge of AWS tools such as AWS CodePipeline, AWS CodeDeploy, and AWS Elastic Beanstalk.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuration Management and IaC
&lt;/h3&gt;

&lt;p&gt;Infrastructure as code (IaC) is a practice that involves automating the deployment and management of infrastructure using code rather than manual processes. This approach enables teams to manage infrastructure more efficiently, reduce errors, and increase agility. The AWS DevOps Engineer Professional certification exam may test your understanding of IaC tools and services such as AWS CloudFormation&lt;/p&gt;

&lt;h3&gt;
  
  
  Resilient Cloud Solutions
&lt;/h3&gt;

&lt;p&gt;The Resilient Cloud Solutions domain assesses your ability to build and manage systems that can cope with potential failures or disasters, including creating backup and restore strategies, designing for disaster recovery, and managing scaling and elasticity on AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitoring and Logging
&lt;/h3&gt;

&lt;p&gt;The exam may assess your skills in designing and implementing effective logging and monitoring systems for AWS services and applications. You may be tested on your ability to identify and troubleshoot issues in these systems and set up alarms and notifications to ensure the efficient operation of the infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Incident and Event Response
&lt;/h3&gt;

&lt;p&gt;The exam evaluates your knowledge of incident management and response processes, such as identifying, categorizing, and resolving incidents and implementing and testing disaster recovery plans. You may also be tested on your ability to effectively communicate and collaborate with stakeholders during such incidents.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security and Compliance
&lt;/h3&gt;

&lt;p&gt;This domain evaluates your understanding of security and compliance best practices for AWS services and applications. This includes topics revolving around implementing security controls, managing access and authentication, and ensuring compliance with regulatory standards. You may also be tested on your knowledge of AWS security services such as AWS IAM and AWS KMS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Course Summary and Important Notes
&lt;/h2&gt;

&lt;p&gt;An important step in my preparation for the exam is &lt;a href="https://www.udemy.com/course/aws-certified-devops-engineer-professional-hands-on/" rel="noopener noreferrer"&gt;Stephane Maarek's Udemy course&lt;/a&gt;. The course is highly interactive, well designed, and contains important information, explained in a simple and clear way. &lt;/p&gt;

&lt;p&gt;Nonetheless, the course is (veeeeeery) long. Remembering, therefore, all the important information explained in the course may be quite difficult. &lt;/p&gt;

&lt;p&gt;In light of the above, the next section of this article lists and explains most of these important concepts to remember. This article constitutes, in no way, a replacement for the course. Rather, it can be used to refresh your memory, only after having carefully studied the Udemy course. &lt;/p&gt;

&lt;h3&gt;
  
  
  Important Notes
&lt;/h3&gt;

&lt;h4&gt;
  
  
  CodeCommit
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Using IAM, you can prevent certain users from pushing to master by creating an explicit &lt;strong&gt;DENY&lt;/strong&gt; policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can set up notification rules and triggers to either one of &lt;strong&gt;SNS&lt;/strong&gt; or &lt;strong&gt;lambda&lt;/strong&gt;. Examples include the creation or deletion of a repository, branch, Pull Request, etc. You can also configure such events using &lt;strong&gt;Cloudwatch Events&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
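&lt;p&gt;For example, such a deny policy could look like the following sketch. The repository ARN and account ID are placeholders, and the action list is not exhaustive:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "codecommit:GitPush",
        "codecommit:PutFile",
        "codecommit:Merge*"
      ],
      "Resource": "arn:aws:codecommit:eu-west-1:111122223333:MyRepo",
      "Condition": {
        "StringEqualsIfExists": {
          "codecommit:References": ["refs/heads/master"]
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;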

&lt;h3&gt;
  
  
  CodeBuild
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You can pass environment variables to Codebuild in many ways:

&lt;ol&gt;
&lt;li&gt;In the &lt;strong&gt;buildspec&lt;/strong&gt; file as key-value pairs.&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;Codebuild&lt;/strong&gt; using plain-text key-value pairs.&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;Codebuild&lt;/strong&gt; or the &lt;strong&gt;buildspec&lt;/strong&gt; file as a secret, using the &lt;strong&gt;Systems Manager Parameter Store&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;
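&lt;p&gt;The options above can be sketched in a buildspec file as follows. The variable names and the Parameter Store path are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2
env:
  variables: # Plain-text key-value pairs defined in the buildspec file
    NODE_ENV: "production"
  parameter-store: # Secrets fetched from the Systems Manager Parameter Store
    DB_PASSWORD: "/myapp/db/password"
phases:
  build:
    commands:
      - echo "Building in $NODE_ENV mode"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;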

&lt;h3&gt;
  
  
  CodeDeploy
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;There are multiple deployment strategies for EC2 instances using &lt;strong&gt;CodeDeploy&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;In-place deployment&lt;/strong&gt;: Deploy on the existing EC2 machines:&lt;br&gt;
a. &lt;strong&gt;AllAtOnce&lt;/strong&gt;: Deploys on all the existing EC2 machines at the same time.&lt;br&gt;
b. &lt;strong&gt;HalfAtOnce&lt;/strong&gt;: Deploys on half of the existing EC2 machines per batch.&lt;br&gt;
c. &lt;strong&gt;OneAtOnce&lt;/strong&gt;: Deploys on one EC2 machine at a time.&lt;br&gt;
d. &lt;strong&gt;Custom Rules&lt;/strong&gt;: You can specify your own custom rule.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Blue/Green deployment&lt;/strong&gt;: Provisions new instances on which to deploy the new application version. This type of deployment requires a new load balancer. There are two ways to perform it:&lt;br&gt;
 a. Manually provisioning the new instances.&lt;br&gt;
 b. Automatically copying the Auto Scaling group.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;appspec.yml&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;strong&gt;appspec.yml&lt;/strong&gt; file contains all the necessary information to be processed by &lt;strong&gt;CodeDeploy&lt;/strong&gt; to perform a certain deployment. Below is a sample &lt;strong&gt;appspec.yml&lt;/strong&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.0
os: linux  # The Operating system
files:
  source: /index.html # The location of the file(s) to be copied
  destination: /var/www/html # The location in which the file(s) must be copied to on the servers
hooks: # The list of hooks available for CodeDeploy
  ApplicationStop:
    location: scripts/stop_servers.sh # The location of the script to run when this hook is triggered
    timeout: 300 # The timeout for this hook (in seconds)
    runas: root # The user with which the script will be executed 
  BeforeInstall:
  AfterInstall:
  ApplicationStart:
  ValidateService:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Unlike &lt;strong&gt;Codebuild&lt;/strong&gt;, in order to collect &lt;strong&gt;Codedeploy&lt;/strong&gt; logs and display them in &lt;strong&gt;Cloudwatch&lt;/strong&gt;, the &lt;strong&gt;Cloudwatch logs agent&lt;/strong&gt; must be installed on the machines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Rollbacks&lt;/strong&gt; can be done in two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Manually&lt;/li&gt;
&lt;li&gt;Automatic: When a deployment fails, or when a threshold is crossed, using specifically set alarms.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Registering an On-premise instance to &lt;strong&gt;CodeDeploy&lt;/strong&gt; can be done in multiple ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;For a small number of instances: create an IAM user, install the CodeDeploy agent, register the instance using the &lt;code&gt;register-on-premise-instance&lt;/code&gt; API&lt;/li&gt;
&lt;li&gt;For a large number of instances: Use an IAM role and AWS STS to generate credentials&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;CodeDeploy offers different deployment strategies for Lambda functions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Canary&lt;/strong&gt;: Traffic shifts in two increments. For example, pass 15% of the traffic to the newly deployed version for the first 15 minutes post-deployment, then switch all the traffic to it afterward.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linear&lt;/strong&gt;: Traffic shifts in equal increments. For example, add 10% of traffic to the newly deployed version every 5 minutes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AllAtOnce&lt;/strong&gt;: Move all the traffic to the new version at once.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  CodePipeline
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;There are two ways CodePipeline can detect source code changes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Cloudwatch Events&lt;/strong&gt;: A change triggers an AWS Cloudwatch Event. This is the preferred way.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CodePipeline&lt;/strong&gt;: Periodically check for changes.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Artifacts&lt;/strong&gt;: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Each stage uploads its artifacts to &lt;strong&gt;S3&lt;/strong&gt; in order to be used by the later stage.&lt;/li&gt;
&lt;li&gt;We can use the same S3 bucket for multiple pipelines.&lt;/li&gt;
&lt;li&gt;Objects can be encrypted using AWS KMS or Customer Managed Keys.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cloudformation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;!Ref&lt;/code&gt; can be used to reference parameters or other resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pseudo parameters are variables offered by AWS, for example &lt;code&gt;ACCOUNT_ID&lt;/code&gt;, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mappings are a set of fixed variables&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Mappings
  RegionMap:
    us-east-1:
      "32": "id-1"
      "64": "id-2"
    us-west-1:
      "32": "id-3"
      "64": "id-4"
Resources:
  MyEC2: 
    Type: "AWS::EC2::Instance"
    ImageId: !FindinMap [ RegionMap, !Ref "AWS::Region", 32 ] # Returns the ID based on the region in which the script is executed. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Outputs can be exported and used in other stacks&lt;/li&gt;
&lt;li&gt;You cannot delete stacks whose outputs are referenced by other stacks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;!ImportValue&lt;/strong&gt; is used to import an output&lt;/li&gt;
&lt;li&gt;Conditions: &lt;strong&gt;and&lt;/strong&gt;, &lt;strong&gt;equals&lt;/strong&gt;, &lt;strong&gt;if&lt;/strong&gt;, &lt;strong&gt;not&lt;/strong&gt;, &lt;strong&gt;or&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;!Ref&lt;/strong&gt;: 

&lt;ol&gt;
&lt;li&gt;When used against a parameter, it returns the value of the parameter.&lt;/li&gt;
&lt;li&gt;When used against a resource, it returns the ID of the resource.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;!GetAtt&lt;/strong&gt;: Returns a specific attribute for any resource. For example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Type: "AWS::EC2::Volume
Properties:
  AvailabilityZones:
    !GetAtt MyEC2.AvailabilityZone # Retrieves the Availability Zone attribute from the **MyEC2** resource
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;!FindInMap&lt;/strong&gt; returns the value of a specific key in a map &lt;code&gt;!FindInMap [ MapName, TopLevelKey, SecondLevelKey ]&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;!ImportValue&lt;/strong&gt; retrieves the value of an output&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;!Join [ ":" , [ a, b, c ] ]&lt;/strong&gt; --&amp;gt; "a:b:c" joins an array of strings using a delimiter &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;!Sub&lt;/strong&gt; substitutes values&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;UserData&lt;/strong&gt; can be passed as a property using the &lt;strong&gt;Base64&lt;/strong&gt; function. The output of the UserData can be found under &lt;code&gt;/var/log/cloud-init-output.log&lt;/code&gt; file in Linux&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;cfn-init&lt;/strong&gt; is similar to &lt;strong&gt;userData&lt;/strong&gt;, with some differences. &lt;strong&gt;cfn-signal&lt;/strong&gt; and &lt;strong&gt;waitConditions&lt;/strong&gt; are used after &lt;strong&gt;cfn-init&lt;/strong&gt; to signal the status of a userData script. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Troubleshooting steps in case the wait condition did not receive the required signal:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ensure the AWS Cloudformation helper scripts are installed.&lt;/li&gt;
&lt;li&gt;Verify that &lt;strong&gt;cfn-signal&lt;/strong&gt; and &lt;strong&gt;cfn-init&lt;/strong&gt; commands ran by checking the &lt;code&gt;/var/log/cloud-init.log&lt;/code&gt; and &lt;code&gt;/var/log/cfn-init.log&lt;/code&gt; files.&lt;/li&gt;
&lt;li&gt;Verify that the instance has internet connections&lt;/li&gt;
&lt;li&gt;Note that such troubleshooting cannot be done if rollbacks are enabled.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By default, Cloudformation deletes all the resources after a failure. You can disable rollbacks, but then you must make sure to delete the resources yourself.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Nested stacks are available. Always modify the parent stack, and the changes will propagate to the nested stacks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Change sets allow you to understand the changes that will be made between two Cloudformation stack versions, but they cannot tell you whether the update will succeed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cloudformation has many deletion policies for resources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Retain: does not delete the resource when deleting the stack&lt;/li&gt;
&lt;li&gt;Snapshot: works on some resources, such as RDS.&lt;/li&gt;
&lt;li&gt;Delete.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can protect a stack from being deleted by enabling the deletion protection policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cloudformation parameters can be fetched from SSM parameters. It is a good practice to store global parameters, such as AMI IDs. Cloudformation is able to detect changes to such parameters and perform necessary updates when needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DependsOn&lt;/strong&gt; is used to create dependencies between resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Lambda functions can be deployed through Cloudformation in many ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write the function inline in the Cloudformation script.&lt;/li&gt;
&lt;li&gt;Upload the code in a zipped file to S3 and reference it in the Cloudformation script.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Lambda changes can be detected by Cloudformation in many ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Upload the code to a new bucket.&lt;/li&gt;
&lt;li&gt;Upload the code to a new key in the same bucket.&lt;/li&gt;
&lt;li&gt;Upload to a new versioned bucket and reference the version in the Cloudformation script.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cloudformation cannot delete a bucket unless it is empty. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use cases for Cloudformation custom resources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Resource still not covered by Cloudformation&lt;/li&gt;
&lt;li&gt;Adding an on-premise instance&lt;/li&gt;
&lt;li&gt;Emptying an S3 bucket before deletion&lt;/li&gt;
&lt;li&gt;Fetch AMI ID&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cloudformation drift detection is a tool that checks whether resources created by Cloudformation have been modified outside of it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Cloudformation Status Codes&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;CREATE_IN_PROGRESS&lt;/li&gt;
&lt;li&gt;CREATE_COMPLETE&lt;/li&gt;
&lt;li&gt;CREATE_FAILED&lt;/li&gt;
&lt;li&gt;DELETE_IN_PROGRESS&lt;/li&gt;
&lt;li&gt;DELETE_COMPLETE&lt;/li&gt;
&lt;li&gt;DELETE_FAILED&lt;/li&gt;
&lt;li&gt;ROLLBACK_COMPLETE&lt;/li&gt;
&lt;li&gt;ROLLBACK_IN_PROGRESS&lt;/li&gt;
&lt;li&gt;ROLLBACK_FAILED&lt;/li&gt;
&lt;li&gt;UPDATE_COMPLETE&lt;/li&gt;
&lt;li&gt;UPDATE_COMPLETE_CLEANUP_IN_PROGRESS: when the update is complete but old resources are still being cleaned up&lt;/li&gt;
&lt;li&gt;UPDATE_ROLLBACK_COMPLETE&lt;/li&gt;
&lt;li&gt;UPDATE_ROLLBACK_IN_PROGRESS&lt;/li&gt;
&lt;li&gt;UPDATE_ROLLBACK_FAILED&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Potential causes for &lt;strong&gt;UPDATE_ROLLBACK_FAILED&lt;/strong&gt;: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Insufficient permissions&lt;/li&gt;
&lt;li&gt;Invalid credentials for Cloudformation&lt;/li&gt;
&lt;li&gt;Limitation error&lt;/li&gt;
&lt;li&gt;Changes done to the resources outside Cloudformation&lt;/li&gt;
&lt;li&gt;Resources not in a stable state yet&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;INSUFFICIENT_CAPABILITIES_EXCEPTION&lt;/strong&gt;: Cloudformation requires the &lt;strong&gt;CAPABILITY_IAM&lt;/strong&gt; permission to create IAM resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A stack policy is a JSON document that specifies allow/deny rules for updates to the resources in the Cloudformation stack.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
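&lt;p&gt;For instance, a stack policy that allows all updates except replacing a production database could be sketched as follows. The logical resource ID is hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": "Update:Replace",
      "Principal": "*",
      "Resource": "LogicalResourceId/ProductionDatabase"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;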

&lt;h3&gt;
  
  
  Elastic Beanstalk
&lt;/h3&gt;

&lt;p&gt;Important CLI commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;eb status&lt;/code&gt;: Displays information about the environment.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;eb logs&lt;/code&gt;: Displays the application logs.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;eb deploy&lt;/code&gt;: Updates the application with a new version.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;eb terminate&lt;/code&gt;: Terminates the environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There are two ways to modify Elastic Beanstalk configuration:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Saved Configurations:&lt;br&gt;
a. &lt;code&gt;eb config save &amp;lt;env name&amp;gt; --cfg &amp;lt;configuration name&amp;gt;&lt;/code&gt; --&amp;gt; saves a configuration of an environment locally.&lt;br&gt;
b. &lt;code&gt;eb setenv KEY=VALUE&lt;/code&gt; --&amp;gt; Creates an environment variable in Elastic Beanstalk.&lt;br&gt;
c. &lt;code&gt;eb config put &amp;lt;configuration file name&amp;gt;&lt;/code&gt; --&amp;gt; Uploads a configuration file to the Elastic Beanstalk saved configurations.&lt;br&gt;
d. &lt;code&gt;eb config &amp;lt;env name&amp;gt; --cfg &amp;lt;config name&amp;gt;&lt;/code&gt; --&amp;gt; Applies a saved configuration to an Elastic Beanstalk environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;YML config files under &lt;strong&gt;.ebextensions&lt;/strong&gt;. After the configuration is added, it can be applied using the &lt;code&gt;eb deploy&lt;/code&gt; command.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Configuration precedence is as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Settings applied directly on an environment&lt;/li&gt;
&lt;li&gt;Saved configurations&lt;/li&gt;
&lt;li&gt;Configuration files (.ebextensions)&lt;/li&gt;
&lt;li&gt;default values&lt;/li&gt;
&lt;/ol&gt;
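&lt;p&gt;The precedence order above amounts to a first-match lookup across layers. A minimal, illustrative Python sketch (the option names are made up, not Elastic Beanstalk APIs):&lt;/p&gt;

```python
# Resolve an option by Elastic Beanstalk's precedence order (highest first):
# direct environment settings > saved configurations > .ebextensions > defaults.
def resolve_option(key, direct=None, saved=None, ebextensions=None, defaults=None):
    for layer in (direct, saved, ebextensions, defaults):
        if layer and key in layer:
            return layer[key]
    return None

value = resolve_option(
    "InstanceType",
    direct={},                                  # nothing set directly on the environment
    saved={"InstanceType": "t3.small"},         # saved configuration wins next
    ebextensions={"InstanceType": "t3.micro"},  # ignored: a higher layer has the key
    defaults={"InstanceType": "t2.micro"},
)
# → "t3.small"
```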

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Using the &lt;strong&gt;.ebextensions&lt;/strong&gt; files, we can upload configuration files with additional resources added to the environment (e.g., RDS, DynamoDB, etc).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A database or resource that must outlive the ElasticBeanstalk environment must be created externally, and referenced in ElasticBeanstalk through environment variables for example.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Commands&lt;/strong&gt; and &lt;strong&gt;Container Commands&lt;/strong&gt; for &lt;strong&gt;.ebextensions&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Commands&lt;/strong&gt;: Execute commands on EC2 machines. The commands run before the application and webserver are set up.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Container commands&lt;/strong&gt;: Execute commands that affect the application. They run after the application and webserver are set up, but before the application is deployed. The &lt;strong&gt;leader_only&lt;/strong&gt; flag runs the command on a single machine only.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Important ElasticBeanstalk features:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When creating a webserver environment, there are two configuration presets:
a. &lt;strong&gt;Low Cost&lt;/strong&gt;: Creates a single instance with an EIP. Good for testing.
b. &lt;strong&gt;High Availability&lt;/strong&gt;: Creates an ELB and an autoscaling group. Good for production&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Application versions are saved under the &lt;strong&gt;Application Versions&lt;/strong&gt; section, limited to 1000 versions. We can create lifecycle policies to manage these versions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Clone Environment&lt;/strong&gt; is a quick way to create a new environment from an existing one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Deployment Modes&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;AllAtOnce&lt;/strong&gt;: Fastest way, but brings all the instances down.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rolling&lt;/strong&gt;: Updates a subset of instances at a time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rolling with Additional Batches&lt;/strong&gt;: Similar to &lt;strong&gt;Rolling&lt;/strong&gt; but creates new instances for each batch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Immutable&lt;/strong&gt;: New instances created in a new Autoscaling Group, deploys the new version, and swaps environments when all is done&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Blue/Green&lt;/strong&gt;: Achieved by having two environments, then either swapping URLs or creating weighted records in Route 53.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Worker Environment&lt;/strong&gt;: is dedicated for long running background jobs (e.g., video processing, sending emails, etc). This environment creates SQS queues by default. Cron jobs can be specified in the &lt;strong&gt;cron.yml&lt;/strong&gt; file.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
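&lt;p&gt;For a worker environment, periodic tasks are declared in &lt;strong&gt;cron.yml&lt;/strong&gt; (the file is conventionally named &lt;code&gt;cron.yaml&lt;/code&gt;); the worker daemon POSTs to the given URL on schedule. The job name and path below are illustrative:&lt;/p&gt;

```yaml
version: 1
cron:
 - name: "nightly-cleanup"   # illustrative job name
   url: "/tasks/cleanup"     # illustrative path; the worker daemon POSTs here
   schedule: "0 3 * * *"     # standard cron syntax: daily at 03:00 UTC
```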

&lt;h3&gt;
  
  
  Lambda
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;We can store environment variables in Lambda in 3 ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Stored as plaintext.&lt;/li&gt;
&lt;li&gt;Stored encrypted. Lambda needs sufficient permissions and a KMS key to encrypt/decrypt the variable.&lt;/li&gt;
&lt;li&gt;Stored as parameters in SSM. Lambda needs sufficient permissions to fetch the secret.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By default, lambda uses the &lt;strong&gt;$LATEST&lt;/strong&gt; version, which is mutable. We can create versions, which are immutable, each with its own ARN. Aliases can be created to point to versions. In this way, we preserve the same alias ARN while changing the underlying lambda version.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Serverless Application Model (SAM) allows us to create, manage, and test lambda functions locally, as well as uploading the functions to AWS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using SAM, we can create CodeDeploy projects to continuously deploy code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Step Functions is a workflow management tool that coordinates work between several lambda functions. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  API Gateway
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Two protocols available: REST and Websocket.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Endpoint type can be:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Regional&lt;/strong&gt;: In one region&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edge Optimized&lt;/strong&gt;: Accessible globally through CloudFront edge locations&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Private&lt;/strong&gt;: Accessed internally within a VPC&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Gateway + Lambda proxy&lt;/strong&gt;: The gateway can point to an alias for a lambda function, which allows us to do canary or blue/green deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Gateway Stages&lt;/strong&gt;: Changes must be deployed in order to take effect. Stages are used to divide between environments (dev, test, prod). Stages can be rolled back, and possess a deployment history.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Stage variables are like environment variables but for the API gateway. Use cases include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configure HTTP endpoints with stages programmatically.&lt;/li&gt;
&lt;li&gt;Pass parameters to lambda through mapping templates.&lt;/li&gt;
&lt;/ol&gt;
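&lt;p&gt;Stage variables are referenced as &lt;code&gt;${stageVariables.name}&lt;/code&gt; in integration settings. This illustrative sketch mimics the substitution (the variable name and URL are made up):&lt;/p&gt;

```python
import re

# Substitute API Gateway-style stage variable references such as
# ${stageVariables.backendHost} into an integration endpoint template.
def apply_stage_variables(template, variables):
    return re.sub(
        r"\$\{stageVariables\.(\w+)\}",
        lambda m: variables[m.group(1)],
        template,
    )

endpoint = apply_stage_variables(
    "http://${stageVariables.backendHost}/v1",   # hypothetical integration URI
    {"backendHost": "dev.example.com"},          # set per stage (dev/test/prod)
)
# → "http://dev.example.com/v1"
```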
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Canary deployment can be achieved in two ways across the API gateway:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;From the canary deployment feature of the API gateway&lt;/li&gt;
&lt;li&gt;Linking the stage to a lambda alias, and performing canary deployment on the lambda functions.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;API gateway has a default limit of 10,000 requests per second across all APIs in an AWS account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can add throttling or usage plans to limit API usage across many levels: lambda, stage, or API gateway levels.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can also front a step function with API gateway. The immediate response is the execution ARN, since the API gateway does not wait for the step function to finish.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ECS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Task Definition&lt;/strong&gt;: JSON document that contains information on how to run a container (e.g., container image, port, memory limit, etc).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tasks need task roles to interact with other AWS resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the host port is specified to be "0", a random port will be assigned. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ECR&lt;/strong&gt; is the AWS container registry. If unable to interact with it, check the IAM permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fargate&lt;/strong&gt; is the serverless ECS service. Using Fargate, we only need to deal with containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You can run Elastic Beanstalk environments in container mode:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Single Container Mode&lt;/strong&gt;: One container per EC2.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi Container Mode&lt;/strong&gt;: Multiple containers per EC2.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ecsInstanceRole&lt;/strong&gt;: role attached to the EC2 instances to pull images and perform other management tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ecsTaskRole&lt;/strong&gt;: role for the containers to interact with other AWS services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fargate does not have ecsInstanceRoles.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For ECS classic, autoscaling policies for the instances and autoscaling policies for the tasks are two different things. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The containers in ECS can be configured to send logs to Cloudwatch from their task definition.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The EC2 instances need the cloudwatch agent to be installed and configured on the VMs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cloudwatch metrics supports metrics for:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;the ECS cluster&lt;/li&gt;
&lt;li&gt;the ECS service (not per container).&lt;/li&gt;
&lt;li&gt;ContainerInsights (per container). This option must be enabled and costs additional money.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CodeDeploy can be used to do Blue/Green deployment on ECS.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  OpsWorks Stacks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It is the AWS managed offering of Chef&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;There are 5 lifecycle events:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Setup&lt;/strong&gt;: After the instance has finished booting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure&lt;/strong&gt;: Instances enter or leave the online state | Add/remove an EIP | Add/remove a load balancer&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt;: Deploys an app&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Undeploy&lt;/strong&gt;: Undeploys an app&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shutdown&lt;/strong&gt;: Runs right before the instance is terminated&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each event can have its own recipe.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Autohealing feature stops and restarts EC2 instances. An instance is considered down if the OpsWorks agent on it cannot reach the service for a period of 5 minutes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cloudtrail
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Logs every API call made to AWS&lt;/li&gt;
&lt;li&gt;Logs can be sent to either S3 or Cloudwatch logs&lt;/li&gt;
&lt;li&gt;Log files on S3 are by default encrypted using SSE-S3&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Log files record what call was made, who made it, against which resource, and what the response was.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We can verify the integrity of Cloudtrail log files using the command: &lt;code&gt;aws cloudtrail validate-logs&lt;/code&gt;. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We can aggregate Cloudtrail trails from multiple accounts: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configure a trail in each account to send logs to a centralized S3 bucket in one account. Modify the S3 bucket permissions to allow objects to be pushed from all these accounts.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Amazon Kinesis
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Amazon Kinesis Limits&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;1 MB/s or 1,000 records/s at write per shard, or else we will receive a "ProvisionedThroughputExceededException"&lt;/li&gt;
&lt;li&gt;2 MB/s at read per shard across all consumers&lt;/li&gt;
&lt;li&gt;Data retention of 1 day by default. Can be extended to 7 days&lt;/li&gt;
&lt;/ol&gt;
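&lt;p&gt;The per-shard limits above translate directly into a sizing calculation. An illustrative sketch (our own helper, not a Kinesis API):&lt;/p&gt;

```python
import math

# Minimum shard count that satisfies the per-shard limits:
# 1 MB/s or 1,000 records/s on write, 2 MB/s on read (shared across consumers).
def min_shards(write_mb_per_s, records_per_s, read_mb_per_s):
    return max(
        math.ceil(write_mb_per_s / 1.0),
        math.ceil(records_per_s / 1000.0),
        math.ceil(read_mb_per_s / 2.0),
        1,  # a stream always needs at least one shard
    )

print(min_shards(write_mb_per_s=3, records_per_s=2500, read_mb_per_s=10))  # → 5
```

&lt;p&gt;Here the read requirement dominates: 10 MB/s at 2 MB/s per shard needs 5 shards, more than either write limit requires.&lt;/p&gt;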
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Producers can be: Kinesis SDK, Cloudwatch logs, 3rd party&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Consumers can be: Kinesis SDK, Firehose, Lambda&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kinesis Data Streams vs FireHose&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Kinesis Data Streams&lt;/strong&gt;: Requires custom code, realtime, users manage scaling using shards, data storage up to 7 days, used with lambda for realtime data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Firehose&lt;/strong&gt;: Fully managed, data transformation with lambda, near realtime (~60 seconds), automated scaling, no data storage&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Serverless realtime analytics can be done by running SQL queries on Kinesis data streams (Kinesis Data Analytics). New streams can be created from these queries.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cloudwatch
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Cloudwatch basic monitoring provides one data point every 5 minutes for EC2. Detailed monitoring can be enabled and provides one data point per minute&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To add custom metrics, use the &lt;code&gt;put-metric-data&lt;/code&gt; API. A metric can be of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Standard resolution: 1 minute granularity&lt;/li&gt;
&lt;li&gt;High resolution: 1 second granularity&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;get-metric-statistics&lt;/code&gt; API can be used to get the data of a metric. We can automate this to export the metrics to S3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cloudwatch alarms watch a single metric per alarm.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Alarm actions can be an SNS notification, an Autoscaling action, or an EC2 action&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Billing alarms can be created in the us-east-1 (N. Virginia) region only.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The unified Cloudwatch agent can be installed to collect logs and metrics from EC2 and on-premise instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can create a metric from filtered logs and then create an alarm out of them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can export log data to S3; the bucket must grant the necessary permissions. This can be automated using cloudwatch events and lambda.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Realtime processing of logs can be done using subscriptions, and having the logs delivered to Kinesis Streams, Data Firehose, or lambda.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;S3 events can send notifications to SNS, SQS, and lambda (object level only).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cloudwatch events has bucket and object level events.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
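&lt;p&gt;A metric filter, mentioned above, conceptually reduces a log stream to a number that an alarm can watch. An illustrative Python sketch (the pattern matching here is a simplification of Cloudwatch filter syntax):&lt;/p&gt;

```python
# A metric filter turns matching log events into a metric value; this
# sketch counts lines containing a term, as a Cloudwatch metric filter
# with the pattern "ERROR" would.
def metric_from_logs(log_lines, pattern="ERROR"):
    return sum(1 for line in log_lines if pattern in line)

logs = [
    "INFO request served",
    "ERROR database timeout",
    "ERROR retry exhausted",
]
print(metric_from_logs(logs))  # → 2
```

&lt;p&gt;An alarm on this metric (e.g., value &amp;gt; 0 over 5 minutes) then reacts to errors appearing in the logs.&lt;/p&gt;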

&lt;h3&gt;
  
  
  AWS X-Ray
&lt;/h3&gt;

&lt;p&gt;An AWS service that provides distributed tracing of requests and service maps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon ElasticSearch (ES)
&lt;/h3&gt;

&lt;p&gt;AWS managed Elasticsearch, Logstash, and Kibana.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Systems Manager (SSM)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Manages EC2 and on-premise instances: applying patches, maintenance, automation, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Either the AMI has the SSM agent installed or we have to install it manually before registering an instance to SSM.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If SSM is not working, it may be a problem with either the agent or the permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The ID of EC2 instances registered with SSM start with "i-", while those of on-premise instances start with "mi-".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;To register an on-premise instance:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download the SSM agent on the instance.&lt;/li&gt;
&lt;li&gt;Create an activation key&lt;/li&gt;
&lt;li&gt;Register the instance using the CLI, activation ID and activation key.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SSM run command is used to configure something on a bunch of machines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SSM Parameter store&lt;/strong&gt;: Stores key value pairs. It is better to store the name of a variable as a path, since we can query one or more parameters using paths.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SSM patch manager&lt;/strong&gt;: Creates patch rules for different operating systems&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SSM Inventory&lt;/strong&gt;: Collects applications running on our instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SSM automations&lt;/strong&gt;: Allows the automation of a lot of steps. For instance: Create a VM, patch it, create a new AMI, delete old VM.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
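&lt;p&gt;The advantage of path-style parameter names in the Parameter Store is that a whole subtree can be fetched at once, as &lt;code&gt;get-parameters-by-path&lt;/code&gt; does. An illustrative in-memory sketch (the parameter names and values are made up):&lt;/p&gt;

```python
# Storing parameter names as paths lets you fetch a whole subtree at once,
# mimicking SSM's get-parameters-by-path behavior.
def get_parameters_by_path(store, path):
    prefix = path.rstrip("/") + "/"
    return {name: value for name, value in store.items() if name.startswith(prefix)}

store = {
    "/myapp/dev/db-url": "dev.db.internal",
    "/myapp/dev/db-password": "s3cret",
    "/myapp/prod/db-url": "prod.db.internal",
}
print(get_parameters_by_path(store, "/myapp/dev"))
# → {'/myapp/dev/db-url': 'dev.db.internal', '/myapp/dev/db-password': 's3cret'}
```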

&lt;h3&gt;
  
  
  AWS Config
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Tracks configuration changes for resources in our account.&lt;/li&gt;
&lt;li&gt;Config rules allow tracking the compliance of specific resources against a rule.&lt;/li&gt;
&lt;li&gt;Multi account and multi region can be aggregated into a single config account.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AWS Service catalog
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create and manage a suite of products.&lt;/li&gt;
&lt;li&gt;Each product is a cloudformation template.&lt;/li&gt;
&lt;li&gt;Each set of products is assigned to a portfolio.&lt;/li&gt;
&lt;li&gt;Each user can be assigned to a portfolio.&lt;/li&gt;
&lt;li&gt;Users can only manage products in their catalogs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AWS Inspector
&lt;/h3&gt;

&lt;p&gt;Continuously scans EC2 and ECR for vulnerabilities&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Service Health dashboard
&lt;/h3&gt;

&lt;p&gt;Displays the health of every AWS service in every region&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Personal Health dashboard
&lt;/h3&gt;

&lt;p&gt;Health of services related to you. Notifications can be set using cloudwatch events.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Trusted Advisor
&lt;/h3&gt;

&lt;p&gt;Provides recommendations related to cost optimization, performance, security, fault tolerance, and service limits. You can refresh the recommendations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Using the refresh button in the console (Once every 5 minutes).&lt;/li&gt;
&lt;li&gt;Using the &lt;code&gt;refresh-trusted-advisor-check&lt;/code&gt; API&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  AWS Guardduty
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;An intelligent threat detection system to protect AWS accounts. No need to install anything.&lt;/li&gt;
&lt;li&gt;Performs checks on: Cloudtrail logs, VPC flow logs, DNS queries. Can be integrated with Lambda and Cloudwatch events.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AWS Secrets Manager
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Similar service to SSM parameter store. Specialized in managing and rotating secrets. The secrets can be integrated with Lambda and managed databases.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AWS Cost Allocation tags
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Can be either AWS generated or user-defined. These tags are used for budget and reports by tags.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Autoscaling Groups - revisited
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Launch Configuration&lt;/strong&gt;: Specifies metadata to be used when creating Autoscaling Groups.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Launch Template&lt;/strong&gt;: Specifies metadata to be used by Autoscaling groups, EC2, and other options. Supports a mix of on-demand and spot instances. Overall, it is a better option than Launch configurations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Autoscaling Group Suspended Processes&lt;/strong&gt;: Processes to suspend (for troubleshooting purposes)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Launch&lt;/strong&gt;: Does not add new instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terminate&lt;/strong&gt;: Does not remove instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Healthchecks&lt;/strong&gt;: No more healthchecks. The states of the machines are no longer changed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;replaceUnhealthy&lt;/strong&gt;: Bad instances are no longer replaced.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AZRebalance&lt;/strong&gt;: No longer rebalances instances across Availability Zones.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alarm Notifications&lt;/strong&gt;: No longer answers to alarms, including scaling alarms.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scheduled Actions&lt;/strong&gt;: Scheduled actions no longer run.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add to Load Balancer&lt;/strong&gt;: Instances that are created are no longer added to the Target Group.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scale-in protection can be enabled on specific instances of the Autoscaling group. A protected instance is never terminated during scale-in.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Autoscaling Groups Termination Policies&lt;/strong&gt;: Determine which instances are terminated first:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Default&lt;/strong&gt;: Select the Availability Zone with the largest number of instances, then within it terminate the instance with the oldest launch configuration. &lt;/li&gt;
&lt;li&gt;OldestInstance&lt;/li&gt;
&lt;li&gt;OldestLaunchConfiguration&lt;/li&gt;
&lt;li&gt;NewestInstance&lt;/li&gt;
&lt;li&gt;NewestLaunchConfiguration&lt;/li&gt;
&lt;li&gt;ClosestToNextInstanceHour&lt;/li&gt;
&lt;li&gt;OldestLaunchTemplate&lt;/li&gt;
&lt;li&gt;AllocationStrategy&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integrating SQS with ASG&lt;/strong&gt;: Autoscaling Groups can be integrated with SQS by specifying the number of SQS messages per instance, and using it as a scaling policy for the ASG. To prevent VMs that are processing messages from being terminated, we can create a script that enables scale-in protection on an instance, and then disables it when the VM is not processing any messages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ASG Deployment Strategies&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;in-place&lt;/strong&gt;: Deployment on the same VM.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rolling Update&lt;/strong&gt;: Replaces instances in batches with instances running the new version.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replace&lt;/strong&gt;: Creates a new autoscaling group.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Blue/Green&lt;/strong&gt;: Creates a new ASG and ALB. We might need to shift traffic using route 53.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;
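&lt;p&gt;The SQS-driven scaling described above boils down to simple arithmetic: the desired capacity follows the queue backlog divided by what one instance can handle, clamped to the group's bounds. An illustrative sketch (our own helper, not an AWS API):&lt;/p&gt;

```python
import math

# Target-tracking on queue backlog: desired capacity follows
# backlog / messages-one-instance-can-handle, clamped to min/max size.
def desired_capacity(backlog, messages_per_instance, min_size, max_size):
    wanted = math.ceil(backlog / messages_per_instance)
    return max(min_size, min(max_size, wanted))

print(desired_capacity(backlog=950, messages_per_instance=100,
                       min_size=1, max_size=20))  # → 10
```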

&lt;h3&gt;
  
  
  DynamoDB
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;When creating a table, it is mandatory to create either a &lt;strong&gt;unique partition key&lt;/strong&gt; or a &lt;strong&gt;composite key&lt;/strong&gt; (&lt;strong&gt;partition key&lt;/strong&gt; and &lt;strong&gt;sort key&lt;/strong&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Local Secondary Indexes&lt;/strong&gt; can be created at table creation only. They are composite keys formed from the same partition key as the table, but a different sort key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A &lt;strong&gt;Global Secondary Index&lt;/strong&gt; has a primary key different from that of the table. It can be created and managed after table creation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DAX clusters are a form of caching for DynamoDB.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DynamoDB Streams are used for realtime operations on the table.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To enable global tables, streams must be enabled, and the table must be empty. Global tables are replica tables in different regions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
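&lt;p&gt;To make the composite-key idea concrete, a table can be modeled as a mapping keyed by (partition key, sort key); a query then returns all items sharing one partition key, ordered by sort key. A simplified in-memory sketch (attribute names and data are made up, not a DynamoDB API):&lt;/p&gt;

```python
# Model a table keyed by (partition key, sort key). A Local Secondary Index
# would keep the same partition key but order items by a different attribute.
orders = {
    ("user-1", "2023-01-05"): {"total": 40, "status": "shipped"},
    ("user-1", "2023-02-11"): {"total": 15, "status": "pending"},
    ("user-2", "2023-01-20"): {"total": 99, "status": "shipped"},
}

def query(table, partition_key):
    """Return items sharing a partition key, ordered by sort key."""
    return [item for (pk, sk), item in sorted(table.items()) if pk == partition_key]

print(len(query(orders, "user-1")))  # → 2
```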

&lt;h3&gt;
  
  
  Disaster Recovery
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Recovery Point Objective (RPO)&lt;/strong&gt;: How much data is lost between a disaster and a successful backup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Recovery Time Objective (RTO)&lt;/strong&gt;: How much time it takes to recover.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Disaster Recovery Strategies&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Backup and Restore&lt;/strong&gt;: High RTO and RPO, but cost is low.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pilot Light&lt;/strong&gt;: Small version of the application is always running in the cloud. Lower RTO and RPO and managed cost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Warm Standby&lt;/strong&gt;: Full system is up and running but at minimum size. More costly, but even lower RTO and RPO.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multisite / Hot Site&lt;/strong&gt;: Full production scale running on AWS and on-premise. Lowest RTO and RPO, at the highest cost. &lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Additional Information
&lt;/h2&gt;

&lt;p&gt;Below is a list of information gathered from different sources online, including, but not limited to, AWS tutorials and posts, AWS Documentation, forums, etc:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS Trusted Advisor is integrated with Cloudwatch Metrics and Events. You can use Cloudwatch to monitor the results generated by Trusted Advisor, create alarms, and react to status changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CodePipeline can be integrated with Cloudformation in order to create continuous delivery pipelines to create and update stacks. Input parameters, parameter overrides, and mappings can be used to ensure a generic template which inputs vary based on the environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The AWS Application Discovery Service helps plan a migration to the AWS Cloud from on-premise servers. Whenever we want to migrate applications from on-premise servers to AWS, it is best to use this service. The discovery service can be installed on the on-premise servers in two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Agentless discovery, which works in VMware environments only, through deploying a connector to the VMware vCenter.&lt;/li&gt;
&lt;li&gt;Agent-based discovery, through deploying the Application Discovery agent on the VM (Windows or Linux). 
The service collects static information, such as CPU, RAM, hostname, IP, etc. Finally, the service integrates with the AWS Migration Hub, which simplifies the migration tracking.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Config allows you to Evaluate AWS resources against desired settings, retrieve (current and historical) configuration of resources in the account, and relationship between resources. AWS Config aggregators allow the collection of compliance data from multiple regions, multiple accounts, or accounts within an organization. Therefore, when you need to retrieve compliance information across regions, accounts or organizations, use AWS Config rules with aggregators.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lambda functions can be used to validate part of the deployment on ECS. In this case, CodeDeploy can be configured to use a load balancer with two target groups, for test and production environments for example. Tests should be performed in either &lt;strong&gt;BeforeAllowTestTraffic&lt;/strong&gt; or &lt;strong&gt;AfterAllowTestTraffic&lt;/strong&gt; hooks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;While Canary Deployment is automatically supported in CodePipeline when deploying to lambda, it cannot be done for applications in autoscaling groups. In order to do Canary deployments in autoscaling groups, we need to have two environments, with the traffic percentage controlled by Route 53 for example.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS proactively monitors popular repository websites for exposed AWS credentials. Once found, AWS generates an &lt;strong&gt;AWS_RISK_CREDENTIALS_EXPOSED&lt;/strong&gt; event in Cloudwatch events, with which an administrator can interact. &lt;strong&gt;aws.health&lt;/strong&gt; can be used as an event source.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cloudtrail can log management events, for example logging in, creating resources, etc, and data events, such as object-level operations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When performing updates in an Autoscaling group, you can do a ReplacingUpdate by setting the &lt;strong&gt;AutoScalingReplacingUpdate&lt;/strong&gt; and &lt;strong&gt;WillReplace&lt;/strong&gt; flag to true, which will create a new autoscaling group, or a rolling update using the &lt;strong&gt;AutoScalingRollingUpdate&lt;/strong&gt; property, which will only create new EC2 machines within the same ASG.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Config is not capable of tracking any changes or maintenance initiated by AWS. AWS Health can do so.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Subscriptions can be used to access a real time feed of logs, and have it sent for other services for processing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon Macie is a security tool that uses ML to protect sensitive data in AWS (S3 in particular)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If some EC2 machines in an ASG are terminating with no clear cause, you can add a lifecycle hook to the ASG to move instances in the &lt;strong&gt;terminating&lt;/strong&gt; state to the &lt;strong&gt;Terminating:Wait&lt;/strong&gt; state, then configure a Systems Manager Automation document to collect the machine's logs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Practice Exams
&lt;/h2&gt;

&lt;p&gt;In addition to Stephane's course, a good approach would be to solve a few practice exams. This will familiarize you with the nature of questions to be expected on the exam. &lt;/p&gt;

&lt;p&gt;As a matter of fact, studying the course alone is not a guarantee to pass the exam. The exam's questions are mostly use cases that require strong analytical skills, in addition to solid experience in providing DevOps solutions on AWS. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.udemy.com/course/aws-certified-devops-engineer-professional-practice-exams-amazon-dop-c02/" rel="noopener noreferrer"&gt;Jon Bonso's Practice tests&lt;/a&gt; are a great way to further apply the knowledge you learned, and better prepare for the exam. Two things make Jon's practice tests a must have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They are close to the AWS exams, giving you a great overview on what to expect.&lt;/li&gt;
&lt;li&gt;There is a great explanation for each use case. Jon explains each question in great detail, along with the correct choice of answers. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition, AWS posts &lt;a href="https://d1.awsstatic.com/training-and-certification/docs-devops-pro/AWS-Certified-DevOps-Engineer-Professional_Sample-Questions.pdf" rel="noopener noreferrer"&gt;sample questions&lt;/a&gt; with explanations.&lt;/p&gt;

&lt;p&gt;Finally, &lt;a href="https://explore.skillbuilder.aws/learn/course/14673/aws-certified-devops-engineer-professional-official-practice-question-set-dop-c02-english" rel="noopener noreferrer"&gt;AWS SkillBuilder&lt;/a&gt; posts a free set of sample exam questions for its users.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, the AWS DevOps exam is a great way of testing your DevOps skills on AWS. However, achieving this certificate is not a walk in the park, and requires a lot of experience, as well as preparation. Nonetheless, it is absolutely worth it! &lt;/p&gt;

&lt;p&gt;Best of luck!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>certification</category>
      <category>devops</category>
      <category>professional</category>
    </item>
    <item>
      <title>Web Application Deployment on AWS</title>
      <dc:creator>Nicolas El Khoury</dc:creator>
      <pubDate>Mon, 19 Dec 2022 07:09:30 +0000</pubDate>
      <link>https://forem.com/aws-builders/web-application-deployment-on-aws-31a7</link>
      <guid>https://forem.com/aws-builders/web-application-deployment-on-aws-31a7</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;With the rapid evolution of the software industry, developing and deploying web applications is no longer as easy as writing the code and deploying it on remote servers. Today’s software development lifecycle necessitates the collaboration of different teams (e.g., developers, designers, managers, system administrators, etc.), working with different tools and technologies to serve challenging application requirements in an organized and optimized manner. Such collaboration may prove to be extremely complex and costly if not properly managed. &lt;/p&gt;

&lt;p&gt;This tutorial provides an introduction to Web Applications and the different Infrastructure Types. To put the information provided into good use, a demo is applied on &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;Amazon Web Services&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Moreover, multiple infrastructure types exist to deploy and serve these web applications. Each option possesses its advantages, disadvantages, and use cases. &lt;/p&gt;

&lt;p&gt;This tutorial comprises a theoretical part, which aims to list and describe different concepts related to web applications and infrastructure types, followed by a practical part to apply and make sense of the information discussed.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Everything and Why
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Websites vs Web Applications
&lt;/h2&gt;

&lt;p&gt;By definition, websites are a set of interconnected documents, images, videos, or other pieces of information, usually developed using HTML, CSS, and Javascript. User interaction with a website is limited to fetching its information. Moreover, websites are usually stateless, and thus requests from different users yield the same results at all times. Examples of websites include but are not limited to company websites, blogs, news websites, etc.&lt;/p&gt;

&lt;p&gt;Web applications on the other hand are more complex than websites and offer more functionalities to the user. Google, Facebook, Instagram, Online gaming, and e-commerce are all examples of web applications. Such applications allow the user to interact with them in different ways, such as creating accounts, playing games, buying and selling goods, etc. In order to provide such complex functionalities, the architecture of the web application can prove to be much more complex than that of a website.&lt;/p&gt;

&lt;p&gt;A web application is divided into three layers. More layers can be added to the application design, but for simplicity purposes, this tutorial focuses on only three:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Presentation Layer:&lt;/strong&gt; Also known as the client side. The client applications are designed for displaying the application information and for user interaction. Frontend applications are developed using many technologies: &lt;strong&gt;AngularJS, ReactJS, VueJS, etc.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application (Business Logic) Layer:&lt;/strong&gt; is part of the application’s server side. It accepts and processes user requests, and interacts with the databases for data modification. Such applications can be developed using &lt;strong&gt;NodeJS, Python, PHP, Java, etc.&lt;/strong&gt; &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database Layer:&lt;/strong&gt; This is where all the data resides and is persisted.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Example Deployment on AWS
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyse74okfysp8i3pueaa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyse74okfysp8i3pueaa.png" alt="Example Deployment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram above displays an example deployment of a web application on AWS, and how users can interact with it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A database is deployed and served using AWS’ managed database service (e.g., &lt;a href="https://aws.amazon.com/rds/" rel="noopener noreferrer"&gt;AWS RDS&lt;/a&gt;). As a best practice, it is recommended not to expose the database to the Internet directly, and to properly secure the access.&lt;/li&gt;
&lt;li&gt;The backend application is deployed on a server with a public IP.&lt;/li&gt;
&lt;li&gt;The frontend application is deployed and served on &lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;AWS S3&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;All the application components reside within an &lt;a href="https://aws.amazon.com/vpc/" rel="noopener noreferrer"&gt;AWS VPC&lt;/a&gt; in a region.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A typical HTTP request/response cycle could be as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The client sends an HTTP request to the front-end application. &lt;/li&gt;
&lt;li&gt;The frontend application code is returned and loaded on the client’s browser.&lt;/li&gt;
&lt;li&gt;The client sends an API call through the frontend application, to the backend application.&lt;/li&gt;
&lt;li&gt;The backend application validates and processes the requests.&lt;/li&gt;
&lt;li&gt;The backend application communicates with the database for managing the data related to the request.&lt;/li&gt;
&lt;li&gt;The backend application sends an HTTP response containing the information requested by the client.&lt;/li&gt;
&lt;/ol&gt;
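&lt;p&gt;Assuming hypothetical hostnames for illustration, the cycle above can be sketched with &lt;code&gt;curl&lt;/code&gt; (the domains and path below are placeholders, not part of the demo):&lt;/p&gt;

```shell
# Steps 1-2: fetch the frontend application code (e.g., served from S3)
# "app.example.com" is a placeholder for the frontend's domain
curl https://app.example.com/

# Steps 3-6: the browser then issues API calls to the backend, which
# validates the request, queries the database, and returns a response
# "api.example.com" and the path are placeholders as well
curl https://api.example.com/v1/users/123
```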

&lt;h2&gt;
  
  
  Web Application Components
&lt;/h2&gt;

&lt;p&gt;As discussed, a website is composed of a simple application, developed entirely using HTML, CSS, and Javascript. On the other hand, web applications are more complex and are made of different components. In its simplest form, a web application is composed of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Frontend Application.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend Application&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The components above are essential to creating web applications. However, an application may require additional components to serve more complex functionalities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;In memory database&lt;/strong&gt;: For caching.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Message bus&lt;/strong&gt;: For asynchronous communication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content Delivery Network:&lt;/strong&gt; For serving and caching static content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workflow Management Platform:&lt;/strong&gt; For organizing processes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As the application’s use case grows in size and complexity, so will the underlying web application. Therefore, a proper way to architect and organize the application is needed. Below is a diagram representing the architecture of a web application deployed on AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a64nh34238ozz62k3yy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a64nh34238ozz62k3yy.png" alt="Web Application Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure Types
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z9x2edat8b0rhba3rxa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z9x2edat8b0rhba3rxa.png" alt="Infrastructure Types"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram above lists different infrastructure options for deploying and managing applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Physical Servers
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuiukehpeskjvl0367921.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuiukehpeskjvl0367921.png" alt="Physical Servers"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this type of infrastructure, hardware resources must be purchased, configured, and managed in a physical location (e.g., a datacenter). Once the servers are configured and an operating system is installed, applications can be deployed on them. The correct configuration and management of the servers and applications must be ensured throughout their lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ownership and Customization:&lt;/strong&gt; Full control over the server and application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance:&lt;/strong&gt; Full dedication of server resources to the applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Large CAPEX and OPEX:&lt;/strong&gt; Setting up the required infrastructure components may require a large upfront investment, in addition to another one for maintaining the resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Management overhead:&lt;/strong&gt; Continuous effort is required to support and manage the resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lack of scalability:&lt;/strong&gt; Modifying the compute resources is not straightforward, and requires time and complicated manual labor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Mismanagement:&lt;/strong&gt; Due to the lack of scalability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improper isolation between applications&lt;/strong&gt;: All the applications deployed on the same physical host share the host’s resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance Degradation over time&lt;/strong&gt;: Hardware components will fail over time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Virtual Machines
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fll72ebj9j1fc4oghw9v9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fll72ebj9j1fc4oghw9v9.png" alt="Virtual Machines"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the best practices when deploying web applications is to isolate the application components on dedicated environments and resources. Consider an application composed of the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A MySQL database.&lt;/li&gt;
&lt;li&gt;A NodeJS backend API service.&lt;/li&gt;
&lt;li&gt;A Dotnet backend consumer service.&lt;/li&gt;
&lt;li&gt;A ReactJS frontend application.&lt;/li&gt;
&lt;li&gt;RabbitMQ.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Typically, each of these components must be properly installed and configured on the server, with enough resources available. Deploying and managing such an application on physical servers becomes cumbersome, especially at scale:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploying all the components on one physical server may pose several risks:

&lt;ul&gt;
&lt;li&gt;Improper isolation for each application component.&lt;/li&gt;
&lt;li&gt;Race conditions, deadlocks, and resource overconsumption by components.&lt;/li&gt;
&lt;li&gt;The server represents a single point of failure.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Deploying the components on multiple physical servers is not an intuitive approach due to the disadvantages listed above, especially those related to cost and lack of scalability.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Virtual Machines represent the virtualized version of physical servers. Hypervisors (e.g., &lt;a href="https://www.virtualbox.org/" rel="noopener noreferrer"&gt;Oracle VirtualBox&lt;/a&gt;, &lt;a href="https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/about/" rel="noopener noreferrer"&gt;Hyper-v&lt;/a&gt;, and &lt;a href="https://www.vmware.com/" rel="noopener noreferrer"&gt;VMWare&lt;/a&gt;) are software solutions that allow the creation and management of one or more Virtual Machines on one Physical Server. Different VMs with different flavors can be created and configured on the same physical host. For instance, one physical server may host three different VMs with the following specs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;VM1 —&amp;gt; 1 vCPU —&amp;gt; 2 GB RAM —&amp;gt; 20 GB SSD —&amp;gt; Ubuntu 18.04.&lt;/li&gt;
&lt;li&gt;VM2 —&amp;gt; 2 vCPU —&amp;gt; 4 GB RAM —&amp;gt; 50 GB SSD —&amp;gt; Windows 10.&lt;/li&gt;
&lt;li&gt;VM3 —&amp;gt; 4 vCPU —&amp;gt; 3 GB RAM —&amp;gt; 30 GB SSD —&amp;gt; macOS.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each virtual machine possesses its dedicated resources and can be managed separately from the other ones. &lt;/p&gt;
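&lt;p&gt;As an illustration, a VM resembling VM1 above could be created from the command line with VirtualBox's &lt;code&gt;VBoxManage&lt;/code&gt; tool (a sketch; the VM name, OS type, and sizes are the example values, and exact flags may vary by VirtualBox version):&lt;/p&gt;

```shell
# Create and register a VM named VM1 with a 64-bit Ubuntu OS type
VBoxManage createvm --name VM1 --ostype Ubuntu_64 --register

# Allocate 1 vCPU and 2 GB of RAM to the VM
VBoxManage modifyvm VM1 --cpus 1 --memory 2048

# Create a 20 GB virtual disk for the VM
VBoxManage createmedium disk --filename VM1.vdi --size 20480
```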

&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Low Capital Expenditure&lt;/strong&gt;: No need to buy and manage hardware components.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility:&lt;/strong&gt; The ability to quickly create, destroy, and manage different VM sizes with different flavors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disaster Recovery:&lt;/strong&gt; Most VM vendors are shipped with solid backup and recovery mechanisms for the virtual machines.&lt;/li&gt;
&lt;li&gt;Reduced risk of resource misuse (over and under-provisioning).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proper environment isolation:&lt;/strong&gt; for application components.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance issues:&lt;/strong&gt; Virtual machines add an extra level of virtualization before accessing the compute resources, rendering them less performant than physical machines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security issues:&lt;/strong&gt; Multiple VMs share the compute resources of the underlying host. Without proper security mechanisms, this may pose a huge security risk for the data in each VM.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Increased overhead and resource consumption:&lt;/strong&gt; Virtualization includes the Operating System. As the number of VMs placed on a host increases, more resources are wasted as overhead to manage each VM's requirements.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Containers
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9owi4cm2izmfqjfawznn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9owi4cm2izmfqjfawznn.png" alt="Containers"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While Virtual Machines virtualize the underlying hardware, containerization is another form of virtualization that targets the Operating System only. The diagram above visualizes containers. Container Engines (e.g., &lt;a href="https://docker.com" rel="noopener noreferrer"&gt;Docker&lt;/a&gt;) are software applications that allow the creation of lightweight environments containing only the application and the binaries required for it to run on the underlying server. All the containers on a single machine share the system resources and the operating system, making containers a much more lightweight solution than virtual machines in general. A container engine, deployed on the server (whether physical or virtual), takes care of the creation and management of containers on the server (a function similar to that of hypervisors and VMs).&lt;/p&gt;
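&lt;p&gt;For instance, assuming Docker is installed on the server, a container can be created from a public image with a single command:&lt;/p&gt;

```shell
# Run the official NGINX image as a background container,
# mapping port 80 of the host to port 80 of the container
docker run -d --name web -p 80:80 nginx

# List the running containers to verify
docker ps
```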

&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Decreased Overhead:&lt;/strong&gt; Containers require fewer resources than VMs, especially since virtualization does not include the Operating System.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Portability&lt;/strong&gt;: Container images are highly portable and can be easily deployed across platforms; a Docker image can run on any platform that supports it (e.g., &lt;a href="https://docker.com" rel="noopener noreferrer"&gt;Docker&lt;/a&gt;, &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, &lt;a href="https://aws.amazon.com/ecs/" rel="noopener noreferrer"&gt;AWS ECS&lt;/a&gt;, &lt;a href="https://aws.amazon.com/eks/" rel="noopener noreferrer"&gt;AWS EKS&lt;/a&gt;, &lt;a href="https://azure.microsoft.com/en-us/products/kubernetes-service/" rel="noopener noreferrer"&gt;Microsoft AKS&lt;/a&gt;, &lt;a href="https://cloud.google.com/kubernetes-engine" rel="noopener noreferrer"&gt;Google Kubernetes Engine&lt;/a&gt;, etc.).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster build and release cycles:&lt;/strong&gt; Containers, due to their nature, enhance the Software development lifecycle, from development to continuous delivery of software changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Persistence:&lt;/strong&gt; Although containers support data persistence through different mechanisms, they are still considered a poor fit for applications that require persistent data (e.g., stateful applications, databases, etc.). To this day, it is generally not advised to deploy such applications as containers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Platform incompatibility:&lt;/strong&gt; Containers designed to work on one platform will not work on other platforms. For instance, Linux containers do not run natively on Windows Operating Systems and vice versa.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Serverless
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvk2ivpjl5lr459vwi4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvk2ivpjl5lr459vwi4c.png" alt="Serverless"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Serverless solutions (e.g., &lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda Functions&lt;/a&gt;, &lt;a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-overview" rel="noopener noreferrer"&gt;Microsoft Azure Functions&lt;/a&gt;, &lt;a href="https://cloud.google.com/functions" rel="noopener noreferrer"&gt;Google Functions&lt;/a&gt;), introduced by cloud providers relatively recently, have become greatly popular. Despite the name, serverless architectures are not really without servers. Rather, solution providers went deeper into virtualization, removing the need to focus on anything but writing the application code. The code is packaged and deployed into specialized “functions” that take care of managing and running it. Serverless solutions paved the way for new concepts, especially Function as a Service (FaaS), which promotes the creation and deployment of a single function per serverless application (e.g., one function to send verification emails as soon as a new user is created).&lt;/p&gt;

&lt;p&gt;The diagram showcases the architecture of serverless solutions. Application code is packaged and uploaded to a function, which represents a virtualized environment that is completely taken care of by the provider.&lt;/p&gt;

&lt;p&gt;Although serverless architectures alleviate many of the challenges presented by the previous three infrastructure types, they are still unable to replace any of them due to their many limitations. Serverless architectures do not meet the requirements of all use cases and therefore work best in conjunction with other infrastructure types.&lt;/p&gt;

&lt;p&gt;Most serverless solutions are offered by providers, rather than being solutions that anyone can deploy and manage, which makes them a less favored option in some cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost&lt;/strong&gt;: Users only pay for the resources consumed during the time of execution. Idle functions generally do not use any resources, and therefore the cost of operation is greatly reduced (as opposed to paying for a server that is barely used).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Serverless models are highly scalable by design, and do not require the intervention of the user.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster build and release cycles:&lt;/strong&gt; Developers only need to focus on writing code and uploading it to a readily available infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: The application code and data are handled by third-party providers; therefore, security measures are all outsourced to the managing provider. These security concerns are among the biggest for users of the serverless model, especially for sensitive applications with strict security requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privacy&lt;/strong&gt;: The application code and data are executed on environments shared with other applications' code, which poses huge privacy (and security) concerns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vendor Lock-in&lt;/strong&gt;: Serverless solutions are generally offered by third-party providers (e.g., AWS, Microsoft, Google, etc). Each of these solutions is tailored to the providers’ interests. For instance, a function deployed on AWS Lambda functions may not necessarily work on Azure functions without code modifications. Excessive use and dependence on a provider may lead to serious issues of vendor lock-ins, especially as the application grows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex Troubleshooting:&lt;/strong&gt; In contrast with the ease of writing and deploying the code, troubleshooting and debugging the applications is not straightforward. Serverless models do not provide any access to the underlying infrastructure and offer only generic troubleshooting tools, which may not always be enough.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Docker
&lt;/h2&gt;

&lt;p&gt;Docker is containerization software that simplifies the workflow by enabling portable, consistent applications that can be deployed rapidly anywhere, allowing software development teams to operate applications in a more optimized way.&lt;/p&gt;

&lt;h3&gt;
  
  
  Container Images
&lt;/h3&gt;

&lt;p&gt;A container image is nothing but a snapshot of the desired environment. For instance, a sample image may contain a MySQL database, NGINX, or a customized NodeJS RESTful service. A Docker image is a snapshot of an isolated environment, usually created by a maintainer, and can be stored in a Container Repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Containers
&lt;/h3&gt;

&lt;p&gt;A container is the running instance of an image. A container cannot exist without an image, and uses the image as the starting point for its process. &lt;/p&gt;

&lt;h3&gt;
  
  
  Container Registries
&lt;/h3&gt;

&lt;p&gt;A container registry is a service to store and maintain images. Container registries can be either public, allowing any user to download the public images, or private, requiring user authentication to manage the images. Examples of Container Registries include but are not limited to: &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;Docker Hub&lt;/a&gt;, &lt;a href="https://aws.amazon.com/ecr/" rel="noopener noreferrer"&gt;Amazon Elastic Container Registry (ECR)&lt;/a&gt;, and &lt;a href="https://azure.microsoft.com/en-us/products/container-registry/" rel="noopener noreferrer"&gt;Microsoft Azure Container Registry&lt;/a&gt;.&lt;/p&gt;
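&lt;p&gt;As a sketch of the typical registry workflow (the account name &lt;code&gt;myaccount&lt;/code&gt; and image &lt;code&gt;my-app&lt;/code&gt; below are placeholders):&lt;/p&gt;

```shell
# Authenticate against the registry (Docker Hub in this example)
docker login

# Tag a local image with the repository it should be pushed to
docker tag my-app:1.0 myaccount/my-app:1.0

# Push the image to the registry
docker push myaccount/my-app:1.0

# Any authorized machine can then pull the image
docker pull myaccount/my-app:1.0
```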

&lt;h3&gt;
  
  
  Dockerfiles
&lt;/h3&gt;

&lt;p&gt;A Dockerfile is a text document, interpreted by Docker, and contains all the commands required to build a certain Docker image. A Dockerfile holds all the required commands and allows for the creation of the resulting image using one build command only. &lt;/p&gt;
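&lt;p&gt;As a minimal illustration, a Dockerfile for a hypothetical NodeJS service (the file names, such as &lt;code&gt;server.js&lt;/code&gt;, and the port are assumptions) could look as follows:&lt;/p&gt;

```dockerfile
# Start from an official NodeJS base image
FROM node:18-alpine

WORKDIR /app

# Install dependencies first to benefit from Docker layer caching
COPY package*.json ./
RUN npm install

# Copy the application code and declare the listening port
COPY . .
EXPOSE 3000

CMD ["node", "server.js"]
```

&lt;p&gt;The resulting image is then built with one command, e.g., &lt;code&gt;docker build -t my-app:1.0 .&lt;/code&gt;&lt;/p&gt;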

&lt;h1&gt;
  
  
  Knowledge Application
&lt;/h1&gt;

&lt;p&gt;To better understand the difference between the concepts above, this tutorial will perform the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creation of the networking and compute resources on AWS&lt;/li&gt;
&lt;li&gt;Deployment of a simple application on an AWS EC2 machine&lt;/li&gt;
&lt;li&gt;Containerization of the application&lt;/li&gt;
&lt;li&gt;Deployment of the containerized application on an AWS EC2 machine&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  AWS Infrastructure
&lt;/h2&gt;

&lt;p&gt;The infrastructure resources will be deployed in the region of Ireland (eu-west-1), in the default VPC.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Group
&lt;/h3&gt;

&lt;p&gt;A security group allowing inbound connections from anywhere to ports 22 and 80 is needed. To create such a security group, navigate to &lt;strong&gt;AWS EC2&lt;/strong&gt; —&amp;gt; &lt;strong&gt;Security Groups&lt;/strong&gt; —&amp;gt; &lt;strong&gt;Create security group&lt;/strong&gt;, with the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security group name&lt;/strong&gt;: aws-demo&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Description&lt;/strong&gt;: Allows inbound connections to ports 22 and 80 from anywhere&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VPC&lt;/strong&gt;: default VPC&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inbound rules&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rule1:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Type&lt;/strong&gt;: SSH&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source&lt;/strong&gt;: Anywhere-IPv4&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Rule2:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Type&lt;/strong&gt;: HTTP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source&lt;/strong&gt;: Anywhere-IPv4&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;
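&lt;p&gt;For readers who prefer the command line, the same security group can be sketched with the AWS CLI (assuming credentials for eu-west-1 are configured; the rules mirror the console parameters above):&lt;/p&gt;

```shell
# Create the security group in the default VPC
aws ec2 create-security-group \
  --group-name aws-demo \
  --description "Allows inbound connections to ports 22 and 80 from anywhere" \
  --region eu-west-1

# Rule 1: SSH (port 22) from anywhere
aws ec2 authorize-security-group-ingress \
  --group-name aws-demo --protocol tcp --port 22 --cidr 0.0.0.0/0 \
  --region eu-west-1

# Rule 2: HTTP (port 80) from anywhere
aws ec2 authorize-security-group-ingress \
  --group-name aws-demo --protocol tcp --port 80 --cidr 0.0.0.0/0 \
  --region eu-west-1
```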

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdh3t430wh6y6rmjr1yk5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdh3t430wh6y6rmjr1yk5.png" alt="Security Group"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Key pair
&lt;/h3&gt;

&lt;p&gt;An SSH key pair is required to SSH to Linux EC2 instances. To create a key-pair, navigate to &lt;strong&gt;AWS EC2&lt;/strong&gt; —&amp;gt; &lt;strong&gt;Key pairs&lt;/strong&gt; —&amp;gt; &lt;strong&gt;Create key pair&lt;/strong&gt;, with the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: aws-demo&lt;/li&gt;
&lt;li&gt;Private key file format: .pem&lt;/li&gt;
&lt;/ul&gt;
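&lt;p&gt;Alternatively, the key pair can be created with the AWS CLI, which prints the private key material directly (a sketch, assuming the CLI is configured):&lt;/p&gt;

```shell
# Create the key pair and save the private key locally
aws ec2 create-key-pair \
  --key-name aws-demo \
  --key-format pem \
  --query 'KeyMaterial' --output text \
  --region eu-west-1 > aws-demo.pem
```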

&lt;p&gt;Once created, the private key is downloaded to your machine. For it to work properly, the key must be moved to a hidden directory, and its permissions modified (the commands below work on macOS; they may differ for other operating systems). &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Create a hidden directory&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; ~/.keypairs/aws-demo
&lt;span class="c"&gt;# Move the key to the created directory&lt;/span&gt;
&lt;span class="nb"&gt;mv&lt;/span&gt; ~/Downloads/aws-demo.pem ~/.keypairs/aws-demo/
&lt;span class="c"&gt;# Change the permissions of the key&lt;/span&gt;
&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;400 ~/.keypairs/aws-demo/aws-demo.pem


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscthtnu189c6gaelqzg7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fscthtnu189c6gaelqzg7.png" alt="Key pair"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  IAM Role
&lt;/h3&gt;

&lt;p&gt;An IAM role contains policies and permissions granting access to actions and resources in AWS. The IAM role is assigned to an AWS resource (in this case the EC2 machine). To create an IAM role, navigate to &lt;strong&gt;IAM&lt;/strong&gt; —&amp;gt; &lt;strong&gt;Roles&lt;/strong&gt; —&amp;gt; &lt;strong&gt;Create Role&lt;/strong&gt;, with the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Trusted entity type:&lt;/strong&gt; AWS Service&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Common use cases&lt;/strong&gt;: EC2&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Permissions policies:&lt;/strong&gt; AdministratorAccess&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Role Name&lt;/strong&gt;: aws-demo&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click Create Role. This role will be assigned to the EC2 machine during its creation.&lt;/p&gt;
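&lt;p&gt;The equivalent steps with the AWS CLI would look roughly as follows (a sketch; the trust policy simply allows the EC2 service to assume the role, and EC2 consumes roles through an instance profile):&lt;/p&gt;

```shell
# Trust policy allowing the EC2 service to assume the role
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

# Create the role and attach the AdministratorAccess policy
aws iam create-role --role-name aws-demo \
  --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name aws-demo \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Create an instance profile and add the role to it
aws iam create-instance-profile --instance-profile-name aws-demo
aws iam add-role-to-instance-profile \
  --instance-profile-name aws-demo --role-name aws-demo
```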

&lt;h3&gt;
  
  
  AWS EC2 Machine
&lt;/h3&gt;

&lt;p&gt;An AWS EC2 machine with Ubuntu 20.04 is required. To create the EC2 machine, navigate to &lt;strong&gt;AWS EC2&lt;/strong&gt; —&amp;gt; &lt;strong&gt;instances&lt;/strong&gt; —&amp;gt; &lt;strong&gt;Launch instances&lt;/strong&gt;, with the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt;: aws-demo&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AMI&lt;/strong&gt;: Ubuntu Server 20.04 LTS (HVM), SSD Volume Type&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instance Type&lt;/strong&gt;: t3.medium (t3.micro can be used for free tier, but may suffer from performance issues)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key pair name&lt;/strong&gt;: aws-demo&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Settings&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Select existing security group&lt;/strong&gt;: aws-demo&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Configure storage&lt;/strong&gt;: 1 x 25 GiB gp2 Root volume&lt;/li&gt;

&lt;li&gt;Advanced details:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IAM instance profile&lt;/strong&gt;: aws-demo&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Leave the rest as defaults and launch the instance.&lt;/p&gt;
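For repeatable setups, the console steps above can be expressed as a single CLI call. This is a sketch, assuming configured credentials; the AMI and security-group IDs are placeholders that must be looked up for your region.

```shell
# Placeholder IDs -- substitute the Ubuntu 20.04 AMI and the aws-demo
# security group ID for your region
AMI_ID="ami-xxxxxxxxxxxxxxxxx"
SG_ID="sg-xxxxxxxxxxxxxxxxx"

# Keep a note of the launch parameters for later reference
echo "type=t3.medium key=aws-demo profile=aws-demo" > /tmp/aws-demo-launch.txt

# Requires configured AWS credentials; skipped when the CLI is absent
if command -v aws >/dev/null 2>&1; then
  aws ec2 run-instances \
    --image-id "$AMI_ID" \
    --instance-type t3.medium \
    --key-name aws-demo \
    --security-group-ids "$SG_ID" \
    --iam-instance-profile Name=aws-demo \
    --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=25,VolumeType=gp2}' \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=aws-demo}]'
fi
```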

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru6cy4irdbcp11uct0a2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru6cy4irdbcp11uct0a2.png" alt="EC2 VM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An EC2 VM is created and assigned both a private and a public IPv4 address.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo641o5bwn3bg7zzoc28j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo641o5bwn3bg7zzoc28j.png" alt="Security Group"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition, the security group created earlier is correctly attached to the machine. Telnet is one way to ensure the machine is accessible on ports &lt;strong&gt;22&lt;/strong&gt; and &lt;strong&gt;80&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Make sure to replace the machine's IP with the one attributed to your machine&lt;/span&gt;
telnet 3.250.206.251 22
telnet 3.250.206.251 80


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3or4wvkv00rgadaea4g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3or4wvkv00rgadaea4g.png" alt="Telnet"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, SSH to the machine, using the key pair created: &lt;code&gt;ssh ubuntu@3.250.206.251 -i aws-demo.pem&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhqcejy6m2kba7fh4pgpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhqcejy6m2kba7fh4pgpv.png" alt="SSH Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The last step is to install the AWS CLI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Update the package repository&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="c"&gt;# Install unzip on the machine&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; unzip
&lt;span class="c"&gt;# Download the zipped package&lt;/span&gt;
curl &lt;span class="s2"&gt;"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"awscliv2.zip"&lt;/span&gt;
&lt;span class="c"&gt;# unzip the package&lt;/span&gt;
unzip awscliv2.zip
&lt;span class="c"&gt;# Run the installer&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./aws/install


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Ensure the AWS CLI is installed by checking the version: &lt;code&gt;aws --version&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1l4eko8punhj3yvn1ts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1l4eko8punhj3yvn1ts.png" alt="SSH Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Application Deployment on the EC2 machine
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Application Code
&lt;/h3&gt;

&lt;p&gt;The application to be deployed is a simple HTML document:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;

&lt;span class="cp"&gt;&amp;lt;!DOCTYPE html&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;html&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;head&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;My First Application&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/head&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;I have no idea what I'm doing.&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Apache2 Installation
&lt;/h3&gt;

&lt;p&gt;A web server is needed to serve the web application. To install Apache2:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the local package index to reflect the latest upstream changes: &lt;code&gt;sudo apt-get update&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Install the &lt;strong&gt;Apache2&lt;/strong&gt; package: &lt;code&gt;sudo apt-get install -y apache2&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Check if the service is running: &lt;code&gt;sudo service apache2 status&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Verify that the deployment worked by hitting the public IP of the machine:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feh47z1yq63xaztha7dgl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feh47z1yq63xaztha7dgl.png" alt="Apache Default page"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Application Deployment
&lt;/h3&gt;

&lt;p&gt;To deploy the application, perform the following steps:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Create a directory&lt;/span&gt;
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; /var/www/myfirstapp
&lt;span class="c"&gt;# Change the ownership to www-data&lt;/span&gt;
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; www-data:www-data /var/www/myfirstapp
&lt;span class="c"&gt;# Change the directory permissions&lt;/span&gt;
&lt;span class="nb"&gt;sudo chmod&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; 755 /var/www/myfirstapp
&lt;span class="c"&gt;# Create the index.html file and paste the code in it&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;nano /var/www/myfirstapp/index.html
&lt;span class="c"&gt;# Change the owership to www-data&lt;/span&gt;
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; www-data:www-data /var/www/myfirstapp/index.html
&lt;span class="c"&gt;# Create the log directory&lt;/span&gt;
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; /var/log/myfirstapp
&lt;span class="c"&gt;# Change the ownership of the directory&lt;/span&gt;
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; www-data:www-data /var/log/myfirstapp/


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
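When provisioning is scripted rather than done by hand, the interactive nano step can be replaced with a heredoc. The sketch below writes to a configurable directory, defaulting to /tmp/myfirstapp so it can run without sudo; on the server, point DOCROOT at /var/www/myfirstapp and prefix the commands with sudo.

```shell
# Target directory; on the real server use DOCROOT=/var/www/myfirstapp (with sudo)
DOCROOT="${DOCROOT:-/tmp/myfirstapp}"
mkdir -p "$DOCROOT"

# Write the page non-interactively instead of opening an editor
cat > "$DOCROOT/index.html" <<'EOF'
<!DOCTYPE html>
<html>
    <head>
        <title>My First Application</title>
    </head>
    <body>
        <p>I have no idea what I'm doing.</p>
    </body>
</html>
EOF

# Count the lines containing the page title as a sanity check
grep -c "My First Application" "$DOCROOT/index.html"   # prints 1
```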
&lt;h3&gt;
  
  
  Virtual Host
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create the virtual host file: &lt;code&gt;sudo nano /etc/apache2/sites-available/myfirstapp.conf&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Paste the following:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&amp;lt;VirtualHost &lt;span class="k"&gt;*&lt;/span&gt;:80&amp;gt;
    DocumentRoot /var/www/myfirstapp
    ErrorLog /var/log/myfirstapp/error.log
    CustomLog /var/log/myfirstapp/requests.log combined
&amp;lt;/VirtualHost&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Enable the configuration:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Enable the site configuration&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;a2ensite myfirstapp.conf
&lt;span class="c"&gt;# Disable the default configuration&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;a2dissite 000-default.conf
&lt;span class="c"&gt;# Test the configuration&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apache2ctl configtest
&lt;span class="c"&gt;# Restart apache&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart apache2


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Perform a request on the server. The response will now return the HTML document created:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0sor7xpd5cs3nddahy3z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0sor7xpd5cs3nddahy3z.png" alt="Custom Website"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, to deploy the application on the EC2 machine, several tools had to be deployed and configured. While this is manageable for one simple application, things won’t be as easy and straightforward when the application grows in size. For instance, assume an application of 5 components must be deployed. Managing each component on the server will be cumbersome. The next part demonstrates how containerization can alleviate such problems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove the apache webserver: &lt;code&gt;sudo apt-get purge -y apache2&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftjp72yfagbuqxj3ufl3e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftjp72yfagbuqxj3ufl3e.png" alt="Apache Removed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Deployment using containers
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Docker Installation
&lt;/h3&gt;

&lt;p&gt;Install Docker on the machine:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Update the package index and install the required packages&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; ca-certificates curl gnupg lsb-release

&lt;span class="c"&gt;# Add Docker’s official GPG key:&lt;/span&gt;
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/apt/keyrings
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://download.docker.com/linux/ubuntu/gpg &lt;span class="se"&gt;\&lt;/span&gt;
| &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/docker.gpg

&lt;span class="c"&gt;# Set up the repository&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [arch=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;dpkg &lt;span class="nt"&gt;--print-architecture&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
signed-by=/etc/apt/keyrings/docker.gpg] &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
https://download.docker.com/linux/ubuntu &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;lsb_release &lt;span class="nt"&gt;-cs&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; stable"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/docker.list &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null

&lt;span class="c"&gt;# Update the package index again&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update

&lt;span class="c"&gt;# Install the latest version of docker&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; docker-ce docker-ce-cli containerd.io &lt;span class="se"&gt;\&lt;/span&gt;
docker-compose-plugin

&lt;span class="c"&gt;# Add the Docker user to the existing User's group &lt;/span&gt;
&lt;span class="c"&gt;#(to run Docker commands without sudo)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; docker &lt;span class="nv"&gt;$USER&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To validate that Docker is installed and the changes are all applied, restart the SSH session (so the group membership takes effect), and run a Docker command: &lt;code&gt;docker ps -a&lt;/code&gt;. A response similar to the one below indicates a successful installation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftx19ryg3zycox1n9r1bl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftx19ryg3zycox1n9r1bl.png" alt="Docker Installed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Base Image
&lt;/h3&gt;

&lt;p&gt;There exist several &lt;a href="https://hub.docker.com/search?q=&amp;amp;type=image&amp;amp;image_filter=official" rel="noopener noreferrer"&gt;official Docker images&lt;/a&gt; curated and hosted on Docker Hub, intended to serve as starting points for users to build on top of. There is an official repository for the &lt;a href="https://hub.docker.com/_/httpd" rel="noopener noreferrer"&gt;Apache server&lt;/a&gt;, containing all the information necessary to deploy and operate the image. Start by downloading the image to the server: &lt;code&gt;docker pull httpd:2.4-alpine&lt;/code&gt;. &lt;code&gt;2.4-alpine&lt;/code&gt; is the image tag, an identifier that distinguishes the different image versions available. After downloading the image successfully, list the available images: &lt;code&gt;docker images&lt;/code&gt;. The screenshot below shows the successful download of the image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd27q9i30t206ag2fbuah.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd27q9i30t206ag2fbuah.png" alt="HTTPD image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, it is time to run a container from this image. The goal is to run the Apache server on port 80, and have it accept requests. Create a Docker container using the command: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -d --name myfirstcontainer -p 80:80 httpd:2.4-alpine&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The command above is explained as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;run&lt;/code&gt;: instructs Docker to run a container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-d&lt;/code&gt;: Flag to run the container in the background (Detached mode). Omitting this flag will cause the container to run in the foreground.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--name&lt;/code&gt;: A custom name can be given to the container. If no name is given, Docker will assign a random name to the container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-p&lt;/code&gt;: Publish a container's port to the host. By default, if no ports are published, containers cannot be reached from outside the Docker network. The &lt;code&gt;-p&lt;/code&gt; flag maps a container’s internal port to a host port, making the container reachable from outside the network. In the example above, the Apache server listens internally on port 80, and the command instructs Docker to map the container’s port 80 to the host’s port 80. The container can now be reached via the machine’s public IP on port 80, which translates to the container’s default port.&lt;/li&gt;
&lt;/ul&gt;
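To make the host-to-container mapping more visible, the same image can be published on a different host port; 8080 below is an arbitrary choice for illustration, not something this tutorial requires. The sketch needs Docker installed, and removes the throwaway container afterwards.

```shell
HOST_PORT=8080       # arbitrary host port chosen for this demonstration
CONTAINER_PORT=80    # the port Apache listens on inside the container
echo "${HOST_PORT}->${CONTAINER_PORT}" > /tmp/port-mapping.txt

# Requires Docker; skipped automatically when it is not installed
if command -v docker >/dev/null 2>&1; then
  docker run -d --name porttest -p "${HOST_PORT}:${CONTAINER_PORT}" httpd:2.4-alpine
  sleep 2
  # The request goes to the host port and is forwarded to the container's port 80
  curl -s "http://localhost:${HOST_PORT}"
  docker rm -f porttest
fi
```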

&lt;p&gt;Ensure that the container is successfully running: &lt;code&gt;docker ps -a&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Monitor the container logs: &lt;code&gt;docker logs -f myfirstcontainer&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgb8loytga6f8antonx5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgb8loytga6f8antonx5.png" alt="My First Container"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using any browser, make a request to the container, using the machine’s public IP and port 80: &lt;code&gt;http://3.250.206.251:80&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8s9a0jsnj02uzu4qjob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8s9a0jsnj02uzu4qjob.png" alt="My First Container Response"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Customize the Base Image&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now that the base image is successfully deployed and running, it is time to customize it. The official documentation presents clear instructions on the different ways the image can be customized, namely, creating custom Virtual Hosts, adding static content, adding custom certificates, etc.&lt;/p&gt;

&lt;p&gt;In this example, the following simple HTML page representing a website will be added, thus creating a custom image.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;

&lt;span class="cp"&gt;&amp;lt;!DOCTYPE html&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;html&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;head&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;My First Dockerized Website&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/head&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;I am inside a Docker Container.&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The official documentation on Docker Hub clearly states that the default location for adding static content is &lt;code&gt;/usr/local/apache2/htdocs/&lt;/code&gt;. To do so, create an interactive &lt;code&gt;sh&lt;/code&gt; shell on the container: &lt;code&gt;docker exec -it myfirstcontainer sh&lt;/code&gt;. Once inside the container, navigate to the designated directory: &lt;code&gt;cd /usr/local/apache2/htdocs/&lt;/code&gt;. The directory already has a file named &lt;code&gt;index.html&lt;/code&gt;, which contains the default Apache page loaded above. Modify it to include the custom HTML page above, and hit the container again: &lt;code&gt;http://3.250.206.251:80&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The screenshot shows that the changes have been reflected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17jj1kgtvnqvcaq7dntp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17jj1kgtvnqvcaq7dntp.png" alt="Custom Response - Docker"&gt;&lt;/a&gt;&lt;/p&gt;
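As an alternative to editing the file from inside the container, the page can be authored locally and copied in with docker cp. This is a sketch, assuming myfirstcontainer is still running; the docker step is skipped when Docker is not installed.

```shell
# Author the custom page locally first
cat > /tmp/index.html <<'EOF'
<!DOCTYPE html>
<html>
    <head>
        <title>My First Dockerized Website</title>
    </head>
    <body>
        <p>I am inside a Docker Container.</p>
    </body>
</html>
EOF

# Copy it over the default page served by the container
if command -v docker >/dev/null 2>&1; then
  docker cp /tmp/index.html myfirstcontainer:/usr/local/apache2/htdocs/index.html
fi
```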

&lt;h3&gt;
  
  
  Create a Custom Image
&lt;/h3&gt;

&lt;p&gt;Unfortunately, the changes performed will not persist once the container is removed or recreated. By default, containers are ephemeral: any data generated at runtime disappears as soon as the container is deleted. To verify this, remove the container and start it again:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; myfirstcontainer
docker ps &lt;span class="nt"&gt;-a&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; myfirstcontainer &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 httpd:2.4-alpine


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now hit the server again: &lt;code&gt;http://3.250.206.251:80&lt;/code&gt;. The changes performed have disappeared. To persist the changes, a custom image must be built. The custom image is a snapshot of the container after adding the custom website. Repeat the steps above to add the HTML page, and ensure the container is returning the new page again.&lt;/p&gt;

&lt;p&gt;To create a new image from the customized running container, commit it as a local image: &lt;code&gt;docker commit myfirstcontainer&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Clearly, a new image with no name and no tag has just been created by the &lt;code&gt;docker commit&lt;/code&gt; command. Name and tag the image: &lt;code&gt;docker tag &amp;lt;image ID&amp;gt; custom-httpd:v2&lt;/code&gt;. The image ID is the one printed by Docker; the name and tag can be anything. Alternatively, &lt;code&gt;docker commit myfirstcontainer custom-httpd:v2&lt;/code&gt; names and tags the image in a single step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5f1nnwdyhxuipmb65gy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5f1nnwdyhxuipmb65gy.png" alt="Custom Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Remove the old container, and create a new one using the new image:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; myfirstcontainer
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; mysecondcontainer &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 custom-httpd:v2


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F096rma96k88qjo5lo0d4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F096rma96k88qjo5lo0d4.png" alt="My Second Container"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A new Docker container named &lt;code&gt;mysecondcontainer&lt;/code&gt; is now running using the custom-built image. Hitting the machine on port 80 will now return the new HTML page, no matter how many times the container is destroyed and recreated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqea22dlg48vdi779bfg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqea22dlg48vdi779bfg.png" alt="My Second Container Response"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The custom image is now located on the Virtual Machine. However, storing the image on the VM alone is not a best practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inability to efficiently share the image with other developers working on it.&lt;/li&gt;
&lt;li&gt;Inability to efficiently download and run the image on different servers.&lt;/li&gt;
&lt;li&gt;Risk of losing the image, especially if the VM is not well backed up.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A better solution would be to host the image in a Container Registry. In this tutorial, AWS ECR will be used to store the image. To create a Container repository on AWS ECR, navigate to &lt;strong&gt;Amazon ECR&lt;/strong&gt; —&amp;gt; &lt;strong&gt;Repositories&lt;/strong&gt; —&amp;gt; &lt;strong&gt;Private&lt;/strong&gt; —&amp;gt; &lt;strong&gt;Create repository&lt;/strong&gt;, with the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Visibility Settings&lt;/strong&gt;: Private&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repository name:&lt;/strong&gt; custom-httpd&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Leave the rest as defaults and create the repository.&lt;/p&gt;
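The repository can also be created from the CLI. The sketch below assumes configured credentials; the account ID is a placeholder, used only to show how the full image URI that the image must later be tagged with is composed.

```shell
ACCOUNT_ID="123456789012"   # placeholder -- use your own AWS account ID
REGION="eu-west-1"
REPO="custom-httpd"

# The full URI images must be tagged with before pushing to this repository
IMAGE_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}"
echo "$IMAGE_URI" > /tmp/ecr-uri.txt

# Requires configured AWS credentials; skipped when the CLI is absent
if command -v aws >/dev/null 2>&1; then
  aws ecr create-repository --repository-name "$REPO" --region "$REGION"
fi
```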

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feunxonw7hi58gq9i56cg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feunxonw7hi58gq9i56cg.png" alt="ECR"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that the repository is successfully created, perform the following to push the image from the local server to the remote repository:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, login to the ECR from the machine:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws ecr get-login-password &lt;span class="nt"&gt;--region&lt;/span&gt; eu-west-1 | docker login &lt;span class="nt"&gt;--username&lt;/span&gt; AWS &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--password-stdin&lt;/span&gt; &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.eu-west-1.amazonaws.com


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkg7dcrpd35bfkgebz7w4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkg7dcrpd35bfkgebz7w4.png" alt="ECR Login"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The repository created on AWS ECR has a different name than the local image. Therefore, the image must be tagged with the repository’s full name before it can be pushed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Tag the image with the correct repository name&lt;/span&gt;
docker tag custom-httpd:v2 &lt;span class="se"&gt;\&lt;/span&gt;
&amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.eu-west-1.amazonaws.com/custom-httpd:v1
&lt;span class="c"&gt;# Push the image&lt;/span&gt;
docker push &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.eu-west-1.amazonaws.com/custom-httpd:v1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feboixcibjjxs9xcjydij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feboixcibjjxs9xcjydij.png" alt="Image push"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmpyu5u8vzhwtzw4r1fi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmpyu5u8vzhwtzw4r1fi.png" alt="Image push - ECR"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clearly, the image was pushed to AWS ECR. To verify that the custom image was built and pushed correctly, create a container from the image located in the Container Registry. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Delete all the containers from the server&lt;/span&gt;
docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker ps &lt;span class="nt"&gt;-a&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;# Delete all the images from the server&lt;/span&gt;
docker rmi &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker images &lt;span class="nt"&gt;-q&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;# List all the available images and containers (should return empty)&lt;/span&gt;
docker images
docker ps &lt;span class="nt"&gt;-a&lt;/span&gt; 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create a third container, but this time, reference the image located in the ECR: &lt;code&gt;docker run -d --name mythirdcontainer -p 80:80 &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.eu-west-1.amazonaws.com/custom-httpd:v1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F404xtp4yu1532smt1ax7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F404xtp4yu1532smt1ax7.png" alt="Custom image download"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The output clearly shows that the image was not found locally (on the server), and was therefore fetched from the ECR.&lt;/p&gt;

&lt;p&gt;Finally, hit the server again: &lt;code&gt;http://3.250.206.251:80&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49o3vv4s2m5iwt0t3df7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49o3vv4s2m5iwt0t3df7.png" alt="Image response"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The server returns the custom page that was created. Cleaning up the server is as simple as deleting all the containers and all the images:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Delete all the containers from the server&lt;/span&gt;
docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker ps &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;# Delete all the images from the server&lt;/span&gt;
docker rmi &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker images &lt;span class="nt"&gt;-q&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;# List all the available images and containers (should return empty)&lt;/span&gt;
docker images
docker ps &lt;span class="nt"&gt;-a&lt;/span&gt; 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Create a Docker image using Dockerfiles&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Creating images from existing containers is an intuitive approach. However, it may prove inefficient and inconsistent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker images may need to be built several times a day.&lt;/li&gt;
&lt;li&gt;Building a Docker image may require several complex commands.&lt;/li&gt;
&lt;li&gt;Manually built images become difficult to maintain as the number of services and teams grows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Dockerfiles are considered a better alternative, providing more consistency and allowing the build steps to be automated. To better understand Dockerfiles, the rest of this tutorial containerizes the application using a Dockerfile.&lt;/p&gt;

&lt;p&gt;To do so, the application code and the Dockerfile must be placed together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a temporary directory: &lt;code&gt;mkdir ~/tempDir&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Place the application code inside the directory in a file called &lt;code&gt;index.html&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;

&lt;span class="cp"&gt;&amp;lt;!DOCTYPE html&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;html&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;head&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;My Final Dockerized Website&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/head&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;I am Dockerized using a Dockerfile.&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Create a Dockerfile next to the &lt;strong&gt;index.html&lt;/strong&gt; file, with the following content:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; httpd:2.4-alpine&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; index.html /usr/local/apache2/htdocs/&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Dockerfile above has two instructions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use &lt;code&gt;httpd:2.4-alpine&lt;/code&gt; as the base image&lt;/li&gt;
&lt;li&gt;Copy &lt;code&gt;index.html&lt;/code&gt; from the server to &lt;code&gt;/usr/local/apache2/htdocs/&lt;/code&gt; inside the container&lt;/li&gt;
&lt;/ol&gt;
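&lt;p&gt;As the application grows, the same Dockerfile can be extended with additional instructions. The sketch below is purely illustrative: the &lt;code&gt;httpd:2.4-alpine&lt;/code&gt; base image already exposes port 80 and defines a default start command, so the extra instructions merely make those defaults explicit:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Use the official Apache httpd image as the base&lt;/span&gt;
FROM httpd:2.4-alpine
&lt;span class="c"&gt;# Copy the static content into the web server's document root&lt;/span&gt;
COPY index.html /usr/local/apache2/htdocs/
&lt;span class="c"&gt;# Document the port the server listens on (already declared by the base image)&lt;/span&gt;
EXPOSE 80
&lt;span class="c"&gt;# Run Apache in the foreground (already the base image's default)&lt;/span&gt;
CMD ["httpd-foreground"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;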

&lt;p&gt;The resulting directory should look as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnzep10yt6u2ec89wku4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnzep10yt6u2ec89wku4.png" alt="Directory"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To build the image using a Dockerfile, run the following command: &lt;code&gt;docker build -f Dockerfile -t &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.eu-west-1.amazonaws.com/custom-httpd:v-Dockerfile .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Breaking down the command above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker build&lt;/code&gt;: Docker command to build a Docker image from a Dockerfile&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-f Dockerfile&lt;/code&gt;: The path and filename of the Dockerfile (the file can have any name)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-t &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.eu-west-1.amazonaws.com/custom-httpd:v-Dockerfile&lt;/code&gt;: The name and tag of the resulting image&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.&lt;/code&gt;: The path to the build context, i.e., the set of files available to the build.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Push the image to the ECR: &lt;code&gt;docker push &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.eu-west-1.amazonaws.com/custom-httpd:v-Dockerfile&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjri1cbnvctkpgio5iskb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjri1cbnvctkpgio5iskb.png" alt="Image build"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuvcz0gezvvf3ldixuei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuvcz0gezvvf3ldixuei.png" alt="Image push"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, to simulate a fresh installation of the image, remove all the containers and images from the server, and create a final container from the newly pushed image:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Remove existing containers&lt;/span&gt;
docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker ps &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;# Remove the images&lt;/span&gt;
docker rmi &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker images &lt;span class="nt"&gt;-q&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;# Create the final container &lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; myfinalcontainer &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.eu-west-1.amazonaws.com/custom-httpd:v-Dockerfile


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Hit the machine via its IP:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8shw8xsdph4izn3ldxk5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8shw8xsdph4izn3ldxk5.png" alt="Final Container"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>containerapps</category>
      <category>devops</category>
    </item>
    <item>
      <title>DevOps: What it is, What it isn't.</title>
      <dc:creator>Nicolas El Khoury</dc:creator>
      <pubDate>Wed, 16 Nov 2022 10:26:17 +0000</pubDate>
      <link>https://forem.com/devopsbeyondlimitslb/devops-what-it-is-what-it-isnt-930</link>
      <guid>https://forem.com/devopsbeyondlimitslb/devops-what-it-is-what-it-isnt-930</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;DevOps, SysOps, DevSecOps, MLOps, CloudOps, PotatoOps, are all catchy and trendy buzzwords circulating all over LinkedIn and the tech realm in 2022. A simple search for such keywords on LinkedIn will return thousands of people and companies "applying" them one way or another. &lt;/p&gt;

&lt;p&gt;Unfortunately, as with most trends, numerous definitions and variations arise, leaving the general public in confusion, creating irrelevant job positions and career paths, and leading to inefficient Software Development Lifecycles, further complicating everything!&lt;/p&gt;

&lt;p&gt;In this article, I share my personal opinion on the matter, answering the following questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is DevOps?&lt;/li&gt;
&lt;li&gt;What are DevOps Engineers?&lt;/li&gt;
&lt;li&gt;How to apply DevOps in organizations?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Background Information
&lt;/h2&gt;

&lt;p&gt;Hi, my name is Nicolas, and I have been practicing DevOps since 2016. I was, and still am, heavily involved in applying DevOps for the companies I've worked with. In those 7 years, I worked with and advised quite a few organizations, ranging from small startups with a couple of services serving a small number of clients, to large enterprises developing and deploying highly complicated software solutions with even more complicated requirements.&lt;/p&gt;

&lt;p&gt;Some of my most impactful achievements include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Building a multi-disciplinary DevOps unit, capable of deploying - almost - any type of software on any kind of infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creating training materials and curriculums to transform junior and mid-level backend developers and system administrators into efficient DevOps Engineers. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creating a University Course curriculum entitled "Introduction to Fullstack and DevOps Engineering", aiming to better prepare university students for the technical and personal skills needed in the market today.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Evolution of DevOps Solutions
&lt;/h2&gt;

&lt;p&gt;A lot of things changed in those 7 years, especially in DevOps related solutions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes is everywhere.&lt;/li&gt;
&lt;li&gt;Cloud Providers have hundreds of managed services.&lt;/li&gt;
&lt;li&gt;Continuous Delivery tools make it easy to automate everything.&lt;/li&gt;
&lt;li&gt;Infrastructure can now be created and managed through code.&lt;/li&gt;
&lt;li&gt;Multi-cloud solutions are a popular thing.&lt;/li&gt;
&lt;li&gt;Everyone wants to become a DevOps Engineer, and all the companies want to apply DevOps.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One thing did not really change though: There is no clear definition of DevOps. This article sums up my years of experience in the field, showcases my personal point of view regarding DevOps, and presents opinions on how to successfully become a DevOps Engineer and how to apply DevOps in organizations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Software Delivery Models
&lt;/h2&gt;

&lt;p&gt;DevOps is nothing more than a software delivery model, embracing today's available technologies and aiming to enhance the software development lifecycle. To better understand DevOps, it is important to understand the preceding models, namely the Waterfall and Agile models.&lt;/p&gt;

&lt;h3&gt;
  
  
  Waterfall Model
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5miqatgp0vptln74mr9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5miqatgp0vptln74mr9.png" alt="Waterfall Model"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The waterfall model is one of the oldest software delivery models, introduced in the 1970s. It divides the software development lifecycle into pre-defined sequential phases, each performing a specific activity; each phase must be fully complete before the next one can begin, with no overlap between them.&lt;/p&gt;

&lt;p&gt;With the current technological advancements and capabilities, this model becomes cumbersome and inefficient:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The model employs a rigid process, discouraging change.&lt;/li&gt;
&lt;li&gt;Progress is difficult to measure, due to the siloed mode of work.&lt;/li&gt;
&lt;li&gt;Deployments are slow and complex.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Agile Model
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1zp3cuiznfx8klxy0yu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1zp3cuiznfx8klxy0yu.png" alt="Agile Model"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Agile model, formally launched in the early 2000s, provides a more flexible approach to delivering software. Unlike the Waterfall model, Agile promotes continuous iteration of design, development, and testing throughout the software development lifecycle, breaking down silos between the different phases and shortening release cycles from months to weeks, in what are called "sprints". As the name states, the model increases agility through continuous planning, improvement, team collaboration, development, delivery, and response to change.&lt;/p&gt;

&lt;p&gt;Operations teams were left out, given that infrastructure and operations did not require the same agility at that time.&lt;/p&gt;

&lt;h3&gt;
  
  
  DevOps Model
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhx2bxjxf03a663e939gk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhx2bxjxf03a663e939gk.png" alt="DevOps Model"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The birth of cloud computing, with its on-demand delivery of IT resources, revolutionized software delivery. Before, software development and delivery required an iterative approach, while the infrastructure remained rigid.&lt;/p&gt;

&lt;p&gt;With the adoption of the cloud (2006 onwards), the need for owning and maintaining physical data centers was replaced by renting compute resources from cloud providers under flexible payment models (e.g., pay as you go).&lt;/p&gt;

&lt;p&gt;Cloud computing came with several benefits, including but not limited to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Agility&lt;/strong&gt;: Ease of access to a wide range of compute resources, on demand, allowing for the creation of complex infrastructure in minutes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Elasticity&lt;/strong&gt;: Resource mis-utilization (over- or under-provisioning) is no longer a problem, thanks to the ability to quickly adjust compute resources based on varying needs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Saving&lt;/strong&gt;: The pay as you go model, and the elasticity of the resources permit the users to continuously optimize the costs of the compute resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the adoption of cloud computing in software delivery, Development and Operations teams can no longer be siloed, as was the case with the Agile model. As a matter of fact, the development and management of the infrastructure must now align with that of the application itself. &lt;/p&gt;

&lt;p&gt;DevOps is a collection of philosophies, tools, and practices that aims to decrease the cost, time, and complexity of delivering software applications by unifying the software development and infrastructure management processes.&lt;/p&gt;

&lt;p&gt;DevOps aims to automate as many processes as possible to reliably and efficiently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create and manage infrastructure resources.&lt;/li&gt;
&lt;li&gt;Release software changes.&lt;/li&gt;
&lt;li&gt;Perform necessary tests (e.g., Unit, Integration, Stress tests, etc).&lt;/li&gt;
&lt;li&gt;Spin up new environments seamlessly.&lt;/li&gt;
&lt;li&gt;Enhance system security.&lt;/li&gt;
&lt;li&gt;Ensure scalability.&lt;/li&gt;
&lt;li&gt;Improve collaboration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In light of the above, &lt;strong&gt;DevSecOps, SysOps, MLOps, CloudOps, PotatoOps&lt;/strong&gt;, are catchy LinkedIn words that can be all replaced by the term &lt;strong&gt;DevOps&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  DevOps: what it is, what it isn't!
&lt;/h2&gt;

&lt;p&gt;Clearly, DevOps is nothing more than a set of philosophies and best practices to enhance the software delivery using today's existing technologies. &lt;/p&gt;

&lt;p&gt;Having said this, DevOps is not:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployment of software on the cloud using the Agile approach (Most of today's understanding confuses this with DevOps).&lt;/li&gt;
&lt;li&gt;Creating software using the Microservices approach.&lt;/li&gt;
&lt;li&gt;Using Infrastructure as Code tools with no clear purpose.&lt;/li&gt;
&lt;li&gt;The adoption of unneeded automation tools in general.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Many entities attempting to apply DevOps fall for the misconceptions listed above, and end up unknowingly applying the Agile model, just on the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  DevOps Engineers: What they are, and what they aren't!
&lt;/h2&gt;

&lt;p&gt;The inability to truly define DevOps has resulted in a lot of inefficient, oddly defined job positions that do not necessarily contribute to the Software Development Lifecycle. Worse, this can further deteriorate the quality of the application and of the lifecycle as a whole.&lt;/p&gt;

&lt;p&gt;In light of the above, DevOps Engineers are not:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Engineers who create cloud infrastructure. Those are &lt;strong&gt;Site Reliability Engineers&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Kubernetes Gurus. Those are Kubernetes Gurus, not DevOps Engineers.&lt;/li&gt;
&lt;li&gt;Cloud Enthusiasts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In brief, a DevOps Engineer is someone with a broad enough skill set to bridge the gap between the development and operations teams, through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating the required infrastructure.&lt;/li&gt;
&lt;li&gt;Deploying the application.&lt;/li&gt;
&lt;li&gt;Providing Continuous Delivery Mechanisms.&lt;/li&gt;
&lt;li&gt;Automating all the processes previously done manually by the different departments: development, testing, security, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DevOps Engineers must have a strong background in system administration and/or software development. After all, whatever you need to automate, you must first be able to do manually!&lt;/p&gt;

&lt;h2&gt;
  
  
  How to apply DevOps in organizations
&lt;/h2&gt;

&lt;p&gt;Applying DevOps in organizations is a simple concept that is difficult to execute. As a matter of fact, DevOps entails the upskilling of all the stakeholders in a software company (Developers, QA, Technical Project Managers, System Administrators, UI/UX Designers, and business teams as well).&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;DevOps is still a confusing term for most of the tech industry. Several definitions have been created, without clear standards or straightforward value. &lt;/p&gt;

&lt;p&gt;DevOps is nothing more than a culture that further enhances the software delivery lifecycles.&lt;/p&gt;

&lt;p&gt;Becoming a successful DevOps Engineer requires you to have a strong knowledge in both development and operations.&lt;/p&gt;

&lt;p&gt;Applying DevOps successfully within an organization entails the upskilling of all the personnel in the organization, all-the-while applying the necessary set of tools and processes that promote collaboration, communication, and automation.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>sdlc</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Redis Hackathon: NK-Microservices</title>
      <dc:creator>Nicolas El Khoury</dc:creator>
      <pubDate>Sat, 06 Aug 2022 07:47:00 +0000</pubDate>
      <link>https://forem.com/devopsbeyondlimitslb/redis-hackathon-nk-microservices-31bm</link>
      <guid>https://forem.com/devopsbeyondlimitslb/redis-hackathon-nk-microservices-31bm</guid>
      <description>&lt;h3&gt;
  
  
  Overview of My Submission
&lt;/h3&gt;

&lt;p&gt;This project started out as an initiative to better explain how microservices work and how they are deployed. As a matter of fact, a few years ago, DevOps and Microservices were still a black box. In this regard, I developed this microservices project, composed of a Gateway Service, a Backend Service, ArangoDB (persistent storage), and Redis (caching). &lt;/p&gt;

&lt;p&gt;The main purpose of this project is to highlight the importance and complexities of distributed systems, and highlight how different components interact with one another.&lt;/p&gt;

&lt;p&gt;In addition, the project contains multiple deployment modes, with the aim of showcasing different ways of deploying a Microservices project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Submission Category:
&lt;/h3&gt;

&lt;p&gt;MEAN/MERN Mavericks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Language Used
&lt;/h3&gt;

&lt;p&gt;Node.JS&lt;/p&gt;

&lt;h3&gt;
  
  
  Link to Code
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Documentation: &lt;a href="https://github.com/automation-lb/nk-microservices-deployment"&gt;https://github.com/automation-lb/nk-microservices-deployment&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The repository above contains all the information needed to download and deploy the project.&lt;/p&gt;

</description>
      <category>redishackathon</category>
    </item>
    <item>
      <title>How I obtained all AWS associate level certificates in two weeks.</title>
      <dc:creator>Nicolas El Khoury</dc:creator>
      <pubDate>Thu, 04 Aug 2022 05:11:58 +0000</pubDate>
      <link>https://forem.com/aws-builders/how-i-obtained-all-aws-associate-level-certificates-in-two-weeks-3cip</link>
      <guid>https://forem.com/aws-builders/how-i-obtained-all-aws-associate-level-certificates-in-two-weeks-3cip</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Hi, my name is Nicolas, and I was able to pass all the AWS associate level exams in just two weeks (between 9/2/2021 and 25/2/2021), all while maintaining a full-time job and a hectic personal life. I tried to think of many catchy and trendy introductions about AWS Certificates, along with their advantages and whatnot. However, the internet is already filled with articles containing excellent explanations. Therefore, in this article, I will omit that part and cut straight to sharing my personal experience, along with some tips and tricks to better prepare for your associate level exams, and hopefully pass them with the right effort.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background Information
&lt;/h2&gt;

&lt;p&gt;The professional experience that I obtained in the past years played a major role in decreasing the complexity of understanding the exam material, and the time required to prepare for each exam. As a matter of fact, I have around 6 years of experience in providing DevOps Solutions, Cloud Infrastructure Solutions, and Software Architecture using the Microservices approach. &lt;/p&gt;

&lt;h2&gt;
  
  
  The exams
&lt;/h2&gt;

&lt;p&gt;Over the course of 2 weeks, I managed to prepare and pass all three AWS Associate level Certificates:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/certification/certified-sysops-admin-associate/" rel="noopener noreferrer"&gt;AWS Certified SysOps Administrator - Associate (SOA)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/certification/certified-solutions-architect-associate/" rel="noopener noreferrer"&gt;AWS Certified Solutions Architect - Associate (SAA)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/certification/certified-developer-associate/" rel="noopener noreferrer"&gt;AWS Certified Developer - Associate (DVA)&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  AWS Certified SysOps Administrator - Associate (SOA)
&lt;/h3&gt;

&lt;p&gt;This was the first exam I took. Needless to say, I was a little bit nervous and did not know what to expect. I bought two courses from Udemy: &lt;a href="https://www.udemy.com/course/aws-certified-sysops-administrator-associate-training/" rel="noopener noreferrer"&gt;AWS Certified SysOps Administrator Associate 2022 [SOA-C02]&lt;/a&gt; and the corresponding &lt;a href="https://www.udemy.com/course/aws-certified-sysops-administrator-associate-aws-practice-exams/" rel="noopener noreferrer"&gt;practice exams&lt;/a&gt;. Not knowing how to prepare, I started by watching every episode (at 1.75x speed). Surprisingly, I enjoyed it very much. Even though I was already quite experienced in most of the course's content (i.e., VPC, EC2, ASG, S3, RDS, etc.), I learned a lot, due to its fun and interactive approach, which focuses not only on theory and information, but also on interesting hands-on labs that further reinforce the topic being studied.&lt;/p&gt;

&lt;p&gt;I finished the course in about 5 hours, and directly solved the practice test associated with the course. I was able to pass it easily, thus boosting my confidence. Afterwards, I solved two other tests, which I also passed easily. That's when I decided that I was ready to take on the actual exam. I registered through the AWS Training Portal, and passed it with a score of &lt;strong&gt;926 / 1000&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famfhenv2xkd2twe6xxkr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famfhenv2xkd2twe6xxkr.jpeg" alt="Certificate - SOA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Certified Solutions Architect - Associate (SAA)
&lt;/h3&gt;

&lt;p&gt;After finishing the first exam, I became hesitant as to whether I should stop here or continue with the certificates. Before deciding, I went on and purchased another &lt;a href="https://www.udemy.com/course/aws-certified-solutions-architect-associate-hands-on/" rel="noopener noreferrer"&gt;Udemy course&lt;/a&gt; along with its &lt;a href="https://www.udemy.com/course/aws-certified-solutions-architect-associate-practice-tests-k/" rel="noopener noreferrer"&gt;practice exams&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Honestly, I felt lazy, especially after learning that the course is much longer and contains much more information than the AWS SysOps one. In addition, the content was quite similar to the previous course. However, the course contains a wonderful feature that I really appreciated: &lt;strong&gt;the exam cram&lt;/strong&gt;. At the end of every section, there are one or two videos containing all the important information that must be kept in mind for the exam. Therefore, I skipped the full content and focused only on the exam crams, which I finished in less than two hours. Finally, I solved 3 practice exams, which I passed easily, before passing the actual exam on February 19th with a score of 825 / 1000. &lt;/p&gt;

&lt;p&gt;Even though my score was relatively low, I can easily say that this exam was not more difficult than the AWS SysOps Associate exam.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flfg4nh4zjvc7chbtxfyb.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flfg4nh4zjvc7chbtxfyb.jpeg" alt="Certificate - SAA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Certified Developer - Associate (DVA)
&lt;/h3&gt;

&lt;p&gt;Having passed two exams in 10 days, I was determined to prepare and achieve the third and final associate level AWS certificate. Therefore, again, I purchased another &lt;a href="https://www.udemy.com/course/aws-certified-developer-associate-exam-training/" rel="noopener noreferrer"&gt;Udemy course&lt;/a&gt; along with its &lt;a href="https://www.udemy.com/course/aws-developer-associate-practice-exams/" rel="noopener noreferrer"&gt;practice exams&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Unfortunately, the AWS Certified Developer course content is quite different from the other two. As a matter of fact, this course focuses a lot on AWS services related to development (i.e., Amazon DynamoDB, AWS SQS, AWS Cognito, etc.). Evidently, I was not experienced with such services. I skimmed through the exam crams quickly and jumped directly into solving the practice exams, which I successfully failed miserably :). &lt;/p&gt;

&lt;p&gt;It was then that I realized that unless I familiarized myself very well with these AWS services, I would never be able to pass the exam. Therefore, I researched and read a lot about every AWS service that I felt uncomfortable with, and reviewed the explanation of every question that I failed to answer correctly. After around 12 hours of preparation, I felt ready for the exam, which I took and passed with a score of 987 / 1000.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgqstygxfh00ib2zzvj1.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgqstygxfh00ib2zzvj1.jpeg" alt="Certificate - DVA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion (and Tips)
&lt;/h2&gt;

&lt;p&gt;In brief, the AWS exams are not easy, but they are not impossible either. It is important to understand very well the concepts and use cases of every service, since most of the exam questions are scenario-based. Below is a set of tips to consider when preparing for AWS exams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Have a thorough understanding of, and preferably hands-on experience with, every AWS service covered.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an AWS account and play around with the AWS free tier. The AWS console is intuitive and easy to navigate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Research the basic concepts behind the AWS services. For instance, if you're learning about AWS ELBs, a good approach would be to understand what load balancers are, how they operate, and why we need them. This will give you great insight into why AWS services are built, operated, and used the way they are.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Preferably, have a few years of professional experience using AWS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There are plenty of tailored (and affordable) online courses. I highly recommend purchasing such courses, in addition to doing your own research.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Solve as many practice exams as possible. Doing so will give you more insight into the types of questions asked, and will often teach you information you might have missed during your preparation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Best of luck!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>certifications</category>
      <category>devops</category>
    </item>
    <item>
      <title>Different Ways of Deploying a Microservices Application on AWS</title>
      <dc:creator>Nicolas El Khoury</dc:creator>
      <pubDate>Wed, 12 Jan 2022 10:07:37 +0000</pubDate>
      <link>https://forem.com/aws-builders/different-ways-of-deploying-a-microservices-application-on-aws-18ge</link>
      <guid>https://forem.com/aws-builders/different-ways-of-deploying-a-microservices-application-on-aws-18ge</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Traditionally, applications were designed and implemented using a Monolithic architectural style, in which the application is developed and deployed as a single component, divided into multiple modules. Monolithic applications are very easy to develop and deploy.&lt;/p&gt;

&lt;p&gt;However, such an architectural pattern becomes a burden once the application becomes too large:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Difficult to manage and maintain, due to the large codebase.&lt;/li&gt;
&lt;li&gt;The entire application is built using one programming language; the system may therefore suffer from bottlenecks when performing tasks not suited to that language.&lt;/li&gt;
&lt;li&gt;Difficult to scale the application.&lt;/li&gt;
&lt;li&gt;Difficult to use container-based technologies (due to the large size of the application).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With the emergence of Cloud Computing, and the concept of on-demand provisioning of resources, a more suitable architectural pattern was required. Microservices rapidly gained popularity and became a widely used architectural pattern, especially for applications deployed on the cloud. Microservices are an architectural pattern that divides an application into smaller, independent, loosely coupled services that may communicate with each other via multiple protocols (e.g., HTTP, sockets, events, etc). Microservices provide the following advantages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Easy to maintain (smaller codebase in each service).&lt;/li&gt;
&lt;li&gt;Highly scalable.&lt;/li&gt;
&lt;li&gt;Extremely suitable for container-based technologies.&lt;/li&gt;
&lt;li&gt;Complements cloud solutions.&lt;/li&gt;
&lt;li&gt;Fault tolerance: if one microservice fails, the rest of the system remains functional.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbt7zrqtwc8u37irtvd1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbt7zrqtwc8u37irtvd1.png" alt="Microservices vs Monolithic"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Truly, the Microservices Architecture is a very powerful architectural pattern that goes hand in hand with the services provided by the cloud. However, a well-designed system depends on two factors: a robust design of the software, and of the underlying infrastructure. There exist multiple articles, tutorials, and courses that explain and promote the design and implementation of Microservices. &lt;/p&gt;

&lt;h1&gt;
  
  
  Microservices Example Project
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://github.com/nicolaselkhoury/nk-microservices-deployment" rel="noopener noreferrer"&gt;The NK Microservices project&lt;/a&gt; is a sample project built using the Microservices approach. This project will be used in this article to better illustrate the differences between deployment modes.&lt;/p&gt;

&lt;p&gt;This project is made of the following components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Gateway Microservice: A REST API Microservice built using SailsJS, which serves as a gateway and request router.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Backend Microservice: A REST API Microservice built using SailsJS, which serves as the first of many Microservices that can be incorporated and integrated with the aforementioned Gateway Service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Redis Database: An open source, in-memory data store, used for caching and for storing other ephemeral pieces of information such as JWT tokens.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Arango Database: A multi-model database used for storing persistent information.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The project requires all of the aforementioned components to be set up in order to function properly.&lt;/p&gt;

&lt;h1&gt;
  
  
  Deployment Modes
&lt;/h1&gt;

&lt;p&gt;Deploying applications on robust infrastructure is key to the success of the product. Evidently, each application serves a specific purpose and is designed uniquely using distinct technologies. In this regard, the underlying infrastructure for each application may differ based on application, business, and regulatory needs.&lt;/p&gt;

&lt;p&gt;Usually, when deploying software, several considerations must be taken into account, some of which include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Security: The system must always be secure from all sorts of unwanted access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability: The ability to scale up/down resources based on demand.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Availability: The system must be able to sustain failure, and avoid single points of failure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;System Observability: Tools that increase the system visibility (monitoring, logging, tracing, etc).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, in general, these different deployment modes can be categorized as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single Server Deployment.&lt;/li&gt;
&lt;li&gt;Multi Server Deployment.&lt;/li&gt;
&lt;li&gt;Deployment using Container Orchestration Tools.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The rest of this document explains the details of each deployment type, its advantages, and disadvantages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Single Server Deployment
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F334jvvlbxar3z70l414a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F334jvvlbxar3z70l414a.png" alt="Single Server Deployment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the title states, in this mode, a single server hosts all the different components. In this case, all the components of the NK Microservices project (Arango Database, Redis, Backend, and API Gateway) will be deployed and configured on the same server. A similar deployment can be found &lt;a href="https://github.com/nicolaselkhoury/nk-microservices-deployment/blob/master/deployment-modes/linuxMachine.md" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Typically, the inter-service communication between the components can be done using the local machine's IP. Despite this advantage, this mode of deployment is highly inadvisable, except for local and temporary testing. Below is a list of disadvantages, explaining why single server deployments should be avoided, especially for production workloads:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Time consuming: Properly installing and configuring each component may prove time consuming, especially as the number of components grows. The NK Microservices project is a relatively small project of 4 components, but imagine larger ones with 20+ components. In such cases, this deployment is definitely inefficient.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Non-scalable: Each component can be installed and configured to run as a standalone process. Evidently, on a single server, clustering databases and spinning up multiple replicas of the same process is not only worthless (the server remains a single point of failure), but will also degrade the server's performance by consuming more resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Huge downtime: Any configuration change, maintenance work, or the slightest error may bring the whole server (and application) down, until the server and all of its components are restored.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Single point of failure: No matter what disaster recovery mechanisms, security policies, or auto-repair mechanisms are in place, if the single server goes down, the whole application goes down with it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;May compromise the host machine: The different application components use the same server, share the same resources, and output content to the same disk. As the number of applications on the server grows, along with the demand on each one, the server is at risk of resource contention, where one component may consume the resources of the others, blocking them from operating safely.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, single server deployments are only advised for personal use, and short term testing of small software applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi Server Deployment
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo46qwr781wj424bt8r7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo46qwr781wj424bt8r7.png" alt="Multi Server Deployment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A better approach would be to divide the project's components across multiple servers. Monolithic applications usually follow a three-tier architectural style (frontend, backend, and database). A proper and intuitive deployment approach would be to deploy each layer on a separate server. In the case of microservices, a similar approach can be taken, by dedicating a small server to each component of the application. Such a deployment mode, while not the best, is definitely more convenient:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Separation of concerns: Processes are no longer sharing resources and risking each other's proper operation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Isolation: Each component may operate on its own, and use dedicated resources. A proper design of the application may allow partial system operation in case of partial downtimes. For instance, if the Redis database crashes, the rest of the system should be fully operational.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability: Database clustering and service replication are now possible by replicating the servers of each application independently from the other.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Evidently, multi server deployments are definitely a better alternative to their single server counterparts. However, this approach tends to become cumbersome at scale, especially when it comes to scalability and to properly managing the lifecycle of each component and each of its replicas. Handling a system of 4 components, with a couple of replicas per service, may be easy. However, consider a system of 30 components and 5 replicas per component: properly operating such a system becomes a real burden.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment using Container Orchestration Tools
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2op8sq6pkmc4y5gzhcm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2op8sq6pkmc4y5gzhcm6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Container Orchestration Tools provide a framework for managing microservices at scale. Such tools possess amazing capabilities and features to manage the lifecycle of distributed application components from a centralized command server, and can be used for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provisioning, configuration, resource allocation, and deployment of services.&lt;/li&gt;
&lt;li&gt;Container availability.&lt;/li&gt;
&lt;li&gt;Scaling resources in/out.&lt;/li&gt;
&lt;li&gt;Load balancing, traffic distribution, and routing.&lt;/li&gt;
&lt;li&gt;Container health monitoring.&lt;/li&gt;
&lt;li&gt;Managing inter-process communication.&lt;/li&gt;
&lt;/ul&gt;
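&lt;p&gt;As an illustrative sketch of what such a tool manages, below is a hypothetical Docker Swarm stack file for the four NK components. The image names, ports, and replica counts are assumptions for illustration and are not taken from the project's repository.&lt;/p&gt;

```yaml
# Hypothetical Swarm stack file; image names, ports, and replica
# counts are illustrative, not taken from the NK repository.
version: "3.8"
services:
  gateway:
    image: nk/gateway:latest
    ports:
      - "80:1337"
    deploy:
      replicas: 3              # scale out the entry point
      restart_policy:
        condition: on-failure  # container availability / self-healing
  backend:
    image: nk/backend:latest
    deploy:
      replicas: 2
  redis:
    image: redis:6
  arango:
    image: arangodb:3.8
    environment:
      - ARANGO_NO_AUTH=1       # for local testing only
```

&lt;p&gt;With a file like this, a single &lt;code&gt;docker stack deploy&lt;/code&gt; command provisions all the services on the cluster, services resolve each other by name over the overlay network, and the orchestrator continuously maintains the declared replica counts and restart behavior.&lt;/p&gt;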

&lt;p&gt;There exist several mature tools that are widely used in the market, namely Kubernetes, Docker Swarm, and Apache Mesos. Describing these tools is out of the scope of this article.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xqi12i079cn1vti3ppo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xqi12i079cn1vti3ppo.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagrams above clearly show how a simple Elastic Kubernetes Service (EKS) cluster on AWS permits the provisioning of several servers, and the deployment of the NK Microservices project on production-ready infrastructure composed of replicated servers and services. Such a deployment has several advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Easy scalability mechanisms for servers and containers.&lt;/li&gt;
&lt;li&gt;Improved governance and security controls.&lt;/li&gt;
&lt;li&gt;Better visibility on the system.&lt;/li&gt;
&lt;li&gt;Container health monitoring.&lt;/li&gt;
&lt;li&gt;Optimal resource allocation.&lt;/li&gt;
&lt;li&gt;Management of the container lifecycle.&lt;/li&gt;
&lt;li&gt;Cost optimization.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;In brief, this article summarized the different modes of deployment available for Microservices on AWS. Single server deployments are usually not advised, except for personal use and local, temporary testing. Multi server deployments represent a better alternative, especially at small scale. However, such a deployment may become cumbersome at scale, and should be replaced by a more convenient mode, such as using Container Orchestration Tools (at the expense of added complexity).&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Enforce MFA Access to the AWS Console</title>
      <dc:creator>Nicolas El Khoury</dc:creator>
      <pubDate>Tue, 16 Nov 2021 12:26:17 +0000</pubDate>
      <link>https://forem.com/aws-builders/enforce-mfa-access-to-the-aws-console-1ho1</link>
      <guid>https://forem.com/aws-builders/enforce-mfa-access-to-the-aws-console-1ho1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;One of the most important security concerns for entities using AWS is securing their AWS accounts. Evidently, managing an AWS account with 5 users may be somewhat of a walk in the park. However, as the usage of the account scales, and the number of users increases, managing the account becomes far from trivial. Several policies must be put in place in order to organize access between the different stakeholders. &lt;/p&gt;

&lt;p&gt;One of the most common security features is to enable &lt;a href="https://en.wikipedia.org/wiki/Multi-factor_authentication"&gt;Multi-Factor Authentication&lt;/a&gt; on the AWS account users.&lt;/p&gt;

&lt;p&gt;In this tutorial, we are going to set up a process that forbids any AWS user from using any service without:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Setting up an MFA device.&lt;/li&gt;
&lt;li&gt;Signing in using MFA. Any user that signs in without MFA must not be allowed to manage any resource on AWS.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Steps
&lt;/h2&gt;

&lt;p&gt;In this tutorial we will perform the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the required policy.&lt;/li&gt;
&lt;li&gt;Create a test user.&lt;/li&gt;
&lt;li&gt;Validate the setup.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Policy Creation
&lt;/h3&gt;

&lt;p&gt;Navigate to the &lt;strong&gt;Policies&lt;/strong&gt; section, under the &lt;strong&gt;IAM&lt;/strong&gt; service, and create the following policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowViewAccountInfo",
            "Effect": "Allow",
            "Action": [
                "iam:GetAccountPasswordPolicy",
                "iam:GetAccountSummary",       
                "iam:ListVirtualMFADevices"
            ],
            "Resource": "*"
        },       
        {
            "Sid": "AllowManageOwnPasswords",
            "Effect": "Allow",
            "Action": [
                "iam:ChangePassword",
                "iam:GetUser"
            ],
            "Resource": "arn:aws:iam::*:user/${aws:username}"
        },
        {
            "Sid": "AllowManageOwnAccessKeys",
            "Effect": "Allow",
            "Action": [
                "iam:CreateAccessKey",
                "iam:DeleteAccessKey",
                "iam:ListAccessKeys",
                "iam:UpdateAccessKey"
            ],
            "Resource": "arn:aws:iam::*:user/${aws:username}"
        },
        {
            "Sid": "AllowManageOwnSigningCertificates",
            "Effect": "Allow",
            "Action": [
                "iam:DeleteSigningCertificate",
                "iam:ListSigningCertificates",
                "iam:UpdateSigningCertificate",
                "iam:UploadSigningCertificate"
            ],
            "Resource": "arn:aws:iam::*:user/${aws:username}"
        },
        {
            "Sid": "AllowManageOwnSSHPublicKeys",
            "Effect": "Allow",
            "Action": [
                "iam:DeleteSSHPublicKey",
                "iam:GetSSHPublicKey",
                "iam:ListSSHPublicKeys",
                "iam:UpdateSSHPublicKey",
                "iam:UploadSSHPublicKey"
            ],
            "Resource": "arn:aws:iam::*:user/${aws:username}"
        },
        {
            "Sid": "AllowManageOwnGitCredentials",
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceSpecificCredential",
                "iam:DeleteServiceSpecificCredential",
                "iam:ListServiceSpecificCredentials",
                "iam:ResetServiceSpecificCredential",
                "iam:UpdateServiceSpecificCredential"
            ],
            "Resource": "arn:aws:iam::*:user/${aws:username}"
        },
        {
            "Sid": "AllowManageOwnVirtualMFADevice",
            "Effect": "Allow",
            "Action": [
                "iam:CreateVirtualMFADevice",
                "iam:DeleteVirtualMFADevice"
            ],
            "Resource": "arn:aws:iam::*:mfa/${aws:username}"
        },
        {
            "Sid": "AllowManageOwnUserMFA",
            "Effect": "Allow",
            "Action": [
                "iam:DeactivateMFADevice",
                "iam:EnableMFADevice",
                "iam:ListMFADevices",
                "iam:ResyncMFADevice"
            ],
            "Resource": "arn:aws:iam::*:user/${aws:username}"
        },
        {
            "Sid": "DenyAllExceptListedIfNoMFA",
            "Effect": "Deny",
            "NotAction": [
                "iam:CreateVirtualMFADevice",
                "iam:EnableMFADevice",
                "iam:GetUser",
                "iam:ListMFADevices",
                "iam:ListVirtualMFADevices",
                "iam:ResyncMFADevice",
                "sts:GetSessionToken",
                "iam:ChangePassword"
            ],
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {
                    "aws:MultiFactorAuthPresent": "false"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The policy above allows a user to perform only certain actions related to their own account, such as changing their password or setting up an MFA device. Moreover, the policy denies every other action if the user signed in without MFA. This allows any user to log in for the first time and set up their own MFA device.&lt;/p&gt;
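&lt;p&gt;For reference, the same policy can also be created and attached from the AWS CLI rather than the console. This is a sketch only: the policy name, file path, user name, and account ID below are placeholders to adapt to your account.&lt;/p&gt;

```shell
# Create the policy from a local copy of the JSON document above
# (the name "ForceMFA" and the file path are placeholders).
aws iam create-policy \
    --policy-name ForceMFA \
    --policy-document file://force-mfa.json

# Attach it to a user (replace 123456789012 with your account ID).
aws iam attach-user-policy \
    --user-name testuser \
    --policy-arn arn:aws:iam::123456789012:policy/ForceMFA
```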

&lt;p&gt;Give the policy a name and finalize its creation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test User Creation
&lt;/h3&gt;

&lt;p&gt;Navigate to &lt;strong&gt;Users&lt;/strong&gt; section, under the &lt;strong&gt;IAM&lt;/strong&gt; service, and add a new user with the following options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: testuser&lt;/li&gt;
&lt;li&gt;AWS Credential Type: an autogenerated password for the AWS Management Console, which should be changed on first sign-in.&lt;/li&gt;
&lt;li&gt;Under the permissions section, navigate to "Attach existing policies directly", search for the name of the policy created previously, and add it to the user.&lt;/li&gt;
&lt;li&gt;Attach the &lt;strong&gt;AmazonEC2FullAccess&lt;/strong&gt; policy as well, giving the user full access to the EC2 service.&lt;/li&gt;
&lt;li&gt;Leave the remaining options as defaults, and create the user.&lt;/li&gt;
&lt;li&gt;A password will be generated. Use this password to login to the console with the new user.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  MFA setup and Validation
&lt;/h3&gt;

&lt;p&gt;In order to validate the setup, perform the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Log in to the AWS Management Console as the new testuser. The first time, you will be able to log in using only the password. Moreover, you will be asked to change the password.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After successfully logging in, navigate to the EC2 instances console. You will be greeted with the following message "You are not authorized to perform this operation." forbidding you from listing any existing instances. Even though this user has full access to the EC2 service, managing EC2 resources is forbidden before signing in using MFA.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To setup MFA, navigate to the dashboard of the &lt;strong&gt;IAM&lt;/strong&gt; section. The dashboard will be filled with permission error messages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;strong&gt;Add MFA&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Assign MFA Device&lt;/strong&gt; --&amp;gt; Virtual MFA device.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download an MFA application (Google Authenticator, Microsoft Authenticator, etc.) on your phone, and complete the setup on AWS by scanning the QR code and then entering 2 consecutive MFA codes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Upon completion, you will receive the following message:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You have successfully assigned virtual MFA
This virtual MFA will be required during sign-in.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Sign out, and sign back in. This time, you will be prompted to enter the code from the authentication device.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After signing in, navigate again to the EC2 instances dashboard. The error message is now replaced by the list of EC2 instances present.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The end :)&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Proposed Infrastructure Setup on AWS for a Microservices Architecture (4)</title>
      <dc:creator>Nicolas El Khoury</dc:creator>
      <pubDate>Mon, 01 Nov 2021 10:36:02 +0000</pubDate>
      <link>https://forem.com/devopsbeyondlimitslb/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-4-epp</link>
      <guid>https://forem.com/devopsbeyondlimitslb/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-4-epp</guid>
      <description>&lt;h1&gt;
  
  
  Chapter 4: Deployment Strategies for Microservices.
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://dev.to/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-1-5c1n"&gt;Chapter 3&lt;/a&gt; promotes one way of deploying microservices, along with some best practices, in order to achieve security, scalability, and availability. In fact, an improper deployment of microservices may lead to numerous problems, namely, bottlenecks, single point of failure, increased downtimes, and many more.&lt;/p&gt;

&lt;p&gt;Now that the best practices and considerations have been discussed, what follows describes some of the different technologies that can be employed to manage and orchestrate Microservices.&lt;/p&gt;

&lt;p&gt;Microservices come with numerous advantages. One of the more important ones is the isolation (and independence) each Microservice provides from the rest of the system. Therefore, assume an application composed of three Microservices: a Catalog service, a Customer service, and a Payment service. If well architected, the failure of one Microservice should only impact its own part of the system, and not the system as a whole. For example, if the Payment service fails, payments will fail. However, the users should still be able to use the other functionalities of the system, provided by the other two services. Another advantage of Microservices is their scalability. Assume, in the same example above, the Catalog service receives much more traffic than the Payment service. In this case, it would make more sense to run more replicas of the Catalog service than of the Payment service.&lt;/p&gt;

&lt;p&gt;To ensure the aforementioned scalability and availability, proper orchestration tools must be employed. Before digging deeper into orchestration tools, below is a list of deployment modes for running microservices:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Physical Servers&lt;/strong&gt;: Installing and configuring servers is one way of deploying and running Microservices. However, with today's technologies, the incredible amount of resources offered by modern machines, and the wide adoption of cloud based solutions, managing your own physical servers is not quite the best idea. In addition to the lack of scalability options and the misuse of resources (be it under-utilization or over-utilization), managing physical servers on-premises comes with great Capital and Operational Expenditures. Moreover, each Microservice must run in its own isolated environment. Running multiple Microservices on the same physical server may hinder this isolation. Running each Microservice on an independent server, on the other hand, is not an optimal solution either.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Virtual Machines&lt;/strong&gt;: Dividing a physical machine into multiple virtual machines is definitely a better approach. In fact, each virtual machine spun up on a physical server acts as an independent, isolated environment, allowing multiple Microservices to be hosted on the same machine, and therefore better resource consumption. However, Virtual Machines come with their own disadvantages. Each virtual machine runs its own full copy of an Operating System, in addition to a virtualized copy of the underlying hardware. Evidently, this consumes an excessive amount of RAM and CPU. Virtual Machines, despite being a better solution than running actual servers, are still not quite suitable for hosting Microservices. Examples of Virtual Machine providers include, but are not limited to: VirtualBox, Hyper-V, and KVM.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Containers&lt;/strong&gt;: Similar to Virtual Machines, containers provide isolated environments that can run on a single host. However, containers share the physical server and the host's operating system. Indeed, each container running on a machine shares the Operating System, its binaries, and its libraries. Therefore, containers do not have to own a copy of the operating system, thus heavily reducing the use of the host's CPU and RAM. Evidently, containers are lightweight isolated environments that allow faster deployment, and the accommodation of a larger number of Microservices on the same host than Virtual Machines. Linux Containers (LXC) and Docker are examples of container technologies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Serverless&lt;/strong&gt;: As the name states, Serverless is an approach that abstracts away all kinds of server management from the users. Certainly, there exist servers on top of which the application runs. However, these servers are managed by the Cloud Providers (e.g., Amazon Web Services). Moreover, the functions hosted on serverless technologies are only charged for the time they are actually used. As opposed to the aforementioned three technologies, when the application is not being used (there exists no traffic), the application is not considered running. Serverless brings several advantages, namely high scalability, reduced charges, and no servers to manage and maintain. Unfortunately, the Serverless approach also comes with numerous disadvantages. In fact, since the functions are not running when idle, latencies may occur when first triggering a function. More importantly, each cloud provider provides its own set of libraries for writing applications using these technologies. Therefore, when changing cloud providers, or even technologies, one is at risk of performing major code modifications.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In summary, this article discussed the several deployment modes for software applications built using the Microservices approach. Evidently, the container and serverless technologies are newer and more suitable for Microservices than Virtual Machines and Physical Servers. The next chapter will discuss how Serverless and Containers can complement each other, in addition to the benefits of incorporating them together.&lt;/p&gt;

&lt;h2&gt;
  
  
  List of articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-1-503o"&gt;Introduction and Design Considerations&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-2-35g3"&gt;Overview of the Infrastructure and Components&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-1-5c1n"&gt;Deployment Strategy for Microservices&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/nicolaselkhoury/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-4-epp"&gt;Deployment Strategies for Microservices&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>microservices</category>
      <category>devops</category>
    </item>
    <item>
      <title>Proposed Infrastructure Setup on AWS for a Microservices Architecture (3)</title>
      <dc:creator>Nicolas El Khoury</dc:creator>
      <pubDate>Sun, 31 Oct 2021 06:03:50 +0000</pubDate>
      <link>https://forem.com/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-1-5c1n</link>
      <guid>https://forem.com/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-1-5c1n</guid>
      <description>&lt;h1&gt;
  
  
  Chapter 3: Deployment Strategy for Microservices.
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://dev.to/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-2-35g3"&gt;Chapter 2&lt;/a&gt; provided an overview of the proposed infrastructure, and explained the different components used, along with their advantages. However, the aforementioned infrastructure is only as robust as the environment hosting the microservices. In fact, an improper deployment of microservices may lead to numerous problems, namely bottlenecks, single points of failure, increased downtime, and many more.&lt;/p&gt;

&lt;p&gt;This chapter promotes one way of deploying microservices, along with some best practices, in order to achieve security, scalability, and availability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq314yn8uqoecedi1g9r.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq314yn8uqoecedi1g9r.jpeg" alt="AWS Region"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To further illustrate the proposed solution, the diagram above represents a Virtual Private Cloud (VPC) located in the Ireland region (eu-west-1) (the details of creating a VPC and its underlying components are out of the scope of this article). A VPC created in a region may span one or more Availability Zones (AZ). Each Availability Zone represents a distinct data center in the same region. While regions are isolated from one another, Availability Zones within a single region are connected to each other through low-latency links. For simplicity, assume that this region comprises two availability zones (eu-west-1a and eu-west-1b).&lt;/p&gt;

&lt;p&gt;In each AZ, a public subnet and a private subnet are created. In chapter 2, we clearly stated that all of the microservices and other backend components are to be created in private subnets, and never in public ones, even for components or services that must be reachable from the internet (e.g., frontend applications, API gateways, etc.). Although resources in private subnets cannot be accessed from the internet by default, attaching them to an internet-facing load balancer is enough to alleviate this limitation. Therefore, microservices that must be reached by users outside the VPC are attached to an internet-facing load balancer, and all the other ones are attached to an internal load balancer. Evidently, all microservices communicate with one another through the load balancers, and never through IPs. Such communication through load balancers not only ensures a balanced load across the multiple replicas of a service, but also has multiple other advantages that will be discussed later in this article.&lt;/p&gt;

&lt;p&gt;One must always take advantage of the existence of multiple AZs in a region. Thus, when deploying a microservice, it is always advisable to deploy multiple replicas of it, and span these replicas across multiple AZs. For instance, when deploying one microservice in the VPC above, a good approach would be to deploy two replicas of this service, one in each private subnet. In case of any failure, be it on the microservice, instance, or AZ level, the application still has another running instance ready to serve the requests, until the failing component is resolved.&lt;/p&gt;

&lt;p&gt;Consider the following three deployment scenarios, which illustrate the importance of this approach.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Microservice A is deployed as 1 replica in private subnet (a). A failure on the microservice level is enough to create an unwanted downtime. In fact, should this microservice fail, there exists no other replica to serve the requests, until this microservice comes back to life. &lt;strong&gt;(Failed approach)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Microservice A is deployed as 2 replicas, both in private subnet (b). While this deployment ensures more than one replica for the service, both replicas are located in the same subnet, and thus in the same availability zone. In this case, the application is protected from a failure on the microservice level, since another replica is ready to serve the requests. However, a failure on the AZ level is enough to bring the service down. &lt;strong&gt;(Failed approach)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Microservice A is deployed as 2 replicas, one in private subnet (a), and another in private subnet (b). With such a deployment, the only way to suffer downtime is for the whole VPC to go down, which is very unlikely. In fact, each replica of the service is located in a different datacenter. Thus, a failure on the microservice level, and even on the datacenter level, is mitigated. &lt;strong&gt;(Successful approach)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
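&lt;p&gt;The rule behind these scenarios is simple: a deployment survives a single-AZ outage only if its replicas span more than one Availability Zone. The following Python sketch captures this check (the AZ names mirror the example above; the data structure itself is purely illustrative, not an AWS API):&lt;/p&gt;

```python
# Minimal sketch: does a set of replicas survive the failure of any one AZ?
def survives_az_failure(replica_azs):
    """True if, for every single-AZ outage, at least one replica remains."""
    # Replicas confined to one AZ all disappear together when that AZ fails.
    return len(set(replica_azs)) >= 2

# Scenario 1: one replica in eu-west-1a -> any failure causes downtime.
print(survives_az_failure(["eu-west-1a"]))                # False
# Scenario 2: two replicas, both in eu-west-1b -> an AZ outage still kills it.
print(survives_az_failure(["eu-west-1b", "eu-west-1b"]))  # False
# Scenario 3: one replica per AZ -> microservice- and AZ-level failures are mitigated.
print(survives_az_failure(["eu-west-1a", "eu-west-1b"]))  # True
```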

&lt;p&gt;The three scenarios above illustrate the importance of replicating and spreading microservices as much as possible in order to provide reliability and fault tolerance. What follows explains the importance of attaching all the replicas of every microservice to a load balancer. Assume a service with two replicas. Attaching the replicas as a target group to a load balancer provides the following advantages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Load balancing across all replicas&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Detecting and reporting failed replicas&lt;/strong&gt;: The load balancer performs regular health checks on each replica. In case of a failure in one of the replicas, the load balancer stops forwarding requests to it. Alarms can be set using CloudWatch in order to report such incidents.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ability to easily scale replicas&lt;/strong&gt;: AWS provides multiple mechanisms to scale a service in and out, and easily adds the new instances to, or removes the terminated instances from, the target group.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service Discovery&lt;/strong&gt;: The load balancer alleviates the need for a separate service discovery tool, through attaching each distinct service to the load balancer as a target group. The Application Load Balancer (ALB) in particular supports multiple routing rules, such as host-based routing and path-based routing.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
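&lt;p&gt;The advantages above can be pictured with a toy model: a load balancer that health-checks its targets, balances requests across healthy replicas, and routes by path. The Python below is purely illustrative (the target and path names are made up, and this is not the AWS API):&lt;/p&gt;

```python
# Toy model (not AWS code) of the load balancer advantages listed above:
# round-robin balancing, health checks, and path-based routing.
from itertools import cycle

class ToyLoadBalancer:
    def __init__(self):
        self.target_groups = {}  # path prefix -> {replica: healthy?}
        self._cursors = {}       # path prefix -> round-robin iterator

    def register(self, path_prefix, replicas):
        self.target_groups[path_prefix] = {r: True for r in replicas}
        self._cursors[path_prefix] = cycle(replicas)

    def mark_unhealthy(self, path_prefix, replica):
        # A replica that fails its health checks stops receiving traffic.
        self.target_groups[path_prefix][replica] = False

    def route(self, path):
        for prefix, health in self.target_groups.items():
            if path.startswith(prefix):
                # Round-robin across the group, skipping unhealthy targets.
                for _ in range(len(health)):
                    replica = next(self._cursors[prefix])
                    if health[replica]:
                        return replica
                return None  # every replica in the target group has failed
        return None          # no routing rule matched the path

lb = ToyLoadBalancer()
lb.register("/products", ["backend-a", "backend-b"])
print(lb.route("/products/42"))  # requests alternate across healthy replicas
lb.mark_unhealthy("/products", "backend-a")
print(lb.route("/products/42"))  # backend-b now receives all the traffic
```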

&lt;p&gt;In brief, this article explained the best practices for choosing a mode of deployment for microservices. The proposed solution maximizes the security, availability, and reliability of the microservices. The next chapter will describe the different technologies on which microservices can be hosted.&lt;/p&gt;

&lt;h2&gt;
  
  
  List of articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-1-503o"&gt;Introduction and Design Considerations&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-2-35g3"&gt;Overview of the Infrastructure and Components&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-1-5c1n"&gt;Deployment Strategy for Microservices&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/nicolaselkhoury/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-4-epp"&gt;Deployment Strategies for Microservices&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>microservices</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Proposed Infrastructure Setup on AWS for a Microservices Architecture (2)</title>
      <dc:creator>Nicolas El Khoury</dc:creator>
      <pubDate>Tue, 26 Oct 2021 12:52:02 +0000</pubDate>
      <link>https://forem.com/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-2-35g3</link>
      <guid>https://forem.com/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-2-35g3</guid>
      <description>&lt;h1&gt;
  
  
  Chapter 2: Overview of the Infrastructure and Components.
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://dev.to/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-1-503o"&gt;Chapter 1&lt;/a&gt; of this series explained the advantages and disadvantages of a Microservices architecture, in addition to the design considerations required to implement an infrastructure that is robust and adequate enough to host such types of architectures.&lt;/p&gt;

&lt;p&gt;This chapter provides an overview of the proposed infrastructure, and explains the different components used, along with the advantages they provide.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbfhrywcqagk4qtjyx3vd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbfhrywcqagk4qtjyx3vd.png" alt="Proposed AWS Infrastructure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Virtual Private Cloud (VPC)&lt;/strong&gt;: A private network, within the public cloud, that is logically isolated (hidden) from other virtual networks. Each VPC may contain one or more subnets (logical divisions of the VPC) attached to it. There exist two types of subnets: public subnets, in which resources are exposed to the internet, and private subnets, which are not directly reachable from the internet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Application Load Balancer (ALB)&lt;/strong&gt;: An Application load balancer serves as a point of contact for clients. The load balancer evaluates, based on a set of predefined rules, each request that it receives, and redirects it to the appropriate target group. Moreover, the load balancer balances the load among the targets registered with a target group. A load balancer can be internet-facing (can be accessed from the internet), or internal (cannot be accessed from the internet). AWS provides three types of load balancers: the Application Load Balancer, the Network Load Balancer, and the Classic Load Balancer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon CloudWatch&lt;/strong&gt;: AWS’ monitoring tool for all the resources and applications on AWS. It collects and displays different metrics of resources deployed on AWS (e.g., CPU Utilization, Memory Consumption, Disk Read/Write, Throughput, 5XX, 4XX, 3XX, 2XX, etc.). CloudWatch alarms can be set on metrics in order to generate notifications (e.g., send an alarm email), or trigger actions automatically (e.g., autoscaling). Consider the following alarm: when the CPU Utilization of instance A averages higher than 65% for three minutes (metric threshold), send an email to a set of recipients (notification) and create a new replica of instance A (scaling action).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;: An AWS storage service to store and retrieve objects.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Cloudfront&lt;/strong&gt;: A Content Delivery Network (CDN) service that enhances the performance of content delivery (e.g., data, video, images, etc) to the end user through a network of edge locations. AWS Cloudfront can be attached to an Amazon S3 bucket, or any server that hosts data, caches the objects stored on these servers, and serves them to the users upon requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lambda Functions&lt;/strong&gt;: A serverless compute service that allows users to upload their code without having to manage servers; AWS handles all the provisioning of the underlying machines. Lambda functions are triggered by configured events, namely an object put on S3, a message sent to SQS, a schedule, etc.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
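&lt;p&gt;The CloudWatch alarm described above boils down to a simple evaluation rule, sketched below in Python (illustrative only, not the CloudWatch API; the threshold and period values mirror the example):&lt;/p&gt;

```python
# Sketch of the alarm logic: alarm when CPU utilization stays above 65%
# for 3 consecutive one-minute datapoints, then notify and scale out.
THRESHOLD = 65.0  # percent
PERIODS = 3       # consecutive one-minute datapoints required to alarm

def alarm_state(cpu_datapoints):
    """Return 'ALARM' if the last PERIODS datapoints all breach the threshold."""
    recent = cpu_datapoints[-PERIODS:]
    if len(recent) == PERIODS and all(p > THRESHOLD for p in recent):
        return "ALARM"
    return "OK"

print(alarm_state([40, 70, 72, 71]))  # ALARM -> send email, add a replica
print(alarm_state([70, 72, 50]))      # OK -> the breach was not sustained
```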

&lt;p&gt;The diagram above depicts an infrastructure in which multiple resources are deployed. Aside from S3, Cloudfront, and CloudWatch, all the resources are created and deployed inside the VPC. More importantly, all of these resources reside in private subnets, as can be seen later in this article. Resources spawned in private subnets only possess private IPs, and therefore cannot be accessed directly from outside the VPC. Such a setup maximizes security. In fact, a database launched in a public subnet and protected by a password, no matter how strong, is at high risk of being breached directly (e.g., through a simple brute-force attack). However, a database launched in a private subnet is practically nonexistent to anyone outside the VPC. Even if not secured with a password, the database is only accessible to users inside the private network.&lt;br&gt;
The communication between the application components, such as microservices and databases, passes through a load balancer. In more detail, each microservice, database, or any other component is attached as a target group to a load balancer. Components that must be accessible from the internet are attached to an internet-facing load balancer, whereas the backend system components are attached to an internal load balancer. This approach maximizes the availability, load balancing, and security of the system. To better explain the aforementioned, consider the following example:&lt;/p&gt;

&lt;p&gt;Assume an application composed of a front-end microservice, an API gateway microservice, a backend microservice, and a database. Typically, the frontend and API gateway services should be accessible from the internet. Therefore, they should be attached as two target groups to the internet-facing load balancer. On the other hand, the backend service and the database must never be accessed from the outside world, and are thus attached to the internal load balancer. Consider a user accessing the application and requesting a list of all the available products; below is the flow of requests that will traverse the network:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Request from the user to the internet-facing load balancer.&lt;/li&gt;
&lt;li&gt;The load balancer routes the request to the frontend application to load the page in the user’s browser.&lt;/li&gt;
&lt;li&gt;The front-end application returns a response to the load balancer with the page to be loaded.&lt;/li&gt;
&lt;li&gt;The load balancer returns the response back to the user.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now that the page is loaded on the user’s device, another request should be made by the page asking to fetch the available products.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Request from the user to the internet-facing load balancer.&lt;/li&gt;
&lt;li&gt;The load balancer routes the request to the api gateway.&lt;/li&gt;
&lt;li&gt;The api gateway routes the request, through the internal load balancer, to the backend service that is supposed to fetch the products from the database.&lt;/li&gt;
&lt;li&gt;The backend service queries, through the internal load balancer, the products from the database.&lt;/li&gt;
&lt;li&gt;The response returns back to the user following the same route taken by the request.&lt;/li&gt;
&lt;/ol&gt;
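&lt;p&gt;The request flow above can be sketched as a chain of function calls, where every hop between components goes through a load balancer rather than a direct IP. The Python below is a toy illustration (component names and the canned product list are made up):&lt;/p&gt;

```python
# Toy sketch of the flow: internet-facing LB -> API gateway -> internal LB
# -> backend -> internal LB -> database, with the response retracing the route.

def database(query):
    return ["keyboard", "mouse"]  # canned product list standing in for a DB

def backend(request):
    # The backend queries the database through the internal load balancer.
    return internal_lb("database", request)

def api_gateway(request):
    # The gateway forwards to the backend through the internal load balancer.
    return internal_lb("backend", request)

def internal_lb(target, request):
    # Internal LB: dispatches to backend components, never exposed publicly.
    return {"backend": backend, "database": database}[target](request)

def internet_facing_lb(request):
    # Public entry point: the only component users talk to directly.
    return api_gateway(request)

print(internet_facing_lb("GET /products"))  # ['keyboard', 'mouse']
```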

&lt;p&gt;If the page loaded contains files available in an S3 bucket, that is synced with AWS Cloudfront, the following steps are performed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Request from the user to the Cloudfront service requesting a file.&lt;/li&gt;
&lt;li&gt;Cloudfront checks if it possesses the file in one of the edge locations. If found, the file is directly served back to the user.&lt;/li&gt;
&lt;li&gt;If missing, Cloudfront fetches the file from S3, returns it back to the user, and caches it.&lt;/li&gt;
&lt;/ol&gt;
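&lt;p&gt;The three steps above amount to a classic edge-cache lookup, sketched below (illustrative Python; the bucket contents and function names are made up, not real AWS APIs):&lt;/p&gt;

```python
# Sketch of the Cloudfront edge-cache behaviour described in the steps above.
s3_bucket = {"logo.png": b"PNG-bytes"}  # pretend origin store (S3)
edge_cache = {}                         # pretend edge location cache

def cloudfront_get(key):
    if key in edge_cache:      # step 2: cache hit, serve directly to the user
        return edge_cache[key], "hit"
    obj = s3_bucket[key]       # step 3: cache miss -> fetch the file from S3,
    edge_cache[key] = obj      # cache it at the edge,
    return obj, "miss"         # and return it to the user

_, first = cloudfront_get("logo.png")
_, second = cloudfront_get("logo.png")
print(first, second)  # miss hit
```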

&lt;p&gt;Attaching the services as target groups to the load balancers provides multiple advantages (which will be explored in detail in the following chapter), namely security, by only allowing requests that match certain criteria to pass, and load balancing, by distributing the requests across all the registered replicas of the same service.&lt;/p&gt;

&lt;p&gt;In summary, this article provided a brief overview of the proposed infrastructure, how it operates, and the advantages it provides. The next chapter will describe in detail how microservices should be deployed in a secure, available, and scalable fashion, in addition to setting autoscaling policies and alarms.&lt;/p&gt;

&lt;h2&gt;
  
  
  List of articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-1-503o"&gt;Introduction and Design Considerations&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-2-35g3"&gt;Overview of the Infrastructure and Components&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/aws-builders/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-1-5c1n"&gt;Deployment Strategy for Microservices&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/nicolaselkhoury/proposed-infrastructure-setup-on-aws-for-a-microservices-architecture-4-epp"&gt;Deployment Strategies for Microservices&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>microservices</category>
      <category>devops</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
