<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: chester htoo</title>
    <description>The latest articles on Forem by chester htoo (@halchester).</description>
    <link>https://forem.com/halchester</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F664800%2F78b3a2de-e9a0-4a50-91cc-ca8e52cd8a9b.png</url>
      <title>Forem: chester htoo</title>
      <link>https://forem.com/halchester</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/halchester"/>
    <language>en</language>
    <item>
      <title>Deploying container applications on AWS with CI/CD pipelines</title>
      <dc:creator>chester htoo</dc:creator>
      <pubDate>Fri, 10 Nov 2023 21:53:41 +0000</pubDate>
      <link>https://forem.com/halchester/deploying-container-applications-on-aws-with-cicd-pipelines-5d53</link>
      <guid>https://forem.com/halchester/deploying-container-applications-on-aws-with-cicd-pipelines-5d53</guid>
      <description>&lt;p&gt;In this blog, we will be creating a cloud environment, specifically on Amazon Web Services, to deploy a web application, which is a simple Vite application. The Vite application will be containerised using Docker, and will be pushed into our Amazon ECR Registry, which will later be used by Amazon ECS task definition to run a service on ECS Fargate. We will also be setting up a CI/CD pipeline using Github actions so that whenever a change is committed to &lt;code&gt;main&lt;/code&gt; branch (production), it will trigger an automatic docker image build process and update the ECS task to use the latest docker image from our ECR repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;The architecture looks something like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgx56bi5aawaiams0m9ch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgx56bi5aawaiams0m9ch.png" alt="Architecture" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's dive right into it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;p&gt;Before we get into the good stuff, we first need to make sure we have the required tools on our local machine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS Cli&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS Account&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com" rel="noopener noreferrer"&gt;Github Account&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
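
&lt;p&gt;To quickly confirm everything is installed, you can check each tool's version from your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --version
docker --version
terraform -version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;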

&lt;h2&gt;
  
  
  Folder Structure
&lt;/h2&gt;

&lt;p&gt;We will be using a (sort-of) monorepo approach for this project: a &lt;code&gt;terraform&lt;/code&gt; folder for our infrastructure, an &lt;code&gt;app&lt;/code&gt; folder for our web application, and a &lt;code&gt;.github/workflows&lt;/code&gt; folder for our GitHub Actions workflow files. So it will look something like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;your-project
├── .github
│   └── workflows
├── app
└── terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating the Web Application
&lt;/h2&gt;

&lt;p&gt;We don't need a fancy, fully functional web application for this project, just a simple one that we can deploy, build a Docker image from, and tweak to test our CI/CD pipeline. So we will use a simple React boilerplate created with &lt;a href="https://vitejs.dev/" rel="noopener noreferrer"&gt;Vite&lt;/a&gt;. You can create your own or use any other boilerplate you like. Let's go into our working directory (any folder you like) and create a new Vite application. I will be using &lt;code&gt;pnpm&lt;/code&gt; for this project, and &lt;a href="https://vitejs.dev/guide/#scaffolding-your-first-vite-project" rel="noopener noreferrer"&gt;here&lt;/a&gt; is a guide to scaffolding your first Vite project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pnmp create vite app/
cd app/
pnpm install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll clean up a few things in the &lt;code&gt;App.tsx&lt;/code&gt; file and run &lt;code&gt;pnpm run dev&lt;/code&gt;. If everything is working fine, you should be able to see the web application running on &lt;code&gt;localhost:3000&lt;/code&gt;.&lt;/p&gt;
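
&lt;p&gt;One assumption worth calling out: by default Vite's dev server listens on port &lt;code&gt;5173&lt;/code&gt; and on localhost only, while the Dockerfile later in this post expects port &lt;code&gt;8000&lt;/code&gt;. A minimal &lt;code&gt;vite.config.ts&lt;/code&gt; to match could look like this (a sketch; check the linked repo for the exact config):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

// host: true binds the dev server to 0.0.0.0 so it is reachable
// from outside a Docker container; port 8000 matches the EXPOSE
// line in the Dockerfile.
export default defineConfig({
  plugins: [react()],
  server: {
    host: true,
    port: 8000,
  },
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;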

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Resources&lt;/strong&gt;: Don't worry about the code; all of it can be found on GitHub &lt;a href="https://github.com/halchester/ecr-ecs-ghactions" rel="noopener noreferrer"&gt;here&lt;/a&gt;. I will also link all the resources either in the code comments or at the end of the blog. Don't forget to star the repo and share this article if you find it useful 😄&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ft9s2xjhrvvsevx7h1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ft9s2xjhrvvsevx7h1w.png" alt="Web application running" width="800" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cool! Now we've got the application up and running.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dockerizing the Web Application
&lt;/h2&gt;

&lt;p&gt;Now, let's create a &lt;code&gt;Dockerfile&lt;/code&gt; to build a Docker image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; --platform=linux/amd64 node:18-alpine&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"npm"&lt;/span&gt; ,&lt;span class="s2"&gt;"install"&lt;/span&gt;, &lt;span class="s2"&gt;"-g"&lt;/span&gt;,&lt;span class="s2"&gt;"pnpm"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json /vite-app/&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . /vite-app&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /vite-app&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"pnpm"&lt;/span&gt;, &lt;span class="s2"&gt;"install"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["pnpm", "dev"]&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's go through the Dockerfile line by line.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;FROM --platform=linux/amd64 node:18-alpine&lt;/code&gt;: We are using the &lt;code&gt;node:18-alpine&lt;/code&gt; image as our base image. We also specify the platform as &lt;code&gt;linux/amd64&lt;/code&gt; because this image will run on ECS Fargate, which is a Linux (amd64) environment. If we don't specify the platform, Docker defaults to the build machine's platform, which on my M1 Mac is &lt;code&gt;arm64&lt;/code&gt;, and the resulting image will fail to run on ECS Fargate.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RUN ["npm" ,"install", "-g","pnpm"]&lt;/code&gt;: We are installing &lt;code&gt;pnpm&lt;/code&gt; globally. We will be using &lt;code&gt;pnpm&lt;/code&gt; to install our dependencies. You can use &lt;code&gt;npm&lt;/code&gt; or &lt;code&gt;yarn&lt;/code&gt; if you like.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COPY package.json /vite-app/&lt;/code&gt;: We are copying the &lt;code&gt;package.json&lt;/code&gt; file to the &lt;code&gt;/vite-app&lt;/code&gt; directory in our docker image.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COPY . /vite-app&lt;/code&gt;: We are copying the rest of the files to the &lt;code&gt;/vite-app&lt;/code&gt; directory in our docker image.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;WORKDIR /vite-app&lt;/code&gt;: We are setting the working directory to &lt;code&gt;/vite-app&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RUN ["pnpm", "install"]&lt;/code&gt;: We are installing the dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;CMD ["pnpm", "dev"]&lt;/code&gt;: We are running the &lt;code&gt;dev&lt;/code&gt; script.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;EXPOSE 8000&lt;/code&gt;: We are exposing the container port &lt;code&gt;8000&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We also don't want to include &lt;code&gt;node_modules&lt;/code&gt; in our docker image, so we will add it to our &lt;code&gt;.dockerignore&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_modules
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's build our Docker image with &lt;code&gt;docker build -t vite-app:latest .&lt;/code&gt; (make sure you are in the &lt;code&gt;app&lt;/code&gt; directory). You should see the image listed when you run &lt;code&gt;docker images&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;REPOSITORY             TAG              IMAGE ID       CREATED        SIZE
vite-app               latest           4dd38de114b8   42 hours ago   390MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's run the image with &lt;code&gt;docker run -p 3000:8000 vite-app:latest&lt;/code&gt;, which maps container port &lt;code&gt;8000&lt;/code&gt; to local port &lt;code&gt;3000&lt;/code&gt;. If you have set everything up correctly, you should see the web application running on &lt;code&gt;localhost:3000&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up AWS Environment
&lt;/h2&gt;

&lt;p&gt;Now, let's set up our AWS environment. We will use Terraform to create our infrastructure, with the following main resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon ECR private repository&lt;/li&gt;
&lt;li&gt;Amazon ECS cluster&lt;/li&gt;
&lt;li&gt;Amazon ECS task definition&lt;/li&gt;
&lt;li&gt;Amazon ECS service&lt;/li&gt;
&lt;li&gt;Some IAM roles and policies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's first create the files we plan to use in our &lt;code&gt;terraform/&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd terraform/
touch main.tf providers.tf variables.tf outputs.tf main.tfvars iam.tf sg.tf vpc.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll be using the AWS provider for Terraform, so let's add the following to our &lt;code&gt;providers.tf&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"5.22.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"eu-west-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our &lt;code&gt;main.tf&lt;/code&gt;, we will create our core resources: the Amazon ECR private repository, the Amazon ECS cluster, the ECS task definition, and the ECS service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;
&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"vite_app_repository"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-aws-modules/ecr/aws"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.6.0"&lt;/span&gt;

  &lt;span class="nx"&gt;repository_name&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"vite-app-repository"&lt;/span&gt;
  &lt;span class="nx"&gt;repository_type&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"private"&lt;/span&gt;
  &lt;span class="nx"&gt;repository_image_tag_mutability&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"IMMUTABLE"&lt;/span&gt;
  &lt;span class="nx"&gt;create_lifecycle_policy&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="c1"&gt;# only keep the latest 5 images&lt;/span&gt;
  &lt;span class="nx"&gt;repository_lifecycle_policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="nx"&gt;rules&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;rulePriority&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="nx"&gt;description&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Expire images by count"&lt;/span&gt;
        &lt;span class="nx"&gt;selection&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;tagStatus&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"any"&lt;/span&gt;
          &lt;span class="nx"&gt;countType&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"imageCountMoreThan"&lt;/span&gt;
          &lt;span class="nx"&gt;countNumber&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nx"&gt;action&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"expire"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;merge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;common_tags&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ecs_cluster"&lt;/span&gt; &lt;span class="s2"&gt;"vite_app_cluster"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_cluster_name&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;common_tags&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ecs_task_definition"&lt;/span&gt; &lt;span class="s2"&gt;"vite_app_runner"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;family&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_task_definition_name&lt;/span&gt;
  &lt;span class="nx"&gt;requires_compatibilities&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"FARGATE"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;network_mode&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"awsvpc"&lt;/span&gt;
  &lt;span class="nx"&gt;cpu&lt;/span&gt;                      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"512"&lt;/span&gt;
  &lt;span class="nx"&gt;memory&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1024"&lt;/span&gt;
  &lt;span class="nx"&gt;execution_role_arn&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_task_execution_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
  &lt;span class="nx"&gt;container_definitions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_container_name&lt;/span&gt;
      &lt;span class="nx"&gt;image&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vite_app_repository&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;repository_url&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:latest"&lt;/span&gt;
      &lt;span class="nx"&gt;cpu&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;512&lt;/span&gt;
      &lt;span class="nx"&gt;memory&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;
      &lt;span class="nx"&gt;portMappings&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;containerPort&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8000&lt;/span&gt;
          &lt;span class="nx"&gt;hostPort&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8000&lt;/span&gt;
          &lt;span class="nx"&gt;protocol&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="nx"&gt;essential&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;common_tags&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ecs_service"&lt;/span&gt; &lt;span class="s2"&gt;"vite_app_service"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_service_name&lt;/span&gt;
  &lt;span class="nx"&gt;cluster&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_ecs_cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vite_app_cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;task_definition&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_ecs_task_definition&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vite_app_runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
  &lt;span class="nx"&gt;launch_type&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"FARGATE"&lt;/span&gt;
  &lt;span class="nx"&gt;desired_count&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;

  &lt;span class="nx"&gt;network_configuration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;subnets&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vite_app_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_subnets&lt;/span&gt;
    &lt;span class="nx"&gt;security_groups&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web_access_sg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;security_group_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;assign_public_ip&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;common_tags&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, let's go through our &lt;code&gt;main.tf&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;First, we create an ECR repository using the &lt;code&gt;terraform-aws-modules/ecr/aws&lt;/code&gt; module, with a lifecycle policy that keeps only the latest 5 images. We then create an ECS cluster, an ECS task definition, and an ECS service, plus a security group that allows traffic from the internet to reach the service (see below).&lt;/p&gt;

&lt;p&gt;We also need some IAM permissions so that the ECS task can pull the Docker image from our ECR repository, as well as an IAM user that our GitHub Actions workflow will use to push images to ECR. We will create these roles and policies in our &lt;code&gt;iam.tf&lt;/code&gt; file. I won't paste the code in the blog, but you can find it &lt;a href="https://github.com/halchester/ecr-ecs-ghactions/blob/main/terraform/iam.tf" rel="noopener noreferrer"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt; on GitHub. We also need a security group for our ECS service to allow traffic from the internet, which we will define in &lt;code&gt;sg.tf&lt;/code&gt;. You can find the security group resource &lt;a href="https://github.com/halchester/ecr-ecs-ghactions/blob/main/terraform/sg.tf" rel="noopener noreferrer"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt; and the VPC resource &lt;a href="https://github.com/halchester/ecr-ecs-ghactions/blob/main/terraform/vpc.tf" rel="noopener noreferrer"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;
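
&lt;p&gt;As a rough sketch (the linked &lt;code&gt;iam.tf&lt;/code&gt; is the source of truth), the task execution role that &lt;code&gt;main.tf&lt;/code&gt; references generally looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Trust policy letting ECS tasks assume the role
data "aws_iam_policy_document" "ecs_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ecs_task_execution_role" {
  name               = "ecs-task-execution-role"
  assume_role_policy = data.aws_iam_policy_document.ecs_assume_role.json
}

# AWS-managed policy granting the ECR pull and CloudWatch Logs
# permissions the task needs at startup
resource "aws_iam_role_policy_attachment" "ecs_task_execution" {
  role       = aws_iam_role.ecs_task_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;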

&lt;p&gt;We also need a few variables, so let's create a &lt;code&gt;variables.tf&lt;/code&gt; file (&lt;a href="https://github.com/halchester/ecr-ecs-ghactions/blob/main/terraform/variables.tf" rel="noopener noreferrer"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt;); their values would normally be provided per environment. Since we only have one environment for now, we will store them in a &lt;code&gt;main.tfvars&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="nx"&gt;aws_region&lt;/span&gt;               &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"eu-west-1"&lt;/span&gt;
&lt;span class="nx"&gt;ecs_task_definition_name&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"vite-app-runner"&lt;/span&gt;
&lt;span class="nx"&gt;ecs_container_name&lt;/span&gt;       &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"vite-app"&lt;/span&gt;
&lt;span class="nx"&gt;ecs_cluster_name&lt;/span&gt;         &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"vite-app-cluster"&lt;/span&gt;
&lt;span class="nx"&gt;ecs_service_name&lt;/span&gt;         &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"vite-app-service"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will also need to create an &lt;code&gt;outputs.tf&lt;/code&gt; file to output some of the resources that we will be using later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"ecr_repo_url"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vite_app_repository&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;repository_url&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"github_actions_user_access_key_id"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_access_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;github_actions_user_access_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"github_actions_user_access_secret_key"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_access_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;github_actions_user_access_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;secret&lt;/span&gt;
  &lt;span class="nx"&gt;sensitive&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we've set up the AWS infrastructure, let's run &lt;code&gt;terraform init&lt;/code&gt; to initialise our Terraform project. Then, we can run &lt;code&gt;terraform plan -var-file=main.tfvars&lt;/code&gt; to see what resources will be created. If everything looks good, we can run &lt;code&gt;terraform apply -var-file=main.tfvars&lt;/code&gt; to create the resources.&lt;br&gt;
&lt;/p&gt;
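
&lt;p&gt;Put together, the commands run from the &lt;code&gt;terraform/&lt;/code&gt; directory are:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform plan -var-file=main.tfvars
terraform apply -var-file=main.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;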

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Apply complete! Resources: 26 added, 0 changed, 0 destroyed.

Outputs:

ecr_repo_url = "********.dkr.ecr.eu-west-1.amazonaws.com/vite-app-repository"
github_actions_user_access_key_id = "**********"
github_actions_user_access_secret_key = &amp;lt;sensitive&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we have our AWS infrastructure in place. We can now push our docker image to our ECR repository and run our ECS service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pushing Docker Image to ECR
&lt;/h2&gt;

&lt;p&gt;Now, let's push our Docker image to the ECR repository. First, log in to ECR with &lt;code&gt;aws ecr get-login-password --region your-region | docker login --username AWS --password-stdin your-account-id.dkr.ecr.your-region.amazonaws.com&lt;/code&gt;. Then, tag the image with &lt;code&gt;docker tag vite-app:latest your-account-id.dkr.ecr.your-region.amazonaws.com/vite-app-repository:latest&lt;/code&gt; and push it with &lt;code&gt;docker push your-account-id.dkr.ecr.your-region.amazonaws.com/vite-app-repository:latest&lt;/code&gt;. If everything is working, you should see the image in your ECR repository. &lt;strong&gt;(You can also find these commands in the console by opening your ECR repository and clicking &lt;code&gt;View push commands&lt;/code&gt;.)&lt;/strong&gt;&lt;/p&gt;
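
&lt;p&gt;The same steps as a script, with &lt;code&gt;your-region&lt;/code&gt; and &lt;code&gt;your-account-id&lt;/code&gt; as placeholders for your own values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Authenticate Docker against the private ECR registry
aws ecr get-login-password --region your-region \
  | docker login --username AWS --password-stdin your-account-id.dkr.ecr.your-region.amazonaws.com

# Tag the local image with the ECR repository URL, then push it
docker tag vite-app:latest your-account-id.dkr.ecr.your-region.amazonaws.com/vite-app-repository:latest
docker push your-account-id.dkr.ecr.your-region.amazonaws.com/vite-app-repository:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;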

&lt;p&gt;If everything works as expected, you should see a task running in your ECS cluster. If you open the task's &lt;code&gt;Networking&lt;/code&gt; tab, you will see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp38i13ro4ym3qwomhaq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp38i13ro4ym3qwomhaq4.png" alt="ECS Task" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you open that public IP in your browser on port &lt;code&gt;8000&lt;/code&gt;, you should see the web application running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3v7nr9uqhd9xkprhy0x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3v7nr9uqhd9xkprhy0x.png" alt="Web application" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub Actions
&lt;/h2&gt;

&lt;p&gt;We wouldn't want to do this entire process manually every time we change the web application, so let's create a GitHub Actions workflow to do it for us. The workflow will detect changes pushed to the &lt;code&gt;main&lt;/code&gt; branch, build the Docker image, push it to our private ECR repository, and update the ECS service to use the latest image. We will create a &lt;code&gt;deploy.yml&lt;/code&gt; file in our &lt;code&gt;.github/workflows&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Push image to Amazon ECR and deploy to ECS&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout code&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4.1.1&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS Credentials&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v4.0.1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;aws-access-key-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY }}&lt;/span&gt;
          &lt;span class="na"&gt;aws-secret-access-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_KEY }}&lt;/span&gt;
          &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_REGION }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Login to Amazon ECR&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/amazon-ecr-login@v2.0.1&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;login-ecr&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set outputs&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vars&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo "sha_short=$(git rev-parse --short HEAD)" &amp;gt;&amp;gt; $GITHUB_OUTPUT&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build, tag and Push image to Amazon ECR&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build-and-tag-docker-image&lt;/span&gt;
        &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./app&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;ECR_REGISTRY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.login-ecr.outputs.registry }}&lt;/span&gt;
          &lt;span class="na"&gt;ECR_REPOSITORY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ECR_REPOSITORY }}&lt;/span&gt;
          &lt;span class="na"&gt;IMAGE_TAG&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git-${{ steps.vars.outputs.sha_short }}&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .&lt;/span&gt;
          &lt;span class="s"&gt;docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG&lt;/span&gt;
          &lt;span class="s"&gt;echo "IMAGE_URI=${{ env.ECR_REGISTRY }}/${{ env.ECR_REPOSITORY }}:${{ env.IMAGE_TAG }}" &amp;gt;&amp;gt; $GITHUB_OUTPUT&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Download task definition&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;aws ecs describe-task-definition \&lt;/span&gt;
          &lt;span class="s"&gt;--task-definition ${{ secrets.AWS_ECS_TASK_DEFINITION_NAME}} \&lt;/span&gt;
          &lt;span class="s"&gt;--query taskDefinition \&lt;/span&gt;
          &lt;span class="s"&gt;--output json &amp;gt; taskDefinition.json&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Fill in the new image ID in the Amazon ECS task definition&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;update-task-def&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/amazon-ecs-render-task-definition@v1.1.3&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;task-definition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;taskDefinition.json&lt;/span&gt;
          &lt;span class="na"&gt;container-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ECS_CONTAINER_NAME }}&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.build-and-tag-docker-image.outputs.IMAGE_URI }}&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy Amazon ECS task definition&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy-ecs&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/amazon-ecs-deploy-task-definition@v1.4.11&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;task-definition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.update-task-def.outputs.task-definition }}&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{secrets.AWS_ECS_SERVICE_NAME}}&lt;/span&gt;
          &lt;span class="na"&gt;cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{secrets.AWS_ECS_CLUSTER_NAME}}&lt;/span&gt;
          &lt;span class="na"&gt;wait-for-service-stability&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's go through each step.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;With the checkout action, we are checking out the code from the repository.&lt;/li&gt;
&lt;li&gt;We need programmatic access to our AWS account (which is why we created a &lt;code&gt;github-actions-user&lt;/code&gt; IAM user in our &lt;code&gt;iam.tf&lt;/code&gt; file earlier &lt;a href="https://github.com/halchester/ecr-ecs-ghactions/blob/main/terraform/iam.tf#L28C1-L28C1" rel="noopener noreferrer"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt;), so we configure our AWS credentials using the &lt;code&gt;aws-actions/configure-aws-credentials&lt;/code&gt; action.&lt;/li&gt;
&lt;li&gt;We then log in to our private ECR registry using the &lt;code&gt;aws-actions/amazon-ecr-login&lt;/code&gt; action.&lt;/li&gt;
&lt;li&gt;We then build our Docker image, tag it, and push it to our ECR repository. We use the &lt;code&gt;git rev-parse --short HEAD&lt;/code&gt; command to get the short SHA of the commit that triggered the workflow, and use that short SHA as the Docker image tag so every image can be traced back to a commit.&lt;/li&gt;
&lt;li&gt;We then download our current ECS task definition by calling the &lt;code&gt;aws ecs describe-task-definition&lt;/code&gt; command and saving the output to a file called &lt;code&gt;taskDefinition.json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;We then render a new task definition that points at the new Docker image using the &lt;code&gt;aws-actions/amazon-ecs-render-task-definition&lt;/code&gt; action, and deploy it to our service with the &lt;code&gt;aws-actions/amazon-ecs-deploy-task-definition&lt;/code&gt; action.&lt;/li&gt;
&lt;/ol&gt;
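&lt;p&gt;As an aside, the "Set outputs" step relies on how Github Actions passes data between steps: a step appends &lt;code&gt;key=value&lt;/code&gt; lines to the file named by the &lt;code&gt;GITHUB_OUTPUT&lt;/code&gt; environment variable, and later steps read them back through the &lt;code&gt;steps.STEP_ID.outputs.KEY&lt;/code&gt; expression. A minimal local sketch of that mechanism (the short SHA value is made up):&lt;/p&gt;

```shell
# Simulate the step-output mechanism: the runner points each step at a file
# via GITHUB_OUTPUT and collects key=value lines written to it.
GITHUB_OUTPUT=$(mktemp)

# What our "Set outputs" step writes (with a made-up short SHA):
echo "sha_short=abc1234" >> "$GITHUB_OUTPUT"

# What a later step effectively reads back as steps.vars.outputs.sha_short:
grep '^sha_short=' "$GITHUB_OUTPUT" | cut -d= -f2
```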

&lt;p&gt;After a few minutes, you should see the new Docker image in your ECR repository, and the ECS task definition updated to use it.&lt;/p&gt;

&lt;p&gt;You will notice a few secrets referenced in this workflow. We store them as Github repository secrets, which you can manage under &lt;code&gt;Settings&lt;/code&gt; &amp;gt; &lt;code&gt;Secrets&lt;/code&gt; &amp;gt; &lt;code&gt;New repository secret&lt;/code&gt;. We will be storing the following secrets:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qyigp6gu2t9j9lcvq75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qyigp6gu2t9j9lcvq75.png" alt="Github secrets" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most of these secrets mirror values from our &lt;code&gt;terraform/main.tfvars&lt;/code&gt; file and are passed into the Github Actions workflow as variables. We also need to store our AWS credentials as repository secrets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ftvd3ye7zoe5i3maw0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ftvd3ye7zoe5i3maw0p.png" alt="Github actions" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great, now that everything's in place, let's test out our pipeline. We'll remove the smiley face from &lt;code&gt;App.tsx&lt;/code&gt; inside the Vite application and push the change. As soon as we push, we'll see the Github Actions workflow trigger.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4of10htvyix0w6ylycc9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4of10htvyix0w6ylycc9.png" alt="Trigger" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After a short wait, let's go into the ECS console, open the newly running task, click on its Public IP, and visit port &lt;code&gt;8000&lt;/code&gt;. You should see the web application running without the smiley face.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dhdid2s8yzlqu03bf7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dhdid2s8yzlqu03bf7r.png" alt="Without smiley face" width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And that's it! We've successfully created a CI/CD pipeline using Github Actions to deploy a simple web application to ECS. The application is a simple Vite application that is containerised and pushed to ECR. The pipeline is triggered on every push to the main branch: it builds the Docker image, pushes it to ECR, and updates the ECS service with the new image.&lt;/p&gt;

&lt;p&gt;Let me know if you have any questions or suggestions. You can also find the code on Github &lt;a href="https://github.com/halchester/ecr-ecs-ghactions" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Don't forget to star the repo and share this article if you find it useful 😄&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>IAM Permissions Boundaries</title>
      <dc:creator>chester htoo</dc:creator>
      <pubDate>Sat, 12 Aug 2023 21:35:44 +0000</pubDate>
      <link>https://forem.com/halchester/iam-permissions-boundaries-4noa</link>
      <guid>https://forem.com/halchester/iam-permissions-boundaries-4noa</guid>
      <description>&lt;p&gt;I used to be a picky eater when I was young. Veggies like tomatoes, peas, and onions were always left untouched on my plate. My mom's solution was straightforward: eat your veggies or lose out on TV time. It was a cycle of negotiations and scoldings. One day, a guest was over for dinner, and the same old routine unfolded. However, this time, something interesting happened. The guest said, 'If you don't like it, you can leave it on the plate.' That sounded like music to my ears, but there was a catch. I knew that once the guest left, I'd still have to face my mom's disapproval, even though I had been given &lt;em&gt;'permission'&lt;/em&gt; not to eat my veggies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nzfj1tjc21i7ww2mxm3.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nzfj1tjc21i7ww2mxm3.gif" alt="Office - eating veggies"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This childhood memory reminds me of something quite technical - IAM Permissions Boundaries. In the realm of computer systems, it's like my mom setting the rules for my veggie intake, while the guest offered a contrasting &lt;em&gt;'permission'&lt;/em&gt;. But there's more to it than just veggies, and it all ties into how we manage access and actions within digital spaces.&lt;/p&gt;

&lt;p&gt;Imagine you're playing a story-mode game on PlayStation, something like God of War, which by the way was my favourite game growing up. You go through the story mode collecting power-ups and souls to upgrade your weapons, which can later be used in the game. Think of these as "permissions", things that allow you to carry out certain actions, i.e. use powers and deal more damage. But sometimes, when you're fighting a boss or doing a quest, you're not allowed to use those powers even though you have unlocked them. If you're a keen story-mode player, I'm sure you have experienced these sorts of scenarios.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozi95wk78o42iw5op57j.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozi95wk78o42iw5op57j.gif" alt="Kratos throwing computer away"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's get back to the real world of AWS. Consider a user or a role in AWS with multiple permissions to carry out actions, e.g. creating a database or reading a secret value from the vault (Parameter Store), but bounded by a permissions boundary that only allows database-related actions. In this case, even though the user's policy grants the ability to read the value from the vault, the request is denied, because it falls outside the fence known as an IAM Permissions Boundary. &lt;/p&gt;

&lt;p&gt;Now let's take this approach into the AWS environment, our digital playground. Consider a user named John. John loves managing Amazon S3, CloudWatch and Amazon EC2 - his favourite parts of the AWS playground. To make sure he sticks to these areas, you, as the administrator, have set up an IAM permissions boundary. This boundary says, "John, you can only play in Amazon S3, CloudWatch and Amazon EC2".&lt;/p&gt;
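&lt;p&gt;In IAM terms, that fence is just a managed policy attached to John as his permissions boundary. A minimal sketch of what it could look like (service-wide wildcards for brevity; a real boundary would usually be scoped tighter):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "JohnsPlayground",
      "Effect": "Allow",
      "Action": ["s3:*", "cloudwatch:*", "ec2:*"],
      "Resource": "*"
    }
  ]
}
```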

&lt;p&gt;But remember, the boundary doesn't grant John the powers. He needs policies for that. So you create a policy that allows John to perform actions. For any request to succeed, both his policy and the permissions boundary need to agree; if they don't, the request is denied, just like the guest's &lt;em&gt;'permission'&lt;/em&gt; couldn't save me from my mom's scolding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxurhu6wfmo68r3dfx23e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxurhu6wfmo68r3dfx23e.png" alt="Effective permissions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, IAM Permissions Boundaries aren't the only rule setters. There are other types of policies too, like resource-based policies and session policies, and sometimes there can be conflicts between them. It's like having multiple referees on a football field, the on-pitch referee and the officials on the sidelines, each enforcing their own set of rules.&lt;/p&gt;

&lt;p&gt;So, to sum it all up, IAM Permissions Boundaries are like the rules that determine how far you can go on the AWS playground. They ensure that even if you have many different powers, you can only use the ones that are approved by all relevant policies.&lt;/p&gt;

&lt;p&gt;In our next blog post, we will dive deeper into the security pillar of the AWS well-architected framework.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deploy Next JS Application to Amazon CloudFront with S3</title>
      <dc:creator>chester htoo</dc:creator>
      <pubDate>Tue, 18 Jul 2023 22:32:45 +0000</pubDate>
      <link>https://forem.com/halchester/deploy-next-js-application-to-amazon-cloudfront-with-s3-2ibb</link>
      <guid>https://forem.com/halchester/deploy-next-js-application-to-amazon-cloudfront-with-s3-2ibb</guid>
      <description>&lt;p&gt;Picture this: You and your friend had launched your SaaS application and the entire globe's rushing to your platform. The application's written in the bleeding-edge technology, Next JS and hosted on AWS Amplify, in Europe region. After a couple days, you saw your inbox's flooded with angry emails from your customers saying the website takes a long time to load up. As a young CTO, challenges rise and you're now scratching your head not know how to improve the performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvumw9izvfbcuyt1tooh.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvumw9izvfbcuyt1tooh.gif" alt="Burning CTO" width="500" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Behold: Edge locations
&lt;/h2&gt;

&lt;p&gt;You then stumbled upon something called "Edge locations". What are they? Edge locations are a global network of data centres strategically placed around the world, designed to bring content geographically closer to your end users. They reduce latency and improve the overall performance of content delivery. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwl69wwgfp7txvit6iit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwl69wwgfp7txvit6iit.png" alt="Edge locations" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each edge location serves as a caching endpoint for content delivery networks (CDNs). In our case, the website may be hosted in the Europe region, but when someone from Japan first accesses it, the initial load takes a bit of time; after that, the web page or file is cached at the edge location closest to the user. The next time a user from that region visits your website, the request doesn't have to travel all the way across the world to Europe: the content is served from that closest edge location, ready to go and boosting your sales.&lt;/p&gt;

&lt;p&gt;So how are we going to achieve this? Simple. We will create an Amazon S3 bucket, sync our Next JS static files into it, and let CloudFront serve them from the edge.&lt;/p&gt;

&lt;p&gt;Let's dive right into it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next JS Setup
&lt;/h2&gt;

&lt;p&gt;First, we'll create a working Next JS app with a few pages. We'll create a new directory using the &lt;code&gt;next-app&lt;/code&gt; template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn create next-app nextjs-s3-cloudfront 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Select the options you want for the Next JS application; I'll leave everything as default and use the &lt;a href="https://nextjs.org/docs/app" rel="noopener noreferrer"&gt;App Router&lt;/a&gt; rather than the Pages Router.&lt;/p&gt;

&lt;p&gt;Wait a couple of minutes and you've got your tiny little working Next JS application. So we'll go ahead and make some changes in &lt;code&gt;app/page.tsx&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This will now be what our &lt;code&gt;app/page.tsx&lt;/code&gt; would look like.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import Image from "next/image";

export default function Home() {
  return (
    &amp;lt;main className="flex min-h-screen flex-col items-center justify-between p-24"&amp;gt;
      &amp;lt;div className="lg:flex"&amp;gt;&amp;lt;/div&amp;gt;

      &amp;lt;div className="flex place-items-center before:absolute before:h-[300px] before:w-[480px] before:-translate-x-1/2 before:rounded-full before:bg-gradient-radial before:from-white before:to-transparent before:blur-2xl before:content-[''] after:absolute after:-z-20 after:h-[180px] after:w-[240px] after:translate-x-1/3 after:bg-gradient-conic after:from-sky-200 after:via-blue-200 after:blur-2xl after:content-[''] before:dark:bg-gradient-to-br before:dark:from-transparent before:dark:to-blue-700 before:dark:opacity-10 after:dark:from-sky-900 after:dark:via-[#0141ff] after:dark:opacity-40 before:lg:h-[360px] z-[-1]"&amp;gt;
        &amp;lt;p className="text-4xl"&amp;gt;Live long and prosper!&amp;lt;/p&amp;gt;
      &amp;lt;/div&amp;gt;

      &amp;lt;div className="mb-32 grid text-center lg:mb-0 lg:grid-cols-4 lg:text-left"&amp;gt;&amp;lt;/div&amp;gt;
    &amp;lt;/main&amp;gt;
  );
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then let's head into the &lt;code&gt;next.config.js&lt;/code&gt; file to configure the Next JS build to produce a static export. This is what our &lt;code&gt;next.config.js&lt;/code&gt; should look like now.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/** @type {import('next').NextConfig} */
const nextConfig = {
  output: "export",
};

module.exports = nextConfig;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's build our Next JS application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll notice a new folder named &lt;code&gt;out&lt;/code&gt; appear; if you open it, you will see a bunch of HTML files and &lt;code&gt;_next&lt;/code&gt; static assets. These will come in handy when we transfer them into S3 later.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Resources Setup
&lt;/h2&gt;

&lt;p&gt;We will then create a new &lt;code&gt;terraform&lt;/code&gt; directory right inside the application, to avoid having to create a mono-repo or another repository. In an actual working environment, the ideal setup would be to keep the Terraform resources in a separate folder or repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir terraform &amp;amp;&amp;amp; cd terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And as usual, we will need four main files:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;providers.tf&lt;/code&gt; - To configure Terraform providers&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;main.tf&lt;/code&gt; - To provision resources&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;variables.tf&lt;/code&gt; - To use variables inside terraform files&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;outputs.tf&lt;/code&gt; - To get the URL of the CloudFront distribution (or any other properties that we want to check)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch providers.tf main.tf variables.tf output.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will need to add the &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest" rel="noopener noreferrer"&gt;AWS provider&lt;/a&gt; from the Terraform Registry to the &lt;code&gt;providers.tf&lt;/code&gt; file, along with your AWS Access Key and Secret Key (read more on how to generate these keys &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-services-iam-create-creds.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;). Since I'm using AWS IAM Identity Centre with SSO login, I won't be hard-coding the Access Key and Secret Key, but I'll leave the config as it is. You will need to create a &lt;code&gt;main.tfvars&lt;/code&gt; file to supply these values.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# providers.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.8.0"
    }
  }
}

provider "aws" {
  region     = var.aws_region
  access_key = var.access_key
  secret_key = var.secret_key
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# variables.tf
variable "aws_region" {
  type        = string
  description = "AWS Region"
  default     = "eu-west-1"
}

variable "secret_key" {
  type        = string
  description = "AWS Secret Key"
}


variable "access_key" {
  type        = string
  description = "AWS Access Key"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we will be using the Terraform AWS &lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/s3-bucket/aws/latest" rel="noopener noreferrer"&gt;S3 module&lt;/a&gt; and &lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/cloudfront/aws/latest" rel="noopener noreferrer"&gt;CloudFront module&lt;/a&gt; to provision our resources. The architecture here is to create an S3 bucket, place our Next JS static files in it, and then use CloudFront to serve the content at the edge! We will be making use of a CloudFront Origin Access Identity (OAI) so that users can only reach the content through the CloudFront URL, not directly from the S3 bucket. Here's what our &lt;code&gt;main.tf&lt;/code&gt; file should look like now.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "3.14.0"
  bucket  = "my-crazy-good-nextjs-bucket"
}

module "cloudfront" {
  source              = "terraform-aws-modules/cloudfront/aws"
  version             = "3.2.1"
  is_ipv6_enabled     = true
  enabled             = true
  price_class         = "PriceClass_All"
  retain_on_delete    = false
  wait_for_deployment = false

  create_origin_access_identity = true
  origin_access_identities = {
    "oai-nextjs" = "cloudfront s3 oai for nextjs website"
  }

  origin = {
    s3 = {
      domain_name = module.s3_bucket.s3_bucket_bucket_regional_domain_name
      s3_origin_config = {
        origin_access_identity = "oai-nextjs" # key from origin_access_identities map
      }
    }
  }

  default_cache_behavior = {
    target_origin_id       = "s3" # key from origin map
    allowed_methods        = ["GET", "HEAD", "OPTIONS"]
    cached_methods         = ["GET", "HEAD", "OPTIONS"]
    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
  }

  custom_error_response = [
    {
      error_code         = 403
      response_code      = 403
      response_page_path = "/index.html"
    }
  ]

  default_root_object = "index.html"
}

data "aws_iam_policy_document" "s3_policy" {
  version = "2012-10-17"

  statement {
    sid       = "1"
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["${module.s3_bucket.s3_bucket_arn}/*"]
    principals {
      type        = "AWS"
      identifiers = module.cloudfront.cloudfront_origin_access_identity_iam_arns
    }
  }
}

resource "aws_s3_bucket_policy" "s3_policy" {
  bucket = module.s3_bucket.s3_bucket_id
  policy = data.aws_iam_policy_document.s3_policy.json
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's go through them. &lt;/p&gt;

&lt;p&gt;We have two modules, each using the Terraform modules mentioned above. We name our S3 bucket &lt;code&gt;my-crazy-good-nextjs-bucket&lt;/code&gt;. For the CloudFront module, we enable an Origin Access Identity (OAI) so that users can only access the S3 website content through our CloudFront URL (read more about Origin Access Identities &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;). For the cache behaviour, we cache &lt;code&gt;GET&lt;/code&gt; requests to our website at the edge locations and redirect all viewers to &lt;code&gt;HTTPS&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;There's also a &lt;code&gt;data&lt;/code&gt; block defining the IAM policy document for our S3 bucket. We attach this policy because it's not a good idea to leave the bucket publicly accessible from the internet; instead, we only allow access from the ARN of the CloudFront OAI.&lt;/p&gt;

&lt;p&gt;We also want to see some details of the resources we provision after running &lt;code&gt;terraform apply&lt;/code&gt;, so let's create an &lt;code&gt;outputs.tf&lt;/code&gt; file to capture a few values.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# outputs.tf
output "s3" {
  description = "S3 module outputs"

  value = {
    bucket_id  = module.s3_bucket.s3_bucket_id
  }
}


output "cloudfront" {
  description = "Cloudfront module outputs"

  value = {
    distribution_id = module.cloudfront.cloudfront_distribution_id
    domain          = module.cloudfront.cloudfront_distribution_domain_name
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this file, we are simply telling Terraform which values we want printed in the CLI after the resources have been provisioned. &lt;/p&gt;

&lt;p&gt;That's pretty much it! Let's run &lt;code&gt;terraform init&lt;/code&gt; and &lt;code&gt;terraform plan&lt;/code&gt;. It will show us a bunch of resources that Terraform will create. Normally, this plan would be reviewed with other team members to finalise the changes, but since it's only a small business site, let's go ahead and run &lt;code&gt;terraform apply&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;After waiting for a couple of seconds, the resources will be provisioned and it will show us these output values.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Apply complete! Resources: 13 added, 0 changed, 0 destroyed.

Outputs:

cloudfront = {
  "arn" = "arn:aws:cloudfront::389144622841:distribution/E3QS5X2RJNINOF"
  "distribution_id" = "E3QS5X2RJNINOF"
  "domain" = "d11r27a15bgorv.cloudfront.net"
}

s3 = {
  "bucket_id" = "my-crazy-good-nextjs-bucket"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will need two things: the S3 bucket name and the CloudFront distribution ID. You'll also see a CloudFront domain in the outputs, but if you visit it now, you will see nothing but an error page. This is because we have only created the resources but not yet uploaded the files from our Next JS app to our S3 bucket.&lt;/p&gt;

&lt;p&gt;So for that, let's go back to our Next JS app by running &lt;code&gt;cd ..&lt;/code&gt;. We will use the AWS CLI to copy the static HTML pages in our Next JS &lt;code&gt;out/&lt;/code&gt; directory to our S3 bucket by running&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 sync out/ s3://my-crazy-good-nextjs-bucket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will now copy all the content inside the Next JS export directory &lt;code&gt;out&lt;/code&gt; to our newly created S3 bucket. Now that the static pages are in place, we will invalidate the CloudFront cache so it serves the new content by running&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudfront create-invalidation --distribution-id E3QS5X2RJNINOF --paths "/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note: Paste the distribution ID from the outputs as the value of &lt;code&gt;--distribution-id&lt;/code&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now if we visit our CloudFront domain again, we'll see our blazing fast Next JS website served from the edge.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6nwl9p46nfadh5pbp75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6nwl9p46nfadh5pbp75.png" alt="Website" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it for this blog! All the code can be found &lt;a href="https://github.com/halchester/nextjs-cloudfront-s3" rel="noopener noreferrer"&gt;here&lt;/a&gt; on my Github!&lt;/p&gt;

&lt;h2&gt;
  
  
  Improvements
&lt;/h2&gt;

&lt;p&gt;Of course, there are ways that we can improve this deployment further.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We can deploy on our own custom domain name. To do that, we add the domain as an alternate domain name (CNAME) on the CloudFront distribution, point the domain's DNS record at the distribution, and provision a matching certificate in ACM (in &lt;code&gt;us-east-1&lt;/code&gt;, as CloudFront requires).&lt;/li&gt;
&lt;li&gt;Setting up a CI/CD pipeline to upload the static files to S3 and invalidate the CloudFront cache every time we push to our VCS would also help if we want to automate the process.&lt;/li&gt;
&lt;li&gt;We can even integrate this application with the custom CRM that we built in &lt;a href="https://dev.to/halchester/crm-with-lambda-and-terraform-2c1p"&gt;another blog post&lt;/a&gt; and let our customers reach out to us. &lt;/li&gt;
&lt;/ol&gt;
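
&lt;p&gt;As a sketch of the first improvement, the CloudFront module accepts alias and certificate inputs. The domain name and certificate ARN below are placeholders, and the exact input names can vary between module versions, so double-check the registry docs before using this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch only - extra inputs for the existing cloudfront module block.
# The ACM certificate must be issued in us-east-1 for CloudFront.
aliases = ["www.example.com"]

viewer_certificate = {
  acm_certificate_arn = "arn:aws:acm:us-east-1:123456789012:certificate/example"
  ssl_support_method  = "sni-only"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;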

&lt;p&gt;That's it for now and I hope to see you in the next one! Ciao!&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>cloud</category>
      <category>programming</category>
    </item>
    <item>
      <title>CRM with Lambda and Terraform</title>
      <dc:creator>chester htoo</dc:creator>
      <pubDate>Fri, 14 Jul 2023 01:57:18 +0000</pubDate>
      <link>https://forem.com/halchester/crm-with-lambda-and-terraform-2c1p</link>
      <guid>https://forem.com/halchester/crm-with-lambda-and-terraform-2c1p</guid>
      <description>&lt;p&gt;Many of us have visited websites, scrolled around and click about. If the website's interesting, you guys have also send inquiry on more details about the product on the website's little form that's called Contact Us.&lt;/p&gt;

&lt;p&gt;So, today let's dive into building a minimal working backend service for a Contact Us form that saves each inquiry into our CRM, HubSpot.&lt;/p&gt;

&lt;p&gt;Technologies that we'll use are as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Terraform (IaC - Resource provisioning)&lt;/li&gt;
&lt;li&gt;AWS Lambda (Compute Infrastructure on the AWS)&lt;/li&gt;
&lt;li&gt;HubSpot API (to save inquiries)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Let's start by creating a new working directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir contact-us
cd contact-us
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The plan here is to create a minimal lambda function that saves the user data from the client application (a WordPress website, Wix, a custom client website, etc.) into our HubSpot CRM. So let's get straight into it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Plan
&lt;/h2&gt;

&lt;p&gt;We'll start off by provisioning our lambda function with Terraform.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch main.tf providers.tf outputs.tf variables.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;providers.tf&lt;/code&gt; - This file defines which Terraform providers we want to use.&lt;br&gt;
&lt;code&gt;main.tf&lt;/code&gt; - Our resource provisioning logic sits in this file (for a huge application, we would create separate module files).&lt;br&gt;
&lt;code&gt;variables.tf&lt;/code&gt; - This file defines the variables our Terraform module needs.&lt;br&gt;
&lt;code&gt;outputs.tf&lt;/code&gt; - Any output data that we want after provisioning our resources.&lt;/p&gt;

&lt;p&gt;We will need to grab the &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest" rel="noopener noreferrer"&gt;AWS provider&lt;/a&gt; from the Terraform registry into the &lt;code&gt;providers.tf&lt;/code&gt; file, along with your AWS Access Key and Secret Key (read more on how to generate these keys &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-services-iam-create-creds.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# providers.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.8.0"
    }
  }
}

provider "aws" {
  region     = var.aws_region
  access_key = var.access_key
  secret_key = var.secret_key
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# variables.tf
variable "aws_region" {
  type        = string
  description = "AWS Region"
  default     = "eu-west-1"
}

variable "secret_key" {
  type        = string
  description = "AWS Secret Key"
}


variable "access_key" {
  type        = string
  description = "AWS Access Key"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For our lambda function, we will use the &lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/lambda/aws/latest" rel="noopener noreferrer"&gt;Terraform Lambda module&lt;/a&gt; rather than the raw resource, so that if we ever need to reuse it, we can simply reuse that particular module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "lambda" {
  source        = "terraform-aws-modules/lambda/aws"
  version       = "5.2.0"
  function_name = "contact-us"
  architectures = ["arm64"]
  runtime       = "nodejs18.x"
  handler       = "index.handler"

  attach_policy_statements = true

  policy_statements = {
    AmazonSSMReadOnlyAccess = {
      sid       = "AmazonSSMReadOnlyAccess"
      effect    = "Allow"
      actions   = ["ssm:Describe*", "ssm:Get*", "ssm:List*"]
      resources = ["*"]
    }
  }

  source_path = [{
    path = "${path.module}/functions/contact-us"
  }]

  create_lambda_function_url = true

  cors = {
    allowed_credentials = false
    allowed_headers     = ["*"]
    allowed_methods     = ["POST", "OPTIONS", ]
    allowed_origins     = ["*"] # We would only want to allow our domain here
    max_age_seconds     = 3000
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So let's go through the inputs. First we define the module source from the Terraform registry and its version. We then define the architecture, runtime, and handler for the lambda function. For permissions, we use inline policy statements to allow access to the SSM Parameter Store, since we don't want to store the HubSpot API key in the lambda function directly; we will store it in the Parameter Store instead. We also want a lambda function URL that we can invoke directly, so we set &lt;code&gt;create_lambda_function_url&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt; (this is not the most ideal setup, more on that later), followed by the CORS config.&lt;/p&gt;

&lt;p&gt;Let's run &lt;code&gt;terraform init&lt;/code&gt; and get the required providers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great stuff! Now we'll set up HubSpot.&lt;/p&gt;

&lt;h2&gt;
  
  
  HubSpot
&lt;/h2&gt;

&lt;p&gt;HubSpot is a CRM platform with a lot of integrations and resources for marketing, sales and content management. The product we want to focus on here is Contacts in their CRM hub. &lt;a href="https://knowledge.hubspot.com/get-started/set-up-your-account" rel="noopener noreferrer"&gt;Here&lt;/a&gt; is their documentation on how to set up your HubSpot account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem8cwi20xfi3d52cp02x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem8cwi20xfi3d52cp02x.png" alt="Hubspot" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the HubSpot app, we can create our own private app (which is similar to a connected app if you've ever used Salesforce, or think of it as an API client), and then under scopes, choose &lt;code&gt;crm.objects.contacts&lt;/code&gt; read/write access. That's it! We then get our own HubSpot access key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqdhfoa8oz1u3scxaa93.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqdhfoa8oz1u3scxaa93.png" alt="Private App" width="800" height="737"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will then store this key in the AWS SSM Parameter Store as an encrypted &lt;code&gt;SecureString&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrsvvurnsslhc3w8ubly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrsvvurnsslhc3w8ubly.png" alt="SSM Parameter store" width="800" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it! Now we get into coding the actual lambda function.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda
&lt;/h2&gt;

&lt;p&gt;We'll create a new folder called &lt;code&gt;functions&lt;/code&gt; and then a subfolder called &lt;code&gt;contact-us&lt;/code&gt;. This is where the lambda function will sit. In there, we will create a &lt;code&gt;package.json&lt;/code&gt; file by running &lt;code&gt;yarn init -y&lt;/code&gt; and create a blank &lt;code&gt;index.js&lt;/code&gt; with the following content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const handler = async (event, context) =&amp;gt; {};

module.exports = { handler };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll install two libraries: &lt;code&gt;@aws-sdk/client-ssm&lt;/code&gt; to fetch our HubSpot access key from the Parameter Store and &lt;code&gt;@hubspot/api-client&lt;/code&gt; to interact with the HubSpot API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn add @aws-sdk/client-ssm @hubspot/api-client 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and this is what our &lt;code&gt;index.js&lt;/code&gt; looks like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const AWS = require("@aws-sdk/client-ssm");
const hubspot = require("@hubspot/api-client");

const handler = async (event, context) =&amp;gt; {
  const body = JSON.parse(event.body);

  const ssm = new AWS.SSM({
    region: "eu-west-1",
  });

  const hubspot_key = await ssm.getParameter({
    Name: "HUBSPOT_ACCESS_KEY",
    WithDecryption: true,
  });

  const hubspot_access_key = hubspot_key.Parameter.Value;

  const hubspotClient = new hubspot.Client({
    accessToken: hubspot_access_key,
  });

  await hubspotClient.crm.contacts.basicApi.create({
    properties: {
      email: body.email,
      firstname: body.firstname,
      lastname: body.lastname,
      phone: body.phone,
      message: body.message,
    },
  });

  return {
    statusCode: 200,
    body: JSON.stringify({
      message: "Thanks for contacting us! We will be in touch soon.",
    }),
  };
};

module.exports = { handler };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, before we run &lt;code&gt;terraform plan&lt;/code&gt; to see the changes it's going to make, we'd normally create a &lt;code&gt;main.tfvars&lt;/code&gt; file containing the AWS Access Key and Secret Key. Personally, I have IAM Identity Center enabled on my organisation, so I will skip this step.&lt;/p&gt;
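
&lt;p&gt;If you do go the &lt;code&gt;main.tfvars&lt;/code&gt; route, a minimal file would look something like this (the values are placeholders; keep this file out of version control).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# main.tfvars - placeholder values, never commit real keys
aws_region = "eu-west-1"
access_key = "AKIAXXXXXXXXXXXXXXXX"
secret_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You would then pass it to Terraform with &lt;code&gt;terraform plan -var-file=main.tfvars&lt;/code&gt;.&lt;/p&gt;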

&lt;p&gt;Then we run &lt;code&gt;terraform plan&lt;/code&gt;. It lists a bunch of changes that Terraform is planning to make, and if everything looks good, we can go ahead and run &lt;code&gt;terraform apply&lt;/code&gt;. It will apply the changes, and our lambda function will be live in no time!&lt;/p&gt;

&lt;p&gt;So with our newly created lambda function, let's test it out in Postman. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjfo3yw5st21y2sq63ft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjfo3yw5st21y2sq63ft.png" alt="Postman" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great stuff! We've successfully deployed our lambda function, and if we check HubSpot, we'll also see a new contact added there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0b23u6j4r47bsdlnihl9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0b23u6j4r47bsdlnihl9.png" alt="Hubspot" width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, that's it really. We've successfully built our own little contact us functionality, waiting to be integrated with your client applications!😄&lt;/p&gt;

&lt;h2&gt;
  
  
  Improvements
&lt;/h2&gt;

&lt;p&gt;Sure, you wouldn't use this lambda function alone as your application grows bigger. There are definitely ways this can be improved.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We wouldn't use the lambda function URL as its own endpoint in a bigger application. Instead, we can create an API Gateway that fronts the lambda function and sets it as a target.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If we want to be notified of new inquiries, we can use SES to send ourselves an email or the Slack API to post into our own channel.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Setting up a CD pipeline to deploy the lambda function automatically is another improvement.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In a real-world application with many routes in the API gateway, we wouldn't use just a single Terraform file to deploy them. We would have a dedicated file structure for managing different Terraform states for different functions and resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
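
&lt;p&gt;As a rough sketch of the first improvement, an HTTP API could front the lambda function using the &lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/apigateway-v2/aws/latest" rel="noopener noreferrer"&gt;API Gateway v2 module&lt;/a&gt;. The input names below are illustrative and vary between module versions, so check the registry docs before using this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch only - an HTTP API routing POST /contact-us to the lambda
module "api_gateway" {
  source        = "terraform-aws-modules/apigateway-v2/aws"
  name          = "contact-us-api"
  protocol_type = "HTTP"

  # Map the route to the lambda function module defined above
  routes = {
    "POST /contact-us" = {
      integration = {
        uri                    = module.lambda.lambda_function_arn
        payload_format_version = "2.0"
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;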

&lt;p&gt;So that's the end of this little lab. I hope you enjoyed it, and I hope to see you in the next one! Ciao!&lt;/p&gt;

&lt;p&gt;Link to Github repo: &lt;a href="https://github.com/halchester/contact-us-lambda-hubspot" rel="noopener noreferrer"&gt;https://github.com/halchester/contact-us-lambda-hubspot&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>programming</category>
      <category>terraform</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Why Infrastructure-as-Code is a way to go</title>
      <dc:creator>chester htoo</dc:creator>
      <pubDate>Mon, 03 Jul 2023 23:16:02 +0000</pubDate>
      <link>https://forem.com/halchester/why-infrastructure-as-code-is-a-way-to-go-30ap</link>
      <guid>https://forem.com/halchester/why-infrastructure-as-code-is-a-way-to-go-30ap</guid>
      <description>&lt;p&gt;Picture this: You and your friend just launched an amazing SaaS web app. The response is overwhelming, with customers flocking to your platform from all corners of the globe. But here's the challenge: the surge in traffic is pushing your app to its limits, and you fear it might crash. As resourceful, albeit inexperienced, engineers, you scramble to provision additional instances to keep up with the demand. The plot thickens when you realise your customers are now spread across the world, requiring instances in various locations. It's like a wild adventure, searching high and low for server space to cater to their needs.&lt;/p&gt;

&lt;p&gt;With a sense of urgency, you dash to the dashboard, determined to handle the mounting traffic. Region after region, you tirelessly replicate the same architecture, but it hits you: the repetition is daunting. A flicker of curiosity sparks within you, and you ponder if there's a better way to conquer this challenge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Behold: Infrastructure as Code tools
&lt;/h2&gt;

&lt;p&gt;In the pre-IaC era, deploying resources meant embarking on an endless clicking spree across multiple regions on the dashboard—a phenomenon aptly dubbed ClickOps. While ClickOps had its charm, human errors were always lurking around the corner.&lt;/p&gt;

&lt;p&gt;But fear not! Engineers came to the rescue with the ingenious concept of Infrastructure as Code. This approach treats infrastructure provisioning, configuration, and management as code. Think of it as software engineering meets infrastructure magic, where programming languages (like the mighty Python, but not limited to it) and declarative configuration files or scripting languages take centre stage. These tools automate the creation and management of infrastructure resources such as servers, networks, and storage.&lt;/p&gt;

&lt;p&gt;Enter the stages of the Development lifecycle. Day 0 marks the grand planning phase, where the architecture's foundation takes shape and high-level overviews come to life. Day 1 witnesses the successful deployment of apps, with the Board of Directors applauding the team's performance. But the true challenge lies in Day 2 and beyond—the maintenance and patching of the infrastructure resources while ensuring reliability, stability, and top-notch performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhsuo9awbfwj1euagqpx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhsuo9awbfwj1euagqpx.png" alt="devops lifecycle" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's where the Operations team realises that manually clicking through tens of thousands of deployed resources is simply impractical. Enter the superhero IaC tools, ready to save the day from Day 0 itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Provisioning
&lt;/h2&gt;

&lt;p&gt;Now, let's dive into two resource provisioning tools: Terraform and AWS CDK.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhwv1fpaxd4m2l0gwj73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhwv1fpaxd4m2l0gwj73.png" alt="Terraform Hashicorp" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Terraform, developed by HashiCorp, is an open-source infrastructure provisioning tool. With its simple and readable syntax, you can define your infrastructure resources in code, creating an execution plan that automates the provisioning and management process. It's versatile, supporting multiple cloud providers like AWS, Azure, and Google Cloud Platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS CDK
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzoues6qkz697j1etzk5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzoues6qkz697j1etzk5.png" alt="Amazon Web Services Cloud Development Kit" width="618" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS CDK, on the other hand, is Amazon Web Services' Cloud Development Kit. This open-source framework lets you define your cloud infrastructure using familiar programming languages such as TypeScript, Python, or Java. By writing code that represents your resources, you can leverage programming language features to manage complex setups effortlessly. Under the hood, CDK uses AWS CloudFormation to create and manage your defined resources.&lt;/p&gt;

&lt;p&gt;Both Terraform and AWS CDK provide powerful options for infrastructure provisioning. The choice between them ultimately depends on factors like your coding preferences and the complexity of your infrastructure setup. No matter which tool you choose, embracing infrastructure as code will unlock automation, scalability, and efficient resource management for your applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration Management
&lt;/h2&gt;

&lt;p&gt;In the realm of IaC tools, configuration management tools are also superheroes when it comes to managing your resources. Now, let's explore two of them: Ansible and Puppet, each with its own unique approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ansible
&lt;/h3&gt;

&lt;p&gt;Ansible, my personal favorite, is an open-source automation tool that focuses on simplicity and ease of use. It empowers you to define and manage infrastructure configurations through "playbooks" written in YAML. With Ansible, tasks like package installation, configuration file management, and service deployments can be effortlessly automated across multiple servers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9mk7wmotwxl2sixwfs8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9mk7wmotwxl2sixwfs8.png" alt="Ansible playbook nodes and agents" width="708" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of Ansible's standout features is its agentless architecture. By leveraging SSH connections, it eliminates the need for additional software or agents on the managed nodes. Ansible playbooks are designed to be idempotent, ensuring you can safely run them multiple times without unintended side effects.&lt;/p&gt;
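
&lt;p&gt;As a taste of what that looks like, here is a minimal, hypothetical playbook (the &lt;code&gt;webservers&lt;/code&gt; host group is an assumption); because each task declares a desired state, running it twice leaves the servers unchanged the second time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical playbook - install and start nginx on a host group
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present  # idempotent - no change if already installed
    - name: Ensure nginx is running and enabled on boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;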

&lt;h3&gt;
  
  
  Puppet
&lt;/h3&gt;

&lt;p&gt;Puppet, on the other hand, provides a robust configuration management solution. Using a declarative language, either Puppet DSL or Ruby code, you describe the desired state of your infrastructure. Puppet's client-server architecture involves a central Puppet master server and Puppet agent nodes on managed servers. This setup allows for centralized management and consistent enforcement of configurations across your infrastructure.&lt;/p&gt;
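
&lt;p&gt;For comparison, an equivalent hypothetical Puppet manifest describes the same desired state rather than the steps to reach it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical manifest - declare the desired state of nginx
package { 'nginx':
  ensure =&amp;gt; installed,
}

service { 'nginx':
  ensure  =&amp;gt; running,
  enable  =&amp;gt; true,
  require =&amp;gt; Package['nginx'],  # start only after the package exists
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;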

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d8n8l9harnmre80o7sa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d8n8l9harnmre80o7sa.png" alt="Puppet Infrastructure automation" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Puppet boasts an extensive ecosystem of pre-built modules and resources, enabling you to manage various aspects of your infrastructure. From users and packages to services and files, Puppet covers a wide range of configuration needs. It also offers reporting and auditing capabilities to track changes and ensure compliance.&lt;/p&gt;

&lt;p&gt;Whether you lean towards Ansible's simplicity or Puppet's scalable architecture, both tools are invaluable for automating configuration management. By embracing these tools, you can streamline operations, achieve consistency, and simplify the maintenance of your infrastructure. It's time to bid farewell to manual configurations and welcome the efficiency of configuration management tools like Ansible and Puppet.&lt;/p&gt;

&lt;h2&gt;
  
  
  What now?
&lt;/h2&gt;

&lt;p&gt;The world of infrastructure management has been revolutionised by Infrastructure as Code (IaC) tools. With the rise of platform engineering practices, engineers realised that manual configurations and repetitive tasks were holding them back. By embracing IaC tools, such as Terraform, AWS CDK, Ansible, and Puppet, teams can automate resource provisioning, simplify configuration management, and ensure the stability and scalability of their infrastructure.&lt;/p&gt;

&lt;p&gt;Gone are the days of ClickOps, where clicking through endless dashboards was the norm. With IaC, engineers can define their infrastructure resources in code, leveraging the power of programming languages and declarative configuration files. This shift brings software engineering principles to infrastructure management, enabling teams to treat infrastructure as software.&lt;/p&gt;

&lt;p&gt;By adopting IaC tools, teams can tackle the challenges of scaling their applications, managing resources across multiple regions, and maintaining consistency and reliability. Whether it's provisioning instances, automating configuration tasks, or ensuring the desired state of infrastructure, IaC tools provide the means to navigate the dynamic landscape of IT operations.&lt;/p&gt;

&lt;p&gt;So, in your quest for seamless infrastructure management, remember to harness the power of Infrastructure as Code. Embrace the automation, scalability, and efficiency it brings to your operations. Say goodbye to manual configurations and repetitive tasks, and welcome a world where infrastructure is as agile and adaptable as the applications it supports.&lt;/p&gt;

&lt;p&gt;In my next blog, I'll explain how to use Terraform to provision resources on AWS, and I'll also write one on Kubernetes. So please leave some feedback, or follow for more cloud content.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>terraform</category>
      <category>devops</category>
    </item>
    <item>
      <title>I passed Certified Solutions Architect Exam in 4-ish months</title>
      <dc:creator>chester htoo</dc:creator>
      <pubDate>Sun, 18 Jun 2023 08:39:15 +0000</pubDate>
      <link>https://forem.com/halchester/i-passed-certified-solutions-architect-in-4-ish-months-5c12</link>
      <guid>https://forem.com/halchester/i-passed-certified-solutions-architect-in-4-ish-months-5c12</guid>
      <description>&lt;h3&gt;
  
  
  A little bit of backstory
&lt;/h3&gt;

&lt;p&gt;I'm a software engineer/student who has been working in the industry for a couple of years now, purely on the software side of things. Before 2021, I had no idea what the cloud was. I had used S3 in one of my backend services to store photos, but I had no idea how the whole cloud or AWS thing worked. These were all buzzwords to me.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Spoiler alert: It's just someone else's computer.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hn4u391uvrkda5snjpt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hn4u391uvrkda5snjpt.png" alt="Image description" width="495" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are plenty of cloud vendors out there, from small startups to big providers like Amazon, Microsoft and Alibaba, and each platform has its own certification that validates an engineer's credibility on that particular platform and their foundational knowledge of the cloud.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But why does the industry standard for cloud certifications lean towards AWS, and why did I personally take one of AWS' associate-level certifications?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Among cloud providers like Azure, GCP and AWS, Amazon was one of the earliest players in the cloud computing field, and it still dominates the market with a whopping 32% share this year.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hnzv9adtxmwe1pfzz2x.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hnzv9adtxmwe1pfzz2x.jpeg" alt="Image description" width="800" height="595"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Personal Motives
&lt;/h3&gt;

&lt;p&gt;Personally, I strongly believe that real-world problem-solving skills beat the traditional education of degrees and certifications, so what made me go ahead and break my own belief? Because AWS certifications are aimed at solving real-world problems rather than just knowing the theory of how each service works.&lt;/p&gt;

&lt;p&gt;The exam does not ask you things that you can normally go and look up in the documentation (well, that's not entirely true; keep reading, I'll get into it in a bit). It asks how you would handle scenario x with y requirements when you have to focus on z. Say a customer comes to you asking how to build a frontend application involving GraphQL and React; you provide a solution for how that application should look, with a simple and resilient architecture at minimal cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  Different certifications
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0yeb2vl7ldeygsleciem.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0yeb2vl7ldeygsleciem.png" alt="Image description" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS offers certifications at four levels: Practitioner, Associate, Professional and Specialty. Some of the exams focus on building solutions for customer-specific problems, some on DevOps (how we would leverage AWS to build better-performing applications) and some on specific fields such as Databases, SAP and Machine Learning. I took the Certified Cloud Practitioner exam and the Solutions Architect Associate exam. I am not going to dive into all of these exams, but here is a link if you want to know more.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the Solutions Architect Exam
&lt;/h3&gt;

&lt;p&gt;The Solutions Architect exam has two levels: the Associate level (the easier one, which I took) and the Professional level. I took this exam to get a deeper understanding of how AWS services work in conjunction with one another and how I can provide better solutions for customers. It focuses on the more technical side of the cloud services offered by AWS and is a bit harder than the Practitioner exam.&lt;/p&gt;

&lt;p&gt;To be completely honest, it took me 6 months to prepare for this exam rather than 4 (sorry, wasn't clickbait xD) because I spent the first 2 months procrastinating. But do keep in mind that those 4 months of actual study were as a full-time student working part-time as a software engineer, with no idea how AWS works, while trying to maintain a good GPA to keep my Asian parents happy. This timeline will vary from person to person depending on how much time you can commit to studying, your past experience and your level of expertise with Amazon Web Services.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to prepare for the exam
&lt;/h3&gt;

&lt;p&gt;There are 3 things to keep in mind when you are preparing for any sort of certification exam (or any exam in general).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Domain knowledge of the field&lt;/li&gt;
&lt;li&gt;Hands-on (practical) labs&lt;/li&gt;
&lt;li&gt;Practice exams&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Usually I would recommend online learning platforms like Udemy, YouTube and Google, but &lt;a href="https://www.whizlabs.com/" rel="noopener noreferrer"&gt;whizlabs.com&lt;/a&gt; offers a great platform for people taking certification exams. Of course there are other platforms like &lt;a href="https://www.oreilly.com/" rel="noopener noreferrer"&gt;O'Reilly&lt;/a&gt; and &lt;a href="https://www.udemy.com/" rel="noopener noreferrer"&gt;Udemy&lt;/a&gt;, but back then Whizlabs was having a sale, so I just got that.&lt;/p&gt;

&lt;p&gt;I used whizlabs.com to study the domain-specific knowledge. They have great video lectures on how different services work; the lectures explain things really well and touch on almost all the domain knowledge the exam can ask for. They even cover how you would use a service in a real-world application and how you can pair it with other services to create a better product with great performance.&lt;/p&gt;

&lt;p&gt;Having just the domain knowledge is definitely not enough to pass the exam, that's for sure. That's when you need to get your hands dirty. There are plenty of free ways to practice different architectures, like hosting a website on S3 and serving it through CloudFront, or building a serverless API that the public can access, using AWS Workshops or AWS' free tier plan. I used Whizlabs' demo environment to test my understanding of how different architectures work, since it was also included in my purchase.&lt;/p&gt;

&lt;p&gt;Finally, and I cannot stress this enough: practice exams. Do your practice exams! And by that I don't mean just going through the questions, clicking "Reveal Answer" and reading the explanation. I really recommend taking each practice exam under strict exam conditions, like not getting up for a bathroom break or checking your phone every 30 minutes or so (yes, I know, been there, done that). After each exam, always review what went wrong and how, and revise that particular topic. There are practice exams you can purchase from Udemy; I personally recommend the &lt;a href="https://www.udemy.com/course/aws-certified-solutions-architect-associate-amazon-practice-exams-saa-c03/" rel="noopener noreferrer"&gt;practice exams by Tutorials Dojo&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tips and Tricks
&lt;/h3&gt;

&lt;p&gt;Now that you've prepared for the exam through sleepless nights and pulled every hair out of your head trying to understand what all the different services are, it's time to take the exam. Here are some tips and tricks (I am not going to give you generic tips like eat well before the exam, drink water, and blah blah).&lt;/p&gt;

&lt;p&gt;You probably will not recognise every technology used in a question, like RabbitMQ or Apache Kafka, but when you see something you have no idea about, my advice is to take an educated guess. You definitely wouldn't use RDS for a key-value workload, nor DynamoDB for relational, structured data.&lt;/p&gt;

&lt;p&gt;Some of the multiple-choice questions are really, really ambiguous, but they always contain one single word that shifts the right answer towards one option. What I would recommend is to look for keywords like graph, relational, disaster recovery, etc.&lt;/p&gt;

&lt;p&gt;You will see questions with phrases like "most cost-efficient way" or "most secure". These questions test your scope of focus and priorities. You wouldn't choose EFS over EBS if your main focus is cost, just as you wouldn't choose security groups over NACLs for subnet-level network security. Look for what each question's focus is.&lt;/p&gt;

&lt;p&gt;One other personal trick is to keep yourself going during the entire 2 hours. I know the exam can be stressful, but as someone with a really, really short attention span, it's sometimes hard for me to focus on one thing for a long time. Normally, after every 10-15 questions answered, I close my eyes and let my mind wander for 5-10 seconds, then focus back. This not only helps with my attention deficiency but also gives my eyes a rest.&lt;/p&gt;

&lt;p&gt;Now that you're all prepared for the exam, you can chill and ace the exam. Good luck folks! May the Jeff be with you!&lt;/p&gt;

&lt;h3&gt;
  
  
  Reflections
&lt;/h3&gt;

&lt;p&gt;Looking back, I probably could've passed the exam in much less than 4 months if I had spent more time on the labs and blocked out some dedicated study time during the day. But I'm happy that I passed it, as well as the Certified Cloud Practitioner exam. Now I'm studying for the Professional-level Solutions Architect exam, and once I pass, I will write more blogs like this. In the meantime, since this is my first ever blog post, feel free to let me know if there are any mistakes or if you have more tips!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>certification</category>
      <category>cloud</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
