<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Vikas Arora</title>
    <description>The latest articles on Forem by Vikas Arora (@aroravicky).</description>
    <link>https://forem.com/aroravicky</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1113237%2F908b29fb-eda2-4f7e-a5ff-0ea7b79e2691.jpg</url>
      <title>Forem: Vikas Arora</title>
      <link>https://forem.com/aroravicky</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/aroravicky"/>
    <language>en</language>
    <item>
      <title>Say Goodbye to Orphaned Snapshots: Automate Cleanup with Serverless, Terraform, and AWS EventBridge!</title>
      <dc:creator>Vikas Arora</dc:creator>
      <pubDate>Tue, 29 Oct 2024 13:49:34 +0000</pubDate>
      <link>https://forem.com/aroravicky/say-goodbye-to-orphaned-snapshots-automate-cleanup-with-serverless-terraform-and-aws-eventbridge-1ok9</link>
      <guid>https://forem.com/aroravicky/say-goodbye-to-orphaned-snapshots-automate-cleanup-with-serverless-terraform-and-aws-eventbridge-1ok9</guid>
      <description>&lt;p&gt;Over time, AWS accounts can accumulate resources that are no longer necessary but continue to incur costs. One common example is orphaned EBS snapshots left behind after volumes are deleted. Managing these snapshots manually can be tedious and costly. &lt;/p&gt;

&lt;p&gt;This guide shows how to automate the cleanup of orphaned EBS snapshots using &lt;strong&gt;Python (Boto3)&lt;/strong&gt; in an &lt;strong&gt;AWS Lambda&lt;/strong&gt; function, which is then triggered using &lt;strong&gt;AWS EventBridge&lt;/strong&gt; on a schedule or event. &lt;/p&gt;

&lt;p&gt;By the end, you’ll have a complete serverless solution to keep your AWS environment clean and cost-effective.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Installing AWS CLI and Terraform&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;First, let’s ensure the essential tools are installed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS CLI&lt;/strong&gt;&lt;br&gt;
The AWS CLI allows command-line access to AWS services. Install it according to your operating system:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;macOS:&lt;/strong&gt; &lt;code&gt;brew install awscli&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Windows:&lt;/strong&gt; &lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI Installer&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Linux:&lt;/strong&gt; Use the package manager (e.g., &lt;code&gt;sudo apt install awscli&lt;/code&gt; for Ubuntu).&lt;br&gt;
Verify installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt;&lt;br&gt;
Terraform is a popular Infrastructure as Code (IaC) tool for defining and managing AWS resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;macOS:&lt;/strong&gt; &lt;code&gt;brew install terraform&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Windows:&lt;/strong&gt; Download the installer from the HashiCorp website.&lt;br&gt;
&lt;strong&gt;Linux:&lt;/strong&gt; Download the binary and move it to &lt;code&gt;/usr/local/bin&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Verify installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform -version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Configuring AWS Access&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Configure your AWS CLI with access keys to allow Terraform and Lambda to authenticate with AWS services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get Access Keys&lt;/strong&gt; from your AWS account (&lt;a href="https://console.aws.amazon.com/iam/" rel="noopener noreferrer"&gt;AWS IAM Console&lt;/a&gt;).&lt;br&gt;
&lt;strong&gt;Configure AWS CLI:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Follow the prompts to enter your Access Key, Secret Access Key, default region (e.g., &lt;code&gt;us-east-1&lt;/code&gt;), and output format (e.g., &lt;code&gt;json&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Next, since we are going to build the entire stack with Terraform, please fork the repository located &lt;a href="https://github.com/devopsulting/delete-orphan-snapshots" rel="noopener noreferrer"&gt;here&lt;/a&gt;, which contains the full code for the project.&lt;/p&gt;

&lt;p&gt;Clone it to your local machine and open it in a code editor.&lt;/p&gt;

&lt;p&gt;I have used Visual Studio Code, and it appears as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnr0vrbou7eprcqzambd2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnr0vrbou7eprcqzambd2.png" alt="Image description" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Delete the following two files from the project; they will be recreated when you run Terraform from your code editor:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;orphan-snapshots-delete.zip&lt;/li&gt;
&lt;li&gt;.terraform.lock.hcl&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, let's configure the S3 backend:&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Create an S3 Bucket for Terraform State&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Go to the S3 Console:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sign in to your AWS account and navigate to the S3 service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Create a New Bucket:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;Create bucket&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Give the bucket a &lt;strong&gt;unique name&lt;/strong&gt;, such as &lt;code&gt;my-terraform-state-bucket&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Choose an AWS Region that matches your infrastructure region for latency reasons.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Configure Bucket Settings:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep &lt;strong&gt;Block Public Access settings&lt;/strong&gt; enabled to restrict access to the bucket.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Versioning:&lt;/strong&gt; Enable versioning to maintain a history of changes to the state file. This is useful for disaster recovery or rollbacks.&lt;/li&gt;
&lt;li&gt;Leave other settings as default.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Create the Bucket:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;Create bucket&lt;/strong&gt; to finalize the setup.&lt;/li&gt;
&lt;/ul&gt;
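&lt;p&gt;If you prefer scripting over the console, the bucket steps above can be sketched with Boto3. This is a minimal sketch, not part of the project code; the bucket name is only an example and must be globally unique, and the client is passed in so the logic stays testable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def create_state_bucket(s3, name, region="us-east-1"):
    """Create a private, versioned S3 bucket for Terraform state."""
    kwargs = {"Bucket": name}
    if region != "us-east-1":
        # us-east-1 rejects an explicit LocationConstraint
        kwargs["CreateBucketConfiguration"] = {"LocationConstraint": region}
    s3.create_bucket(**kwargs)
    # Keep Block Public Access enabled (step 3 above)
    s3.put_public_access_block(
        Bucket=name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    # Versioning gives you a history of state-file changes
    s3.put_bucket_versioning(
        Bucket=name, VersioningConfiguration={"Status": "Enabled"}
    )

# Usage (assumes configured credentials):
# create_state_bucket(boto3.client("s3"), "my-terraform-state-bucket")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;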

&lt;h2&gt;
  
  
  &lt;strong&gt;Create a DynamoDB Table for State Locking (Optional but Recommended)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Using a DynamoDB table for state locking ensures that only one Terraform process can modify the state at a time, preventing conflicts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Go to the DynamoDB Console:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In your AWS Console, go to DynamoDB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Create a New Table:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;Create table&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Name your table, e.g., &lt;code&gt;terraform-state-locking&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Partition Key:&lt;/strong&gt; Set the partition key to &lt;code&gt;LockID&lt;/code&gt; and use the &lt;strong&gt;String&lt;/strong&gt; data type.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Configure Settings:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Leave &lt;strong&gt;default settings&lt;/strong&gt; (such as read and write capacity) unless you have specific requirements.&lt;/li&gt;
&lt;li&gt;Create the table by clicking &lt;strong&gt;Create table&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
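&lt;p&gt;The same table can be created from Python; a minimal sketch, not part of the project code (the table name mirrors the example above, and &lt;code&gt;PAY_PER_REQUEST&lt;/code&gt; billing is our choice to avoid capacity planning for a tiny lock table):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def create_lock_table(dynamodb, name="terraform-state-locking"):
    """Create the DynamoDB table Terraform uses for state locking."""
    return dynamodb.create_table(
        TableName=name,
        # Terraform expects a string partition key named exactly LockID.
        AttributeDefinitions=[{"AttributeName": "LockID", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "LockID", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
    )

# Usage (assumes configured credentials):
# create_lock_table(boto3.client("dynamodb"))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;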

&lt;h2&gt;
  
  
  &lt;strong&gt;Configure IAM Permissions for Terraform&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Terraform needs specific permissions to interact with S3 and DynamoDB (if using locking).&lt;/p&gt;

&lt;p&gt;This step is necessary only if you are operating under &lt;strong&gt;least-privilege&lt;/strong&gt; access. If you already have administrator access, you can skip it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create or Use an IAM User:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you don’t have an IAM user for Terraform, create one in the IAM Console (or use your own IAM user and attach these policies to it).&lt;/li&gt;
&lt;li&gt;Attach policies that grant permissions to access S3 and DynamoDB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Attach S3 and DynamoDB Policies:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Use an inline policy or add the following permissions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access to the S3 bucket.&lt;/li&gt;
&lt;li&gt;Access to the DynamoDB table (if using locking).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example IAM Policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-terraform-state-bucket",
                "arn:aws:s3:::my-terraform-state-bucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:GetItem",
                "dynamodb:DeleteItem",
                "dynamodb:DescribeTable"
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/terraform-state-locking"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After completing all the prerequisites, let's examine the Python and Terraform code that will perform the actual magic.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Step 1: Python Code for Orphaned Snapshot Cleanup&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the code editor, open the &lt;code&gt;orphan-snapshots-delete.py&lt;/code&gt; file (the name must match the &lt;code&gt;source_file&lt;/code&gt; and &lt;code&gt;handler&lt;/code&gt; in the Terraform configuration).&lt;/p&gt;

&lt;p&gt;The complete function code is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    ec2_cli = boto3.client("ec2")
    response = ec2_cli.describe_snapshots(OwnerIds=["self"], DryRun=False)
    orphaned_snapshot_ids = []
    for each_snapshot in response["Snapshots"]:
        try:
            # If the source volume still exists, this call succeeds
            # and the snapshot is left alone.
            ec2_cli.describe_volume_status(
                VolumeIds=[each_snapshot["VolumeId"]], DryRun=False
            )
        except ec2_cli.exceptions.ClientError as e:
            if e.response["Error"]["Code"] == "InvalidVolume.NotFound":
                # The volume is gone, so the snapshot is orphaned.
                orphaned_snapshot_ids.append(each_snapshot["SnapshotId"])
            else:
                raise

    for each_snap in orphaned_snapshot_ids:
        try:
            ec2_cli.delete_snapshot(SnapshotId=each_snap)
            logger.info(f"Deleted SnapshotId {each_snap}")
        except ec2_cli.exceptions.ClientError as e:
            return {
                "statusCode": 500,
                "body": f"Error deleting snapshot {each_snap}: {e}",
            }

    return {"statusCode": 200}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Lambda function uses &lt;strong&gt;Boto3&lt;/strong&gt;, &lt;strong&gt;AWS’s Python SDK&lt;/strong&gt;, to list all EBS snapshots owned by the account, check the status of each snapshot’s source volume, and delete the snapshots whose volume no longer exists.&lt;/p&gt;
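&lt;p&gt;The orphan-detection decision itself is easy to unit test if you factor it out as a pure function. Here is a sketch with hypothetical names (the Lambda above instead probes &lt;code&gt;describe_volume_status&lt;/code&gt; per snapshot, but the rule is the same: a snapshot is orphaned when its source volume no longer exists):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def find_orphaned_snapshots(snapshots, existing_volume_ids):
    """Return the IDs of snapshots whose source volume is gone."""
    return [
        snap["SnapshotId"]
        for snap in snapshots
        if snap.get("VolumeId") not in existing_volume_ids
    ]

snaps = [
    {"SnapshotId": "snap-1", "VolumeId": "vol-live"},
    {"SnapshotId": "snap-2", "VolumeId": "vol-gone"},
]
print(find_orphaned_snapshots(snaps, {"vol-live"}))  # ['snap-2']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;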

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 2: Terraform Configuration for Serverless Infrastructure&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Using Terraform, we’ll create a &lt;strong&gt;Lambda function&lt;/strong&gt;, &lt;strong&gt;IAM role&lt;/strong&gt;, and &lt;strong&gt;policy&lt;/strong&gt; to deploy this script to AWS. Additionally, we’ll set up an &lt;strong&gt;EventBridge rule&lt;/strong&gt; to trigger Lambda on a regular schedule.&lt;/p&gt;


&lt;p&gt;Open the Terraform file &lt;code&gt;main.tf&lt;/code&gt; in your code editor and review the code shown in the following sections.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Terraform Setup and Provider Configuration&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This section configures Terraform, including setting up remote state management in S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Change the &lt;code&gt;required_version&lt;/code&gt; value as per the &lt;code&gt;terraform -version&lt;/code&gt; output.&lt;/li&gt;
&lt;li&gt;Update the &lt;code&gt;bucket&lt;/code&gt;, &lt;code&gt;key&lt;/code&gt;, and &lt;code&gt;dynamodb_table&lt;/code&gt; values for the S3 backend to match what you have created in the previous steps.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt;=1.5.6"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.72.0"
    }
  }
  backend "s3" {
    bucket         = "terraform-state-files-0110"
    key            = "delete-orphan-snapshots/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tf_state_file_locking"
  }
}

provider "aws" {
  region = var.aws_region
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;IAM Role and Policy for Lambda&lt;/strong&gt;&lt;br&gt;
This IAM configuration sets up permissions for Lambda to access EC2 and CloudWatch, enabling snapshot deletion and logging.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_role" "lambda_role" {
  name               = "terraform_orphan_snapshots_delete_role"
  assume_role_policy = &amp;lt;&amp;lt;EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": "sts:AssumeRole",
          "Principal": { "Service": "lambda.amazonaws.com" },
          "Effect": "Allow"
        }
      ]
    }
EOF
}

resource "aws_iam_policy" "iam_policy_for_lambda" {
  name   = "terraform_orphan_snapshots_delete_policy"
  policy = &amp;lt;&amp;lt;EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
              "logs:CreateLogGroup",
              "logs:CreateLogStream",
              "logs:PutLogEvents"
          ],
          "Resource": "arn:aws:logs:*:*:*"
        },
        {
          "Effect": "Allow",
          "Action": [
              "ec2:DescribeVolumeStatus",
              "ec2:DescribeSnapshots",
              "ec2:DeleteSnapshot"
          ],
          "Resource": "*"
        }
      ]
    }
EOF
}

resource "aws_iam_role_policy_attachment" "attach_iam_policy_to_iam_role" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = aws_iam_policy.iam_policy_for_lambda.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Packaging and Deploying the Lambda Function&lt;/strong&gt;&lt;br&gt;
Here, we package the Python code and deploy it as a Lambda function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "archive_file" "lambda_zip" {
  type        = "zip"
  source_file = "${path.module}/python/orphan-snapshots-delete.py"
  output_path = "${path.module}/python/orphan-snapshots-delete.zip"
}

resource "aws_lambda_function" "lambda_function" {
  filename      = data.archive_file.lambda_zip.output_path
  function_name = "orphan-snapshots-delete"
  role          = aws_iam_role.lambda_role.arn
  handler       = "orphan-snapshots-delete.lambda_handler"
  runtime       = "python3.12"
  timeout       = 30
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;EventBridge Rule for Lambda Invocation&lt;/strong&gt;&lt;br&gt;
AWS EventBridge allows you to create scheduled or event-based triggers for Lambda functions. Here, we’ll configure EventBridge to invoke our Lambda function on a schedule, such as every 24 hours.&lt;/p&gt;

&lt;p&gt;You can learn more about EventBridge and scheduled events in AWS documentation &lt;a href="https://docs.aws.amazon.com/eventbridge/latest/userguide/what-is-amazon-eventbridge.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_event_rule" "schedule_rule" {
  name        = "orphan-snapshots-schedule-rule"
  description = "Trigger Lambda every day to delete orphaned snapshots"
  schedule_expression = "rate(24 hours)"
}

resource "aws_cloudwatch_event_target" "target" {
  rule      = aws_cloudwatch_event_rule.schedule_rule.name
  arn       = aws_lambda_function.lambda_function.arn
}

resource "aws_lambda_permission" "allow_eventbridge" {
  statement_id  = "AllowExecutionFromEventBridge"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda_function.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.schedule_rule.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3: Applying the Terraform Configuration&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;After defining the infrastructure, initialize and apply the Terraform configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Step 4: Testing and Monitoring the Lambda Function&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To verify that the solution works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Manually Trigger the Event&lt;/strong&gt; (optional): For initial testing, trigger the Lambda function manually from the AWS Lambda console.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitor CloudWatch Logs:&lt;/strong&gt; The Lambda function writes logs to CloudWatch, where you can review entries to verify snapshot deletions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adjust the Schedule as Needed:&lt;/strong&gt; Modify the &lt;code&gt;schedule_expression&lt;/code&gt; to set a custom frequency for snapshot cleanup.&lt;/li&gt;
&lt;/ol&gt;
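&lt;p&gt;Step 1 can also be scripted. The following is a minimal sketch that invokes the deployed function synchronously with Boto3 and parses the result (the function name matches the Terraform configuration above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json

def invoke_cleanup(lambda_client, function_name="orphan-snapshots-delete"):
    """Invoke the cleanup Lambda synchronously and return its parsed response."""
    resp = lambda_client.invoke(
        FunctionName=function_name,
        InvocationType="RequestResponse",  # wait for the handler to finish
        Payload=b"{}",
    )
    return json.loads(resp["Payload"].read())

# Usage (assumes configured credentials):
# result = invoke_cleanup(boto3.client("lambda"))
# A successful run returns {"statusCode": 200}.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;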

&lt;h2&gt;
  
  
  &lt;strong&gt;Enhancements&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The following enhancements could be implemented in this project:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Instead of running on a fixed schedule, EventBridge could detect the deletion of an EBS volume and trigger the Lambda function to delete the corresponding snapshots immediately.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06ckrvqs8gnkt84uz84d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06ckrvqs8gnkt84uz84d.png" alt="Image description" width="611" height="151"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pagination could be added to the Python function to handle accounts with a large number of snapshots.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
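&lt;p&gt;The second enhancement maps directly onto Boto3’s built-in paginators. In production, the pages would come from &lt;code&gt;ec2_cli.get_paginator("describe_snapshots").paginate(OwnerIds=["self"])&lt;/code&gt;; in this sketch they are an injected iterable so the flattening logic stays testable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def iter_snapshots(pages):
    """Flatten paginated describe_snapshots responses into one stream."""
    for page in pages:
        yield from page.get("Snapshots", [])

# In production:
#   pages = ec2_cli.get_paginator("describe_snapshots").paginate(OwnerIds=["self"])
pages = [
    {"Snapshots": [{"SnapshotId": "snap-a"}]},
    {"Snapshots": [{"SnapshotId": "snap-b"}, {"SnapshotId": "snap-c"}]},
]
print([s["SnapshotId"] for s in iter_snapshots(pages)])  # ['snap-a', 'snap-b', 'snap-c']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;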

&lt;p&gt;&lt;strong&gt;Wrapping Up&lt;/strong&gt;&lt;br&gt;
By combining &lt;strong&gt;Python (Boto3)&lt;/strong&gt;, &lt;strong&gt;Lambda&lt;/strong&gt;, &lt;strong&gt;AWS EventBridge&lt;/strong&gt;, and &lt;strong&gt;Terraform&lt;/strong&gt;, we’ve created a fully automated, serverless solution to clean up orphaned EBS snapshots. This setup not only reduces cloud costs but also promotes a tidy, efficient AWS environment. With scheduled invocations, you can rest assured that orphaned resources are consistently removed.&lt;/p&gt;

&lt;p&gt;Try this solution in your own AWS account and experience the benefits of automation in cloud resource management!&lt;/p&gt;

&lt;p&gt;Please feel free to share your thoughts on this article in the comments section. Thank you for reading.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>python</category>
      <category>serverless</category>
      <category>aws</category>
    </item>
    <item>
      <title>Deploy Docker Example Voting App on AWS EKS using Github actions and expose it over the internet using Nginx Ingress Controller</title>
      <dc:creator>Vikas Arora</dc:creator>
      <pubDate>Mon, 14 Aug 2023 11:33:01 +0000</pubDate>
      <link>https://forem.com/aroravicky/deploy-docker-example-voting-app-on-aws-eks-using-github-actions-and-expose-it-over-the-internet-using-nginx-ingress-controller-311j</link>
      <guid>https://forem.com/aroravicky/deploy-docker-example-voting-app-on-aws-eks-using-github-actions-and-expose-it-over-the-internet-using-nginx-ingress-controller-311j</guid>
      <description>&lt;p&gt;Working on an end-to-end project was my way of reinforcing the knowledge I gained from the Kubernetes hands-on course that I took last month. At the same time, I wanted to share what I learned.&lt;/p&gt;

&lt;p&gt;I wanted to use a microservices-based application for this project to unleash the full potential of Kubernetes. Luckily, I stumbled upon Docker’s &lt;a href="https://github.com/dockersamples/example-voting-app" rel="noopener noreferrer"&gt;example-voting-app&lt;/a&gt; when I was going over docker-compose concepts earlier.&lt;/p&gt;

&lt;p&gt;Docker’s original Example Voting App repository helps you deploy and run the application on your local machine using Docker Compose or Kubernetes.&lt;/p&gt;

&lt;p&gt;I have modified some manifest files and created GitHub Actions (CI/CD) workflows to deploy it on an AWS EKS cluster and expose it over the internet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application architecture
&lt;/h2&gt;

&lt;p&gt;The architecture diagram below shows each component of the application, with a brief description of what each one does.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhvdqz31jel05sm0tn73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhvdqz31jel05sm0tn73.png" alt="example-voting-app architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;voting-app:&lt;/strong&gt; Front end of the application, written in Python using the Flask framework, which allows users to cast their votes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;redis:&lt;/strong&gt; An in-memory data structure store, used as a temporary database for the votes cast by users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;worker:&lt;/strong&gt; Service written in .NET that retrieves votes from Redis and stores them in the PostgreSQL database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL DB:&lt;/strong&gt; PostgreSQL database used as persistent storage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;result-app:&lt;/strong&gt; Service written in Node.js that displays the voting results to the user.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;strong&gt;IAM&lt;/strong&gt; user with &lt;strong&gt;Administrator&lt;/strong&gt; access and &lt;strong&gt;Access keys&lt;/strong&gt; to be used with GitHub Actions and Terraform. You can follow the link in the step-by-step approach below.&lt;/li&gt;
&lt;li&gt;AWS CLI V2, steps &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;A DynamoDB table in AWS for Terraform state locking, steps &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/getting-started-step-1.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Set the partition key to ‘LockID’.&lt;/li&gt;
&lt;li&gt;An Amazon ECR (Elastic Container Registry) repository named voting-app (case sensitive), using the steps defined &lt;a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;An S3 bucket to store Terraform state files. Steps given &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Terraform V1.0 or higher, installed on your local terminal. Steps provided &lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Kubectl, installed on your local terminal using the steps given &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;A valid domain name (I used cloudempowered.co for this demo), which you can get from any domain registrar (for example, GoDaddy, Namecheap, etc.) or from Amazon Route 53.&lt;/li&gt;
&lt;li&gt;The project was executed in the AWS North Virginia (us-east-1) region. If you want to use a different region, you will need to find and change the region value in the project repositories.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Technical architecture overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8h9kxpef3p7g4bo5gqs.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi8h9kxpef3p7g4bo5gqs.gif" alt="Technical architectural overview"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Build and deployment of the example-voting-app on Kubernetes is done using the following DevOps tools/AWS Services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform:&lt;/strong&gt; It deploys the AWS EKS cluster and the necessary AWS resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; It is the code repository for the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Actions:&lt;/strong&gt; The CI/CD tool that builds and deploys the application on the EKS cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nginx Ingress Controller:&lt;/strong&gt; It creates a network load balancer in AWS to enable external access to the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS EKS:&lt;/strong&gt; It is the Kubernetes service in AWS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon VPC:&lt;/strong&gt; It hosts the AWS EKS Cluster and all the required networking components for the project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS ECR:&lt;/strong&gt; A private container registry (like a Docker Hub private registry) that hosts the application’s Docker images.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Route 53:&lt;/strong&gt; DNS service provided by AWS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Certificate Manager:&lt;/strong&gt; The certificate authority that issues the SSL certificate for the validated domain name.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is an intermediate-level implementation, and I assume some familiarity with AWS, Kubernetes, GitHub, and CI/CD. I have provided links for more details where possible.&lt;/p&gt;

&lt;p&gt;You may incur significant costs if you keep the infrastructure deployed on AWS longer than needed. Destroy it through Terraform when your work is done.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step-by-step implementation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;First fork these two GitHub repositories:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/devopsulting/k8s-cluster-creation" rel="noopener noreferrer"&gt;devopsulting/k8s-cluster-creation (github.com)&lt;/a&gt;: This &lt;br&gt;
  repository contains Terraform code for creating the AWS &lt;br&gt;
  infrastructure needed to deploy the example-voting-app into &lt;br&gt;
  an AWS EKS cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/devopsulting/k8s-example-voting-app" rel="noopener noreferrer"&gt;devopsulting/k8s-example-voting-app (github.com)&lt;/a&gt;: &lt;br&gt;
  This repository contains the application source code and the &lt;br&gt;
  Kubernetes manifests for the example-voting-app.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To begin the setup, clone these two GitHub repositories to your local machine.&lt;/li&gt;
&lt;li&gt;Set up your EKS infrastructure using the Terraform repo &lt;strong&gt;k8s-cluster-creation&lt;/strong&gt; and the process defined by &lt;strong&gt;Ankit Jodhani&lt;/strong&gt; &lt;a href="https://www.showwcase.com/show/35778/provisioning-the-amazon-eks-cluster-using-terraform" rel="noopener noreferrer"&gt;here&lt;/a&gt;. I used his Terraform repository with the necessary modifications. His article also covers how to create an IAM user, set up the AWS CLI, and install Terraform. Please do not clone any of the repositories mentioned in that article.&lt;/li&gt;
&lt;li&gt;I have added some extra Terraform code to create &lt;strong&gt;Route 53&lt;/strong&gt; and &lt;strong&gt;SSL certificate&lt;/strong&gt; resources. These resources allow you to access the &lt;strong&gt;vote&lt;/strong&gt; and &lt;strong&gt;result&lt;/strong&gt; user interfaces over the internet using your own domain name. To do this, change the value of the &lt;strong&gt;HOSTED_ZONE&lt;/strong&gt; variable in the &lt;strong&gt;infra-creation/terraform.tfvars&lt;/strong&gt; file to match your domain name.&lt;/li&gt;
&lt;li&gt;Create your AWS infrastructure, including the EKS cluster, by running Terraform.&lt;/li&gt;
&lt;li&gt;As soon as the infrastructure is created, copy the NS (name server) values from your domain’s &lt;strong&gt;hosted zone&lt;/strong&gt; in &lt;strong&gt;Route 53&lt;/strong&gt; and update them in the name server settings for your domain with your domain registrar/provider (GoDaddy, Namecheap, etc.).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Route 53
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz36rz6tgofz66zlgle38.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz36rz6tgofz66zlgle38.jpg" alt="Hosted zone"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29sa7fzsx0u7e3oqjbsg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29sa7fzsx0u7e3oqjbsg.jpg" alt="Hosted zone NS records"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Update NS settings for the domain name with your domain registrar/provider
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdiov3nkw1wjke39z97w5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdiov3nkw1wjke39z97w5.png" alt="Domain registrar NS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The pictures above show the NS values I used for my domain name. Do not use the same values for yours; Route 53 generates unique NS entries for each hosted zone.&lt;/p&gt;

&lt;p&gt;Also, the hosted zone setup above assumes that you registered your domain name with a registrar other than Route 53. If you registered your domain name with Route 53, you can skip this step: Route 53 automatically creates a hosted zone for your domain and updates the name servers for you. In that case, you can remove the Terraform code for the Route 53 setup.&lt;/p&gt;

&lt;p&gt;This step matters because AWS Certificate Manager validates your domain name before issuing a valid SSL certificate for it. The validation may take 15–30 minutes (for domain names not registered with Route 53), and the Terraform run waits for it to finish. Therefore, update the nameservers as soon as your hosted zone is created in Route 53.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnom8qbuanylhjae2hxfx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnom8qbuanylhjae2hxfx.jpg" alt="Certificate issued"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjnlu9hm09isdsdw0zkn9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjnlu9hm09isdsdw0zkn9.jpg" alt="Certificate validated"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before moving ahead with application deployment using the &lt;strong&gt;k8s-example-voting-app&lt;/strong&gt; repository, we need a brief overview of GitHub Actions: its setup, and the pivotal role it plays in building the application Docker images, storing them in &lt;strong&gt;AWS ECR&lt;/strong&gt;, and finally deploying them to the EKS cluster using manifests.&lt;/p&gt;

&lt;p&gt;Open the &lt;strong&gt;k8s-example-voting-app&lt;/strong&gt; repository, and follow along.&lt;/p&gt;
&lt;h2&gt;
  
  
  Github Actions (GA)
&lt;/h2&gt;

&lt;p&gt;We are using GA as a CI/CD tool to perform build, test, and deployment actions; this is how we deploy the application on the EKS cluster. You can learn more about GA in the official documentation &lt;a href="https://docs.github.com/en/actions" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;GA has some advantages over other CI/CD tools. It is integrated with GitHub, so you don’t need to install or manage a separate server or plugins. It can also connect to AWS using OIDC (OpenID Connect, a web-identity-based role), which means you don’t have to store AWS credentials in GitHub, making it more secure. However, I did not use this feature in this project because it adds some complexity; I plan to implement it later as an improvement.&lt;br&gt;
GA workflows are created within a GitHub repository by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a &lt;strong&gt;.github/workflows&lt;/strong&gt; directory.&lt;/li&gt;
&lt;li&gt;Adding the required YAML files to that directory; each file defines a workflow.&lt;/li&gt;
&lt;li&gt;This project uses two YAML files, &lt;strong&gt;build.yaml&lt;/strong&gt; and &lt;strong&gt;deploy.yaml&lt;/strong&gt;, for the CI/CD process. You will see them in the &lt;strong&gt;.github/workflows&lt;/strong&gt; folder when you fork the &lt;strong&gt;k8s-example-voting-app&lt;/strong&gt; repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufij214bo1g4va1rf47b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufij214bo1g4va1rf47b.png" alt="GA Workflow files"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add your AWS &lt;strong&gt;Access Key&lt;/strong&gt; and &lt;strong&gt;Secret Access Key&lt;/strong&gt; as GitHub secrets in the repository settings of your forked &lt;strong&gt;k8s-example-voting-app&lt;/strong&gt; repository; GA uses them to authenticate with AWS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1murl427nzr02m6gijdq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1murl427nzr02m6gijdq.png" alt="Github secrets"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  build.yaml
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;build.yaml&lt;/strong&gt; workflow builds the Docker images for the three application modules: &lt;strong&gt;vote&lt;/strong&gt;, &lt;strong&gt;worker&lt;/strong&gt;, and &lt;strong&gt;result&lt;/strong&gt;. It then stores them in a private ECR registry (a Docker Hub registry can be used instead, though ECR is seamless as a native AWS service). These images are later deployed to the Kubernetes (EKS) cluster from ECR.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;postgresql&lt;/strong&gt; and &lt;strong&gt;redis&lt;/strong&gt; images are pulled directly from Docker Hub and used in Kubernetes as-is.&lt;/p&gt;
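&lt;p&gt;In the manifests, this simply means those container image fields point at public Docker Hub images rather than ECR, for example (illustrative tags):&lt;/p&gt;

```yaml
containers:
  - name: redis
    image: redis:alpine        # pulled from Docker Hub
  - name: db
    image: postgres:15-alpine  # pulled from Docker Hub
```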

&lt;p&gt;Here is the code for &lt;strong&gt;build.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build Application Images and store in ECR
on:
  push:
    branches:
      - "main"
    paths:
      - .github/workflows/build.yaml
      - "result/**"
      - "vote/**"
      - "worker/**"
env:
  AWS_REGION : "us-east-1"
  ENV: "prod"
permissions:
  id-token: write
  contents: read
jobs:
  build:
    name: create application images from source code and store them in the ECR
    runs-on: ubuntu-latest
    steps:
      - name: Update runner's docker Version, as worker module requires it
        run: |
          docker --version
          sudo apt update
          sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
          curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
          echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null
          sudo apt update
          apt-cache policy docker-ce
          sudo apt install docker-ce -y
          docker --version

      - name: Checkout code from GitHub to runner
        uses: actions/checkout@v2
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_AP_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_AP_SECRET_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push the vote docker image to Amazon ECR
        id: build-vote-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: voting-app
          IMAGE_TAG: vote-${{ env.ENV }}-latest
        # Build docker images for vote module and push it to ECR so that it can be deployed to EKS
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG --build-arg aws_region=${{ env.AWS_REGION }} --build-arg copy_or_mount="copy" -f vote/Dockerfile ./vote
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" &amp;gt;&amp;gt; "$GITHUB_OUTPUT"


      - name: Build, tag, and push the result docker image to Amazon ECR
        id: build-result-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: voting-app
          IMAGE_TAG: result-${{ env.ENV }}-latest
        # Build docker images for result module and push it to ECR so that it can be deployed to EKS.
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG --build-arg aws_region=${{ env.AWS_REGION }} --build-arg in_aws="yes" --build-arg copy_or_mount="copy" -f result/Dockerfile ./result
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" &amp;gt;&amp;gt; "$GITHUB_OUTPUT"


      - name: Build, tag, and push the worker docker image to Amazon ECR
        id: build-worker-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: voting-app
          IMAGE_TAG: worker-${{ env.ENV }}-latest
        # Build docker images for worker module and push it to ECR so that it can be deployed to EKS
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG --build-arg aws_region=${{ env.AWS_REGION }} --build-arg copy_or_mount="copy" -f worker/Dockerfile ./worker
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" &amp;gt;&amp;gt; "$GITHUB_OUTPUT"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;build.yaml&lt;/strong&gt; workflow runs when changes are pushed to the main branch, which is what happens when a pull request from your working branch is merged. The workflow then builds the Docker images from your updated code and stores them in the ECR registry.&lt;/p&gt;

&lt;h2&gt;
  
  
  deploy.yaml
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;deploy.yaml&lt;/strong&gt; workflow uses the manifests in the &lt;strong&gt;k8s-specifications&lt;/strong&gt; folder of the &lt;strong&gt;k8s-example-voting-app&lt;/strong&gt; repository to deploy the application on the EKS cluster. This will create the necessary components for the application, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Nginx Ingress Controller:&lt;/strong&gt; This will set up a network load balancer in AWS that will allow us to access the voting and result user interfaces from the internet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application Ingress:&lt;/strong&gt; This will create a mapping between the network load balancer URL and the domain names of the voting and result user interfaces. This way, we can use our own domain names to access the application.&lt;/li&gt;
&lt;li&gt;Postgresql db service and deployment&lt;/li&gt;
&lt;li&gt;Redis service and deployment&lt;/li&gt;
&lt;li&gt;Vote service and deployment&lt;/li&gt;
&lt;li&gt;Result service and deployment&lt;/li&gt;
&lt;li&gt;Worker deployment&lt;/li&gt;
&lt;/ul&gt;
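&lt;p&gt;To make the host mapping concrete, the application Ingress amounts to host-based routing rules along these lines (a sketch; the service names and ports here are assumptions, see &lt;strong&gt;k8s-specifications/app-ingress.yaml&lt;/strong&gt; for the real manifest):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-routing
spec:
  ingressClassName: nginx
  rules:
    - host: vote.yourdomainname       # replace with your domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vote            # assumed service name
                port:
                  number: 80
    - host: result.yourdomainname     # replace with your domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: result          # assumed service name
                port:
                  number: 80
```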

&lt;p&gt;Here is the code for the &lt;strong&gt;deploy.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy images from ECR to k8s
on:
  workflow_dispatch
env:
  AWS_REGION: "us-east-1"
  ENV: "prod"
permissions:
  id-token: write
  contents: read
jobs:
  deployment:
    name: Deploy application to EKS cluster
    runs-on: ubuntu-latest
    steps: 
    - name: Checkout code from GitHub to runner
      uses: actions/checkout@v2
      with:
        token: ${{ secrets.GITHUB_TOKEN }}

    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v2
      with:
        aws-access-key-id: ${{ secrets.AWS_AP_ACCESS_KEY }}
        aws-secret-access-key: ${{ secrets.AWS_AP_SECRET_KEY }}
        aws-region: ${{ env.AWS_REGION }}

    - name: Install kubectl
      uses: azure/setup-kubectl@v3
        # with:
        #   version: '1.27' # default is latest stable
    - name: Update kube config
      run: |
        aws eks update-kubeconfig --name k8s-voting-app --region ${{ env.AWS_REGION }}  

    - name: Deploy application images to EKS cluster using manifest
      run: |
        kubectl version --client
        kubectl apply -f k8s-specifications/db-secret.yaml
        kubectl apply -f k8s-specifications/db-deployment.yaml
        kubectl apply -f k8s-specifications/db-service.yaml
        kubectl apply -f k8s-specifications/redis-deployment.yaml
        kubectl apply -f k8s-specifications/redis-service.yaml
        kubectl apply -f k8s-specifications/worker-deployment.yaml
        kubectl apply -f k8s-specifications/ingress-controller.yaml
        sleep 15
        kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
        kubectl apply -f k8s-specifications/vote-deployment.yaml
        kubectl apply -f k8s-specifications/vote-service.yaml        
        kubectl apply -f k8s-specifications/result-deployment.yaml 
        kubectl apply -f k8s-specifications/result-service.yaml 
        kubectl apply -f k8s-specifications/app-ingress.yaml  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A very important point about &lt;strong&gt;deploy.yaml&lt;/strong&gt; is that it is &lt;strong&gt;triggered manually&lt;/strong&gt; (via &lt;strong&gt;workflow_dispatch&lt;/strong&gt;), from the &lt;strong&gt;Actions&lt;/strong&gt; tab.&lt;/p&gt;

&lt;p&gt;Let’s continue with the remaining steps to build and deploy the application to EKS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a local working branch for the &lt;strong&gt;k8s-example-voting-app&lt;/strong&gt; cloned repository and switch to it. You can name the branch whatever you like. I used the name &lt;strong&gt;feature&lt;/strong&gt; for my branch.&lt;/li&gt;
&lt;li&gt;Open the &lt;strong&gt;k8s-specifications/ingress-controller.yaml&lt;/strong&gt; file and find line 348, where it says &lt;strong&gt;service.beta.kubernetes.io/aws-load-balancer-ssl-cert:&lt;/strong&gt;. Replace its value with the SSL certificate ARN issued for your domain by the AWS Certificate Manager service. This certificate was created by the Terraform code you ran earlier.&lt;/li&gt;
&lt;li&gt;Open &lt;strong&gt;k8s-specifications/app-ingress.yaml&lt;/strong&gt; and update both host entries, on lines 10 and 20 respectively, with your domain name (vote.yourdomainname/result.yourdomainname).&lt;/li&gt;
&lt;li&gt;Change the image field in the &lt;strong&gt;vote-deployment.yaml&lt;/strong&gt;, &lt;strong&gt;result-deployment.yaml&lt;/strong&gt; and &lt;strong&gt;worker-deployment.yaml&lt;/strong&gt; files so that your AWS account number replaces the placeholder &lt;strong&gt;123456789012&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
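&lt;p&gt;The account-number substitution can be done in one pass with &lt;strong&gt;sed&lt;/strong&gt;. Here is a sketch on a single image line (to apply it to the repository, run the same expression with &lt;strong&gt;sed -i&lt;/strong&gt; across the three deployment files; the account ID below is a placeholder for your own 12-digit AWS account number):&lt;/p&gt;

```shell
# Replace the placeholder account ID in an ECR image URI.
ACCOUNT_ID="999999999999"   # placeholder: use your own AWS account number
echo "image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/voting-app:vote-prod-latest" \
  | sed "s/123456789012/${ACCOUNT_ID}/"
# prints: image: 999999999999.dkr.ecr.us-east-1.amazonaws.com/voting-app:vote-prod-latest
```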

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8woxvgszt22p7n4ml1b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8woxvgszt22p7n4ml1b.png" alt="Image URI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now publish this branch to the remote, followed by a pull request to the main branch.&lt;/p&gt;

&lt;p&gt;Note: While creating the pull request, choose your own GitHub main branch as the base, not the source fork repository (&lt;strong&gt;devopsulting/k8s-example-voting-app&lt;/strong&gt;).&lt;/p&gt;

&lt;p&gt;Here’s how these actions take place:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s25vzqe2lamcpcz9sal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s25vzqe2lamcpcz9sal.png" alt="Push changes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbfxrzf06q9tu31ymk8xq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbfxrzf06q9tu31ymk8xq.png" alt="Open pull request"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrld62isv32g18l0aa23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrld62isv32g18l0aa23.png" alt="Merge pull request"&gt;&lt;/a&gt;&lt;br&gt;
Once the pull request is merged, the GitHub Actions workflow is triggered and can be observed from the Actions tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpzr1yxxa34ciw4g343w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpzr1yxxa34ciw4g343w.png" alt="GA in action"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fey1uzyjt3fs7nrwwfnjx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fey1uzyjt3fs7nrwwfnjx.png" alt="GA in-progress"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Note:&lt;/strong&gt; If you have no real code changes to make, you can still trigger the build workflow by editing a comment in any of the files in the &lt;strong&gt;vote&lt;/strong&gt;, &lt;strong&gt;result&lt;/strong&gt; or &lt;strong&gt;worker&lt;/strong&gt; modules, or in the &lt;strong&gt;build.yaml&lt;/strong&gt; file. Then, push the changes to your remote working branch and create a pull request to the main branch.&lt;/p&gt;

&lt;p&gt;This run builds the Docker images for the vote, worker and result modules and stores them in the private ECR registry named &lt;strong&gt;voting-app&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4igevlxjizvbqmgva6k.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4igevlxjizvbqmgva6k.jpg" alt="ECR repo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2xm3ydjygwasvbbs6yj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2xm3ydjygwasvbbs6yj.jpg" alt="ECR repo images"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The next step is to deploy these images, along with the ingress functionality, on the EKS cluster using manifests.&lt;/li&gt;
&lt;li&gt;Make sure that the &lt;strong&gt;build&lt;/strong&gt; workflow has finished successfully before you run the &lt;strong&gt;deploy&lt;/strong&gt; workflow. This will ensure that the application images are stored in the ECR and ready to be deployed.&lt;/li&gt;
&lt;li&gt;On the Actions tab, in the left menu, you will find the link &lt;strong&gt;Deploy images from ECR to k8s&lt;/strong&gt;. Expand the &lt;strong&gt;Run workflow&lt;/strong&gt; dropdown and click the &lt;strong&gt;Run workflow&lt;/strong&gt; button, as shown below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6s82g65kyl4yzbzcp81.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6s82g65kyl4yzbzcp81.png" alt="GA manual trigger"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the deploy workflow is done, the application is set up on the EKS cluster. The only thing left is to check the ingress mapping to the network load balancer and create two A records in the Route 53 hosted zone. This will allow you to access the &lt;strong&gt;vote&lt;/strong&gt; and &lt;strong&gt;result&lt;/strong&gt; interfaces using your domain name.&lt;/li&gt;
&lt;li&gt;Run the following commands on your local terminal (where you configured the AWS CLI and kubectl), about 5 minutes after the deploy workflow completes.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks update-kubeconfig --name k8s-voting-app --region us-east-1
kubectl describe ingress basic-routing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88o4krr0req5bt00nvxa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88o4krr0req5bt00nvxa.png" alt="Ingress LB mapping"&gt;&lt;/a&gt;&lt;br&gt;
Note that the &lt;strong&gt;Address&lt;/strong&gt; field is updated with the Network Load Balancer URL.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After you have verified that the network load balancer is correctly mapped to the ingress host URLs, create two A records in the Route 53 hosted zone, as shown below:&lt;/li&gt;
&lt;/ul&gt;
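&lt;p&gt;If you prefer the CLI to the console, the same alias A record can be created with &lt;strong&gt;aws route53 change-resource-record-sets&lt;/strong&gt; and a change batch like this (all IDs and the load balancer DNS name below are placeholders; note that the alias &lt;strong&gt;HostedZoneId&lt;/strong&gt; must be the load balancer’s own canonical hosted zone ID, not your domain’s):&lt;/p&gt;

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "vote.yourdomainname",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ZXXXXXXXXXXXXX",
          "DNSName": "your-nlb-dns-name.elb.us-east-1.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```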

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm3mxi8codemsy40qu8na.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm3mxi8codemsy40qu8na.jpg" alt="Route53 A records"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For each A record, select the network load balancer as the alias target, as shown below:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyzfygxvap5a1zyhsd69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyzfygxvap5a1zyhsd69.png" alt="A record LB mapping"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now wait about 15 minutes for the DNS settings to propagate. After that, you can access the vote and result user interfaces using your domain name. They should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw46v73bc3kfouhbvc8hr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw46v73bc3kfouhbvc8hr.jpg" alt="vote screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmeksxsk0dyw4qtgfsa00.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmeksxsk0dyw4qtgfsa00.jpg" alt="result screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test your application by voting from different browsers and watch how the CATS and DOGS percentages change on the result screen.&lt;/li&gt;
&lt;li&gt;Once you have tested the application, it’s time to delete the entire infrastructure to avoid incurring unnecessary cost.&lt;/li&gt;
&lt;li&gt;Start by deleting the three entries in the Route 53 hosted zone. These are the &lt;strong&gt;two A&lt;/strong&gt; records and the &lt;strong&gt;CNAME&lt;/strong&gt; record.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6c0iz8ji4ldmfrkkg08n.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6c0iz8ji4ldmfrkkg08n.jpg" alt="Delete route53 records"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delete the Network Load Balancer from the EC2 Load Balancer section.&lt;/li&gt;
&lt;li&gt;Next, you can run the command &lt;strong&gt;terraform destroy&lt;/strong&gt; to destroy the entire infrastructure that you created with Terraform.&lt;/li&gt;
&lt;li&gt;After that, also delete the ECR repository, S3 bucket, and DynamoDB table that you created manually.&lt;/li&gt;
&lt;li&gt;Finally, you should check all the AWS services that you used and make sure that they are destroyed and no longer available. This will prevent any unwanted charges.&lt;/li&gt;
&lt;li&gt;You also need to restore your domain registrar NS settings to the previous ones. This will disconnect your domain name from AWS and make it available for other purposes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hope you enjoyed the article and learned something new. Please tell me how it went for you, or ask me for help if you faced any difficulties. I would love to hear your feedback.&lt;/p&gt;

&lt;p&gt;#kubernetes #k8s #github #githubactions #cicd #aws #eks #ecr #python #nodejs #redis #postgresql #devops #containerization #terraform&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>githubactions</category>
      <category>eks</category>
      <category>terraform</category>
    </item>
  </channel>
</rss>
