<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kelechi Edeh</title>
    <description>The latest articles on Forem by Kelechi Edeh (@kelechiedeh).</description>
    <link>https://forem.com/kelechiedeh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2799255%2Ffbaaddab-3857-48b0-ad59-534f02740fce.jpeg</url>
      <title>Forem: Kelechi Edeh</title>
      <link>https://forem.com/kelechiedeh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kelechiedeh"/>
    <language>en</language>
    <item>
      <title>Understanding Docker, Containers, and How to Dockerize Your Application</title>
      <dc:creator>Kelechi Edeh</dc:creator>
      <pubDate>Sat, 12 Jul 2025 15:32:21 +0000</pubDate>
      <link>https://forem.com/kelechiedeh/understanding-docker-containers-and-how-to-dockerize-your-application-17lj</link>
      <guid>https://forem.com/kelechiedeh/understanding-docker-containers-and-how-to-dockerize-your-application-17lj</guid>
      <description>&lt;p&gt;In today's software development world, the shift from monolithic to microservices architecture has revolutionized how we build and deploy applications. At the heart of this evolution is Docker a platform that enables developers to package, ship, and run applications in lightweight, portable containers.&lt;/p&gt;

&lt;p&gt;This article provides a comprehensive overview of Docker, explains what containers are, and walks through the process of dockerizing an application with practical examples.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is Docker?
&lt;/h3&gt;

&lt;p&gt;Docker is an open-source platform that allows developers to automate the deployment of applications inside lightweight, portable containers. These containers run consistently across any environment—whether it's a developer's laptop, a test server, or a production cloud environment.&lt;/p&gt;

&lt;p&gt;Docker uses containerization technology to package an application with all its dependencies, configuration files, libraries, and binaries, ensuring it will run the same regardless of where it's deployed.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Are Containers?
&lt;/h3&gt;

&lt;p&gt;A container is a standard unit of software that encapsulates an application and all its dependencies so it can run quickly and reliably across different environments.&lt;/p&gt;

&lt;p&gt;Unlike virtual machines (VMs), containers do not include a full operating system. They share the host OS kernel and isolate the application processes. This makes them more lightweight and faster to start compared to VMs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Containers vs Virtual Machines
&lt;/h3&gt;

&lt;p&gt;Here’s a quick comparison between &lt;strong&gt;Virtual Machines&lt;/strong&gt; and &lt;strong&gt;Containers&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Feature&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Virtual Machines&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Containers&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OS&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Includes guest OS&lt;/td&gt;
&lt;td&gt;Shares host OS kernel&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Resource Usage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Heavy (more memory &amp;amp; CPU)&lt;/td&gt;
&lt;td&gt;Lightweight&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Boot Time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Minutes&lt;/td&gt;
&lt;td&gt;Seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Portability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Less portable&lt;/td&gt;
&lt;td&gt;Highly portable&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Docker Architecture
&lt;/h3&gt;

&lt;p&gt;Docker has five major components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Docker Client – CLI tool (docker) that users interact with.&lt;/li&gt;
&lt;li&gt;Docker Daemon – Runs in the background to manage containers.&lt;/li&gt;
&lt;li&gt;Docker Images – Read-only templates used to create containers.&lt;/li&gt;
&lt;li&gt;Docker Containers – Running instances of Docker images.&lt;/li&gt;
&lt;li&gt;Docker Registry – A repository for Docker images (e.g., Docker Hub).&lt;/li&gt;
&lt;/ol&gt;
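
&lt;p&gt;As a quick illustration of how these components interact, pulling and running a public image (here &lt;code&gt;nginx&lt;/code&gt;, used only as an example) exercises the client, the daemon, the registry, an image, and a container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The client sends each command to the daemon
docker pull nginx     # daemon fetches the image from Docker Hub (registry)
docker run -d nginx   # daemon creates a container from the image
docker ps             # list running containers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;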

&lt;h3&gt;
  
  
  Why Use Docker?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Consistency across environments&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rapid deployment and scalability&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Isolation of applications&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Efficient CI/CD integration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Simplified dependency management&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Microservices support&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Setting Up Docker
&lt;/h3&gt;

&lt;p&gt;Install Docker from the &lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;official website&lt;/a&gt; or use your system’s package manager:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ubuntu: &lt;code&gt;sudo apt install docker.io&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;macOS: Use Docker Desktop&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Windows: Use Docker Desktop (WSL 2 backend recommended)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
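
&lt;p&gt;To confirm the daemon itself is working (not just the CLI), you can run the standard &lt;code&gt;hello-world&lt;/code&gt; image as an end-to-end check:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;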



&lt;h3&gt;
  
  
  Dockerizing an Application
&lt;/h3&gt;

&lt;p&gt;To show you how to dockerize an application, I have dockerized a Prime Video clone built with React. The original frontend project can be found here: &lt;a href="https://github.com/NikhilManglik/Prime-Video-Clone/tree/main" rel="noopener noreferrer"&gt;Prime Video Clone on GitHub&lt;/a&gt;. My Docker setup allows the app to run consistently across environments, making it production-ready and easily deployable.&lt;/p&gt;

&lt;h4&gt;
  
  
  Create a Dockerfile
&lt;/h4&gt;

&lt;p&gt;A Dockerfile is a text file that contains a set of instructions Docker uses to build a Docker image. Think of it as a blueprint for packaging your application and its environment (OS, libraries, dependencies, etc.) into a portable container.&lt;/p&gt;

&lt;p&gt;When you run &lt;code&gt;docker build&lt;/code&gt;, Docker reads the Dockerfile line by line, executing each instruction to assemble the final image.&lt;/p&gt;

&lt;p&gt;Inside your project root (&lt;code&gt;prime-video-clone/&lt;/code&gt;), create a &lt;code&gt;Dockerfile&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use official Node.js base image for build step
FROM node:20-alpine

# Set working directory
WORKDIR /app

# Copy package files and install dependencies
COPY package*.json ./
RUN npm install


# Copy the rest of the source code
COPY . .

# Expose port 3000
EXPOSE 3000

# Start app
CMD ["npm", "start"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Add a .dockerignore File
&lt;/h3&gt;

&lt;p&gt;Exclude unnecessary files from your Docker build context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_modules
build
.dockerignore
Dockerfile
.git
.gitignore

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Build the Docker Image
&lt;/h3&gt;

&lt;p&gt;Open your terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t kelzceana/prime-video-appe .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4i77a5uq2azxh33au4te.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4i77a5uq2azxh33au4te.png" alt=" " width="800" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Run the Container
&lt;/h3&gt;

&lt;p&gt;To run the app locally on port 3000:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -p 3000:3000 kelzceana/prime-video-app

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuwo3ulmozhz9ieatcal7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuwo3ulmozhz9ieatcal7.png" alt=" " width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kelzceana/docker-projects/tree/main/Prime-Video-Clone" rel="noopener noreferrer"&gt;code repository&lt;/a&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Dockerizing your Prime Video clone not only improves the developer experience but also prepares your app for real-world deployment scenarios. You now have a portable, production-ready version of your React frontend, all in a single container.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Step-by-Step Guide: Creating an Amazon EKS Cluster Using Terraform</title>
      <dc:creator>Kelechi Edeh</dc:creator>
      <pubDate>Mon, 07 Jul 2025 21:44:00 +0000</pubDate>
      <link>https://forem.com/kelechiedeh/step-by-step-guide-creating-an-amazon-eks-cluster-using-terraform-5204</link>
      <guid>https://forem.com/kelechiedeh/step-by-step-guide-creating-an-amazon-eks-cluster-using-terraform-5204</guid>
      <description>&lt;p&gt;Manually provisioning cloud infrastructure can be repetitive and error-prone. Tools like Terraform allow us to define our infrastructure as code, making deployments repeatable, auditable, and scalable.&lt;/p&gt;

&lt;p&gt;In this article, I’ll walk you through how I created a production-ready Amazon EKS cluster using Terraform, AWS, and two powerful open-source modules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/latest" rel="noopener noreferrer"&gt;terraform-aws-modules/vpc/aws&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest" rel="noopener noreferrer"&gt;terraform-aws-modules/eks/aws&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  What is Terraform?
&lt;/h1&gt;

&lt;p&gt;Terraform is an open-source Infrastructure as Code (IaC) tool developed by HashiCorp. It allows you to define, provision, and manage cloud infrastructure using declarative configuration files. Rather than clicking through web consoles, Terraform empowers you to codify your infrastructure and manage it just like your application code with versioning, collaboration, and automation.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why Use Terraform for AWS EKS?
&lt;/h1&gt;

&lt;p&gt;Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service that simplifies running Kubernetes on AWS without the operational overhead of managing the control plane.&lt;/p&gt;

&lt;p&gt;Provisioning EKS manually can be complex due to the number of components involved (VPCs, subnets, IAM roles, node groups, etc.). Terraform removes this complexity by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enabling repeatable and auditable deployments.&lt;/li&gt;
&lt;li&gt;Simplifying dependency management between AWS resources.&lt;/li&gt;
&lt;li&gt;Integrating with CI/CD pipelines for automated infrastructure changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;p&gt;Before creating an EKS cluster, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account and credentials configured locally&lt;/li&gt;
&lt;li&gt;Terraform installed (terraform -v)&lt;/li&gt;
&lt;li&gt;AWS CLI installed and configured (aws configure)&lt;/li&gt;
&lt;li&gt;Basic knowledge of Terraform syntax&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Project Structure
&lt;/h1&gt;

&lt;p&gt;To keep things clean and efficient, I used only three main files to deploy both the networking (VPC) and the EKS cluster, leveraging the official Terraform modules for best practices.&lt;/p&gt;

&lt;p&gt;Here’s what my final project structure looks like:&lt;br&gt;
&lt;code&gt;├── eks.tf&lt;br&gt;
├── provider.tf&lt;br&gt;
├── terraform.tfstate&lt;br&gt;
├── terraform.tfvars&lt;br&gt;
└── vpc.tf&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
By keeping networking and compute separate, I can manage, extend, or even reuse each part of the infrastructure more easily.&lt;/p&gt;
&lt;h1&gt;
  
  
  Networking with terraform-aws-modules/vpc/aws
&lt;/h1&gt;

&lt;p&gt;In &lt;code&gt;vpc.tf&lt;/code&gt;, I used the terraform-aws-modules/vpc/aws module to create a complete VPC setup with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public and private subnets across multiple AZs&lt;/li&gt;
&lt;li&gt;A NAT Gateway&lt;/li&gt;
&lt;li&gt;Required tags for EKS subnet auto-discovery&lt;/li&gt;
&lt;li&gt;DNS and VPN gateway support (optional for hybrid setups)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s the &lt;code&gt;vpc.tf&lt;/code&gt; configuration, including the provider and input variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}


variable "vpc_cidr_blocks" {}
variable "public_subnet_cidr_blocks" {}
variable "private_subnet_cidr_blocks" {}

data "aws_availability_zones" "azs" {}

module "my-eks-cluster-vpc" {
  source = "terraform-aws-modules/vpc/aws"
  version = "5.1.2"

  name = "my-vpc"
  cidr = var.vpc_cidr_blocks
  private_subnets = var.private_subnet_cidr_blocks
  public_subnets = var.public_subnet_cidr_blocks

  azs = data.aws_availability_zones.azs.names


  enable_nat_gateway = true
  enable_vpn_gateway = true
  enable_dns_hostnames = true

  tags = {
    "kubernetes.io/cluster/my-eks-cluster" = "shared"

  }

  public_subnet_tags = {
    "kubernetes.io/cluster/my-eks-cluster" = "shared"
    "kubernetes.io/role/elb" = 1
  }
  private_subnet_tags = {
    "kubernetes.io/cluster/my-eks-cluster" = "shared"
    "kubernetes.io/role/internal-elb" = 1
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why tagging matters:&lt;/strong&gt;&lt;br&gt;
EKS needs to know which subnets it can use for placing worker nodes and load balancers. The tags &lt;code&gt;kubernetes.io/cluster/&amp;lt;name&amp;gt;&lt;/code&gt; and &lt;code&gt;kubernetes.io/role/internal-elb&lt;/code&gt; or &lt;code&gt;elb&lt;/code&gt; signal to AWS which subnets are eligible.&lt;/p&gt;
&lt;h1&gt;
  
  
  Deploying EKS with terraform-aws-modules/eks/aws
&lt;/h1&gt;

&lt;p&gt;In &lt;code&gt;eks.tf&lt;/code&gt;, I used the terraform-aws-modules/eks/aws module to spin up the Kubernetes control plane and a managed node group.&lt;/p&gt;

&lt;p&gt;Here’s what it includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS control plane with the latest version&lt;/li&gt;
&lt;li&gt;IAM roles and security groups (auto-generated)&lt;/li&gt;
&lt;li&gt;Managed node group with autoscaling&lt;/li&gt;
&lt;li&gt;Automatic subnet and VPC discovery from the vpc module
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.17.2"

  cluster_name = "my-eks-cluster"
  cluster_version = "1.30"

  subnet_ids = module.my-eks-cluster-vpc.private_subnets
  vpc_id = module.my-eks-cluster-vpc.vpc_id

  tags = {
    env = "dev"
  }

  #node group configuration
   eks_managed_node_groups = {
    dev = {
      min_size     = 1
      max_size     = 3
      desired_size = 2

      instance_types = ["t2.small"]
    }
  }


}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h1&gt;
  
  
  Using terraform.tfvars for Inputs
&lt;/h1&gt;

&lt;p&gt;To separate logic from data, I defined my VPC CIDR and subnet ranges in terraform.tfvars like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vpc_cidr_blocks = "10.0.0.0/16"
private_subnet_cidr_blocks = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnet_cidr_blocks = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes the code reusable: just update the &lt;code&gt;.tfvars&lt;/code&gt; file to spin up a new environment (e.g., staging, production, dev).&lt;/p&gt;
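
&lt;p&gt;With the three files and &lt;code&gt;terraform.tfvars&lt;/code&gt; in place, the standard workflow provisions everything (&lt;code&gt;terraform.tfvars&lt;/code&gt; is loaded automatically):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init    # download providers and modules
terraform plan    # preview the changes
terraform apply   # create the VPC and EKS cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;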

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmu55szn58f4kv7rq92k3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmu55szn58f4kv7rq92k3.png" alt=" " width="800" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2ci7wvaiuvtlziqw2m6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2ci7wvaiuvtlziqw2m6.png" alt=" " width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  S3 Native State Locking
&lt;/h1&gt;

&lt;p&gt;One nice bonus I added: S3 native locking. In previous Terraform versions, you needed a DynamoDB table for state locking, but since Terraform v1.10 you can set &lt;code&gt;use_lockfile = true&lt;/code&gt; to enable native state locking directly in S3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "terraform_state" {
  bucket = "terraform-state-bucker12345"

  lifecycle {
    prevent_destroy = false
  }
}

terraform {  
  backend "s3" {  
    bucket       = "terraform-state-bucker12345"  
    key          = "dev/terraform-state-file"  
    region       = "us-east-1"  
    encrypt      = true  
    use_lockfile = true  # S3 native locking
  }  
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This keeps my state safe from concurrent edits without needing a separate DynamoDB table.&lt;/p&gt;
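
&lt;p&gt;Note that after adding or changing the backend block, Terraform asks you to re-initialize so any existing local state can be migrated into the bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init -migrate-state
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;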

&lt;h1&gt;
  
  
  Wrapping up
&lt;/h1&gt;

&lt;p&gt;In this article, I demonstrated how to provision a production-ready Amazon EKS cluster using just a few Terraform files and the official AWS modules for VPC and EKS.&lt;/p&gt;

&lt;p&gt;By leveraging:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The terraform-aws-modules/vpc/aws module for a highly available and EKS-compatible VPC,&lt;/li&gt;
&lt;li&gt;The terraform-aws-modules/eks/aws module to simplify Kubernetes cluster provisioning,&lt;/li&gt;
&lt;li&gt;And S3 native locking to manage Terraform state securely without DynamoDB,&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I was able to deploy a scalable and maintainable Kubernetes environment using clean, modular infrastructure-as-code. This setup is not only efficient and reusable but also adheres to AWS best practices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kelzceana/terraform-projects/tree/main/deploy-eks-cluster" rel="noopener noreferrer"&gt;code repo&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Creating a Secured Kubernetes Cluster on Amazon EKS</title>
      <dc:creator>Kelechi Edeh</dc:creator>
      <pubDate>Sat, 05 Jul 2025 03:27:48 +0000</pubDate>
      <link>https://forem.com/kelechiedeh/creating-a-secured-kubernetes-cluster-on-amazon-eks-4ffj</link>
      <guid>https://forem.com/kelechiedeh/creating-a-secured-kubernetes-cluster-on-amazon-eks-4ffj</guid>
      <description>&lt;p&gt;Amazon Elastic Kubernetes Service (EKS) makes it easier to deploy, manage, and scale containerized applications using Kubernetes on AWS. But for production environments, ensuring the security of your cluster is critical especially when deploying it inside a private network.&lt;/p&gt;

&lt;p&gt;In this article, I document how I deployed a fully secured Amazon EKS (Elastic Kubernetes Service) cluster inside private subnets using AWS best practices. My approach ensured that the Kubernetes control plane, worker nodes, and sensitive services were all protected from public exposure while still being fully manageable and scalable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Considerations Before You Begin
&lt;/h2&gt;

&lt;p&gt;Before diving into the technical steps, I aligned my infrastructure design with core security principles in AWS and Kubernetes. Here are the key things I considered while designing the cluster:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;VPC and Subnet Design&lt;/strong&gt;: I ensured the EKS nodes were launched in private subnets, and only necessary resources like the bastion host resided in public subnets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM and Identity Management&lt;/strong&gt;: I created distinct IAM roles for the EKS control plane, node group, and EC2 bastion host using the principle of least privilege.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node Security&lt;/strong&gt;: Nodes were isolated, updated, and assigned minimal permissions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bastion Host for Access&lt;/strong&gt;: I deployed a bastion host as a secure jump box, with SSH restricted to my IP address.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Network Policies&lt;/strong&gt;: I applied policies to control pod-to-pod communication within the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets and Configuration Management&lt;/strong&gt;: Kubernetes secrets were encrypted using AWS KMS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging and Monitoring&lt;/strong&gt;: I enabled logging to Amazon CloudWatch and audit trails via CloudTrail and GuardDuty.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Control Plane Access&lt;/strong&gt;: I limited EKS API server access to specific IPs and used RBAC for access control.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use of IRSA (IAM Roles for Service Accounts)&lt;/strong&gt;: This ensured fine-grained pod-level permissions without over-privileging node IAM roles.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cluster Updates and Maintenance&lt;/strong&gt;: I deployed the latest Kubernetes version and hardened AMIs, with plans to automate updates.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Steps I Took to Deploy the Cluster
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Step 1: Created a Custom VPC with Public and Private Subnets
&lt;/h2&gt;

&lt;p&gt;I began by creating a custom VPC with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2 public subnets: to host internet-facing resources like a bastion host.&lt;/li&gt;
&lt;li&gt;2 private subnets: dedicated to hosting the EKS worker nodes and the control plane.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each pair of public/private subnets was spread across two Availability Zones to ensure high availability. I also ensured the private subnets had no direct route to the internet, and only communicated outward through NAT Gateways in the public subnets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1e7vgyr1jsjktqxx97hm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1e7vgyr1jsjktqxx97hm.png" alt="Image description" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Create IAM Role for the EKS Cluster Control Plane
&lt;/h2&gt;

&lt;p&gt;You need an IAM role that allows EKS to create and manage resources on your behalf. To allow EKS to interact with other AWS services, I created an IAM role for the EKS control plane with the &lt;strong&gt;AmazonEKSClusterPolicy&lt;/strong&gt; and &lt;strong&gt;AmazonEKSServicePolicy&lt;/strong&gt; attached.&lt;/p&gt;

&lt;p&gt;This IAM role is specified during cluster creation to ensure the control plane has the necessary permissions to manage resources securely.&lt;br&gt;
To create this IAM role, follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the IAM Roles section in the AWS Console.&lt;/li&gt;
&lt;li&gt;Click on Create role.&lt;/li&gt;
&lt;li&gt;Under Trusted entity type, choose AWS service.&lt;/li&gt;
&lt;li&gt;For the use case, select EKS - Cluster&lt;/li&gt;
&lt;li&gt;Attach the policies to the role&lt;/li&gt;
&lt;li&gt;Review and click &lt;strong&gt;Create&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllmcuwtu9xhk6brgxgzn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllmcuwtu9xhk6brgxgzn.png" alt="Image description" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3: Create the EKS Cluster
&lt;/h2&gt;

&lt;p&gt;I created the EKS cluster using the AWS Management Console:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to EKS &amp;gt; Clusters&lt;/li&gt;
&lt;li&gt;Click Create Cluster&lt;/li&gt;
&lt;li&gt;Under Name, enter secure-eks-cluster&lt;/li&gt;
&lt;li&gt;Select the Kubernetes version&lt;/li&gt;
&lt;li&gt;Under Cluster Service Role, select the IAM role you created earlier &lt;/li&gt;
&lt;li&gt;Choose VPC and subnets created in Step 1&lt;/li&gt;
&lt;li&gt;Enable Private access to the Kubernetes API server and disable public access in the cluster endpoint access&lt;/li&gt;
&lt;li&gt;Enable control plane logging (audit, API, authenticator, etc.)&lt;/li&gt;
&lt;li&gt;For Amazon EKS add-ons, I selected the CoreDNS, Amazon VPC CNI, and kube-proxy add-ons&lt;/li&gt;
&lt;li&gt;Click Create&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Creating the EKS cluster may take several minutes. Wait for the cluster status to change to "Active."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzxgrq9pi519shujs5bu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzxgrq9pi519shujs5bu.png" alt="Image description" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 4: Create IAM Role for Node Group
&lt;/h2&gt;

&lt;p&gt;Creating a node group requires an IAM role for the nodes to assume.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to IAM &amp;gt; Roles &amp;gt; Create Role&lt;/li&gt;
&lt;li&gt;Choose EC2&lt;/li&gt;
&lt;li&gt;Attach:

&lt;ul&gt;
&lt;li&gt;AmazonEKSWorkerNodePolicy&lt;/li&gt;
&lt;li&gt;AmazonEKS_CNI_Policy&lt;/li&gt;
&lt;li&gt;AmazonEC2ContainerRegistryReadOnly&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Name the role&lt;/li&gt;
&lt;li&gt;Click create&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Step 5: Create Security Group for Node Group
&lt;/h2&gt;

&lt;p&gt;Before creating your node group, it's important to set up a security group that defines how the nodes can communicate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Create the Security Group&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to EC2 &amp;gt; Security Groups&lt;/li&gt;
&lt;li&gt;Click Create Security Group&lt;/li&gt;
&lt;li&gt;Name it &lt;/li&gt;
&lt;li&gt;Select your VPC&lt;/li&gt;
&lt;li&gt;Under Inbound rules, add a rule with Type: SSH and Source: My IP, so only your machine can SSH into the nodes. You can add further rules for traffic within the cluster and tighten them based on your workloads.&lt;/li&gt;
&lt;li&gt;Under Outbound rules, keep the default: All traffic allowed.&lt;/li&gt;
&lt;li&gt;Click Create security group&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will associate this security group with the node group in the next step.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 6: Create Node Group in EKS Cluster
&lt;/h2&gt;

&lt;p&gt;A node group is a group of EC2 instances that supply compute capacity to your Amazon EKS cluster. Multiple node groups can be added to the cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the EKS Console, go to your cluster &amp;gt; Compute tab&lt;/li&gt;
&lt;li&gt;Click Add Node Group&lt;/li&gt;
&lt;li&gt;Enter name of node group&lt;/li&gt;
&lt;li&gt;Select the IAM role created in step 4&lt;/li&gt;
&lt;li&gt;Choose instance type (e.g., t3.medium), desired size, min and max nodes&lt;/li&gt;
&lt;li&gt;Select the private subnets&lt;/li&gt;
&lt;li&gt;Configure remote access to node and select your valid key pair&lt;/li&gt;
&lt;li&gt;Select the security group created in step 5&lt;/li&gt;
&lt;li&gt;Review and Create&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodryj6bs9vkbk2dydijf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodryj6bs9vkbk2dydijf.png" alt="Image description" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 7: Launch a Bastion Host
&lt;/h2&gt;

&lt;p&gt;A bastion host (also called a jump box) is a special-purpose EC2 instance used to securely access resources in a private network such as your EKS worker nodes or Kubernetes control plane without exposing those resources to the public internet.&lt;/p&gt;

&lt;p&gt;The EKS cluster above was created with private endpoint access only. This means it cannot be reached from your local machine or the public internet. A bastion host solves this by acting as a secure intermediary.&lt;/p&gt;

&lt;p&gt;To access your private EKS cluster, create a bastion host in a public subnet.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to EC2 &amp;gt; Launch Instance&lt;/li&gt;
&lt;li&gt;Select an AMI (I selected the Amazon Linux 2023 AMI)&lt;/li&gt;
&lt;li&gt;Select an instance type (t2.micro - Free Tier)&lt;/li&gt;
&lt;li&gt;Select key pair for SSH access&lt;/li&gt;
&lt;li&gt;Choose your custom VPC and public subnet created in step 1&lt;/li&gt;
&lt;li&gt;Create a security group allowing only your IP on port 22&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjvy6f6sb8ltkawh7iiw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjvy6f6sb8ltkawh7iiw.png" alt="Image description" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 8: Configure AWS CLI in Bastion
&lt;/h2&gt;

&lt;p&gt;Once the Bastion host instance is running, you can securely connect to your private EKS resources using the bastion host. By establishing an SSH session to the bastion, you'll gain command-line access within the VPC, allowing you to run kubectl and other AWS CLI tools without exposing your Kubernetes API or nodes to the internet.&lt;/p&gt;

&lt;p&gt;SSH into the instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i my-key.pem ec2-user@&amp;lt;bastion-public-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we will configure the AWS CLI with the credentials and preferences it needs to interact with your AWS account.&lt;br&gt;
Running &lt;code&gt;aws configure&lt;/code&gt; prompts you for four pieces of information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Access Key ID: This is part of your credentials used to authenticate your identity with AWS services.&lt;/li&gt;
&lt;li&gt;AWS Secret Access Key: A secret counterpart to your access key. It should be kept safe and never exposed in public code or documentation.&lt;/li&gt;
&lt;li&gt;Default Region Name: Specifies the AWS region you want the CLI to interact with by default (e.g., us-east-1, us-west-2). This should match the region where your EKS cluster is deployed.&lt;/li&gt;
&lt;li&gt;Default Output Format: Controls how the CLI formats the output. Common options are:

&lt;ul&gt;
&lt;li&gt;json (default)&lt;/li&gt;
&lt;li&gt;table (human-readable)&lt;/li&gt;
&lt;li&gt;text (compact, script-friendly)
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWS Access Key ID [None]: ****************
AWS Secret Access Key [None]: ***********************
Default region name [None]: us-east-1
Default output format [None]: json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After this, the CLI stores the credentials in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/.aws/credentials
~/.aws/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These files allow AWS CLI and tools like kubectl (via aws eks update-kubeconfig) to authenticate and communicate securely with your AWS resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 9: Configure kubectl in Bastion Host
&lt;/h2&gt;

&lt;p&gt;Once the AWS CLI was configured, I installed kubectl. The kubectl command-line tool is the main tool you will use to manage resources within your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;To install kubectl on Linux for Kubernetes 1.33, I followed these steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.33.0/2025-05-01/bin/linux/amd64/kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Download the checksum file for your binary, then verify the SHA-256 checksum:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.33.0/2025-05-01/bin/linux/amd64/kubectl.sha256
sha256sum -c kubectl.sha256
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply execute permissions to the binary.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x ./kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the binary to a folder in your PATH&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p $HOME/bin &amp;amp;&amp;amp; cp ./kubectl $HOME/bin/kubectl &amp;amp;&amp;amp; export PATH=$HOME/bin:$PATH
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can explore the full step-by-step process &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html#linux_amd64_kubectl" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 10: Create a Kubeconfig File
&lt;/h2&gt;

&lt;p&gt;A kubeconfig file is a configuration file used by the kubectl command-line tool to determine how to access and authenticate with a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;I created a kubeconfig file automatically using the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks update-kubeconfig --region region-code --name my-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace region-code with the &lt;code&gt;AWS Region&lt;/code&gt; that your cluster is in and replace &lt;code&gt;my-cluster&lt;/code&gt; with the name of your cluster.&lt;/p&gt;

&lt;p&gt;Test your configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-9dhsx             2/2     Running   0          90m
kube-system   aws-node-r2sxb             2/2     Running   0          90m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can read more on creating a kubeconfig file &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important: Allow HTTPS Traffic from Bastion to EKS&lt;/strong&gt;&lt;br&gt;
Action: Update the security group associated with your EKS cluster to allow inbound HTTPS traffic (TCP port 443) from the bastion host’s security group.&lt;/p&gt;

&lt;p&gt;Why?&lt;br&gt;
The EKS API server listens on port 443 for secure communication. Since my cluster is configured with private access only, my kubectl commands (executed from the bastion host) need permission to reach the API server. By allowing traffic on port 443 from the bastion host's security group, I enabled secure cluster management without exposing the API publicly.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 11: Configure IAM Roles for EC2 instances
&lt;/h2&gt;

&lt;p&gt;Initially, I used long-term IAM user credentials inside Kubernetes pods to access AWS services like S3. However, this is not considered a best practice, especially in production environments, because it introduces risks such as credential leakage and over-privileged access.&lt;/p&gt;

&lt;p&gt;To follow security best practices, I migrated to using IAM Roles. This allows specific pods to securely assume fine-grained IAM roles without storing access keys inside the container.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create IAM role for bastion host instance&lt;/li&gt;
&lt;li&gt;Go to EC2 &amp;gt; Instances &amp;gt; Bastion Host &amp;gt; Actions &amp;gt; Security&lt;/li&gt;
&lt;li&gt;Click Modify IAM role&lt;/li&gt;
&lt;li&gt;Select the role created for the bastion host instance&lt;/li&gt;
&lt;li&gt;Click Update IAM role&lt;/li&gt;
&lt;/ul&gt;
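
&lt;p&gt;The same attachment can be done with the AWS CLI (a sketch; the instance ID and instance profile name below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 associate-iam-instance-profile \
  --instance-id i-xxxxxxxxxxxx \
  --iam-instance-profile Name=bastion-host-role
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;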

&lt;p&gt;Even after assigning an IAM role to your bastion host EC2 instance via the AWS Console, the AWS CLI may still use old IAM user credentials stored in ~/.aws/credentials and ~/.aws/config. This happens because the AWS CLI defaults to credentials in those files before checking for instance metadata.&lt;/p&gt;

&lt;p&gt;To fix this issue, I renamed the existing AWS CLI credential files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mv ~/.aws/credentials ~/.aws/credentials.bak
mv ~/.aws/config ~/.aws/config.bak
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, I ran an AWS CLI command without stored credentials:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sts get-caller-identity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb41xvim6u48fsxe8j5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb41xvim6u48fsxe8j5c.png" alt="Image description" width="800" height="75"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Deploying a fully private and secure Amazon EKS cluster required careful planning, precision, and a solid grasp of AWS infrastructure and Kubernetes operations. From designing the VPC and locking down IAM roles, to configuring private API access and implementing IRSA, every decision was guided by security best practices.&lt;/p&gt;

&lt;p&gt;Managing access through a bastion host, fine-tuning security groups, and resolving credential conflicts reinforced the importance of automation, least privilege, and clean access boundaries. &lt;/p&gt;

&lt;p&gt;If you're interested in automating this process with Terraform, stay tuned for my next article where I’ll share how I built a fully automated version of this architecture!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Adding CI/CD Integration to My Cloud Resume Challenge</title>
      <dc:creator>Kelechi Edeh</dc:creator>
      <pubDate>Tue, 10 Jun 2025 02:14:50 +0000</pubDate>
      <link>https://forem.com/kelechiedeh/adding-cicd-integration-to-my-cloud-resume-challenge-58j9</link>
      <guid>https://forem.com/kelechiedeh/adding-cicd-integration-to-my-cloud-resume-challenge-58j9</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshbrlhcm1mh6bkw37cne.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshbrlhcm1mh6bkw37cne.png" alt="Image description" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As part of my Cloud Resume Challenge, I wanted to go beyond just creating a resume website on AWS by implementing a Continuous Integration and Continuous Deployment (CI/CD) pipeline. This added a new layer of automation, efficiency, and reliability to the project, ensuring that every update I make to my resume site goes live seamlessly. In this post, I’ll walk you through my process for setting up a CI/CD workflow using GitHub Actions to automate updates to my resume, hosted on AWS, and to handle CloudFront cache invalidation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why CI/CD for a Resume Site?
&lt;/h3&gt;

&lt;p&gt;Adding CI/CD to a resume website might seem like overkill at first glance, but here’s why it’s a valuable addition:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Efficiency&lt;/strong&gt;: I can push updates directly from GitHub, avoiding the need for manual uploads or deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reliability&lt;/strong&gt;: Automated testing ensures that any changes don’t inadvertently break the site.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Learning Opportunity&lt;/strong&gt;: This was a chance to practice CI/CD with AWS, a skill highly relevant to real-world DevOps and cloud-based projects.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  My CI/CD Goals for the Project
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automate Deployments to S3&lt;/strong&gt;: Whenever I push updates to my GitHub repository, the workflow should sync the changes to my S3 bucket, which hosts my static website.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Invalidate CloudFront Cache&lt;/strong&gt;: After the files are updated, the CloudFront distribution’s cache should be invalidated to ensure that visitors see the latest content.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integrate Testing&lt;/strong&gt;: Though simple, my code includes some basic HTML, CSS, and JavaScript, so any critical testing should pass before deploying.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Setting Up the CI/CD Pipeline
&lt;/h3&gt;

&lt;p&gt;For this project, I used &lt;a href="https://github.com/features/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt; as my CI/CD tool due to its seamless integration with GitHub repositories and support for AWS.&lt;/p&gt;

&lt;p&gt;I created a &lt;code&gt;.yml&lt;/code&gt; file in &lt;code&gt;.github/workflows&lt;/code&gt; within my repository. This file defines the entire CI/CD pipeline, divided into two main jobs: &lt;strong&gt;Deploy to S3&lt;/strong&gt; and &lt;strong&gt;Invalidate CloudFront Cache&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kelzceana/cloudresume" rel="noopener noreferrer"&gt;Code repository&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s break down what each part does:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Trigger&lt;/strong&gt;: The workflow is triggered by a push to the &lt;code&gt;master&lt;/code&gt; branch. Whenever I update my resume’s code in this branch, the workflow automatically runs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Deploy to S3 Job&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Checkout&lt;/strong&gt;: The code from the repository is checked out into the runner.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configure AWS Credentials&lt;/strong&gt;: GitHub Actions configures AWS credentials from the stored secrets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sync to S3&lt;/strong&gt;: This command uploads all the files from the repository to my S3 bucket, ensuring that any outdated files are replaced, and deleted files are removed.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Invalidate CloudFront Cache Job&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependency on Deploy Job&lt;/strong&gt;: This job depends on the successful completion of the S3 deployment. If deployment fails, cache invalidation won’t occur.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cache Invalidation&lt;/strong&gt;: This command clears the cache in CloudFront, ensuring that the updated content is available globally without delay.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
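
&lt;p&gt;Putting those pieces together, a minimal workflow file looks roughly like this. This is a sketch rather than my exact file; the bucket name, region, and secret names are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy resume site

on:
  push:
    branches: [master]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - run: aws s3 sync . s3://my-resume-bucket --delete

  invalidate:
    needs: deploy
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - run: aws cloudfront create-invalidation --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} --paths "/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;needs: deploy&lt;/code&gt; line is what enforces the dependency described above: the cache invalidation job only runs after the S3 sync succeeds.&lt;/p&gt;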

&lt;h3&gt;
  
  
  Benefits of the CI/CD Pipeline
&lt;/h3&gt;

&lt;p&gt;With this CI/CD setup, every update to my resume becomes a simple commit and push. GitHub Actions takes care of uploading to S3 and clearing the CloudFront cache. This setup has brought several benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rapid Updates&lt;/strong&gt;: I can make updates quickly without manual intervention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduced Errors&lt;/strong&gt;: Automating deployments has reduced the chance of human error.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-World DevOps Practice&lt;/strong&gt;: Setting up this CI/CD pipeline gave me valuable experience with AWS and GitHub Actions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Implementing CI/CD for my Cloud Resume Challenge has been a rewarding experience, adding both professionalism and efficiency to my project. If you’re working on a similar project, I highly recommend setting up a CI/CD pipeline with GitHub Actions. It simplifies deployment and makes it easy to keep your site updated and relevant.&lt;/p&gt;

&lt;p&gt;Thanks for reading, and happy coding!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building My Cloud Resume: A Step-by-Step Journey</title>
      <dc:creator>Kelechi Edeh</dc:creator>
      <pubDate>Tue, 10 Jun 2025 02:13:30 +0000</pubDate>
      <link>https://forem.com/kelechiedeh/building-my-cloud-resume-a-step-by-step-journey-7l7</link>
      <guid>https://forem.com/kelechiedeh/building-my-cloud-resume-a-step-by-step-journey-7l7</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2buugo2wmi33vqt73uu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2buugo2wmi33vqt73uu.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As I took on the &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/aws/" rel="noopener noreferrer"&gt;&lt;strong&gt;Cloud Resume Challenge&lt;/strong&gt;&lt;/a&gt;, I was excited about the chance to build a fully cloud-based resume from scratch. Having developed some foundational skills through my learning at &lt;strong&gt;Lighthouse Labs&lt;/strong&gt;, I knew this would be the perfect way to put my HTML skills and newfound AWS knowledge to practical use. Here’s a breakdown of the steps I’ve taken so far to make this project a reality.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Creating the HTML Resume
&lt;/h4&gt;

&lt;p&gt;Thanks to the skills I gained at &lt;a href="https://www.lighthouselabs.ca/" rel="noopener noreferrer"&gt;Lighthouse Labs&lt;/a&gt;, I was able to create an HTML resume showcasing my experience and qualifications. This static HTML file is the foundation of my cloud resume, and it’s designed to be lightweight and easily accessible.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: Registering My Domain with Porkbun
&lt;/h4&gt;

&lt;p&gt;With the resume designed, I needed a custom domain to make it look professional. I chose &lt;a href="https://porkbun.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Porkbun&lt;/strong&gt;&lt;/a&gt; for domain registration because of their competitive pricing and user-friendly interface. I registered &lt;a href="https://kelechiedeh.info" rel="noopener noreferrer"&gt;&lt;strong&gt;kelechiedeh.info&lt;/strong&gt;&lt;/a&gt; as my domain name, which aligns with my personal brand.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Uploading Static Pages to Amazon S3
&lt;/h4&gt;

&lt;p&gt;To host the HTML resume on AWS, I turned to &lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;&lt;/a&gt;. S3 is an ideal service for hosting static websites, as it provides high availability, scalability, and security. I created a new S3 bucket, configured it to host a website, and uploaded my HTML resume files to this bucket.&lt;/p&gt;
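
&lt;p&gt;The same setup can be sketched with the AWS CLI (the bucket name here is a placeholder, and the bucket must allow public reads for website hosting):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 mb s3://my-resume-bucket
aws s3 website s3://my-resume-bucket --index-document index.html --error-document error.html
aws s3 sync . s3://my-resume-bucket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;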

&lt;h4&gt;
  
  
  Step 4: Setting Up CloudFront for Content Delivery
&lt;/h4&gt;

&lt;p&gt;To improve the performance of my resume website and enhance user experience, I set up &lt;a href="https://aws.amazon.com/cloudfront/" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon CloudFront&lt;/strong&gt;&lt;/a&gt;, AWS’s content delivery network. CloudFront caches my website’s static content closer to users around the world, leading to faster load times and a more responsive site.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 5: Managing DNS with Route 53
&lt;/h4&gt;

&lt;p&gt;Next, I configured &lt;a href="https://aws.amazon.com/route53/" rel="noopener noreferrer"&gt;Amazon Route 53&lt;/a&gt; to manage the DNS for my domain. I created a hosted zone for &lt;a href="http://kelechiedeh.info" rel="noopener noreferrer"&gt;kelechiedeh.info&lt;/a&gt; and set up an alias record pointing my domain to the CloudFront distribution. Route 53 provides a reliable way to route traffic to my S3-hosted website.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 6: Securing the Site with AWS Certificate Manager
&lt;/h4&gt;

&lt;p&gt;To ensure my website is secure, I used &lt;a href="https://aws.amazon.com/certificate-manager/" rel="noopener noreferrer"&gt;&lt;strong&gt;AWS Certificate Manager (ACM)&lt;/strong&gt;&lt;/a&gt; to provision an SSL/TLS certificate for my domain. With HTTPS enabled, my visitors can be confident that their connection to my resume site is secure. ACM simplified the process of managing the certificate, and I configured CloudFront to use it, providing a secure browsing experience.&lt;/p&gt;

&lt;h4&gt;
  
  
  Final Thoughts
&lt;/h4&gt;

&lt;p&gt;These steps have transformed my HTML resume into a fully hosted, scalable, and secure cloud resume. By leveraging AWS services like S3, CloudFront, Route 53, and Certificate Manager, I’ve gained hands-on experience with cloud infrastructure, which is incredibly rewarding. The project has been an exciting learning journey, and I’m eager to continue refining my skills as I progress through the Cloud Resume Challenge.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Congratulations, you just earned a badge: So whats next?</title>
      <dc:creator>Kelechi Edeh</dc:creator>
      <pubDate>Tue, 10 Jun 2025 02:05:51 +0000</pubDate>
      <link>https://forem.com/kelechiedeh/congratulations-you-just-earned-a-badge-so-whats-next-218h</link>
      <guid>https://forem.com/kelechiedeh/congratulations-you-just-earned-a-badge-so-whats-next-218h</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbw68yivhk9dzj4z2p7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbw68yivhk9dzj4z2p7h.png" alt="Image description" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations, you've earned a badge! It's an incredible moment—the feeling of accomplishment after all the hard work, late nights, and hours of study and practice. But now that you've reached this milestone, you might be wondering: &lt;em&gt;what’s next?&lt;/em&gt; This was exactly the question I asked myself after achieving my AWS Solutions Architect - Associate certification.&lt;/p&gt;

&lt;p&gt;For me, the journey didn't stop there. I'm diving into the &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/aws/" rel="noopener noreferrer"&gt;&lt;strong&gt;Cloud Resume Challenge&lt;/strong&gt;&lt;/a&gt;, a hands-on project that puts cloud skills into real action and, ultimately, brings all the theoretical knowledge into practical application. This challenge is about building and refining a cloud-native resume website while exploring essential AWS services, infrastructure as code, and CI/CD pipelines. Join me as I navigate this next chapter, sharing my learnings, challenges, and triumphs along the way. Let's see where this journey takes us!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Setting Up a Dev Environment on Google Cloud: A Challenge Lab Walkthrough</title>
      <dc:creator>Kelechi Edeh</dc:creator>
      <pubDate>Thu, 10 Apr 2025 20:38:40 +0000</pubDate>
      <link>https://forem.com/kelechiedeh/setting-up-a-dev-environment-on-google-cloud-a-challenge-lab-walkthrough-2jci</link>
      <guid>https://forem.com/kelechiedeh/setting-up-a-dev-environment-on-google-cloud-a-challenge-lab-walkthrough-2jci</guid>
      <description>&lt;p&gt;Hey there, fellow cloud enthusiasts! 👋&lt;/p&gt;

&lt;p&gt;I recently tackled a pretty interesting Challenge Lab on Google Cloud titled "Set Up an App Dev Environment on Google Cloud"—and it was the perfect mix of practical hands-on learning and real-world cloud engineering. Whether you're prepping for certification or just trying to boost your GCP skills, this one's worth diving into.&lt;/p&gt;

&lt;p&gt;In this post, I’ll break down what the lab was about, how I approached it, and some takeaways that might help you breeze through it, too.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario: Welcome to Jooli Inc
&lt;/h2&gt;

&lt;p&gt;You’ve just landed a junior cloud engineer role at Jooli Inc. Not bad, right?&lt;/p&gt;

&lt;p&gt;Your first big task is to help the newly formed Memories team set up their environment for a photo storage and thumbnail generation app. Your responsibilities include creating storage and Pub/Sub infrastructure, deploying a Cloud Run function, and cleaning up old user access.&lt;/p&gt;

&lt;p&gt;The kicker? There are no step-by-step instructions. Just you, your GCP know-how, and a list of tasks.&lt;/p&gt;

&lt;p&gt;Let’s get into it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task 1: Create a Bucket for Photo Storage&lt;/strong&gt;&lt;br&gt;
   This is the starting point, creating a Cloud Storage bucket where uploaded images would live.&lt;br&gt;
  &lt;strong&gt;Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to Cloud Storage &amp;gt; Buckets.&lt;/li&gt;
&lt;li&gt;Click Create bucket, give it the exact name Bucket Name.&lt;/li&gt;
&lt;li&gt;Select the correct region (as specified).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also create a Cloud Storage bucket from the command line with &lt;code&gt;gcloud storage&lt;/code&gt; (the successor to the gsutil tool):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  gcloud storage buckets create gs://Bucket-Name --location=REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure to replace Bucket-Name with your desired name and REGION with the appropriate region (e.g., us-central1).&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Task 2: Create a Pub/Sub Topic&lt;/strong&gt;&lt;br&gt;
This topic will be used by the Cloud Run function to publish messages when thumbnails are created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Head to Pub/Sub &amp;gt; Topics.&lt;/li&gt;
&lt;li&gt;Click Create Topic, name it exactly Topic Name.&lt;/li&gt;
&lt;li&gt;No bells and whistles—just a straight-up topic setup.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To create a Pub/Sub topic, use the following gcloud command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud pubsub topics create Topic-Name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace Topic-Name with the desired name for the Pub/Sub topic.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Task 3: Deploy a Thumbnail Generator with Cloud Run (2nd Gen)&lt;/strong&gt;&lt;br&gt;
You’ll deploy a Cloud Run (2nd gen) function in Node.js 22 that watches for new images in the bucket, generates 64x64 thumbnails using Sharp, and sends a message to the Pub/Sub topic.&lt;/p&gt;

&lt;p&gt;Key Notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set the trigger to Cloud Storage (Object Finalized).&lt;/li&gt;
&lt;li&gt;Use Cloud Functions Framework with Node.js 22.&lt;/li&gt;
&lt;li&gt;Make sure the entry point matches your function name.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The main logic in index.js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const functions = require('@google-cloud/functions-framework');
const { Storage } = require('@google-cloud/storage');
const { PubSub } = require('@google-cloud/pubsub');
const sharp = require('sharp');

functions.cloudEvent('', async cloudEvent =&amp;gt; { // entry point: fill in the function name given in the lab
  const event = cloudEvent.data;

  console.log(`Event: ${JSON.stringify(event)}`);
  console.log(`Hello ${event.bucket}`);

  const fileName = event.name;
  const bucketName = event.bucket;
  const size = "64x64";
  const bucket = new Storage().bucket(bucketName);
  const topicName = ""; // fill in the Pub/Sub topic name given in the lab
  const pubsub = new PubSub();

  if (fileName.search("64x64_thumbnail") === -1) {
    // doesn't have a thumbnail, get the filename extension
    const filename_split = fileName.split('.');
    const filename_ext = filename_split[filename_split.length - 1].toLowerCase();
    const filename_without_ext = fileName.substring(0, fileName.length - filename_ext.length - 1); // fix sub string to remove the dot

    if (filename_ext === 'png' || filename_ext === 'jpg' || filename_ext === 'jpeg') {
      // only support png and jpg at this point
      console.log(`Processing Original: gs://${bucketName}/${fileName}`);
      const gcsObject = bucket.file(fileName);
      const newFilename = `${filename_without_ext}_64x64_thumbnail.${filename_ext}`;
      const gcsNewObject = bucket.file(newFilename);

      try {
        const [buffer] = await gcsObject.download();
        const resizedBuffer = await sharp(buffer)
          .resize(64, 64, {
            fit: 'inside',
            withoutEnlargement: true,
          })
          .toFormat(filename_ext)
          .toBuffer();

        await gcsNewObject.save(resizedBuffer, {
          metadata: {
            contentType: `image/${filename_ext}`,
          },
        });

        console.log(`Success: ${fileName} → ${newFilename}`);

        await pubsub
          .topic(topicName)
          .publishMessage({ data: Buffer.from(newFilename) });

        console.log(`Message published to ${topicName}`);
      } catch (err) {
        console.error(`Error: ${err}`);
      }
    } else {
      console.log(`gs://${bucketName}/${fileName} is not an image I can handle`);
    }
  } else {
    console.log(`gs://${bucketName}/${fileName} already has a thumbnail`);
  }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the package.json:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
 "name": "thumbnails",
 "version": "1.0.0",
 "description": "Create Thumbnail of uploaded image",
 "scripts": {
   "start": "node index.js"
 },
 "dependencies": {
   "@google-cloud/functions-framework": "^3.0.0",
   "@google-cloud/pubsub": "^2.0.0",
   "@google-cloud/storage": "^6.11.0",
   "sharp": "^0.32.1"
 },
 "devDependencies": {},
 "engines": {
   "node": "&amp;gt;=4.3.2"
 }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the Cloud Run Function (2nd Generation) that handles the thumbnail creation, I used gcloud commands to deploy the function. Here’s the full flow:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Set up the Cloud Function:&lt;/strong&gt; Ensure that you have the necessary code files (index.js and package.json) for the function ready in a directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable necessary APIs: Before deploying the function, ensure the required APIs are enabled&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy the Cloud Run Function: Navigate to the directory containing your &lt;code&gt;index.js&lt;/code&gt; and &lt;code&gt;package.json&lt;/code&gt; files and run the following:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud run deploy Cloud-Run-Function-Name \
  --image=gcr.io/cloud-builders/npm \
  --platform=managed \
  --region=REGION \
  --allow-unauthenticated \
  --trigger-bucket=gs://Bucket-Name \
  --memory=256Mi \
  --cpu=1 \

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Replace:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cloud-Run-Function-Name&lt;/strong&gt; with your desired Cloud Run function name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;REGION&lt;/strong&gt; with the region where the function should be deployed (e.g., us-central1).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bucket-Name&lt;/strong&gt; with the name of the bucket you created earlier.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: The &lt;code&gt;--trigger-bucket&lt;/code&gt; flag connects the function to your bucket, so it runs whenever a new object is uploaded.&lt;/p&gt;
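&lt;p&gt;For step 2, the APIs can be enabled up front with gcloud. A minimal sketch, assuming the standard service names used by 2nd-generation, bucket-triggered functions (your project may already have some of these enabled):&lt;/p&gt;

```shell
# Enable the services used by a 2nd-gen, bucket-triggered function
gcloud services enable \
  cloudfunctions.googleapis.com \
  run.googleapis.com \
  cloudbuild.googleapis.com \
  eventarc.googleapis.com \
  pubsub.googleapis.com
```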

&lt;p&gt;&lt;strong&gt;Task 4: Remove the Previous Cloud Engineer’s Access&lt;/strong&gt;&lt;br&gt;
To remove the previous engineer’s access, I used the gcloud IAM commands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;List IAM members: You can check the current IAM roles with:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud projects get-iam-policy PROJECT_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Remove the previous engineer’s role: If the previous engineer has the Viewer role, you can remove them with:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud projects remove-iam-policy-binding PROJECT_ID \
  --member='user:USERNAME' \
  --role='roles/viewer'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;strong&gt;PROJECT_ID&lt;/strong&gt; with your project ID, and &lt;strong&gt;USERNAME&lt;/strong&gt; with the email address of the previous engineer.&lt;/p&gt;
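&lt;p&gt;To confirm the access was actually removed, you can filter the IAM policy for that member. A hedged example using gcloud's flatten and filter flags (same PROJECT_ID and USERNAME placeholders):&lt;/p&gt;

```shell
# List any roles still held by the user; an empty table means access is gone
gcloud projects get-iam-policy PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:user:USERNAME" \
  --format="table(bindings.role)"
```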

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This lab did a great job simulating a real-world task flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating infrastructure&lt;/li&gt;
&lt;li&gt;Automating image processing&lt;/li&gt;
&lt;li&gt;Managing permissions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It helped solidify my understanding of how Cloud Storage, Pub/Sub, and Cloud Run can work together to build scalable, event-driven systems.&lt;/p&gt;

&lt;p&gt;If you're prepping for the Associate Cloud Engineer certification like me, labs like these are gold.&lt;/p&gt;

&lt;p&gt;What are your thoughts?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Unsung Heroes of SaaS: Delivering Exceptional Technical Support</title>
      <dc:creator>Kelechi Edeh</dc:creator>
      <pubDate>Fri, 28 Mar 2025 19:50:59 +0000</pubDate>
      <link>https://forem.com/kelechiedeh/the-unsung-heroes-of-saas-delivering-exceptional-technical-support-32bd</link>
      <guid>https://forem.com/kelechiedeh/the-unsung-heroes-of-saas-delivering-exceptional-technical-support-32bd</guid>
      <description>&lt;p&gt;When people think of Software as a Service (SaaS), they often picture sleek user interfaces, powerful APIs, and seamless integrations. However, behind every successful SaaS product lies a team of dedicated technical support professionals—problem solvers, troubleshooters, and customer champions who ensure users get the most out of the product. Despite their critical role, technical support teams often don’t receive the recognition they deserve. Let’s change that.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Critical Role of Technical Support in SaaS
&lt;/h2&gt;

&lt;p&gt;Technical support is not just about answering tickets; it’s about bridging the gap between customers and engineering, translating complex technical issues into actionable solutions. Their work directly impacts customer satisfaction, retention, and even product development. Here’s how:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Enhancing Customer Experience&lt;/strong&gt;: A well-trained support team ensures that customers receive timely, accurate, and helpful responses. By resolving issues quickly and efficiently, they help users stay productive and satisfied.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reducing Churn and Increasing Retention&lt;/strong&gt;: Customers don’t leave just because of bugs; they leave when they feel unheard or unsupported. Proactive and empathetic support keeps users engaged, reducing churn and fostering loyalty.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Providing Valuable Product Feedback&lt;/strong&gt;: Support teams act as a direct line between customers and developers. They identify recurring issues, feature requests, and usability pain points, driving meaningful product improvements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enabling Growth Through Education&lt;/strong&gt;: Beyond troubleshooting, support teams educate users on best practices, new features, and advanced workflows, empowering customers to get the most out of the product.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Challenges Faced by Technical Support Teams
&lt;/h2&gt;

&lt;p&gt;Despite their importance, support teams often face significant challenges:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High Ticket Volume&lt;/strong&gt;: Scaling support to match a growing user base can be overwhelming.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Burnout and Stress&lt;/strong&gt;: Handling frustrated customers and complex technical issues can take a toll on support agents.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lack of Recognition&lt;/strong&gt;: Unlike engineering or sales, support work often goes unnoticed unless something goes wrong.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Keeping Up with Rapid Product Changes&lt;/strong&gt;: SaaS products evolve quickly, requiring support teams to continuously update their knowledge base.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Best Practices for Exceptional SaaS Support
&lt;/h2&gt;

&lt;p&gt;To overcome these challenges and provide world-class technical support, SaaS companies should adopt the following best practices:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Invest in Training and Knowledge Management&lt;/strong&gt;: Provide ongoing training for support agents and maintain a robust knowledge base to ensure quick and accurate responses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Leverage Automation and AI&lt;/strong&gt;: Use chatbots, self-service portals, and automated ticket routing to streamline support workflows and reduce repetitive tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Foster a Culture of Appreciation&lt;/strong&gt;: Recognize and celebrate the contributions of support teams through internal awards, career growth opportunities, and public acknowledgment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Encourage Collaboration Between Support and Engineering&lt;/strong&gt;: Establish clear communication channels between support and development teams to ensure faster bug resolution and better product insights.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitor and Act on Support Metrics&lt;/strong&gt;: Track key performance indicators (KPIs) such as response times, resolution rates, and customer satisfaction (CSAT) scores to continually improve support quality.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Technical support teams are the unsung heroes of SaaS. Their dedication to problem-solving, customer satisfaction, and product improvement is invaluable. It’s time to recognize and elevate the role of technical support in building successful SaaS businesses. Whether you’re a founder, developer, or user, take a moment to appreciate the people working behind the scenes to keep things running smoothly.&lt;/p&gt;

&lt;p&gt;What are your thoughts on the role of technical support in SaaS? Share your experiences in the comments!⬇️ ⬇️ ⬇️&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Linux, Azure, and NGINX: The Ultimate Trio for Web Hosting Fun!</title>
      <dc:creator>Kelechi Edeh</dc:creator>
      <pubDate>Fri, 14 Feb 2025 21:03:40 +0000</pubDate>
      <link>https://forem.com/kelechiedeh/linux-azure-and-nginx-the-ultimate-trio-for-web-hosting-fun-nfa</link>
      <guid>https://forem.com/kelechiedeh/linux-azure-and-nginx-the-ultimate-trio-for-web-hosting-fun-nfa</guid>
      <description>&lt;p&gt;Ready to get your hands dirty with some cloud magic? Today, we’re building a Linux VM on Azure and spinning up NGINX, the web server that’s faster than your morning coffee kick-in. Let’s make the cloud our playground! ☁️&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 1: Create a Linux VM on Azure
&lt;/h1&gt;

&lt;p&gt;If you’ve caught my previous post on setting up a Windows 11 VM, you're already halfway to cloud mastery! If not, don’t sweat it—you can catch up &lt;a href="https://dev.to/kelechiedeh/how-to-create-a-windows-11-virtual-machine-on-azure-and-have-fun-doing-it-16ib"&gt;here&lt;/a&gt;. Instead of a Windows image, we’re embracing the power of Linux. Linux is lean, mean, and built for performance. Plus, it pairs perfectly with NGINX, making it the ultimate tag-team for web hosting.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OS: Choose your Linux distribution (e.g., Ubuntu).&lt;/li&gt;
&lt;li&gt;Size: A small VM (like B1s) is perfect for this project.&lt;/li&gt;
&lt;li&gt;Username: Create a username for your VM.&lt;/li&gt;
&lt;li&gt;Authentication: Use an SSH key for security points, and give your SSH key a name.&lt;/li&gt;
&lt;li&gt;Network: Select SSH (22) and HTTP (80) as inbound ports.&lt;/li&gt;
&lt;li&gt;Review &amp;amp; Launch: Hit Review + Create, then Create.&lt;/li&gt;
&lt;/ul&gt;
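&lt;p&gt;If you prefer the CLI to the portal, the steps above have a rough Azure CLI equivalent. This is a sketch with placeholder resource group and VM names; adjust the image alias to your chosen distribution:&lt;/p&gt;

```shell
az vm create \
  --resource-group my-rg \
  --name my-linux-vm \
  --image Ubuntu2204 \
  --size Standard_B1s \
  --admin-username azureuser \
  --generate-ssh-keys

# Open HTTP; SSH (22) is opened by default for Linux VMs
az vm open-port --resource-group my-rg --name my-linux-vm --port 80
```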

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmspjegk4me3lgos570p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmspjegk4me3lgos570p.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download the private key and remember its location—you'll need it to securely SSH into your VM later.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl466ot6wi8yn8jbj70le.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl466ot6wi8yn8jbj70le.png" alt="Image description" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 2: Install NGINX on the Linux VM
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open a terminal and connect via SSH:

&lt;ul&gt;
&lt;li&gt;Change the permission of your private key
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmode 600 &amp;lt;path-to-private-key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;If you're using a private key for authentication (without a password), use the -i option to specify the path to your private key file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh &amp;lt;path-to-private-key&amp;gt;username@your-vm-ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;username&lt;/code&gt;: The username you want to use to log into the Linux server.&lt;br&gt;
&lt;code&gt;your-vm-ip&lt;/code&gt;: The domain name or IP address of the Linux server.&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 3: Install NGINX (Your Web Server)
&lt;/h1&gt;

&lt;p&gt;Once connected, let’s install NGINX:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you aren't logged in as the root user, use &lt;code&gt;sudo&lt;/code&gt; when running your commands.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install nginx -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;apt&lt;/code&gt; = the package manager used by Debian-based distributions like Ubuntu&lt;br&gt;
&lt;code&gt;install&lt;/code&gt; = the action you want the package manager to perform&lt;br&gt;
&lt;code&gt;nginx&lt;/code&gt; = the package you want to install on the VM&lt;br&gt;
&lt;code&gt;-y&lt;/code&gt; = automatically answers "yes" to any confirmation prompts&lt;/p&gt;

&lt;p&gt;We can verify the installation by pasting the VM's public IP address into a browser; you should see the default NGINX welcome page.&lt;/p&gt;
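&lt;p&gt;Before checking from a browser, you can also confirm from inside the VM that NGINX is up:&lt;/p&gt;

```shell
# The service should report "active (running)"
sudo systemctl status nginx

# Or fetch the default page locally and check for an HTTP 200 response
curl -I http://localhost
```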

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4qp781y9sgq01a6ihj4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4qp781y9sgq01a6ihj4.png" alt="Image description" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Congrats! You’ve deployed your first Linux VM on Azure and installed NGINX! You’re officially a cloud adventurer. Want to add a custom webpage next? Let me know in the comments below!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Deploy a Windows Server on a Virtual Machine and Install IIS Server</title>
      <dc:creator>Kelechi Edeh</dc:creator>
      <pubDate>Fri, 14 Feb 2025 18:57:12 +0000</pubDate>
      <link>https://forem.com/kelechiedeh/how-to-deploy-a-windows-server-on-a-virtual-machine-and-install-iis-server-2j2i</link>
      <guid>https://forem.com/kelechiedeh/how-to-deploy-a-windows-server-on-a-virtual-machine-and-install-iis-server-2j2i</guid>
      <description>&lt;p&gt;Deploying a Windows Server on a Virtual Machine (VM) and setting up an IIS (Internet Information Services) server is a crucial skill for developers, system administrators, and IT enthusiasts. This guide walks you through the process step-by-step, from creating a VM to verifying the IIS installation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Windows Server&lt;/strong&gt;: A robust operating system designed for enterprise environments, offering features such as Active Directory for user and security management, Hyper-V for virtualization, and built-in security tools like Windows Defender and BitLocker. It supports scalable deployments with load balancing and integrates seamlessly with Microsoft tools like SQL Server and Exchange.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IIS (Internet Information Services)&lt;/strong&gt;: A web server platform that hosts websites and applications, providing native support for ASP.NET, SSL security, and application pools to isolate services. IIS is scalable with load balancing and web farms and offers powerful extensions such as URL Rewrite and detailed logging for performance monitoring.&lt;/p&gt;

&lt;p&gt;This combination is ideal for hosting websites, web services, and enterprise applications with high performance and security.&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 1: Set Up a Virtual Machine (VM)
&lt;/h1&gt;

&lt;p&gt;If you’ve read my previous post on setting up a Windows 11 VM, you’re already halfway there! If not, no worries—you can catch up by checking it out &lt;a href="https://dev.to/kelechiedeh/how-to-create-a-windows-11-virtual-machine-on-azure-and-have-fun-doing-it-16ib"&gt;here&lt;/a&gt;. But this time, we’re stepping up our game. Instead of Windows 11, we’re choosing a Windows Server image. Why? Because this isn’t just about a desktop experience—it’s about building a powerhouse that can host websites, applications, and more. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the Windows Server image and leave the other configurations as default&lt;/li&gt;
&lt;li&gt;For the inbound port selection, ensure that RDP and HTTP are checked&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdkgks7dzluav56akzv8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdkgks7dzluav56akzv8.png" alt="Image description" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to the Windows Server using the native RDP client&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpv0lknkmlk708or04rl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpv0lknkmlk708or04rl.png" alt="Image description" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw4x09nsmbq0monao8sz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw4x09nsmbq0monao8sz.png" alt="Image description" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhww7hpz7zxk3um040qn7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhww7hpz7zxk3um040qn7.png" alt="Image description" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations!!! Your Windows Server is up!&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 2: Install IIS (Internet Information Services)
&lt;/h1&gt;

&lt;p&gt;We will install IIS using PowerShell. PowerShell is a powerful command-line shell and scripting language built on .NET, designed for system administration and automation on Windows systems. It is an essential tool for managing Windows Server, making complex tasks quicker and more efficient.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on the Start menu.&lt;/li&gt;
&lt;li&gt;Type PowerShell and open Windows PowerShell or Windows PowerShell ISE as an administrator (right-click and select Run as administrator).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdutb944yn9c7to9ll180.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdutb944yn9c7to9ll180.png" alt="Image description" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run the following command to install the IIS role and management tools:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Install-WindowsFeature -name Web-Server -IncludeManagementTools

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffppncyi0jbdxkuzfnfud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffppncyi0jbdxkuzfnfud.png" alt="Image description" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify the installation: open a web browser, navigate to the VM's public IP address, and confirm that the default IIS welcome page loads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Friibrbwntaveswqksjao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Friibrbwntaveswqksjao.png" alt="Image description" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4n8p543qmefheuj7hygm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4n8p543qmefheuj7hygm.png" alt="Image description" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion:
&lt;/h1&gt;

&lt;p&gt;Congratulations! You've successfully deployed a Windows Server on a VM and installed IIS. This server can now host websites, web apps, or services. You can further explore SSL configurations, custom domains, and advanced IIS settings to enhance your server's capabilities.&lt;/p&gt;

&lt;p&gt;Would you like to learn more about configuring IIS for a specific website or setting up virtual directories? Let us know in the comments below!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>iis</category>
      <category>windowsserver</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Building a Network Monitoring Tool with Python and Linode</title>
      <dc:creator>Kelechi Edeh</dc:creator>
      <pubDate>Wed, 12 Feb 2025 04:28:58 +0000</pubDate>
      <link>https://forem.com/kelechiedeh/building-a-network-monitoring-tool-with-python-and-linode-4e7h</link>
      <guid>https://forem.com/kelechiedeh/building-a-network-monitoring-tool-with-python-and-linode-4e7h</guid>
      <description>&lt;p&gt;Have you ever wanted to monitor your application, get notified when it goes down, and automatically restart it—without breaking a sweat? Well, that’s exactly what we’re doing today! In this guide, we’ll set up a Linode server, deploy NGINX as a Docker container, and use a Python script to monitor the application endpoint. If something goes wrong, our script will send an email alert and attempt to restart the container or even the server!&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 1: Setting Up a Linode Server 🖥️
&lt;/h1&gt;

&lt;p&gt;We’ll start by spinning up a Linode instance with Debian 11. Here’s how:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Linode Account – Head over to &lt;a href="https://www.linode.com/" rel="noopener noreferrer"&gt;Linode&lt;/a&gt; and log in.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a New Linode:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbedjk9hlaomyzgrwn7nm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbedjk9hlaomyzgrwn7nm.png" alt="Image description" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select Debian 11 as the operating system.&lt;/li&gt;
&lt;li&gt;Choose a plan (a Nanode works fine for testing).&lt;/li&gt;
&lt;li&gt;Set a root password (you’ll need this later).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0vr8cefjpaxgbei98su.png" alt="Image description" width="800" height="435"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5gbl9bcixry4otegep4.png" alt="Image description" width="800" height="434"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Access Your Server – Use SSH to connect:&lt;br&gt;
To access the Linode server using SSH, the server needs to be configured to accept SSH requests.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy your public SSH key and add it to the Linode server
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat ~/.ssh/id_rsa.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqeffstjet7nemjy1ol2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqeffstjet7nemjy1ol2y.png" alt="Image description" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to your Linode server
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh root@your-linode-ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🎉 Woohoo! Your first cloud server is live! 🚀 Time to give it some love, deploy cool stuff, and rule the cloud like a pro. The sky (or should we say, the &lt;em&gt;cloud&lt;/em&gt;) is the limit! ☁️😎&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 2: Installing Docker 🐳
&lt;/h1&gt;

&lt;p&gt;Docker simplifies app deployment using lightweight containers. Since our server runs Debian, we'll follow the official Docker installation guide for Debian, found &lt;a href="https://docs.docker.com/engine/install/debian/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release &amp;amp;&amp;amp; echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null
sudo apt-get update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To install the latest version, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check that Docker is installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Step 3: Deploying NGINX in a Docker Container 🌍
&lt;/h1&gt;

&lt;p&gt;We’ll now set up an NGINX web server inside a Docker container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name nginx-container -p 8080:80 nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify it’s running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, visiting &lt;a href="http://your-linode-ip:8080" rel="noopener noreferrer"&gt;http://your-linode-ip:8080&lt;/a&gt; should display the default NGINX welcome page.&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 4: Writing a Python Monitoring Script
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Now for the fun part!&lt;/strong&gt; Our Python script will:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitor the application endpoint where NGINX is running by making an HTTP request and checking the status code. This determines whether the application is up or experiencing issues—such as inaccessibility, errors, or a crashed container.
&lt;/li&gt;
&lt;li&gt;Trigger an email alert if the application is down.
&lt;/li&gt;
&lt;li&gt;Attempt to restart the Docker container to restore service.
&lt;/li&gt;
&lt;li&gt;If the issue persists, reboot the entire Linode server to bring everything back online.&lt;/li&gt;
&lt;/ul&gt;
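&lt;p&gt;As a rough illustration of those four steps, here is a minimal sketch. The endpoint URL, container name, and SMTP details are placeholders, and the server-reboot step is omitted; the full script is in the repository linked below:&lt;/p&gt;

```python
import smtplib
import subprocess
import urllib.error
import urllib.request
from email.message import EmailMessage

APP_URL = "http://your-linode-ip:8080"   # placeholder: your NGINX endpoint
CONTAINER_NAME = "nginx-container"

def is_healthy(status_code):
    """Treat HTTP 200 as 'up'; anything else (including None) as 'down'."""
    return status_code == 200

def check_endpoint(url, timeout=5):
    """Return the HTTP status code, or None if the request fails entirely."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except (urllib.error.URLError, OSError):
        return None

def send_alert(subject, body):
    """Email an alert; SMTP host, port, and addresses are placeholders."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "monitor@example.com"
    msg["To"] = "you@example.com"
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.send_message(msg)

def restart_container(name):
    """Ask Docker to restart the container; returns True on success."""
    return subprocess.run(["docker", "restart", name]).returncode == 0
```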

&lt;p&gt;&lt;a href="https://github.com/kelzceana/python-scripts-for-devops-automation/blob/master/network-monitoring/network-monitoring.py" rel="noopener noreferrer"&gt;Code repository&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Skills Gained from This Project
&lt;/h1&gt;

&lt;p&gt;By working through this project, you have developed and strengthened several key skills, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cloud Server Management&lt;/strong&gt; – Setting up and configuring a Linode server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker &amp;amp; Containerization&lt;/strong&gt; – Deploying applications inside Docker containers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Python Scripting&lt;/strong&gt; – Writing automation scripts to monitor and manage applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Networking &amp;amp; Troubleshooting&lt;/strong&gt; – Diagnosing and resolving connectivity issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linux System Administration&lt;/strong&gt; – Installing and managing software on Debian.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Email Notifications &amp;amp; Alerting&lt;/strong&gt; – Using SMTP to send alerts on application failures.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server Automation &amp;amp; Recovery&lt;/strong&gt; – Automating container and server restarts for high availability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API &amp;amp; Remote Server Management&lt;/strong&gt; – Interacting with Linode’s API for automated server management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Final Thoughts
&lt;/h1&gt;

&lt;p&gt;With this setup, you have a self-healing infrastructure that minimizes downtime. Whether you're running a personal project or a production service, this approach keeps your app online and responsive. &lt;/p&gt;

&lt;p&gt;Have questions or ideas to improve the setup? Let’s discuss! 👇&lt;/p&gt;

</description>
      <category>python</category>
      <category>devops</category>
      <category>monitoring</category>
      <category>automation</category>
    </item>
    <item>
      <title>Azure Applied Skills: Providing private storage for internal company documents</title>
      <dc:creator>Kelechi Edeh</dc:creator>
      <pubDate>Sat, 08 Feb 2025 10:41:14 +0000</pubDate>
      <link>https://forem.com/kelechiedeh/providing-private-storage-for-internal-company-documents-g3d</link>
      <guid>https://forem.com/kelechiedeh/providing-private-storage-for-internal-company-documents-g3d</guid>
      <description>&lt;p&gt;In today’s digital landscape, secure and highly available storage is essential for businesses that manage private data while ensuring backup solutions for critical assets. This guide walks through setting up a robust cloud storage architecture that provides high availability, restricted access, cost efficiency, and seamless backup mechanisms.&lt;/p&gt;

&lt;p&gt;By completing this task, you will have developed essential skills in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a storage account for private company documents.&lt;/li&gt;
&lt;li&gt;Configuring redundancy to ensure high availability.&lt;/li&gt;
&lt;li&gt;Setting up shared access signatures (SAS) for restricted file access.&lt;/li&gt;
&lt;li&gt;Implementing backup solutions for public website storage.&lt;/li&gt;
&lt;li&gt;Managing storage lifecycle policies to transition content to the cool tier efficiently.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Setting Up High-Availability Storage
&lt;/h1&gt;

&lt;p&gt;To begin, we create a storage account tailored for internal private company documents. This ensures secure storage with redundancy to withstand potential regional outages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps to Create a Storage Account&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the portal, search for and select Storage accounts.&lt;/li&gt;
&lt;li&gt;Select + Create.&lt;/li&gt;
&lt;li&gt;Set the Storage account name to a globally unique value.&lt;/li&gt;
&lt;li&gt;Select Review, and then Create the storage account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ymazf9igik1u0bki0ce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ymazf9igik1u0bki0ce.png" alt="Image description" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;
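&lt;p&gt;If you prefer the CLI to the portal, the same account can be created with a single command. The account and resource group names below (&lt;code&gt;contosoprivdocs&lt;/code&gt;, &lt;code&gt;storage-rg&lt;/code&gt;) are placeholders for this walkthrough:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Create a general-purpose v2 storage account (names are illustrative)
az storage account create \
  --name contosoprivdocs \
  --resource-group storage-rg \
  --location eastus \
  --kind StorageV2 \
  --sku Standard_LRS
&lt;/code&gt;&lt;/pre&gt;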

&lt;p&gt;&lt;strong&gt;Configuring Redundancy for High Availability&lt;/strong&gt;&lt;br&gt;
Since business continuity is a priority, we enable Geo-Redundant Storage (GRS):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Within the Data management section, select Redundancy.&lt;/li&gt;
&lt;li&gt;Choose Geo-redundant storage (GRS) to replicate data to a secondary region.&lt;/li&gt;
&lt;li&gt;Refresh and verify the primary and secondary locations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmpkar95urfz4wvdcwq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmpkar95urfz4wvdcwq5.png" alt="Image description" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;
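&lt;p&gt;The equivalent CLI step, assuming the same placeholder names as above, updates the replication SKU in place:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Switch replication to geo-redundant storage
az storage account update \
  --name contosoprivdocs \
  --resource-group storage-rg \
  --sku Standard_GRS
&lt;/code&gt;&lt;/pre&gt;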

&lt;h1&gt;
  
  
  Restricting Access to Corporate Data
&lt;/h1&gt;

&lt;p&gt;Access control is essential when handling private documents. We configure a private storage container with limited access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a Private Storage Container&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under Data storage, navigate to Containers.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click + Container&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rbjo7oaa509hutqo71s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rbjo7oaa509hutqo71s.png" alt="Image description" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name it &lt;strong&gt;private&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set Public access level to Private (no anonymous access).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffckvm7scyoadxny3va92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffckvm7scyoadxny3va92.png" alt="Image description" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
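&lt;p&gt;With the CLI this is one command; &lt;code&gt;--public-access off&lt;/code&gt; corresponds to the Private (no anonymous access) level chosen in the portal (account name again a placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Create the container with anonymous access disabled
az storage container create \
  --account-name contosoprivdocs \
  --name private \
  --public-access off \
  --auth-mode login
&lt;/code&gt;&lt;/pre&gt;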

&lt;p&gt;&lt;strong&gt;Uploading and Testing Access Control&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the private container.&lt;/li&gt;
&lt;li&gt;Click Upload, select a file, and upload it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fueg3n47lp1xkoii900e2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fueg3n47lp1xkoii900e2.png" alt="Image description" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy the file’s URL and attempt to access it in a browser. A restricted access error should appear.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdt7kq7mlvp4dcm217y42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdt7kq7mlvp4dcm217y42.png" alt="Image description" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bxqrvsy24z0h15nriww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bxqrvsy24z0h15nriww.png" alt="Image description" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;
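&lt;p&gt;You can reproduce the upload and the access check from a terminal. The file name &lt;code&gt;report.pdf&lt;/code&gt; is illustrative; an anonymous request against a private blob should be rejected (Azure typically answers anonymous callers with a 404 ResourceNotFound rather than revealing the blob exists):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Upload a test document
az storage blob upload \
  --account-name contosoprivdocs \
  --container-name private \
  --name report.pdf \
  --file ./report.pdf \
  --auth-mode login

# An unauthenticated request should fail
curl -s -o /dev/null -w "%{http_code}\n" \
  "https://contosoprivdocs.blob.core.windows.net/private/report.pdf"
&lt;/code&gt;&lt;/pre&gt;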

&lt;h1&gt;
  
  
  Providing Limited Partner Access Using SAS
&lt;/h1&gt;

&lt;p&gt;For external partners requiring temporary access, we generate a Shared Access Signature (SAS).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generating a SAS Token&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the uploaded file and navigate to Generate SAS.&lt;/li&gt;
&lt;li&gt;Assign only Read permissions.&lt;/li&gt;
&lt;li&gt;Set the expiration time to 24 hours.&lt;/li&gt;
&lt;li&gt;Generate and copy the SAS URL.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvmv25vf944wiz3uzbss.png" alt="Image description" width="800" height="410"&gt;
&lt;/li&gt;
&lt;li&gt;Test access by opening the SAS URL in a new browser tab.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf3cjqj7t3zrerv1duxl.png" alt="Image description" width="800" height="477"&gt;
&lt;/li&gt;
&lt;/ul&gt;
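&lt;p&gt;A read-only, time-limited SAS URL can also be generated from the CLI. The expiry timestamp below is an example value; substitute a UTC time 24 hours from now:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Generate a read-only SAS URL valid until the given expiry
az storage blob generate-sas \
  --account-name contosoprivdocs \
  --container-name private \
  --name report.pdf \
  --permissions r \
  --expiry 2025-02-09T10:00Z \
  --https-only \
  --full-uri
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Opening the returned URL in a browser should now succeed, but only with read access and only until the expiry passes.&lt;/p&gt;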

&lt;h1&gt;
  
  
  Optimizing Costs with Storage Tiers
&lt;/h1&gt;

&lt;p&gt;To minimize storage costs, we move data from the hot tier to the cool tier after 30 days.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementing Lifecycle Management&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Data management section, select Lifecycle management.&lt;/li&gt;
&lt;li&gt;Click Add rule and name it movetocool.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6y1zg7sauno8eazplg6.png" alt="Image description" width="800" height="410"&gt;
&lt;/li&gt;
&lt;li&gt;Apply the rule to all blobs in the storage account and click Next.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fet7fp8uqev2dzyia7brt.png" alt="Image description" width="800" height="409"&gt;
&lt;/li&gt;
&lt;li&gt;Set Last modified to More than 30 days ago.&lt;/li&gt;
&lt;li&gt;Choose Move to cool storage.&lt;/li&gt;
&lt;li&gt;Save the rule.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff325s269px3j0jl8q57b.png" alt="Image description" width="800" height="410"&gt;
&lt;/li&gt;
&lt;/ul&gt;
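&lt;p&gt;The same rule can be expressed as a lifecycle policy document and applied with the CLI. This JSON mirrors the portal settings above (all block blobs, moved to the cool tier 30 days after last modification):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "rules": [
    {
      "enabled": true,
      "name": "movetocool",
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": [ "blockBlob" ] },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 }
          }
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Saved as &lt;code&gt;policy.json&lt;/code&gt;, it can be applied with &lt;code&gt;az storage account management-policy create --account-name contosoprivdocs --resource-group storage-rg --policy @policy.json&lt;/code&gt;.&lt;/p&gt;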

&lt;h1&gt;
  
  
  Backing Up Public Website Data
&lt;/h1&gt;

&lt;p&gt;To protect website files, we create a backup mechanism.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a Backup Storage Container&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the private storage account, create a new container named &lt;strong&gt;backup&lt;/strong&gt; using default settings.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Enabling Object Replication for Automated Backup&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the publicwebsite storage account.&lt;/li&gt;
&lt;li&gt;Select Object replication under Data management.&lt;/li&gt;
&lt;li&gt;Click Create replication rule.&lt;/li&gt;
&lt;li&gt;Set the Destination storage account to the private storage account.&lt;/li&gt;
&lt;li&gt;Choose Source container as public and Destination container as backup.&lt;/li&gt;
&lt;li&gt;Create the rule.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyaej7yq736cebo5e5ez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyaej7yq736cebo5e5ez.png" alt="Image description" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fks6fivn42ii7obyuvww5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fks6fivn42ii7obyuvww5.png" alt="Image description" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;
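&lt;p&gt;The replication rule can likewise be scripted. Here &lt;code&gt;contosopublicweb&lt;/code&gt; stands in for the publicwebsite storage account; note that object replication requires blob versioning and the change feed to be enabled on both accounts:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Replicate the public container into the backup container
az storage account or-policy create \
  --account-name contosoprivdocs \
  --resource-group storage-rg \
  --source-account contosopublicweb \
  --destination-account contosoprivdocs \
  --source-container public \
  --destination-container backup
&lt;/code&gt;&lt;/pre&gt;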

&lt;h1&gt;
  
  
  Final Thoughts
&lt;/h1&gt;

&lt;p&gt;Implementing these storage strategies ensures a secure, highly available, cost-efficient, and automated backup system for company assets. By leveraging Azure Storage capabilities, businesses can enhance data security, streamline partner collaboration, and optimize storage costs effectively.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>storage</category>
      <category>blob</category>
      <category>azure</category>
    </item>
  </channel>
</rss>
