<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ukeme David Eseme</title>
    <description>The latest articles on Forem by Ukeme David Eseme (@ukemzyskywalker).</description>
    <link>https://forem.com/ukemzyskywalker</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F886269%2F91ab2159-21c7-46c7-89cc-e6b9512f4a1a.png</url>
      <title>Forem: Ukeme David Eseme</title>
      <link>https://forem.com/ukemzyskywalker</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ukemzyskywalker"/>
    <language>en</language>
    <item>
      <title>How I Built a Fully Automated Ethereum PoW Faucet on AWS Using Terraform + Ansible</title>
      <dc:creator>Ukeme David Eseme</dc:creator>
      <pubDate>Wed, 21 Jan 2026 04:32:16 +0000</pubDate>
      <link>https://forem.com/aws-builders/one-script-to-rule-them-all-automating-ethereum-pow-faucets-on-aws-3p78</link>
      <guid>https://forem.com/aws-builders/one-script-to-rule-them-all-automating-ethereum-pow-faucets-on-aws-3p78</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Stepping into 2026, I made a deliberate decision to narrow my focus: &lt;strong&gt;Node Operations&lt;/strong&gt; and &lt;strong&gt;DevOps for blockchain infrastructure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To build real depth in this niche, I started working on advanced, production-style node operations projects. One of my first targets was the &lt;strong&gt;Optimism&lt;/strong&gt; ecosystem.&lt;/p&gt;

&lt;p&gt;While setting up an end-to-end &lt;strong&gt;OP Stack Rollup&lt;/strong&gt; deployment on the Sepolia testnet, I quickly ran into a practical problem — I needed between &lt;strong&gt;&lt;em&gt;2 and 3 SepoliaETH&lt;/em&gt;&lt;/strong&gt;, but I only had &lt;strong&gt;&lt;em&gt;0.5&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Public faucets were slow, rate-limited, and unreliable for serious testing. Instead of waiting, I turned this bottleneck into a side quest: building my own self-hosted &lt;strong&gt;&lt;em&gt;Ethereum PoW faucet&lt;/em&gt;&lt;/strong&gt; using proper automation and infrastructure-as-code practices.&lt;/p&gt;

&lt;p&gt;In this article, I’ll walk through how I built a fully automated faucet stack on AWS using Terraform, Ansible, and a single orchestration script, complete with monitoring and observability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Overview
&lt;/h3&gt;

&lt;p&gt;This article documents the principles and processes for automating the complete deployment of a &lt;em&gt;&lt;strong&gt;PoWFaucet node on AWS&lt;/strong&gt;&lt;/em&gt; infrastructure, including a comprehensive monitoring stack. Using Infrastructure as Code (IaC) principles with Terraform and Ansible, you can deploy a production-ready Ethereum testnet faucet in minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5ukzzyeulincp7exyvy.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5ukzzyeulincp7exyvy.webp" alt="crypto_faucet_diagram" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is PoWFaucet?
&lt;/h3&gt;

&lt;p&gt;A faucet in blockchain terminology is a service that distributes small amounts of cryptocurrency for free, typically on test networks (testnets). Developers and testers use faucets to obtain testnet tokens needed for deploying and testing smart contracts, dApps, and other blockchain applications without spending real money.&lt;/p&gt;

&lt;p&gt;PoWFaucet is a proof-of-work based faucet for Ethereum testnets. Instead of traditional captcha-based rate limiting, users must solve computational puzzles to receive testnet ETH. This approach effectively prevents abuse while maintaining accessibility.&lt;/p&gt;
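&lt;p&gt;PoWFaucet's real protocol is more involved (sessions, difficulty tuning, in-browser miners), but the core idea fits in a few lines of Python: the client grinds through nonces until a hash meets a difficulty target, while the server verifies the result with a single hash. A toy sketch, not PoWFaucet's actual algorithm:&lt;/p&gt;

```python
import hashlib


def mine(challenge: str, difficulty: int = 4) -> int:
    """Brute-force a nonce whose SHA-256 hash of challenge:nonce starts
    with `difficulty` hex zeros (a toy stand-in for PoWFaucet's puzzle)."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1


def verify(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    """Verification is cheap: one hash, regardless of the mining effort."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

&lt;p&gt;The asymmetry is the point: each extra zero of difficulty multiplies the miner's expected work by 16, while the faucet's verification cost stays constant — which is what makes it a workable abuse deterrent.&lt;/p&gt;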

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxsupg32u8z6gdvf5mqo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxsupg32u8z6gdvf5mqo.png" alt="POW-diagram" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Testnet Operators: Deploy faucets for Ethereum testnets (Sepolia, Holesky, etc.)&lt;/li&gt;
&lt;li&gt;Development Teams: Provide testnet ETH to developers&lt;/li&gt;
&lt;li&gt;Educational Institutions: Support blockchain education programs&lt;/li&gt;
&lt;li&gt;Protocol Testing: Facilitate testing of smart contracts and dApps&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI configured with appropriate credentials&lt;/li&gt;
&lt;li&gt;Terraform &amp;gt;= 1.0&lt;/li&gt;
&lt;li&gt;Ansible &amp;gt;= 2.9&lt;/li&gt;
&lt;li&gt;SSH key pair generated locally&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.alchemy.com/faucets/ethereum-sepolia" rel="noopener noreferrer"&gt;Alchemy&lt;/a&gt; or &lt;a href="https://www.infura.io/" rel="noopener noreferrer"&gt;infura&lt;/a&gt; ETH RPC Url for testnet, its free&lt;/li&gt;
&lt;li&gt;Crypto Wallet (metamask, trustwallet etc...) &lt;/li&gt;
&lt;li&gt;Have some testnet ETH in your wallet&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To follow along with this article, access the project repo here: &lt;a href="https://github.com/UkemeSkywalker/pow-faucet-node" rel="noopener noreferrer"&gt;pow-faucet-node-repo&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture: Two Layers Working in Harmony
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffigow7iqccgj0uyk0jfl.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffigow7iqccgj0uyk0jfl.webp" alt="Automation image" width="740" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This setup follows the same automation patterns I use when managing blockchain infrastructure in production environments: infrastructure as code, configuration automation, and observability-first design. The goal isn’t just to run a faucet — it’s to build something reproducible, scalable, and maintainable.&lt;/p&gt;

&lt;p&gt;The beauty of this project lies in how it separates concerns into two distinct layers that work together seamlessly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure Layer: Terraform Does the Heavy Lifting
&lt;/h3&gt;

&lt;p&gt;Terraform handles all the AWS infrastructure provisioning.&lt;/p&gt;

&lt;p&gt;We're talking about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Compute&lt;/strong&gt;: t3.medium EC2 instance running Ubuntu LTS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage&lt;/strong&gt;: 20GB encrypted gp3 EBS volume&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network&lt;/strong&gt;: Dedicated VPC with public subnet and Elastic IP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Security group with controlled port access&lt;/li&gt;
&lt;/ul&gt;
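&lt;p&gt;To make that concrete, the instance resource inside the EC2 module could look roughly like this — an illustrative sketch, not the repo's exact code; the AMI data source, security group, and variable names are assumptions:&lt;/p&gt;

```hcl
# Illustrative only: resource and variable names differ in the actual module.
resource "aws_instance" "faucet" {
  ami           = data.aws_ami.ubuntu.id # Ubuntu LTS, resolved via a data source
  instance_type = "t3.medium"

  root_block_device {
    volume_type = "gp3"
    volume_size = 20
    encrypted   = true
  }

  vpc_security_group_ids = [aws_security_group.faucet.id]
  key_name               = var.ssh_key_name

  tags = { Name = "pow-faucet-node" }
}
```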

&lt;h4&gt;
  
  
  Terraform Modules
&lt;/h4&gt;

&lt;p&gt;The infrastructure is organized into reusable modules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VPC Module&lt;/strong&gt;: Creates isolated network with public subnet, internet gateway, and route tables&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EC2 Module&lt;/strong&gt;: Provisions instance with security group, manages SSH keys and Elastic IP association&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This modular approach means you can easily spin up multiple faucets or adjust configurations without rewriting everything from scratch.&lt;/p&gt;

&lt;h3&gt;
  
  
  Application Layer: Ansible Orchestrates the Software
&lt;/h3&gt;

&lt;p&gt;Once Terraform has built our infrastructure playground, Ansible steps in to configure everything. The playbook—over 400 lines of carefully orchestrated tasks—handles the installation and configuration of not just PoWFaucet itself, but an entire monitoring stack.&lt;/p&gt;

&lt;p&gt;The Ansible playbook automates installation of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;System Setup&lt;/strong&gt;: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Package installation&lt;/li&gt;
&lt;li&gt;User creation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;PoWFaucet Application&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js 24 runtime&lt;/li&gt;
&lt;li&gt;PoWFaucet from official repository&lt;/li&gt;
&lt;li&gt;Systemd service for automatic startup&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Monitoring Stack&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prometheus&lt;/strong&gt;: Metrics collection and time-series database&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node Exporter&lt;/strong&gt;: System-level metrics (CPU, memory, disk, network)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Loki&lt;/strong&gt;: Log aggregation system&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Promtail&lt;/strong&gt;: Log shipping agent&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grafana&lt;/strong&gt;: Unified visualization dashboard&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
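&lt;p&gt;As a flavor of what the playbook does, a couple of hypothetical tasks for the PoWFaucet step might read as follows — task names, template names, and paths are illustrative, not copied from the repo:&lt;/p&gt;

```yaml
# Hypothetical playbook excerpt: render the unit file and enable the service.
- name: Install PoWFaucet systemd unit
  ansible.builtin.template:
    src: powfaucet.service.j2
    dest: /etc/systemd/system/powfaucet.service
  become: true

- name: Enable and start PoWFaucet
  ansible.builtin.systemd:
    name: powfaucet
    enabled: true
    state: started
    daemon_reload: true
  become: true
```

&lt;p&gt;Because the unit is enabled, the faucet comes back up automatically after a reboot — no manual restart needed.&lt;/p&gt;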

&lt;h3&gt;
  
  
  Configuration Management
&lt;/h3&gt;

&lt;p&gt;One of the key design decisions was keeping secrets out of version control while still maintaining reproducibility. The solution? Jinja2 templates combined with environment variables.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;.env&lt;/code&gt; file for secrets (RPC URL, private keys, Network)&lt;/li&gt;
&lt;li&gt;Jinja2 templates for dynamic configuration&lt;/li&gt;
&lt;li&gt;Version-controlled infrastructure definitions
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;ethRpcHost&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;eth_rpc_url&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;span class="na"&gt;ethWalletKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;eth_wallet_key&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;span class="na"&gt;faucetTitle&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;network&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Testnet&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Faucet"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Variables are injected from environment at deployment time, keeping secrets out of version control.&lt;/p&gt;
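&lt;p&gt;On the Ansible side, those environment variables can be surfaced as playbook variables with the built-in &lt;code&gt;env&lt;/code&gt; lookup. This is a hypothetical wiring (the variable names mirror the &lt;code&gt;.env&lt;/code&gt; keys used in this post), not the repo's exact file:&lt;/p&gt;

```yaml
# Hypothetical group_vars excerpt: secrets come from the shell environment
# at deploy time, so they never land in version control.
eth_rpc_url: "{{ lookup('env', 'ETH_RPC_URL') }}"
eth_wallet_key: "{{ lookup('env', 'ETH_WALLET_PRIVATE_KEY') }}"
network: "{{ lookup('env', 'NETWORK') }}"
```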

&lt;h3&gt;
  
  
  Security Considerations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;SSH key-based authentication only&lt;/li&gt;
&lt;li&gt;Security group restricts access to necessary ports&lt;/li&gt;
&lt;li&gt;EBS volume encryption at rest&lt;/li&gt;
&lt;li&gt;Secrets managed via environment variables (never committed)&lt;/li&gt;
&lt;li&gt;Services run as non-root users&lt;/li&gt;
&lt;li&gt;Grafana password change enforced on first login&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Automated Deployment
&lt;/h2&gt;

&lt;p&gt;Now, this is where everything crystallizes into something beautiful:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This single script is the orchestrator that brings Terraform and Ansible together into one seamless experience, hence the title of this post: &lt;strong&gt;One Script to Rule Them All 😄&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's the difference between spending an afternoon manually running commands and grabbing a coffee while your infrastructure deploys itself.&lt;/p&gt;

&lt;p&gt;This script:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Loads environment variables&lt;/li&gt;
&lt;li&gt;Provisions AWS infrastructure&lt;/li&gt;
&lt;li&gt;Configures the instance&lt;/li&gt;
&lt;li&gt;Deploys all services&lt;/li&gt;
&lt;li&gt;Imports Grafana dashboards&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Script Workflow
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Prerequisites Check&lt;/strong&gt;: Validates AWS CLI, Terraform, Ansible installation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment Setup&lt;/strong&gt;: Loads secrets from &lt;code&gt;.env&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure Provisioning&lt;/strong&gt;: Terraform creates AWS resources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSH Wait&lt;/strong&gt;: Ensures instance is ready for configuration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration Management&lt;/strong&gt;: Ansible installs and configures all services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verification&lt;/strong&gt;: Displays access URLs for all services&lt;/li&gt;
&lt;/ol&gt;
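&lt;p&gt;&lt;code&gt;deploy.sh&lt;/code&gt; itself is a shell script, but steps 1 and 4 are easy to picture in a few lines of Python — a sketch under assumed names, not the script's actual code:&lt;/p&gt;

```python
import shutil
import socket
import time


def check_tools(tools):
    """Step 1 sketch: report any required CLIs missing from PATH."""
    return [t for t in tools if shutil.which(t) is None]


def wait_for_port(host, port, timeout=300.0, interval=5.0):
    """Step 4 sketch: poll until host:port accepts TCP connections
    (e.g. SSH on the freshly provisioned instance) or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

&lt;p&gt;The SSH wait matters more than it looks: Terraform returns as soon as AWS reports the instance running, which is usually well before sshd is accepting connections, so running Ansible immediately would fail intermittently.&lt;/p&gt;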

&lt;h3&gt;
  
  
  Quick Four-Step Deployment
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/UkemeSkywalker/pow-faucet-node.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd pow-faucet-node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy the example file: &lt;code&gt;cp .env.example .env&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Edit &lt;code&gt;.env&lt;/code&gt; and set:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NETWORK: Your desired Ethereum testnet network (Sepoli / Hoodi etc..)
ETH_RPC_URL: Your Ethereum RPC endpoint (Alchemy/Infura)
ETH_WALLET_PRIVATE_KEY: Faucet wallet private key (without 0x prefix)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: The One Script to Rule Them All&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./deploy.sh [path_to_ssh_private_key]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./deploy.sh ~/.ssh/id_rsa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fug1ry22rznemvxdyfqc4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fug1ry22rznemvxdyfqc4.png" alt="Deployment complete" width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more details on manual deployment steps, check the repo &lt;a href="https://github.com/UkemeSkywalker/pow-faucet-node.git" rel="noopener noreferrer"&gt;&lt;em&gt;README&lt;/em&gt;&lt;/a&gt; file&lt;/p&gt;

&lt;p&gt;Once deployment is complete, you can access the faucet in your browser at&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;your-machine-public-IP&amp;gt;:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;18.211.242.139:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfcblndfz6mxg4rbhz7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfcblndfz6mxg4rbhz7q.png" alt="Deployment complete" width="466" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, open MetaMask or whichever crypto wallet you use, copy your ETH wallet address, paste it into the UI as shown below, then click &lt;strong&gt;Start Mining&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You should see the following window, confirming that the mining process has started:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrzoo0m5uglzlbdd429d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrzoo0m5uglzlbdd429d.png" alt="Faucet Mining" width="800" height="1101"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring
&lt;/h2&gt;

&lt;p&gt;Running infrastructure without monitoring is like driving with your eyes closed—you might be fine for a while, but eventually something will go wrong and you won't see it coming.&lt;/p&gt;

&lt;p&gt;So, once mining is in progress, you can access Prometheus on port &lt;code&gt;9090&lt;/code&gt; and run queries there, though it's standard practice to do all of that in Grafana:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;your-machine-public-IP&amp;gt;:9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;18.211.242.139:9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pki3ie0uuxhlxmn5lwo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pki3ie0uuxhlxmn5lwo.png" alt="prometheus-ui" width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Metrics Collection
&lt;/h3&gt;

&lt;p&gt;Prometheus scrapes metrics from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Itself (Prometheus internals)&lt;/li&gt;
&lt;li&gt;Node Exporter (system metrics)&lt;/li&gt;
&lt;li&gt;Loki (log ingestion metrics)&lt;/li&gt;
&lt;li&gt;Promtail (log shipping metrics)
&lt;/li&gt;
&lt;/ul&gt;
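&lt;p&gt;For reference, that scrape list maps to a &lt;code&gt;prometheus.yml&lt;/code&gt; along these lines — a hypothetical excerpt using each exporter's default port; the playbook's generated file may differ:&lt;/p&gt;

```yaml
# Hypothetical prometheus.yml excerpt; ports are the exporters' defaults.
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: node
    static_configs:
      - targets: ["localhost:9100"]   # Node Exporter
  - job_name: loki
    static_configs:
      - targets: ["localhost:3100"]
  - job_name: promtail
    static_configs:
      - targets: ["localhost:9080"]
```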

&lt;h3&gt;
  
  
  Visualization
&lt;/h3&gt;

&lt;p&gt;The Ansible playbook has already connected the data sources and imported the dashboards into Grafana.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frg6y54hp99s1qi74ym02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frg6y54hp99s1qi74ym02.png" alt="Grafana Login" width="618" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grafana provides:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dashboard 1860&lt;/strong&gt;: Node Exporter metrics (system performance)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dashboard 14055&lt;/strong&gt;: Loki stack monitoring (log pipeline health)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explore View&lt;/strong&gt;: Ad-hoc log queries using LogQL&lt;/li&gt;
&lt;/ul&gt;
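&lt;p&gt;In the Explore view you query Loki with LogQL. Assuming a hypothetical &lt;code&gt;job&lt;/code&gt; label that Promtail attaches to the faucet's logs, a query like this filters for errors:&lt;/p&gt;

```logql
{job="powfaucet"} |= "error"
```

&lt;p&gt;Wrapping the stream selector in &lt;code&gt;count_over_time({job="powfaucet"}[5m])&lt;/code&gt; charts log volume over time instead of showing individual lines.&lt;/p&gt;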

&lt;p&gt;Log in to Grafana via port &lt;code&gt;3000&lt;/code&gt;; the default username and password are both &lt;code&gt;admin&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Navigate to Dashboards and open the pre-integrated dashboards for metrics and logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0jtjaug3shsqh30lqfc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0jtjaug3shsqh30lqfc.png" alt="Grafana Dashboard" width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Claim Rewards
&lt;/h3&gt;

&lt;p&gt;Claiming mined tokens is easy: click the &lt;strong&gt;Stop Mining &amp;amp; Claim Rewards&lt;/strong&gt; button, wait for the transaction to complete, then check your wallet for the claimed reward or view the transaction in a block explorer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1fje1djk1fcp8yk92dd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1fje1djk1fcp8yk92dd.png" alt="Claim Rewards" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzitv9cik29p5a0a73dp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzitv9cik29p5a0a73dp2.png" alt="Tx success" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons from the Trenches
&lt;/h2&gt;

&lt;p&gt;This project was eye-opening; I learned about some subtle downsides that aren't obvious at first.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;It's worth noting that most PoW faucets that use the browser for their user interface actually mine in the browser, not on the machine the faucet is hosted on.&lt;br&gt;
All threads and workers consume compute resources on your local machine, which may spin up your PC fan.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Also note that your wallet address needs to be verified on Gitcoin Passport to avoid authentication issues. I didn't face this because I had already verified my account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The amount of testnet ETH already in your wallet determines the mining speed and the amount of tokens rewarded.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Automating this faucet saved me hours of manual setup and removed my dependency on unreliable public faucets. More importantly, it reinforced something I strongly believe in as an infrastructure engineer: if you need something repeatedly, automate it properly once.&lt;/p&gt;

&lt;p&gt;If you’re running blockchain nodes, building rollups, or managing Web3 infrastructure at scale, this pattern applies far beyond faucets.&lt;/p&gt;

&lt;p&gt;👉 The full deployment code is available on GitHub: &lt;a href="https://github.com/UkemeSkywalker/pow-faucet-node" rel="noopener noreferrer"&gt;pow-faucet-node-repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If this helped you, consider starring 🌠 the repo and connecting with me — I regularly share practical infrastructure automation workflows for Web3 and cloud environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;For those who want to dive deeper, check out the &lt;a href="https://github.com/pk910/PoWFaucet" rel="noopener noreferrer"&gt;PoWFaucet GitHub repository&lt;/a&gt; for the application itself. The &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs" rel="noopener noreferrer"&gt;Terraform AWS Provider documentation&lt;/a&gt; is invaluable for understanding infrastructure options. &lt;a href="https://docs.ansible.com/" rel="noopener noreferrer"&gt;Ansible's documentation&lt;/a&gt; covers configuration management in depth. And for monitoring, the &lt;a href="https://prometheus.io/docs/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; and &lt;a href="https://grafana.com/docs/loki/latest/" rel="noopener noreferrer"&gt;Grafana Loki&lt;/a&gt; docs are your best friends.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>aws</category>
      <category>blockchain</category>
      <category>ethereum</category>
    </item>
    <item>
      <title>Building an AI-Powered IoT Analytics Pipeline on AWS: From Shoe Sensors to Predictive Insights</title>
      <dc:creator>Ukeme David Eseme</dc:creator>
      <pubDate>Sat, 23 Aug 2025 13:57:55 +0000</pubDate>
      <link>https://forem.com/aws-builders/building-an-ai-powered-iot-analytics-pipeline-on-aws-from-shoe-sensors-to-predictive-insights-13bg</link>
      <guid>https://forem.com/aws-builders/building-an-ai-powered-iot-analytics-pipeline-on-aws-from-shoe-sensors-to-predictive-insights-13bg</guid>
      <description>&lt;p&gt;In the world of connected devices, turning raw sensor data into real-time insights requires a robust, scalable, and secure architecture. &lt;/p&gt;

&lt;p&gt;In this post, we’ll walk through an end-to-end AWS IoT analytics pipeline—inspired by a use case of a smart shoe with an embedded IoT chip. &lt;/p&gt;

&lt;p&gt;This architecture highlights how raw telemetry flows from devices to dashboards, with machine learning inference and notifications along the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 Ingestion Layer: Connecting the Shoe to the Cloud
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxi0fa3tfpfmk7okcabrt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxi0fa3tfpfmk7okcabrt.png" alt="Ingestion Layer" width="558" height="260"&gt;&lt;/a&gt;&lt;br&gt;
The pipeline begins at the &lt;strong&gt;Shoe IoT Chip&lt;/strong&gt;, which transmits sensor readings (e.g., motion, pressure, gait) via &lt;strong&gt;Bluetooth&lt;/strong&gt; to a &lt;strong&gt;Mobile App&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;From here, two main ingestion paths exist:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;HTTPS → API Gateway&lt;/strong&gt; for structured data and commands.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MQTT → AWS IoT Core&lt;/strong&gt; for lightweight, event-driven telemetry.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This dual-ingestion strategy ensures flexibility—supporting both synchronous API calls and asynchronous device messaging.&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  🔄 Messaging Layer: Handling High-Volume Data
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fayz8zl4vpxg2g7qlt2w5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fayz8zl4vpxg2g7qlt2w5.png" alt="Messaging Layer" width="109" height="238"&gt;&lt;/a&gt;&lt;br&gt;
Once inside the AWS ecosystem, the data may fan out into the Messaging Layer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Kinesis Data Streams&lt;/strong&gt; handles real-time streaming ingestion, enabling downstream processing with low latency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon SQS (Simple Queue Service)&lt;/strong&gt; provides durable event buffering, ideal for decoupled microservices and event-triggered processing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This separation ensures the system can scale with bursts of IoT data while maintaining reliability.&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  🧠 Compute Layer: Processing and Machine Learning
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhnsqbhe5i5e6ec3muun.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhnsqbhe5i5e6ec3muun.png" alt="Compute Layer" width="452" height="342"&gt;&lt;/a&gt;&lt;br&gt;
At the core of the pipeline is the &lt;strong&gt;Compute Layer&lt;/strong&gt;, powered by AWS Lambda and Amazon SageMaker:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lambda (Ingest):&lt;/strong&gt; Acts as the real-time bridge, consuming from Kinesis or IoT Core and normalizing payloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SageMaker Endpoint:&lt;/strong&gt; Provides &lt;strong&gt;inference and prediction&lt;/strong&gt;, such as anomaly detection, step classification, or injury risk modeling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda (Process):&lt;/strong&gt; Handles post-inference workflows, including analytics aggregation and pushing processed results to storage or alerts.&lt;/li&gt;
&lt;/ul&gt;
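&lt;p&gt;The "normalizing payloads" step of &lt;strong&gt;Lambda (Ingest)&lt;/strong&gt; can be sketched in Python. The field names here (&lt;code&gt;device_id&lt;/code&gt;, &lt;code&gt;pressure&lt;/code&gt;, &lt;code&gt;motion&lt;/code&gt;) are illustrative, but the event shape — &lt;code&gt;Records[].kinesis.data&lt;/code&gt;, base64-encoded — is what Lambda actually receives from Kinesis:&lt;/p&gt;

```python
import base64
import json
from typing import Any


def normalize_records(event: dict) -> list:
    """Sketch of the ingest step: decode base64 Kinesis payloads and coerce
    them into a flat, uniform shape for downstream SageMaker inference.
    Field names are illustrative, not a real device schema."""
    rows: list[dict[str, Any]] = []
    for record in event.get("Records", []):
        raw = base64.b64decode(record["kinesis"]["data"])
        reading = json.loads(raw)
        rows.append({
            "device_id": reading.get("device_id", "unknown"),
            "ts": reading.get("timestamp"),
            "pressure": float(reading.get("pressure", 0.0)),
            "motion": reading.get("motion", []),
        })
    return rows
```

&lt;p&gt;Normalizing here — rather than in the model or the dashboard — keeps every downstream consumer working against one schema, even as firmware revisions change what the chip emits.&lt;/p&gt;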

&lt;p&gt;This serverless approach eliminates infrastructure management while ensuring &lt;strong&gt;event-driven scaling&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  💾 Storage Layer: Persisting Raw and Processed Data
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmv1ht0sk11kue2ea061.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmv1ht0sk11kue2ea061.png" alt="Storage Layer" width="403" height="392"&gt;&lt;/a&gt;&lt;br&gt;
Processed insights and raw logs are stored in a multi-tier storage strategy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon S3&lt;/strong&gt; for raw and pre-processed datasets (useful for model retraining).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon DynamoDB&lt;/strong&gt; for fast lookups of processed results (e.g., “latest gait analysis score for a user”).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Aurora (with time-series extensions)&lt;/strong&gt; for advanced analytical queries and reporting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This combination balances cost efficiency, query performance, and long-term data retention.&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  🔐 Security and Monitoring
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fec01brwe25t34ug2kg9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fec01brwe25t34ug2kg9u.png" alt="Security and Monitoring" width="269" height="632"&gt;&lt;/a&gt;&lt;br&gt;
A production-grade IoT solution requires strong governance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Cognito&lt;/strong&gt; enables secure user authentication, while &lt;strong&gt;IAM&lt;/strong&gt; enforces &lt;strong&gt;role-based access control&lt;/strong&gt; across services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudWatch&lt;/strong&gt; and &lt;strong&gt;AWS X-Ray&lt;/strong&gt; provide observability, enabling teams to trace events, track anomalies, and gather performance metrics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optional SNS Integration&lt;/strong&gt; ensures real-time &lt;strong&gt;push notifications&lt;/strong&gt; (e.g., “Abnormal gait detected, please rest your foot”).

&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🧩 Putting It All Together
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmywse5hn5r0d101ht5zz.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmywse5hn5r0d101ht5zz.jpeg" alt="Putting It All Together" width="" height=""&gt;&lt;/a&gt;&lt;br&gt;
The architecture represents a &lt;strong&gt;scalable, secure, and intelligent IoT-to-ML pipeline:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data Capture:&lt;/strong&gt; IoT chip → Mobile App → API Gateway / IoT Core.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Transport:&lt;/strong&gt; Kinesis / SQS ensures reliable flow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Intelligence:&lt;/strong&gt; Lambda + SageMaker perform real-time inference.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Persistence:&lt;/strong&gt; S3, DynamoDB, and Aurora store and organize results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security &amp;amp; Monitoring:&lt;/strong&gt; Cognito, IAM, CloudWatch, and X-Ray ensure governance and visibility.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User Interaction:&lt;/strong&gt; Notifications and dashboards close the loop with end-users.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This setup doesn’t just collect sensor data—it transforms it into real-time, actionable insights for athletes, healthcare providers, or everyday users.&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  🌍 Applications Beyond Shoes
&lt;/h2&gt;

&lt;p&gt;While our example revolves around a &lt;strong&gt;smart shoe&lt;/strong&gt;, this architecture generalizes to multiple IoT domains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Healthcare wearables&lt;/strong&gt; (vital signs monitoring)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Industrial IoT&lt;/strong&gt; (machine health prediction)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smart homes&lt;/strong&gt; (energy optimization and anomaly detection)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agriculture IoT&lt;/strong&gt; (soil/moisture telemetry with predictive yield analytics)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By leveraging &lt;strong&gt;AWS serverless and managed AI services&lt;/strong&gt;, organizations can build scalable IoT solutions without reinventing the wheel.&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Building intelligent IoT systems isn’t just about connecting devices—it’s about designing an ecosystem &lt;strong&gt;where data flows securely, insights are generated in real-time, and end-users receive value instantly.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This AWS-powered pipeline demonstrates how to integrate IoT, serverless compute, machine learning, and storage into a &lt;strong&gt;production-ready smart system.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>machinelearning</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>How I Survived the Great Kubernetes Exodus: Migrating EKS Cluster from v1.26 to v1.33 on AWS</title>
      <dc:creator>Ukeme David Eseme</dc:creator>
      <pubDate>Sat, 05 Jul 2025 00:25:04 +0000</pubDate>
      <link>https://forem.com/aws-builders/how-i-survived-the-great-kubernetes-exodus-migrating-eks-cluster-from-v126-to-v133-on-aws-39ko</link>
      <guid>https://forem.com/aws-builders/how-i-survived-the-great-kubernetes-exodus-migrating-eks-cluster-from-v126-to-v133-on-aws-39ko</guid>
      <description>&lt;p&gt;&lt;em&gt;A comprehensive tale of migrating a production AWS Kubernetes cluster with 6000+ resources, 46 CRDs, 7 SSL certificates, 12 Namespaces and zero downtime&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction: The Challenge Ahead
&lt;/h2&gt;

&lt;p&gt;Upgrading a production-grade Kubernetes cluster is never a walk in the park—especially when it spans multiple environments, critical workloads, and tight deadlines.&lt;/p&gt;

&lt;p&gt;So when it was time to migrate a client’s 3–4-year-old Amazon EKS cluster from &lt;strong&gt;v1.26 to v1.33&lt;/strong&gt;, I knew it wouldn’t just be a version bump—it would be a battlefield.&lt;/p&gt;

&lt;p&gt;This cluster wasn't just any cluster—it was a complex ecosystem running critical healthcare applications with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;46 Custom Resource Definitions (CRDs)&lt;/strong&gt; across multiple systems&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;7 production domains&lt;/strong&gt; with SSL certificates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Critical data&lt;/strong&gt; in PostgreSQL databases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero downtime tolerance&lt;/strong&gt; for production services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex networking&lt;/strong&gt; with Istio service mesh&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring stack&lt;/strong&gt; with Prometheus and Grafana&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the story of how we successfully migrated this beast using a hybrid approach, the challenges we faced, and the lessons we learned along the way.&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Chapter 1: The Reconnaissance Phase
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5n55ez1ibget46derumw.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5n55ez1ibget46derumw.webp" alt="Reconnaissance Phase" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Mapping the Battlefield
&lt;/h3&gt;

&lt;p&gt;Before diving into the migration, we needed to understand exactly what we were dealing with. There was no GitOps and no manifest files; all we had was AWS access, Lens, and an outdated cluster that needed upgrading.&lt;/p&gt;

&lt;p&gt;Kubernetes enforces a strict version &lt;strong&gt;skew policy&lt;/strong&gt;, especially when you’re using managed services like Elastic Kubernetes Service (EKS).&lt;/p&gt;

&lt;p&gt;The control plane must always be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At most one minor version ahead of the kubelets (worker nodes).&lt;/li&gt;
&lt;li&gt;In lockstep with all supporting tools—kubeadm, kubelet, kubectl, and add-ons—which must respect the same version skew policy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So what does this mean?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If your control plane is running &lt;strong&gt;v1.33&lt;/strong&gt;, your worker nodes can only be on &lt;strong&gt;v1.32 or v1.33&lt;/strong&gt;. Nothing lower.&lt;/li&gt;
&lt;li&gt;And no, you can’t jump straight from &lt;strong&gt;v1.26 to v1.33.&lt;/strong&gt; You must upgrade sequentially:
&lt;strong&gt;v1.26 → v1.27 → v1.28 → ... → v1.33&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
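&lt;p&gt;That sequential path can be sketched as a simple loop. This is an illustrative dry run, not the commands we actually ran: the &lt;code&gt;eksctl&lt;/code&gt; invocation and the cluster name are assumptions, and the loop only prints each step.&lt;/p&gt;

```shell
# Hypothetical sketch of the one-minor-version-at-a-time upgrade path.
# eksctl usage and the cluster name are illustrative assumptions; the loop
# only echoes the command it would run at each step.
upgrade_path="1.27 1.28 1.29 1.30 1.31 1.32 1.33"
for version in $upgrade_path; do
  echo "eksctl upgrade cluster --name legacy-cluster --version $version --approve"
done
```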

&lt;p&gt;Each upgrade step? A potential minefield of broken dependencies, deprecated APIs, and mysterious behavior.&lt;/p&gt;
&lt;h4&gt;
  
  
  💀 The Aging Cluster
&lt;/h4&gt;

&lt;p&gt;The cluster I inherited was running Kubernetes v1.26—with some workloads and CRDs that hadn’t been touched in about 4 years. &lt;br&gt;
It was ancient. It was fragile. And it was about to get a rude awakening.&lt;br&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  🧪 First Attempt: The “By-the-Book” Upgrade
&lt;/h3&gt;

&lt;p&gt;I tried to play nice.&lt;br&gt;
The goal: upgrade the cluster manually, &lt;strong&gt;step-by-step&lt;/strong&gt; from &lt;strong&gt;v1.26&lt;/strong&gt; all the way to &lt;strong&gt;v1.33&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;But the moment I moved from &lt;strong&gt;v1.26 → v1.27&lt;/strong&gt;, the floodgates opened:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Pods crashing from all directions,&lt;br&gt;
Incompatible controllers acting out,&lt;br&gt;
Deprecation warnings lighting up the logs like Christmas trees.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s just say—manual upgrades were off the table.&lt;/p&gt;



&lt;h3&gt;
  
  
  🛠️ Second Attempt: The Manifest Extraction Strategy
&lt;/h3&gt;

&lt;p&gt;Time to pivot.&lt;/p&gt;

&lt;p&gt;The new plan?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Spin up a fresh EKS cluster running &lt;strong&gt;v1.33&lt;/strong&gt;, then lift-and-shift resources from the old cluster.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Extract All Resources&lt;/strong&gt;&lt;br&gt;
From the old cluster I ran:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get all &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; yaml &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; all-resources.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I backed up other critical components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ConfigMaps&lt;/li&gt;
&lt;li&gt;Secrets&lt;/li&gt;
&lt;li&gt;PVCs&lt;/li&gt;
&lt;li&gt;Ingresses&lt;/li&gt;
&lt;li&gt;CRDs&lt;/li&gt;
&lt;li&gt;RBAC&lt;/li&gt;
&lt;li&gt;ServiceAccounts
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get configmaps,secrets,persistentvolumeclaims,ingresses,customresourcedefinitions,roles,rolebindings,clusterroles,clusterrolebindings,serviceaccounts &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; yaml &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; extras.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Apply to the New Cluster&lt;/strong&gt;&lt;br&gt;
Switched context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config use-context &amp;lt;cluster-arn&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; all-resources.yaml extras.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Boom—in one swoop, everything started deploying into the new cluster.&lt;/p&gt;

&lt;p&gt;For a moment, I thought:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Wow… that was easy. Too easy.”&lt;/p&gt;
&lt;/blockquote&gt;



&lt;h3&gt;
  
  
  🚨 Reality Check: The Spaghetti Hit the Fan
&lt;/h3&gt;

&lt;p&gt;After 8 hours of hopeful waiting, the nightmare unfolded:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CrashLoopBackOff&lt;/li&gt;
&lt;li&gt;ImagePullBackOff &lt;/li&gt;
&lt;li&gt;Pending Pods&lt;/li&gt;
&lt;li&gt;Service Not Reachable&lt;/li&gt;
&lt;li&gt;VolumeMount and PVC errors everywhere&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It was &lt;strong&gt;YAML spaghetti&lt;/strong&gt;, tangled and broken.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi51f0gf4hgmnnxjffbfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi51f0gf4hgmnnxjffbfk.png" alt="The Spaghetti Hit the Fan" width="800" height="871"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The old cluster’s legacy configurations simply did not translate cleanly to the modern version.&lt;br&gt;
And now I had to dig in deep—resource by resource, namespace by namespace—to rebuild sanity, which I had neither the time nor the luxury for.&lt;br&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  ⚙️ Third Attempt: Enter Velero
&lt;/h3&gt;

&lt;p&gt;The next strategy? &lt;a href="https://velero.io/" rel="noopener noreferrer"&gt;&lt;strong&gt;Use Velero.&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
Install it in the old cluster, run a full backup, switch contexts, and restore everything into the shiny new v1.33 cluster.&lt;/p&gt;

&lt;p&gt;Simple, right?&lt;/p&gt;

&lt;p&gt;Not quite.&lt;/p&gt;

&lt;p&gt;Velero pods immediately got stuck in Pending.&lt;br&gt;
Why?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Insufficient resources&lt;/strong&gt; in the old cluster&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CNI-related&lt;/strong&gt; issues that blocked network provisioning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So instead of backup and restore magic, I found myself deep in another rabbit hole.&lt;br&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  🧠 Fourth Attempt: Organized Manifest Extraction — The Breakthrough
&lt;/h3&gt;

&lt;p&gt;Out of frustration, I raised the issue during a session in the AWS DevOps Study Group.&lt;/p&gt;

&lt;p&gt;That’s when Theo and Jaypee stepped in with game-changing advice:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Forget giant YAML dumps. Instead, extract manifests systematically, grouped by namespace and resource type. Organize them in folders. Leverage Amazon Q in VS Code to make sense of the structure.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It was a &lt;strong&gt;lightbulb moment💡.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I restructured the entire migration approach based on their idea—breaking down the cluster into modular, categorized directories.&lt;br&gt;
It brought clarity, control, and confidence back to the process.&lt;/p&gt;
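&lt;p&gt;The idea can be sketched in a few lines of shell. The namespace and resource-type lists here are illustrative placeholders, and the &lt;code&gt;kubectl&lt;/code&gt; commands are echoed so the folder logic is visible without a live cluster:&lt;/p&gt;

```shell
# Hypothetical sketch of the per-namespace, per-resource-type extraction layout.
namespaces="tools sonarqube monitoring"          # illustrative subset
kinds="deployments services configmaps secrets"  # illustrative subset
for ns in $namespaces; do
  for kind in $kinds; do
    mkdir -p "manifests/$ns/$kind"
    # Echoed rather than executed; against a real cluster you would run this.
    echo "kubectl get $kind -n $ns -o yaml > manifests/$ns/$kind/all.yaml"
  done
done
```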
&lt;h3&gt;
  
  
  📦 The CRD Explosion
&lt;/h3&gt;

&lt;p&gt;Once things were neatly organized, the &lt;strong&gt;real scale of the system&lt;/strong&gt; came into focus.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Major CRDs We Had to Handle:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Istio Service Mesh:&lt;/strong&gt; 12 CRDs managing traffic routing and security&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prometheus/Monitoring:&lt;/strong&gt; 8 CRDs for metrics and alerting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cert-Manager:&lt;/strong&gt; 7 CRDs handling SSL certificate automation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Velero Backup:&lt;/strong&gt; 8 CRDs for disaster recovery&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Controllers:&lt;/strong&gt; 11 CRDs for cloud integration&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;🧮 Total: 46 CRDs&lt;/strong&gt; — each one a potential migration minefield&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  🔍 Custom Resources Inventory
&lt;/h3&gt;

&lt;p&gt;Beyond the CRDs themselves, the custom resources were no less intimidating:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;11+ TLS Certificates&lt;/strong&gt; across multiple namespaces&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;6+ ServiceMonitors&lt;/strong&gt; for Prometheus scraping&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple PrometheusRules&lt;/strong&gt; for alerting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VirtualServices and DestinationRules&lt;/strong&gt; for Istio routing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The message was clear:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This wasn’t a “one-file kubectl apply” kind of migration.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  ✅ API Compatibility Victory
&lt;/h3&gt;

&lt;p&gt;With the structure in place, we ran API compatibility checks using &lt;a href="https://github.com/FairwindsOps/pluto" rel="noopener noreferrer"&gt;Pluto&lt;/a&gt; and a custom script generated via Amazon Q in VS Code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./scripts/api-compatibility-check.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;✅ No deprecated or incompatible API versions found.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A small win—but a &lt;strong&gt;huge morale boost&lt;/strong&gt; in a complex migration journey.&lt;/p&gt;



&lt;h2&gt;
  
  
  📦 Chapter 2: The Data Dilemma
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2kij0is1xt44v00tzfk.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2kij0is1xt44v00tzfk.jpeg" alt="Data Dilemma" width="500" height="375"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  💡 Choosing Our Weapon: Manual EBS Snapshots
&lt;/h3&gt;

&lt;p&gt;When it came to &lt;strong&gt;migrating persistent data&lt;/strong&gt;, we faced a critical decision. Several options were on the table:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Velero backups&lt;/strong&gt; – our usual go-to, but ruled out due to earlier issues with pod scheduling and CNI errors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database dumps&lt;/strong&gt; – possible, but slow, error-prone, and fragile under pressure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manual EBS snapshots&lt;/strong&gt; – low-level, reliable, and simple.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After weighing the risks, we went old-school with &lt;strong&gt;manual EBS snapshots.&lt;/strong&gt;&lt;br&gt;
They offered &lt;strong&gt;direct access to data volumes&lt;/strong&gt; with minimal tooling—and in a high-stakes migration, simplicity is a virtue.&lt;/p&gt;

&lt;p&gt;Sometimes, the old ways are still the best ways.&lt;/p&gt;
&lt;h3&gt;
  
  
  🛠️ Automation to the Rescue
&lt;/h3&gt;

&lt;p&gt;To streamline the snapshot process, I wrote a simple backup script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./scripts/manual-ebs-backup.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It handled the tagging and snapshot creation for each critical volume, ensuring traceability and rollback capability.&lt;/p&gt;
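&lt;p&gt;The script itself stayed internal, but its core helper might look like this minimal sketch. The function name and volume ID are hypothetical, and the &lt;code&gt;aws&lt;/code&gt; binary is injectable so the logic can be dry-run without credentials:&lt;/p&gt;

```shell
# Hypothetical sketch of a tag-and-snapshot helper; not the actual script.
# AWS_CLI defaults to the real aws binary but can be overridden for a dry run.
AWS_CLI="${AWS_CLI:-aws}"

snapshot_volume() {
  vol_id="$1"
  name="$2"
  "$AWS_CLI" ec2 create-snapshot \
    --volume-id "$vol_id" \
    --description "pre-migration backup of $name" \
    --tag-specifications "ResourceType=snapshot,Tags=[{Key=Name,Value=$name}]"
}

# Dry run: substitute echo for aws to print the command instead of calling AWS.
AWS_CLI=echo snapshot_volume vol-0123456789abcdef0 pgadmin-data
```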

&lt;h4&gt;
  
  
  🔐 Critical Volumes Backed Up
&lt;/h4&gt;

&lt;p&gt;Here are some of the most important data volumes we preserved:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;tools/pgadmin-pgadmin4&lt;/code&gt; 
→ &lt;code&gt;snap-06257a13c49e125b1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sonarqube/data-sonarqube-postgresql-0&lt;/code&gt; 
→ &lt;code&gt;snap-0e590f608a631fcc3&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each snapshot became a &lt;strong&gt;lifeline&lt;/strong&gt;, preserving vital stateful components of our workloads as we prepped the new cluster.&lt;/p&gt;



&lt;h2&gt;
  
  
  🏗️ Chapter 3: Building the New Kingdom
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiao7bo7k90lfkrv3fh7a.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiao7bo7k90lfkrv3fh7a.jpg" alt="Building the New Kingdom" width="736" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the old cluster was archived and dissected, it was time to construct the new realm—clean, modern, and battle-hardened.&lt;/p&gt;
&lt;h3&gt;
  
  
  ⚙️  The Foundation: CRD Installation Order Matters
&lt;/h3&gt;

&lt;p&gt;One of the most overlooked but mission-critical lessons we learned during this journey:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The order in which you install your CRDs can make or break your cluster.&lt;br&gt;
Install them in the wrong sequence, and you’ll find yourself swimming in &lt;strong&gt;cryptic errors, broken controllers, and cascading failures&lt;/strong&gt; that seem to come from nowhere.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After a lot of trial and error (&lt;a href="https://istio.io/" rel="noopener noreferrer"&gt;&lt;strong&gt;Istio&lt;/strong&gt;&lt;/a&gt; in particular gave me a lot of trouble), I landed on a battle-tested CRD deployment sequence:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Cert-Manager (many other components rely on it for TLS provisioning)&lt;/span&gt;
helm &lt;span class="nb"&gt;install &lt;/span&gt;cert-manager jetstack/cert-manager &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; cert-manager &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;installCRDs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# 2. Monitoring Stack (metrics, alerting, dashboards)&lt;/span&gt;
helm &lt;span class="nb"&gt;install &lt;/span&gt;prometheus-stack prometheus-community/kube-prometheus-stack &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; monitoring &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--create-namespace&lt;/span&gt;

&lt;span class="c"&gt;# 3. AWS Integration (Load balancer controller, IAM roles, etc.)&lt;/span&gt;
helm &lt;span class="nb"&gt;install &lt;/span&gt;aws-load-balancer-controller eks/aws-load-balancer-controller &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system

&lt;span class="c"&gt;# 4. Service Mesh (Istio control plane)&lt;/span&gt;
istioctl &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--set&lt;/span&gt; values.defaultRevision&lt;span class="o"&gt;=&lt;/span&gt;default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;🧘 &lt;strong&gt;Pro Tip:&lt;/strong&gt; After each installation, wait until the operator and all dependent pods are fully healthy before continuing.&lt;br&gt;
Kubernetes is fast… but rushing this step will cost you hours down the line.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  🧬 Data Resurrection: Bringing Back Our State
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6qg6p1alc61cv50c7bj.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6qg6p1alc61cv50c7bj.jpeg" alt="Data Resurrection" width="400" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the new infrastructure laid out, it was time to resurrect the lifeblood of the platform—its data.&lt;/p&gt;

&lt;p&gt;Using our EBS snapshots from earlier, we restored the volumes and re-attached them to their rightful claimants:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bash scripts/restore-ebs-volumes.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Restored Volumes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;tools/pgadmin&lt;/code&gt; 
→ &lt;code&gt;vol-0166bbae7bd2eb793&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sonarqube/postgresql&lt;/code&gt; 
→ &lt;code&gt;vol-0262e16e1bd5df028&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
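&lt;p&gt;Under the hood, re-attaching a restored volume means pointing a static PersistentVolume at the new volume ID so the existing claim binds to it. A minimal sketch using the EBS CSI driver (capacity, storage class, and claim name are illustrative assumptions):&lt;/p&gt;

```yaml
# Sketch of a static PV wired to a volume restored from a snapshot.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pgadmin-restored
spec:
  capacity:
    storage: 10Gi                          # illustrative size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp3                    # illustrative storage class
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0166bbae7bd2eb793    # the restored volume from above
  claimRef:                                # pre-bind to the waiting PVC
    namespace: tools
    name: pgadmin-pgadmin4
```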

&lt;p&gt;Held my breath… and then—&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;✅ PersistentVolumes bound successfully&lt;br&gt;
✅ StatefulSets recovered&lt;br&gt;
✅ Pods restarted with their original data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It was official: our new kingdom had data, structure, and a beating heart.&lt;/p&gt;



&lt;h2&gt;
  
  
  🎭 Chapter 4: The Application Deployment Dance
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1zw53ndcx0agf0bod83.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1zw53ndcx0agf0bod83.webp" alt="Application Deployment Dance" width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  The Dependency Choreography
&lt;/h3&gt;

&lt;p&gt;Deploying applications in Kubernetes isn’t just about applying YAML files—it’s a delicate choreography of interdependent resources, where the order of execution can make or break your deployment.&lt;/p&gt;

&lt;p&gt;Get the sequence wrong, and you’re looking at a cascade of errors:&lt;br&gt;
missing secrets, broken RBAC, unbound PVCs, and pods stuck in limbo.&lt;br&gt;
We approached it like conducting an orchestra—&lt;strong&gt;each instrument with its cue.&lt;/strong&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  🪜 Step-by-Step Deployment Strategy
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Foundation First: ServiceAccounts, ConfigMaps, and Secrets&lt;/strong&gt;&lt;br&gt;
These are the building blocks of your cluster environment.&lt;br&gt;
No app should be launched before its supporting config and identity infrastructure are in place.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/&lt;span class="k"&gt;*&lt;/span&gt;/serviceaccounts/
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/&lt;span class="k"&gt;*&lt;/span&gt;/configmaps/
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/&lt;span class="k"&gt;*&lt;/span&gt;/secrets/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. RBAC: Granting the Right Access&lt;/strong&gt;&lt;br&gt;
Once identities are in place, we assign the right permissions using Roles and RoleBindings—especially for monitoring and system tools.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/monitoring/roles/
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/monitoring/rolebindings/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;⚠️ Lesson: Don’t skip this step or your logging agents and monitoring stack will sit silently—failing without errors.&lt;/p&gt;
&lt;/blockquote&gt;
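&lt;p&gt;For context, the kind of Role/RoleBinding pair applied here might look like this minimal sketch (the names, rules, and ServiceAccount are illustrative, not the actual manifests):&lt;/p&gt;

```yaml
# Sketch of a namespaced Role plus its binding for a monitoring agent.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-scrape
  namespace: monitoring
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "endpoints"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-scrape
  namespace: monitoring
subjects:
  - kind: ServiceAccount
    name: prometheus      # illustrative ServiceAccount
    namespace: monitoring
roleRef:
  kind: Role
  name: prometheus-scrape
  apiGroup: rbac.authorization.k8s.io
```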

&lt;p&gt;&lt;strong&gt;3. Persistent Storage: Claim Before You Launch&lt;/strong&gt;&lt;br&gt;
Storage is like the stage on which your stateful applications perform.&lt;br&gt;
We provisioned all &lt;strong&gt;PersistentVolumeClaims (PVCs)&lt;/strong&gt; before deploying workloads to avoid CrashLoopBackOff errors related to missing mounts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/tools/persistentvolumeclaims/
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/sonarqube/persistentvolumeclaims/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Workloads: Let the Apps Take the Stage&lt;/strong&gt;&lt;br&gt;
With the foundation solid and access configured, it was time to deploy the actual workloads—both stateless and stateful.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/tools/deployments/
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/sonarqube/statefulsets/
&lt;span class="c"&gt;# ... and the rest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Status: Applications Deployed and Running&lt;/strong&gt; ✅&lt;/p&gt;

&lt;p&gt;At first glance, everything seemed perfect—pods were green, services were responsive, and dashboards were lighting up.&lt;/p&gt;

&lt;p&gt;I exhaled.&lt;br&gt;
But the celebration didn’t last long.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Behind those green pods were &lt;strong&gt;networking glitches, DNS surprises, and service discovery issues&lt;/strong&gt; lurking in the shadows—ready to pounce.&lt;/p&gt;
&lt;/blockquote&gt;



&lt;h2&gt;
  
  
  🔐 Chapter 5: The Great SSL Certificate Saga
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxe923ezhthch5ouznuq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxe923ezhthch5ouznuq.jpg" alt="SSL Certificate Saga" width="577" height="432"&gt;&lt;/a&gt;&lt;br&gt;
Just when I thought the migration was complete and everything was running smoothly, &lt;strong&gt;the ghost of SSL past&lt;/strong&gt; returned to haunt us.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Mystery of the Expired Certificates
&lt;/h3&gt;

&lt;p&gt;Just when we thought we were done, we discovered a critical issue:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAMESPACE     NAME                 CLASS    HOSTS                                         PORTS   AGE
qaclinicaly    bida-fe-clinicaly    &amp;lt;none&amp;gt;   bida-fe-qaclinicaly.example.net         80,443   59s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At first glance, it looked fine. But a quick curl and browser visit revealed a nasty surprise:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Your connection is not private”&lt;br&gt;
“This site’s security certificate expired 95 days ago”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Another issue that could have caused panic and confusion, but I stayed calm. We could fix this!&lt;br&gt;
Upon further inspection, every certificate in the cluster was showing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;READY: False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cert-manager was deployed. The pods were healthy. But nothing was being issued.&lt;br&gt;
&lt;/p&gt;
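&lt;p&gt;When the pods look healthy but nothing issues, a few read-only checks usually narrow the problem down quickly. These are generic cert-manager diagnostics, not specific to this cluster (the controller is assumed to be in the default &lt;code&gt;cert-manager&lt;/code&gt; namespace):&lt;/p&gt;

```shell
# Is there an issuer at all? (this is what turned out to be missing here)
kubectl get clusterissuers

# Walk the issuance chain: Certificate -> CertificateRequest -> Order -> Challenge
kubectl get certificates,certificaterequests,orders,challenges -A

# Describe a stuck certificate to see its status conditions and events
kubectl describe certificate <name> -n <namespace>

# Check the controller logs for errors
kubectl logs -n cert-manager deploy/cert-manager --tail=50
```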

&lt;h3&gt;
  
  
  🔎 The Missing Link: ClusterIssuer
&lt;/h3&gt;

&lt;p&gt;Digging deeper into the logs, I found the root cause:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The ClusterIssuer for Let’s Encrypt was missing entirely.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Without it, &lt;strong&gt;Cert-Manager had no idea how to obtain or renew certificates.&lt;/strong&gt;&lt;br&gt;
Somehow, it had slipped through the cracks during our migration process.&lt;br&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  🛠️ The Quick Fix
&lt;/h3&gt;

&lt;p&gt;I recreated the missing ClusterIssuer using the standard ACME configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIssuer&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt-prod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;acme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://acme-v02.api.letsencrypt.org/directory&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;good-devops@example.com&lt;/span&gt;
    &lt;span class="na"&gt;privateKeySecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;letsencrypt-prod&lt;/span&gt;
    &lt;span class="na"&gt;solvers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http01&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I applied it to the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f cluster-issuer.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Despite the ClusterIssuer being present and healthy, the certificates still wouldn’t renew. The plot thickened...&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚠️ Chapter 6: The AWS Load Balancer Controller Nightmare
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhr25fflpn8qt6r9e8cy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhr25fflpn8qt6r9e8cy.jpg" alt="Load Balancer Controller Nightmare" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just when I thought the worst was behind me, the AWS Load Balancer Controller decided to stir up fresh chaos.&lt;/p&gt;
&lt;h3&gt;
  
  
  🧩 The IAM Permission Maze
&lt;/h3&gt;

&lt;p&gt;The first clue came from the controller logs—littered with authorization errors like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="s2"&gt;"error"&lt;/span&gt;:&lt;span class="s2"&gt;"operation error EC2: DescribeAvailabilityZones, https response error StatusCode: 403, RequestID: 3ba25abe-7bb2-4b05-bb33-26fde9696931, api error UnauthorizedOperation: You are not authorized to perform this operation"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That 403 told me everything I needed to know:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The controller &lt;strong&gt;lacked the necessary IAM permissions&lt;/strong&gt; to interact with AWS APIs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What followed was a deep dive into the &lt;strong&gt;AWS IAM Policy abyss&lt;/strong&gt;—where small misconfigurations can lead to hours of head-scratching and trial-and-error debugging.&lt;br&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  🔐 The Fix: A Proper IAM Role and Trust Policy
&lt;/h3&gt;

&lt;p&gt;To get the controller working, I created a dedicated IAM role with the required permissions using Amazon Q, and then annotated the Kubernetes service account to assume it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create the IAM role&lt;/span&gt;
aws iam create-role &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role-name&lt;/span&gt; AmazonEKS_AWS_Load_Balancer_Controller &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--assume-role-policy-document&lt;/span&gt; file://aws-lb-controller-trust-policy.json

&lt;span class="c"&gt;# Attach the managed policy&lt;/span&gt;
aws iam attach-role-policy &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role-name&lt;/span&gt; AmazonEKS_AWS_Load_Balancer_Controller &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--policy-arn&lt;/span&gt; arn:aws:iam::830714671200:policy/AWSLoadBalancerControllerIAMPolicy

&lt;span class="c"&gt;# Annotate the controller's service account in Kubernetes&lt;/span&gt;
kubectl annotate serviceaccount aws-load-balancer-controller &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="se"&gt;\&lt;/span&gt;
  eks.amazonaws.com/role-arn&lt;span class="o"&gt;=&lt;/span&gt;arn:aws:iam::830714671200:role/AmazonEKS_AWS_Load_Balancer_Controller &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--overwrite&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the IAM role in place and attached, I expected smooth sailing—but Kubernetes had other plans.&lt;br&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  🌐 The Internal vs Internet-Facing Revelation
&lt;/h3&gt;

&lt;p&gt;Even with the right permissions, &lt;strong&gt;certificates still weren’t issuing.&lt;/strong&gt;&lt;br&gt;
Let’s Encrypt couldn’t validate the ACME HTTP-01 challenge—and I soon discovered why.&lt;/p&gt;

&lt;p&gt;Running this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws elbv2 describe-load-balancers &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--names&lt;/span&gt; k8s-ingressn-ingressn-9a8b080581 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; eu-central-1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'LoadBalancers[0].Scheme'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Returned:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;json
"internal"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The NGINX ingress LoadBalancer was &lt;strong&gt;internal&lt;/strong&gt;, which made it unreachable from the internet—completely blocking Let’s Encrypt from reaching the verification endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  🛠️ The Fix: Force Internet-Facing Scheme
&lt;/h3&gt;

&lt;p&gt;I updated the annotation on the NGINX controller service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl annotate svc ingress-nginx-controller &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-n&lt;/span&gt; ingress-nginx &lt;span class="se"&gt;\&lt;/span&gt;
  service.beta.kubernetes.io/aws-load-balancer-scheme&lt;span class="o"&gt;=&lt;/span&gt;internet-facing &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--overwrite&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This change &lt;strong&gt;recreated the LoadBalancer&lt;/strong&gt;, this time with internet-facing access.&lt;br&gt;
&lt;/p&gt;
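&lt;p&gt;Because annotating the service recreates the LoadBalancer, its DNS name changes too, so it is worth re-running the scheme check and capturing the new address. This is the same AWS CLI call as before, widened to list every LoadBalancer in the region:&lt;/p&gt;

```shell
# Confirm the scheme flipped, and grab the new DNS name for the DNS cutover
aws elbv2 describe-load-balancers \
  --region eu-central-1 \
  --query 'LoadBalancers[].[LoadBalancerName,Scheme,DNSName]' \
  --output table
```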

&lt;h2&gt;
  
  
  🌐 Chapter 7: The DNS Migration Challenge
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd50w8opml8707ljjnq8t.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd50w8opml8707ljjnq8t.webp" alt="DNS Migration Challenge" width="640" height="359"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  The Problem: A New LoadBalancer, Seven Domains
&lt;/h3&gt;

&lt;p&gt;Once the internet-facing LoadBalancer was live and SSL certs were flowing, there was still one critical piece left: &lt;strong&gt;DNS&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The new LoadBalancer came with a &lt;strong&gt;new DNS name&lt;/strong&gt;, and I had &lt;strong&gt;seven production domains&lt;/strong&gt; that needed to point to it.&lt;/p&gt;

&lt;p&gt;Doing this manually in the Route 53 console?&lt;br&gt;
Slow. Risky. Error-prone.&lt;/p&gt;
&lt;h3&gt;
  
  
  ⚙️ The Automated Solution
&lt;/h3&gt;

&lt;p&gt;To avoid mistakes and speed things up, I wrote a script to automate the DNS updates using the AWS CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nv"&gt;HOSTED_ZONE_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Z037069025V45CB576XJD"&lt;/span&gt;
&lt;span class="nv"&gt;NEW_LB&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"k8s-ingressn-ingressn-testing12345-9287c75b76ge25zc.elb.eu-central-1.amazonaws.com"&lt;/span&gt;
&lt;span class="nv"&gt;NEW_LB_ZONE_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Z3F0SRJ5LGBH90"&lt;/span&gt;

update_dns_record&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;domain&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;
    aws route53 change-resource-record-sets &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--hosted-zone-id&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOSTED_ZONE_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--change-batch&lt;/span&gt; &lt;span class="s2"&gt;"{
            &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Changes&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: [{
                &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Action&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;UPSERT&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,
                &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;ResourceRecordSet&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: {
                    &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Name&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$domain&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,
                    &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Type&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;A&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,
                    &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;AliasTarget&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: {
                        &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;DNSName&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$NEW_LB&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,
                        &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;EvaluateTargetHealth&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: true,
                        &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;HostedZoneId&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$NEW_LB_ZONE_ID&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;
                    }
                }
            }]
        }"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By calling &lt;code&gt;update_dns_record&lt;/code&gt; with each domain, I was able to quickly and safely redirect traffic to the new cluster.&lt;/p&gt;
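&lt;p&gt;Hand-rolling JSON inside nested shell quotes, as the script above does, is easy to get wrong. As a sketch, the same change batch can be built in Python with &lt;code&gt;json.dumps&lt;/code&gt; so quoting and escaping are handled automatically (the zone IDs and LoadBalancer name are the same placeholders as in the script):&lt;/p&gt;

```python
import json

HOSTED_ZONE_ID = "Z037069025V45CB576XJD"
NEW_LB = "k8s-ingressn-ingressn-testing12345-9287c75b76ge25zc.elb.eu-central-1.amazonaws.com"
NEW_LB_ZONE_ID = "Z3F0SRJ5LGBH90"  # the ELB's own hosted zone, not your Route 53 zone

def change_batch(domain):
    """Build the UPSERT change batch for one domain as a JSON string."""
    return json.dumps({
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "AliasTarget": {
                    "DNSName": NEW_LB,
                    "EvaluateTargetHealth": True,
                    "HostedZoneId": NEW_LB_ZONE_ID,
                },
            },
        }]
    })

# Pass the result to `aws route53 change-resource-record-sets --change-batch ...`
# (or to boto3's Route 53 client) once per domain.
print(change_batch("kafka-dev.example.net"))
```

&lt;p&gt;Either way, &lt;code&gt;UPSERT&lt;/code&gt; keeps the operation idempotent: it creates the record if it is missing and updates it otherwise.&lt;/p&gt;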

&lt;p&gt;&lt;strong&gt;✅ Domains migrated:&lt;/strong&gt;&lt;br&gt;
Here are the domains I successfully updated:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;kafka-dev.example.net&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pgadmin-dev.example.net&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sonarqube.example.net&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;bida-fe-qaclinicaly.example.net&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;bida-gateway-qaclinicaly.example.net&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;bida-fe-qaprod.example.net&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;eduaid-admin-qaprod.example.net&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each one now points to the new LoadBalancer, resolving to the right service in the new EKS cluster.&lt;/p&gt;



&lt;h2&gt;
  
  
  🏁 Chapter 8: The Final Victory
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgplk1g1789u263q0oz3r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgplk1g1789u263q0oz3r.jpg" alt="The Final Victory" width="666" height="500"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  ⚔️ The Moment of Truth
&lt;/h3&gt;

&lt;p&gt;After battling through IAM issues, LoadBalancer headaches, DNS rewiring, and countless YAML files, it all came down to one final moment: &lt;strong&gt;Would the certificates issue successfully?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I decided to start fresh and purge any leftover Cert-Manager resources to ensure there were no stale or broken states hanging around:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clean slate approach&lt;/span&gt;
kubectl delete challenges &lt;span class="nt"&gt;--all&lt;/span&gt; &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt;
kubectl delete orders &lt;span class="nt"&gt;--all&lt;/span&gt; &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt;
kubectl delete certificates &lt;span class="nt"&gt;--all&lt;/span&gt; &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I waited...&lt;br&gt;
Refreshed...&lt;br&gt;
Checked logs...&lt;br&gt;
Waited some more...&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;✅ And Then—Success&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAMESPACE    NAME                                            READY   SECRET                                          AGE
qaclinicaly   bida-fe-qaclinicaly.example.net-crt        True    bida-fe-qaclinicaly.example.net-crt        3m
qaclinicaly   bida-gateway-qaclinicaly.example.net-crt   True    bida-gateway-qaclinicaly.example.net-crt   3m
qaprod       bida-fe-qaprod.example.net-crt            True    aida-fe-qaprod.example.net-crt            2m59s
qaprod       eduaid-admin-qaprod.example.net-crt          True    eduaid-admin-qaprod.example.net-crt          2m59s
sonarqube    sonarqube.example.net-crt                 True    sonarqube.example.net-crt                 2m59s
tools        kafka-dev.example.net-tls                 True    kafka-dev.example.net-tls                 2m59s
tools        pgadmin-dev.example.net-tls               True    pgadmin-dev.example.net-tls               2m59s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ALL 7 CERTIFICATES flipped to READY = True&lt;/strong&gt; 🎉&lt;br&gt;
&lt;/p&gt;
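&lt;p&gt;As one last end-to-end check, you can inspect the certificate a domain actually serves from outside the cluster. A quick &lt;code&gt;openssl&lt;/code&gt; sketch (any of the migrated hostnames works here):&lt;/p&gt;

```shell
# Print the issuer and validity window of the live certificate
HOST=bida-fe-qaclinicaly.example.net
echo | openssl s_client -connect "$HOST:443" -servername "$HOST" 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```

&lt;p&gt;A freshly issued Let’s Encrypt certificate should show an issuer of &lt;code&gt;C=US, O=Let's Encrypt&lt;/code&gt; and a &lt;code&gt;notAfter&lt;/code&gt; date roughly 90 days out.&lt;/p&gt;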

&lt;h2&gt;
  
  
  📘 Chapter 9: Lessons Learned
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxubnrvkrl56bwfot42h6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxubnrvkrl56bwfot42h6.jpg" alt="Lessons Learned" width="622" height="477"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  🔧 Technical Insights
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;CRD Installation Order is Critical&lt;/strong&gt;: 
Install core dependencies first. Cert-manager before anything else.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM Permissions are Tricky&lt;/strong&gt;:
Minimal IAM policies might pass linting, but they’ll fail at runtime. Use comprehensive, purpose-built roles.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LoadBalancer Schemes Matter&lt;/strong&gt;: 
The difference between internal and internet-facing can break certificate validation entirely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DNS Automation Saves Time and Sanity&lt;/strong&gt;: 
Manual Route 53 updates are error-prone. Automate with scripts and avoid the guesswork.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EBS Snapshots are Underrated&lt;/strong&gt;: 
Sometimes the simplest tools are the most reliable. EBS snapshots gave me peace of mind and fast recovery.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  🧠 Operational Insights
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Plan for the Unexpected&lt;/strong&gt;: 
SSL certificate issues took more time than the core migration itself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automate Early, Automate Often&lt;/strong&gt;: 
The scripts I wrote saved hours and helped enforce repeatable processes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document Everything&lt;/strong&gt;: 
Every command, every fix, every gotcha: write it down. It pays off when something goes wrong (and it will).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Be Patient&lt;/strong&gt;: 
DNS propagation and cert validation can be slow. Don’t panic, just wait.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Always Have a Rollback Plan&lt;/strong&gt;: 
Keeping the old cluster alive gave me confidence to move fast with less fear of failure.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  🛠️ Custom Tools That Saved Us
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;scripts/update-dns-records.sh&lt;/code&gt;&lt;/strong&gt; - Automated DNS cutover&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;scripts/manual-ebs-backup.sh&lt;/code&gt;&lt;/strong&gt; - Fast and reliable data backup&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;letsencrypt-clusterissuer.yaml&lt;/code&gt;&lt;/strong&gt; - Enabled SSL cert automation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comprehensive IAM policies&lt;/strong&gt; - Smooth AWS integration with the load balancer controller

&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  📊  Chapter 10: The Final Status
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;✅  Migration Scorecard&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Area&lt;/th&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Infrastructure&lt;/td&gt;
&lt;td&gt;46 CRDs and all operators deployed ✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data Migration&lt;/td&gt;
&lt;td&gt;EBS volumes restored successfully ✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DNS Migration&lt;/td&gt;
&lt;td&gt;All 7 domains updated ✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SSL Certificates&lt;/td&gt;
&lt;td&gt;All validated and active ✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LoadBalancer&lt;/td&gt;
&lt;td&gt;Internet-facing and functional ✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Applications&lt;/td&gt;
&lt;td&gt;Fully deployed and operational ✅&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Performance Metrics
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Total Migration Time&lt;/strong&gt;: ~18 hours (including troubleshooting)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Downtime&lt;/strong&gt;: 0 minutes (DNS cutover was seamless)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Loss&lt;/strong&gt;: 0 bytes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Certificate Validation Time&lt;/strong&gt;: 3 minutes (after fixes)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DNS Propagation Time&lt;/strong&gt;: 2-5 minutes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion: The Journey's End
&lt;/h2&gt;

&lt;p&gt;What started as a routine Kubernetes version upgrade turned into an epic journey through the depths of AWS IAM policies, LoadBalancer configurations, and SSL certificate validation. We faced challenges we never expected and learned lessons that will serve us well in future migrations.&lt;/p&gt;

&lt;p&gt;The key takeaway? &lt;strong&gt;Kubernetes migrations are never just about Kubernetes&lt;/strong&gt;. They're about the entire ecosystem—DNS, SSL certificates, cloud provider integrations, and all the moving parts that make modern applications work.&lt;/p&gt;

&lt;p&gt;Our hybrid approach using manual EBS snapshots proved to be the right choice for our use case. While it required more manual work upfront, it gave us confidence in our data integrity and a clear rollback path.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's Next?
&lt;/h3&gt;

&lt;p&gt;With our new v1.33 cluster running smoothly, we're already planning for the future:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implementing GitOps for better deployment automation&lt;/li&gt;
&lt;li&gt;Enhancing our monitoring and alerting&lt;/li&gt;
&lt;li&gt;Preparing for the next major version upgrade (with better automation!)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Final Words
&lt;/h3&gt;

&lt;p&gt;To anyone embarking on a similar journey: expect the unexpected, automate everything you can, and always have a rollback plan. The path may be challenging, but the destination—a modern, secure, and scalable Kubernetes cluster—is worth every debugging session.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Migration Status: ✅ COMPLETE&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The cluster is dead, long live the cluster!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Django’s Game of Life Meets AWS ECS – The Ultimate Deployment Hack!</title>
      <dc:creator>Ukeme David Eseme</dc:creator>
      <pubDate>Wed, 29 Jan 2025 23:54:23 +0000</pubDate>
      <link>https://forem.com/aws-builders/djangos-game-of-life-meets-aws-ecs-the-ultimate-deployment-hack-2dha</link>
      <guid>https://forem.com/aws-builders/djangos-game-of-life-meets-aws-ecs-the-ultimate-deployment-hack-2dha</guid>
      <description>&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Project Setup
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Project Structure&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AWS Infrastructure
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;ECR Repository Setup&lt;/li&gt;
&lt;li&gt;Export Environment Variables&lt;/li&gt;
&lt;li&gt;IAM Roles &amp;amp; Permissions&lt;/li&gt;
&lt;li&gt;ECS Cluster Creation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Manually build and push image to ECR
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Build Docker Image&lt;/li&gt;
&lt;li&gt;Login to ECR&lt;/li&gt;
&lt;li&gt;Tag and push image&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Task Definition Configuration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Dynamically update task definition file&lt;/li&gt;
&lt;li&gt;Register task definition&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Deploy Game Service
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Input Service Details&lt;/li&gt;
&lt;li&gt;Load balancing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  View Deployed Game
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Access Load Balancer End-point&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The Game of Life, created by mathematician John Conway in 1970, is a fascinating example of a cellular automaton where simple rules create complex patterns. &lt;/p&gt;

&lt;p&gt;Our project takes this classic simulation and implements it as a web application using Django. &lt;/p&gt;

&lt;p&gt;By deploying it on Amazon Elastic Container Service (ECS), we're making this mathematical marvel accessible through the cloud, demonstrating how modern container orchestration can bring traditional concepts to life in a scalable, reliable way.&lt;/p&gt;
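&lt;p&gt;For readers new to the simulation itself, the update rule is tiny. Here is a minimal, framework-free sketch of one generation in Python (independent of the Django app in the repo):&lt;/p&gt;

```python
from collections import Counter

def step(live_cells):
    """Advance Conway's Game of Life by one generation.

    live_cells: a set of (x, y) tuples marking live cells on an unbounded grid.
    """
    # Count live neighbours for every cell adjacent to a live cell
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A horizontal "blinker" flips to a vertical bar each generation
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(blinker) == {(1, 0), (1, 1), (1, 2)}
```

&lt;p&gt;Each generation, a dead cell with exactly three live neighbours is born, a live cell survives only with two or three live neighbours, and everything else dies.&lt;/p&gt;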

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS Account&lt;/li&gt;
&lt;li&gt;AWS CLI configured&lt;/li&gt;
&lt;li&gt;Docker installed&lt;/li&gt;
&lt;li&gt;Git repository with the Game of Life code&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Project Setup
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;View project repo &lt;a href="https://github.com/UkemeSkywalker/game_of_life" rel="noopener noreferrer"&gt;Here&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/UkemeSkywalker/game_of_life
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Navigate into the project.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd game_of_life
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Project Structure&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;game-of-life/
├── Dockerfile
├── buildspec.yml
├── requirements.txt
├── manage.py
├── game_of_life/
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   ├── asgi.py
│   └── wsgi.py
├── life/
│   ├── templates/
│   │   └── life/
│   │       ├── landing.html
│   │       ├── select_pattern.html
│   │       └── game.html
│   └── [other app files]
└── ecs/
    └── task-definition.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  AWS Infrastructure
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: ECR Repository Setup&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create ECR Repo
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr create-repository \
  --repository-name game-of-life \
  --image-scanning-configuration scanOnPush=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2y9ullqoqkh3l6drwwy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2y9ullqoqkh3l6drwwy.png" alt="created ecr repo" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Log in to your AWS console and navigate to the ECR service to find the created ECR repository&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5obleikntg5p8emfigd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5obleikntg5p8emfigd.png" alt="The created ECR repo" width="800" height="49"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, export the needed environment variables by running the commands below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Export Environment Variables&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export AWS_REGION=us-east-1
export ECR_REPOSITORY_URI=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/game-of-life
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Test Login Command
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;expected output should be &lt;code&gt;Login Succeeded&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: IAM Roles &amp;amp; Permissions&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log in to your AWS console and create an IAM role
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IAM &amp;gt; Roles &amp;gt; Create Role &amp;gt; Use Case: Elastic Container Service &amp;gt; 
ECS Task &amp;gt; Select Policy: AmazonECSTaskExecutionRolePolicy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Trusted entity type:&lt;/strong&gt; AWS service&lt;br&gt;
&lt;strong&gt;Service:&lt;/strong&gt; Elastic Container Service&lt;br&gt;
&lt;strong&gt;Use Case:&lt;/strong&gt; Elastic Container Service Task&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrkgp4umbr52231jp8z1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrkgp4umbr52231jp8z1.png" alt="create role" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next, add permissions policies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Select:&lt;/strong&gt; &lt;code&gt;AmazonECSTaskExecutionRolePolicy&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqsw19jeby3flagxpsn1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqsw19jeby3flagxpsn1.png" alt="select policy" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role Name:&lt;/strong&gt; &lt;code&gt;ecsTaskExecutionRole&lt;/code&gt;&lt;br&gt;
Then create the role; it should look similar to the screenshot below.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58zgsfjjt7qg5oijgajg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58zgsfjjt7qg5oijgajg.png" alt="ecsTaskExecutionRole" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;
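&lt;p&gt;If you prefer to stay in the terminal, the same role can be created from the CLI. The following is a sketch of the equivalent commands; the trust policy mirrors what the console generates for the ECS Task use case.&lt;/p&gt;

```shell
# Write the trust policy that lets ECS tasks assume the role.
printf '%s' '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ecs-tasks.amazonaws.com"},"Action":"sts:AssumeRole"}]}' > ecs-trust-policy.json

# Create the role and attach the managed execution policy.
aws iam create-role \
  --role-name ecsTaskExecutionRole \
  --assume-role-policy-document file://ecs-trust-policy.json \
  || echo "role may already exist"
aws iam attach-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy \
  || echo "could not attach policy"
```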

&lt;p&gt;&lt;strong&gt;Step 4: ECS Cluster Creation&lt;/strong&gt;&lt;br&gt;
Run the command below to create the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs create-cluster \
  --cluster-name game-of-life \
  --capacity-providers FARGATE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the cluster is created, the command outputs data similar to the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qqnb9jz9n4v986cy2nv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qqnb9jz9n4v986cy2nv.png" alt="ECS Cluster created" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Confirm the created cluster: log in to your AWS console and navigate to ECS; you should see the new cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdf3iritw0v6i2hyndhz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdf3iritw0v6i2hyndhz4.png" alt="cluster" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;
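&lt;p&gt;You can also check the cluster from the CLI; a small sketch using the standard &lt;code&gt;describe-clusters&lt;/code&gt; call:&lt;/p&gt;

```shell
# An active cluster reports status ACTIVE.
STATUS=$(aws ecs describe-clusters --clusters game-of-life \
  --query 'clusters[0].status' --output text 2>/dev/null || echo "unknown")
echo "cluster status: $STATUS"
```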

&lt;h2&gt;
  
  
  Manually build and push image to ECR
&lt;/h2&gt;

&lt;p&gt;The next step is to build the project's Docker image and push it to ECR.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Build Docker Image&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Dockerfile already exists in the root directory of the project.&lt;br&gt;
Run the command below to build the image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build --platform linux/x86_64 -t game-of-life .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To view the created image, run the docker command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26gixxbyesset6ujyixn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26gixxbyesset6ujyixn.png" alt="Docker image in terminal" width="800" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Login to ECR&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 7: Tag and push image&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag game-of-life:latest $ECR_REPOSITORY_URI
docker push $ECR_REPOSITORY_URI
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check ECR for the pushed Docker image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8q7hnea752v2g9asimw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8q7hnea752v2g9asimw.png" alt="Docker image" width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Task Definition Configuration
&lt;/h3&gt;

&lt;p&gt;In the project directory, you will find the task definition file at &lt;code&gt;ecs/task-definition.json&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Navigate to the &lt;code&gt;ecs&lt;/code&gt; directory in the project folder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ecs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run the commands below to replace the placeholders in the &lt;code&gt;task-definition.json&lt;/code&gt; file with the environment variable values exported earlier. Note that &lt;code&gt;sed -i ''&lt;/code&gt; is the macOS/BSD form; on GNU/Linux, use &lt;code&gt;sed -i&lt;/code&gt; without the empty-string argument.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8: Dynamically update task definition file&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sed -i '' "s/\${AWS_ACCOUNT_ID}/$AWS_ACCOUNT_ID/g" task-definition.json
sed -i '' "s/\${AWS_REGION}/$AWS_REGION/g" task-definition.json
escaped_uri=$(echo $ECR_REPOSITORY_URI | sed 's/\//\\\//g')
sed -i '' "s/\${ECR_REPOSITORY_URI}/$escaped_uri/g" task-definition.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
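&lt;p&gt;If you need these edits to work on both macOS and Linux, writing to a new file sidesteps &lt;code&gt;-i&lt;/code&gt; entirely. A minimal sketch, demonstrated on a stand-in file (&lt;code&gt;sample.json&lt;/code&gt; is illustrative; apply the same pattern to &lt;code&gt;task-definition.json&lt;/code&gt;):&lt;/p&gt;

```shell
# Substitute a ${VAR} placeholder without sed -i, whose syntax differs
# between GNU and BSD sed. sample.json stands in for the real file.
AWS_REGION="${AWS_REGION:-us-east-1}"
echo '"awslogs-region": "${AWS_REGION}"' > sample.json
sed "s/\${AWS_REGION}/$AWS_REGION/g" sample.json > sample.rendered.json
cat sample.rendered.json
```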



&lt;p&gt;Below is the content of the &lt;code&gt;task-definition.json&lt;/code&gt; file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "family": "game-of-life",
    "networkMode": "awsvpc",
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",
    "memory": "512",
    "executionRoleArn": "arn:aws:iam::${AWS_ACCOUNT_ID}:role/ecsTaskExecutionRole",
    "runtimePlatform": {
        "operatingSystemFamily": "LINUX",
        "cpuArchitecture": "X86_64"
    },
    "containerDefinitions": [
        {
            "name": "game-of-life",
            "image": "${ECR_REPOSITORY_URI}:latest",
            "portMappings": [
                {
                    "containerPort": 8000,
                    "protocol": "tcp"
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/game-of-life",
                    "awslogs-region": "${AWS_REGION}",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
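&lt;p&gt;Before registering, it is worth validating the substituted file, since a bad pattern can leave invalid JSON behind. A small sketch (&lt;code&gt;check.json&lt;/code&gt; is a stand-in; run the same check on &lt;code&gt;task-definition.json&lt;/code&gt;):&lt;/p&gt;

```shell
# python3 -m json.tool exits non-zero on invalid JSON, catching a
# botched substitution before ECS rejects the registration.
printf '{"family": "game-of-life", "cpu": "256"}' > check.json
if python3 -m json.tool check.json > /dev/null; then echo "valid JSON"; fi
```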



&lt;p&gt;&lt;strong&gt;Step 9: Register task definition&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To create the task definition, run the below command while in the &lt;code&gt;ecs&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs register-task-definition --cli-input-json file://task-definition.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Log in to your AWS console and navigate to ECS using the path below to view the registered task definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Amazon Elastic Container Service &amp;gt; Task definitions &amp;gt; game-of-life
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bnvrbj44cz4kvmlmr8l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bnvrbj44cz4kvmlmr8l.png" alt="Task Definition" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy Game Service
&lt;/h2&gt;

&lt;p&gt;Log in to the AWS console and navigate to our previously created cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Amazon Elastic Container Service &amp;gt; Clusters &amp;gt; game-of-life
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the Services tab, click Create&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqt5wtfstdztttszshp5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqt5wtfstdztttszshp5c.png" alt="game cluster" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 10: Input Service Details&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Environment&lt;/code&gt; section: leave everything as default&lt;/li&gt;
&lt;li&gt;Deployment configuration: change only the &lt;code&gt;Family&lt;/code&gt; dropdown to the created task definition &lt;code&gt;game-of-life&lt;/code&gt;, and set the &lt;strong&gt;Revision&lt;/strong&gt; to latest&lt;/li&gt;
&lt;li&gt;Service name: game-of-life-svc&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnlzksxm2jbxco9ppwjf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnlzksxm2jbxco9ppwjf.png" alt="svc name" width="800" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 11: Load balancing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scroll down and select the &lt;code&gt;Use load balancing&lt;/code&gt; checkbox.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Load balancer type:&lt;/code&gt; select &lt;code&gt;Application Load Balancer&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Add the &lt;code&gt;Load balancer name:&lt;/code&gt;&lt;em&gt;game-of-life&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydymysk4ls2kvqvty3ps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydymysk4ls2kvqvty3ps.png" alt="Load balancing" width="800" height="644"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leave everything else as default, scroll to the end, and click &lt;code&gt;create&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi40rdd5pl3m3d61jjd9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi40rdd5pl3m3d61jjd9e.png" alt="create load balancing" width="800" height="644"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  View Deployed Game
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 12: Access the Load Balancer Endpoint&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once the service has been deployed, click on the service name &lt;code&gt;game-of-life-svc&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flk4hf4q8pyyp7jy9di7u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flk4hf4q8pyyp7jy9di7u.png" alt="deployed Service" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the service section, select the &lt;code&gt;Configuration and networking&lt;/code&gt; tab&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuld2be448g0gnnlpfk9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuld2be448g0gnnlpfk9u.png" alt="Configuration and networking" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scroll down to &lt;code&gt;Network configuration&lt;/code&gt;, copy the load balancer's &lt;code&gt;DNS name&lt;/code&gt;, and open it in your browser.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxajdgphxxs3p1lx1op1y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxajdgphxxs3p1lx1op1y.png" alt="loadbalancer endpoint" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;
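&lt;p&gt;The DNS name can also be fetched from the CLI; a sketch, assuming &lt;code&gt;game-of-life&lt;/code&gt; is the load balancer name chosen in Step 11:&lt;/p&gt;

```shell
# Look up the ALB's DNS name by the name given in Step 11.
DNS_NAME=$(aws elbv2 describe-load-balancers --names game-of-life \
  --query 'LoadBalancers[0].DNSName' --output text 2>/dev/null \
  || echo "not-found")
echo "http://$DNS_NAME"
```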

&lt;p&gt;Congratulations! 🥳🥳🥳&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgiyo88wce3t0gcucwos.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgiyo88wce3t0gcucwos.png" alt="Game Home" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F229ln48ftlysy24dfroi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F229ln48ftlysy24dfroi.png" alt="Game Options" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Successfully deploying the Game of Life on AWS ECS showcases how traditional applications can be modernized using container technology and cloud infrastructure. &lt;/p&gt;

&lt;p&gt;Through this deployment, we've leveraged AWS's managed container orchestration to ensure our application runs reliably and can scale as needed. &lt;/p&gt;

&lt;p&gt;The combination of Django's robust web framework capabilities with AWS's infrastructure provides a stable platform for users to explore Conway's mathematical masterpiece, demonstrating the perfect blend of classical computing concepts with modern cloud architecture.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>python</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Automated Nginx and Fluentd Deployment on AWS EC2 using Ansible: A DevOps Guide</title>
      <dc:creator>Ukeme David Eseme</dc:creator>
      <pubDate>Sun, 19 Jan 2025 17:27:43 +0000</pubDate>
      <link>https://forem.com/aws-builders/automated-nginx-and-fluentd-deployment-on-aws-ec2-using-ansible-a-devops-guide-b5n</link>
      <guid>https://forem.com/aws-builders/automated-nginx-and-fluentd-deployment-on-aws-ec2-using-ansible-a-devops-guide-b5n</guid>
      <description>&lt;p&gt;Table of Content&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Introduction&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Brief overview of the problem: Managing web servers and logs at scale&lt;/li&gt;
&lt;li&gt;Why AWS EC2 + Ansible + Nginx + Fluentd is a powerful combination&lt;/li&gt;
&lt;li&gt;Why This Stack Works Together&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prerequisites&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Infrastructure Setup&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create security group&lt;/li&gt;
&lt;li&gt;Security Group configuration for web traffic&lt;/li&gt;
&lt;li&gt;Verify the security group rules&lt;/li&gt;
&lt;li&gt;Create your key pair&lt;/li&gt;
&lt;li&gt;Launch EC2 instance using created security group&lt;/li&gt;
&lt;li&gt;Copy keypair to ssh directory&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Ansible Configuration&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Folder Structure&lt;/li&gt;
&lt;li&gt;Prepare workspace environment&lt;/li&gt;
&lt;li&gt;Create the host.yaml file&lt;/li&gt;
&lt;li&gt;Test Ansible connection&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set Up Roles&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Common Task&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Nginx Playbook&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security Playbook&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Log Management FluentD Playbook&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deployment and Testing&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Brief overview of the problem: Managing web servers and logs at scale
&lt;/h3&gt;

&lt;p&gt;In today's cloud environments, managing web servers and logs at scale presents significant challenges. &lt;/p&gt;

&lt;p&gt;DevOps teams struggle with manual server configurations, inconsistent deployments, and the overwhelming task of processing massive log data for insights and security analysis. &lt;/p&gt;

&lt;p&gt;Our automated solution combines AWS, Ansible, Nginx, and Fluentd to streamline these operations efficiently.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why AWS EC2 + Ansible + Nginx + Fluentd is a powerful combination
&lt;/h3&gt;

&lt;p&gt;AWS EC2 + Ansible + Nginx + Fluentd creates a robust web infrastructure by combining cloud scalability with automation and efficient logging. &lt;/p&gt;

&lt;p&gt;AWS EC2 provides the flexible compute resources, Ansible automates deployment and configuration tasks, Nginx serves as a high-performance web server, and Fluentd handles comprehensive log collection and processing. &lt;/p&gt;

&lt;h3&gt;
  
  
  Why This Stack Works Together
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS EC2 (Infrastructure)&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk84x0cihpqbm6zku3kx9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk84x0cihpqbm6zku3kx9.png" alt="EC2 logo" width="740" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides on-demand, scalable computing resources&lt;/li&gt;
&lt;li&gt;Offers multiple instance types to match workload needs&lt;/li&gt;
&lt;li&gt;Integrates seamlessly with other AWS services&lt;/li&gt;
&lt;li&gt;Enables global deployment with multiple regions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Ansible (Automation)&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzbsl15psthiqe7vmmhx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzbsl15psthiqe7vmmhx.png" alt="Ansible Logo" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automates server configuration and application deployment&lt;/li&gt;
&lt;li&gt;Uses simple YAML syntax for easy maintenance&lt;/li&gt;
&lt;li&gt;Requires no agents on managed servers (agentless)&lt;/li&gt;
&lt;li&gt;Ensures consistent configurations across all servers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Nginx (Web Server)&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dnhin0jsb5uiabl5aqa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dnhin0jsb5uiabl5aqa.png" alt="Nginx Logo" width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delivers high-performance web serving capabilities&lt;/li&gt;
&lt;li&gt;Handles concurrent connections efficiently&lt;/li&gt;
&lt;li&gt;Provides reverse proxy and load balancing features&lt;/li&gt;
&lt;li&gt;Offers robust security features and SSL/TLS support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Fluentd (Log Management)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fef4z6oihqxitdphrl17w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fef4z6oihqxitdphrl17w.png" alt="Fluentd" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Collects and processes logs from multiple sources&lt;/li&gt;
&lt;li&gt;Offers flexible routing of log data&lt;/li&gt;
&lt;li&gt;Integrates well with various data outputs (S3, CloudWatch)&lt;/li&gt;
&lt;li&gt;Provides reliable log buffering and failover&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Basic Knowledge of AWS &amp;amp; Ansible&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;AWS CLI installation&lt;/a&gt; on local machine&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/configure/index.html" rel="noopener noreferrer"&gt;Configure&lt;/a&gt; AWS CLI&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html" rel="noopener noreferrer"&gt;Install Ansible&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Infrastructure Setup
&lt;/h2&gt;

&lt;p&gt;Before we proceed with setting up the infrastructure, let's confirm that Ansible is installed on our machine and that the AWS CLI is configured correctly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6xvn1go4hnl1d8eodev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6xvn1go4hnl1d8eodev.png" alt="Ansible Version" width="800" height="114"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sts get-caller-identity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujnmxt1br0arg3zz9xjp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujnmxt1br0arg3zz9xjp.png" alt="aws caller identity" width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can go ahead with setting up the infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create security group
&lt;/h3&gt;

&lt;p&gt;The next step is to create the security group. Copy and paste the command below into your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-security-group \
    --group-name nginx-web-server-sg \
    --description "Security group for Nginx web server and SSH access"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command creates the security group and outputs its GroupId and SecurityGroupArn:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "GroupId": "sg-0ee2b6c700c11e902",
    "SecurityGroupArn": "arn:aws:ec2:us-east-1:910883278292:security-group/sg-0ee2b6c700c11e902"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can log in to your AWS console to confirm the creation of the security group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmm5fv2ndlqlww21hky9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmm5fv2ndlqlww21hky9.png" alt="AWS Security Group" width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Group configuration for web traffic
&lt;/h3&gt;

&lt;p&gt;Let's add the necessary inbound rules for HTTP (80), HTTPS (443), and SSH (22):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Store the Security Group ID in a variable (from the previous command's output; replace it with your actual Security Group ID)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export SG_ID="sg-0ee2b6c700c11e902"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm that you have exported your SG_ID:&lt;br&gt;
&lt;code&gt;echo $SG_ID&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Allow HTTP (port 80)
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 authorize-security-group-ingress \
    --group-id $SG_ID \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The above command should produce similar output (remember, your security group ID will differ from mine):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Return": true,
    "SecurityGroupRules": [
        {
            "SecurityGroupRuleId": "sgr-081264c3c8fe3e58c",
            "GroupId": "sg-0ee2b6c700c11e902",
            "GroupOwnerId": "910883278292",
            "IsEgress": false,
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "CidrIpv4": "0.0.0.0/0",
            "SecurityGroupRuleArn": "arn:aws:ec2:us-east-1:910883278292:security-group-rule/sgr-081264c3c8fe3e58c"
        }
    ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Allow HTTPS (port 443)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 authorize-security-group-ingress \
    --group-id $SG_ID \
    --protocol tcp \
    --port 443 \
    --cidr 0.0.0.0/0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command output should be similar:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Return": true,
    "SecurityGroupRules": [
        {
            "SecurityGroupRuleId": "sgr-06591dee28547a3eb",
            "GroupId": "sg-0ee2b6c700c11e902",
            "GroupOwnerId": "910883278292",
            "IsEgress": false,
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "CidrIpv4": "0.0.0.0/0",
            "SecurityGroupRuleArn": "arn:aws:ec2:us-east-1:910883278292:security-group-rule/sgr-06591dee28547a3eb"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt; Allow SSH (port 22) - Best practice is to limit this to your IP
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 authorize-security-group-ingress \
    --group-id $SG_ID \
    --protocol tcp \
    --port 22 \
    --cidr 0.0.0.0/0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: for a more secure environment, it's advisable to allow only your specific IP instead of using 0.0.0.0/0 as the CIDR block.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The command output should be similar (in this example the rule was created with a specific /32 address, per the note above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Return": true,
    "SecurityGroupRules": [
        {
            "SecurityGroupRuleId": "sgr-06c1f6c3205247aea",
            "GroupId": "sg-0ee2b6c700c11e902",
            "GroupOwnerId": "910883278292",
            "IsEgress": false,
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "CidrIpv4": "105.113.64.38/32",
            "SecurityGroupRuleArn": "arn:aws:ec2:us-east-1:910883278292:security-group-rule/sgr-06c1f6c3205247aea"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
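To put the note above into practice, here is one way to restrict the SSH rule to your current public IP instead of 0.0.0.0/0. This is a sketch: it assumes the checkip.amazonaws.com lookup service is reachable and that the SG_ID variable is still set from the earlier steps.

```shell
# Look up your current public IP (checkip.amazonaws.com is an AWS-run endpoint)
MY_IP=$(curl -s https://checkip.amazonaws.com)

# Allow SSH only from that single address; /32 means exactly one host
aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" \
    --protocol tcp \
    --port 22 \
    --cidr "${MY_IP}/32"
```

If your ISP rotates your IP address, you will need to update this rule when it changes.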



&lt;h3&gt;
  
  
  Verify the security group rules
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-security-groups --group-ids $SG_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because the command output is lengthy, I won't be posting it here.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create your key pair
&lt;/h3&gt;

&lt;p&gt;This command creates your key pair and saves it on your local machine&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-key-pair --key-name nginx-server-key --query 'KeyMaterial' --output text &amp;gt; nginx-server-key.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm the created key&lt;br&gt;
&lt;code&gt;cat nginx-server-key.pem&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Launch EC2 instance using created security group
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 run-instances \
    --image-id ami-0e1bed4f06a3b463d \
    --instance-type t2.micro \
    --key-name nginx-server-key \
    --security-group-ids $SG_ID \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=nginx-fluentd-server}]'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image AMI:&lt;/strong&gt; &lt;code&gt;ami-0e1bed4f06a3b463d&lt;/code&gt; : an Ubuntu 22.04 image&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instance Type:&lt;/strong&gt; t2.micro (1 vCPU, 1 GiB memory)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tag:&lt;/strong&gt; &lt;code&gt;Key=Name,Value=nginx-fluentd-server&lt;/code&gt; : sets the display name of the server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4bffnid3adpr7zp41mva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4bffnid3adpr7zp41mva.png" alt="AWS t2.micro instance" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Copy keypair to ssh directory
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;cp nginx-server-key.pem ~/.ssh&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  SSH into EC2 instance
&lt;/h3&gt;

&lt;p&gt;You need this step to confirm you can actually log into the machine.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Change key pair permissions
&lt;code&gt;chmod 400 ~/.ssh/nginx-server-key.pem&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Connect using ssh
&lt;code&gt;ssh -i ~/.ssh/nginx-server-key.pem ubuntu@&amp;lt;instance-ip-address&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyygndipuu227znzxcr6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyygndipuu227znzxcr6.png" alt="ssh login to instance via terminal" width="800" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you can log into the instance via ssh, we can move to the next section.&lt;/p&gt;
&lt;h2&gt;
  
  
  Ansible Configuration
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Folder Structure
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── hosts.yaml
├── site.yaml
└── roles/
    ├── common/
    ├── └── defaults/
    │   |   └── main.yaml
    │   └── tasks/
    │   |    └── main.yaml
    │   └── handlers/
    │       └── main.yaml
    ├── nginx/
    │   └── tasks/
    │   |   └── main.yaml
    │   └── templates/
    │       └── nginx-logrotate.j2
    ├── security/
    |   └── defaults/
    │   |   └── main.yaml
    │   └── tasks/
    │       └── main.yaml
    └── fluentd/
        └── defaults/
        |   └── main.yaml
        └── tasks/
        |   └── main.yml
        └── files/
        |   └── denylist.txt
        └── templates/
            └── td-agent.conf.j2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Prepare workspace environment
&lt;/h3&gt;

&lt;p&gt;Open your preferred terminal (or the VS Code terminal), create a new folder called &lt;code&gt;AWS-Nginx-Ansible-FluentD&lt;/code&gt;, and navigate into it&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir AWS-Nginx-Ansible-FluentD
cd AWS-Nginx-Ansible-FluentD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create the hosts.yaml file
&lt;/h3&gt;

&lt;p&gt;The hosts.yaml file is an Ansible inventory file that defines the target servers (hosts) Ansible will manage. &lt;br&gt;
This file tells Ansible where and how to connect to the managed nodes for executing tasks or running playbooks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch host.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the below code&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
nginx_server:
  hosts:
    nginx-server-1:
      ansible_host: &amp;lt;your-instance-public-ip&amp;gt;
      ansible_user: ubuntu
      ansible_ssh_private_key_file: "{{ lookup('env', 'HOME') }}/.ssh/nginx-server-key.pem"
      ansible_ssh_common_args: "-o StrictHostKeyChecking=no"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; change the value of ansible_host to your created instance IP&lt;/p&gt;
&lt;/blockquote&gt;
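Before testing connectivity, it can help to confirm that Ansible parses the inventory the way you expect. This assumes Ansible is installed on your workstation:

```shell
# Print the inventory as a tree; nginx-server-1 should appear
# under the nginx_server group
ansible-inventory -i hosts.yaml --graph
```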

&lt;h3&gt;
  
  
  Test ansible connection
&lt;/h3&gt;

&lt;p&gt;Run the below command in the project folder terminal&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible nginx-server -i hosts.yaml -m ping
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the connection is successful, you should see the below result&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nginx-server-1 | SUCCESS =&amp;gt; {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3.10"
    },
    "changed": false,
    "ping": "pong"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Set Up Roles
&lt;/h2&gt;

&lt;p&gt;In Ansible, roles are a way to organize and reuse configurations by breaking them into modular, reusable components. &lt;br&gt;
Each role contains the:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tasks, &lt;/li&gt;
&lt;li&gt;Variables, &lt;/li&gt;
&lt;li&gt;Templates, &lt;/li&gt;
&lt;li&gt;Files, &lt;/li&gt;
&lt;li&gt;and handlers 
required to perform a particular function, such as installing a web server or configuring a database. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By using roles, you can structure your playbooks more efficiently, making them cleaner, more scalable, and easier to maintain.&lt;/p&gt;
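As an aside, if you would rather not create every role directory by hand in the sections below, `ansible-galaxy init` can scaffold them for you. Note that it also generates a few extra subfolders (meta/, vars/, tests/) that this project doesn't use and which you can delete:

```shell
# Scaffold one role skeleton per role under roles/
mkdir -p roles && cd roles
for role in common nginx security fluentd; do
    ansible-galaxy init "$role"
done
cd ..
```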

&lt;ol&gt;
&lt;li&gt;Create site.yaml: This top-level playbook file ties all the roles together and applies them to the target hosts.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch site.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Add the below snippet to the playbook
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Configure Web Server with Nginx and Fluentd
  hosts: nginx_server
  become: yes

  roles:
    - common
    - nginx
    - security
    - fluentd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Common Task
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Create Required directories and files
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt; In the root directory, create the &lt;code&gt;roles&lt;/code&gt; directory with a &lt;code&gt;common&lt;/code&gt; role folder inside it
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p roles/common
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Create the defaults, handlers &amp;amp; tasks folders
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir roles/common/defaults roles/common/handlers roles/common/tasks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Create the main files in defaults, handlers &amp;amp; tasks
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch roles/common/defaults/main.yaml roles/common/handlers/main.yaml roles/common/tasks/main.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Common Playbook
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;variable definition&lt;/strong&gt;: Add the below code to &lt;code&gt;roles/common/defaults/main.yaml&lt;/code&gt;.
The content of the file defines settings for your Ansible tasks. In simple terms, it’s configuring paths and settings for Nginx and Fluentd, as well as setting a rule for how long logs are retained.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
nginx_html_root: /var/www/html
fluentd_config_dir: /etc/td-agent
log_retention_days: 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Handler Task List:&lt;/strong&gt; Add the below code to &lt;code&gt;roles/common/handlers/main.yaml&lt;/code&gt;.
Restart Nginx: Restarts the nginx service to apply any changes or ensure it is running.
Restart Fluentd: Restarts the fluentd service to reload its configuration or ensure it is operational.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: restart nginx
  service:
    name: nginx
    state: restarted

- name: restart fluentd
  service:
    name: fluentd
    state: restarted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Main Common Task&lt;/strong&gt;: Add the below code to &lt;code&gt;roles/common/tasks/main.yaml&lt;/code&gt;.
The content of the file updates the package cache on systems using apt (like Debian or Ubuntu).
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Update apt cache
  apt:
    update_cache: yes
    cache_valid_time: 3600
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;update_cache: yes&lt;/code&gt;: refreshes the list of available packages from the repositories.&lt;br&gt;
&lt;code&gt;cache_valid_time: 3600&lt;/code&gt;: treats the cache as valid for 3600 seconds (1 hour), skipping the update if the cache is still recent.&lt;/p&gt;
&lt;h3&gt;
  
  
  Nginx Playbook
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt; In the &lt;code&gt;roles&lt;/code&gt; directory, create a folder called &lt;code&gt;nginx&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir roles/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Create the nginx tasks &amp;amp; templates folders
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir roles/nginx/tasks roles/nginx/templates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Create the nginx task &amp;amp; template files
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch roles/nginx/tasks/main.yaml roles/nginx/templates/nginx-logrotate.j2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Nginx Task playbook&lt;/strong&gt;: Add the below code to &lt;code&gt;roles/nginx/tasks/main.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Install Nginx
  apt:
    name: nginx
    state: present

- name: Create simple HTML page
  copy:
    content: |
      &amp;lt;!DOCTYPE html&amp;gt;
      &amp;lt;html&amp;gt;
      &amp;lt;head&amp;gt;&amp;lt;title&amp;gt;Hello, World!&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;
      &amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;Hello, World!&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;
      &amp;lt;/html&amp;gt;
    dest: "{{ nginx_html_root }}/index.html"
    mode: '0644'
  notify: restart nginx

- name: Enable and start Nginx
  service:
    name: nginx
    state: started
    enabled: yes

- name: Configure logrotate for Nginx
  template:
    src: nginx-logrotate.j2
    dest: /etc/logrotate.d/nginx
    mode: '0644'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is an Ansible Playbook Task List that sets up and configures Nginx with the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Install Nginx:&lt;/strong&gt; Ensures the Nginx package is installed on the target machine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create a Simple HTML Page:&lt;/strong&gt; Copies a basic “Hello, World!” HTML file to the Nginx web root (nginx_html_root), with proper permissions (0644). It also triggers a restart nginx handler when changes occur.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable and Start Nginx:&lt;/strong&gt; Ensures the Nginx service is running (started) and configured to start automatically on system boot (enabled).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure Logrotate for Nginx:&lt;/strong&gt; Deploys a log rotation configuration file for Nginx using a Jinja2 template (nginx-logrotate.j2), setting appropriate permissions (0644).&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Nginx Jinja Template&lt;/strong&gt;: Add the below code to &lt;code&gt;roles/nginx/templates/nginx-logrotate.j2&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/var/log/nginx/*.log {
    daily
    rotate {{ log_retention_days }}
    missingok
    notifempty
    compress
    delaycompress
    postrotate
        invoke-rc.d nginx rotate &amp;gt;/dev/null 2&amp;gt;&amp;amp;1
    endscript
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This template configures log rotation for Nginx logs, ensuring logs don’t grow indefinitely. &lt;/p&gt;

&lt;p&gt;It rotates logs daily, keeps them for a specified number of days, compresses old logs, and notifies Nginx after rotation to ensure smooth operation. &lt;br&gt;
The log_retention_days variable makes the retention period dynamic.&lt;/p&gt;
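Once the role has deployed the template, you can verify the rendered file on the instance with logrotate's debug mode, which prints what would happen without actually rotating anything:

```shell
# Dry-run the Nginx logrotate config (no files are touched)
sudo logrotate -d /etc/logrotate.d/nginx
```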
&lt;h3&gt;
  
  
  Security Playbook
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt; In the &lt;code&gt;roles&lt;/code&gt; directory, create a folder called &lt;code&gt;security&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir roles/security
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Create the security defaults &amp;amp; tasks folders
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir roles/security/defaults roles/security/tasks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Create the security defaults &amp;amp; tasks files
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch roles/security/defaults/main.yaml roles/security/tasks/main.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security variable definition&lt;/strong&gt;: Add the below code to &lt;code&gt;roles/security/defaults/main.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This file sets firewall rules to allow essential traffic and disables unnecessary services to enhance security.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
ufw_rules:
  - { rule: 'allow', port: '80', proto: 'tcp' }
  - { rule: 'allow', port: '443', proto: 'tcp' }
  - { rule: 'allow', port: '22', proto: 'tcp' }

disabled_services:
  - rpcbind
  - cups
  - avahi-daemon
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security Task playbook&lt;/strong&gt;: Add the below code to &lt;code&gt;roles/security/tasks/main.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Install UFW
  apt:
    name: ufw
    state: present

- name: Configure UFW rules
  ufw:
    rule: "{{ item.rule }}"
    port: "{{ item.port }}"
    proto: "{{ item.proto }}"
    state: enabled
  with_items: "{{ ufw_rules }}"

- name: Disable unnecessary services
  service:
    name: "{{ item }}"
    state: stopped
    enabled: no
  with_items: "{{ disabled_services }}"
  ignore_errors: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These tasks install and configure a firewall for secure access while stopping and disabling unneeded services to improve security and performance.&lt;/p&gt;
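After the role runs, you can confirm the firewall state with an ad-hoc command from your workstation (the `-b` flag escalates to root, which `ufw status` requires):

```shell
# Should list the allow rules for ports 80, 443 and 22
ansible nginx_server -i hosts.yaml -b -m shell -a "ufw status verbose"
```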

&lt;h3&gt;
  
  
  FluentD Playbook
&lt;/h3&gt;

&lt;p&gt;Fluentd is used to centralize and manage log data efficiently, enabling better monitoring, troubleshooting, and analytics for applications and systems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; In the &lt;code&gt;roles&lt;/code&gt; directory, create a folder called &lt;code&gt;fluentd&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir roles/fluentd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create the fluentd defaults, files, tasks &amp;amp; templates folders
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir roles/fluentd/defaults roles/fluentd/files roles/fluentd/tasks roles/fluentd/templates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create the fluentd defaults, files, tasks &amp;amp; templates files
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch roles/fluentd/defaults/main.yaml roles/fluentd/files/denylist.txt roles/fluentd/tasks/main.yaml roles/fluentd/templates/td-agent.conf.j2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;fluentd variable definition&lt;/strong&gt;: Add the below code to &lt;code&gt;roles/fluentd/defaults/main.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fluentd_config_dir: "/etc/fluentd"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This variable tells Ansible (or any script using it) where to find or manage Fluentd’s configuration files, which typically include settings for log inputs, filters, and outputs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fluentd files&lt;/strong&gt;: To deny a list of IP addresses, add them to the &lt;code&gt;roles/fluentd/files/denylist.txt&lt;/code&gt; file.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;192.168.1.100
10.0.0.50
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fluentd Task Playbook&lt;/strong&gt;: Add the below code to &lt;code&gt;roles/fluentd/tasks/main.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Download and run Fluentd installation script
  shell: |
    curl -fsSL https://toolbelt.treasuredata.com/sh/install-ubuntu-jammy-fluent-package5-lts.sh | sh
  args:
    executable: /bin/bash

- name: Verify Fluentd installation
  command: fluentd --version
  register: fluentd_version

- debug:
    var: fluentd_version.stdout

- name: Create Fluentd config directory
  file:
    path: "{{ fluentd_config_dir }}"
    state: directory
    mode: '0755'
  become: yes

- name: Create Fluentd config
  template:
    src: td-agent.conf.j2
    dest: "{{ fluentd_config_dir }}/td-agent.conf"
    mode: '0644'
  notify: restart fluentd

- name: Enable and start Fluentd
  service:
    name: fluentd
    state: started
    enabled: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This playbook automates installing Fluentd, setting up its configuration, and ensuring the service is up and running.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fluentd Configuration Jinja Templates&lt;/strong&gt;: Add the below code to &lt;code&gt;roles/fluentd/templates/td-agent.conf.j2&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Input for Nginx access logs
&amp;lt;source&amp;gt;
  @type tail
  path /var/log/nginx/access.log
  pos_file /var/log/td-agent/nginx.access.pos
  tag nginx.access
  &amp;lt;parse&amp;gt;
    @type nginx
  &amp;lt;/parse&amp;gt;
&amp;lt;/source&amp;gt;

# Input for Nginx error logs
&amp;lt;source&amp;gt;
  @type tail
  path /var/log/nginx/error.log
  pos_file /var/log/td-agent/nginx.error.pos
  tag nginx.error
  &amp;lt;parse&amp;gt;
    @type regexp
    expression /^(?&amp;lt;time&amp;gt;\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) \[(?&amp;lt;log_level&amp;gt;\w+)\] (?&amp;lt;pid&amp;gt;\d+).*?: (?&amp;lt;message&amp;gt;.*)$/
    time_format %Y/%m/%d %H:%M:%S
  &amp;lt;/parse&amp;gt;
&amp;lt;/source&amp;gt;

# Filter to check IPs against denylist
&amp;lt;filter nginx.access&amp;gt;
  @type grep
  &amp;lt;regexp&amp;gt;
    key remote_addr
    pattern /^(?!#{File.readlines("#{ENV['FLUENT_CONFIG_DIR'] || '/etc/td-agent'}/denied_ips/denylist.txt").map(&amp;amp;:strip).join('|')}).*$/
  &amp;lt;/regexp&amp;gt;
&amp;lt;/filter&amp;gt;

# Route normal logs (non-denied IPs)
&amp;lt;match nginx.access&amp;gt;
  @type file
  path /var/log/td-agent/nginx_access
  append true
  &amp;lt;buffer&amp;gt;
    timekey 1d
    timekey_use_utc true
    timekey_wait 10m
  &amp;lt;/buffer&amp;gt;
  &amp;lt;format&amp;gt;
    @type json
  &amp;lt;/format&amp;gt;
&amp;lt;/match&amp;gt;

# Route denied IP logs to audit file
&amp;lt;match nginx.access&amp;gt;
  @type copy
  &amp;lt;store&amp;gt;
    @type file
    path /var/log/td-agent/audit/denylist_audit
    append true
    &amp;lt;buffer&amp;gt;
      timekey 1d
      timekey_use_utc true
      timekey_wait 10m
    &amp;lt;/buffer&amp;gt;
    &amp;lt;format&amp;gt;
      @type json
      include_time_key true
      time_key timestamp
    &amp;lt;/format&amp;gt;
  &amp;lt;/store&amp;gt;
&amp;lt;/match&amp;gt;

# Handle error logs
&amp;lt;match nginx.error&amp;gt;
  @type file
  path /var/log/td-agent/nginx_error
  append true
  &amp;lt;buffer&amp;gt;
    timekey 1d
    timekey_use_utc true
    timekey_wait 10m
  &amp;lt;/buffer&amp;gt;
  &amp;lt;format&amp;gt;
    @type json
  &amp;lt;/format&amp;gt;
&amp;lt;/match&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration collects and processes Nginx access and error logs. It checks access logs against a denylist, routes normal logs and denied IP logs to separate files, and stores error logs in a structured JSON format for easy analysis. &lt;/p&gt;

&lt;p&gt;This setup is ideal for monitoring, auditing, and managing log data efficiently.&lt;/p&gt;
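Before relying on a service restart, you can ask Fluentd to parse the configuration without starting the daemon. The path below assumes the `fluentd_config_dir` default from earlier:

```shell
# Validate the rendered config; exits non-zero on syntax errors
sudo fluentd --dry-run -c /etc/fluentd/td-agent.conf
```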

&lt;h2&gt;
  
  
  If you’ve made it this far, well done! Congratulations!
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftiye1u1gnjhlyrscqe2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftiye1u1gnjhlyrscqe2c.png" alt="Congratulations" width="800" height="798"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment and Testing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Running the Playbook
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Check syntax&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i hosts.yaml site.yaml --syntax-check
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
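Beyond the syntax check, you can also do a dry run in check mode before applying anything. Be aware that tasks that don't support check mode (such as the shell-based Fluentd install) may be skipped or report errors during the dry run:

```shell
# --check makes no changes; --diff shows the file edits that would be made
ansible-playbook -i hosts.yaml site.yaml --check --diff
```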



&lt;p&gt;&lt;strong&gt;Run playbook&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i hosts.yaml site.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ansible will work through all the roles listed in the &lt;code&gt;site.yaml&lt;/code&gt; file, running each playbook's tasks sequentially.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1hqf29vwippx7zi9g24.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1hqf29vwippx7zi9g24.png" alt="Ansible playbook result 1" width="800" height="397"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3m4eyqcgbov6wgdg755.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3m4eyqcgbov6wgdg755.png" alt="Ansible playbook result 2" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you see an error on &lt;code&gt;TASK [security : Disable unnecessary services]&lt;/code&gt;, there's no need to panic; it just means the instance doesn't have those services to disable, and the playbook will continue to the next task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bnba6z0ch2u6o0jijlo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bnba6z0ch2u6o0jijlo.png" alt="Disable unnecessary services error" width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Verification Steps
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Check Nginx status&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible webservers -i hosts.yaml -m shell -a "systemctl status nginx"  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd683ech2p33i3um7nfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd683ech2p33i3um7nfj.png" alt="check nginx service" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;or enter the instance's public IP in a browser to see the &lt;code&gt;Hello, World!&lt;/code&gt; page&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnfil832teo0pl5l7f6r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnfil832teo0pl5l7f6r.png" alt="Nginx Hello world" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check Fluentd status&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible webservers -i hosts.yaml -m shell -a "systemctl status td-agent"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfkc6xah4gc8hlwuq6ao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfkc6xah4gc8hlwuq6ao.png" alt="Fluentd service status" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test log processing&lt;/strong&gt;&lt;br&gt;
First make a curl request to the server&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible nginx_server -i hosts.yaml -m shell -a "curl http://localhost"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8xh4k92p07hngl0g9f6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8xh4k92p07hngl0g9f6.png" alt="nginx curl request" width="800" height="142"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then check the tail of the fluentd logs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible nginx_server -i hosts.yaml -m shell -a "tail  /var/log/fluent/fluentd.log"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpi772wlyavjh2p8iwr9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpi772wlyavjh2p8iwr9.png" alt="fluentd logs" width="800" height="126"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This solution provides a robust, automated approach to deploying and managing web servers on AWS. &lt;br&gt;
By combining Ansible's automation capabilities with AWS services, we create a scalable and maintainable infrastructure that follows DevOps best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  🎉 Congratulations! If you've made it this far, you've just learned how to:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Launch and configure an EC2 instance like a pro 🚀&lt;/li&gt;
&lt;li&gt;Set up a secure web server that would make security experts nod in approval 🔒&lt;/li&gt;
&lt;li&gt;Create a logging system that catches every digital whisper 🔍&lt;/li&gt;
&lt;li&gt;Automate everything so smoothly that your future self will thank you ⚡&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Fun fact: The automation you just built will save you approximately 127 cups of coffee worth of manual configuration time per year! ☕&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
      <category>nginx</category>
    </item>
    <item>
      <title>🚀 Unleashing the Power of Cloud Magic: Transforming a Lone AWS EC2 Instance into a K8s Powerhouse! 🌐🔥</title>
      <dc:creator>Ukeme David Eseme</dc:creator>
      <pubDate>Wed, 07 Feb 2024 09:27:55 +0000</pubDate>
      <link>https://forem.com/ukemzyskywalker/unleashing-the-power-of-cloud-magic-transforming-a-lone-aws-ec2-instance-into-a-k8s-powerhouse-75o</link>
      <guid>https://forem.com/ukemzyskywalker/unleashing-the-power-of-cloud-magic-transforming-a-lone-aws-ec2-instance-into-a-k8s-powerhouse-75o</guid>
      <description>&lt;h3&gt;
  
  
  Table of Contents:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;SSH into EC2&lt;/li&gt;
&lt;li&gt;Install Docker&lt;/li&gt;
&lt;li&gt;Install Kubectl&lt;/li&gt;
&lt;li&gt;Install KIND&lt;/li&gt;
&lt;li&gt;Setup Kubernetes Cluster&lt;/li&gt;
&lt;li&gt;Setup Visualizer (KubeOps View)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;EC2 instance running Amazon Linux 2023 AMI - How to &lt;a href="https://www.youtube.com/watch?v=V1lyXkDSakk" rel="noopener noreferrer"&gt;Video&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance" rel="noopener noreferrer"&gt;Doc&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Available private key pair for the instance&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Welcome to the realm of cloud enchantment! In this captivating journey, we will delve into the art of transforming a solitary AWS EC2 instance into a formidable Kubernetes (K8s) powerhouse. Brace yourself as we unravel the secrets of cloud magic, unlocking the potential of your EC2 instance to orchestrate a dynamic Kubernetes cluster. With a touch of innovation and a dash of determination, you'll soon wield the power of the cloud like never before.&lt;/p&gt;

&lt;h2&gt;
  
  
  SSH into EC2
&lt;/h2&gt;

&lt;p&gt;SSH stands for "Secure Shell." It is a cryptographic network protocol used for securely connecting to a remote server or device over an unsecured network. &lt;/p&gt;

&lt;p&gt;To connect to the virtual machine (EC2) via SSH, you need a secure shell client such as &lt;a href="https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html" rel="noopener noreferrer"&gt;PuTTY&lt;/a&gt; or &lt;a href="https://mobaxterm.mobatek.net/download.html" rel="noopener noreferrer"&gt;MobaXterm&lt;/a&gt;, or just your plain terminal.&lt;/p&gt;

&lt;p&gt;I would be using &lt;a href="https://tabby.sh/" rel="noopener noreferrer"&gt;Tabby Terminal&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the terminal and navigate to the directory where you downloaded the &lt;em&gt;EC2 key pair&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In my case it's:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/Downloads/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run this command, if necessary, to ensure your key is not publicly viewable.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod 400 "your-Key-pair-file.pem"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
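To confirm the mode change worked, you can read the permission bits back with `stat`. A minimal sketch using a throwaway file (the real target would be your `.pem` key file):

```shell
# Create a scratch file, restrict it to owner-read-only, and read the mode back.
keyfile=$(mktemp)
chmod 400 "$keyfile"          # owner: read; group/other: none
stat -c '%a' "$keyfile"       # prints: 400 (GNU stat; on macOS use: stat -f '%Lp')
rm -f "$keyfile"
```

SSH refuses private keys that are readable by other users, so seeing `400` (or `600`) here means the key will be accepted.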



&lt;ul&gt;
&lt;li&gt;Connect to your instance using its public DNS. For example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i "your-Key-pair-file.pem" ec2-user@ec2-your-ip.compute-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you are connected to the instance, you should see a welcome screen like the one below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fieydb1arp1ylhrbuats9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fieydb1arp1ylhrbuats9.jpg" alt="Amazon Linux Welcome Screen" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install Docker
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9dwascu5q270akb8qi3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9dwascu5q270akb8qi3.png" alt="docker logo" width="359" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Linux 2023 uses &lt;code&gt;dnf&lt;/code&gt; as its package manager.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Update AL2023 Packages
&lt;/h3&gt;

&lt;p&gt;Since it's a new Linux VM, run the command below to perform an update.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dnf update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command updates the installed packages and the package cache. Amazon Linux 2023 is Fedora-based, so the &lt;code&gt;dnf&lt;/code&gt; workflow is the same as on Fedora.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Installing Docker on Amazon Linux 2023
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dnf install docker

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above installs the Docker Engine, the Docker command-line interface, and the containerd runtime.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4f9bmi9ljl3203ctgxza.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4f9bmi9ljl3203ctgxza.JPG" alt="Install docker" width="800" height="348"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  3. Start and Enable Docker Service
&lt;/h3&gt;

&lt;p&gt;After installation, the Docker service doesn't start by default, so we have to start it manually.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, set Docker to start automatically at system boot.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To confirm Docker is currently running as expected, check its status.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should have a similar result, like the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlv27cr095omonwfccsl.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdlv27cr095omonwfccsl.JPG" alt="Docker status" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Enable Docker to run without requiring sudo
&lt;/h3&gt;

&lt;p&gt;Once the installation is finished, typing &lt;code&gt;sudo&lt;/code&gt; before every Docker command quickly becomes cumbersome. To avoid this, add your current user to the &lt;code&gt;docker&lt;/code&gt; group with the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -aG docker $USER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the group change to your current shell session&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;newgrp docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify the installation, check the Docker version&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output similar to the image below&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvbpnz7aropgcfndrl61.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvbpnz7aropgcfndrl61.JPG" alt="Docker version" width="667" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install Kubectl
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0l593bz33rl9qhrx1z7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0l593bz33rl9qhrx1z7.png" alt="kubectl" width="800" height="407"&gt;&lt;/a&gt;&lt;br&gt;
&lt;code&gt;kubectl&lt;/code&gt; is a command-line interface (CLI) tool used to interact with Kubernetes clusters. It allows users to perform various operations on Kubernetes resources, such as deploying applications, managing pods, services, and deployments, inspecting cluster resources, and debugging cluster issues.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: You should ensure that the version of kubectl you use is within one minor version of your Kubernetes cluster. For instance, a client with version v1.29 can communicate effectively with control planes of versions v1.28, v1.29, and v1.30. Utilizing the most recent compatible kubectl version is essential to prevent unexpected complications.&lt;/p&gt;
&lt;/blockquote&gt;
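The skew rule above can be checked with a little shell arithmetic. A minimal sketch, assuming version strings in the usual vMAJOR.MINOR.PATCH form (the sample values here are illustrative, not real cluster output):

```shell
# Extract the minor version from kubectl-style version strings and compare them.
client="v1.29.2"                              # e.g. reported by: kubectl version --client
server="v1.28.7"                              # e.g. the control plane version
client_minor=$(echo "$client" | cut -d. -f2)  # -> 29
server_minor=$(echo "$server" | cut -d. -f2)  # -> 28
skew=$(( client_minor - server_minor ))
echo "$skew"                                  # prints: 1 — within the supported ±1 window
```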
&lt;h4&gt;
  
  
  1. Install the kubectl binary on Linux using curl:
&lt;/h4&gt;

&lt;p&gt;Download the latest release with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Validate the binary (optional)
&lt;/h4&gt;

&lt;p&gt;Download the kubectl checksum file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Validate the kubectl binary against the checksum file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If valid, the output is:&lt;br&gt;
&lt;code&gt;kubectl: OK&lt;/code&gt;&lt;/p&gt;
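If you want to see what the check is actually doing, the two-column "hash, two spaces, filename" format that sha256sum expects can be exercised on a throwaway file (the file named kubectl here is just a stand-in for the real binary):

```shell
# Reproduce the "<hash>  <filename>" check format on a scratch file.
tmp=$(mktemp -d)
printf 'not the real binary' > "$tmp/kubectl"
sha256sum "$tmp/kubectl" | awk '{print $1}' > "$tmp/kubectl.sha256"
cd "$tmp"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check   # prints: kubectl: OK
```

Tamper with the file after writing the checksum and the same command reports FAILED instead, which is exactly what protects you from a corrupted download.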
&lt;h4&gt;
  
  
  3. Install kubectl
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Test to ensure the version you installed is up-to-date:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl version --client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or use this for a detailed view of the version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl version --client --output=yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install KIND
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6ndrukonvaiwxbfcwov.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6ndrukonvaiwxbfcwov.png" alt="Kind Logo" width="800" height="482"&gt;&lt;/a&gt;&lt;br&gt;
We will be using KIND to create our Kubernetes cluster.&lt;br&gt;
&lt;strong&gt;What is KIND?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;KIND stands for "Kubernetes IN Docker". It is a tool for running local Kubernetes clusters using Docker containers as "nodes".&lt;br&gt;
It is a lightweight and easy-to-use Kubernetes environment for testing and development purposes.&lt;/p&gt;
&lt;h4&gt;
  
  
  For AMD64 / x86_64
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ $(uname -m) = x86_64 ] &amp;amp;&amp;amp; curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.21.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Confirm KIND is installed.
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; kind --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You should see the current version of KIND installed.&lt;/p&gt;
&lt;h2&gt;
  
  
  Setup Kubernetes Cluster
&lt;/h2&gt;

&lt;p&gt;In your terminal, create a new file with the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nano three-node-cluster.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste this configuration into the editor&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 32000
    hostPort: 32000
    listenAddress: "0.0.0.0"
    protocol: tcp
  - containerPort: 32100
    hostPort: 32100
    listenAddress: "0.0.0.0"
    protocol: tcp
  - containerPort: 30000
    hostPort: 30000
    listenAddress: "0.0.0.0"
    protocol: tcp
  - containerPort: 30100
    hostPort: 30100
    listenAddress: "0.0.0.0"
    protocol: tcp
  - containerPort: 30200
    hostPort: 30200
    listenAddress: "0.0.0.0"
    protocol: tcp
  - containerPort: 30300
    hostPort: 30300
    listenAddress: "0.0.0.0"
    protocol: tcp
  - containerPort: 30400
    hostPort: 30400
    listenAddress: "0.0.0.0"
    protocol: tcp
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0"
    protocol: tcp
  - containerPort: 8000
    hostPort: 8000
    listenAddress: "0.0.0.0"
    protocol: tcp
  - containerPort: 8080
    hostPort: 8001
    listenAddress: "0.0.0.0"
    protocol: tcp

- role: worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Press &lt;code&gt;Ctrl+X&lt;/code&gt;, then &lt;code&gt;Y&lt;/code&gt; and &lt;code&gt;Enter&lt;/code&gt;, to save the changes and exit the editor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now let's break down the configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;kind:&lt;/strong&gt; Specifies the kind of resource being defined, which is a Cluster in this case.&lt;br&gt;
&lt;strong&gt;apiVersion:&lt;/strong&gt; Specifies the version of the KIND configuration API being used.&lt;br&gt;
&lt;strong&gt;nodes:&lt;/strong&gt; Specifies the configuration for the nodes in the cluster.&lt;/p&gt;
&lt;h4&gt;
  
  
  The first node:
&lt;/h4&gt;

&lt;p&gt;Defined is a control-plane node (role: control-plane). This node has extraPortMappings configured, which maps container ports to host ports. This is useful for accessing services running inside Kubernetes from outside the cluster. The listed container ports are mapped to the same host ports (32000, 32100, 30000, 30100, 30200, 30300, 30400) and listen on all available network interfaces (0.0.0.0) using the TCP protocol.&lt;/p&gt;
&lt;h4&gt;
  
  
  The second node (role: worker)
&lt;/h4&gt;

&lt;p&gt;Also has extraPortMappings configured. It maps container ports 80, 8000, and 8080 to host ports 80, 8000, and 8001 respectively. &lt;/p&gt;
&lt;h4&gt;
  
  
  The last node
&lt;/h4&gt;

&lt;p&gt;is simply specified with role: worker, but it doesn't have any extraPortMappings configured.&lt;/p&gt;
&lt;h4&gt;
  
  
  Create the KIND Cluster
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind create cluster --config three-node-cluster.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once it's done, get the cluster info&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl cluster-info --context kind-kind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flarbnb17g0ryrojt3y7j.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flarbnb17g0ryrojt3y7j.JPG" alt="create KIND cluster" width="776" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Get a list of the running nodes&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12ngw0va1kta89lzlaht.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12ngw0va1kta89lzlaht.JPG" alt="kubectl get nodes" width="528" height="95"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;View all running pods across all namespaces&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; kubectl get pods -A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjwovoy6b43x6kbiwo94.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjwovoy6b43x6kbiwo94.JPG" alt="All pods running across all namespaces" width="742" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Visualizer
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrutvffwnlok9h8apbq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrutvffwnlok9h8apbq6.png" alt="kubeops view logo" width="800" height="431"&gt;&lt;/a&gt;&lt;br&gt;
KubeOps View is a read-only system dashboard for multiple Kubernetes clusters, providing a common operational picture for understanding cluster setups in a visual way. It allows users to render nodes, indicate their overall status, show node capacity, and more.&lt;/p&gt;
&lt;h4&gt;
  
  
  1. Install Git
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dnf install git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  2. Clone Git Repo
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/UkemeSkywalker/kube-ops-view
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  3. Apply kubeOps deployment
&lt;/h4&gt;

&lt;p&gt;Navigate to the cloned repository&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd kube-ops-view/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the deployment&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f deploy/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  4. Check Deployment
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmqmx8q7x55ih6wah7ks.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmqmx8q7x55ih6wah7ks.JPG" alt="kubectl get pods" width="498" height="54"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  5. Update EC2 security group inbound rules
&lt;/h4&gt;

&lt;p&gt;On your EC2 instance details page, scroll down and navigate to the Security section.&lt;br&gt;
Click on the default security group. It should take you to the security group dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fznumsix1oxp0eek5yni3.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fznumsix1oxp0eek5yni3.JPG" alt="Ec2 security section" width="800" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on Edit inbound rules, and add a new rule&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type:&lt;/th&gt;
&lt;th&gt;Custom TCP&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Port:&lt;/td&gt;
&lt;td&gt;32000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Source:&lt;/td&gt;
&lt;td&gt;select &lt;em&gt;My IP&lt;/em&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Click on Save rules.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnjnp22ytxeps261a7pn.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnjnp22ytxeps261a7pn.JPG" alt="Inbound Rule" width="800" height="87"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  6. Finally, Access the visualizer on your browser
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://your-ec2-public-ip:32000/#scale=2.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpvrmdymnblktthkoqf3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpvrmdymnblktthkoqf3.png" alt="kube-ops view on Ec2" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As our adventure draws to a close, you now possess the knowledge and prowess to harness the full potential of your AWS EC2 instance. From the exhilarating setup of Docker and KIND to the orchestration of your very own Kubernetes cluster, you've embarked on a journey filled with discovery and empowerment. &lt;/p&gt;

&lt;p&gt;With KubeOps View offering a visual glimpse into your cloud domain, the possibilities are endless. Embrace the magic of the cloud, and may your Kubernetes endeavors continue to flourish in the ever-expanding landscape of technology. &lt;/p&gt;

&lt;p&gt;Until next time, may your clouds be clear and your clusters be mighty! ✨🌐🚀&lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>aws</category>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Building a Node.js Express Application with AWS CodeBuild and Amazon ECR🚀</title>
      <dc:creator>Ukeme David Eseme</dc:creator>
      <pubDate>Fri, 19 Jan 2024 07:08:09 +0000</pubDate>
      <link>https://forem.com/ukemzyskywalker/building-and-deploying-a-nodejs-express-application-with-aws-codebuild-and-amazon-ecr-4459</link>
      <guid>https://forem.com/ukemzyskywalker/building-and-deploying-a-nodejs-express-application-with-aws-codebuild-and-amazon-ecr-4459</guid>
      <description>&lt;h2&gt;
  
  
  Table of Contents:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;What is AWS CodeBuild&lt;/li&gt;
&lt;li&gt;What is Amazon ECR&lt;/li&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Step 1: Buildspec.yml&lt;/li&gt;
&lt;li&gt;Step 2: IAM Roles and Permissions&lt;/li&gt;
&lt;li&gt;Step 3: Create a CodeBuild Project&lt;/li&gt;
&lt;li&gt;Step 4: Run CodeBuild project&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;In this article, we'll explore how to use AWS CodeBuild to build a Node.js Express application, create a Docker image, and push it to Amazon ECR.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is AWS CodeBuild:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhmwbemdtlj5e9tydmpk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhmwbemdtlj5e9tydmpk.png" alt="AWS CodeBuild" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS CodeBuild is a fully managed continuous integration service by Amazon Web Services that automates the build and testing phases of software development. &lt;/p&gt;

&lt;p&gt;Developers define build projects using buildspec.yml files, specifying the steps for building Docker images and running tests. &lt;/p&gt;

&lt;p&gt;With support for various programming languages, build tools, and seamless integrations with AWS services and version control systems, CodeBuild facilitates the efficient creation, testing, and deployment of applications within a scalable and customizable environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Amazon ECR:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6og8qgh39m3c77vukzn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6og8qgh39m3c77vukzn.png" alt="Amazon ECR Logo" width="369" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Elastic Container Registry (Amazon ECR) is a fully managed Docker container registry service provided by Amazon Web Services (AWS). It allows developers to store, manage, and deploy Docker container images. &lt;/p&gt;

&lt;p&gt;ECR integrates seamlessly with other AWS services, making it easy to build, store, and deploy containerized applications using tools like Amazon ECS (Elastic Container Service) and AWS Fargate. With features such as image scanning, encryption, and fine-grained access control, Amazon ECR provides a secure and scalable solution for container image management within the AWS ecosystem.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Without delay, let's delve into the practical aspects of the topic at hand.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cylrf9engh7r690m4yc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cylrf9engh7r690m4yc.jpg" alt="Practice Meme" width="600" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before you begin, make sure you have the following prerequisites:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An AWS account&lt;/li&gt;
&lt;li&gt;A Node.js Express application hosted on a version control system like CodeCommit or GitHub&lt;/li&gt;
&lt;li&gt;An ECR repository that is already set up&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 1: Buildspec.yml
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0rdtj07no6hjbw7sl1j.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0rdtj07no6hjbw7sl1j.JPG" alt="buildspec.yml file" width="800" height="422"&gt;&lt;/a&gt;&lt;br&gt;
Create a &lt;code&gt;buildspec.yml&lt;/code&gt; file in the root of your Node.js Express project. This file defines the build steps for CodeBuild.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2

phases:
  install: 
    runtime-versions:
      nodejs: latest

  pre_build:
    commands:
      - echo Logging in to Amazon ECR...    
      - aws --version
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)


  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:latest

  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
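&lt;p&gt;One thing to note in the buildspec above: the &lt;code&gt;pre_build&lt;/code&gt; phase derives &lt;code&gt;COMMIT_HASH&lt;/code&gt; from the commit SHA, but the image is built and pushed with &lt;code&gt;$IMAGE_TAG&lt;/code&gt;. A common pattern is to tag each image with that short hash so every build is traceable to a commit. Here is a runnable sketch of the derivation; the account ID, region, repo name, and SHA are placeholder values (in a real build, CodeBuild injects &lt;code&gt;CODEBUILD_RESOLVED_SOURCE_VERSION&lt;/code&gt; itself):&lt;/p&gt;

```shell
# Placeholder values; CodeBuild injects CODEBUILD_RESOLVED_SOURCE_VERSION itself.
CODEBUILD_RESOLVED_SOURCE_VERSION="9f2c1d4ab56e7890f1234567890abcdef1234567"
AWS_ACCOUNT_ID=123456789012
AWS_REGION=us-east-1
IMAGE_REPO_NAME=my-express-app

# Same derivation as the pre_build phase: first 7 characters of the commit SHA.
COMMIT_HASH=$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | cut -c 1-7)

# Fully-qualified image URI, tagged with the short hash.
IMAGE_URI="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO_NAME:$COMMIT_HASH"
echo "$IMAGE_URI"
```

&lt;p&gt;Swapping &lt;code&gt;$IMAGE_TAG&lt;/code&gt; for &lt;code&gt;$COMMIT_HASH&lt;/code&gt; in the build and push commands gives you immutable, per-commit tags alongside &lt;code&gt;latest&lt;/code&gt;.&lt;/p&gt;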



&lt;h2&gt;
  
  
  Step 2: IAM Roles and Permissions
&lt;/h2&gt;

&lt;p&gt;To enable CodeBuild to push Docker images to Amazon ECR, you must first set up IAM roles and permissions.&lt;/p&gt;

&lt;p&gt;Create an IAM role for CodeBuild with the following permissions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;ECR permissions to push Docker images to your repository. For a streamlined approach, attach the AWS-managed &lt;code&gt;AmazonEC2ContainerRegistryPowerUser&lt;/code&gt; policy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CloudWatch Logs permissions to enable the writing of build logs to CloudWatch Logs. Activating CloudWatch during the setup will automatically add the requisite policies.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
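&lt;p&gt;If you prefer least privilege over the broad &lt;code&gt;AmazonEC2ContainerRegistryPowerUser&lt;/code&gt; policy, a custom policy scoped to a single repository is enough for this pipeline. The sketch below lists the ECR actions that the &lt;code&gt;docker login&lt;/code&gt; and &lt;code&gt;docker push&lt;/code&gt; steps in the buildspec need; the account ID, region, and repository name in the ARN are placeholders:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/my-express-app"
    }
  ]
}
```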

&lt;h2&gt;
  
  
  Step 3: Create a CodeBuild Project
&lt;/h2&gt;

&lt;p&gt;a) To create a CodeBuild project, follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the AWS Management Console.&lt;/li&gt;
&lt;li&gt;Navigate to the CodeBuild service.&lt;/li&gt;
&lt;li&gt;Click on "Create build project".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsexlobwg8d7f4mji8vj.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsexlobwg8d7f4mji8vj.JPG" alt="CodeBuild AWS Console" width="800" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu4daedatp70aesdn3rp.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdu4daedatp70aesdn3rp.JPG" alt="create Build Project" width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;b) Give your project a name and description, then select your source provider (the CodeCommit or GitHub repository from the prerequisites) and the branch to build from.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38jaeujog8xs5dhjtdie.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38jaeujog8xs5dhjtdie.JPG" alt="Source" width="728" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;c) For the Environment section, use the following settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Provisioning mode&lt;/strong&gt;: On-demand&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment image&lt;/strong&gt;: Managed image&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compute&lt;/strong&gt;: EC2&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operating system&lt;/strong&gt;: Amazon Linux&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runtime(s)&lt;/strong&gt;: Standard&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image&lt;/strong&gt;: use the latest standard image&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image version&lt;/strong&gt;: Always use the latest image for this runtime version&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe31a0f13kktec3ppea7s.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe31a0f13kktec3ppea7s.JPG" alt="Environment Settings" width="735" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;d) For the Service role, select &lt;strong&gt;Existing service role&lt;/strong&gt; and choose the IAM role you created earlier with &lt;code&gt;AmazonEC2ContainerRegistryPowerUser&lt;/code&gt; attached.&lt;/p&gt;

&lt;p&gt;e) Additional configuration:&lt;br&gt;
Scroll down to &lt;strong&gt;Environment variables&lt;/strong&gt; and add the variables the buildspec references: &lt;code&gt;AWS_ACCOUNT_ID&lt;/code&gt;, &lt;code&gt;AWS_REGION&lt;/code&gt;, &lt;code&gt;IMAGE_REPO_NAME&lt;/code&gt;, and &lt;code&gt;IMAGE_TAG&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq0w9s3zf8rrhghz3nka.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq0w9s3zf8rrhghz3nka.JPG" alt="Environmental Variables" width="649" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;f) For Build specifications, select &lt;strong&gt;Use a buildspec file&lt;/strong&gt;; for &lt;strong&gt;Logs&lt;/strong&gt;, add a CloudWatch group name and stream name, then click &lt;strong&gt;Create build project&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Run the CodeBuild Project
&lt;/h2&gt;

&lt;p&gt;Select the CodeBuild project and click &lt;strong&gt;Start build&lt;/strong&gt;.&lt;br&gt;
Wait for the build to succeed, then check ECR for the pushed image.&lt;/p&gt;
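&lt;p&gt;If you'd rather verify from the terminal than the console, the ECR CLI can list the tags in the repository. This sketch only prints the command (the region and repository name are placeholders); run it directly once your AWS credentials are configured:&lt;/p&gt;

```shell
# Placeholders for region and repository name.
AWS_REGION=us-east-1
IMAGE_REPO_NAME=my-express-app

# Build the verification command; run it directly against your account.
CMD="aws ecr describe-images --region $AWS_REGION --repository-name $IMAGE_REPO_NAME --query imageDetails[].imageTags"
echo "$CMD"
```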

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ed1lwyb8f50tstcb1jz.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ed1lwyb8f50tstcb1jz.JPG" alt="build complete status" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9wj1q4d7715uig5igyf.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9wj1q4d7715uig5igyf.JPG" alt="ECR Image" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it! You have successfully used AWS CodeBuild to build a Node.js Express application, create a Docker image, and push it to Amazon ECR.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fub27bbgk5m0l1p35jk9n.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fub27bbgk5m0l1p35jk9n.jpg" alt="Victory Dance" width="480" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>node</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Decoding EC2 Placement Groups: Unveiling the Pros and Cons of Cluster, Spread, Partition Strategies and its implementation</title>
      <dc:creator>Ukeme David Eseme</dc:creator>
      <pubDate>Wed, 15 Nov 2023 16:53:43 +0000</pubDate>
      <link>https://forem.com/ukemzyskywalker/decoding-ec2-placement-groups-unveiling-the-pros-and-cons-of-cluster-spread-partition-strategies-and-its-implementation-22fe</link>
      <guid>https://forem.com/ukemzyskywalker/decoding-ec2-placement-groups-unveiling-the-pros-and-cons-of-cluster-spread-partition-strategies-and-its-implementation-22fe</guid>
      <description>&lt;p&gt;Amazon EC2 Placement Groups are a feature of Amazon Elastic Compute Cloud (EC2) that enable you to influence the placement of instances within the AWS infrastructure.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;on a specific Availability Zone&lt;/li&gt;
&lt;li&gt;A specific rack&lt;/li&gt;
&lt;li&gt;or Zone based placement etc&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The placement group you choose for your instances can significantly impact the performance, availability, and fault tolerance of your applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  There are three (3) types of EC2 Placement Groups:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Cluster Placement Group.&lt;/li&gt;
&lt;li&gt;Spread Placement Group.&lt;/li&gt;
&lt;li&gt;Partition Placement Group&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Cluster Placement Group:
&lt;/h3&gt;

&lt;p&gt;Instances are placed on the same hardware, meaning instances in a cluster placement group are placed in close proximity to each other, providing low-latency communication between them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tjd8mbw7s89r0lfl4az.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tjd8mbw7s89r0lfl4az.jpg" alt="Cluster Placement Group Image" width="779" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Cases:&lt;/strong&gt; Ideal for applications that require low-latency, high-throughput communication between instances. Commonly used for high-performance computing (HPC) workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Low Latency:&lt;/strong&gt; Instances in a cluster placement group share the same hardware, leading to low-latency communication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. High Throughput:&lt;/strong&gt; Ideal for high-performance computing (HPC) workloads that require high throughput and minimal communication overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Performance Optimization:&lt;/strong&gt; Well-suited for applications that benefit from instances being close to each other, minimizing network latency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Limited Fault Tolerance:&lt;/strong&gt; All instances are on the same hardware, making them susceptible to correlated failures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Instance Type Limitations:&lt;/strong&gt; Not all instance types are supported in cluster placement groups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. No Cross-Region Support:&lt;/strong&gt; Cluster placement groups cannot span multiple AWS regions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Spread Placement Group:
&lt;/h3&gt;

&lt;p&gt;Instances in a spread placement group are placed on distinct underlying hardware to reduce the risk of simultaneous failures.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmz00amzqzfkp9ui1af28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmz00amzqzfkp9ui1af28.png" alt="Spread Placement Group" width="605" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Cases:&lt;/strong&gt; Useful for applications that require a small number of critical instances to be kept separate for fault tolerance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Fault Tolerance:&lt;/strong&gt; Offers high fault tolerance, as instances are placed on distinct underlying hardware.&lt;br&gt;
&lt;strong&gt;2. Isolation:&lt;/strong&gt; Provides high isolation between instances, reducing the risk of correlated failures.&lt;br&gt;
&lt;strong&gt;3. Availability Zone Support:&lt;/strong&gt; Can span multiple Availability Zones, enhancing availability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Communication Overhead:&lt;/strong&gt; Instances in a spread placement group may experience higher communication overhead due to being on separate hardware.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Limited Instance Type Support:&lt;/strong&gt; Not all instance types are supported in spread placement groups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Limited Number of Instances:&lt;/strong&gt; There is a limit on the number of instances per Availability Zone in a spread placement group.&lt;br&gt;
(Maximum of 7 instances per group per Availability Zone.)&lt;/p&gt;

&lt;h3&gt;
  
  
  Partition Placement Group:
&lt;/h3&gt;

&lt;p&gt;Instances are spread across logical partitions, each with its own set of racks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2gfm5jrchhakprz8hca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2gfm5jrchhakprz8hca.png" alt="Partition Placement Group Image 1" width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzfvfhd3bng9j6t3dcnmk.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzfvfhd3bng9j6t3dcnmk.JPG" alt="Partition Placement Group Image 2" width="346" height="486"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Cases:&lt;/strong&gt; Suitable for large distributed and replicated workloads, such as big data and database systems.&lt;br&gt;
&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Fault Tolerance:&lt;/strong&gt; Offers better fault tolerance than cluster placement groups, as instances are spread across different logical partitions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Scalability:&lt;/strong&gt; Can handle a larger number of instances compared to cluster placement groups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Isolation:&lt;/strong&gt; Provides moderate isolation between instances due to partitioning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Communication Overhead:&lt;/strong&gt; Instances in different partitions may have higher communication overhead compared to those in a cluster placement group.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Instance Type Limitations:&lt;/strong&gt; Instances in a partition placement group must be of the same type.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Partition Limitations:&lt;/strong&gt; There is a limit on the number of partitions per Availability Zone (a maximum of seven partitions per group per Availability Zone).&lt;/p&gt;

&lt;h3&gt;
  
  
  How to create a placement group
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Sign in to your AWS console and navigate to the EC2 Dashboard. In the left navigation pane, under Network &amp;amp; Security, select Placement Groups.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mg6biahd0bgbtxsv4a7.JPG" alt="select placement group, EC2 dashboard" width="206" height="203"&gt;
&lt;/li&gt;
&lt;li&gt;In the top right-hand corner, click the "Create placement group" button.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiswy14sehmjlu0tfr9t5.JPG" alt="Create Placement Group" width="800" height="134"&gt;
&lt;/li&gt;
&lt;li&gt;Enter a name, select your placement group strategy, then click the "Create group" button in the lower right corner.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80z6okwcorfkdxuh2pc1.png" alt="select placement strategy" width="800" height="532"&gt;&lt;/li&gt;
&lt;li&gt;When launching an instance, expand the Advanced details section, scroll to Placement group, and select the group you created.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fia9jyydjiisvayk83new.png" alt="Implement placement group in EC2" width="779" height="186"&gt;
&lt;/li&gt;
&lt;/ol&gt;
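&lt;p&gt;The same console steps can be scripted with the AWS CLI. The loop below prints one &lt;code&gt;create-placement-group&lt;/code&gt; call per strategy (the group names are hypothetical; remove the &lt;code&gt;echo&lt;/code&gt; to execute them for real, and pass &lt;code&gt;--placement GroupName=...&lt;/code&gt; to &lt;code&gt;run-instances&lt;/code&gt; to launch an instance into a group):&lt;/p&gt;

```shell
# Print a create-placement-group command for each of the three strategies.
# Group names are hypothetical; remove the echo to run them for real.
cmds=""
for strategy in cluster spread partition; do
  cmd="aws ec2 create-placement-group --group-name demo-${strategy}-pg --strategy ${strategy}"
  echo "$cmd"
  cmds="$cmds $cmd"
done
```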

&lt;h3&gt;
  
  
  Conclusion:
&lt;/h3&gt;

&lt;p&gt;EC2 Placement Groups provide a way to optimize the placement of your instances based on your specific workload requirements, whether it's maximizing performance, achieving fault tolerance, or enhancing isolation between instances.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>ec2</category>
      <category>devops</category>
    </item>
    <item>
      <title>Turbocharging EC2: A SysOps Guide to ENA and EFA</title>
      <dc:creator>Ukeme David Eseme</dc:creator>
      <pubDate>Wed, 15 Nov 2023 13:59:40 +0000</pubDate>
      <link>https://forem.com/ukemzyskywalker/turbocharging-ec2-a-sysops-guide-to-ena-and-efa-39fe</link>
      <guid>https://forem.com/ukemzyskywalker/turbocharging-ec2-a-sysops-guide-to-ena-and-efa-39fe</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Brief overview of Amazon EC2's role in cloud computing.
&lt;/h3&gt;

&lt;p&gt;Amazon Elastic Compute Cloud (EC2) is a foundational service offered by Amazon Web Services (AWS).&lt;/p&gt;

&lt;p&gt;It provides scalable compute capacity in the cloud, allowing users to run virtual servers known as instances. These instances serve as the building blocks for a wide range of applications, from simple web servers to complex, distributed systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mwth2aq9p2n2dfvk9nr.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mwth2aq9p2n2dfvk9nr.gif" alt="Thank you for telling me what I already know" width="480" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding ENA and EFA Basics
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is ENA?
&lt;/h3&gt;

&lt;p&gt;ENA (Elastic Network Adapter) provides high-performance networking capabilities, including support for up to &lt;strong&gt;100 Gbps&lt;/strong&gt; of network bandwidth. This is crucial for applications and workloads that require low-latency, high-throughput network connectivity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Elastic Network Adapter (ENA) Features:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Reduced Network Latency:&lt;/strong&gt;&lt;br&gt;
ENA is designed to minimize network latency, making it suitable for applications that require fast and responsive network communication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Performance Scaling&lt;/strong&gt;: ENA is designed to scale with the performance characteristics of the underlying EC2 instance. It can adapt to the varying requirements of different workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Improved Security Groups Performance:&lt;/strong&gt; ENA is designed to enhance the performance of security groups, allowing for efficient filtering of inbound and outbound traffic at the network interface level.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is EFA?
&lt;/h3&gt;

&lt;p&gt;Elastic Fabric Adapter (EFA) is a specialized networking interface designed for high-performance computing (HPC) workloads within the Amazon EC2 environment.&lt;/p&gt;

&lt;p&gt;Specifically designed to meet the low-latency and high-throughput demands of HPC applications, EFA plays a crucial role in enabling seamless communication between instances in a cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Elastic Fabric Adapter (EFA) Use Case
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. MPI (Message Passing Interface) Workloads:&lt;/strong&gt; EFA significantly enhances MPI-based applications, which heavily rely on inter-node communication. It ensures efficient message passing between nodes, crucial for parallel processing in scientific simulations and computational research.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Large-Scale Simulations:&lt;/strong&gt; EFA's advantages shine in HPC simulations, where multiple instances collaborate on intricate computations. The low-latency communication facilitated by EFA accelerates the exchange of data, improving overall simulation performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Data-Parallel Applications:&lt;/strong&gt; Workloads that involve parallel data processing, such as distributed data analytics and machine learning, benefit from EFA's ability to facilitate quick and reliable communication between nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Precision Medicine and Genomics:&lt;/strong&gt; In genomics research, where analyzing massive datasets across multiple nodes is common, EFA aids in speeding up inter-node communication, contributing to faster genomic sequencing and analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Financial Modeling with MPI:&lt;/strong&gt; EFA is instrumental in financial applications that leverage MPI standards for parallel computation. It reduces communication bottlenecks, enabling quicker analysis and decision-making in complex financial modeling scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  The similarities and differences between Elastic Network Adapter (ENA) and Elastic Fabric Adapter (EFA):
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Feature&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;Elastic Network Adapter (ENA)&lt;/th&gt;
&lt;th&gt;Elastic Fabric Adapter (EFA)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Purpose&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Enhances general network performance for EC2 instances&lt;/td&gt;
&lt;td&gt;Optimizes inter-instance communication in HPC and ML&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Use Cases&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;- High-performance computing (HPC)&lt;/td&gt;
&lt;td&gt;- High-performance computing (HPC)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;- Machine learning and data analytics&lt;/td&gt;
&lt;td&gt;- Machine learning and deep learning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;- Big data processing&lt;/td&gt;
&lt;td&gt;- Clustered applications, parallel processing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;- Content delivery&lt;/td&gt;
&lt;td&gt;- High-performance storage systems&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;- Network-intensive applications&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Throughput&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High throughput for general networking&lt;/td&gt;
&lt;td&gt;High throughput for inter-instance communication&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Latency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Low latency for general networking&lt;/td&gt;
&lt;td&gt;Low latency for inter-instance communication&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Packet Size Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Supports jumbo frames for efficient data transfer&lt;/td&gt;
&lt;td&gt;Efficient handling of large packets and message sizes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Packet Offloading&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Offloads checksum calculations&lt;/td&gt;
&lt;td&gt;Provides offloading for collective operations and more&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Traffic Mirroring&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Supports traffic mirroring for network analysis&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Instance Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Supported by a wide range of EC2 instance types&lt;/td&gt;
&lt;td&gt;Limited instances support; primarily HPC-optimized types&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MPI Workloads&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Generally used for various workloads&lt;/td&gt;
&lt;td&gt;Optimized for Message Passing Interface (MPI) workloads&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Security Group Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Enhances the performance of security groups&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Custom Networking&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Provides advanced networking features&lt;/td&gt;
&lt;td&gt;Focused on optimized communication within clusters&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AWS Compatibility&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Compatible with a broader range of EC2 instance types&lt;/td&gt;
&lt;td&gt;Primarily designed for specific HPC-optimized instances&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Bottom line: if you just want enhanced networking with lower latency, use an Elastic Network Adapter (ENA).&lt;br&gt;
If you are running a high-performance computing (HPC) cluster, use an Elastic Fabric Adapter (EFA).&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzr9xsv5vsfry24316lot.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzr9xsv5vsfry24316lot.gif" alt="Lets Practice" width="480" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's do a quick ENA demo&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcuas12tzvl2djlkqesyt.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcuas12tzvl2djlkqesyt.gif" alt="Demo GiF" width="500" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: ENA is only available for newer generation instances.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Spin up a new EC2 instance using a current-generation instance type (e.g. t3.micro) running an Amazon Linux AMI&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fua9gw6cm7lglgueydsuo.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fua9gw6cm7lglgueydsuo.JPG" alt="Running EC2 Instance" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SSH (or use Instance Connect) into the running instance and run&lt;br&gt;
&lt;strong&gt;&lt;code&gt;modinfo ena&lt;/code&gt;&lt;/strong&gt;. This displays details about the Elastic Network Adapter (ENA) kernel module, confirming that enhanced networking can be leveraged.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex4i7wdhzojtnux87b62.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fex4i7wdhzojtnux87b62.JPG" alt="ena driver details" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then run &lt;strong&gt;&lt;code&gt;ethtool -i eth0&lt;/code&gt;&lt;/strong&gt; to display information about the specified network interface.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmndkkr78d81qo9jttli.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmndkkr78d81qo9jttli.JPG" alt="Network interface information" width="422" height="232"&gt;&lt;/a&gt;&lt;/p&gt;
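&lt;p&gt;Step 3 can be scripted too. The check below parses sample &lt;code&gt;ethtool -i&lt;/code&gt; output for the driver name; the sample text is illustrative, and on the instance itself you would feed it the real &lt;code&gt;ethtool -i eth0&lt;/code&gt; output:&lt;/p&gt;

```shell
# Sample output in the shape `ethtool -i eth0` returns on an ENA instance.
sample="driver: ena
version: 2.8.0
bus-info: 0000:00:05.0"

# Pull the value after `driver:` and confirm it is the ena module.
driver=$(printf '%s\n' "$sample" | awk -F': ' '/^driver:/ {print $2}')
if [ "$driver" = "ena" ]; then
  echo "enhanced networking (ENA) driver detected"
fi
```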

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In summary, when selecting EC2 instances, it's important to check whether the instance type supports ENA, as not all instance types include this network adapter.&lt;/p&gt;

&lt;p&gt;ENA and EFA are valuable enhancements for SysOps professionals, enabling them to optimize network performance, enhance efficiency, and support specialized workloads such as HPC and machine learning. Keeping abreast of these networking enhancements allows SysOps professionals to make informed decisions when architecting and managing AWS environments.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>sysops</category>
      <category>ec2</category>
      <category>networking</category>
    </item>
    <item>
      <title>AWS CodeCommit vs. Git: Choosing the Right Version Control System for Your Project</title>
      <dc:creator>Ukeme David Eseme</dc:creator>
      <pubDate>Sat, 01 Jul 2023 21:55:41 +0000</pubDate>
      <link>https://forem.com/ukemzyskywalker/aws-codecommit-vs-git-choosing-the-right-version-control-system-for-your-project-mb6</link>
      <guid>https://forem.com/ukemzyskywalker/aws-codecommit-vs-git-choosing-the-right-version-control-system-for-your-project-mb6</guid>
      <description>&lt;p&gt;Version control systems play a crucial role in software development, enabling teams to efficiently manage code, collaborate, and track changes. When it comes to choosing a version control system for your project, AWS CodeCommit and Git are two popular options. &lt;/p&gt;

&lt;p&gt;In this blog post, we will explore the similarities, differences, and considerations for selecting between AWS CodeCommit and Git.&lt;/p&gt;

&lt;p&gt;By understanding their features, workflows, and integration capabilities, you can make an informed decision that aligns with your project's requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Git:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2ap9qmpjo7wi473cm5b.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2ap9qmpjo7wi473cm5b.jpg" alt="Iron Man Git meme" width="495" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Git is an open-source distributed version control system widely adopted in the software development community. It provides a decentralized approach, allowing developers to have a local copy of the entire code repository. Git offers powerful branching and merging capabilities, facilitating parallel development, collaboration, and code versioning. Git repositories can be hosted on various platforms, including GitHub, GitLab, and Bitbucket.&lt;/p&gt;
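&lt;p&gt;Those branching and merging capabilities are easy to see locally. A minimal sketch, run entirely on your machine with no hosting platform involved (the repository and branch names are arbitrary):&lt;/p&gt;

```shell
# Create a throwaway repository with a local identity for the demo.
git init demo
cd demo
git config user.email "demo@example.com"
git config user.name "Demo User"

# One commit on the default branch, one on a feature branch, then merge.
git commit --allow-empty -m "initial commit"
git switch -c feature                       # create and switch to a branch
git commit --allow-empty -m "feature work"
git switch -                                # back to the default branch
git merge --no-ff feature -m "merge feature"
git log --oneline
```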

&lt;h2&gt;
  
  
  Introducing AWS CodeCommit:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphb6kfpfbbaaf4nfkfds.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphb6kfpfbbaaf4nfkfds.jpg" alt="Code commit workflow" width="800" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS CodeCommit is a fully managed source code control service provided by Amazon Web Services (AWS). It offers a secure and scalable environment for hosting private Git repositories. CodeCommit is integrated with other AWS services, such as AWS CodePipeline and AWS CodeBuild, enabling end-to-end CI/CD workflows. It provides features like access control, code reviews, and pull requests, ensuring a robust and collaborative development process within the AWS ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparing AWS CodeCommit and Git:
&lt;/h2&gt;

&lt;p&gt;To make an informed decision about the version control system for your project, let's compare AWS CodeCommit and Git based on several factors:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Hosting and Scalability:
&lt;/h3&gt;

&lt;p&gt;Git repositories can be hosted on a variety of platforms, offering flexibility and the ability to choose a hosting provider that aligns with your needs. On the other hand, CodeCommit provides a fully managed and scalable hosting environment within the AWS ecosystem, ensuring high availability, security, and seamless integration with other AWS services.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Integration with AWS Services:
&lt;/h3&gt;

&lt;p&gt;CodeCommit offers tight integration with other AWS DevOps tools, such as AWS CodePipeline, facilitating streamlined CI/CD workflows. Git repositories, on the other hand, can be integrated with various CI/CD platforms and services, allowing for flexibility in tooling choices.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Security and Access Control:
&lt;/h3&gt;

&lt;p&gt;Both Git and CodeCommit provide mechanisms for securing code repositories and managing access control. CodeCommit leverages AWS Identity and Access Management (IAM) for fine-grained access control and integrates with AWS Key Management Service (KMS) for encryption at rest. Git repositories can implement access controls and authentication mechanisms based on the hosting platform's features.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Community and Ecosystem:
&lt;/h3&gt;

&lt;p&gt;Git has a vast and vibrant community, with a wealth of resources, plugins, and integrations available. It offers extensive documentation, tutorials, and community support. While CodeCommit is relatively newer and has a smaller community, it benefits from the broader AWS ecosystem and integrates seamlessly with other AWS services.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Cost Considerations:
&lt;/h3&gt;

&lt;p&gt;Git repositories hosted on third-party platforms may have associated costs based on usage, storage, and additional features. AWS CodeCommit pricing is based on active users, repository size, and data transfer, with a free tier available for small-scale projects.&lt;/p&gt;

&lt;p&gt;Choosing between AWS CodeCommit and Git depends on your project's requirements, development workflow, integration needs, and familiarity with the platforms. &lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Git provides flexibility, a mature ecosystem, and multiple hosting options, while AWS CodeCommit offers a managed and integrated environment within the AWS ecosystem. &lt;br&gt;
Consider factors such as scalability, security, integration capabilities, community support, and cost when making your decision. By evaluating these aspects, you can select the version control system that best suits your project's needs and empowers your development team.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>git</category>
      <category>github</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Infrastructure as Code: Managing Docker Containers using AWS DevOps Tools</title>
      <dc:creator>Ukeme David Eseme</dc:creator>
      <pubDate>Fri, 30 Jun 2023 18:17:56 +0000</pubDate>
      <link>https://forem.com/ukemzyskywalker/infrastructure-as-code-managing-docker-containers-using-aws-devops-tools-1oc4</link>
      <guid>https://forem.com/ukemzyskywalker/infrastructure-as-code-managing-docker-containers-using-aws-devops-tools-1oc4</guid>
      <description>&lt;h3&gt;
  
  
  Introduction:
&lt;/h3&gt;

&lt;p&gt;In the world of modern software development, managing infrastructure has become a critical aspect of the DevOps lifecycle. Infrastructure as Code (IaC) has emerged as a best practice that allows developers to define and manage their infrastructure using code. &lt;br&gt;
This blog post will explore how AWS DevOps tools can be leveraged to manage Docker containers using Infrastructure as Code principles. We'll dive into the key concepts and demonstrate practical examples using code snippets.&lt;/p&gt;
&lt;h3&gt;
  
  
  Understanding Infrastructure as Code:
&lt;/h3&gt;

&lt;p&gt;Infrastructure as Code involves treating infrastructure components, such as servers, networks, and services, as programmable resources. This approach allows for version control, reproducibility, and automation, which are crucial for efficient infrastructure management. By using IaC, developers can define and provision their infrastructure using declarative code, enabling consistent deployments and eliminating manual configuration drift.&lt;/p&gt;
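&lt;p&gt;To make the idea concrete, here is a minimal sketch of an infrastructure definition built as ordinary data in code, so it can be reviewed, diffed, and version-controlled like any other source file. The stack description and resource names are illustrative assumptions, not part of any real deployment:&lt;/p&gt;

```python
import json

# A CloudFormation-style template is just structured data; building it
# in code means every change shows up as a normal source-control diff.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example stack defined entirely in code",
    "Resources": {
        "MySecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {"GroupDescription": "Demo security group"},
        }
    },
}

# Deterministic serialization keeps diffs between revisions meaningful.
rendered = json.dumps(template, indent=2, sort_keys=True)
print(rendered)
```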
&lt;h3&gt;
  
  
  Managing Docker Containers with AWS DevOps Tools:
&lt;/h3&gt;

&lt;p&gt;AWS provides a set of powerful DevOps tools that seamlessly integrate with Docker containers, enabling effective management and deployment. Let's explore some key AWS services and how they can be utilized for managing Docker containers as code.&lt;/p&gt;
&lt;h3&gt;
  
  
  1. AWS CloudFormation:
&lt;/h3&gt;

&lt;p&gt;AWS CloudFormation is a powerful service that allows you to define and provision your AWS infrastructure using declarative templates. With CloudFormation, you can define a stack that includes various AWS resources, such as EC2 instances, VPCs, and security groups.&lt;br&gt;
To manage Docker containers, you can use CloudFormation to create and configure the necessary resources, such as Amazon Elastic Container Service (ECS) clusters, task definitions, and services.&lt;/p&gt;

&lt;p&gt;Example CloudFormation template snippet for defining an ECS service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  MyEcsService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref MyEcsCluster
      TaskDefinition: !Ref MyEcsTaskDefinition
      DesiredCount: 2
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets:
            - !Ref MySubnet1
            - !Ref MySubnet2
          SecurityGroups:
            - !Ref MySecurityGroup

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. AWS CodePipeline:
&lt;/h3&gt;

&lt;p&gt;AWS CodePipeline is a fully managed continuous integration and continuous delivery (CI/CD) service. It enables you to automate your software release workflows, including the deployment of Docker containers. CodePipeline integrates with various AWS services, including AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy. You can configure a pipeline that automatically builds and deploys your Docker images to Amazon Elastic Container Registry (ECR) or ECS.&lt;/p&gt;

&lt;p&gt;Example CodePipeline configuration for building and deploying Docker containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Stages:
  - Name: Source
    Actions:
      - Name: SourceAction
        ActionTypeId:
          Category: Source
          Owner: AWS
          Provider: CodeCommit
          Version: "1"
        Configuration:
          RepositoryName: MyCodeRepo
          BranchName: main
        OutputArtifacts:
          - Name: source

  - Name: Build
    Actions:
      - Name: BuildAction
        ActionTypeId:
          Category: Build
          Owner: AWS
          Provider: CodeBuild
          Version: "1"
        Configuration:
          ProjectName: MyCodeBuildProject
        InputArtifacts:
          - Name: source
        OutputArtifacts:
          - Name: build

  - Name: Deploy
    Actions:
      - Name: DeployAction
        ActionTypeId:
          Category: Deploy
          Owner: AWS
          Provider: ECS
          Version: "1"
        Configuration:
          ClusterName: MyEcsCluster
          ServiceName: MyEcsService
          FileName: imagedefinitions.json
        InputArtifacts:
          - Name: build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
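&lt;p&gt;The &lt;code&gt;imagedefinitions.json&lt;/code&gt; file named in the deploy stage is produced by the build stage and tells the ECS deploy action which image each container should run. A minimal example (the container name and ECR image URI are placeholders):&lt;/p&gt;

```json
[
  {
    "name": "my-container",
    "imageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest"
  }
]
```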



&lt;h3&gt;
  
  
  3. AWS Elastic Beanstalk:
&lt;/h3&gt;

&lt;p&gt;AWS Elastic Beanstalk is a fully managed platform that simplifies deploying and scaling applications. With Elastic Beanstalk, you can easily deploy your Docker containers without worrying about the underlying infrastructure. Elastic Beanstalk abstracts away the complexities of infrastructure management and provides a simple deployment model.&lt;/p&gt;

&lt;p&gt;Example Elastic Beanstalk configuration for Docker container deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  MyElasticBeanstalkEnvironment:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName: MyApplication
      EnvironmentName: MyEnvironment
      SolutionStackName: "64bit Amazon Linux 2 v3.4.3 running Docker"
      OptionSettings:
        - Namespace: aws:elasticbeanstalk:environment
          OptionName: EnvironmentType
          Value: SingleInstance
        - Namespace: aws:elasticbeanstalk:application:environment
          OptionName: MyEnvironmentVariable
          Value: MyValue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By leveraging AWS DevOps tools such as CloudFormation, CodePipeline, and Elastic Beanstalk, you can effectively manage Docker containers using Infrastructure as Code principles. &lt;/p&gt;

&lt;p&gt;This approach provides numerous benefits, including version control, repeatability, and automation. By treating infrastructure as code, you can achieve consistency, scalability, and efficiency in managing your Dockerized applications on the AWS platform.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>aws</category>
      <category>containers</category>
    </item>
    <item>
      <title>Creating An IAM User on AWS</title>
      <dc:creator>Ukeme David Eseme</dc:creator>
      <pubDate>Fri, 03 Feb 2023 09:03:23 +0000</pubDate>
      <link>https://forem.com/ukemzyskywalker/creating-an-iam-user-on-aws-5fpj</link>
      <guid>https://forem.com/ukemzyskywalker/creating-an-iam-user-on-aws-5fpj</guid>
      <description>&lt;p&gt;AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources.&lt;/p&gt;

&lt;p&gt;IAM is used to control authentication (who is signed in) and authorization (permissions) of AWS resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identity and Access Management is responsible for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fine-grained access control to AWS resources&lt;/li&gt;
&lt;li&gt;Analysis features to validate and fine-tune policies&lt;/li&gt;
&lt;li&gt;Integration with external identity management solutions&lt;/li&gt;
&lt;li&gt;AWS multi-factor authentication&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;IAM cloud identity tools are more secure and flexible than traditional username and password solutions&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/795011036" width="710" height="399"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is an IAM user?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An IAM user is a long-term credentialed identity used to interact with AWS in an account.&lt;/p&gt;

&lt;p&gt;AWS has redesigned the Users List experience to make it easier to use.&lt;/p&gt;

&lt;p&gt;Let's create an &lt;strong&gt;IAM User in 6 Simple Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1 — Login and Navigation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Login to your AWS console with an administrative user or profile.&lt;br&gt;
On the Top left corner, click on services, scroll down to &lt;strong&gt;Security, Identity &amp;amp; Compliance&lt;/strong&gt;, click on &lt;strong&gt;IAM&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1iwauel6w7r3zzifqqpe.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1iwauel6w7r3zzifqqpe.JPG" alt="Login and Navigation" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2 — Identity and Access Management(IAM) Dashboard.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On the left-hand side of the IAM dashboard, under &lt;strong&gt;Access Management&lt;/strong&gt;, click &lt;strong&gt;Users&lt;/strong&gt;. Click on "&lt;strong&gt;Add Users&lt;/strong&gt;" &amp;gt; "&lt;strong&gt;Specify User Details&lt;/strong&gt;".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmbyuiknuo9hxvjjhohs.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmbyuiknuo9hxvjjhohs.JPG" alt="click users IAM Dashboard" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zvcrc41tlmpyor1z1nq.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zvcrc41tlmpyor1z1nq.JPG" alt="Add Users" width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3 — Specify User Details.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frdt1xpccref77ndu82tk.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frdt1xpccref77ndu82tk.JPG" alt="Specify User Details" width="800" height="447"&gt;&lt;/a&gt;&lt;br&gt;
a) Enter a user name and tick the &lt;strong&gt;Enable console access&lt;/strong&gt; checkbox.&lt;br&gt;
b) Generate a console password.&lt;br&gt;
c) Uncheck &lt;strong&gt;Users must create a new password at the next sign-in&lt;/strong&gt;.&lt;br&gt;
We uncheck this because we don’t want to be forced to change the password on every sign-in…&lt;/p&gt;

&lt;p&gt;Well… it all depends on your security preference 🤷&lt;br&gt;
d) Click Next.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4 — Set Permissions&lt;/strong&gt;&lt;br&gt;
Select Attach policies directly from the Permissions menu.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;NB. The best practice is to attach the policies to a group, then add the created user to that group, but for this session, we would attach the policies directly to the user.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe61hqggm0crv7oom93ar.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe61hqggm0crv7oom93ar.JPG" alt="Set Permissions" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Permissions policies&lt;/strong&gt;, for this test scenario, we want this user to have full access to Amazon S3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi24izeebym08yowvdnal.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi24izeebym08yowvdnal.JPG" alt="Permissions policies" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the search bar, search for "S3" and select &lt;strong&gt;AmazonS3FullAccess&lt;/strong&gt;. You can also click the &lt;strong&gt;Plus Icon&lt;/strong&gt; to view the selected policy in JSON format.&lt;/p&gt;
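&lt;p&gt;For reference, the &lt;strong&gt;AmazonS3FullAccess&lt;/strong&gt; managed policy looks roughly like this at the time of writing (AWS revises managed policies over time, so treat this as illustrative and check the console for the current version):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*", "s3-object-lambda:*"],
      "Resource": "*"
    }
  ]
}
```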

&lt;p&gt;Click Next&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5 — Review&lt;/strong&gt;&lt;br&gt;
In this step, you are given the chance to review your choices and also have the option to create &lt;em&gt;tags&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;&lt;em&gt;Create User&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6y13ly20d4f6lsndsbqo.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6y13ly20d4f6lsndsbqo.JPG" alt="Review" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6 — Retrieve the password&lt;/strong&gt;&lt;br&gt;
You can view and download the user’s password below, or email the user instructions for signing in to the AWS Management Console. This is the only time you can view and download this password.&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;&lt;em&gt;Console Password&lt;/em&gt;&lt;/strong&gt;, click show to view your newly created password.&lt;br&gt;
Click &lt;em&gt;&lt;strong&gt;Return to users list&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User created successfully 😊&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpvy4bl3ejtw6xi3y2xw.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpvy4bl3ejtw6xi3y2xw.JPG" alt="User Created" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;
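&lt;p&gt;If you prefer the terminal, the same user can be created with the AWS CLI. A quick sketch (the user name and password are example values, and the commands assume an administrative profile is already configured):&lt;/p&gt;

```shell
# Create the user, attach the S3 policy, and give it a console password.
# "demo-user" is only an example name.
aws iam create-user --user-name demo-user

aws iam attach-user-policy \
  --user-name demo-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# --no-password-reset-required mirrors unchecking the "must create a new
# password at next sign-in" box in the console walkthrough above.
aws iam create-login-profile \
  --user-name demo-user \
  --password 'ChangeMe123!' \
  --no-password-reset-required
```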

</description>
      <category>aws</category>
      <category>security</category>
      <category>cloud</category>
      <category>cloudnative</category>
    </item>
  </channel>
</rss>
