<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Stéphane Noutsa</title>
    <description>The latest articles on Forem by Stéphane Noutsa (@stephane_noutsa).</description>
    <link>https://forem.com/stephane_noutsa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1005722%2Fdba922f5-48d8-4c5f-a0a7-3db667fd5926.jpeg</url>
      <title>Forem: Stéphane Noutsa</title>
      <link>https://forem.com/stephane_noutsa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/stephane_noutsa"/>
    <language>en</language>
    <item>
      <title>Argo CD "App of Apps" on EKS</title>
      <dc:creator>Stéphane Noutsa</dc:creator>
      <pubDate>Sun, 01 Feb 2026 22:50:52 +0000</pubDate>
      <link>https://forem.com/aws-builders/argo-cd-app-of-apps-on-eks-bcc</link>
      <guid>https://forem.com/aws-builders/argo-cd-app-of-apps-on-eks-bcc</guid>
      <description>&lt;p&gt;In our &lt;a href="https://dev.to/aws-builders/enhancing-your-aws-eks-cluster-with-istio-service-mesh-and-kiali-observability-49kk"&gt;previous article&lt;/a&gt; we saw how to set up an Istio service mesh and monitor it using Kiali. Now we'll configure and use Argo CD on our EKS cluster with the "App of Apps" GitOps pattern, and automatically deploy sample frontend (nginx) and backend (http-echo) applications.&lt;/p&gt;

&lt;p&gt;The "App of Apps" pattern is essential when you're bootstrapping clusters with many components, whether they're add-ons (like Istio and Kiali) or application workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;You have a running Kubernetes cluster (this demo uses Amazon EKS)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl&lt;/code&gt; is configured to talk to your Kubernetes cluster&lt;/li&gt;
&lt;li&gt;You have a basic understanding of GitOps&lt;/li&gt;
&lt;li&gt;You have a Git repository. I'll be using GitHub for this demo&lt;/li&gt;
&lt;li&gt;You have a basic understanding of Helm&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 1 - Installing Argo CD
&lt;/h2&gt;

&lt;p&gt;Before installing Argo CD in our cluster, we'll first create an "argocd" namespace for better isolation and management of Argo CD components (pods, deployments, services, etc.).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are primarily two options for installing Argo CD in a Kubernetes cluster: &lt;code&gt;kubectl&lt;/code&gt; or &lt;code&gt;helm&lt;/code&gt;.&lt;br&gt;
For our demo, we'll install Argo CD using &lt;code&gt;helm&lt;/code&gt;.&lt;/p&gt;
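&lt;p&gt;For reference, the &lt;code&gt;kubectl&lt;/code&gt; route is a one-liner against the upstream install manifest (shown here only as an alternative; we won't use it in this demo):&lt;/p&gt;

```shell
# Alternative to the Helm install used below:
# apply the official Argo CD stable install manifest into the argocd namespace
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```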

&lt;p&gt;We'll start by adding the Argo CD Helm chart repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add argo https://argoproj.github.io/argo-helm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we update the Helm repository list to ensure we have the latest chart information:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And finally, we install the Argo CD Helm chart with the command below. This will install all necessary Argo CD components into the &lt;code&gt;argocd&lt;/code&gt; namespace (that we just created). You can customize the installation by using a &lt;code&gt;values.yaml&lt;/code&gt; file with the &lt;code&gt;--values&lt;/code&gt; flag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;argocd argo/argo-cd &lt;span class="nt"&gt;--namespace&lt;/span&gt; argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can verify the installation with the following command, making sure that all pods are running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; argocd get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Argo CD now installed, the simplest way to access the UI, whatever your EKS cluster's network configuration, is port forwarding.&lt;br&gt;
So we'll port-forward the &lt;code&gt;argocd-server&lt;/code&gt; service, making it available on &lt;code&gt;http://localhost:8080&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; argocd port-forward svc/argocd-server 8080:443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, retrieve the initial admin password (the default username is &lt;code&gt;admin&lt;/code&gt;). It is auto-generated and stored in a Kubernetes secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; argocd get secret argocd-initial-admin-secret &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.data.password}"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now access the Argo CD UI at &lt;code&gt;http://localhost:8080&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjbqfhiw3dvraues6bin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjbqfhiw3dvraues6bin.png" alt="Argo CD UI" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;
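&lt;p&gt;If you also have the &lt;code&gt;argocd&lt;/code&gt; CLI installed (optional, not required for this demo), you can log in through the same port-forward; the &lt;code&gt;--insecure&lt;/code&gt; flag is needed because the server presents a self-signed certificate by default:&lt;/p&gt;

```shell
# Log in via the argocd CLI over the local port-forward;
# you'll be prompted for the initial admin password retrieved above
argocd login localhost:8080 --username admin --insecure
```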

&lt;h2&gt;
  
  
  Step 2 - Prepare Git Repository
&lt;/h2&gt;

&lt;p&gt;Your GitOps repository should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gitops-eks/
├── apps/
│   ├── frontend-app.yaml
│   └── backend-app.yaml
│
├── app-of-apps/
│   └── parent-app.yaml
│
├── frontend/
│   ├── namespace.yaml
│   ├── deployment.yaml
│   └── service.yaml
│
└── backend/
    ├── namespace.yaml
    ├── deployment.yaml
    └── service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;app-of-apps/&lt;/code&gt; -&amp;gt; App of Apps&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;apps/&lt;/code&gt; -&amp;gt; Argo CD applications&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;frontend/&lt;/code&gt; -&amp;gt; Kubernetes manifests for frontend application&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;backend/&lt;/code&gt; -&amp;gt; Kubernetes manifests for backend application&lt;/li&gt;
&lt;/ul&gt;
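&lt;p&gt;If you're starting from scratch, a minimal sketch of creating and pushing this structure looks like the following (this assumes GitHub and a default branch named &lt;code&gt;main&lt;/code&gt;; replace &lt;code&gt;YOUR_ORG&lt;/code&gt; with your organization or username):&lt;/p&gt;

```shell
# Create the local repository skeleton
git init gitops-eks && cd gitops-eks
mkdir -p apps app-of-apps frontend backend

# ...add the manifest files listed below, then commit and push
git add .
git commit -m "Bootstrap GitOps repository structure"
git remote add origin https://github.com/YOUR_ORG/gitops-eks.git
git push -u origin main
```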

&lt;h3&gt;
  
  
  Content of Manifests
&lt;/h3&gt;

&lt;h4&gt;
  
  
  app-of-apps/parent-app.yaml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;platform-apps&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;

  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/YOUR_ORG/gitops-eks.git&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HEAD&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps&lt;/span&gt;

  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;

  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This parent Argo CD application (App of Apps) reads all manifests in the &lt;strong&gt;apps&lt;/strong&gt; directory (as indicated by the line &lt;code&gt;path: apps&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;It then creates the child Argo CD applications (&lt;strong&gt;frontend&lt;/strong&gt; and &lt;strong&gt;backend&lt;/strong&gt; in this case)&lt;/li&gt;
&lt;li&gt;Then it automatically syncs drift (as indicated by the &lt;strong&gt;syncPolicy&lt;/strong&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  apps/frontend-app.yaml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;

  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/YOUR_ORG/gitops-eks.git&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HEAD&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;

  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;

  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;syncOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CreateNamespace=true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This frontend Argo CD application deploys the frontend application using the Kubernetes manifests in the &lt;strong&gt;frontend&lt;/strong&gt; directory (as indicated by the line &lt;code&gt;path: frontend&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;It also automatically creates this application's namespace if it doesn't exist already&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  apps/backend-app.yaml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;

  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/YOUR_ORG/gitops-eks.git&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HEAD&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;

  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;

  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;syncOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CreateNamespace=true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This backend Argo CD application deploys the backend application using the Kubernetes manifests in the &lt;strong&gt;backend&lt;/strong&gt; directory (as indicated by the line &lt;code&gt;path: backend&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;It also automatically creates this application's namespace if it doesn't exist already&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  frontend/namespace.yaml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This manifest creates the namespace for the frontend application&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  frontend/deployment.yaml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.25&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;64Mi"&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;250m"&lt;/span&gt;
          &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This manifest creates an Nginx deployment for the frontend application that listens on port &lt;code&gt;80&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  frontend/service.yaml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This manifest creates a service for the frontend application's deployment that listens on port &lt;code&gt;80&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  backend/namespace.yaml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This manifest creates the namespace for the backend application&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  backend/deployment.yaml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/http-echo&lt;/span&gt;
        &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-text=Hello&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;from&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;backend"&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-listen=:5678"&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5678&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;64Mi"&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;250m"&lt;/span&gt;
          &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This manifest creates an http-echo deployment for the backend application that listens on port &lt;code&gt;5678&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  backend/service.yaml
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5678&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This manifest creates a service for the backend application's deployment that listens on port &lt;code&gt;80&lt;/code&gt; and forwards traffic to the deployment's pods on port &lt;code&gt;5678&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 3 - Bootstrap the App of Apps
&lt;/h2&gt;

&lt;p&gt;With the Git repository configured and the manifest files pushed to it, we can now apply the parent application manifest once:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; app-of-apps/parent-app.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Argo CD will then:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create all Argo CD applications (&lt;code&gt;App of Apps&lt;/code&gt;, &lt;code&gt;frontend&lt;/code&gt;, and &lt;code&gt;backend&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Create the Kubernetes namespaces for the frontend and backend applications&lt;/li&gt;
&lt;li&gt;Create the deployments for the frontend and backend applications&lt;/li&gt;
&lt;li&gt;Create the services for the frontend and backend applications&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You should see a similar output after executing the previous command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbrwpa5eiryp6yebhe4q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbrwpa5eiryp6yebhe4q.png" alt="Argo CD applications created" width="800" height="36"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From your terminal, enter the following commands to verify that the applications were successfully deployed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; argocd get applications
kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; frontend get service,deployment
kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; backend get service,deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should have outputs similar to these:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcxafnpvwfjipw9adkqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcxafnpvwfjipw9adkqw.png" alt="Argo CD apps - command line" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;
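&lt;p&gt;As an extra sanity check (not part of the original setup), you can call the backend service from a temporary pod inside the cluster; the DNS name below assumes the &lt;code&gt;backend&lt;/code&gt; service runs in the &lt;code&gt;backend&lt;/code&gt; namespace:&lt;/p&gt;

```shell
# Spin up a throwaway curl pod and hit the backend service on port 80
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://backend.backend.svc.cluster.local
```

&lt;p&gt;If everything is wired correctly, this should return the http-echo text ("Hello from backend").&lt;/p&gt;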

&lt;p&gt;You could also verify this using the Argo CD UI.&lt;br&gt;
Below is the page displaying the three Argo CD applications that were created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pc1v2736hjnc4l67k4b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pc1v2736hjnc4l67k4b.png" alt="Argo CD apps - UI" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking on each of the applications should show you similar pages:&lt;/p&gt;

&lt;h3&gt;
  
  
  parent app
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftushmlqup4ahc5rp49ys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftushmlqup4ahc5rp49ys.png" alt="Argo CD parent app" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  frontend app
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5imj65tbbcq9q9wl9ge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5imj65tbbcq9q9wl9ge.png" alt="Argo CD frontend app" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  backend app
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqncflvruowzz55tqlj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqncflvruowzz55tqlj8.png" alt="Argo CD backend app" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4 - GitOps in Action
&lt;/h2&gt;

&lt;p&gt;To see the power of GitOps, we'll update the backend Deployment manifest to scale from the initial &lt;code&gt;2&lt;/code&gt; replicas to &lt;code&gt;3&lt;/code&gt;.&lt;br&gt;
After committing and pushing this change to the Git repository, the backend application should update automatically (you may need to refresh your apps in the Argo CD UI).&lt;/p&gt;
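The change itself is a one-line edit in the backend Deployment manifest. As a sketch (the metadata and layout below are illustrative, not the exact manifest from the repository):

```yaml
# Illustrative excerpt of the backend Deployment manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3   # changed from 2; Argo CD detects the Git commit and syncs it
```

Once the commit is pushed, Argo CD's automated sync reconciles the live cluster state with what Git declares.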

&lt;p&gt;As the image below shows, the backend application now runs 3 replicas instead of 2:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1ioclf57py294tkhxx5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1ioclf57py294tkhxx5.png" alt="Argo CD updated backend app" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By implementing Argo CD on your EKS clusters with the &lt;strong&gt;App of Apps&lt;/strong&gt; pattern, you move from imperative cluster management to a true GitOps operating model: deliveries become faster and reproducible, and every change is auditable.&lt;/p&gt;

</description>
      <category>gitops</category>
      <category>argocd</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Enhancing Your AWS EKS Cluster with Istio Service Mesh and Kiali Observability</title>
      <dc:creator>Stéphane Noutsa</dc:creator>
      <pubDate>Thu, 01 Jan 2026 15:49:44 +0000</pubDate>
      <link>https://forem.com/aws-builders/enhancing-your-aws-eks-cluster-with-istio-service-mesh-and-kiali-observability-49kk</link>
      <guid>https://forem.com/aws-builders/enhancing-your-aws-eks-cluster-with-istio-service-mesh-and-kiali-observability-49kk</guid>
      <description>&lt;p&gt;As organizations grow their microservices footprint on Kubernetes, adding a &lt;strong&gt;service mesh&lt;/strong&gt; becomes an effective way to manage complex networking, enhance security, monitor traffic patterns, and enable resilient deployments. In AWS &lt;strong&gt;Elastic Kubernetes Service (EKS)&lt;/strong&gt;, introducing &lt;strong&gt;Istio&lt;/strong&gt; and &lt;strong&gt;Kiali&lt;/strong&gt; provides significant operational value.&lt;/p&gt;

&lt;p&gt;This guide explains &lt;em&gt;how&lt;/em&gt; to configure Istio and Kiali in an EKS cluster and &lt;em&gt;why&lt;/em&gt; they help run distributed services more safely and efficiently.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Istio + Kiali Matters for EKS
&lt;/h2&gt;

&lt;p&gt;Before diving into installation, it’s important to understand the value proposition.&lt;/p&gt;

&lt;h3&gt;
  
  
  Istio: Service Mesh Capabilities
&lt;/h3&gt;

&lt;p&gt;Istio acts as an infrastructure layer between microservices by deploying &lt;strong&gt;Envoy sidecars&lt;/strong&gt; alongside application pods. These sidecars intercept and manage all service‑to‑service traffic.&lt;/p&gt;

&lt;p&gt;Key benefits include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Advanced Traffic Management&lt;/strong&gt;&lt;br&gt;
Control routing behavior with retries, timeouts, circuit breaking, traffic splitting, mirroring, canary releases, and blue/green deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Zero‑Trust Security&lt;/strong&gt;&lt;br&gt;
Automatic &lt;strong&gt;mutual TLS (mTLS)&lt;/strong&gt; encrypts pod‑to‑pod traffic and enforces service identity. Authorization policies allow fine‑grained access control without changing application code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Uniform Observability&lt;/strong&gt;&lt;br&gt;
Istio emits consistent metrics, logs, and traces across all services, independent of language or framework.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
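To make the access-control point concrete, here is a sketch of an Istio AuthorizationPolicy; the namespace, labels, and service account names are hypothetical placeholders:

```yaml
# Only workloads running as the "frontend" service account may call the backend
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: backend-allow-frontend
  namespace: demo
spec:
  selector:
    matchLabels:
      app: backend
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/demo/sa/frontend"]
```

No application code changes are required; the Envoy sidecars enforce the policy.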

&lt;h3&gt;
  
  
  Kiali: Service Mesh Visualization &amp;amp; Observability
&lt;/h3&gt;

&lt;p&gt;Kiali is a management and observability console built specifically for Istio.&lt;/p&gt;

&lt;p&gt;Key benefits include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service Mesh Topology Graphs&lt;/strong&gt;&lt;br&gt;
Visualize services, workloads, and traffic flows in real time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Health &amp;amp; Metrics Dashboards&lt;/strong&gt;&lt;br&gt;
Monitor request rates, latencies, error ratios, and workload health using Prometheus metrics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configuration Validation&lt;/strong&gt;&lt;br&gt;
Detect misconfigurations in Istio resources such as &lt;code&gt;VirtualService&lt;/code&gt;, &lt;code&gt;DestinationRule&lt;/code&gt;, and &lt;code&gt;Gateway&lt;/code&gt; objects.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tracing Integration&lt;/strong&gt;&lt;br&gt;
Seamlessly integrates with Jaeger to inspect distributed traces directly from service graphs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, Istio and Kiali turn the service mesh into an &lt;em&gt;observable, debuggable, and governable&lt;/em&gt; platform rather than an opaque networking layer.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;This article assumes you have read my previous two articles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/provision-eks-cluster-with-terraform-terragrunt-github-actions-1c64"&gt;Provision EKS Cluster with Terraform, Terragrunt &amp;amp; GitHub Actions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/configure-eks-cluster-security-pod-security-network-policies-pod-identity-506d"&gt;Configure EKS Cluster Security - Pod Security, Network Policies, Pod Identity&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Installing Istio on EKS
&lt;/h2&gt;

&lt;p&gt;Istio can be installed using either &lt;code&gt;istioctl&lt;/code&gt; or Helm. The &lt;code&gt;istioctl&lt;/code&gt; method is recommended for first‑time installs due to built‑in validation and profile support.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install Istio Using istioctl
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Download Istio and install istioctl
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-L&lt;/span&gt; https://istio.io/downloadIstio | sh -
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;/istio-&lt;span class="k"&gt;*&lt;/span&gt;/bin:&lt;span class="nv"&gt;$PATH&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Install Istio using the default profile
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;istioctl &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;profile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;default &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This deploys the Istio control plane into the &lt;code&gt;istio-system&lt;/code&gt; Kubernetes namespace.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Enable automatic sidecar injection
&lt;/h4&gt;

&lt;p&gt;Label your application namespaces so Envoy sidecars are automatically injected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl label namespace &amp;lt;application-namespace&amp;gt; istio-injection&lt;span class="o"&gt;=&lt;/span&gt;enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Any newly deployed pods in this namespace will now be part of the service mesh.&lt;/p&gt;

&lt;p&gt;For example, let's create a namespace called &lt;code&gt;eks-demo&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create ns eks-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, let's create an nginx pod in this namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; eks-demo run nginx &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's now get this newly created pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; eks-demo get pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the pod has only one container, indicating that Istio hasn't injected the sidecar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sg5okj5u8zlyru4zv2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sg5okj5u8zlyru4zv2a.png" alt="Pod creation with no Istio sidecar injection" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's correct this by adding a label to our namespace for automatic sidecar injection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl label ns eks-demo istio-injection&lt;span class="o"&gt;=&lt;/span&gt;enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's now create a new nginx pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; eks-demo run nginx-istio &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now get this pod and see that the sidecar container was properly injected into this new pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; eks-demo get pod nginx-istio
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1cvxazg1p5p28pr5iy0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1cvxazg1p5p28pr5iy0.png" alt="Pod creation with Istio sidecar injection" width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Installing Kiali and Observability Add‑Ons
&lt;/h2&gt;

&lt;p&gt;Istio integrates with several observability tools. At minimum, Kiali requires Prometheus to function.&lt;/p&gt;

&lt;h3&gt;
  
  
  Quick Add‑On Installation (Evaluation or Non‑Production)
&lt;/h3&gt;

&lt;p&gt;You can deploy Istio’s sample observability stack using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;for &lt;/span&gt;ADDON &lt;span class="k"&gt;in &lt;/span&gt;kiali jaeger prometheus grafana
&lt;span class="k"&gt;do
  &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/istio/istio/release-1.28/samples/addons/&lt;span class="nv"&gt;$ADDON&lt;/span&gt;.yaml
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This installs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prometheus&lt;/strong&gt; for metrics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grafana&lt;/strong&gt; for dashboards&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Jaeger&lt;/strong&gt; for distributed tracing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kiali&lt;/strong&gt; for service mesh visualization&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Production‑Grade Kiali Installation Using Helm
&lt;/h3&gt;

&lt;p&gt;For better control and security, install Kiali via Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add kiali https://kiali.org/helm-charts
helm repo update

helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; kiali-server kiali/kiali-server &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; istio-system &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; auth.strategy&lt;span class="o"&gt;=&lt;/span&gt;anonymous &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; deployment.ingress.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In production, you should integrate authentication (OpenID Connect or token‑based auth) instead of anonymous access.&lt;/p&gt;
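As a sketch, the Kiali server Helm chart supports an OpenID Connect strategy via its auth settings; the issuer and client ID below are placeholders for your identity provider, and you should confirm the exact keys against the Kiali documentation for your chart version:

```yaml
# values-kiali.yaml (sketch; verify keys against the Kiali docs)
auth:
  strategy: openid
  openid:
    client_id: kiali
    issuer_uri: https://idp.example.com/realms/platform
```

You would then pass this file to the helm upgrade command above with an additional -f values-kiali.yaml flag instead of setting auth.strategy=anonymous.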




&lt;h2&gt;
  
  
  Accessing the Kiali Dashboard
&lt;/h2&gt;

&lt;p&gt;To access Kiali locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward svc/kiali 20001:20001 &lt;span class="nt"&gt;-n&lt;/span&gt; istio-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open your browser at:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:20001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the Kiali UI you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;View real‑time service mesh graphs&lt;/li&gt;
&lt;li&gt;Inspect traffic flows and error rates&lt;/li&gt;
&lt;li&gt;Validate Istio configuration&lt;/li&gt;
&lt;li&gt;Navigate directly to Jaeger traces&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fridsix04oamodpu2dpbt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fridsix04oamodpu2dpbt.png" alt="Kiali dashboard" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Operating Istio and Kiali in EKS
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Enforcing mTLS
&lt;/h3&gt;

&lt;p&gt;Once workloads are meshed, you can enable strict mTLS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Encrypt all east‑west traffic&lt;/li&gt;
&lt;li&gt;Enforce service identity verification&lt;/li&gt;
&lt;li&gt;Reduce reliance on network‑level trust&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This can be done for a specific namespace or for the entire service mesh.&lt;/p&gt;

&lt;h4&gt;
  
  
  Enforce mTLS for a specific namespace
&lt;/h4&gt;

&lt;p&gt;To enforce mTLS in our &lt;code&gt;eks-demo&lt;/code&gt; namespace, for example, we can execute the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; eks-demo &lt;span class="nt"&gt;-f&lt;/span&gt; - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
spec:
  mtls:
    mode: STRICT
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This enforces mTLS for all traffic between pods in this namespace.&lt;/p&gt;
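If you need a gradual migration instead of an immediate cutover, Istio also offers a PERMISSIVE mode that accepts both plaintext and mTLS traffic; the manifest differs only in the mode field:

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
spec:
  mtls:
    mode: PERMISSIVE   # accept plaintext and mTLS while workloads join the mesh
```

Switch the mode to STRICT once every workload in the namespace has a sidecar.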

&lt;h4&gt;
  
  
  Enforce mTLS for the entire service mesh
&lt;/h4&gt;

&lt;p&gt;To enforce mTLS in the entire service mesh, we create the &lt;code&gt;PeerAuthentication&lt;/code&gt; object in the &lt;code&gt;istio-system&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; istio-system &lt;span class="nt"&gt;-f&lt;/span&gt; - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
spec:
  mtls:
    mode: STRICT
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating Traffic and Verifying mTLS using Kiali
&lt;/h3&gt;

&lt;p&gt;To effectively verify that mTLS is enforced, you must generate live traffic between services and observe how Istio secures that traffic. Kiali provides a visual and intuitive way to do this.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1 - Deploy a Sample Application (Traffic Generator)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; eks-demo apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/istio/istio/release-1.28/samples/bookinfo/platform/kube/bookinfo.yaml
kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; eks-demo apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/istio/istio/release-1.28/samples/bookinfo/networking/bookinfo-gateway.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 2 - Generate Traffic
&lt;/h4&gt;

&lt;p&gt;Generate continuous traffic so that metrics appear in Kiali.&lt;/p&gt;

&lt;h5&gt;
  
  
  Retrieve the Istio Ingress Gateway external IP address
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; istio-system get svc istio-ingressgateway
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Generate traffic via Curl
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://&amp;lt;INGRESS_IP&amp;gt;/productpage &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null
  &lt;span class="nb"&gt;sleep &lt;/span&gt;1
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures steady traffic for visualization.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3 - Open the Kiali Dashboard
&lt;/h4&gt;

&lt;p&gt;First, enable access to the Kiali dashboard from a browser using this command (you can skip this step if you did it earlier):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; istio-system port-forward svc/kiali 20001:20001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then navigate to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;http://localhost:20001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 4 - Visualize Traffic in the Traffic Graph View
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Traffic Graph&lt;/strong&gt; from the Kiali sidebar&lt;/li&gt;
&lt;li&gt;Choose the &lt;code&gt;eks-demo&lt;/code&gt; &lt;strong&gt;Namespace&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;In the &lt;strong&gt;Display&lt;/strong&gt; dropdown, enable: &lt;code&gt;Traffic Animation&lt;/code&gt;, &lt;code&gt;Security&lt;/code&gt;, &lt;code&gt;Traffic Rate&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should now see services connected by live traffic edges.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1us7xi9w1pb6lv75ksq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1us7xi9w1pb6lv75ksq.png" alt="Kiali Dashboard - Live Traffic" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 5 - Verify mTLS Enforcement
&lt;/h4&gt;

&lt;p&gt;The edges carrying live traffic should display a lock icon, indicating that mTLS is enabled.&lt;/p&gt;

&lt;p&gt;You can click on any of these lock icons on the arrows, and the right sidebar will display &lt;strong&gt;mTLS Enabled&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhst48fjc0ft2v6dxmbdq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhst48fjc0ft2v6dxmbdq.png" alt="Kiali Dashboard - mTLS Enabled" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also verify mTLS enforcement by attempting to bypass the sidecar: curl a pod's IP address directly.&lt;/p&gt;

&lt;p&gt;First, get the list of pods (and their IP addresses) in the &lt;code&gt;eks-demo&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; eks-demo get pod &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, exec into the &lt;code&gt;nginx-istio&lt;/code&gt; pod we created earlier and curl a different pod's IP address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; eks-demo &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; nginx-istio &lt;span class="nt"&gt;--&lt;/span&gt; curl http://&amp;lt;target-pod-ip&amp;gt;:&amp;lt;port&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see an error like the one below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk74zea5cai2lzhsbr9m2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk74zea5cai2lzhsbr9m2.png" alt="Istio mTLS - curl pod IP" width="800" height="48"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  GitOps‑Driven Configuration
&lt;/h3&gt;

&lt;p&gt;Store Istio configuration (Gateways, VirtualServices, AuthorizationPolicies) in Git and deploy via:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Argo CD&lt;/li&gt;
&lt;li&gt;Flux&lt;/li&gt;
&lt;li&gt;CI/CD pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures reproducibility, auditability, and safe rollbacks.&lt;/p&gt;
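As a sketch of the Argo CD option (the repository URL and path are placeholders), an Application that tracks Istio configuration in Git could look like this:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: istio-config
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/istio-config.git   # placeholder
    targetRevision: main
    path: mesh                                              # placeholder
  destination:
    server: https://kubernetes.default.svc
    namespace: istio-system
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift in the cluster
```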

&lt;p&gt;In our next article, we'll see how to configure and use Argo CD to automate the deployment of applications.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Adding &lt;strong&gt;Istio&lt;/strong&gt; and &lt;strong&gt;Kiali&lt;/strong&gt; to your AWS EKS platform significantly improves how you manage microservices networking and security.&lt;/p&gt;

&lt;p&gt;With this architecture, you gain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Encrypted, authenticated service‑to‑service communication&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Advanced traffic control for safer deployments&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Clear visibility into service behavior and dependencies&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Faster troubleshooting through visual observability&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When combined with strong EKS provisioning, security controls, and GitOps automation, Istio and Kiali enable a robust, production‑ready Kubernetes platform that scales with confidence.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>istio</category>
      <category>kiali</category>
    </item>
    <item>
      <title>Hands-On Amazon Q Developer Latest Features - /dev, /review, /doc, /test, /transform</title>
      <dc:creator>Stéphane Noutsa</dc:creator>
      <pubDate>Wed, 05 Feb 2025 00:48:10 +0000</pubDate>
      <link>https://forem.com/aws-builders/hands-on-amazon-q-developer-latest-features-dev-review-doc-test-transform-1m9l</link>
      <guid>https://forem.com/aws-builders/hands-on-amazon-q-developer-latest-features-dev-review-doc-test-transform-1m9l</guid>
      <description>&lt;p&gt;AWS re:Invent 2024 brought with it a lot of surprises, one of them being big updates to its Generative AI (GenAI) assistant for software development, &lt;a href="https://aws.amazon.com/q/developer/" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon Q Developer&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As hinted above, &lt;a href="https://aws.amazon.com/q/developer/" rel="noopener noreferrer"&gt;Amazon Q Developer&lt;/a&gt; is a GenAI assistant that can accelerate software development tasks by up to 80%.&lt;/p&gt;

&lt;p&gt;In this blog post, we'll look at the major new features of Amazon Q Developer and test them hands-on with a dummy Java project.&lt;br&gt;
These features are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/dev&lt;/code&gt; - to generate code for your feature. Supported languages are Java, Python, JavaScript, and TypeScript.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/test&lt;/code&gt; - to generate unit tests for your code. Supported languages are Java (JUnit 4 and 5, JUnit Jupiter, Mockito) and Python (PyTest, Unittest).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/review&lt;/code&gt; - to review your code for issues such as security vulnerabilities, hardcoded secrets, problems in your IaC (Infrastructure as Code) files, code quality issues, and more. Supports multiple languages, including those already listed for &lt;code&gt;/dev&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/doc&lt;/code&gt; - to generate documentation for your codebase. Supported languages are Java, Python, JavaScript, and TypeScript.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/transform&lt;/code&gt; - to upgrade your Java and .NET projects. See this &lt;a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/transform-in-IDE.html" rel="noopener noreferrer"&gt;link&lt;/a&gt; for more details.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before proceeding, take note of this pricing info from the Amazon Q Developer web page:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Try Amazon Q Developer free with the AWS Free Tier. The Amazon Q Developer Free Tier gives you 50 chat interactions per month. You can also use it to develop software 5 times per month or transform up to 1,000 lines of code per month.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You can fork the Java 17 project in &lt;a href="https://github.com/stephanenoutsa/user-service/tree/demo" rel="noopener noreferrer"&gt;this repository&lt;/a&gt; to follow hands-on. If you have a lightweight Java 8 or Java 11 project, that would be better.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In this article, we use Visual Studio Code and the Amazon Q extension (see the image below). However, you can also use Amazon Q Developer with JetBrains IDEs and Eclipse (the latter still in preview).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2k2qehkvz3lzfzlef4l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2k2qehkvz3lzfzlef4l.png" alt="Amazon Q VS Code Extension" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After installing the extension, you can click the Amazon Q icon to open its chat window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feebw3h7v99i2e1s0utbk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feebw3h7v99i2e1s0utbk.png" alt="Open Amazon Q Chat Window" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open the forked Java 17 project in your Visual Studio Code window before following the next steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  /dev - Generate a user authentication feature
&lt;/h2&gt;

&lt;p&gt;From the Amazon Q chat window, start typing &lt;code&gt;/dev&lt;/code&gt; then select the option presented to you to open a &lt;code&gt;/dev&lt;/code&gt; chat window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmv5ggm1h5mleb1z1fb2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmv5ggm1h5mleb1z1fb2.png" alt="Open /dev chat window" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, prompt Amazon Q Developer to develop a user authentication feature.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27kmdvhfv4y00oc02vg4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27kmdvhfv4y00oc02vg4.png" alt="Develop User Auth Feature" width="800" height="664"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll see visual feedback informing you that Amazon Q Developer is generating code for the requested feature.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cbbqcct6chxh3da1ns9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cbbqcct6chxh3da1ns9.png" alt="Generating Code..." width="800" height="603"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0rwwrhna1wxxp4tdh4v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0rwwrhna1wxxp4tdh4v.png" alt="Displaying Code Generation Progress" width="800" height="1025"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the code is generated, you'll be able to review and accept the suggestions (or give feedback and request regeneration).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpt4vztyxluqrsgi71e99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpt4vztyxluqrsgi71e99.png" alt=" " width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it, your user authentication feature is ready!&lt;/p&gt;

&lt;h2&gt;
  
  
  /review - Detect issues with codebase
&lt;/h2&gt;

&lt;p&gt;Open a new Amazon Q chat window and start typing &lt;code&gt;/review&lt;/code&gt; then select the option presented to you to open a &lt;code&gt;/review&lt;/code&gt; chat window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzqr1gvs9548zh3r9zon.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzqr1gvs9548zh3r9zon.png" alt="Open /review Chat Window" width="800" height="869"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can then choose to review the entire workspace or just the active file in the Visual Studio Code window. Let's opt to review the workspace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fim9reopx3wpk2ex7uwog.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fim9reopx3wpk2ex7uwog.png" alt="Review Workspace or Active File" width="800" height="652"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the review is complete, we'll see a list of issues found and their severity. You can then use Amazon Q suggestions to optimize the code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vud1v653ij042io60pz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9vud1v653ij042io60pz.png" alt="Detected Issues" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  /doc - Generate a README for our project
&lt;/h2&gt;

&lt;p&gt;Open a new Amazon Q chat window and start typing &lt;code&gt;/doc&lt;/code&gt; then select the option presented to you to open a &lt;code&gt;/doc&lt;/code&gt; chat window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5eyelbm0gpbkhjascuv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5eyelbm0gpbkhjascuv.png" alt="Open /doc Chat Window" width="800" height="669"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can then choose to create a new README or update an existing one. Since our project has no README, we'll create one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1gzrnx1s9s8gxn093o7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1gzrnx1s9s8gxn093o7.png" alt="Create README" width="800" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the README is created, we can either &lt;strong&gt;Accept&lt;/strong&gt; the suggestion, &lt;strong&gt;Make changes&lt;/strong&gt; to the suggestion, or &lt;strong&gt;Reject&lt;/strong&gt; the suggestion. Let's accept the suggestion.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffw23a9l044jbwxo1neyk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffw23a9l044jbwxo1neyk.png" alt="Generated README" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can now proceed to write a unit test for our project.&lt;/p&gt;

&lt;h2&gt;
  
  
  /test - Write a unit test for a specific method
&lt;/h2&gt;

&lt;p&gt;Open a new Amazon Q chat window and start typing &lt;code&gt;/test&lt;/code&gt; then select the option presented to you to open a &lt;code&gt;/test&lt;/code&gt; chat window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F224r5w1pjzdjmmong0zx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F224r5w1pjzdjmmong0zx.png" alt="Open /test Chat Window" width="800" height="753"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;/dev&lt;/code&gt; step above created the class &lt;code&gt;UserDetailsServiceImpl.java&lt;/code&gt; in the directory &lt;code&gt;src/main/java/com/shesa/user/security/&lt;/code&gt;. I'll ask Amazon Q Developer to generate a unit test for the &lt;code&gt;loadUserByUsername&lt;/code&gt; method in this class.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttoi92hadqi7vwip2boi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttoi92hadqi7vwip2boi.png" alt="Specify Method to Test" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can then review the generated unit test and accept it if it looks good.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yw2wlp6x24017gqxotm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yw2wlp6x24017gqxotm.png" alt="Generated Unit Test" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  /transform - Upgrade Java project (if using version 8 or 11)
&lt;/h2&gt;

&lt;p&gt;Open a new Amazon Q chat window and start typing &lt;code&gt;/transform&lt;/code&gt; then select the option presented to you to open a &lt;code&gt;/transform&lt;/code&gt; chat window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65lwaf1ku9vety9sk3jh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65lwaf1ku9vety9sk3jh.png" alt="Open /transform Chat Window" width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Given that our project is already using Java 17, we won't be able to transform its codebase. However, if you have a project using Java 8 or Java 11, you could use the &lt;code&gt;/transform&lt;/code&gt; feature to upgrade it to Java 17.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd68b7q5xmkagsspz78mx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd68b7q5xmkagsspz78mx.png" alt="Upgrade Java Version" width="800" height="964"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it! I hope you find this useful.&lt;br&gt;
Don't hesitate to leave comments if you have any.&lt;/p&gt;

&lt;p&gt;Until next time!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>java</category>
      <category>genai</category>
      <category>aws</category>
    </item>
    <item>
      <title>Configure EKS Cluster Security - Pod Security, Network Policies, Pod Identity</title>
      <dc:creator>Stéphane Noutsa</dc:creator>
      <pubDate>Sun, 19 Jan 2025 22:26:09 +0000</pubDate>
      <link>https://forem.com/aws-builders/configure-eks-cluster-security-pod-security-network-policies-pod-identity-506d</link>
      <guid>https://forem.com/aws-builders/configure-eks-cluster-security-pod-security-network-policies-pod-identity-506d</guid>
      <description>&lt;p&gt;This blog post picks up from the &lt;a href="https://dev.to/aws-builders/provision-eks-cluster-with-terraform-terragrunt-github-actions-1c64"&gt;previous article&lt;/a&gt; which provisions an EKS cluster using Terraform and GitHub Actions.&lt;br&gt;
Here, we'll look at securing our cluster's resources using pod security groups and network policies.&lt;/p&gt;

&lt;p&gt;First, we need to configure our bastion host to communicate with the cluster. Connect to the bastion host using &lt;strong&gt;Session Manager&lt;/strong&gt; to follow along with this blog post.&lt;/p&gt;
&lt;h1&gt;
  
  
  Configure AWS credentials
&lt;/h1&gt;

&lt;p&gt;Check &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html" rel="noopener noreferrer"&gt;this link&lt;/a&gt; to see how to configure your AWS credentials. Make sure to use the same credentials as those used to create the EKS cluster.&lt;/p&gt;
&lt;h1&gt;
  
  
  Use AWS CLI to save kubeconfig file
&lt;/h1&gt;

&lt;p&gt;&lt;code&gt;aws eks update-kubeconfig --name &amp;lt;cluster_name&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Be sure to replace &lt;code&gt;&amp;lt;cluster_name&amp;gt;&lt;/code&gt; with the name of your EKS cluster. Mine is &lt;code&gt;eks-demo&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check the kubeconfig file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cat ~/.kube/config&lt;/code&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Download and apply EKS aws-auth
&lt;/h1&gt;

&lt;p&gt;To grant our IAM principal the ability to interact with our EKS cluster, first download the &lt;code&gt;aws-auth&lt;/code&gt; &lt;code&gt;ConfigMap&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/aws-auth-cm.yaml&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We should then edit the downloaded &lt;code&gt;aws-auth-cm.yaml&lt;/code&gt; file (using Vim or Nano) and replace &lt;code&gt;&amp;lt;ARN of instance role (not instance profile)&amp;gt;&lt;/code&gt; with the ARN of our worker node IAM role (not its instance profile's ARN), then save the file.&lt;/p&gt;

&lt;p&gt;We can then apply the configuration with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f aws-auth-cm.yaml&lt;/code&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Configure Pod Security Group
&lt;/h1&gt;

&lt;p&gt;Below is a diagram of the infrastructure we want to set up:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1ny9timidn8sujix0ba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1ny9timidn8sujix0ba.png" alt=" " width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the diagram we have an RDS database whose security group only allows access from the green pod (through the pod's security group). No pod other than the green pod will be able to communicate with the RDS database.&lt;/p&gt;

&lt;p&gt;These are the steps we'll follow to configure and test our pod security group:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an Amazon RDS database protected by a security group called db_sg.&lt;/li&gt;
&lt;li&gt;Create a security group called pod_sg that will be allowed to connect to the RDS instance.&lt;/li&gt;
&lt;li&gt;Deploy a SecurityGroupPolicy that will automatically attach the pod_sg security group to a pod with the correct metadata.&lt;/li&gt;
&lt;li&gt;Deploy two pods (green and blue) using the same image and verify that only one of them (green) can connect to the Amazon RDS database.&lt;/li&gt;
&lt;/ul&gt;
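&lt;p&gt;The access rule those steps establish can be sketched as a toy decision function before touching AWS. The security-group names below are illustrative labels, not real AWS IDs:&lt;/p&gt;

```shell
# Toy model of the rule we're about to configure: db_sg admits only pod_sg.
# "pod_sg" here is a placeholder label, not an actual security group ID.
DB_INGRESS_SOURCE="pod_sg"

can_reach_db() {
  # $1 = the security group attached to the pod (empty if none)
  if [ "$1" = "${DB_INGRESS_SOURCE}" ]; then
    echo "allowed"
  else
    echo "denied"
  fi
}

can_reach_db "pod_sg"   # green pod (pod_sg attached) -> allowed
can_reach_db ""         # blue pod (no pod_sg)        -> denied
```

This is exactly the behaviour we'll verify at the end: the green pod connects, the blue pod times out.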

&lt;p&gt;&lt;strong&gt;Create DB Security Group (db_sg)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export VPC_ID=$(aws eks describe-cluster \
    --name eks-demo \
    --query "cluster.resourcesVpcConfig.vpcId" \
    --output text)

# create DB security group
aws ec2 create-security-group \
    --description 'DB SG' \
    --group-name 'db_sg' \
    --vpc-id ${VPC_ID}

# save the security group ID for future use
export DB_SG=$(aws ec2 describe-security-groups \
    --filters Name=group-name,Values=db_sg Name=vpc-id,Values=${VPC_ID} \
    --query "SecurityGroups[0].GroupId" --output text)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create Pod Security Group (pod_sg)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create the Pod security group
aws ec2 create-security-group \
    --description 'POD SG' \
    --group-name 'pod_sg' \
    --vpc-id ${VPC_ID}

# save the security group ID for future use
export POD_SG=$(aws ec2 describe-security-groups \
    --filters Name=group-name,Values=pod_sg Name=vpc-id,Values=${VPC_ID} \
    --query "SecurityGroups[0].GroupId" --output text)

echo "Pod security group ID: ${POD_SG}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Add Ingress Rules to db_sg&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One rule allows the bastion host to populate the database, while the other allows pod_sg to connect to the database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get IMDSv2 Token
export TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"`

# Instance IP
export INSTANCE_IP=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" -s http://169.254.169.254/latest/meta-data/local-ipv4)

# allow instance to connect to RDS
aws ec2 authorize-security-group-ingress \
    --group-id ${DB_SG} \
    --protocol tcp \
    --port 5432 \
    --cidr ${INSTANCE_IP}/32

# Allow pod_sg to connect to the RDS
aws ec2 authorize-security-group-ingress \
    --group-id ${DB_SG} \
    --protocol tcp \
    --port 5432 \
    --source-group ${POD_SG}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Configure Node Group's Security Group to Allow Pod to Communicate with its Node for DNS Resolution&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export NODE_GROUP_SG=$(aws ec2 describe-security-groups \
    --filters Name=tag:Name,Values=eks-cluster-sg-eks-demo-* Name=vpc-id,Values=${VPC_ID} \
    --query "SecurityGroups[0].GroupId" \
    --output text)
echo "Node Group security group ID: ${NODE_GROUP_SG}"

# allow pod_sg to connect to NODE_GROUP_SG using TCP 53
aws ec2 authorize-security-group-ingress \
    --group-id ${NODE_GROUP_SG} \
    --protocol tcp \
    --port 53 \
    --source-group ${POD_SG}

# allow pod_sg to connect to NODE_GROUP_SG using UDP 53
aws ec2 authorize-security-group-ingress \
    --group-id ${NODE_GROUP_SG} \
    --protocol udp \
    --port 53 \
    --source-group ${POD_SG}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create RDS DB&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This post assumes that you have some knowledge of RDS databases and won't focus on this step.&lt;/p&gt;

&lt;p&gt;You should create a DB subnet group consisting of the 2 data subnets created in the &lt;a href="https://dev.to/aws-builders/provision-eks-cluster-with-terraform-terragrunt-github-actions-1c64"&gt;previous article&lt;/a&gt;, and use this subnet group for the RDS database you're provisioning.&lt;/p&gt;

&lt;p&gt;I have named my database &lt;code&gt;eks_demo&lt;/code&gt; (DB name, not DB identifier), and this name is referenced in some steps below. If you give your database a different name, you must update this in the corresponding steps below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Populate DB with sample data&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dnf update -y
sudo dnf install postgresql15.x86_64 postgresql15-server -y
sudo postgresql-setup --initdb
sudo systemctl start postgresql
sudo systemctl enable postgresql

# Use Vim to edit the postgresql.conf file to listen on all addresses
sudo vi /var/lib/pgsql/data/postgresql.conf

# Replace this line
listen_addresses = 'localhost'

# with the following line
listen_addresses = '*'

# Backup your postgres config file
sudo cp /var/lib/pgsql/data/pg_hba.conf /var/lib/pgsql/data/pg_hba.conf.bck

# Allow connections from all addresses with password authentication
# First edit the pg_hba.conf file
sudo vi /var/lib/pgsql/data/pg_hba.conf
# Then add the following line to the file
host all all 0.0.0.0/0 md5

# Restart the postgres service
sudo systemctl restart postgresql

cat &amp;lt;&amp;lt; EOF &amp;gt; sg-per-pod-pgsql.sql
CREATE TABLE welcome (column1 TEXT);
insert into welcome values ('--------------------------');
insert into welcome values ('  Welcome to the EKS lab  ');
insert into welcome values ('--------------------------');
EOF

psql "postgresql://&amp;lt;RDS_USER&amp;gt;:&amp;lt;RDS_PASSWORD&amp;gt;@&amp;lt;RDS_ENDPOINT&amp;gt;:5432/&amp;lt;RDS_DATABASE_NAME&amp;gt;?sslmode=require" -f sg-per-pod-pgsql.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Be sure to replace &lt;code&gt;&amp;lt;RDS_USER&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;RDS_PASSWORD&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;RDS_ENDPOINT&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;RDS_DATABASE_NAME&amp;gt;&lt;/code&gt; with the right values for your RDS database.&lt;/p&gt;
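&lt;p&gt;To avoid quoting mistakes, it can help to build the connection URI in a variable first. The values below are purely hypothetical placeholders for illustration:&lt;/p&gt;

```shell
# Hypothetical RDS settings -- substitute your own values.
RDS_USER="postgres"
RDS_PASSWORD="s3cret"
RDS_ENDPOINT="eks-demo-db.abc123.us-east-1.rds.amazonaws.com"
RDS_DATABASE_NAME="eks_demo"

# Assemble the libpq connection URI once, then reuse it.
CONN_URI="postgresql://${RDS_USER}:${RDS_PASSWORD}@${RDS_ENDPOINT}:5432/${RDS_DATABASE_NAME}?sslmode=require"
echo "${CONN_URI}"

# On the bastion host you would then run:
# psql "${CONN_URI}" -f sg-per-pod-pgsql.sql
```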

&lt;p&gt;&lt;strong&gt;Configure CNI to Manage Network Interfaces for Pods&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n kube-system set env daemonset aws-node ENABLE_POD_ENI=true

# Wait for the rolling update of the daemonset
kubectl -n kube-system rollout status ds aws-node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that this requires the &lt;code&gt;AmazonEKSVPCResourceController&lt;/code&gt; AWS-managed policy to be attached to the cluster's IAM role, which allows it to manage ENIs and IPs for the worker nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create SecurityGroupPolicy Custom Resource&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A new Custom Resource Definition (CRD) has also been added automatically at the cluster creation. Cluster administrators can specify which security groups to assign to pods through the SecurityGroupPolicy CRD. Within a namespace, you can select pods based on pod labels, or based on labels of the service account associated with a pod. For any matching pods, you also define the security group IDs to be applied.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Verify the CRD is present with this command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get crd securitygrouppolicies.vpcresources.k8s.aws&lt;/code&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The webhook watches SecurityGroupPolicy custom resources for any changes, and automatically injects matching pods with the extended resource request required for the pod to be scheduled onto a node with available branch network interface capacity. Once the pod is scheduled, the resource controller will create and attach a branch interface to the trunk interface. Upon successful attachment, the controller adds an annotation to the pod object with the branch interface details.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Next, create the policy configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; sg-per-pod-policy.yaml
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: allow-rds-access
spec:
  podSelector:
    matchLabels:
      app: green-pod
  securityGroups:
    groupIds:
      - ${POD_SG}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, deploy the policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f sg-per-pod-policy.yaml
kubectl describe securitygrouppolicy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create Secret for DB Access&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret generic rds --from-literal="password=&amp;lt;RDS_PASSWORD&amp;gt;" --from-literal="host=&amp;lt;RDS_ENDPOINT&amp;gt;"

kubectl describe secret rds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure you replace &lt;code&gt;RDS_PASSWORD&lt;/code&gt; and &lt;code&gt;RDS_ENDPOINT&lt;/code&gt; with the correct values for your RDS database.&lt;/p&gt;
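&lt;p&gt;Keep in mind that Kubernetes Secrets are only base64-encoded, not encrypted. The round trip can be demonstrated locally with a made-up endpoint (on the cluster, the equivalent would be &lt;code&gt;kubectl get secret rds -o jsonpath='{.data.host}' | base64 --decode&lt;/code&gt;):&lt;/p&gt;

```shell
# Hypothetical endpoint, used only to show the base64 round trip.
HOST="eks-demo-db.abc123.us-east-1.rds.amazonaws.com"

# This is how the value is stored in the Secret's .data field...
ENCODED=$(printf '%s' "${HOST}" | base64)

# ...and how it decodes back to plaintext.
DECODED=$(printf '%s' "${ENCODED}" | base64 --decode)
echo "${DECODED}"
```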

&lt;p&gt;&lt;strong&gt;Create Docker Image to Test RDS Connection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In order to test our connection to the database, we need to create a Docker image which we'll use to create our pods.&lt;/p&gt;

&lt;p&gt;First, we create a Python script that will handle this connection test:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;postgres_test.py&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

import boto3
import psycopg2
HOST = os.getenv('HOST')
PORT = "5432"
USER = os.getenv('USER')
REGION = "us-east-1"
DB_NAME = os.getenv('DB_NAME')
PASSWORD = os.getenv('PASSWORD')

session = boto3.Session()
client = boto3.client('rds', region_name=REGION)

conn = None
try:
    conn = psycopg2.connect(host=HOST, port=PORT, database=DB_NAME, user=USER, password=PASSWORD, connect_timeout=3)
    cur = conn.cursor()
    cur.execute("""SELECT version()""")
    query_results = cur.fetchone()
    print(query_results)
    cur.close()
except Exception as e:
    print("Database connection failed due to {}".format(e))
finally:
    if conn is not None:
        conn.close()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code connects to our RDS database and prints the version if successful, otherwise it prints an error message.&lt;/p&gt;

&lt;p&gt;Then, we create a Dockerfile which we'll use to build a Docker image:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dockerfile&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.8.5-slim-buster
ADD postgres_test.py /
RUN pip install psycopg2-binary boto3
CMD [ "python", "-u", "./postgres_test.py" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we build and push our Docker image to an ECR repo. Make sure you replace &lt;code&gt;&amp;lt;region&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;account_id&amp;gt;&lt;/code&gt; with appropriate values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t postgres-test .
aws ecr create-repository --repository-name postgres-test-demo
aws ecr get-login-password --region &amp;lt;region&amp;gt; | docker login --username AWS --password-stdin &amp;lt;account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com
docker tag postgres-test:latest &amp;lt;account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/postgres-test-demo:latest
docker push &amp;lt;account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/postgres-test-demo:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
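&lt;p&gt;Since the same image URI appears in several commands, building it once from variables reduces copy-paste errors. The account ID and region below are hypothetical:&lt;/p&gt;

```shell
# Hypothetical account and region -- substitute your own.
ACCOUNT_ID="123456789012"
REGION="us-east-1"

# ECR image URIs follow the pattern <account>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>.
ECR_IMAGE="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/postgres-test-demo:latest"
echo "${ECR_IMAGE}"

# The tag and push steps then become:
# docker tag postgres-test:latest "${ECR_IMAGE}"
# docker push "${ECR_IMAGE}"
```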



&lt;p&gt;We can then proceed to create our pod configuration files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;green-pod.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: green-pod
  labels:
    app: green-pod
spec:
  containers:
  - name: green-pod
    image: &amp;lt;account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/postgres-test-demo:latest
    env:
    - name: HOST
      valueFrom:
        secretKeyRef:
          name: rds
          key: host
    - name: DB_NAME
      value: eks_demo
    - name: USER
      value: postgres
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: rds
          key: password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;blue-pod.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: blue-pod
  labels:
    app: blue-pod
spec:
  containers:
  - name: blue-pod
    image: &amp;lt;account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/postgres-test-demo:latest
    env:
    - name: HOST
      valueFrom:
        secretKeyRef:
          name: rds
          key: host
    - name: DB_NAME
      value: eks_demo
    - name: USER
      value: postgres
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: rds
          key: password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then apply our configurations and check if the connections succeeded:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f green-pod.yaml -f blue-pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then check the status of your pods using:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get pod&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should see an output similar to this (the status could either be &lt;code&gt;Completed&lt;/code&gt; or &lt;code&gt;CrashLoopBackOff&lt;/code&gt;):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvy6lsmhswem8d18376ah.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvy6lsmhswem8d18376ah.png" alt=" " width="800" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can now check our pod's logs and see that the green pod logs the version of our RDS database, while the blue pod logs a timeout error:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2ca4prn0aqgpgsft3bi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2ca4prn0aqgpgsft3bi.png" alt=" " width="800" height="106"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To confirm that the green pod actually uses the pod security group we created, first describe the pod:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl describe pod green-pod&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should see that it has an annotation with an ENI ID:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxofvihfjqdhw87jg56z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frxofvihfjqdhw87jg56z.png" alt=" " width="800" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can go to the AWS EC2 console, look for &lt;code&gt;Network Interfaces&lt;/code&gt; under the &lt;code&gt;Network &amp;amp; Security&lt;/code&gt; menu to the left, then look for an interface whose ID matches the one we saw in the pod annotation. If you select that interface, you should be able to see that it is of type &lt;code&gt;branch&lt;/code&gt; and it has the &lt;code&gt;pod_sg&lt;/code&gt; security group attached to it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq9ph56dqpbuejb54awr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgq9ph56dqpbuejb54awr.png" alt=" " width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Configure Network Policies
&lt;/h1&gt;

&lt;p&gt;To use network policies in our cluster, we must first enable them in the VPC CNI addon's configuration.&lt;/p&gt;

&lt;p&gt;We can use the AWS CLI to get the version of our CNI addon. Replace &lt;code&gt;&amp;lt;cluster_name&amp;gt;&lt;/code&gt; with the name of your cluster:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws eks describe-addon --cluster-name &amp;lt;cluster_name&amp;gt; --addon-name vpc-cni --query addon.addonVersion --output text&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We then update the CNI addon's configuration to enable network policies. Replace &lt;code&gt;&amp;lt;cluster_name&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;addon_version&amp;gt;&lt;/code&gt; with the appropriate values:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws eks update-addon --cluster-name &amp;lt;cluster_name&amp;gt; --addon-name vpc-cni --addon-version &amp;lt;addon_version&amp;gt; --resolve-conflicts PRESERVE --configuration-values '{"enableNetworkPolicy": "true"}'&lt;/code&gt;&lt;/p&gt;
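&lt;p&gt;The update is asynchronous, so you can poll the addon until its status becomes &lt;code&gt;ACTIVE&lt;/code&gt; (same &lt;code&gt;&amp;lt;cluster_name&amp;gt;&lt;/code&gt; placeholder as above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks describe-addon --cluster-name &amp;lt;cluster_name&amp;gt; --addon-name vpc-cni --query addon.status --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;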

&lt;p&gt;With this done, we can now define network policies to limit access to our pods. Below is a diagram of what we're trying to accomplish:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fos5tj9aee5ko9dwdaii1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fos5tj9aee5ko9dwdaii1.png" alt=" " width="800" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take this network policy, for example (&lt;strong&gt;network-policy.yaml&lt;/strong&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-netpol
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: web2
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: web1
    ports:
    - protocol: TCP
      port: 80
  egress:
  - to:
    - podSelector:
        matchLabels:
          run: web1
    ports:
    - protocol: TCP
      port: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The policy applies to pods with the label &lt;code&gt;run: web2&lt;/code&gt;. It allows ingress from pods with the label &lt;code&gt;run: web1&lt;/code&gt; on port &lt;strong&gt;80&lt;/strong&gt;, and egress to those same &lt;code&gt;run: web1&lt;/code&gt; pods, also on port &lt;strong&gt;80&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Let's now apply our configuration to create our network policy:&lt;br&gt;
&lt;code&gt;kubectl apply -f network-policy.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We'll then create three pods to test this policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k run test --image nginx
k run web1 --image nginx
k run web2 --image nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three pods will be created, with the labels &lt;strong&gt;run: test&lt;/strong&gt;, &lt;strong&gt;run: web1&lt;/strong&gt;, and &lt;strong&gt;run: web2&lt;/strong&gt; respectively.&lt;/p&gt;

&lt;p&gt;We can then list these pods and check their IP addresses:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods -o wide&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then we can exec into each of these pods using the command below and run cURL commands from the shell to test the network policy. Replace &lt;code&gt;&amp;lt;pod_name&amp;gt;&lt;/code&gt; with the right pod name:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl exec -it &amp;lt;pod_name&amp;gt; -- /bin/sh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We can then curl the other pods and observe that only the &lt;strong&gt;web1&lt;/strong&gt; pod can reach the &lt;strong&gt;web2&lt;/strong&gt; pod's IP address, and the &lt;strong&gt;web2&lt;/strong&gt; pod can only reach the &lt;strong&gt;web1&lt;/strong&gt; pod. The &lt;strong&gt;test&lt;/strong&gt; and &lt;strong&gt;web1&lt;/strong&gt; pods can reach each other without any restrictions.&lt;/p&gt;
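&lt;p&gt;As a sketch, the checks could look like this (replace &lt;code&gt;&amp;lt;web2_ip&amp;gt;&lt;/code&gt; with the IP address from the previous command; the second call should time out):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Allowed: web1 can reach web2 on port 80
kubectl exec web1 -- curl -s --max-time 5 &amp;lt;web2_ip&amp;gt;

# Blocked by the policy: test cannot reach web2
kubectl exec test -- curl -s --max-time 5 &amp;lt;web2_ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;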

&lt;h1&gt;
  
  
  Pod Identity Federation
&lt;/h1&gt;

&lt;p&gt;The next thing we'll look at is pod identity.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;EKS Pod Identity makes it easy to use an IAM role across multiple clusters and simplifies policy management by enabling the reuse of permission policies across IAM roles.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When configuring our cluster in the &lt;a href="https://dev.to/aws-builders/provision-eks-cluster-with-terraform-terragrunt-github-actions-1c64"&gt;previous article&lt;/a&gt;, we installed the &lt;code&gt;eks-pod-identity-agent&lt;/code&gt; addon, which runs the EKS Pod Identity Agent and lets us use pod identity features.&lt;/p&gt;

&lt;p&gt;Let's confirm that the EKS Pod Identity Agent pods are running on our cluster:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods -n kube-system | grep 'eks-pod-identity-agent'&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We should see an output similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eks-pod-identity-agent-6s7rj          1/1     Running   0          137meks-pod-identity-agent-mrlm2          1/1     Running   0          135m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we'll create the IAM policy with the permissions that we want our pods to have. For our demo, we want full S3 permissions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;gt;eks-pi-policy.json &amp;lt;&amp;lt;EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then we create the IAM policy from that file:&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-policy --policy-name eks-pi-policy --policy-document file://eks-pi-policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll then create a service account for our pods to use. The IAM policy we created above will be attached to an IAM role, which will in turn be associated with our service account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;gt;pi-service-account.yaml &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pi-service-account
EOF

kubectl apply -f pi-service-account.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then create a trust policy file for our IAM role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;gt;pi-trust-relationship.json &amp;lt;&amp;lt;EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
            "Effect": "Allow",
            "Principal": {
                "Service": "pods.eks.amazonaws.com"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ]
        }
    ]
}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we create our IAM role, passing in the trust policy file and a description as arguments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-role --role-name eks-pi-role --assume-role-policy-document file://pi-trust-relationship.json --description "IAM role for EKS pod identities"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we attach the IAM policy to our role. Make sure you replace &lt;code&gt;&amp;lt;account_id&amp;gt;&lt;/code&gt; with the appropriate value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam attach-role-policy --role-name eks-pi-role --policy-arn=arn:aws:iam::&amp;lt;account_id&amp;gt;:policy/eks-pi-policy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we associate our cluster's pod identities with the IAM role we just created. Make sure you replace &lt;code&gt;&amp;lt;cluster_name&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;account_id&amp;gt;&lt;/code&gt; with appropriate values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks create-pod-identity-association --cluster-name &amp;lt;cluster_name&amp;gt; --role-arn arn:aws:iam::&amp;lt;account_id&amp;gt;:role/eks-pi-role --namespace default --service-account pi-service-account
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
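&lt;p&gt;To confirm that the association exists, we can list the cluster's pod identity associations (again replacing &lt;code&gt;&amp;lt;cluster_name&amp;gt;&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks list-pod-identity-associations --cluster-name &amp;lt;cluster_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;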



&lt;p&gt;To test our pod identities' access to AWS, we'll first create a pod called &lt;strong&gt;s3-pod&lt;/strong&gt; with the image &lt;strong&gt;amazon/aws-cli&lt;/strong&gt;. With this image, we can pass AWS CLI commands as arguments to our pod:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;s3-pod.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  labels:
    run: s3-pod
  name: s3-pod
spec:
  serviceAccountName: pi-service-account
  containers:
  - image: amazon/aws-cli
    name: s3-container
    args:
    - s3
    - ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then apply this configuration to create our pod:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f s3-pod.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then we confirm that the pod has the service account token file mount:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl describe pod s3-pod | grep AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We can also check its logs and confirm that it returned the list of S3 buckets in our account. This works because the service account &lt;strong&gt;pi-service-account&lt;/strong&gt; is associated with the IAM role that has full S3 permissions, and the arguments we pass to our container are &lt;code&gt;s3 ls&lt;/code&gt;, which lists these buckets.&lt;/p&gt;
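&lt;p&gt;To view those logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs s3-pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;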

&lt;p&gt;Below is sample output from the pod's logs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgm57rglaw45hub0v1ns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgm57rglaw45hub0v1ns.png" alt=" " width="800" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To confirm that the role's permissions are properly scoped, we'll create another pod that attempts to describe the EC2 instances in our account:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ec2-pod.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ec2-pod
  name: ec2-pod
spec:
  serviceAccountName: pi-service-account
  containers:
  - image: amazon/aws-cli
    name: ec2-container
    args:
    - ec2
    - describe-instances
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We apply this configuration to create the pod:&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f ec2-pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we check the pod's logs, we see that it failed to describe the EC2 instances because it lacks the required permissions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmuqti5c8wki1ia1p086.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmuqti5c8wki1ia1p086.png" alt=" " width="800" height="105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this, we've successfully configured our pods to assume IAM roles allowing them to interact with AWS services in our account.&lt;/p&gt;

&lt;p&gt;I hope you liked this article. If you have any questions or remarks, please feel free to leave a comment below.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>eks</category>
      <category>aws</category>
      <category>security</category>
    </item>
    <item>
      <title>Provision EKS Cluster with Terraform, Terragrunt &amp; GitHub Actions</title>
      <dc:creator>Stéphane Noutsa</dc:creator>
      <pubDate>Sat, 11 Jan 2025 19:42:53 +0000</pubDate>
      <link>https://forem.com/aws-builders/provision-eks-cluster-with-terraform-terragrunt-github-actions-1c64</link>
      <guid>https://forem.com/aws-builders/provision-eks-cluster-with-terraform-terragrunt-github-actions-1c64</guid>
      <description>&lt;p&gt;As cloud-native architectures continue to gain momentum, Kubernetes has emerged as the de facto standard for container orchestration. Amazon Elastic Kubernetes Service (EKS) is a popular managed Kubernetes service that simplifies the deployment and management of containerized applications on AWS. To streamline the process of provisioning an EKS cluster and automate infrastructure management, developers and DevOps teams often turn to tools like Terraform, Terragrunt, and GitHub Actions.&lt;/p&gt;

&lt;p&gt;In this article, we will explore the seamless integration of these tools to provision an EKS cluster on AWS, delving into the benefits of using them in combination, the key concepts involved, and the step-by-step process to set up an EKS cluster using infrastructure-as-code principles.&lt;/p&gt;

&lt;p&gt;Whether you are a developer, a DevOps engineer, or an infrastructure enthusiast, this article will serve as a comprehensive guide to help you leverage the power of Terraform, Terragrunt, and GitHub Actions in provisioning and managing your EKS clusters efficiently.&lt;br&gt;
Before diving in though, there are a few things to note.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;&lt;br&gt;
a) Given that we'll use Terraform and Terragrunt to provision our infrastructure, familiarity with these two is required to be able to follow along.&lt;br&gt;
b) Given that we'll use GitHub Actions to automate the provisioning of our infrastructure, familiarity with the tool is required to be able to follow along as well.&lt;br&gt;
c) Some basic understanding of Docker and container orchestration with Kubernetes will also help to follow along.&lt;/p&gt;

&lt;p&gt;These are the steps we'll follow to provision our EKS cluster:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Write Terraform code for building blocks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Write Terragrunt code to provision infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a GitHub Actions workflow and delegate the infrastructure provisioning task to it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add a GitHub Actions workflow job to destroy our infrastructure when we're done.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Below is a diagram of the VPC and its components that we'll create, bearing in mind that the control plane components will be deployed in an EKS-managed VPC:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fih92t2r2xyfte0bhkd04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fih92t2r2xyfte0bhkd04.png" alt="EKS cluster worker node VPC" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Write Terraform code for building blocks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each building block will have the following files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;main.tf
outputs.tf
provider.tf
variables.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll be using version 4.x of the AWS provider for Terraform, so the &lt;strong&gt;provider.tf&lt;/strong&gt; file will be the same in all building blocks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;provider.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt;= 1.4.2"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.0"
    }
  }
}

provider "aws" {
  access_key = var.AWS_ACCESS_KEY_ID
  secret_key = var.AWS_SECRET_ACCESS_KEY
  region     = var.AWS_REGION
  token      = var.AWS_SESSION_TOKEN
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The provider block references a few variables that every building block will declare in its &lt;strong&gt;variables.tf&lt;/strong&gt; file:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "AWS_ACCESS_KEY_ID" {
  type = string
}

variable "AWS_SECRET_ACCESS_KEY" {
  type = string
}

variable "AWS_SESSION_TOKEN" {
  type    = string
  default = null
}

variable "AWS_REGION" {
  type = string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So in the building blocks that follow, these variables won't be listed explicitly, but you should include them in each &lt;strong&gt;variables.tf&lt;/strong&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;a) VPC building block&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "vpc" {
  cidr_block                       = var.vpc_cidr
  instance_tenancy                 = var.instance_tenancy
  enable_dns_support               = var.enable_dns_support
  enable_dns_hostnames             = var.enable_dns_hostnames
  assign_generated_ipv6_cidr_block = var.assign_generated_ipv6_cidr_block

  tags = merge(var.vpc_tags, {
    Name = var.vpc_name
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "vpc_cidr" {
  type = string
}

variable "vpc_name" {
  type = string
}

variable "instance_tenancy" {
  type    = string
  default = "default"
}

variable "enable_dns_support" {
  type    = bool
  default = true
}

variable "enable_dns_hostnames" {
  type = bool
}

variable "assign_generated_ipv6_cidr_block" {
  type    = bool
  default = false
}

variable "vpc_tags" {
  type = map(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "vpc_id" {
  value = aws_vpc.vpc.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;b) Internet Gateway building block&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_internet_gateway" "igw" {
  vpc_id = var.vpc_id

  tags = merge(var.tags, {
    Name = var.name
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "vpc_id" {
  type = string
}

variable "name" {
  type = string
}

variable "tags" {
  type = map(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "igw_id" {
  value = aws_internet_gateway.igw.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;c) Route Table building block&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table" "route_tables" {
  for_each = { for rt in var.route_tables : rt.name =&amp;gt; rt }

  vpc_id = each.value.vpc_id

  dynamic "route" {
    for_each = { for route in each.value.routes : route.cidr_block =&amp;gt; route if each.value.is_igw_rt }

    content {
      cidr_block = route.value.cidr_block
      gateway_id = route.value.igw_id
    }
  }

  dynamic "route" {
    for_each = { for route in each.value.routes : route.cidr_block =&amp;gt; route if !each.value.is_igw_rt }

    content {
      cidr_block     = route.value.cidr_block
      nat_gateway_id = route.value.nat_gw_id
    }
  }

  tags = merge(each.value.tags, {
    Name = each.value.name
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "route_tables" {
  type = list(object({
    name      = string
    vpc_id    = string
    is_igw_rt = bool

    routes = list(object({
      cidr_block = string
      igw_id     = optional(string)
      nat_gw_id  = optional(string)
    }))

    tags = map(string)
  }))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "route_table_ids" {
  value = values(aws_route_table.route_tables)[*].id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
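&lt;p&gt;For illustration, here's a hypothetical input for this block that creates a public route table with a default route through an Internet Gateway (all IDs are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;route_tables = [
  {
    name      = "public-rt"
    vpc_id    = "vpc-0123456789abcdef0"
    is_igw_rt = true

    routes = [
      {
        cidr_block = "0.0.0.0/0"
        igw_id     = "igw-0123456789abcdef0"
      }
    ]

    tags = {
      Environment = "demo"
    }
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;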



&lt;p&gt;&lt;em&gt;&lt;strong&gt;d) Subnet building block&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create public subnets
resource "aws_subnet" "public_subnets" {
  for_each = { for subnet in var.subnets : subnet.name =&amp;gt; subnet if subnet.is_public }

  vpc_id                              = each.value.vpc_id
  cidr_block                          = each.value.cidr_block
  availability_zone                   = each.value.availability_zone
  map_public_ip_on_launch             = each.value.map_public_ip_on_launch
  private_dns_hostname_type_on_launch = each.value.private_dns_hostname_type_on_launch

  tags = merge(each.value.tags, {
    Name = each.value.name
  })
}

# Associate public subnets with their route table
resource "aws_route_table_association" "public_subnets" {
  for_each = { for subnet in var.subnets : subnet.name =&amp;gt; subnet if subnet.is_public }

  subnet_id      = aws_subnet.public_subnets[each.value.name].id
  route_table_id = each.value.route_table_id
}

# Create private subnets
resource "aws_subnet" "private_subnets" {
  for_each = { for subnet in var.subnets : subnet.name =&amp;gt; subnet if !subnet.is_public }

  vpc_id                              = each.value.vpc_id
  cidr_block                          = each.value.cidr_block
  availability_zone                   = each.value.availability_zone
  private_dns_hostname_type_on_launch = each.value.private_dns_hostname_type_on_launch

  tags = merge(each.value.tags, {
    Name = each.value.name
  })
}

# Associate private subnets with their route table
resource "aws_route_table_association" "private_subnets" {
  for_each = { for subnet in var.subnets : subnet.name =&amp;gt; subnet if !subnet.is_public }

  subnet_id      = aws_subnet.private_subnets[each.value.name].id
  route_table_id = each.value.route_table_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "subnets" {
  type = list(object({
    name                                = string
    vpc_id                              = string
    cidr_block                          = string
    availability_zone                   = optional(string)
    map_public_ip_on_launch             = optional(bool, true)
    private_dns_hostname_type_on_launch = optional(string, "resource-name")
    is_public                           = optional(bool, true)
    route_table_id                      = string
    tags                                = map(string)
  }))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "public_subnets" {
  value = values(aws_subnet.public_subnets)[*].id
}

output "private_subnets" {
  value = values(aws_subnet.private_subnets)[*].id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
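&lt;p&gt;A hypothetical input for a single public subnet (the IDs are placeholders) could look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;subnets = [
  {
    name              = "public-subnet-1"
    vpc_id            = "vpc-0123456789abcdef0"
    cidr_block        = "10.0.1.0/24"
    availability_zone = "eu-west-1a"
    is_public         = true
    route_table_id    = "rtb-0123456789abcdef0"

    tags = {
      Environment = "demo"
    }
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;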



&lt;p&gt;&lt;em&gt;&lt;strong&gt;e) Elastic IP building block&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_eip" "eip" {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "eip_id" {
  value = aws_eip.eip.allocation_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;f) NAT Gateway building block&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_nat_gateway" "nat_gw" {
  allocation_id = var.eip_id
  subnet_id     = var.subnet_id

  tags = merge(var.tags, {
    Name = var.name
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "name" {
  type = string
}

variable "eip_id" {
  type = string
}

variable "subnet_id" {
  type        = string
  description = "The ID of the public subnet in which the NAT Gateway should be placed"
}

variable "tags" {
  type = map(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "nat_gw_id" {
  value = aws_nat_gateway.nat_gw.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;g) NACL&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_network_acl" "nacls" {
  for_each = { for nacl in var.nacls : nacl.name =&amp;gt; nacl }

  vpc_id = each.value.vpc_id

  dynamic "egress" {
    for_each = { for rule in each.value.egress : rule.rule_no =&amp;gt; rule }

    content {
      protocol   = egress.value.protocol
      rule_no    = egress.value.rule_no
      action     = egress.value.action
      cidr_block = egress.value.cidr_block
      from_port  = egress.value.from_port
      to_port    = egress.value.to_port
    }
  }

  dynamic "ingress" {
    for_each = { for rule in each.value.ingress : rule.rule_no =&amp;gt; rule }

    content {
      protocol   = ingress.value.protocol
      rule_no    = ingress.value.rule_no
      action     = ingress.value.action
      cidr_block = ingress.value.cidr_block
      from_port  = ingress.value.from_port
      to_port    = ingress.value.to_port
    }
  }

  tags = merge(each.value.tags, {
    Name = each.value.name
  })
}

resource "aws_network_acl_association" "nacl_associations" {
  for_each = { for nacl in var.nacls : "${nacl.name}_${nacl.subnet_id}" =&amp;gt; nacl }

  network_acl_id = aws_network_acl.nacls[each.value.name].id
  subnet_id      = each.value.subnet_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "nacls" {
  type = list(object({
    name   = string
    vpc_id = string
    egress = list(object({
      protocol   = string
      rule_no    = number
      action     = string
      cidr_block = string
      from_port  = number
      to_port    = number
    }))
    ingress = list(object({
      protocol   = string
      rule_no    = number
      action     = string
      cidr_block = string
      from_port  = number
      to_port    = number
    }))
    subnet_id = string
    tags      = map(string)
  }))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "nacls" {
  value = values(aws_network_acl.nacls)[*].id
}

output "nacl_associations" {
  value = values(aws_network_acl_association.nacl_associations)[*].id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
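&lt;p&gt;As a hypothetical example (IDs are placeholders), a NACL that allows inbound HTTP and outbound traffic on ephemeral ports could be described like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nacls = [
  {
    name   = "public-nacl"
    vpc_id = "vpc-0123456789abcdef0"

    ingress = [
      {
        protocol   = "tcp"
        rule_no    = 100
        action     = "allow"
        cidr_block = "0.0.0.0/0"
        from_port  = 80
        to_port    = 80
      }
    ]

    egress = [
      {
        protocol   = "tcp"
        rule_no    = 100
        action     = "allow"
        cidr_block = "0.0.0.0/0"
        from_port  = 1024
        to_port    = 65535
      }
    ]

    subnet_id = "subnet-0123456789abcdef0"
    tags      = { Environment = "demo" }
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;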



&lt;p&gt;&lt;em&gt;&lt;strong&gt;h) Security Group building block&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "security_group" {
  name        = var.name
  description = var.description
  vpc_id      = var.vpc_id

  # Ingress rules
  dynamic "ingress" {
    for_each = var.ingress_rules
    content {
      from_port   = ingress.value.from_port
      to_port     = ingress.value.to_port
      protocol    = ingress.value.protocol
      cidr_blocks = ingress.value.cidr_blocks
    }
  }

  # Egress rules
  dynamic "egress" {
    for_each = var.egress_rules
    content {
      from_port   = egress.value.from_port
      to_port     = egress.value.to_port
      protocol    = egress.value.protocol
      cidr_blocks = egress.value.cidr_blocks
    }
  }

  tags = merge(var.tags, {
    Name = var.name
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "vpc_id" {
  type = string
}

variable "name" {
  type = string
}

variable "description" {
  type = string
}

variable "ingress_rules" {
  type = list(object({
    protocol    = string
    from_port   = string
    to_port     = string
    cidr_blocks = list(string)
  }))
  default = []
}

variable "egress_rules" {
  type = list(object({
    protocol    = string
    from_port   = string
    to_port     = string
    cidr_blocks = list(string)
  }))
  default = []
}

variable "tags" {
  type = map(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "security_group_id" {
  value = aws_security_group.security_group.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
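
&lt;p&gt;For illustration, the Security Group building block could be consumed from plain Terraform like this (the module source path and IDs are hypothetical), allowing SSH from a single admin CIDR and all outbound traffic:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "bastion_sg" {
  source = "./modules/security-group" # hypothetical local path

  vpc_id      = "vpc-0123456789abcdef0" # replace with your VPC ID
  name        = "bastion-sg"
  description = "Allow SSH to the bastion host"

  ingress_rules = [
    {
      protocol    = "tcp"
      from_port   = 22
      to_port     = 22
      cidr_blocks = ["203.0.113.0/24"] # example admin CIDR
    }
  ]

  egress_rules = [
    {
      protocol    = "-1" # all protocols
      from_port   = 0
      to_port     = 0
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]

  tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;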



&lt;p&gt;&lt;em&gt;&lt;strong&gt;i) EC2 building block&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# AMI
data "aws_ami" "ami" {
  most_recent = var.most_recent_ami
  owners      = var.owners

  filter {
    name   = var.ami_name_filter
    values = var.ami_values_filter
  }
}

# EC2 Instance
resource "aws_instance" "ec2_instance" {
  ami                         = data.aws_ami.ami.id
  iam_instance_profile        = var.use_instance_profile ? var.instance_profile_name : null
  instance_type               = var.instance_type
  subnet_id                   = var.subnet_id
  vpc_security_group_ids      = var.existing_security_group_ids
  associate_public_ip_address = var.assign_public_ip
  key_name                    = var.uses_ssh ? var.keypair_name : null
  user_data                   = var.use_userdata ? file(var.userdata_script_path) : null
  user_data_replace_on_change = var.use_userdata ? var.user_data_replace_on_change : null

  tags = merge(
    {
      Name = var.instance_name
    },
    var.extra_tags
  )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "most_recent_ami" {
  type = bool
}

variable "owners" {
  type    = list(string)
  default = ["amazon"]
}

variable "ami_name_filter" {
  type    = string
  default = "name"
}

variable "ami_values_filter" {
  type    = list(string)
  default = ["al2023-ami-2023.*-x86_64"]
}

variable "use_instance_profile" {
  type    = bool
  default = false
}

variable "instance_profile_name" {
  type = string
}

variable "instance_name" {
  description = "Name of the instance"
  type        = string
}

variable "subnet_id" {
  description = "ID of the subnet"
  type        = string
}

variable "instance_type" {
  description = "Type of EC2 instance"
  type        = string
  default     = "t2.micro"
}

variable "assign_public_ip" {
  type    = bool
  default = true
}

variable "extra_tags" {
  description = "Additional tags for EC2 instances"
  type        = map(string)
  default     = {}
}

variable "existing_security_group_ids" {
  description = "security group IDs for EC2 instances"
  type        = list(string)
}

variable "uses_ssh" {
  type = bool
}

variable "keypair_name" {
  type = string
}
variable "use_userdata" {
  description = "Whether to use userdata"
  type        = bool
  default     = false
}

variable "userdata_script_path" {
  description = "Path to the userdata script"
  type        = string
}

variable "user_data_replace_on_change" {
  type = bool
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "instance_id" {
  value = aws_instance.ec2_instance.id
}

output "instance_arn" {
  value = aws_instance.ec2_instance.arn
}

output "instance_private_ip" {
  value = aws_instance.ec2_instance.private_ip
}

output "instance_public_ip" {
  value = aws_instance.ec2_instance.public_ip
}

output "instance_public_dns" {
  value = aws_instance.ec2_instance.public_dns
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
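
&lt;p&gt;A minimal invocation of the EC2 building block might look like this (the source path and IDs are placeholders). The optional features are toggled off here, but their companion variables still need values because they have no defaults:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "bastion" {
  source = "./modules/ec2" # hypothetical local path

  most_recent_ami             = true
  instance_name               = "bastion"
  subnet_id                   = "subnet-0123456789abcdef0"  # replace with your subnet ID
  existing_security_group_ids = ["sg-0123456789abcdef0"]    # replace with your SG ID

  # Optional features disabled; their companion variables still need values
  use_instance_profile        = false
  instance_profile_name       = ""
  uses_ssh                    = false
  keypair_name                = ""
  use_userdata                = false
  userdata_script_path        = ""
  user_data_replace_on_change = false
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;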



&lt;p&gt;&lt;em&gt;&lt;strong&gt;j) IAM Role building block&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_iam_policy_document" "assume_role" {
  statement {
    effect = "Allow"

    dynamic "principals" {
      for_each = { for principal in var.principals : principal.type =&amp;gt; principal }
      content {
        type        = principals.value.type
        identifiers = principals.value.identifiers
      }
    }

    actions = ["sts:AssumeRole"]

    dynamic "condition" {
      for_each = var.is_external ? [var.condition] : []

      content {
        test     = condition.value.test
        variable = condition.value.variable
        values   = condition.value.values
      }
    }
  }
}

data "aws_iam_policy_document" "policy_document" {
  dynamic "statement" {
    for_each = { for statement in var.policy_statements : statement.sid =&amp;gt; statement }

    content {
      effect    = "Allow"
      actions   = statement.value.actions
      resources = statement.value.resources

      dynamic "condition" {
        for_each = statement.value.has_condition ? [statement.value.condition] : []

        content {
          test     = condition.value.test
          variable = condition.value.variable
          values   = condition.value.values
        }
      }
    }
  }
}

resource "aws_iam_role" "role" {
  name               = var.role_name
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

resource "aws_iam_role_policy" "policy" {
  count = length(var.policy_statements) &amp;gt; 0 &amp;amp;&amp;amp; var.policy_name != "" ? 1 : 0

  name   = var.policy_name
  role   = aws_iam_role.role.id
  policy = data.aws_iam_policy_document.policy_document.json
}

resource "aws_iam_role_policy_attachment" "attachment" {
  for_each = { for attachment in var.policy_attachments : attachment.arn =&amp;gt; attachment }

  policy_arn = each.value.arn
  role       = aws_iam_role.role.name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "principals" {
  type = list(object({
    type        = string
    identifiers = list(string)
  }))
}

variable "is_external" {
  type    = bool
  default = false
}

variable "condition" {
  type = object({
    test     = string
    variable = string
    values   = list(string)
  })

  default = {
    test     = "test"
    variable = "variable"
    values   = ["values"]
  }
}

variable "role_name" {
  type = string
}

variable "policy_name" {
  type = string
}

variable "policy_attachments" {
  type = list(object({
    arn = string
  }))

  default = []
}

variable "policy_statements" {
  type = list(object({
    sid           = string
    actions       = list(string)
    resources     = list(string)
    has_condition = optional(bool, false)
    condition = optional(object({
      test     = string
      variable = string
      values   = list(string)
    }))
  }))

  default = [
    {
      sid = "CloudWatchLogsPermissions"
      actions = [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents",
        "logs:GetLogEvents",
        "logs:FilterLogEvents",
      ],
      resources = ["*"]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "role_arn" {
  value = aws_iam_role.role.arn
}

output "role_name" {
  value = aws_iam_role.role.name
}

output "unique_id" {
  value = aws_iam_role.role.unique_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
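
&lt;p&gt;As an example, a role assumable by EC2 that keeps the module's default CloudWatch Logs policy statements could be instantiated like this (the source path is hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "bastion_role" {
  source = "./modules/iam-role" # hypothetical local path

  role_name   = "bastion-role"
  policy_name = "bastion-logs-policy"

  principals = [
    {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  ]

  # policy_statements keeps its CloudWatch Logs default,
  # and is_external stays false since no external condition is needed
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;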



&lt;p&gt;&lt;em&gt;&lt;strong&gt;k) Instance Profile building block&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Instance Profile
resource "aws_iam_instance_profile" "instance_profile" {
  name = var.instance_profile_name
  path = var.path
  role = var.iam_role_name

  tags = merge(var.instance_profile_tags, {
    Name = var.instance_profile_name
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "instance_profile_name" {
  type        = string
  description = "(Optional, Forces new resource) Name of the instance profile. If omitted, Terraform will assign a random, unique name. Conflicts with name_prefix. Can be a string of characters consisting of upper and lowercase alphanumeric characters and these special characters: _, +, =, ,, ., @, -. Spaces are not allowed."
}

variable "iam_role_name" {
  type        = string
  description = "(Optional) Name of the role to add to the profile."
}

variable "path" {
  type        = string
  default     = "/"
  description = "(Optional, default ' / ') Path to the instance profile. For more information about paths, see IAM Identifiers in the IAM User Guide. Can be a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. Can include any ASCII character from the ! (\u0021) through the DEL character (\u007F), including most punctuation characters, digits, and upper and lowercase letters."
}

variable "instance_profile_tags" {
  type = map(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "arn" {
  value = aws_iam_instance_profile.instance_profile.arn
}

output "name" {
  value = aws_iam_instance_profile.instance_profile.name
}

output "id" {
  value = aws_iam_instance_profile.instance_profile.id
}

output "unique_id" {
  value = aws_iam_instance_profile.instance_profile.unique_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;l) EKS Cluster building block&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# EKS Cluster
resource "aws_eks_cluster" "cluster" {
  name                      = var.name
  enabled_cluster_log_types = var.enabled_cluster_log_types
  role_arn                  = var.cluster_role_arn
  version                   = var.cluster_version

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "name" {
  type        = string
  description = "(Required) Name of the cluster. Must be between 1-100 characters in length. Must begin with an alphanumeric character, and must only contain alphanumeric characters, dashes and underscores (^[0-9A-Za-z][A-Za-z0-9\\-_]+$)."
}

variable "enabled_cluster_log_types" {
  type        = list(string)
  description = "(Optional) List of the desired control plane logging to enable."
  default     = []
}

variable "cluster_role_arn" {
  type        = string
  description = "(Required) ARN of the IAM role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf."
}

variable "subnet_ids" {
  type        = list(string)
  description = "(Required) List of subnet IDs. Must be in at least two different availability zones. Amazon EKS creates cross-account elastic network interfaces in these subnets to allow communication between your worker nodes and the Kubernetes control plane."
}

variable "cluster_version" {
  type        = string
  description = "(Optional) Desired Kubernetes master version. If you do not specify a value, the latest available version at resource creation is used and no upgrades will occur except those automatically triggered by EKS. The value must be configured and increased to upgrade the version when desired. Downgrades are not supported by EKS."
  default     = null
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "arn" {
  value = aws_eks_cluster.cluster.arn
}

output "endpoint" {
  value = aws_eks_cluster.cluster.endpoint
}

output "id" {
  value = aws_eks_cluster.cluster.id
}

output "kubeconfig-certificate-authority-data" {
  value = aws_eks_cluster.cluster.certificate_authority[0].data
}

output "name" {
  value = aws_eks_cluster.cluster.name
}

output "oidc_tls_issuer" {
  value = aws_eks_cluster.cluster.identity[0].oidc[0].issuer
}

output "version" {
  value = aws_eks_cluster.cluster.version
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;m) EKS Add-ons building block&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# EKS Add-On
resource "aws_eks_addon" "addon" {
  for_each = { for addon in var.addons : addon.name =&amp;gt; addon }

  cluster_name  = var.cluster_name
  addon_name    = each.value.name
  addon_version = each.value.version
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "addons" {
  type = list(object({
    name    = string
    version = string
  }))
  description = "(Required) List of EKS add-ons to install, each specified by name and version."
}

variable "cluster_name" {
  type        = string
  description = "(Required) Name of the EKS Cluster. Must be between 1-100 characters in length. Must begin with an alphanumeric character, and must only contain alphanumeric characters, dashes and underscores (^[0-9A-Za-z][A-Za-z0-9\\-_]+$)."
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "arns" {
  value = values(aws_eks_addon.addon)[*].arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;n) EKS Node Group building block&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# EKS node group
resource "aws_eks_node_group" "node_group" {
  cluster_name    = var.cluster_name
  node_group_name = var.node_group_name
  node_role_arn   = var.node_role_arn
  subnet_ids      = var.subnet_ids
  version         = var.cluster_version
  ami_type        = var.ami_type
  capacity_type   = var.capacity_type
  disk_size       = var.disk_size
  instance_types  = var.instance_types

  scaling_config {
    desired_size = var.scaling_config.desired_size
    max_size     = var.scaling_config.max_size
    min_size     = var.scaling_config.min_size
  }

  update_config {
    max_unavailable            = var.update_config.max_unavailable
    max_unavailable_percentage = var.update_config.max_unavailable_percentage
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "cluster_name" {
  type        = string
  description = "(Required) Name of the EKS Cluster. Must be between 1-100 characters in length. Must begin with an alphanumeric character, and must only contain alphanumeric characters, dashes and underscores (^[0-9A-Za-z][A-Za-z0-9\\-_]+$)."
}

variable "node_group_name" {
  type        = string
  description = "(Optional) Name of the EKS Node Group. If omitted, Terraform will assign a random, unique name. Conflicts with node_group_name_prefix. The node group name can't be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters."
}

variable "node_role_arn" {
  type        = string
  description = "(Required) Amazon Resource Name (ARN) of the IAM Role that provides permissions for the EKS Node Group."
}

variable "scaling_config" {
  type = object({
    desired_size = number
    max_size     = number
    min_size     = number
  })

  default = {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  description = "(Required) Configuration block with scaling settings."
}

variable "subnet_ids" {
  type        = list(string)
  description = "(Required) Identifiers of EC2 Subnets to associate with the EKS Node Group. These subnets must have the following resource tag: kubernetes.io/cluster/CLUSTER_NAME (where CLUSTER_NAME is replaced with the name of the EKS Cluster)."
}

variable "update_config" {
  type = object({
    max_unavailable_percentage = optional(number)
    max_unavailable            = optional(number)
  })
}

variable "cluster_version" {
  type        = string
  description = "(Optional) Kubernetes version. Defaults to EKS Cluster Kubernetes version. Terraform will only perform drift detection if a configuration value is provided."
  default     = null
}

variable "ami_type" {
  type        = string
  description = "(Optional) Type of Amazon Machine Image (AMI) associated with the EKS Node Group. Valid values are: AL2_x86_64 | AL2_x86_64_GPU | AL2_ARM_64 | CUSTOM | BOTTLEROCKET_ARM_64 | BOTTLEROCKET_x86_64 | BOTTLEROCKET_ARM_64_NVIDIA | BOTTLEROCKET_x86_64_NVIDIA | WINDOWS_CORE_2019_x86_64 | WINDOWS_FULL_2019_x86_64 | WINDOWS_CORE_2022_x86_64 | WINDOWS_FULL_2022_x86_64 | AL2023_x86_64_STANDARD | AL2023_ARM_64_STANDARD"
  default     = "AL2023_x86_64_STANDARD"
}

variable "capacity_type" {
  type        = string
  description = "(Optional) Type of capacity associated with the EKS Node Group. Valid values: ON_DEMAND, SPOT."
  default     = "ON_DEMAND"
}

variable "disk_size" {
  type        = number
  description = "(Optional) Disk size in GiB for worker nodes. Defaults to 20."
  default     = 20
}

variable "instance_types" {
  type        = list(string)
  description = "(Optional) Set of instance types associated with the EKS Node Group. Defaults to [\"t3.medium\"]."
  default     = ["t3.medium"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "arn" {
  value = aws_eks_node_group.node_group.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;&lt;strong&gt;o) IAM OIDC building block (to allow pods to assume IAM roles)&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "tls_certificate" "tls" {
  url = var.oidc_issuer
}

resource "aws_iam_openid_connect_provider" "provider" {
  client_id_list  = var.client_id_list
  thumbprint_list = data.tls_certificate.tls.certificates[*].sha1_fingerprint
  url             = data.tls_certificate.tls.url
}

data "aws_iam_policy_document" "assume_role_policy" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.provider.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:kube-system:aws-node"]
    }

    principals {
      identifiers = [aws_iam_openid_connect_provider.provider.arn]
      type        = "Federated"
    }
  }
}

resource "aws_iam_role" "role" {
  assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
  name               = var.role_name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "role_name" {
  type        = string
  description = "(Required) Name of the IAM role."
}

variable "client_id_list" {
  type    = list(string)
  default = ["sts.amazonaws.com"]
}

variable "oidc_issuer" {
  type = string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "provider_arn" {
  value = aws_iam_openid_connect_provider.provider.arn
}

output "provider_id" {
  value = aws_iam_openid_connect_provider.provider.id
}

output "provider_url" {
  value = aws_iam_openid_connect_provider.provider.url
}

output "role_arn" {
  value = aws_iam_role.role.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the building blocks defined, we can now push them to GitHub repositories and reference them in the next step, where we develop our Terragrunt code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Write Terragrunt code to provision infrastructure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our Terragrunt code will have the following directory structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;infra-live/
  &amp;lt;environment&amp;gt;/
    &amp;lt;module_1&amp;gt;/
      terragrunt.hcl
    &amp;lt;module_2&amp;gt;/
      terragrunt.hcl
    ...
    &amp;lt;module_n&amp;gt;/
      terragrunt.hcl
  terragrunt.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this article, we'll only have a dev directory, containing one subdirectory per specific resource we want to create.&lt;/p&gt;

&lt;p&gt;Our final folder structure will be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;infra-live/
  dev/
    bastion-ec2/
      terragrunt.hcl
      user-data.sh
    bastion-instance-profile/
      terragrunt.hcl
    bastion-role/
      terragrunt.hcl
    eks-addons/
      terragrunt.hcl
    eks-cluster/
      terragrunt.hcl
    eks-cluster-role/
      terragrunt.hcl
    eks-node-group/
      terragrunt.hcl
    eks-pod-iam/
      terragrunt.hcl
    internet-gateway/
      terragrunt.hcl
    nacl/
      terragrunt.hcl
    nat-gateway/
      terragrunt.hcl
    nat-gw-eip/
      terragrunt.hcl
    private-route-table/
      terragrunt.hcl
    private-subnets/
      terragrunt.hcl
    public-route-table/
      terragrunt.hcl
    public-subnets/
      terragrunt.hcl
    security-group/
      terragrunt.hcl
    vpc/
      terragrunt.hcl
    worker-node-role/
      terragrunt.hcl
  .gitignore
  terragrunt.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
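
&lt;p&gt;After all the terragrunt.hcl files described below are in place, you can typically preview or apply the whole environment from its directory (assuming Terragrunt and Terraform are installed and AWS credentials are available in your shell):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd infra-live/dev

# Preview every module in dependency order
terragrunt run-all plan

# Apply every module in dependency order
terragrunt run-all apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;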



&lt;p&gt;&lt;strong&gt;a) infra-live/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our root &lt;strong&gt;terragrunt.hcl&lt;/strong&gt; file configures our remote Terraform state. We'll store the state file in an S3 bucket, whose name must be globally unique, and the bucket must exist before any Terragrunt configuration is applied. My bucket is in the N. Virginia region (us-east-1).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;generate "backend" {
  path      = "backend.tf"
  if_exists = "overwrite_terragrunt"
  contents = &amp;lt;&amp;lt;EOF
terraform {
  backend "s3" {
    bucket         = "&amp;lt;s3_bucket_name&amp;gt;"
    key            = "infra-live/${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
  }
}
EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure you replace &lt;strong&gt;&amp;lt;s3_bucket_name&amp;gt;&lt;/strong&gt; with the name of your own S3 bucket.&lt;/p&gt;
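
&lt;p&gt;If the bucket doesn't exist yet, you can create it up front with the AWS CLI; a minimal sketch for us-east-1 (the bucket name is a placeholder, and enabling versioning is optional but recommended for state files):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3api create-bucket --bucket my-terragrunt-state-bucket --region us-east-1

aws s3api put-bucket-versioning --bucket my-terragrunt-state-bucket \
  --versioning-configuration Status=Enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;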

&lt;p&gt;&lt;strong&gt;b) infra-live/dev/vpc/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;VPC building block&lt;/strong&gt; to create our VPC.&lt;br&gt;
Our VPC CIDR will be &lt;code&gt;10.0.0.0/16&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/vpc.git"
}

inputs = {
  vpc_cidr = "10.0.0.0/16"
  vpc_name = "eks-demo-vpc"
  enable_dns_hostnames = true
  vpc_tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The values passed in the inputs section are the variables that are defined in the building blocks.&lt;/p&gt;

&lt;p&gt;For this module and the following ones, we won't pass the variables &lt;strong&gt;AWS_ACCESS_KEY_ID&lt;/strong&gt;, &lt;strong&gt;AWS_SECRET_ACCESS_KEY&lt;/strong&gt;, and &lt;strong&gt;AWS_REGION&lt;/strong&gt; in code, since the first two are sensitive credentials. Instead, you'll add them as secrets in the GitHub repository you'll create to version your Terragrunt code.&lt;/p&gt;
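
&lt;p&gt;If you use the GitHub CLI, the secrets can be added from a terminal (the repository name below is a placeholder); otherwise, add them under the repository's Actions secrets settings in the GitHub UI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gh secret set AWS_ACCESS_KEY_ID --repo my-org/infra-live
gh secret set AWS_SECRET_ACCESS_KEY --repo my-org/infra-live
gh secret set AWS_REGION --repo my-org/infra-live --body "us-east-1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;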

&lt;p&gt;&lt;strong&gt;c) infra-live/dev/internet-gateway/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;Internet Gateway building block&lt;/strong&gt; as its Terraform source to create our VPC's internet gateway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/internet-gateway.git"
}

dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  vpc_id = dependency.vpc.outputs.vpc_id
  name = "eks-demo-igw"
  tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;d) infra-live/dev/public-route-table/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;Route Table building block&lt;/strong&gt; as its Terraform source to create our VPC's public route table to be associated with the public subnet we'll create next.&lt;/p&gt;

&lt;p&gt;It also adds a route to direct all internet traffic to the internet gateway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/route-table.git"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "igw" {
  config_path = "../internet-gateway"
}

inputs = {
  route_tables = [
    {
      name      = "eks-demo-public-rt"
      vpc_id    = dependency.vpc.outputs.vpc_id
      is_igw_rt = true

      routes = [
        {
          cidr_block = "0.0.0.0/0"
          igw_id     = dependency.igw.outputs.igw_id
        }
      ]

      tags = {}
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;e) infra-live/dev/public-subnets/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;Subnet building block&lt;/strong&gt; as its Terraform source to create our VPC's public subnets and associate them with the public route table.&lt;/p&gt;

&lt;p&gt;The main public subnet's CIDR will be 10.0.0.0/24; two additional public subnets (10.0.1.0/24 and 10.0.2.0/24) are reserved for RDS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/subnet.git"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "public-route-table" {
  config_path = "../public-route-table"
}

inputs = {
  subnets = [
    {
      name                                = "eks-demo-public-subnet"
      vpc_id                              = dependency.vpc.outputs.vpc_id
      cidr_block                          = "10.0.0.0/24"
      availability_zone                   = "us-east-1a"
      map_public_ip_on_launch             = true
      private_dns_hostname_type_on_launch = "resource-name"
      is_public                           = true
      route_table_id                      = dependency.public-route-table.outputs.route_table_ids[0]
      tags                                = {}
    },

    {
      name                                = "eks-demo-rds-subnet-a"
      vpc_id                              = dependency.vpc.outputs.vpc_id
      cidr_block                          = "10.0.1.0/24"
      availability_zone                   = "us-east-1a"
      map_public_ip_on_launch             = true
      private_dns_hostname_type_on_launch = "resource-name"
      is_public                           = true
      route_table_id                      = dependency.public-route-table.outputs.route_table_ids[0]
      tags                                = {}
    },

    {
      name                                = "eks-demo-rds-subnet-b"
      vpc_id                              = dependency.vpc.outputs.vpc_id
      cidr_block                          = "10.0.2.0/24"
      availability_zone                   = "us-east-1b"
      map_public_ip_on_launch             = true
      private_dns_hostname_type_on_launch = "resource-name"
      is_public                           = true
      route_table_id                      = dependency.public-route-table.outputs.route_table_ids[0]
      tags                                = {}
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;f) infra-live/dev/nat-gw-eip/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;Elastic IP building block&lt;/strong&gt; as its Terraform source to create a static IP in our VPC which we'll associate with the NAT gateway we'll create next.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/eip.git"
}

dependency "vpc" {
  config_path = "../vpc"
}

inputs = {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;g) infra-live/dev/nat-gateway/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;NAT Gateway building block&lt;/strong&gt; as its Terraform source to create a NAT gateway in our VPC's public subnet, with the previously created Elastic IP attached to it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/nat-gateway.git"
}

dependency "eip" {
  config_path = "../nat-gw-eip"
}

dependency "public-subnets" {
  config_path = "../public-subnets"
}

inputs = {
  eip_id = dependency.eip.outputs.eip_id
  subnet_id = dependency.public-subnets.outputs.public_subnets[0]
  name = "eks-demo-nat-gw"
  tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;h) infra-live/dev/private-route-table/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;Route Table building block&lt;/strong&gt; as its Terraform source to create our VPC's private route table to be associated with the private subnets we'll create next.&lt;/p&gt;

&lt;p&gt;It also adds a route to direct all internet traffic to the NAT gateway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/route-table.git"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "nat-gw" {
  config_path = "../nat-gateway"
}

inputs = {
  route_tables = [
    {
      name      = "eks-demo-private-rt"
      vpc_id    = dependency.vpc.outputs.vpc_id
      is_igw_rt = false

      routes = [
        {
          cidr_block = "0.0.0.0/0"
          nat_gw_id     = dependency.nat-gw.outputs.nat_gw_id
        }
      ]

      tags = {}
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;i) infra-live/dev/private-subnets/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;Subnet building block&lt;/strong&gt; as its Terraform source to create our VPC's private subnets and associate them with the private route table.&lt;/p&gt;

&lt;p&gt;The CIDRs for the app private subnets will be 10.0.100.0/24 (us-east-1a) and 10.0.200.0/24 (us-east-1b), and those for the DB private subnets will be 10.0.10.0/24 (us-east-1a) and 10.0.20.0/24 (us-east-1b).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/subnet.git"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "private-route-table" {
  config_path = "../private-route-table"
}

inputs = {
  subnets = [
    {
      name                                = "eks-demo-app-subnet-a"
      vpc_id                              = dependency.vpc.outputs.vpc_id
      cidr_block                          = "10.0.100.0/24"
      availability_zone                   = "us-east-1a"
      map_public_ip_on_launch             = false
      private_dns_hostname_type_on_launch = "resource-name"
      is_public                           = false
      route_table_id                      = dependency.private-route-table.outputs.route_table_ids[0]
      tags                                = {}
    },

    {
      name                                = "eks-demo-app-subnet-b"
      vpc_id                              = dependency.vpc.outputs.vpc_id
      cidr_block                          = "10.0.200.0/24"
      availability_zone                   = "us-east-1b"
      map_public_ip_on_launch             = false
      private_dns_hostname_type_on_launch = "resource-name"
      is_public                           = false
      route_table_id                      = dependency.private-route-table.outputs.route_table_ids[0]
      tags                                = {}
    },

    {
      name                                = "eks-demo-data-subnet-a"
      vpc_id                              = dependency.vpc.outputs.vpc_id
      cidr_block                          = "10.0.10.0/24"
      availability_zone                   = "us-east-1a"
      map_public_ip_on_launch             = false
      private_dns_hostname_type_on_launch = "resource-name"
      is_public                           = false
      route_table_id                      = dependency.private-route-table.outputs.route_table_ids[0]
      tags                                = {}
    },

    {
      name                                = "eks-demo-data-subnet-b"
      vpc_id                              = dependency.vpc.outputs.vpc_id
      cidr_block                          = "10.0.20.0/24"
      availability_zone                   = "us-east-1b"
      map_public_ip_on_launch             = false
      private_dns_hostname_type_on_launch = "resource-name"
      is_public                           = false
      route_table_id                      = dependency.private-route-table.outputs.route_table_ids[0]
      tags                                = {}
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;j) infra-live/dev/nacl/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;NACL building block&lt;/strong&gt; as its Terraform source to create NACLs for our public and private subnets.&lt;/p&gt;

&lt;p&gt;For the sake of simplicity, we'll configure very loose NACL and security group rules, but in the next blog post, we'll enforce security rules for the VPC and cluster.&lt;/p&gt;

&lt;p&gt;Note, though, that the data subnets' NACLs only allow inbound traffic on port 5432 (PostgreSQL) from the app subnet CIDRs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/nacl.git"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "public-subnets" {
  config_path = "../public-subnets"
}

dependency "private-subnets" {
  config_path = "../private-subnets"
}

inputs = {
  _vpc_id = dependency.vpc.outputs.vpc_id
  nacls = [
    # Public NACL
    {
      name   = "eks-demo-public-nacl"
      vpc_id = dependency.vpc.outputs.vpc_id
      egress = [
        {
          protocol = "-1"
          rule_no  = 500
          action   = "allow"
          cidr_block = "0.0.0.0/0"
          from_port = 0
          to_port   = 0
        }
      ]
      ingress = [
        {
          protocol = "-1"
          rule_no  = 100
          action   = "allow"
          cidr_block = "0.0.0.0/0"
          from_port = 0
          to_port   = 0
        }
      ]
      subnet_id = dependency.public-subnets.outputs.public_subnets[0]
      tags      = {}
    },

    # App NACL A
    {
      name   = "eks-demo-nacl-a"
      vpc_id = dependency.vpc.outputs.vpc_id
      egress = [
        {
          protocol = "-1"
          rule_no  = 100
          action   = "allow"
          cidr_block = "0.0.0.0/0"
          from_port = 0
          to_port   = 0
        }
      ]
      ingress = [
        {
          protocol = "-1"
          rule_no  = 100
          action   = "allow"
          cidr_block = "0.0.0.0/0"
          from_port = 0
          to_port   = 0
        }
      ]
      subnet_id = dependency.private-subnets.outputs.private_subnets[0]
      tags      = {}
    },

    # App NACL B
    {
      name   = "eks-demo-nacl-b"
      vpc_id = dependency.vpc.outputs.vpc_id
      egress = [
        {
          protocol = "-1"
          rule_no  = 100
          action   = "allow"
          cidr_block = "0.0.0.0/0"
          from_port = 0
          to_port   = 0
        }
      ]
      ingress = [
        {
          protocol = "-1"
          rule_no  = 100
          action   = "allow"
          cidr_block = "0.0.0.0/0"
          from_port = 0
          to_port   = 0
        }
      ]
      subnet_id = dependency.private-subnets.outputs.private_subnets[1]
      tags      = {}
    },

    # RDS NACL A
    {
      name   = "eks-demo-rds-nacl-a"
      vpc_id = dependency.vpc.outputs.vpc_id
      egress = [
        {
          protocol = "tcp"
          rule_no  = 100
          action   = "allow"
          cidr_block = "10.0.100.0/24"
          from_port = 1024
          to_port   = 65535
        },

        {
          protocol = "tcp"
          rule_no  = 200
          action   = "allow"
          cidr_block = "10.0.200.0/24"
          from_port = 1024
          to_port   = 65535
        },

        {
          protocol = "tcp"
          rule_no  = 300
          action   = "allow"
          cidr_block = "10.0.0.0/24"
          from_port = 1024
          to_port   = 65535
        }
      ]
      ingress = [
        {
          protocol = "tcp"
          rule_no  = 100
          action   = "allow"
          cidr_block = "10.0.100.0/24"
          from_port = 5432
          to_port   = 5432
        },

        {
          protocol = "tcp"
          rule_no  = 200
          action   = "allow"
          cidr_block = "10.0.200.0/24"
          from_port = 5432
          to_port   = 5432
        },

        {
          protocol = "tcp"
          rule_no  = 300
          action   = "allow"
          cidr_block = "10.0.0.0/24"
          from_port = 5432
          to_port   = 5432
        }
      ]
      subnet_id = dependency.private-subnets.outputs.private_subnets[2]
      tags      = {}
    },

    # RDS NACL B
    {
      name   = "eks-demo-rds-nacl-b"
      vpc_id = dependency.vpc.outputs.vpc_id
      egress = [
        {
          protocol = "tcp"
          rule_no  = 100
          action   = "allow"
          cidr_block = "10.0.100.0/24"
          from_port = 1024
          to_port   = 65535
        },

        {
          protocol = "tcp"
          rule_no  = 200
          action   = "allow"
          cidr_block = "10.0.200.0/24"
          from_port = 1024
          to_port   = 65535
        },

        {
          protocol = "tcp"
          rule_no  = 300
          action   = "allow"
          cidr_block = "10.0.0.0/24"
          from_port = 1024
          to_port   = 65535
        }
      ]
      ingress = [
        {
          protocol = "tcp"
          rule_no  = 100
          action   = "allow"
          cidr_block = "10.0.100.0/24"
          from_port = 5432
          to_port   = 5432
        },

        {
          protocol = "tcp"
          rule_no  = 200
          action   = "allow"
          cidr_block = "10.0.200.0/24"
          from_port = 5432
          to_port   = 5432
        },

        {
          protocol = "tcp"
          rule_no  = 300
          action   = "allow"
          cidr_block = "10.0.0.0/24"
          from_port = 5432
          to_port   = 5432
        }
      ]
      subnet_id = dependency.private-subnets.outputs.private_subnets[3]
      tags      = {}
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;k) infra-live/dev/security-group/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;Security Group building block&lt;/strong&gt; as its Terraform source to create a security group for our nodes and bastion host.&lt;/p&gt;

&lt;p&gt;Again, its rules are going to be very loose, but we'll correct that in the next article.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/security-group.git"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "public-subnets" {
  config_path = "../public-subnets"
}

dependency "private-subnets" {
  config_path = "../private-subnets"
}

inputs = {
  vpc_id = dependency.vpc.outputs.vpc_id
  name = "public-sg"
  description = "Open security group"
  ingress_rules = [
    {
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]
  egress_rules = [
    {
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]
  tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;l) infra-live/dev/bastion-role/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;IAM Role building block&lt;/strong&gt; as its Terraform source to create an IAM role with the permissions that our bastion host will need to perform EKS actions and to be managed by Systems Manager.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/iam-role.git"
}

inputs = {
  principals = [
    {
      type = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  ]
  policy_name = "EKSDemoBastionPolicy"
  policy_attachments = [
    {
      arn = "arn:aws:iam::534876755051:policy/AmazonEKSFullAccessPolicy"
    },
    {
      arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
    }
  ]
  policy_statements = []
  role_name = "EKSDemoBastionRole"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;m) infra-live/dev/bastion-instance-profile/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;Instance Profile building block&lt;/strong&gt; as its Terraform source to create an IAM instance profile for our bastion host. The IAM role created in the previous step is attached to this instance profile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/instance-profile.git"
}

dependency "iam-role" {
  config_path = "../bastion-role"
}

inputs = {
  instance_profile_name = "EKSBastionInstanceProfile"
  path = "/"
  iam_role_name = dependency.iam-role.outputs.role_name
  instance_profile_tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;n) infra-live/dev/bastion-ec2/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;EC2 building block&lt;/strong&gt; as its Terraform source to create an EC2 instance which we'll use as a jump box (or bastion host) to manage the worker nodes in our EKS cluster.&lt;/p&gt;

&lt;p&gt;The bastion host will be placed in our public subnet and will have the instance profile we created in the previous step attached to it, as well as our loose security group.&lt;/p&gt;

&lt;p&gt;It is a Linux instance of type t2.micro using the Amazon Linux 2023 AMI with a user data script configured. This script will be defined in the next step.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/ec2.git"
}

dependency "public-subnets" {
  config_path = "../public-subnets"
}

dependency "instance-profile" {
  config_path = "../bastion-instance-profile"
}

dependency "security-group" {
  config_path = "../security-group"
}

inputs = {
  instance_name = "eks-bastion-host"
  use_instance_profile = true
  instance_profile_name = dependency.instance-profile.outputs.name
  most_recent_ami = true
  owners = ["amazon"]
  ami_name_filter = "name"
  ami_values_filter = ["al2023-ami-2023.*-x86_64"]
  instance_type = "t2.micro"
  subnet_id = dependency.public-subnets.outputs.public_subnets[0]
  existing_security_group_ids = [dependency.security-group.outputs.security_group_id]
  assign_public_ip = true
  uses_ssh = false
  keypair_name = ""
  use_userdata = true
  userdata_script_path = "user-data.sh"
  user_data_replace_on_change = true
  extra_tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;o) infra-live/dev/bastion-ec2/user-data.sh&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This user data script installs the AWS CLI, as well as the &lt;code&gt;kubectl&lt;/code&gt; and &lt;code&gt;eksctl&lt;/code&gt; tools. It also configures an alias for the &lt;code&gt;kubectl&lt;/code&gt; utility (&lt;code&gt;k&lt;/code&gt;), and bash completion for it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Note: user data scripts already run as root, so no user switch is needed here

# Update software packages
sudo yum update -y

# Download AWS CLI package
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

# Unzip file
unzip -q awscliv2.zip

# Install AWS CLI
sudo ./aws/install

# Check AWS CLI version
aws --version

# Download kubectl binary
sudo curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Give the binary executable permissions
sudo chmod +x ./kubectl

# Move binary to directory in system’s path
sudo mv kubectl /usr/local/bin/
export PATH=/usr/local/bin:$PATH 

# Check kubectl version
kubectl version --client

# Installing kubectl bash completion on Linux
## If bash-completion is not installed on Linux, install the 'bash-completion' package
## via your distribution's package manager.
## Load the kubectl completion code for bash into the current shell
echo 'source &amp;lt;(kubectl completion bash)' &amp;gt;&amp;gt;~/.bash_profile
## Write bash completion code to a file and source it from .bash_profile
# kubectl completion bash &amp;gt; ~/.kube/completion.bash.inc
# printf "
# # kubectl shell completion
# source '$HOME/.kube/completion.bash.inc'
# " &amp;gt;&amp;gt; $HOME/.bash_profile
# source $HOME/.bash_profile

# Set bash completion for kubectl alias (k)
echo 'alias k=kubectl' &amp;gt;&amp;gt;~/.bashrc
echo 'complete -o default -F __start_kubectl k' &amp;gt;&amp;gt;~/.bashrc

source ~/.bashrc

# Get platform
ARCH=amd64
PLATFORM=$(uname -s)_$ARCH

# Download eksctl tool for platform
curl -sLO "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$PLATFORM.tar.gz"

# (Optional) Verify checksum
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_checksums.txt" | grep $PLATFORM | sha256sum --check

# Extract binary
tar -xzf eksctl_$PLATFORM.tar.gz -C /tmp &amp;amp;&amp;amp; rm eksctl_$PLATFORM.tar.gz

# Move binary to directory in system’s path
sudo mv /tmp/eksctl /usr/local/bin

# Check eksctl version
eksctl version

# Enable eksctl bash completion
. &amp;lt;(eksctl completion bash)

# Update system
sudo yum update -y

# Install Docker
sudo yum install docker -y

# Start Docker
sudo systemctl start docker

# Add ec2-user to docker group
sudo usermod -a -G docker ec2-user

# Activate the docker group in the current shell session
newgrp docker

# Enable Docker to start on boot
sudo systemctl enable docker

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;p) infra-live/dev/eks-cluster-role/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;IAM Role building block&lt;/strong&gt; as its Terraform source to create an IAM role for the EKS cluster. It has the managed policy AmazonEKSClusterPolicy attached to it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/iam-role.git"
}

inputs = {
  principals = [
    {
      type = "Service"
      identifiers = ["eks.amazonaws.com"]
    }
  ]
  policy_name = "EKSDemoClusterRolePolicy"
  policy_attachments = [
    {
      arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
    }
  ]
  policy_statements = []
  role_name = "EKSDemoClusterRole"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;q) infra-live/dev/eks-cluster/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;EKS Cluster building block&lt;/strong&gt; as its Terraform source to create an EKS cluster which uses the IAM role created in the previous step.&lt;/p&gt;

&lt;p&gt;The cluster will provision ENIs (Elastic Network Interfaces) in the private subnets we had created, which will be used by the EKS worker nodes.&lt;/p&gt;

&lt;p&gt;The cluster also has various cluster log types enabled for auditing purposes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/eks-cluster.git"
}

dependency "private-subnets" {
  config_path = "../private-subnets"
}

dependency "iam-role" {
  config_path = "../eks-cluster-role"
}

inputs = {
  name = "eks-demo"
  subnet_ids = [dependency.private-subnets.outputs.private_subnets[0], dependency.private-subnets.outputs.private_subnets[1]]
  cluster_role_arn = dependency.iam-role.outputs.role_arn
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;r) infra-live/dev/eks-addons/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;EKS Add-ons building block&lt;/strong&gt; as its Terraform source to activate add-ons for our EKS cluster.&lt;/p&gt;

&lt;p&gt;This is very important, as each add-on covers a core cluster function:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;vpc-cni&lt;/code&gt;: pod networking within the AWS VPC using native VPC features&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;coredns&lt;/code&gt;: cluster domain name resolution&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kube-proxy&lt;/code&gt;: network connectivity between services and pods in the cluster&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;eks-pod-identity-agent&lt;/code&gt;: management of IAM credentials in the cluster&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aws-ebs-csi-driver&lt;/code&gt;: lifecycle management of EBS volumes by EKS&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/eks-addon.git"
}

dependency "cluster" {
  config_path = "../eks-cluster"
}

inputs = {
  cluster_name = dependency.cluster.outputs.name
  addons = [
    {
      name = "vpc-cni"
      version = "v1.18.0-eksbuild.1"
    },
    {
      name = "coredns"
      version = "v1.11.1-eksbuild.6"
    },
    {
      name = "kube-proxy"
      version = "v1.29.1-eksbuild.2"
    },
    {
      name = "aws-ebs-csi-driver"
      version = "v1.29.1-eksbuild.1"
    },
    {
      name = "eks-pod-identity-agent"
      version = "v1.2.0-eksbuild.1"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;s) infra-live/dev/worker-node-role/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;IAM Role building block&lt;/strong&gt; as its Terraform source to create an IAM role for the EKS worker nodes.&lt;/p&gt;

&lt;p&gt;This role grants the node group permissions to carry out its operations within the cluster, and for its nodes to be managed by Systems Manager.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/iam-role.git"
}

inputs = {
  principals = [
    {
      type = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  ]
  policy_name = "EKSDemoWorkerNodePolicy"
  policy_attachments = [
    {
      arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
    },
    {
      arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
    },
    {
      arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
    },
    {
      arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
    },
    {
      arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
    }
  ]
  policy_statements = []
  role_name = "EKSDemoWorkerNodeRole"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;t) infra-live/dev/eks-node-group/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;EKS Node Group building block&lt;/strong&gt; as its Terraform source to create a node group in the cluster.&lt;/p&gt;

&lt;p&gt;The nodes in the node group will be provisioned in the VPC's private subnets, and we'll be using on-demand Linux instances of type &lt;code&gt;m5.4xlarge&lt;/code&gt; with the &lt;code&gt;AL2_x86_64&lt;/code&gt; AMI and a disk size of &lt;code&gt;20GB&lt;/code&gt;. We use an &lt;code&gt;m5.4xlarge&lt;/code&gt; instance because it supports ENI trunking, which we'll need in the next article to deploy pods and associate security groups with them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/eks-node-group.git"
}

dependency "cluster" {
  config_path = "../eks-cluster"
}

dependency "iam-role" {
  config_path = "../worker-node-role"
}

dependency "private-subnets" {
  config_path = "../private-subnets"
}

inputs = {
  cluster_name = dependency.cluster.outputs.name
  node_role_arn = dependency.iam-role.outputs.role_arn
  node_group_name = "eks-demo-node-group"
  scaling_config = {
    desired_size = 2
    max_size     = 4
    min_size     = 1
  }
  subnet_ids = [dependency.private-subnets.outputs.private_subnets[0], dependency.private-subnets.outputs.private_subnets[1]]
  update_config = {
    max_unavailable_percentage = 50
  }
  ami_type = "AL2_x86_64"
  capacity_type = "ON_DEMAND"
  disk_size = 20
  instance_types = ["m5.4xlarge"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;u) infra-live/dev/eks-pod-iam/terragrunt.hcl&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;IAM OIDC building block&lt;/strong&gt; as its Terraform source to create resources that will allow pods to assume IAM roles and communicate with other AWS services.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "git@github.com:&amp;lt;name_or_org&amp;gt;/iam-oidc.git"
}

dependency "cluster" {
  config_path = "../eks-cluster"
}

inputs = {
  role_name = "EKSDemoPodIAMAuth"
  oidc_issuer = dependency.cluster.outputs.oidc_tls_issuer
  client_id_list = ["sts.amazonaws.com"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
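&lt;p&gt;To illustrate how pods use this, once the OIDC provider exists, a pod can assume an IAM role through a Kubernetes service account annotated with the role's ARN (the IRSA mechanism). The account ID, role name, and service account name below are hypothetical placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-app
  namespace: default
  annotations:
    # Replace with the ARN of an IAM role whose trust policy
    # references the cluster's OIDC provider
    eks.amazonaws.com/role-arn: arn:aws:iam::&amp;lt;account_id&amp;gt;:role/&amp;lt;pod_role_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Pods that run with this service account receive temporary credentials for the role, without any long-lived keys stored in the cluster.&lt;/p&gt;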



&lt;p&gt;Having done all this, we now need to create a GitHub repository for our Terragrunt code and push our code to it. We should also configure repository secrets for our AWS credentials (&lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt;, &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt;, &lt;code&gt;AWS_DEFAULT_REGION&lt;/code&gt;) and an SSH private key (&lt;code&gt;SSH_PRIVATE_KEY&lt;/code&gt;) that we'll use to access the repositories containing our Terraform building blocks.&lt;/p&gt;
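&lt;p&gt;If you have the GitHub CLI installed and authenticated, the repository secrets can be created from a terminal instead of the web UI. The secret names below match the ones the workflow will reference; the values and the SSH key path are placeholders for your own:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run these from within a clone of your infra-live repository
gh secret set AWS_ACCESS_KEY_ID --body "&amp;lt;your_access_key_id&amp;gt;"
gh secret set AWS_SECRET_ACCESS_KEY --body "&amp;lt;your_secret_access_key&amp;gt;"
gh secret set AWS_DEFAULT_REGION --body "us-east-1"

# Read the private key from a file (adjust the path to your key)
gh secret set SSH_PRIVATE_KEY &amp;lt; ~/.ssh/id_ed25519
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;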

&lt;p&gt;Once that is done, we can proceed to create a GitHub Actions workflow to automate the provisioning of our infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Create a GitHub Actions workflow for Automated Infrastructure Provisioning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that our code has been versioned, we can write a workflow that will be triggered whenever we push code to the main branch (use whichever branch you prefer, like master).&lt;br&gt;
Ideally, this workflow should only be triggered after a pull request has been approved to merge to the main branch, but we'll keep it simple for illustration purposes.&lt;/p&gt;

&lt;p&gt;The first thing will be to create a &lt;code&gt;.github/workflows&lt;/code&gt; directory in the root of your &lt;code&gt;infra-live&lt;/code&gt; project. You can then create a YAML file within this &lt;code&gt;infra-live/.github/workflows&lt;/code&gt; directory called &lt;strong&gt;deploy.yml&lt;/strong&gt;, for example.&lt;/p&gt;

&lt;p&gt;We'll add the following code to our &lt;code&gt;infra-live/.github/workflows/deploy.yml&lt;/code&gt; file to handle the provisioning of our infrastructure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  terraform:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false

      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v

      - name: Apply Terraform changes
        run: |
          cd dev
          terragrunt run-all apply -auto-approve --terragrunt-non-interactive -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -var AWS_REGION=$AWS_DEFAULT_REGION
          cd bastion-ec2
          ip=$(terragrunt output instance_public_ip)
          echo "$ip"
          echo "$ip" &amp;gt; public_ip.txt
          cat public_ip.txt
          pwd
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down what this file does:&lt;/p&gt;

&lt;p&gt;a) The &lt;code&gt;name: Deploy&lt;/code&gt; line names our workflow &lt;strong&gt;Deploy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;b) The following lines of code tell GitHub to trigger this workflow whenever code is pushed to the main branch, or a pull request targeting the main branch is opened or updated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;c) Then we define our job called &lt;strong&gt;terraform&lt;/strong&gt; using the lines below, telling GitHub to use a runner that runs on the latest version of Ubuntu. Think of a runner as the GitHub server executing the commands in this workflow file for us:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  terraform:
    runs-on: ubuntu-latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;d) We then define a series of steps, i.e. blocks of commands that will be executed in order.&lt;br&gt;
The first step uses a GitHub action to check out our infra-live repository into the runner so that we can start working with it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Checkout repository
        uses: actions/checkout@v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step uses another GitHub action to help us easily set up SSH on the GitHub runner using the private key we had defined as a repository secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following step uses yet another GitHub action to help us easily install Terraform on the GitHub runner, specifying the exact version that we need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we use another step to execute a series of commands that install Terragrunt on the GitHub runner. We use the command &lt;code&gt;terragrunt -v&lt;/code&gt; to check the version of Terragrunt installed and confirm that the installation was successful:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we use a step to apply our Terraform changes, then we use a series of commands to retrieve the public IP address of our provisioned EC2 instance and save it to a file called &lt;strong&gt;public_ip.txt&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Apply Terraform changes
        run: |
          cd dev
          terragrunt run-all apply -auto-approve --terragrunt-non-interactive -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -var AWS_REGION=$AWS_DEFAULT_REGION
          cd bastion-ec2
          ip=$(terragrunt output instance_public_ip)
          echo "$ip"
          echo "$ip" &amp;gt; public_ip.txt
          cat public_ip.txt
          pwd
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that's it! We can now watch the pipeline get triggered when we push code to our main branch, and see how our EKS cluster gets provisioned.&lt;/p&gt;

&lt;p&gt;In the next article, we'll secure our cluster, then access our bastion host and get our hands dirty with real Kubernetes action!&lt;/p&gt;

&lt;p&gt;I hope you liked this article. If you have any questions or remarks, please feel free to leave a comment below.&lt;/p&gt;

&lt;p&gt;See you soon!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>devops</category>
      <category>iac</category>
    </item>
    <item>
      <title>Deploying a Containerized App to ECS Fargate Using a Private ECR Repo &amp; Terragrunt</title>
      <dc:creator>Stéphane Noutsa</dc:creator>
      <pubDate>Tue, 16 Jan 2024 00:59:06 +0000</pubDate>
      <link>https://forem.com/aws-builders/deploying-a-containerized-app-to-ecs-fargate-using-a-private-ecr-repo-terragrunt-5b8a</link>
      <guid>https://forem.com/aws-builders/deploying-a-containerized-app-to-ecs-fargate-using-a-private-ecr-repo-terragrunt-5b8a</guid>
      <description>&lt;p&gt;In past articles, we've focused a lot on deployments to servers (Amazon EC2 instances in AWS).&lt;br&gt;
However, in today's fast-paced and ever-evolving world of software development, containerization has become a popular choice for deploying applications due to its scalability, portability, and ease of management.&lt;/p&gt;

&lt;p&gt;Amazon ECS (Elastic Container Service), a highly scalable and fully managed container orchestration service provided by AWS, offers a robust platform for running and managing containers at scale.&lt;br&gt;
Amazon ECR (Elastic Container Registry), on the other hand, is an AWS-managed container image registry service that is secure, scalable, and reliable. It supports private repositories with resource-based permissions using AWS IAM, allowing IAM users and AWS services to securely access your container repositories and images.&lt;br&gt;
By leveraging the power of ECS and the security features of ECR, you can confidently push your containerized application to a private ECR repository, and deploy this application using ECS.&lt;/p&gt;

&lt;p&gt;In this step-by-step guide, we will walk through the process of deploying a containerized app to Amazon ECS using a Docker image stored in a private ECR repository.&lt;br&gt;
Here are some things to note, though, before we get started.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;&lt;br&gt;
a) Given that we'll use Terraform and Terragrunt to provision our infrastructure, familiarity with both tools is required to follow along. You can reference one of my &lt;a href="https://dev.to/aws-builders/terraform-terragrunt-to-deploy-a-web-server-with-amazon-ec2-bd9"&gt;previous articles&lt;/a&gt; for the basics.&lt;br&gt;
b) Given that we'll use GitHub Actions to automate the provisioning of our infrastructure, familiarity with that tool is required as well.&lt;br&gt;
c) A basic understanding of Docker and container orchestration will also help.&lt;/p&gt;

&lt;p&gt;These are the steps we'll follow to deploy our containerized app:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a private ECR repo and push a Docker image to it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Write code to provision infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Version our infrastructure code with GitHub.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a GitHub Actions workflow and delegate the infrastructure provisioning task to it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add a GitHub Actions workflow job to destroy our infrastructure when we're done.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;1. Create a private ECR repo and push a Docker image to it&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For simplicity, we'll create our ECR repo manually, and then push an Nginx image to it.&lt;/p&gt;

&lt;p&gt;a) Make sure you have the AWS CLI configured locally, and Docker installed as well.&lt;/p&gt;

&lt;p&gt;b) Pull the latest version of the nginx Docker image using the command below:&lt;br&gt;
&lt;code&gt;docker pull nginx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;c) Access the ECR console from the region you intend to create your ECS cluster.&lt;/p&gt;

&lt;p&gt;d) Select &lt;strong&gt;Repositories&lt;/strong&gt; under the &lt;strong&gt;Private registry&lt;/strong&gt; section in the sidebar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1j41ggefw41gfqmrfm9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1j41ggefw41gfqmrfm9o.png" alt="ECR private repository" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;e) Click on the &lt;strong&gt;Create repository&lt;/strong&gt; button then make sure the &lt;strong&gt;Private&lt;/strong&gt; radio option is selected.&lt;/p&gt;

&lt;p&gt;f) Enter your private ECR repository name, like &lt;strong&gt;ecs-demo&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhxvhwltod60sa1buffn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhxvhwltod60sa1buffn.png" alt="ECR private repository creation form" width="800" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;g) From your local device, run the following command to log in to your private ECR repo. Be sure to replace &lt;strong&gt;&amp;lt;region&amp;gt;&lt;/strong&gt; and &lt;strong&gt;&amp;lt;account_id&amp;gt;&lt;/strong&gt; with the appropriate values for you:&lt;br&gt;
&lt;code&gt;aws ecr get-login-password --region &amp;lt;region&amp;gt; | docker login --username AWS --password-stdin &amp;lt;account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;h) Tag the nginx image appropriately so that it can be pushed to your private ECR repo:&lt;br&gt;
&lt;code&gt;docker tag nginx:latest &amp;lt;account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/&amp;lt;repo_name&amp;gt;:latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;i) Push the newly tagged image to your private ECR repo:&lt;br&gt;
&lt;code&gt;docker push &amp;lt;account_id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/&amp;lt;repo_name&amp;gt;:latest&lt;/code&gt;&lt;/p&gt;
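&lt;p&gt;To make the placeholders concrete: the private registry hostname is just your account ID and region composed into a fixed pattern, and the full image reference appends the repo name and a tag. A quick local sketch, using hypothetical values:&lt;/p&gt;

```shell
# Hypothetical values -- substitute your own AWS account ID, region, and repo name.
ACCOUNT_ID=123456789012
REGION=us-east-1
REPO_NAME=ecs-demo

# The private ECR registry hostname follows this fixed pattern:
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# The full image reference is the registry, the repo name, and a tag:
IMAGE_URI="${REGISTRY}/${REPO_NAME}:latest"
echo "$IMAGE_URI"
```

This is the same string you pass to both the &lt;code&gt;docker tag&lt;/code&gt; and &lt;code&gt;docker push&lt;/code&gt; commands above.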

&lt;p&gt;&lt;strong&gt;2. Write code to provision infrastructure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/aws-builders/terraform-terragrunt-to-create-a-vpc-and-its-components-part-i-1hp7"&gt;this article&lt;/a&gt; we wrote Terraform code for most of the building blocks we'll be using now (VPC, Internet Gateway, Route Table, Subnet, NACL). We also wrote Terraform code for the &lt;strong&gt;Security Group&lt;/strong&gt; building block in &lt;a href="https://dev.to/aws-builders/terraform-terragrunt-to-deploy-a-web-server-with-amazon-ec2-bd9"&gt;this article&lt;/a&gt;.&lt;br&gt;
You can use those for reference, as in this article we'll focus on the building blocks for an &lt;strong&gt;IAM Role&lt;/strong&gt;, an &lt;strong&gt;ECS Cluster&lt;/strong&gt;, an &lt;strong&gt;ECS Task Definition&lt;/strong&gt;, and an &lt;strong&gt;ECS Service&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;One file shared by all building blocks is &lt;strong&gt;provider.tf&lt;/strong&gt;, shown below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;provider.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt;= 1.4.2"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.0"
    }
  }
}

provider "aws" {
  access_key = var.AWS_ACCESS_KEY_ID
  secret_key = var.AWS_SECRET_ACCESS_KEY
  region     = var.AWS_REGION
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now start writing the other Terraform code for our building blocks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a) IAM Role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The IAM role defines the permissions that the IAM entities assuming it will have.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "AWS_ACCESS_KEY_ID" {
  type = string
}

variable "AWS_SECRET_ACCESS_KEY" {
  type = string
}

variable "AWS_REGION" {
  type = string
}

variable "principals" {
  type = list(object({
    type        = string
    identifiers = list(string)
  }))
}

variable "is_external" {
  type    = bool
  default = false
}

variable "condition" {
  type = object({
    test     = string
    variable = string
    values   = list(string)
  })

  default = {
    test     = "test"
    variable = "variable"
    values   = ["values"]
  }
}

variable "role_name" {
  type = string
}

variable "policy_name" {
  type = string
}

variable "policy_statements" {
  type = list(object({
    sid       = string
    actions   = list(string)
    resources = list(string)
  }))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_iam_policy_document" "assume_role" {
  statement {
    effect = "Allow"

    dynamic "principals" {
      for_each = { for principal in var.principals : principal.type =&amp;gt; principal }
      content {
        type        = principals.value.type
        identifiers = principals.value.identifiers
      }
    }

    actions = ["sts:AssumeRole"]

    dynamic "condition" {
      for_each = var.is_external ? [var.condition] : []

      content {
        test     = condition.value.test
        variable = condition.value.variable
        values   = condition.value.values
      }
    }
  }
}

data "aws_iam_policy_document" "policy_document" {
  dynamic "statement" {
    for_each = { for statement in var.policy_statements : statement.sid =&amp;gt; statement }

    content {
      effect    = "Allow"
      actions   = statement.value.actions
      resources = statement.value.resources
    }
  }
}

resource "aws_iam_role" "role" {
  name               = var.role_name
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

resource "aws_iam_role_policy" "policy" {
  name   = var.policy_name
  role   = aws_iam_role.role.id
  policy = data.aws_iam_policy_document.policy_document.json
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "role_arn" {
  value = aws_iam_role.role.arn
}

output "role_name" {
  value = aws_iam_role.role.name
}

output "unique_id" {
  value = aws_iam_role.role.unique_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;b) ECS Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The ECS cluster is the main component where your containerized application will reside.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "AWS_ACCESS_KEY_ID" {
  type = string
}

variable "AWS_SECRET_ACCESS_KEY" {
  type = string
}

variable "AWS_REGION" {
  type = string
}

variable "name" {
  type        = string
  description = "(Required) Name of the cluster (up to 255 letters, numbers, hyphens, and underscores)"
}

variable "setting" {
  type = object({
    name  = optional(string, "containerInsights")
    value = optional(string, "enabled")
  })
  description = "(Optional) Configuration block(s) with cluster settings. For example, this can be used to enable CloudWatch Container Insights for a cluster."
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ECS Cluster
resource "aws_ecs_cluster" "cluster" {
  name = var.name

  setting {
    name  = var.setting.name
    value = var.setting.value
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "arn" {
  value = aws_ecs_cluster.cluster.arn
}

output "id" {
  value = aws_ecs_cluster.cluster.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;c) ECS Task Definition&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The ECS task definition is a blueprint for your application that describes the parameters and container(s) that form your application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "AWS_ACCESS_KEY_ID" {
  type = string
}

variable "AWS_SECRET_ACCESS_KEY" {
  type = string
}

variable "AWS_REGION" {
  type = string
}

variable "family" {
  type        = string
  description = "(Required) A unique name for your task definition."
}

variable "container_definitions_path" {
  type        = string
  description = "Path to a JSON file containing a list of valid container definitions"
}

variable "network_mode" {
  type        = string
  description = "(Optional) Docker networking mode to use for the containers in the task. Valid values are none, bridge, awsvpc, and host."
  default     = "awsvpc"
}

variable "compatibilities" {
  type        = list(string)
  description = "(Optional) Set of launch types required by the task. The valid values are EC2 and FARGATE."
  default     = ["FARGATE"]
}

variable "cpu" {
  type        = number
  description = "(Optional) Number of cpu units used by the task. If the requires_compatibilities is FARGATE this field is required."
  default     = null
}

variable "memory" {
  type        = number
  description = "(Optional) Amount (in MiB) of memory used by the task. If the requires_compatibilities is FARGATE this field is required."
  default     = null
}

variable "task_role_arn" {
  type        = string
  description = "(Optional) ARN of IAM role that allows your Amazon ECS container task to make calls to other AWS services."
  default     = null
}

variable "execution_role_arn" {
  type        = string
  description = "(Optional) ARN of the task execution role that the Amazon ECS container agent and the Docker daemon can assume."
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ECS Task Definition
resource "aws_ecs_task_definition" "task_definition" {
  family                   = var.family
  container_definitions    = file(var.container_definitions_path)
  network_mode             = var.network_mode
  requires_compatibilities = var.compatibilities
  cpu                      = var.cpu
  memory                   = var.memory
  task_role_arn            = var.task_role_arn
  execution_role_arn       = var.execution_role_arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "arn" {
  value = aws_ecs_task_definition.task_definition.arn
}

output "revision" {
  value = aws_ecs_task_definition.task_definition.revision
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;d) ECS Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The ECS service can be used to run and maintain a specified number of instances of a task definition simultaneously in an ECS cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "AWS_ACCESS_KEY_ID" {
  type = string
}

variable "AWS_SECRET_ACCESS_KEY" {
  type = string
}

variable "AWS_REGION" {
  type = string
}

variable "name" {
  type        = string
  description = "(Required) Name of the service (up to 255 letters, numbers, hyphens, and underscores)"
}

variable "cluster_arn" {
  type        = string
  description = "(Optional) ARN of an ECS cluster."
}

variable "task_definition_arn" {
  type        = string
  description = "(Optional) Family and revision (family:revision) or full ARN of the task definition that you want to run in your service. Required unless using the EXTERNAL deployment controller. If a revision is not specified, the latest ACTIVE revision is used."
}

variable "desired_count" {
  type        = number
  description = "(Optional) Number of instances of the task definition to place and keep running. Defaults to 0. Do not specify if using the DAEMON scheduling strategy."
}

variable "launch_type" {
  type        = string
  description = "(Optional) Launch type on which to run your service. The valid values are EC2, FARGATE, and EXTERNAL. Defaults to EC2."
  default     = "FARGATE"
}

variable "force_new_deployment" {
  type        = bool
  description = "(Optional) Enable to force a new task deployment of the service. This can be used to update tasks to use a newer Docker image with same image/tag combination (e.g., myimage:latest), roll Fargate tasks onto a newer platform version, or immediately deploy ordered_placement_strategy and placement_constraints updates."
  default     = true
}

variable "network_configuration" {
  type = object({
    subnets          = list(string)
    security_groups  = optional(list(string))
    assign_public_ip = optional(bool)
  })
  description = "(Optional) Network configuration for the service. This parameter is required for task definitions that use the awsvpc network mode to receive their own Elastic Network Interface, and it is not supported for other network modes."
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ECS Service
resource "aws_ecs_service" "service" {
  name                 = var.name
  cluster              = var.cluster_arn
  task_definition      = var.task_definition_arn
  desired_count        = var.desired_count
  launch_type          = var.launch_type
  force_new_deployment = var.force_new_deployment

  network_configuration {
    subnets          = var.network_configuration.subnets
    security_groups  = var.network_configuration.security_groups
    assign_public_ip = var.network_configuration.assign_public_ip
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;outputs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "arn" {
  value = aws_ecs_service.service.id
}

output "name" {
  value = aws_ecs_service.service.name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With all the building blocks in place, we can now write our Terragrunt code that will orchestrate the provisioning of our infrastructure.&lt;br&gt;
The code will have the following directory structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;infra-live/
  dev/
    ecs-cluster/
      terragrunt.hcl
    ecs-service/
      terragrunt.hcl
    ecs-task-definition/
      container-definitions.json
      terragrunt.hcl
    internet-gateway/
      terragrunt.hcl
    nacl/
      terragrunt.hcl
    public-route-table/
      terragrunt.hcl
    public-subnets/
      terragrunt.hcl
    security-group/
      terragrunt.hcl
    task-role/
      terragrunt.hcl
    vpc/
      terragrunt.hcl
  terragrunt.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
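&lt;p&gt;If you want to scaffold this tree quickly, a small shell loop can create the folders and empty files (module names taken from the listing above):&lt;/p&gt;

```shell
# Create the Terragrunt directory tree from the listing above.
mkdir -p infra-live/dev
touch infra-live/terragrunt.hcl

for module in ecs-cluster ecs-service ecs-task-definition internet-gateway nacl \
  public-route-table public-subnets security-group task-role vpc; do
  mkdir -p "infra-live/dev/${module}"
  touch "infra-live/dev/${module}/terragrunt.hcl"
done

# The task definition module also holds its container definitions JSON.
touch infra-live/dev/ecs-task-definition/container-definitions.json
```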



&lt;p&gt;Now we'll fill our files with appropriate code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Root terragrunt.hcl file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our root terragrunt.hcl file will contain the configuration for our remote Terraform state. We'll use an S3 bucket in AWS to store our Terraform state file; the bucket name must be globally unique for it to be created successfully. My S3 bucket is in the N. Virginia region (us-east-1).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;infra-live/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;generate "backend" {
  path      = "backend.tf"
  if_exists = "overwrite_terragrunt"
  contents = &amp;lt;&amp;lt;EOF
terraform {
  backend "s3" {
    bucket         = "&amp;lt;unique_bucket_name&amp;gt;"
    key            = "infra-live/${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
  }
}
EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NB:&lt;/strong&gt; Make sure to replace &lt;strong&gt;&amp;lt;unique_bucket_name&amp;gt;&lt;/strong&gt; with the name of the S3 bucket you created in your AWS account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a) VPC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At the core of it all, our ECS cluster components will reside within a VPC, which is why we need this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;infra-live/dev/vpc/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;git_repo_url&amp;gt;"
}

inputs = {
  vpc_cidr = "10.0.0.0/16"
  vpc_name = "vpc-dev"
  enable_dns_hostnames = true
  vpc_tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this Terragrunt file (and in the subsequent files), replace the terraform source value with the URL of the Git repository hosting your building block's code (we'll get to versioning our infrastructure code soon).&lt;/p&gt;
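&lt;p&gt;As an illustration, a Terragrunt source pointing at a module in a Git repository typically uses the &lt;code&gt;git::&lt;/code&gt; prefix, a double slash to select the module's subfolder, and a &lt;code&gt;ref&lt;/code&gt; parameter to pin a version. The repository, subfolder, and tag below are hypothetical:&lt;/p&gt;

```hcl
terraform {
  # Hypothetical repository and tag -- substitute your own module repo.
  source = "git::git@github.com:your-org/infra-modules.git//vpc?ref=v0.1.0"
}
```

Pinning a &lt;code&gt;ref&lt;/code&gt; keeps your environments reproducible even as the module repo evolves.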

&lt;p&gt;&lt;strong&gt;b) Internet Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;infra-live/dev/internet-gateway/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;git_repo_url&amp;gt;"
}

dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  vpc_id = dependency.vpc.outputs.vpc_id
  name = "igw-dev"
  tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;c) Public Route Table&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;infra-live/dev/public-route-table/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;git_repo_url&amp;gt;"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "igw" {
  config_path = "../internet-gateway"
}

inputs = {
  route_tables = [
    {
      name      = "public-rt-dev"
      vpc_id    = dependency.vpc.outputs.vpc_id
      is_igw_rt = true

      routes = [
        {
          cidr_block = "0.0.0.0/0"
          igw_id     = dependency.igw.outputs.igw_id
        }
      ]

      tags = {}
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;d) Public Subnets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;infra-live/dev/public-subnets/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;git_repo_url&amp;gt;"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "public-route-table" {
  config_path = "../public-route-table"
}

inputs = {
  subnets = [
    {
      name                                = "public-subnet"
      vpc_id                              = dependency.vpc.outputs.vpc_id
      cidr_block                          = "10.0.1.0/24"
      availability_zone                   = "us-east-1a"
      map_public_ip_on_launch             = true
      private_dns_hostname_type_on_launch = "resource-name"
      is_public                           = true
      route_table_id                      = dependency.public-route-table.outputs.route_table_ids[0]
      tags                                = {}
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;e) NACL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;infra-live/dev/nacl/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;git_repo_url&amp;gt;"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "public-subnets" {
  config_path = "../public-subnets"
}

inputs = {
  _vpc_id = dependency.vpc.outputs.vpc_id
  nacls = [
    {
      name   = "public-nacl"
      vpc_id = dependency.vpc.outputs.vpc_id
      egress = [
        {
          protocol   = "tcp"
          rule_no    = 100
          action     = "allow"
          cidr_block = "0.0.0.0/0"
          from_port  = 80
          to_port    = 80
        },

        {
          protocol   = "tcp"
          rule_no    = 200
          action     = "allow"
          cidr_block = "0.0.0.0/0"
          from_port  = 443
          to_port    = 443
        }
      ]
      ingress = [
        {
          protocol   = "tcp"
          rule_no    = 100
          action     = "allow"
          cidr_block = "0.0.0.0/0"
          from_port  = 80
          to_port    = 80
        },

        {
          protocol   = "tcp"
          rule_no    = 200
          action     = "allow"
          cidr_block = "0.0.0.0/0"
          from_port  = 443
          to_port    = 443
        }
      ]
      subnet_id = dependency.public-subnets.outputs.public_subnets[0]
      tags      = {}
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;f) Security Group&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;infra-live/dev/security-group/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;git_repo_url&amp;gt;"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "public-subnets" {
  config_path = "../public-subnets"
}

inputs = {
  vpc_id = dependency.vpc.outputs.vpc_id
  name = "public-sg"
  description = "Web security group"
  ingress_rules = [
    {
      protocol    = "tcp"
      from_port   = 80
      to_port     = 80
      cidr_blocks = ["0.0.0.0/0"]
    },

    {
      protocol    = "tcp"
      from_port   = 443
      to_port     = 443
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]
  egress_rules = [
    {
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]
  tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;g) Task Role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This IAM role grants ECS tasks the permissions they need to pull images from ECR and write logs to CloudWatch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;infra-live/dev/task-role/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;git_repo_url&amp;gt;"
}

inputs = {
  principals = [
    {
      type = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  ]
  role_name = "ECSTaskExecutionRole"
  policy_name = "ECRTaskExecutionPolicy"
  policy_statements = [
    {
      sid = "ECRPermissions"
      actions = [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:DescribeImages",
        "ecr:DescribeImageScanFindings",
        "ecr:DescribeRepositories",
        "ecr:GetAuthorizationToken",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetLifecyclePolicy",
        "ecr:GetLifecyclePolicyPreview",
        "ecr:GetRepositoryPolicy",
        "ecr:ListImages",
        "ecr:ListTagsForResource"
      ]
      resources = ["*"]
    },
    {
      sid = "CloudWatchLogsPermissions"
      actions = [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents",
        "logs:GetLogEvents",
        "logs:FilterLogEvents",
      ],
      resources = ["*"]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;h) ECS Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;infra-live/dev/ecs-cluster/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;git_repo_url&amp;gt;"
}

inputs = {
  name = "ecs-demo"
  setting = {
    name = "containerInsights"
    value = "enabled"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;i) ECS Task Definition&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The ECS task definition references a JSON file that contains the actual container definition configuration.&lt;br&gt;
Be sure to replace &lt;strong&gt;&amp;lt;ecr_image_uri&amp;gt;&lt;/strong&gt; with the actual URI of your Docker image in your private ECR repo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;infra-live/dev/ecs-task-definition/container-definitions.json&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  {
    "name": "ecs-demo",
    "image": &amp;lt;ecr_image_uri&amp;gt;,
    "cpu": 512,
    "memory": 2048,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "ecs-demo",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "ecs-demo"
      }
    }
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;infra-live/dev/ecs-task-definition/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = &amp;lt;git_repo_url&amp;gt;
}

dependency "task_role" {
  config_path = "../task-role"
}

inputs = {
  family = "ecs-demo-task-definition"
  container_definitions_path = "./container-definitions.json"
  network_mode = "awsvpc"
  compatibilities = ["FARGATE"]
  cpu = 512
  memory = 2048
  task_role_arn = dependency.task_role.outputs.role_arn
  execution_role_arn = dependency.task_role.outputs.role_arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
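&lt;p&gt;One thing to keep in mind: Fargate only accepts specific CPU/memory combinations. For 512 CPU units (0.5 vCPU), task memory must be between 1024 and 4096 MB in 1024 MB increments, so the 512/2048 pair above is valid. A small shell sketch of that rule:&lt;/p&gt;

```shell
# Hedged sketch of Fargate's sizing rule for cpu=512 (0.5 vCPU):
# memory must be 1024-4096 MB, in steps of 1024 MB.
cpu=512; memory=2048
if [ "$memory" -ge 1024 ]; then
  if [ "$memory" -le 4096 ]; then
    if [ $(( memory % 1024 )) -eq 0 ]; then
      echo "valid Fargate pair: cpu=$cpu memory=$memory"
    fi
  fi
fi
```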



&lt;p&gt;&lt;strong&gt;j) ECS Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The ECS service lets us determine how many instances of our task definition we want (&lt;strong&gt;desired_count&lt;/strong&gt;) and which launch type we want for our ECS tasks (&lt;strong&gt;EC2&lt;/strong&gt; or &lt;strong&gt;FARGATE&lt;/strong&gt;). We've selected &lt;strong&gt;FARGATE&lt;/strong&gt; as our launch type, since that's the focus of this article.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;infra-live/dev/ecs-service/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = &amp;lt;git_repo_url&amp;gt;
}

dependency "ecs_cluster" {
  config_path = "../ecs-cluster"
}

dependency "ecs_task_definition" {
  config_path = "../ecs-task-definition"
}

dependency "public_subnets" {
  config_path = "../public-subnets"
}

dependency "security_group" {
  config_path = "../security-group"
}

inputs = {
  name = "ecs-demo-service"
  cluster_arn = dependency.ecs_cluster.outputs.arn
  task_definition_arn = dependency.ecs_task_definition.outputs.arn
  desired_count = 2
  launch_type = "FARGATE"
  force_new_deployment = true
  network_configuration = {
    subnets = [dependency.public_subnets.outputs.public_subnets[0]]
    security_groups = [dependency.security_group.outputs.security_group_id]
    assign_public_ip = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Version our infrastructure code with GitHub&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can use &lt;a href="https://dev.to/aws-builders/ec2-configuration-using-ansible-github-actions-25bj"&gt;this article&lt;/a&gt; as a reference to create repositories for our building blocks' code and Terragrunt code.&lt;/p&gt;

&lt;p&gt;After versioning the building blocks, be sure to update the &lt;code&gt;terragrunt.hcl&lt;/code&gt; files' terraform source in the Terragrunt project with the GitHub URLs for the corresponding building blocks. You can then push these changes to your Terragrunt project's GitHub repo.&lt;/p&gt;
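&lt;p&gt;For example, a building block's &lt;code&gt;terraform&lt;/code&gt; block might end up looking like this (the organization, repository name, and tag below are placeholders, not values from this article):&lt;/p&gt;

```hcl
terraform {
  # Placeholder URL: substitute your own organization and repository.
  # The ref query parameter pins the module to a specific tag.
  source = "git::git@github.com:your-org/ecs-cluster-module.git?ref=v0.1.0"
}
```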

&lt;p&gt;&lt;strong&gt;4. GitHub Actions workflow for infrastructure provisioning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With our code written and versioned, we can now create a workflow that will be triggered whenever we push code to the main branch.&lt;/p&gt;

&lt;p&gt;We'll first need to configure some secrets in our GitHub &lt;code&gt;infra-live&lt;/code&gt; repository settings.&lt;br&gt;
Once again, you can use &lt;a href="https://dev.to/aws-builders/ec2-configuration-using-ansible-github-actions-25bj"&gt;this article&lt;/a&gt; for a step-by-step guide on how to do so.&lt;/p&gt;

&lt;p&gt;We can then create a &lt;code&gt;.github/workflows&lt;/code&gt; directory in the root directory of our &lt;code&gt;infra-live&lt;/code&gt; project, and then create a YAML file within this directory which we'll call &lt;code&gt;configure.yml&lt;/code&gt; (you can name it whatever you want, as long as it has a &lt;code&gt;.yml&lt;/code&gt; extension).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;infra-live/.github/workflows/configure.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Configure

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  apply:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false

      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v

      - name: Apply Terraform changes
        run: |
          cd dev
          terragrunt run-all apply -auto-approve --terragrunt-non-interactive -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -var AWS_REGION=$AWS_DEFAULT_REGION
        env:
          AWS_ACCESS_KEY_ID: ${{ vars.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ vars.AWS_DEFAULT_REGION }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So our &lt;code&gt;configure.yml&lt;/code&gt; workflow runs whenever code is pushed to the &lt;strong&gt;main&lt;/strong&gt; branch or a pull request is merged into it.&lt;br&gt;
We then have an &lt;code&gt;apply&lt;/code&gt; job, running on the latest version of Ubuntu, that checks out our &lt;code&gt;infra-live&lt;/code&gt; GitHub repo, sets up SSH on the GitHub runner so it can pull our building blocks' code from their various repositories, installs Terraform and Terragrunt, and finally applies our Terragrunt configuration.&lt;/p&gt;

&lt;p&gt;Here's some sample output from the execution of our pipeline after pushing code to the main branch:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55ifqzerhh0hsd9gfskk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55ifqzerhh0hsd9gfskk.png" alt="GitHub Actions workflow success" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below, we can see our service trying to spin up two tasks since our ECS service configuration has a &lt;code&gt;desired_count&lt;/code&gt; of &lt;strong&gt;2&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkh6jpimgmx3nc8b7d4k0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkh6jpimgmx3nc8b7d4k0.png" alt="ECS cluster" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5) GitHub Actions destroy job&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Having provisioned our infrastructure for illustration purposes, we may now want to destroy it all to avoid incurring costs.&lt;br&gt;
We can do so by adding a job to our GitHub Actions workflow whose task is to destroy the provisioned infrastructure, and configuring it to be triggered manually.&lt;/p&gt;

&lt;p&gt;We'll start by adding a &lt;code&gt;workflow_dispatch&lt;/code&gt; block to our &lt;code&gt;on&lt;/code&gt; block. This block also allows us to configure inputs whose values we can define when triggering the workflow manually.&lt;br&gt;
In our case, we define a &lt;code&gt;destroy&lt;/code&gt; input which is essentially a dropdown element with two options: &lt;strong&gt;true&lt;/strong&gt; and &lt;strong&gt;false&lt;/strong&gt;.&lt;br&gt;
Selecting &lt;strong&gt;true&lt;/strong&gt; should run the &lt;code&gt;destroy&lt;/code&gt; job, whereas selecting &lt;strong&gt;false&lt;/strong&gt; should run the &lt;code&gt;apply&lt;/code&gt; job.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
  workflow_dispatch:
    inputs:
      destroy:
        description: 'Run Terragrunt destroy command'
        required: true
        default: 'false'
        type: choice
        options:
          - true
          - false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We now need to add a condition to our &lt;code&gt;apply&lt;/code&gt; job so that it only runs if a) the &lt;strong&gt;destroy&lt;/strong&gt; input is undefined (i.e. the workflow was triggered by a push or pull request) or b) the &lt;strong&gt;destroy&lt;/strong&gt; input is set to &lt;strong&gt;false&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  apply:
    if: ${{ !inputs.destroy || inputs.destroy == 'false' }}
    runs-on: ubuntu-latest
    ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
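&lt;p&gt;GitHub evaluates these &lt;code&gt;if:&lt;/code&gt; expressions server-side, but the gating logic can be illustrated locally with a small bash sketch (illustrative only, not part of the workflow):&lt;/p&gt;

```shell
# Mirrors the workflow's gating: apply runs when the destroy input is
# absent (push/pull_request) or explicitly 'false'; destroy runs only
# when the input is 'true'.
gate() {
  destroy="$1"
  if [ -z "$destroy" ]; then echo "apply"; return; fi
  if [ "$destroy" = "false" ]; then echo "apply"; return; fi
  if [ "$destroy" = "true" ]; then echo "destroy"; fi
}
gate ""        # push event, no input: apply runs
gate "false"   # manual run with destroy=false: apply runs
gate "true"    # manual run with destroy=true: destroy runs
```

&lt;p&gt;Running the sketch prints &lt;code&gt;apply&lt;/code&gt;, &lt;code&gt;apply&lt;/code&gt;, then &lt;code&gt;destroy&lt;/code&gt;.&lt;/p&gt;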



&lt;p&gt;We can now add a &lt;code&gt;destroy&lt;/code&gt; job which will only run if we select &lt;strong&gt;true&lt;/strong&gt; as the value of our &lt;strong&gt;destroy&lt;/strong&gt; input.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;destroy:
    if: ${{ inputs.destroy == 'true' }}
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false

      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v

      - name: Destroy Terraform changes
        run: |
          cd dev
          terragrunt run-all destroy -auto-approve --terragrunt-non-interactive -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -var AWS_REGION=$AWS_DEFAULT_REGION
        env:
          AWS_ACCESS_KEY_ID: ${{ vars.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ vars.AWS_DEFAULT_REGION }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So our full &lt;code&gt;configure.yml&lt;/code&gt; file should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Configure

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
  workflow_dispatch:
    inputs:
      destroy:
        description: 'Run Terragrunt destroy command'
        required: true
        default: 'false'
        type: choice
        options:
          - true
          - false

jobs:
  apply:
    if: ${{ !inputs.destroy || inputs.destroy == 'false' }}
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false

      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v

      - name: Apply Terraform changes
        run: |
          cd dev
          terragrunt run-all apply -auto-approve --terragrunt-non-interactive -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -var AWS_REGION=$AWS_DEFAULT_REGION
        env:
          AWS_ACCESS_KEY_ID: ${{ vars.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ vars.AWS_DEFAULT_REGION }}

  destroy:
    if: ${{ inputs.destroy == 'true' }}
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false

      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v

      - name: Destroy Terraform changes
        run: |
          cd dev
          terragrunt run-all destroy -auto-approve --terragrunt-non-interactive -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -var AWS_REGION=$AWS_DEFAULT_REGION
        env:
          AWS_ACCESS_KEY_ID: ${{ vars.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ vars.AWS_DEFAULT_REGION }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then commit and push our code, and see the change in the GitHub interface when we go to our GitHub repo, select the &lt;strong&gt;Actions&lt;/strong&gt; tab, and select our &lt;strong&gt;Configure&lt;/strong&gt; workflow in the left sidebar menu (note that pushing the code to your main branch will still trigger the automatic execution of your pipeline).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9rorpr98o5cd4gqlf3l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9rorpr98o5cd4gqlf3l.png" alt="Manual workflow execution" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we select &lt;strong&gt;true&lt;/strong&gt; and click the green &lt;strong&gt;Run workflow&lt;/strong&gt; button, a pipeline will be executed, running just the &lt;code&gt;destroy&lt;/code&gt; job.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4407xr0mppfffnbic7n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4407xr0mppfffnbic7n.png" alt="Apply job skipped" width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the pipeline execution is done, you can check the AWS console to confirm that the ECS cluster and its components have been deleted.&lt;br&gt;
You could recreate the cluster by following the same approach, this time selecting &lt;strong&gt;false&lt;/strong&gt; instead of &lt;strong&gt;true&lt;/strong&gt; when triggering the workflow manually.&lt;/p&gt;

&lt;p&gt;And that's it! I hope this helps you in your tech journey.&lt;br&gt;
If you have any questions or remarks, feel free to leave them in the comments section.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>infrastructureascode</category>
      <category>docker</category>
    </item>
    <item>
      <title>EC2 Configuration using Ansible &amp; GitHub Actions</title>
      <dc:creator>Stéphane Noutsa</dc:creator>
      <pubDate>Sat, 13 Jan 2024 05:14:38 +0000</pubDate>
      <link>https://forem.com/aws-builders/ec2-configuration-using-ansible-github-actions-25bj</link>
      <guid>https://forem.com/aws-builders/ec2-configuration-using-ansible-github-actions-25bj</guid>
      <description>&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Some basic understanding of GitHub, GitHub Actions, Terragrunt, and Ansible is needed to be able to follow along.&lt;/li&gt;
&lt;li&gt;This article builds on my &lt;a href="https://dev.to/aws-builders/terraform-terragrunt-to-deploy-a-web-server-with-amazon-ec2-bd9"&gt;previous article&lt;/a&gt;, so to follow along you'll need to go through it first.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this article, we'll use GitHub Actions &amp;amp; Ansible to deploy a test web page to our provisioned EC2 instance and test that it worked using the instance's public DNS or public IP address.&lt;/p&gt;

&lt;p&gt;In the previous article, we provisioned an EC2 instance in a public subnet using Terraform &amp;amp; Terragrunt. We made sure that this instance would be accessible via SSH, and this is important because the Ansible host will need to connect to the instance via SSH to perform its configuration management tasks.&lt;/p&gt;

&lt;p&gt;These are the steps we'll need to follow to achieve our objectives:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Version our infrastructure code with GitHub.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a GitHub Actions workflow and delegate the infrastructure provisioning to it, instead of applying changes from our local computers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add a job to our GitHub Actions workflow that configures Ansible and deploys our test web page to the provisioned EC2 instance.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;1. Version our infrastructure code with GitHub&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We'll start by creating GitHub repositories for each of our building blocks from the &lt;a href="https://dev.to/aws-builders/terraform-terragrunt-to-deploy-a-web-server-with-amazon-ec2-bd9"&gt;previous article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You should be shown a screen similar to the one below, asking you to enter your repository name and a description for the repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0kfm6t0ciiiilwj1555.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0kfm6t0ciiiilwj1555.png" alt="Creating GitHub repository for building block" width="800" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the appropriate information for the building block, then scroll down and click on the &lt;strong&gt;Create repository&lt;/strong&gt; button.&lt;br&gt;
You can then go to your local code for this building block and push it to your newly created repository.&lt;/p&gt;

&lt;p&gt;Repeat this step for each building block, and you should end up with a list of repositories similar to the one below (you should have more repositories of course).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffowbgx1ot1yy74olasub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffowbgx1ot1yy74olasub.png" alt="List of repositories" width="800" height="635"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should then create a repository for your Terragrunt code, and name it &lt;strong&gt;infra-live&lt;/strong&gt;, for example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xvti0euh52hnpg48vm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xvti0euh52hnpg48vm2.png" alt="Terragrunt code repository" width="716" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step will be to update each &lt;code&gt;terragrunt.hcl&lt;/code&gt; file in your infra-live project so that it points to the corresponding Git repository for your building blocks, and remove the AWS credentials lines of code from the &lt;code&gt;inputs&lt;/code&gt; section of this file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS_ACCESS_KEY_ID&lt;/li&gt;
&lt;li&gt;AWS_SECRET_ACCESS_KEY&lt;/li&gt;
&lt;li&gt;AWS_REGION&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4luyn5ir1njgpfhsxbm4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4luyn5ir1njgpfhsxbm4.png" alt="Terragrunt VPC pointing to Terraform building block in GitHub" width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can then push your &lt;code&gt;infra-live&lt;/code&gt; code to its GitHub repository, and our infrastructure code will have been versioned!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. GitHub Actions workflow for infrastructure provisioning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that our code has been versioned, we can write a workflow that will be triggered whenever we push code to the main branch (use whichever branch you prefer, like master).&lt;br&gt;
Ideally, this workflow should only be triggered after a pull request has been approved to merge to the main branch, but we'll keep it simple for illustration purposes.&lt;/p&gt;

&lt;p&gt;Before doing anything, we'll configure some secrets in our GitHub &lt;code&gt;infra-live&lt;/code&gt; repository settings. These secrets will be required for the GitHub Actions workflow to be able to properly provision your infrastructure.&lt;/p&gt;

&lt;p&gt;From within the &lt;code&gt;infra-live&lt;/code&gt; repository, click on the &lt;strong&gt;Settings&lt;/strong&gt; tab to access the repository's settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0folj09odcoz6d56yf2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0folj09odcoz6d56yf2.png" alt="Repository settings" width="800" height="47"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the left menu, under the &lt;strong&gt;Security&lt;/strong&gt; block, expand &lt;strong&gt;Secrets and variables&lt;/strong&gt; and select &lt;strong&gt;Actions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1wutcpqdm96etrhm810.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1wutcpqdm96etrhm810.png" alt="Repository secrets" width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can then add repository secrets for AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION) by clicking on the &lt;strong&gt;New repository secret&lt;/strong&gt; button.&lt;br&gt;
You'll also need to create a &lt;strong&gt;SSH_PRIVATE_KEY&lt;/strong&gt; secret, which will be required by Ansible to SSH into the EC2 instance. You should use the content of the &lt;strong&gt;.pem&lt;/strong&gt; file you created for the SSH key pair used in the previous article. It should look similar to this (make sure not to share this with anyone, as it would be a big security risk):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dy9jsn25ktsbgihmht5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dy9jsn25ktsbgihmht5.png" alt="Sample redacted content of SSH private key" width="800" height="864"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can now start working on our GitHub Actions workflow!&lt;br&gt;
The first thing will be to create a &lt;code&gt;.github/workflows&lt;/code&gt; directory in the root directory of your &lt;code&gt;infra-live&lt;/code&gt; project. You can then create a YAML file within this &lt;code&gt;infra-live/.github/workflows&lt;/code&gt; directory called &lt;strong&gt;configure.yml&lt;/strong&gt;, for example.&lt;/p&gt;

&lt;p&gt;We'll add the following code to our &lt;code&gt;infra-live/.github/workflows/configure.yml&lt;/code&gt; file to handle the provisioning of our infrastructure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Configure

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  terraform:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false

      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v

      - name: Apply Terraform changes
        run: |
          cd dev
          terragrunt run-all apply -auto-approve --terragrunt-non-interactive -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -var AWS_REGION=$AWS_DEFAULT_REGION
          cd apache-server/ec2-web-server
          public_ip=$(terragrunt output instance_public_ip)
          echo "$public_ip" &amp;gt; public_ip.txt
          cat public_ip.txt
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down what this file does:&lt;/p&gt;

&lt;p&gt;a) The &lt;code&gt;name: Configure&lt;/code&gt; line names our workflow &lt;strong&gt;Configure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;b) The following lines of code tell GitHub to trigger this workflow whenever code is pushed to the main branch or a pull request is merged to the main branch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;c) Then we define our first job called &lt;strong&gt;terraform&lt;/strong&gt; using the lines below, telling GitHub to use a runner that runs on the latest version of Ubuntu. Think of a runner as the GitHub server executing the commands in this workflow file for us:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  terraform:
    runs-on: ubuntu-latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;d) We then define a series of steps or blocks of commands that will be executed in order.&lt;br&gt;
The first step uses a GitHub action to check out our infra-live repository into the runner so that we can start working with it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Checkout repository
        uses: actions/checkout@v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step uses another GitHub action to help us easily set up SSH on the GitHub runner using the private key we had defined as a repository secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following step uses yet another GitHub action to help us easily install Terraform on the GitHub runner, specifying the exact version that we need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we use another step to execute a series of commands that install Terragrunt on the GitHub runner. We use the command &lt;code&gt;terragrunt -v&lt;/code&gt; to check the version of Terragrunt installed and confirm that the installation was successful:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we add a step that applies our Terraform changes, retrieves the public IP address of our provisioned EC2 instance, and saves it to a file called &lt;strong&gt;public_ip.txt&lt;/strong&gt; (we'll need it for the Ansible configuration).&lt;/p&gt;
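&lt;p&gt;One caveat worth noting: &lt;code&gt;terragrunt output instance_public_ip&lt;/code&gt; prints string outputs wrapped in double quotes, whereas &lt;code&gt;terragrunt output -raw instance_public_ip&lt;/code&gt; prints the bare value, which is what an Ansible inventory needs. The difference, simulated here without AWS access (the IP below is made up):&lt;/p&gt;

```shell
# Simulation only: mimic what a plain `terragrunt output` of a string
# would print, then strip the quotes the way an inventory file needs.
quoted='"54.210.12.34"'             # hypothetical quoted output value
printf '%s\n' "$quoted" | tr -d '"'
```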

&lt;p&gt;With our infrastructure provisioned, we can now proceed to configure Ansible and deploy our test web page in our EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Configure Ansible and deploy test web page&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We can now configure Ansible using a different workflow job that we'll call &lt;strong&gt;ansible&lt;/strong&gt;, and deploy our test web page to our EC2.&lt;/p&gt;

&lt;p&gt;But first, we need to make the file containing the EC2 instance's public IP address (&lt;strong&gt;public_ip.txt&lt;/strong&gt;) from the &lt;strong&gt;terraform&lt;/strong&gt; job available to our &lt;strong&gt;ansible&lt;/strong&gt; job.&lt;br&gt;
For that, we need to add another step to our &lt;strong&gt;terraform&lt;/strong&gt; job to upload the artifact we generated (&lt;strong&gt;public_ip.txt&lt;/strong&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: ip-artifact
          path: dev/apache-server/ec2-web-server/public_ip.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that out of the way, we can configure our &lt;strong&gt;ansible&lt;/strong&gt; job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ansible:
    runs-on: ubuntu-latest
    needs: terraform

    steps:
      - name: Download artifact
        uses: actions/download-artifact@v4
        with:
          name: ip-artifact

      - name: Configure Ansible
        run: |
          sudo apt update
          sudo pipx inject ansible-core jmespath
          ansible-playbook --version
          sudo echo "[web]" &amp;gt;&amp;gt; ansible_hosts
          sudo cat public_ip.txt &amp;gt;&amp;gt; ansible_hosts
          mv ansible_hosts $HOME
          sudo cat $HOME/ansible_hosts

      - name: Configure playbook
        run: |
          cd $HOME
          cat &amp;gt; deploy.yml &amp;lt;&amp;lt;EOF
          ---
          - hosts: web
            remote_user: ec2-user
            become: true

            tasks:
              - name: Create web page
                copy:
                  dest: "/var/www/html/test.html"
                  content: |
                    &amp;lt;html&amp;gt;
                      &amp;lt;head&amp;gt;
                        &amp;lt;title&amp;gt;Test Page&amp;lt;/title&amp;gt;
                      &amp;lt;/head&amp;gt;
                      &amp;lt;body&amp;gt;
                        &amp;lt;h1&amp;gt;This is a test page&amp;lt;/h1&amp;gt;
                      &amp;lt;/body&amp;gt;
                    &amp;lt;/html&amp;gt;
          EOF
          cat $HOME/deploy.yml

      - name: Run playbook
        uses: dawidd6/action-ansible-playbook@v2
        with:
          playbook: deploy.yml
          directory: /home/runner
          key: ${{secrets.SSH_PRIVATE_KEY}}
          options: |
            --inventory ansible_hosts
            --verbose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break this down.&lt;/p&gt;

&lt;p&gt;a) We define our second job, called &lt;strong&gt;ansible&lt;/strong&gt;, again telling GitHub to use a runner with the latest version of Ubuntu, and specifying that the &lt;strong&gt;terraform&lt;/strong&gt; job must complete successfully before this job runs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ansible:
    runs-on: ubuntu-latest
    needs: terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;b) We then define our job's steps, the first being to download the artifact we generated in the previous job using a GitHub action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Download artifact
        uses: actions/download-artifact@v4
        with:
          name: ip-artifact
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;c) The next step prepares Ansible on the runner (GitHub's Ubuntu runners ship with &lt;code&gt;ansible-core&lt;/code&gt; preinstalled via pipx, so we only inject the &lt;code&gt;jmespath&lt;/code&gt; dependency into it) and creates our inventory (or hosts) file. We define a group of servers called &lt;strong&gt;[web]&lt;/strong&gt; in this file and add the public IP address of our EC2 instance to this &lt;strong&gt;[web]&lt;/strong&gt; group.&lt;br&gt;
We then move our inventory file to our $HOME directory so that it can be accessed in the subsequent steps.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Configure Ansible
        run: |
          sudo apt update
          sudo pipx inject ansible-core jmespath
          ansible-playbook --version
          sudo echo "[web]" &amp;gt;&amp;gt; ansible_hosts
          sudo cat public_ip.txt &amp;gt;&amp;gt; ansible_hosts
          mv ansible_hosts $HOME
          sudo cat $HOME/ansible_hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;d) In the following step, we create our Ansible playbook by writing its configuration to a file called &lt;strong&gt;deploy.yml&lt;/strong&gt; in our $HOME directory. The playbook has a single task that creates an HTML page called &lt;strong&gt;test.html&lt;/strong&gt; in the &lt;code&gt;/var/www/html/&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Configure playbook
        run: |
          cd $HOME
          cat &amp;gt; deploy.yml &amp;lt;&amp;lt;EOF
          ---
          - hosts: web
            remote_user: ec2-user
            become: true

            tasks:
              - name: Create web page
                copy:
                  dest: "/var/www/html/test.html"
                  content: |
                    &amp;lt;html&amp;gt;
                      &amp;lt;head&amp;gt;
                        &amp;lt;title&amp;gt;Test Page&amp;lt;/title&amp;gt;
                      &amp;lt;/head&amp;gt;
                      &amp;lt;body&amp;gt;
                        &amp;lt;h1&amp;gt;This is a test page&amp;lt;/h1&amp;gt;
                      &amp;lt;/body&amp;gt;
                    &amp;lt;/html&amp;gt;
          EOF
          cat $HOME/deploy.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;e) Finally, the last step runs our Ansible playbook using a community GitHub action. It takes as inputs the name of the playbook file, the directory containing it, our SSH private key (retrieved from the repository secrets), and an option pointing to our inventory file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Run playbook
        uses: dawidd6/action-ansible-playbook@v2
        with:
          playbook: deploy.yml
          directory: /home/runner
          key: ${{secrets.SSH_PRIVATE_KEY}}
          options: |
            --inventory ansible_hosts
            --verbose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The final version of our workflow file should then look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Configure

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  terraform:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false

      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v

      - name: Apply Terraform changes
        run: |
          cd dev
          terragrunt run-all apply -auto-approve --terragrunt-non-interactive -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -var AWS_REGION=$AWS_DEFAULT_REGION
          cd apache-server/ec2-web-server
          public_ip=$(terragrunt output instance_public_ip)
          echo "$public_ip" &amp;gt; public_ip.txt
          cat public_ip.txt
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}

      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: ip-artifact
          path: dev/apache-server/ec2-web-server/public_ip.txt

  ansible:
    runs-on: ubuntu-latest
    needs: terraform

    steps:
      - name: Download artifact
        uses: actions/download-artifact@v4
        with:
          name: ip-artifact

      - name: Configure Ansible
        run: |
          sudo apt update
          sudo pipx inject ansible-core jmespath
          ansible-playbook --version
          sudo echo "[web]" &amp;gt;&amp;gt; ansible_hosts
          sudo cat public_ip.txt &amp;gt;&amp;gt; ansible_hosts
          mv ansible_hosts $HOME
          sudo cat $HOME/ansible_hosts

      - name: Configure playbook
        run: |
          cd $HOME
          cat &amp;gt; deploy.yml &amp;lt;&amp;lt;EOF
          ---
          - hosts: web
            remote_user: ec2-user
            become: true

            tasks:
              - name: Create web page
                copy:
                  dest: "/var/www/html/test.html"
                  content: |
                    &amp;lt;html&amp;gt;
                      &amp;lt;head&amp;gt;
                        &amp;lt;title&amp;gt;Test Page&amp;lt;/title&amp;gt;
                      &amp;lt;/head&amp;gt;
                      &amp;lt;body&amp;gt;
                        &amp;lt;h1&amp;gt;This is a test page&amp;lt;/h1&amp;gt;
                      &amp;lt;/body&amp;gt;
                    &amp;lt;/html&amp;gt;
          EOF
          cat $HOME/deploy.yml

      - name: Run playbook
        uses: dawidd6/action-ansible-playbook@v2
        with:
          playbook: deploy.yml
          directory: /home/runner
          key: ${{secrets.SSH_PRIVATE_KEY}}
          options: |
            --inventory ansible_hosts
            --verbose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now commit and push our code to the main branch of our &lt;code&gt;infra-live&lt;/code&gt; GitHub repository and the pipeline will be automatically triggered to provision our infrastructure and deploy our test web page to our EC2 instance.&lt;/p&gt;

&lt;p&gt;Both workflow jobs should succeed, as in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8vc2q2azzrhut8m52e8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8vc2q2azzrhut8m52e8.png" alt="Jobs succeeded" width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should then be able to access the deployed test web page by opening a browser and entering your EC2 instance's public IP address followed by &lt;code&gt;/test.html&lt;/code&gt;.&lt;br&gt;
For example, &lt;code&gt;http://18.212.153.185/test.html&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Your browser should display the page as in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktdgdtknrj4yxorf7xtu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktdgdtknrj4yxorf7xtu.png" alt="Test web page displayed in browser" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;
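
&lt;p&gt;You can also verify the deployment from the command line; for example (replace the placeholder with your instance's public IP address):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://&amp;lt;your_instance_public_ip&amp;gt;/test.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;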

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
We now have a foundation for using GitHub Actions to automate the provisioning of our infrastructure with Terraform and Terragrunt, as well as the configuration of our servers with Ansible. We can build on this to design more complex pipelines depending on our use cases.&lt;/p&gt;

&lt;p&gt;If I made any mistake or you think I could have done something more efficiently, please don't hesitate to point that out to me in a comment below.&lt;/p&gt;

&lt;p&gt;Until next time, happy coding!!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>ansible</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>Terraform &amp; Terragrunt to Deploy a Web Server with Amazon EC2</title>
      <dc:creator>Stéphane Noutsa</dc:creator>
      <pubDate>Fri, 03 Nov 2023 15:35:41 +0000</pubDate>
      <link>https://forem.com/aws-builders/terraform-terragrunt-to-deploy-a-web-server-with-amazon-ec2-bd9</link>
      <guid>https://forem.com/aws-builders/terraform-terragrunt-to-deploy-a-web-server-with-amazon-ec2-bd9</guid>
      <description>&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A basic understanding of the AWS cloud, Terraform, and Terragrunt is needed to follow along with this tutorial.&lt;/li&gt;
&lt;li&gt;This article builds on my previous two articles, so to follow along you'll need to go through them first:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/aws-builders/terraform-terragrunt-to-create-a-vpc-and-its-components-part-i-1hp7"&gt;Terraform &amp;amp; Terragrunt to Create a VPC and its Components (Part I)&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dev.to/aws-builders/terraform-terragrunt-to-create-a-vpc-and-its-components-part-ii-1pl6"&gt;Terraform &amp;amp; Terragrunt to Create a VPC and its Components (Part II)&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, we'll use Terraform &amp;amp; Terragrunt to deploy an Apache web server to an EC2 instance in the public subnet of a VPC, building on the two articles linked in the disclaimer above.&lt;/p&gt;

&lt;p&gt;An EC2 (Elastic Compute Cloud) instance is a virtual server in AWS. It lets you run applications and services on AWS infrastructure and provides computing resources, such as CPU, memory, storage, and networking, that can be configured and scaled to match your requirements. You can think of an EC2 instance as a virtual machine in the cloud.&lt;/p&gt;

&lt;p&gt;By the end of this article, we'll be able to access the Apache web server deployed to our EC2 instance by using its public IP address or its public DNS name.&lt;br&gt;
Below are the different components we'll create to reach our objective:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Security group building block&lt;/li&gt;
&lt;li&gt;SSH key pair building block&lt;/li&gt;
&lt;li&gt;EC2 instance profile building block&lt;/li&gt;
&lt;li&gt;EC2 instance building block&lt;/li&gt;
&lt;li&gt;Security group module in VPC orchestration Terragrunt code&lt;/li&gt;
&lt;li&gt;Web server orchestration Terragrunt code&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Our building blocks will have the same common files as described in &lt;a href="https://dev.to/aws-builders/terraform-terragrunt-to-create-a-vpc-and-its-components-part-i-1hp7"&gt;this article&lt;/a&gt;, although the &lt;strong&gt;variables.tf&lt;/strong&gt; files will have additional variables in them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Security group building block&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This building block will be used to set a firewall (security rules) on our EC2 instance. It will allow us to define multiple ingress and egress rules at once for any security group that we create.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "security_group" {
  name        = var.name
  description = var.description
  vpc_id      = var.vpc_id

  # Ingress rules
  dynamic "ingress" {
    for_each = var.ingress_rules
    content {
      from_port   = ingress.value.from_port
      to_port     = ingress.value.to_port
      protocol    = ingress.value.protocol
      cidr_blocks = ingress.value.cidr_blocks
    }
  }

  # Egress rules
  dynamic "egress" {
    for_each = var.egress_rules
    content {
      from_port   = egress.value.from_port
      to_port     = egress.value.to_port
      protocol    = egress.value.protocol
      cidr_blocks = egress.value.cidr_blocks
    }
  }

  tags = merge(var.tags, {
    Name = var.name
  })
}

output "security_group_id" {
  value = aws_security_group.security_group.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt; (additional variables)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "vpc_id" {
  type = string
}

variable "name" {
  type = string
}

variable "description" {
  type = string
}

variable "ingress_rules" {
  type = list(object({
    protocol    = string
    from_port   = string
    to_port     = string
    cidr_blocks = list(string)
  }))
  default = []
}

variable "egress_rules" {
  type = list(object({
    protocol    = string
    from_port   = string
    to_port     = string
    cidr_blocks = list(string)
  }))
  default = []
}

variable "tags" {
  type = map(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. SSH key pair building block&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This building block will allow us to create key pairs that we'll use to SSH into our EC2 instance. We'll first need to use OpenSSH to manually create a key pair, then provide the public key as an input to this building block (in the corresponding Terragrunt module).&lt;/p&gt;

&lt;p&gt;This article shows you how to create a key pair on macOS and Linux:&lt;br&gt;
&lt;a href="https://docs.digitalocean.com/products/droplets/how-to/add-ssh-keys/create-with-openssh/" rel="noopener noreferrer"&gt;https://docs.digitalocean.com/products/droplets/how-to/add-ssh-keys/create-with-openssh/&lt;/a&gt;&lt;/p&gt;
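
&lt;p&gt;For example, on macOS or Linux you could generate an Ed25519 key pair with OpenSSH (the file name below is just an example; omit &lt;code&gt;-N ""&lt;/code&gt; if you'd rather be prompted for a passphrase):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p ~/.ssh
ssh-keygen -t ed25519 -C "apache-web-server" -N "" -f ~/.ssh/apache-server

# Print the public key to paste into the public_key input
cat ~/.ssh/apache-server.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;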

&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt; (additional variables)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "key_name" {
  type = string
}

variable "public_key" {
  type = string
}

variable "tags" {
  type = map(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_key_pair" "ssh" {
  key_name   = var.key_name
  public_key = var.public_key

  tags = merge(var.tags, {
    Name = var.key_name
  })
}

output "key_name" {
  value = aws_key_pair.ssh.key_name
}

output "key_pair_id" {
  value = aws_key_pair.ssh.key_pair_id
}

output "key_pair_arn" {
  value = aws_key_pair.ssh.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NB: Strictly speaking, we don't need this key pair, because our EC2 instance profile's role allows the instance to be managed by Systems Manager (an AWS service), so we can log into it with Session Manager (a Systems Manager feature) without an SSH key pair.&lt;br&gt;
(&lt;strong&gt;This key pair will be used in the next article, where Ansible gets involved, so stay tuned&lt;/strong&gt; 😉)&lt;/p&gt;
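
&lt;p&gt;With Session Manager, once the instance profile is attached, logging in is a single AWS CLI command (assuming the AWS CLI and the Session Manager plugin are installed; replace the placeholder with your instance ID):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ssm start-session --target &amp;lt;your_instance_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;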

&lt;p&gt;&lt;strong&gt;3. EC2 instance profile building block&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An EC2 instance profile in AWS is a container for an IAM (Identity and Access Management) role that you can assign to an EC2 instance. It provides the necessary permissions for the instance to access other AWS services and resources securely.&lt;/p&gt;

&lt;p&gt;For the purpose of this article, our instance profile will be assigned a role with permissions to be managed by Systems Manager.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt; (additional variables)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "iam_policy_statements" {
  type = list(object({
    sid    = string
    effect = string
    principals = object({
      type        = optional(string)
      identifiers = list(string)
    })
    actions   = list(string)
    resources = list(string)
  }))
}

variable "iam_role_name" {
  type = string
}

variable "iam_role_description" {
  type = string
}

variable "iam_role_path" {
  type = string
}

variable "other_policy_arns" {
  type = list(string)
}

variable "instance_profile_name" {
  type = string
}

variable "tags" {
  type = map(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# IAM Policy
data "aws_iam_policy_document" "iam_policy" {
  dynamic "statement" {
    for_each = { for statement in var.iam_policy_statements : statement.sid =&amp;gt; statement }

    content {
      sid    = statement.value.sid
      effect = statement.value.effect

      principals {
        type        = statement.value.principals.type
        identifiers = statement.value.principals.identifiers
      }

      actions   = statement.value.actions
      resources = statement.value.resources
    }
  }
}

# IAM Role
resource "aws_iam_role" "iam_role" {
  name               = var.iam_role_name
  description        = var.iam_role_description
  path               = var.iam_role_path
  assume_role_policy = data.aws_iam_policy_document.iam_policy.json

  tags = {
    Name = var.iam_role_name
  }
}

# Attach more policies to role
resource "aws_iam_role_policy_attachment" "other_policies" {
  for_each = toset([for policy_arn in var.other_policy_arns : policy_arn])

  role       = aws_iam_role.iam_role.name
  policy_arn = each.value
}

# EC2 Instance Profile
resource "aws_iam_instance_profile" "instance_profile" {
  name = var.instance_profile_name
  role = aws_iam_role.iam_role.name

  tags = merge(var.tags, {
    Name = var.instance_profile_name
  })
}

output "instance_profile_name" {
  value = aws_iam_instance_profile.instance_profile.name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. EC2 instance building block&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This building block will create the virtual machine where the Apache web server will be deployed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt; (additional variables)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "most_recent_ami" {
  type = bool
}

variable "owners" {
  type = list(string)
}

variable "ami_name_filter" {
  type = string
}

variable "ami_values_filter" {
  type = list(string)
}

variable "instance_profile_name" {
  type = string
}

variable "instance_type" {
  type = string
}

variable "subnet_id" {
  type = string
}

variable "associate_public_ip_address" {
  type = bool
}

variable "vpc_security_group_ids" {
  type = list(string)
}

variable "has_user_data" {
  type = bool
}

variable "user_data_path" {
  type = string
}

variable "user_data_replace_on_change" {
  type = bool
}

variable "instance_name" {
  type = string
}

variable "uses_ssh" {
  type = bool
}

variable "key_name" {
  type = string
}

variable "tags" {
  type = map(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# AMI
data "aws_ami" "ami" {
  most_recent = var.most_recent_ami
  owners      = var.owners

  filter {
    name   = var.ami_name_filter
    values = var.ami_values_filter
  }
}

# EC2 Instance
resource "aws_instance" "instance" {
  ami                         = data.aws_ami.ami.id
  associate_public_ip_address = var.associate_public_ip_address
  iam_instance_profile        = var.instance_profile_name
  instance_type               = var.instance_type
  key_name                    = var.uses_ssh ? var.key_name : null
  subnet_id                   = var.subnet_id
  user_data                   = var.has_user_data ? file(var.user_data_path) : null
  user_data_replace_on_change = var.has_user_data ? var.user_data_replace_on_change : null
  vpc_security_group_ids      = var.vpc_security_group_ids

  tags = merge(var.tags, {
    Name = var.instance_name
  })
}

output "instance_id" {
  value = aws_instance.instance.id
}

output "instance_arn" {
  value = aws_instance.instance.arn
}

output "instance_private_ip" {
  value = aws_instance.instance.private_ip
}

output "instance_public_ip" {
  value = aws_instance.instance.public_ip
}

output "instance_public_dns" {
  value = aws_instance.instance.public_dns
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Security group module in VPC orchestration Terragrunt code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;vpc-live/dev/&lt;/strong&gt; directory that we created in the previous article, we'll create a new directory called &lt;strong&gt;security-group&lt;/strong&gt; that will contain a &lt;strong&gt;terragrunt.hcl&lt;/strong&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Directory structure&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vpc-live/
  dev/
    ... (previous modules)
    security-group/
      terragrunt.hcl
  terragrunt.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;vpc-live/dev/security-group/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;path_to_local_security_group_building_block_or_git_repo_url&amp;gt;"
}

dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  AWS_ACCESS_KEY_ID = "&amp;lt;your_aws_access_key_id&amp;gt;"
  AWS_SECRET_ACCESS_KEY = "&amp;lt;your_aws_secret_access_key&amp;gt;"
  AWS_REGION = "&amp;lt;your_aws_region&amp;gt;"
  vpc_id = dependency.vpc.outputs.vpc_id
  name = "dev-sg"
  description = "Allow HTTP (80), HTTPS (443) and SSH (22)"
  ingress_rules = [
    {
      protocol    = "tcp"
      from_port   = 80
      to_port     = 80
      cidr_blocks = ["0.0.0.0/0"]
    },
    {
      protocol    = "tcp"
      from_port   = 443
      to_port     = 443
      cidr_blocks = ["0.0.0.0/0"]
    },
    {
      protocol    = "tcp"
      from_port   = 22
      to_port     = 22
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]
  egress_rules = [
    {
      protocol    = "tcp"
      from_port   = 80
      to_port     = 80
      cidr_blocks = ["0.0.0.0/0"]
    },
    {
      protocol    = "tcp"
      from_port   = 443
      to_port     = 443
      cidr_blocks = ["0.0.0.0/0"]
    },
    {
      protocol    = "tcp"
      from_port   = 22
      to_port     = 22
      cidr_blocks = ["0.0.0.0/0"]
    }
  ]
  tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This module will create a security group that allows internet traffic on ports 80 (HTTP), 443 (HTTPS), and 22 (SSH).&lt;/p&gt;

&lt;p&gt;After adding this, we can run the command below from the &lt;strong&gt;vpc-live/dev/&lt;/strong&gt; directory to create the security group (enter &lt;strong&gt;y&lt;/strong&gt; when prompted to confirm the creation of the resource).&lt;br&gt;
Be sure to set the appropriate values for &lt;strong&gt;AWS_ACCESS_KEY_ID&lt;/strong&gt;, &lt;strong&gt;AWS_SECRET_ACCESS_KEY&lt;/strong&gt;, and &lt;strong&gt;AWS_REGION&lt;/strong&gt;, and DO NOT commit these values to a Git repository.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terragrunt run-all apply&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Below is part of the output of the above command which shows that the security group has been created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27gf52qexgqop100mgvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27gf52qexgqop100mgvg.png" alt="Terragrunt output" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Web server orchestration Terragrunt code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To proceed, we'll first need to copy the ID of one public subnet and the ID of our newly created security group from our VPC. Don't copy the IDs shown in the screenshots, as they won't work in your account.&lt;/p&gt;
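
&lt;p&gt;Instead of copying the IDs from the console, you can also look them up with the AWS CLI (assuming it is configured; replace the VPC ID placeholder with your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List the subnet IDs in your VPC
aws ec2 describe-subnets --filters "Name=vpc-id,Values=&amp;lt;your_vpc_id&amp;gt;" --query "Subnets[].SubnetId"

# Get the ID of the dev-sg security group
aws ec2 describe-security-groups --filters "Name=group-name,Values=dev-sg" --query "SecurityGroups[].GroupId"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;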

&lt;p&gt;Our Terragrunt code will have the following directory structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ec2-live/
  dev/
    apache-server/
      ec2-key-pair/
        terragrunt.hcl
      ec2-web-server/
        terragrunt.hcl
        user-data.sh
      ssm-instance-profile/
        terragrunt.hcl
      terragrunt.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The content of the &lt;strong&gt;terragrunt.hcl&lt;/strong&gt; files will be shared below.&lt;br&gt;
Notice that the &lt;strong&gt;ec2-web-server&lt;/strong&gt; subdirectory contains a script (&lt;strong&gt;user-data.sh&lt;/strong&gt;). This script will deploy the Apache web server to our EC2 instance as will be illustrated in a step further down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ec2-live/dev/apache-server/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;generate "backend" {
  path      = "backend.tf"
  if_exists = "overwrite_terragrunt"
  contents = &amp;lt;&amp;lt;EOF
terraform {
  backend "s3" {
    bucket         = "&amp;lt;s3_bucket_name&amp;gt;"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
  }
}
EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above file, which is the root Terragrunt file, defines the backend configuration and saves the Terraform state file in an S3 bucket that you have already created manually (its name replaces the &lt;code&gt;&amp;lt;s3_bucket_name&amp;gt;&lt;/code&gt; placeholder in the above configuration).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ec2-live/dev/apache-server/ec2-key-pair/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;path_to_local_key_pair_building_block_or_git_repo_url&amp;gt;"
}

inputs = {
  AWS_ACCESS_KEY_ID = "&amp;lt;your_aws_access_key_id&amp;gt;"
  AWS_SECRET_ACCESS_KEY = "&amp;lt;your_aws_secret_access_key&amp;gt;"
  AWS_REGION = "&amp;lt;your_aws_region&amp;gt;"
  key_name = "Apache server SSH key pair"
  public_key = "&amp;lt;your_ssh_public_key&amp;gt;"
  tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This module will create the key pair that will be used to SSH into the EC2 instance.&lt;br&gt;
Be sure to replace the &lt;strong&gt;source&lt;/strong&gt; value in the &lt;strong&gt;terraform&lt;/strong&gt; block with the path to your local building block or the URL of the Git repo hosting the building block's code.&lt;br&gt;
Also, replace the &lt;strong&gt;public_key&lt;/strong&gt; value in the &lt;strong&gt;inputs&lt;/strong&gt; section with the content of your SSH public key.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ec2-live/dev/apache-server/ssm-instance-profile/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;path_to_local_ec2_instance_profile_building_block_or_git_repo_url&amp;gt;"
}

inputs = {
  AWS_ACCESS_KEY_ID = "&amp;lt;your_aws_access_key_id&amp;gt;"
  AWS_SECRET_ACCESS_KEY = "&amp;lt;your_aws_secret_access_key&amp;gt;"
  AWS_REGION = "&amp;lt;your_aws_region&amp;gt;"
  iam_policy_statements = [
    {
      sid = "AllowEC2AssumeRole"
      effect = "Allow"
      principals = {
        type        = "Service"
        identifiers = ["ec2.amazonaws.com"]
      }
      actions   = ["sts:AssumeRole"]
      resources = []
    }
  ]
  iam_role_name = "EC2RoleForSSM"
  iam_role_description = "Allows EC2 instance to be managed by Systems Manager"
  iam_role_path = "/"
  other_policy_arns = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"]
  instance_profile_name = "EC2InstanceProfileForSSM"
  tags = {
    Name = "dev-ssm-instance-profile"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This module uses an AWS-managed IAM policy (&lt;strong&gt;AmazonSSMManagedInstanceCore&lt;/strong&gt;) that grants &lt;strong&gt;Systems Manager&lt;/strong&gt; the permissions it needs to manage an EC2 instance. The &lt;strong&gt;instance profile&lt;/strong&gt; building block attaches this policy to an IAM role it creates (named &lt;strong&gt;EC2RoleForSSM&lt;/strong&gt; here), then attaches that role to the instance profile it creates (named &lt;strong&gt;EC2InstanceProfileForSSM&lt;/strong&gt; here).&lt;/p&gt;
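&lt;p&gt;Under the hood, the instance profile building block presumably wires these inputs into resources along these lines (a sketch; the building block's actual code may differ):&lt;/p&gt;

```hcl
# Trust policy derived from the iam_policy_statements input
# (hard-coded to the EC2 service principal here for brevity).
data "aws_iam_policy_document" "assume_role" {
  statement {
    sid     = "AllowEC2AssumeRole"
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "this" {
  name               = var.iam_role_name        # "EC2RoleForSSM"
  description        = var.iam_role_description
  path               = var.iam_role_path
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

# Attach AmazonSSMManagedInstanceCore (and any other ARNs passed in).
resource "aws_iam_role_policy_attachment" "managed" {
  for_each   = toset(var.other_policy_arns)
  role       = aws_iam_role.this.name
  policy_arn = each.value
}

# The instance profile is what actually gets attached to the EC2 instance.
resource "aws_iam_instance_profile" "this" {
  name = var.instance_profile_name              # "EC2InstanceProfileForSSM"
  role = aws_iam_role.this.name
}
```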

&lt;p&gt;&lt;strong&gt;ec2-live/dev/apache-server/ec2-web-server/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;path_to_local_ec2_instance_building_block_or_git_repo_url&amp;gt;"
}

dependency "key-pair" {
  config_path = "../ec2-key-pair" # Path to Terragrunt ec2-key-pair module
}

dependency "instance-profile" {
  config_path = "../ssm-instance-profile" # Path to Terragrunt ssm-instance-profile module
}

inputs = {
  AWS_ACCESS_KEY_ID = "&amp;lt;your_aws_access_key_id&amp;gt;"
  AWS_SECRET_ACCESS_KEY = "&amp;lt;your_aws_secret_access_key&amp;gt;"
  AWS_REGION = "&amp;lt;your_aws_region&amp;gt;"
  most_recent_ami = true
  owners = ["amazon"]
  ami_name_filter = "name"
  ami_values_filter = ["al2023-ami-2023.*-x86_64"]
  instance_profile_name = dependency.instance-profile.outputs.instance_profile_name
  instance_type = "t3.micro"
  subnet_id = "&amp;lt;copied_subnet_id&amp;gt;"
  associate_public_ip_address = true # Set to true so that our instance can be assigned a public IP address
  vpc_security_group_ids = ["&amp;lt;copied_security_group_id&amp;gt;"]
  has_user_data = true
  user_data_path = "user-data.sh"
  user_data_replace_on_change = true
  instance_name = "Apache Server"
  uses_ssh = true # Set to true so that the building block knows to use the input below
  key_name = dependency.key-pair.outputs.key_name
  tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This module depends on both the EC2 key pair and EC2 instance profile modules, as indicated by the &lt;strong&gt;dependency&lt;/strong&gt; blocks (and by the values of the &lt;strong&gt;instance_profile_name&lt;/strong&gt; and &lt;strong&gt;key_name&lt;/strong&gt; inputs).&lt;br&gt;
It will use the most recent &lt;strong&gt;Amazon Linux 2023&lt;/strong&gt; AMI (selected by the &lt;strong&gt;most_recent_ami&lt;/strong&gt;, &lt;strong&gt;owners&lt;/strong&gt;, &lt;strong&gt;ami_name_filter&lt;/strong&gt;, and &lt;strong&gt;ami_values_filter&lt;/strong&gt; inputs) to create a &lt;strong&gt;t3.micro&lt;/strong&gt; instance in a public subnet of our VPC, protected by the security group we created above. Paste the subnet ID you previously copied as the value of the &lt;strong&gt;subnet_id&lt;/strong&gt; input, and the security group ID you copied as an entry in the &lt;strong&gt;vpc_security_group_ids&lt;/strong&gt; list.&lt;/p&gt;
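&lt;p&gt;For reference, the AMI selection inputs presumably feed a data source lookup inside the building block, along these lines (a sketch; the building block's actual code may differ):&lt;/p&gt;

```hcl
# Resolves to the newest Amazon Linux 2023 x86_64 AMI owned by Amazon.
data "aws_ami" "this" {
  most_recent = var.most_recent_ami   # true
  owners      = var.owners            # ["amazon"]

  filter {
    name   = var.ami_name_filter      # "name"
    values = var.ami_values_filter    # ["al2023-ami-2023.*-x86_64"]
  }
}

# The instance resource then references data.aws_ami.this.id as its ami argument.
```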

&lt;p&gt;The &lt;strong&gt;user_data_path&lt;/strong&gt; input expects to receive the path to a script that will be executed only when the EC2 instance is first created. This script is the &lt;strong&gt;user-data.sh&lt;/strong&gt; file that will contain instructions to deploy an Apache web server to our EC2 instance as shown below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ec2-live/dev/apache-server/ec2-web-server/user-data.sh&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "&amp;lt;h1&amp;gt;Hello World from $(hostname -f)&amp;lt;/h1&amp;gt;" &amp;gt; /var/www/html/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script does the following:&lt;br&gt;
a) Updates the Amazon Linux 2023 system (&lt;strong&gt;yum update -y&lt;/strong&gt;)&lt;br&gt;
b) Installs &lt;strong&gt;httpd&lt;/strong&gt; which is the Apache web server (&lt;strong&gt;yum install -y httpd&lt;/strong&gt;)&lt;br&gt;
c) Starts the Apache service (&lt;strong&gt;systemctl start httpd&lt;/strong&gt;)&lt;br&gt;
d) Ensures the Apache service is started whenever the server restarts (&lt;strong&gt;systemctl enable httpd&lt;/strong&gt;)&lt;br&gt;
e) Writes the string "&amp;lt;h1&amp;gt;Hello World from $(hostname -f)&amp;lt;/h1&amp;gt;" to the &lt;strong&gt;index.html&lt;/strong&gt; file in the &lt;strong&gt;/var/www/html/&lt;/strong&gt; directory. The server will display this string as a top-level heading, with &lt;strong&gt;$(hostname -f)&lt;/strong&gt; replaced by the hostname of the EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Putting it all together&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our Terraform and Terragrunt configuration is now ready, so we can create the resources by running the following Terragrunt command from within the &lt;strong&gt;ec2-live/dev/apache-server/&lt;/strong&gt; directory. Enter &lt;strong&gt;y&lt;/strong&gt; when prompted to confirm the creation of the resources.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terragrunt run-all apply&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The last output lines following the successful execution of this command should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdvqjlzq0sfmm2vfszo7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdvqjlzq0sfmm2vfszo7.png" alt="Terragrunt output" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the list of outputs, we are most interested in &lt;strong&gt;instance_public_dns&lt;/strong&gt; and &lt;strong&gt;instance_public_ip&lt;/strong&gt;, whose values allow us to access our web server from a browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbcz4xm5mm09j6sqncim.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftbcz4xm5mm09j6sqncim.png" alt="Accessing web server via its public IP address" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgl7ut7sa2bvx6q38yo5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgl7ut7sa2bvx6q38yo5.png" alt="Accessing web server via its public DNS name" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, both the public IP address and public DNS name return the same result when accessed from a browser.&lt;br&gt;
You can also see that the message is the same as that which was set in the user data script, and it has replaced &lt;strong&gt;$(hostname -f)&lt;/strong&gt; with the hostname of the created EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bonus - Systems Manager&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We can now access the AWS management console to check if our EC2 instance is managed by Systems Manager. To do this, we need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to the AWS Management Console.&lt;/li&gt;
&lt;li&gt;Search for the &lt;strong&gt;Systems Manager&lt;/strong&gt; service from the search bar and select it. In the &lt;strong&gt;Systems Manager&lt;/strong&gt; console, scroll down the menu and select &lt;strong&gt;Session Manager&lt;/strong&gt; (under the &lt;strong&gt;Node Management&lt;/strong&gt; section).&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Start session&lt;/strong&gt; button to see a list of target instances.&lt;/li&gt;
&lt;li&gt;Our instance will be in this list, so select its radio button and click the &lt;strong&gt;Start session&lt;/strong&gt; button at the bottom to log in to the instance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Voilà!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that we can easily deploy an Apache web server to an EC2 instance using Terraform and Terragrunt, we should delete the resources we created to avoid incurring unexpected costs. Run the command below first from the &lt;strong&gt;ec2-live/dev/apache-server&lt;/strong&gt; directory, then from the &lt;strong&gt;vpc-live/dev&lt;/strong&gt; directory (the EC2 resources live inside the VPC, so they must be destroyed first). Enter &lt;strong&gt;y&lt;/strong&gt; when prompted to confirm the destruction of these resources.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terragrunt run-all destroy&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In the next article, we'll create a second instance in a private subnet, and see how to use Ansible, a configuration management tool, to manage the configuration of our public and private instances.&lt;/p&gt;

&lt;p&gt;Until then, happy coding!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
      <category>terragrunt</category>
    </item>
    <item>
      <title>Terraform &amp; Terragrunt to Create a VPC and its Components (Part II)</title>
      <dc:creator>Stéphane Noutsa</dc:creator>
      <pubDate>Fri, 03 Nov 2023 00:47:31 +0000</pubDate>
      <link>https://forem.com/aws-builders/terraform-terragrunt-to-create-a-vpc-and-its-components-part-ii-1pl6</link>
      <guid>https://forem.com/aws-builders/terraform-terragrunt-to-create-a-vpc-and-its-components-part-ii-1pl6</guid>
      <description>&lt;p&gt;Terragrunt is a powerful open-source tool that serves as a wrapper around Terraform, providing enhanced features and simplifying the management of Terraform deployments. With Terragrunt, infrastructure as code (IaC) practitioners can achieve more effective and scalable infrastructure management in complex environments. By abstracting common Terraform tasks, Terragrunt facilitates the creation, deployment, and maintenance of infrastructure resources, enabling teams to efficiently manage infrastructure with consistency and ease.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/aws-builders/terraform-terragrunt-to-create-a-vpc-and-its-components-part-i-1hp7"&gt;previous article&lt;/a&gt;, we created Terraform building blocks which we'll use in this article to orchestrate the creation of a VPC with the components below.&lt;br&gt;
This article assumes some familiarity with Terraform and Terragrunt.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A VPC (obviously 😅)&lt;/li&gt;
&lt;li&gt;An Internet Gateway&lt;/li&gt;
&lt;li&gt;A route table for the public subnet (which we'll just call &lt;strong&gt;public route table&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;A public subnet&lt;/li&gt;
&lt;li&gt;An Elastic IP (EIP) for the NAT Gateway&lt;/li&gt;
&lt;li&gt;A NAT Gateway&lt;/li&gt;
&lt;li&gt;A route table for the private subnet (which we'll just call &lt;strong&gt;private route table&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;A private subnet&lt;/li&gt;
&lt;li&gt;A Network Access Control List (NACL)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Our Terragrunt project will have the following folder structure and files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vpc-live/
  &amp;lt;environment&amp;gt;/
    &amp;lt;module_1&amp;gt;/
      terragrunt.hcl
    &amp;lt;module_2&amp;gt;/
      terragrunt.hcl
    ...
    &amp;lt;module_n&amp;gt;/
      terragrunt.hcl
  terragrunt.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Basically, we'll have a parent directory which we'll call &lt;strong&gt;vpc-live&lt;/strong&gt;. This directory will contain a root &lt;strong&gt;terragrunt.hcl&lt;/strong&gt; file and a directory for each environment we want to create our resources in (e.g. &lt;strong&gt;dev&lt;/strong&gt;, &lt;strong&gt;staging&lt;/strong&gt;, &lt;strong&gt;prod&lt;/strong&gt;). In this article, we'll only have a &lt;strong&gt;dev&lt;/strong&gt; directory, which will contain one subdirectory per specific resource we want to create.&lt;/p&gt;

&lt;p&gt;Our final folder structure will be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vpc-live/
  dev/
    elastic-ip/
      terragrunt.hcl
    internet-gateway/
      terragrunt.hcl
    nacl/
      terragrunt.hcl
    nat-gateway/
      terragrunt.hcl
    private-route-table/
      terragrunt.hcl
    private-subnet/
      terragrunt.hcl
    public-route-table/
      terragrunt.hcl
    public-subnet/
      terragrunt.hcl
    vpc/
      terragrunt.hcl
  terragrunt.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;0. Root terragrunt.hcl file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our root &lt;strong&gt;terragrunt.hcl&lt;/strong&gt; file will contain the configuration for our remote Terraform state. We'll use an S3 bucket in AWS to store our Terraform state file, and the name of our S3 bucket must be unique for it to be successfully created. My S3 bucket is in the Paris region (eu-west-3).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vpc-live/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;generate "backend" {
  path      = "backend.tf"
  if_exists = "overwrite_terragrunt"
  contents = &amp;lt;&amp;lt;EOF
terraform {
  backend "s3" {
    bucket         = "snk-terraform-state"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "eu-west-3"
    encrypt        = true
  }
}
EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should be noted that each module's &lt;strong&gt;terragrunt.hcl&lt;/strong&gt; file can use either the path to the local building block or the URL of the Git repository hosting the building block's code as its Terraform source.&lt;br&gt;
(We created the different building blocks in our &lt;a href="https://dev.to/aws-builders/terraform-terragrunt-to-create-a-vpc-and-its-components-part-i-1hp7"&gt;previous article&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. VPC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;VPC&lt;/strong&gt; building block as its Terraform source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vpc-live/dev/vpc/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;path_to_local_vpc_building_block_or_git_repo_url&amp;gt;"
}

inputs = {
  vpc_cidr = "10.0.0.0/16"
  vpc_name = "dev-vpc"
  instance_tenancy = "default"
  enable_dns_support = true
  enable_dns_hostnames = true
  assign_generated_ipv6_cidr_block = false
  vpc_tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;include&lt;/strong&gt; block will include the root &lt;strong&gt;terragrunt.hcl&lt;/strong&gt; file (with the backend configuration), and will substitute the key of our bucket configuration with the path to each module's &lt;strong&gt;terragrunt.hcl&lt;/strong&gt; file.&lt;/p&gt;
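&lt;p&gt;For example, for the &lt;strong&gt;vpc&lt;/strong&gt; module, &lt;strong&gt;path_relative_to_include()&lt;/strong&gt; resolves to &lt;strong&gt;dev/vpc&lt;/strong&gt;, so the &lt;strong&gt;backend.tf&lt;/strong&gt; that Terragrunt generates in the module's working directory should look like this:&lt;/p&gt;

```hcl
# backend.tf generated by Terragrunt for the vpc module
terraform {
  backend "s3" {
    bucket  = "snk-terraform-state"
    key     = "dev/vpc/terraform.tfstate"
    region  = "eu-west-3"
    encrypt = true
  }
}
```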

&lt;p&gt;The values passed in the inputs section are the variables that are defined in the building blocks.&lt;/p&gt;

&lt;p&gt;For this module and the following modules, we won't be showing the &lt;strong&gt;AWS_ACCESS_KEY_ID&lt;/strong&gt;, &lt;strong&gt;AWS_SECRET_ACCESS_KEY&lt;/strong&gt;, and &lt;strong&gt;AWS_REGION&lt;/strong&gt; variables in the listings, since the first two are sensitive credentials. You'll have to add them yourself since they're required.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB&lt;/strong&gt;: Please don't commit these credentials to version control systems as that is not secure. Ideally, you'll have a pipeline whose job runner will have the credentials configured as the default profile, or the runner will be able to assume a role that will allow it to create the resources.&lt;/p&gt;
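&lt;p&gt;One way to supply these required variables without committing them is to export them as environment variables: Terraform automatically maps any &lt;strong&gt;TF_VAR_&amp;lt;name&amp;gt;&lt;/strong&gt; environment variable to the variable &lt;strong&gt;&amp;lt;name&amp;gt;&lt;/strong&gt;. The values below are placeholders:&lt;/p&gt;

```shell
# Placeholders - replace with your real credentials, and never commit them.
export TF_VAR_AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export TF_VAR_AWS_SECRET_ACCESS_KEY="example-secret-key"
export TF_VAR_AWS_REGION="eu-west-3"
```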

&lt;p&gt;&lt;strong&gt;2. Internet Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;Internet Gateway&lt;/strong&gt; building block as its Terraform source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vpc-live/dev/internet-gateway/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;path_to_local_internet_gateway_building_block_or_git_repo_url&amp;gt;"
}

dependency "vpc" {
  config_path = "../vpc"
}

inputs = {
  vpc_id = dependency.vpc.outputs.vpc_id
  name = "dev-igw"
  tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;dependency&lt;/strong&gt; block indicates that this module requires outputs (the VPC ID) from the &lt;strong&gt;VPC&lt;/strong&gt; module which we created in step 1.&lt;/p&gt;
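&lt;p&gt;One practical note: a plain &lt;strong&gt;dependency&lt;/strong&gt; block requires the VPC module's outputs to already exist in state. If you want commands like &lt;strong&gt;terragrunt run-all plan&lt;/strong&gt; to work on a fresh environment, Terragrunt lets you declare mock outputs used only during planning (a sketch; adjust the output names to your building block):&lt;/p&gt;

```hcl
dependency "vpc" {
  config_path = "../vpc"

  # Used only while the vpc module has no real outputs yet (e.g. a first plan)
  mock_outputs = {
    vpc_id = "vpc-00000000"
  }
  mock_outputs_allowed_terraform_commands = ["plan", "validate"]
}
```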

&lt;p&gt;&lt;strong&gt;3. Public Route Table&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;Route Table&lt;/strong&gt; building block as its Terraform source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vpc-live/dev/public-route-table/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;path_to_local_route_table_building_block_or_git_repo_url&amp;gt;"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "igw" {
  config_path = "../internet-gateway"
}

inputs = {
  route_tables = [
    {
      name      = "dev-public-rt"
      vpc_id    = dependency.vpc.outputs.vpc_id
      is_igw_rt = true

      routes = [
        {
          cidr_block = "0.0.0.0/0"
          igw_id     = dependency.igw.outputs.igw_id
        }
      ]

      tags = {}
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This module requires outputs from both the &lt;strong&gt;VPC&lt;/strong&gt; and &lt;strong&gt;Internet Gateway&lt;/strong&gt; modules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Public Subnet&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;Subnet&lt;/strong&gt; building block as its Terraform source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vpc-live/dev/public-subnet/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;path_to_local_subnet_building_block_or_git_repo_url&amp;gt;"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "public-route-table" {
  config_path = "../public-route-table"
}

inputs = {
  subnets = [
    {
      name                                = "dev-public-subnet"
      vpc_id                              = dependency.vpc.outputs.vpc_id
      cidr_block                          = "10.0.0.0/24"
      availability_zone                   = "eu-west-3a"
      map_public_ip_on_launch             = true
      private_dns_hostname_type_on_launch = "resource-name"
      is_public                           = true
      route_table_id                      = dependency.public-route-table.outputs.route_table_ids[0]
      tags                                = {}
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This module requires outputs from both the &lt;strong&gt;VPC&lt;/strong&gt; and &lt;strong&gt;Public Route Table&lt;/strong&gt; modules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Elastic IP (EIP)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module will create an EIP which will be attached to the NAT Gateway that we'll create.&lt;br&gt;
It uses the &lt;strong&gt;Elastic IP&lt;/strong&gt; building block as its Terraform source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vpc-live/dev/elastic-ip/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;path_to_local_elastic_ip_building_block_or_git_repo_url&amp;gt;"
}

inputs = {
  in_vpc = true
  tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This module does not depend on any other module, so Terragrunt will create it in the first wave, in parallel with other dependency-free modules (such as the VPC), before the modules that do have dependencies.&lt;/p&gt;
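&lt;p&gt;Inside the building block, the &lt;strong&gt;in_vpc&lt;/strong&gt; input presumably maps to the &lt;strong&gt;aws_eip&lt;/strong&gt; resource's VPC flag, roughly as follows (a sketch using AWS provider ~&amp;gt; 4.x syntax; with provider 5.x you would use &lt;strong&gt;domain = "vpc"&lt;/strong&gt; instead):&lt;/p&gt;

```hcl
# Allocates an Elastic IP in the VPC scope.
resource "aws_eip" "this" {
  vpc  = var.in_vpc   # true
  tags = var.tags
}

output "eip_id" {
  value = aws_eip.this.id  # consumed by the NAT Gateway module
}
```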

&lt;p&gt;&lt;strong&gt;6. NAT Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;NAT Gateway&lt;/strong&gt; building block as its Terraform source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vpc-live/dev/nat-gateway/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;path_to_local_nat_gateway_building_block_or_git_repo_url&amp;gt;"
}

dependency "eip" {
  config_path = "../elastic-ip"
}

dependency "public-subnet" {
  config_path = "../public-subnet"
}

inputs = {
  eip_id = dependency.eip.outputs.eip_id
  subnet_id = dependency.public-subnet.outputs.public_subnets[0]
  name = "dev-nat-gw"
  tags = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This module requires outputs from both the &lt;strong&gt;Elastic IP&lt;/strong&gt; and &lt;strong&gt;Public Subnet&lt;/strong&gt; modules, given that the NAT Gateway needs to (a) have a public IP address and (b) be in a public subnet.&lt;/p&gt;
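&lt;p&gt;The building block itself presumably boils down to a single resource that ties those two dependencies together (a sketch; the building block's actual code may differ):&lt;/p&gt;

```hcl
# A NAT Gateway needs an Elastic IP allocation and a public subnet to live in.
resource "aws_nat_gateway" "this" {
  allocation_id = var.eip_id     # from the elastic-ip module's outputs
  subnet_id     = var.subnet_id  # from the public-subnet module's outputs
  tags          = merge({ Name = var.name }, var.tags)
}
```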

&lt;p&gt;&lt;strong&gt;7. Private Route Table&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;Route Table&lt;/strong&gt; building block as its Terraform source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vpc-live/dev/private-route-table/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;path_to_local_route_table_building_block_or_git_repo_url&amp;gt;"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "nat-gateway" {
  config_path = "../nat-gateway"
}

inputs = {
  route_tables = [
    {
      name      = "dev-private-rt"
      vpc_id    = dependency.vpc.outputs.vpc_id
      is_igw_rt = false

      routes = [
        {
          cidr_block = "0.0.0.0/0"
          nat_gw_id  = dependency.nat-gateway.outputs.nat_gw_id
        }
      ]

      tags = {}
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This module requires outputs from both the &lt;strong&gt;VPC&lt;/strong&gt; and &lt;strong&gt;NAT Gateway&lt;/strong&gt; modules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Private Subnet&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;Subnet&lt;/strong&gt; building block as its Terraform source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vpc-live/dev/private-subnet/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;path_to_local_subnet_building_block_or_git_repo_url&amp;gt;"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "private-route-table" {
  config_path = "../private-route-table"
}

inputs = {
  subnets = [
    {
      name                                = "dev-private-subnet"
      vpc_id                              = dependency.vpc.outputs.vpc_id
      cidr_block                          = "10.0.1.0/24"
      availability_zone                   = "eu-west-3a"
      map_public_ip_on_launch             = false
      private_dns_hostname_type_on_launch = "resource-name"
      is_public                           = false
      route_table_id                      = dependency.private-route-table.outputs.route_table_ids[0]
      tags                                = {}
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This module requires outputs from both the &lt;strong&gt;VPC&lt;/strong&gt; and &lt;strong&gt;Private Route Table&lt;/strong&gt; modules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Network Access Control List (NACL)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This module uses the &lt;strong&gt;NACL&lt;/strong&gt; building block as its Terraform source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vpc-live/dev/nacl/terragrunt.hcl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "&amp;lt;path_to_local_nacl_building_block_or_git_repo_url&amp;gt;"
}

dependency "vpc" {
  config_path = "../vpc"
}

dependency "public-subnet" {
  config_path = "../public-subnet"
}

dependency "private-subnet" {
  config_path = "../private-subnet"
}

inputs = {
  nacls = [
    {
      name   = "open-public-nacl"
      vpc_id = dependency.vpc.outputs.vpc_id
      egress = [
        {
          protocol   = "-1"
          rule_no    = 100
          action     = "allow"
          cidr_block = "0.0.0.0/0"
          from_port  = 0
          to_port    = 0
        }
      ]
      ingress = [
        {
          protocol   = "-1"
          rule_no    = 100
          action     = "allow"
          cidr_block = "0.0.0.0/0"
          from_port  = 0
          to_port    = 0
        }
      ]
      subnet_id = dependency.public-subnet.outputs.public_subnets[0]
      tags      = {}
    },
    {
      name   = "open-private-nacl"
      vpc_id = dependency.vpc.outputs.vpc_id
      egress = [
        {
          protocol   = "-1"
          rule_no    = 100
          action     = "allow"
          cidr_block = "0.0.0.0/0"
          from_port  = 0
          to_port    = 0
        }
      ]
      ingress = [
        {
          protocol   = "-1"
          rule_no    = 100
          action     = "allow"
          cidr_block = "0.0.0.0/0"
          from_port  = 0
          to_port    = 0
        }
      ]
      subnet_id = dependency.private-subnet.outputs.private_subnets[0]
      tags      = {}
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This module creates two NACLs: one for the public subnet and one for the private subnet. For simplicity, their rules allow all inbound and outbound traffic, which is not secure. You should restrict the &lt;strong&gt;ingress&lt;/strong&gt; and &lt;strong&gt;egress&lt;/strong&gt; rules to only the traffic you actually need.&lt;/p&gt;
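&lt;p&gt;As an illustration, a tighter public-subnet ingress might only allow HTTP, HTTPS, and the ephemeral port range for return traffic (a sketch; protocol "6" is TCP, and the rule numbers are arbitrary as long as they don't collide):&lt;/p&gt;

```hcl
ingress = [
  {
    protocol   = "6"          # TCP
    rule_no    = 100
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 80           # HTTP
    to_port    = 80
  },
  {
    protocol   = "6"
    rule_no    = 110
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 443          # HTTPS
    to_port    = 443
  },
  {
    protocol   = "6"
    rule_no    = 120
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 1024         # ephemeral ports for return traffic
    to_port    = 65535
  }
]
```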

&lt;p&gt;The module requires outputs from the &lt;strong&gt;VPC&lt;/strong&gt;, &lt;strong&gt;Public Subnet&lt;/strong&gt;, and &lt;strong&gt;Private Subnet&lt;/strong&gt; modules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10. Creating the Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Terragrunt will allow us to orchestrate the creation of our VPC and its components through the modules we've created above.&lt;/p&gt;

&lt;p&gt;First, we'll need to &lt;strong&gt;cd&lt;/strong&gt; into our environment's directory from the terminal:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd vpc-live/dev&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here, we can now run the following command to instruct Terragrunt to loop through all the module directories and create the different resources:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terragrunt run-all apply&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We should then see a prompt similar to the one below in our terminal. Enter &lt;strong&gt;y&lt;/strong&gt; to confirm the creation of all the resources:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femi4ljzrv6ww5dhp35uo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femi4ljzrv6ww5dhp35uo.png" alt="terragrunt run-all apply" width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We've seen how to orchestrate the creation of a VPC and its components with Terragrunt, using basic Terraform building blocks.&lt;/p&gt;

&lt;p&gt;We could create as many building blocks as we'd like with Terraform, then use Terragrunt to orchestrate the creation of an architecture that fits the needs of different projects (and in different environments) by simply reusing these building blocks. This will help us to keep our code DRY.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
      <category>terragrunt</category>
    </item>
    <item>
      <title>Terraform &amp; Terragrunt to Create a VPC and its Components (Part I)</title>
      <dc:creator>Stéphane Noutsa</dc:creator>
      <pubDate>Fri, 03 Nov 2023 00:10:06 +0000</pubDate>
      <link>https://forem.com/aws-builders/terraform-terragrunt-to-create-a-vpc-and-its-components-part-i-1hp7</link>
      <guid>https://forem.com/aws-builders/terraform-terragrunt-to-create-a-vpc-and-its-components-part-i-1hp7</guid>
      <description>&lt;p&gt;In the era of cloud computing, infrastructure as code (IaC) has gained immense popularity due to its ability to provision and manage infrastructure resources in a consistent and automated manner. Terraform, an open-source IaC tool by HashiCorp, has emerged as a leading choice for provisioning and managing cloud resources, including those offered by Amazon Web Services (AWS).&lt;/p&gt;

&lt;p&gt;In this article, we will explore the process of using Terraform to create basic AWS modules (which we'll call building blocks), enabling you to deploy and manage infrastructure easily and efficiently. We'll create the following building blocks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A VPC (obviously)&lt;/li&gt;
&lt;li&gt;An internet gateway&lt;/li&gt;
&lt;li&gt;A route table&lt;/li&gt;
&lt;li&gt;A subnet&lt;/li&gt;
&lt;li&gt;An elastic IP (EIP)&lt;/li&gt;
&lt;li&gt;A NAT gateway&lt;/li&gt;
&lt;li&gt;A NACL for the subnets&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is the first article in a 2-part series on how to use Terraform and Terragrunt to create a VPC with its components, and it assumes some familiarity with Terraform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;0. Common Files&lt;/strong&gt;&lt;br&gt;
Our Terraform building blocks will be independent projects that nevertheless share two common files: &lt;strong&gt;provider.tf&lt;/strong&gt; and &lt;strong&gt;variables.tf&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "AWS_ACCESS_KEY_ID" {
  type = string
}

variable "AWS_SECRET_ACCESS_KEY" {
  type = string
}

variable "AWS_REGION" {
  type = string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should be noted that each building block will add more variables to its &lt;strong&gt;variables.tf&lt;/strong&gt; file depending on its requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;provider.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt;= 1.4.2"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.0"
    }
  }
}

provider "aws" {
  access_key = var.AWS_ACCESS_KEY_ID
  secret_key = var.AWS_SECRET_ACCESS_KEY
  region     = var.AWS_REGION
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;1. VPC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The VPC will be the main building block that will contain all the other building blocks.&lt;br&gt;
(Take note of the output section which outputs the VPC's ID)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "vpc" {
  cidr_block                       = var.vpc_cidr
  instance_tenancy                 = var.instance_tenancy
  enable_dns_support               = var.enable_dns_support
  enable_dns_hostnames             = var.enable_dns_hostnames
  assign_generated_ipv6_cidr_block = var.assign_generated_ipv6_cidr_block

  tags = merge(var.vpc_tags, {
    Name = var.vpc_name
  })
}

output "vpc_id" {
  value = aws_vpc.vpc.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt; (additional variables)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "vpc_cidr" {
  type = string
}

variable "vpc_name" {
  type = string
}

variable "instance_tenancy" {
  type    = string
  default = "default"
}

variable "enable_dns_support" {
  type    = bool
  default = true
}

variable "enable_dns_hostnames" {
  type = bool
}

variable "assign_generated_ipv6_cidr_block" {
  type    = bool
  default = false
}

variable "vpc_tags" {
  type = map(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
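
&lt;p&gt;To illustrate how this building block would be consumed, here's a minimal, hypothetical &lt;strong&gt;terraform.tfvars&lt;/strong&gt; for it (the AWS credential variables are typically supplied through &lt;code&gt;TF_VAR_&lt;/code&gt; environment variables rather than committed to a file):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Example values only - adapt to your own project
vpc_cidr             = "10.0.0.0/16"
vpc_name             = "demo-vpc"
enable_dns_hostnames = true

vpc_tags = {
  Project     = "demo"
  Environment = "dev"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
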



&lt;p&gt;&lt;strong&gt;2. Internet Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The internet gateway will allow two-way communication between the Internet and resources in the VPC (more precisely, resources in public subnets).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_internet_gateway" "igw" {
  vpc_id = var.vpc_id

  tags = merge(var.tags, {
    Name = var.name
  })
}

output "igw_id" {
  value = aws_internet_gateway.igw.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt; (additional variables)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "vpc_id" {
  type = string
}

variable "name" {
  type = string
}

variable "tags" {
  type = map(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Route Table&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The route table will have a route that will allow resources in public and private subnets to communicate with the Internet through an Internet Gateway (for public subnets) or a NAT Gateway (for private subnets).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table" "route_tables" {
  for_each = { for rt in var.route_tables : rt.name =&amp;gt; rt }

  vpc_id = each.value.vpc_id

  dynamic "route" {
    for_each = { for route in each.value.routes : route.cidr_block =&amp;gt; route if each.value.is_igw_rt }

    content {
      cidr_block = route.value.cidr_block
      gateway_id = route.value.igw_id
    }
  }

  dynamic "route" {
    for_each = { for route in each.value.routes : route.cidr_block =&amp;gt; route if !each.value.is_igw_rt }

    content {
      cidr_block     = route.value.cidr_block
      nat_gateway_id = route.value.nat_gw_id
    }
  }

  tags = merge(each.value.tags, {
    Name = each.value.name
  })
}

output "route_table_ids" {
  value = values(aws_route_table.route_tables)[*].id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The two dynamic &lt;strong&gt;route&lt;/strong&gt; blocks use &lt;code&gt;if&lt;/code&gt; filters in their &lt;code&gt;for&lt;/code&gt; expressions to determine which entries to create: internet gateway routes when &lt;code&gt;is_igw_rt&lt;/code&gt; is true, and NAT gateway routes otherwise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt; (additional variables)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "route_tables" {
  type = list(object({
    name      = string
    vpc_id    = string
    is_igw_rt = bool

    routes = list(object({
      cidr_block = string
      igw_id     = optional(string)
      nat_gw_id  = optional(string)
    }))

    tags = map(string)
  }))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
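
&lt;p&gt;To make the &lt;code&gt;is_igw_rt&lt;/code&gt; flag concrete, here's a hypothetical value for &lt;code&gt;route_tables&lt;/code&gt; that creates one public route table (routing through an internet gateway) and one private route table (routing through a NAT gateway); all IDs are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Example value only - the IDs are placeholders
route_tables = [
  {
    name      = "public-rt"
    vpc_id    = "vpc-xxxxxxxx"
    is_igw_rt = true
    routes = [
      { cidr_block = "0.0.0.0/0", igw_id = "igw-xxxxxxxx" }
    ]
    tags = {}
  },
  {
    name      = "private-rt"
    vpc_id    = "vpc-xxxxxxxx"
    is_igw_rt = false
    routes = [
      { cidr_block = "0.0.0.0/0", nat_gw_id = "nat-xxxxxxxx" }
    ]
    tags = {}
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
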



&lt;p&gt;&lt;strong&gt;4. Subnet&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This building block will allow the creation of public subnets or private subnets, depending on the values passed to its variables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create public subnets
resource "aws_subnet" "public_subnets" {
  for_each = { for subnet in var.subnets : subnet.name =&amp;gt; subnet if subnet.is_public }

  vpc_id                              = each.value.vpc_id
  cidr_block                          = each.value.cidr_block
  availability_zone                   = each.value.availability_zone
  map_public_ip_on_launch             = each.value.map_public_ip_on_launch
  private_dns_hostname_type_on_launch = each.value.private_dns_hostname_type_on_launch

  tags = merge(each.value.tags, {
    Name = each.value.name
  })
}

# Associate public subnets with their route table
resource "aws_route_table_association" "public_subnets" {
  for_each = { for subnet in var.subnets : subnet.name =&amp;gt; subnet if subnet.is_public }

  subnet_id      = aws_subnet.public_subnets[each.value.name].id
  route_table_id = each.value.route_table_id
}

# Create private subnets
resource "aws_subnet" "private_subnets" {
  for_each = { for subnet in var.subnets : subnet.name =&amp;gt; subnet if !subnet.is_public }

  vpc_id                              = each.value.vpc_id
  cidr_block                          = each.value.cidr_block
  availability_zone                   = each.value.availability_zone
  private_dns_hostname_type_on_launch = each.value.private_dns_hostname_type_on_launch

  tags = merge(each.value.tags, {
    Name = each.value.name
  })
}

# Associate private subnets with their route table
resource "aws_route_table_association" "private_subnets" {
  for_each = { for subnet in var.subnets : subnet.name =&amp;gt; subnet if !subnet.is_public }

  subnet_id      = aws_subnet.private_subnets[each.value.name].id
  route_table_id = each.value.route_table_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt; (additional variables)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "subnets" {
  type = list(object({
    name                                = string
    vpc_id                              = string
    cidr_block                          = string
    availability_zone                   = optional(string)
    map_public_ip_on_launch             = optional(bool, true)
    private_dns_hostname_type_on_launch = optional(string, "resource-name")
    is_public                           = optional(bool, true)
    route_table_id                      = string
    tags                                = map(string)
  }))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
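
&lt;p&gt;As a sketch, here's a hypothetical &lt;code&gt;subnets&lt;/code&gt; value that creates one public and one private subnet (IDs are placeholders; the optional attributes fall back to their defaults when omitted):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Example value only - the IDs are placeholders
subnets = [
  {
    name              = "public-subnet-a"
    vpc_id            = "vpc-xxxxxxxx"
    cidr_block        = "10.0.0.0/24"
    availability_zone = "us-east-1a"
    route_table_id    = "rtb-xxxxxxxx" # public route table
    tags              = {}
  },
  {
    name              = "private-subnet-a"
    vpc_id            = "vpc-xxxxxxxx"
    cidr_block        = "10.0.2.0/24"
    availability_zone = "us-east-1a"
    is_public         = false
    route_table_id    = "rtb-yyyyyyyy" # private route table
    tags              = {}
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
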



&lt;p&gt;&lt;strong&gt;5. Elastic IP (EIP)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The EIP can be used to assign a static public IP to a resource such as an EC2 instance or a NAT Gateway. In our case, it will be attached to the NAT Gateway that we'll create.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_eip" "eip" {
  vpc = var.in_vpc

  tags = merge(var.tags, {})
}

output "eip_id" {
  value = aws_eip.eip.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt; (additional variables)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "in_vpc" {
  type = bool
}

variable "tags" {
  type = map(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6. Network Address Translation (NAT) Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The NAT Gateway will allow resources in private subnets to make one-way requests to the Internet (and receive responses). It has to be placed in a public subnet and must have a public IP address, which is why we create an EIP first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_nat_gateway" "nat_gw" {
  allocation_id = var.eip_id
  subnet_id     = var.subnet_id

  tags = merge(var.tags, {
    Name = var.name
  })
}

output "nat_gw_id" {
  value = aws_nat_gateway.nat_gw.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt; (additional variables)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "name" {
  type = string
}

variable "eip_id" {
  type = string
}

variable "subnet_id" {
  type        = string
  description = "The ID of the public subnet in which the NAT Gateway should be placed"
}

variable "tags" {
  type = map(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
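
&lt;p&gt;For illustration, a hypothetical &lt;strong&gt;terraform.tfvars&lt;/strong&gt; for this building block could look as follows; in practice, &lt;code&gt;eip_id&lt;/code&gt; and &lt;code&gt;subnet_id&lt;/code&gt; would come from the outputs of the EIP and subnet building blocks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Example values only - the IDs are placeholders
name      = "demo-nat-gw"
eip_id    = "eipalloc-xxxxxxxx"
subnet_id = "subnet-xxxxxxxx"
tags      = {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
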



&lt;p&gt;&lt;strong&gt;7. Network Access Control List (NACL)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The NACL acts like a firewall at the subnet level, allowing or denying traffic into or out of the subnet.&lt;br&gt;
After a NACL is created, it has to be associated with a subnet before it can start filtering its traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_network_acl" "nacls" {
  for_each = { for nacl in var.nacls : nacl.name =&amp;gt; nacl }

  vpc_id = each.value.vpc_id

  dynamic "egress" {
    for_each = { for rule in each.value.egress : rule.rule_no =&amp;gt; rule }

    content {
      protocol   = egress.value.protocol
      rule_no    = egress.value.rule_no
      action     = egress.value.action
      cidr_block = egress.value.cidr_block
      from_port  = egress.value.from_port
      to_port    = egress.value.to_port
    }
  }

  dynamic "ingress" {
    for_each = { for rule in each.value.ingress : rule.rule_no =&amp;gt; rule }

    content {
      protocol   = ingress.value.protocol
      rule_no    = ingress.value.rule_no
      action     = ingress.value.action
      cidr_block = ingress.value.cidr_block
      from_port  = ingress.value.from_port
      to_port    = ingress.value.to_port
    }
  }

  tags = merge(each.value.tags, {
    Name = each.value.name
  })
}

resource "aws_network_acl_association" "nacl_associations" {
  for_each = { for nacl in var.nacls : "${nacl.name}_${nacl.subnet_id}" =&amp;gt; nacl }

  network_acl_id = aws_network_acl.nacls[each.value.name].id
  subnet_id      = each.value.subnet_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt; (additional variables)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "nacls" {
  type = list(object({
    name   = string
    vpc_id = string
    egress = list(object({
      protocol   = string
      rule_no    = number
      action     = string
      cidr_block = string
      from_port  = number
      to_port    = number
    }))
    ingress = list(object({
      protocol   = string
      rule_no    = number
      action     = string
      cidr_block = string
      from_port  = number
      to_port    = number
    }))
    subnet_id = string
    tags      = map(string)
  }))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
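
&lt;p&gt;For illustration, a hypothetical &lt;code&gt;nacls&lt;/code&gt; value for a public subnet could look like this (it allows inbound HTTPS from anywhere and outbound responses on the ephemeral ports; all IDs are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Example value only - the IDs are placeholders
nacls = [
  {
    name      = "public-nacl"
    vpc_id    = "vpc-xxxxxxxx"
    subnet_id = "subnet-xxxxxxxx"
    ingress = [
      { protocol = "tcp", rule_no = 100, action = "allow", cidr_block = "0.0.0.0/0", from_port = 443, to_port = 443 }
    ]
    egress = [
      { protocol = "tcp", rule_no = 100, action = "allow", cidr_block = "0.0.0.0/0", from_port = 1024, to_port = 65535 }
    ]
    tags = {}
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
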



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After creating all these building blocks, we can proceed to orchestrate the creation of a VPC with its components depending on its specific needs. This orchestration can be done using another IaC tool called &lt;strong&gt;Terragrunt&lt;/strong&gt;. We'll look at that in the second (and last) part of this series.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Create a Secure VPC with SSM-Managed Private EC2 Instances Using the AWS CLI</title>
      <dc:creator>Stéphane Noutsa</dc:creator>
      <pubDate>Thu, 02 Nov 2023 23:49:45 +0000</pubDate>
      <link>https://forem.com/aws-builders/create-a-secure-vpc-with-ssm-managed-private-ec2-instances-using-the-aws-cli-pan</link>
      <guid>https://forem.com/aws-builders/create-a-secure-vpc-with-ssm-managed-private-ec2-instances-using-the-aws-cli-pan</guid>
      <description>&lt;p&gt;In this blog post, we will walk through how to create a secure VPC in AWS with EC2 instances in private subnets that are managed by Systems Manager (SSM), are in an Auto Scaling Group (ASG), and are fronted by an Application Load Balancer (ALB), all using the AWS CLI. This setup will allow you to run your applications in a secure, highly available, fault-tolerant, and scalable environment.&lt;/p&gt;

&lt;p&gt;I'm going to assume that you have a basic understanding of what a VPC, a subnet (private or public), security groups, Network Access Control Lists (NACLs), an ALB, an ASG, an Internet Gateway (IGW), and a NAT Gateway are.&lt;/p&gt;

&lt;p&gt;That said, SSM is a service that helps you automate and manage your AWS resources at scale. You can use SSM to run commands, patch software, configure settings, and monitor the performance of your EC2 instances.&lt;/p&gt;

&lt;p&gt;To create a secure VPC with EC2 instances in private subnets that are managed by SSM and fronted by an ALB, we'll perform the steps below (you'll need the AWS CLI installed and configured for your account to follow along).&lt;/p&gt;

&lt;p&gt;We'll start by creating a VPC with four subnets: two public subnets and two private subnets in two AZs. This VPC will have a CIDR block of 10.0.0.0/16:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws ec2 create-vpc --cidr-block 10.0.0.0/16&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then we'll create an IGW and attach it to our VPC so that the VPC will be able to communicate with the Internet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that you would have to replace igw-xxxxxxxx and vpc-xxxxxxxx respectively with the IDs of the IGW and VPC we would have previously created.&lt;br&gt;
We'll then create four subnets in our VPC, two in the us-east-1a Availability Zone (AZ) and two in the us-east-1b AZ (I'm creating the resources in the N. Virginia region, us-east-1). They'll have the CIDR blocks &lt;strong&gt;10.0.0.0/24&lt;/strong&gt;, &lt;strong&gt;10.0.1.0/24&lt;/strong&gt;, &lt;strong&gt;10.0.2.0/24&lt;/strong&gt; and &lt;strong&gt;10.0.3.0/24&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.0.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24 --availability-zone us-east-1b
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.2.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.3.0/24 --availability-zone us-east-1b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A prerequisite for SSM to manage our private EC2 instances is to create VPC interface endpoints for the SSM services and associate them with our private subnets. The private DNS setting needs to be enabled to allow communication with the SSM service, and the endpoints' security group must allow inbound HTTPS (port 443) traffic from the instances:&lt;br&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface --vpc-id vpc-xxxxxxxx --service-name com.amazonaws.us-east-1.ssm --subnet-ids subnet-33333333 subnet-44444444 --private-dns-enabled
aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface --vpc-id vpc-xxxxxxxx --service-name com.amazonaws.us-east-1.ssmmessages --subnet-ids subnet-33333333 subnet-44444444 --private-dns-enabled
aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface --vpc-id vpc-xxxxxxxx --service-name com.amazonaws.us-east-1.ec2messages --subnet-ids subnet-33333333 subnet-44444444 --private-dns-enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In order to secure our subnets, we'll create NACLs for the private and public subnets, and attach them to their respective subnets.&lt;/p&gt;

&lt;p&gt;We'll start by creating the NACLs for the public and private subnets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-network-acl --vpc-id vpc-xxxxxxxx
aws ec2 create-network-acl --vpc-id vpc-xxxxxxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll then create two entries for the public subnets' NACL that respectively allow inbound traffic from and outbound traffic to the Internet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-network-acl-entry --network-acl-id acl-11111111 --ingress --rule-number 100 --protocol tcp --cidr-block 0.0.0.0/0 --rule-action allow
aws ec2 create-network-acl-entry --network-acl-id acl-11111111 --egress --rule-number 100 --protocol tcp --cidr-block 0.0.0.0/0 --rule-action allow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then create entries for the private subnets' NACL that allow inbound traffic from the ALB on port 443, outbound HTTPS traffic to the Internet, and outbound responses to the ALB on the ephemeral ports (make sure to replace the x.x.x.x/32 CIDR block with the ALB's IP address after creating it):&lt;br&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-network-acl-entry --network-acl-id acl-22222222 --ingress --rule-number 100 --protocol tcp --port-range From=443,To=443 --cidr-block x.x.x.x/32 --rule-action allow
aws ec2 create-network-acl-entry --network-acl-id acl-22222222 --egress --rule-number 100 --protocol tcp --port-range From=443,To=443 --cidr-block 0.0.0.0/0 --rule-action allow
aws ec2 create-network-acl-entry --network-acl-id acl-22222222 --egress --rule-number 200 --protocol tcp --port-range From=1024,To=65535 --cidr-block x.x.x.x/32 --rule-action allow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we allocate two Elastic IP addresses for the NAT gateways that will be provisioned in our public subnets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 allocate-address --domain vpc --output text --query 'AllocationId'
aws ec2 allocate-address --domain vpc --output text --query 'AllocationId'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands return the allocation IDs of the Elastic IP addresses, such as eipalloc-11111111 and eipalloc-22222222.&lt;/p&gt;

&lt;p&gt;We'll then create a NAT Gateway in each of our public subnets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-nat-gateway --subnet-id subnet-11111111 --allocation-id eipalloc-11111111 --output text --query 'NatGateway.NatGatewayId'
aws ec2 create-nat-gateway --subnet-id subnet-22222222 --allocation-id eipalloc-22222222 --output text --query 'NatGateway.NatGatewayId'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands return the NAT gateway IDs, such as nat-xxxxxxxxxxxxxxxx and nat-yyyyyyyyyyyyyyyy.&lt;/p&gt;

&lt;p&gt;We'll wait for our newly created NAT Gateways to become available before proceeding. While waiting, we can check the status of the NAT Gateways by running the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws ec2 describe-nat-gateways --nat-gateway-ids nat-xxxxxxxxxxxxxxxx nat-yyyyyyyyyyyyyyyy --output table&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The command returns a table with information about the NAT gateways, such as their state, subnet ID, and elastic IP address. When the state of both NAT gateways is &lt;strong&gt;available&lt;/strong&gt;, you can proceed to the next step.&lt;/p&gt;

&lt;p&gt;We'll then create a route table for each private subnet and add a route to its NAT gateway. A route table can only hold one route per destination CIDR, so instead of adding both NAT gateways to a single table, each private subnet gets its own route table pointing to the NAT gateway in its AZ:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-route-table --vpc-id vpc-xxxxxxxx --output text --query 'RouteTable.RouteTableId'
aws ec2 create-route-table --vpc-id vpc-xxxxxxxx --output text --query 'RouteTable.RouteTableId'

aws ec2 create-route --route-table-id rtb-11111111 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-xxxxxxxxxxxxxxxx
aws ec2 create-route --route-table-id rtb-33333333 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-yyyyyyyyyyyyyyyy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first two commands create the route tables and return their IDs, such as rtb-11111111 and rtb-33333333. The last two commands add a default route (0.0.0.0/0) to each table, pointing to one of the NAT gateways.&lt;/p&gt;

&lt;p&gt;Next, we'll associate each route table with its private subnet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 associate-route-table --route-table-id rtb-11111111 --subnet-id subnet-33333333
aws ec2 associate-route-table --route-table-id rtb-33333333 --subnet-id subnet-44444444
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands will allow resources provisioned in the private subnets to make requests to the Internet through the NAT Gateway, but they won't be reachable from the Internet.&lt;/p&gt;

&lt;p&gt;Next, we'll create a route table and associate it with our public subnets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-route-table --vpc-id vpc-xxxxxxxx

aws ec2 associate-route-table --route-table-id rtb-22222222 --subnet-id subnet-11111111
aws ec2 associate-route-table --route-table-id rtb-22222222 --subnet-id subnet-22222222
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll then create a route in the route table that points to the internet gateway:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws ec2 create-route --route-table-id rtb-22222222 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The next thing we'll do is create an ALB that will forward traffic to our private EC2 instances in an ASG. For this, we'll first need to create a public security group that allows inbound HTTPS traffic (port 443) from the Internet (0.0.0.0/0). For convenience, we'll assume the created security group has the ID sg-11111111:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-security-group --group-name alb-sg --description "Security group for ALB" --vpc-id vpc-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-11111111 --protocol tcp --port 443 --cidr 0.0.0.0/0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll then create our ALB (its listener and target group will be added later):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws elbv2 create-load-balancer --name alb --type application --subnets subnet-11111111 subnet-22222222 --security-groups sg-11111111 --scheme internet-facing&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Given that our ALB needs to forward traffic to target groups, we'll create a security group for our private EC2 instances that will be in an ASG, which will serve as our ALB's target group:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws ec2 create-security-group --group-name private-sg --description "Security group for private EC2 instances" --vpc-id vpc-xxxxxxxx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, we'll add rules to our private security group (with ID sg-22222222 for convenience) that allow inbound traffic from our ALB on port 80 (HTTP) and port 443 (HTTPS):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 authorize-security-group-ingress --group-id sg-22222222 --protocol tcp --port 80 --source-group sg-11111111
aws ec2 authorize-security-group-ingress --group-id sg-22222222 --protocol tcp --port 443 --source-group sg-11111111
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we'll add rules to the security group that allow outbound traffic to the internet (CIDR block 0.0.0.0/0) on port 80 (HTTP) and port 443 (HTTPS):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 authorize-security-group-egress --group-id sg-22222222 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-egress --group-id sg-22222222 --protocol tcp --port 443 --cidr 0.0.0.0/0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In order to allow our EC2 instances to be managed by SSM, we'll need to create an IAM role which will be used as the EC2 instance profile.&lt;/p&gt;

&lt;p&gt;First, we'll create a trust policy document that allows the EC2 service to assume the role. Since the role will be attached to EC2 instances through an instance profile, the trusted principal must be &lt;strong&gt;ec2.amazonaws.com&lt;/strong&gt; (the SSM agent running on the instance then uses the role's credentials). You can use the following JSON template and save it as &lt;strong&gt;trustpolicy.json&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we'll create an IAM role called &lt;strong&gt;EC2RoleForSSM&lt;/strong&gt; using the trust policy document:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws iam create-role --role-name EC2RoleForSSM --assume-role-policy-document file://trustpolicy.json&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then we'll attach the AWS-managed policy, &lt;strong&gt;AmazonSSMManagedInstanceCore&lt;/strong&gt;, to the IAM role that grants SSM the necessary permissions to manage our EC2 instances:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws iam attach-role-policy --role-name EC2RoleForSSM --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, we'll create an instance profile (&lt;strong&gt;ec2-ssm-profile&lt;/strong&gt;) for our IAM role &lt;strong&gt;EC2RoleForSSM&lt;/strong&gt;, and add the IAM role to the instance profile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-instance-profile --instance-profile-name ec2-ssm-profile
aws iam add-role-to-instance-profile --instance-profile-name ec2-ssm-profile --role-name EC2RoleForSSM
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll then create a launch template that specifies the configuration of our private EC2 instances, such as their AMI ID, instance type, instance profile, and security group. We won't create or use an SSH key pair because we'll manage our instances using SSM instead.&lt;br&gt;
We'll use the latest version of the Amazon Linux 2023 AMI in the N. Virginia region (us-east-1), because it comes with an SSM agent preinstalled (its ID at the time of writing is &lt;strong&gt;ami-0889a44b331db0194&lt;/strong&gt;), and call this launch template &lt;strong&gt;private-launch-template&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-launch-template \
--launch-template-name private-launch-template \
--version-description "Initial version" \
--launch-template-data \
'{"ImageId":"ami-0889a44b331db0194","InstanceType":"t2.micro", "IamInstanceProfile": {"Name": "ec2-ssm-profile"},"SecurityGroupIds":["sg-22222222"]}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now create our ASG that will use the launch template we just created, and will specify the desired capacity, minimum size, maximum size, and the private subnets to launch instances into (we use &lt;strong&gt;--vpc-zone-identifier&lt;/strong&gt; with our private subnet IDs so the instances are actually placed in those subnets). We'll need to provide a name for the ASG (private-asg) and a health check type (ELB in this case, so as to use the ALB's health checks). We'll assume our created launch template's ID is &lt;strong&gt;lt-xxxxxxxxxxxxxxxx&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws autoscaling create-auto-scaling-group \
--auto-scaling-group-name private-asg \
--launch-template "LaunchTemplateId=lt-xxxxxxxxxxxxxxxx,Version=1" \
--min-size 2 \
--max-size 4 \
--desired-capacity 2 \
--vpc-zone-identifier "subnet-33333333,subnet-44444444" \
--health-check-type ELB

&lt;/div&gt;



&lt;p&gt;The next step will be to create our ALB's target group (&lt;strong&gt;alb-tg&lt;/strong&gt;) which will listen on port 443 (HTTPS):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws elbv2 create-target-group \
--name alb-tg \
--protocol HTTP \
--port 443 \
--vpc-id vpc-xxxxxxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we'll attach our target group &lt;strong&gt;alb-tg&lt;/strong&gt; to our previously created ASG. Be sure to use the right Amazon Resource Name (ARN) for the target group we just created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws autoscaling attach-load-balancer-target-groups \
--auto-scaling-group-name private-asg \
--target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/alb-tg/1234567890abcdef
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Having created our ALB and target groups and their respective security groups, we'll now create a listener that forwards requests on port 443 of the ALB to our target group. We'll assume that we had previously requested a certificate from ACM and its ARN is &lt;strong&gt;arn:aws:acm:us-east-1:123456789012:certificate/abcdef12-3456-7890-abcd-ef1234567890&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws elbv2 create-listener --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/alb/xxxxxxxxxxxxxxxx --protocol HTTPS --port 443 --certificates CertificateArn=arn:aws:acm:us-east-1:123456789012:certificate/abcdef12-3456-7890-abcd-ef1234567890 --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/alb-tg/xxxxxxxxxxxxxxxx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We can now test our setup by accessing our ALB's DNS name from a web browser. Since we haven't installed any application on our servers, the ALB will return an error response (typically a 502 or 503) rather than actual content.&lt;/p&gt;

&lt;p&gt;Congratulations! You have successfully created a secure VPC in AWS with EC2 instances in private subnets that are managed by Systems Manager and fronted by an Application Load Balancer.&lt;/p&gt;

&lt;p&gt;PS: Please feel free to leave questions or comments on how to improve the security of this VPC. We learn every day!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cli</category>
      <category>networking</category>
      <category>ssm</category>
    </item>
  </channel>
</rss>
