<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Stephanie Makori</title>
    <description>The latest articles on Forem by Stephanie Makori (@stephanie_makori_845bb2c0).</description>
    <link>https://forem.com/stephanie_makori_845bb2c0</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3829770%2F6b12f232-db11-4209-80a0-654ac696718d.JPG</url>
      <title>Forem: Stephanie Makori</title>
      <link>https://forem.com/stephanie_makori_845bb2c0</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/stephanie_makori_845bb2c0"/>
    <language>en</language>
    <item>
      <title>Building a 3-Tier Multi-Region High Availability Architecture with Terraform</title>
      <dc:creator>Stephanie Makori</dc:creator>
      <pubDate>Fri, 17 Apr 2026 06:17:36 +0000</pubDate>
      <link>https://forem.com/stephanie_makori_845bb2c0/building-a-3-tier-multi-region-high-availability-architecture-with-terraform-1l82</link>
      <guid>https://forem.com/stephanie_makori_845bb2c0/building-a-3-tier-multi-region-high-availability-architecture-with-terraform-1l82</guid>
      <description>&lt;p&gt;High availability is one of the most important goals when designing cloud infrastructure. In a production environment, deploying resources in a single region is not enough because a regional outage can make the entire application unavailable. To solve this, I built a &lt;strong&gt;3-tier multi-region high availability architecture on AWS using Terraform&lt;/strong&gt;, designed to remain available even if one AWS region fails.&lt;/p&gt;

&lt;p&gt;This infrastructure consists of five reusable Terraform modules that provision networking, load balancing, compute, database, and DNS failover resources across two AWS regions. The result is a resilient architecture where traffic automatically shifts to a secondary region if the primary region becomes unhealthy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;The infrastructure follows a standard &lt;strong&gt;3-tier architecture&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Presentation Tier&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Route53 directs traffic to an Application Load Balancer (ALB) in the active region.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Application Tier&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
EC2 instances are managed by an Auto Scaling Group (ASG) across multiple Availability Zones.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Database Tier&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Amazon RDS runs in Multi-AZ mode in the primary region with a cross-region read replica in the secondary region.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Traffic flows like this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Route53 → ALB → EC2 Auto Scaling Group → RDS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This design ensures redundancy at every layer. If one Availability Zone fails, traffic is served from another AZ. If the primary region fails, Route53 automatically redirects traffic to the secondary region.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Five Terraform Modules?
&lt;/h2&gt;

&lt;p&gt;To keep the infrastructure maintainable and reusable, I split the deployment into five Terraform modules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VPC Module&lt;/strong&gt; provisions networking resources such as VPCs, public/private subnets, route tables, internet gateways, and NAT gateways.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ALB Module&lt;/strong&gt; provisions the Application Load Balancer, listeners, target groups, and ALB security groups.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ASG Module&lt;/strong&gt; provisions launch templates, EC2 instances, scaling policies, alarms, and instance security groups.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RDS Module&lt;/strong&gt; provisions the Multi-AZ primary database and cross-region replica.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Route53 Module&lt;/strong&gt; provisions health checks and failover DNS records.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using modules avoids duplicating code and allows each infrastructure component to manage a single responsibility. This also makes troubleshooting easier because changes can be isolated to one module without affecting the others.&lt;/p&gt;
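
&lt;p&gt;To make the wiring concrete, a root configuration that composes modules like these could look roughly as follows. This is a simplified sketch; the module paths and variable names are illustrative rather than the exact ones from my repository.&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "vpc" {
  source     = "./modules/vpc"
  cidr_block = "10.0.0.0/16"
}

module "alb" {
  source     = "./modules/alb"
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.public_subnet_ids
}

module "asg" {
  source           = "./modules/asg"
  subnet_ids       = module.vpc.private_subnet_ids
  target_group_arn = module.alb.target_group_arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The RDS and Route53 modules are called in the same way, consuming outputs from the modules above.&lt;/p&gt;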

&lt;h2&gt;
  
  
  Data Flow Between Modules
&lt;/h2&gt;

&lt;p&gt;One of the biggest advantages of modular Terraform is how &lt;strong&gt;outputs from one module become inputs to another&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For example, the ALB module creates a target group and exports its ARN. That ARN is then passed to the ASG module so the EC2 instances register with the load balancer target group.&lt;/p&gt;

&lt;p&gt;Similarly, the RDS primary database module exports its database ARN, which is passed into the secondary region RDS module as the replication source. This creates a cross-region read replica.&lt;/p&gt;

&lt;p&gt;This flow creates a dependency chain:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VPC outputs → ALB inputs → ASG inputs → RDS inputs → Route53 inputs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This keeps the root Terraform configuration clean while each module handles its own internal complexity.&lt;/p&gt;
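
&lt;p&gt;The handoff between modules relies on ordinary Terraform &lt;code&gt;output&lt;/code&gt; blocks. A minimal sketch, with placeholder names:&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# modules/alb/outputs.tf
output "target_group_arn" {
  description = "ARN of the ALB target group"
  value       = aws_lb_target_group.app.arn
}

# root configuration: the ALB output becomes an ASG input
module "asg" {
  source           = "./modules/asg"
  target_group_arn = module.alb.target_group_arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Terraform infers the creation order from these references, so the ALB always exists before the ASG tries to attach instances to it.&lt;/p&gt;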

&lt;h2&gt;
  
  
  Route53 Failover in Action
&lt;/h2&gt;

&lt;p&gt;A major feature of this architecture is &lt;strong&gt;automatic DNS failover&lt;/strong&gt; using Route53 health checks.&lt;/p&gt;

&lt;p&gt;Route53 continuously checks the health of the primary region ALB endpoint. If the primary region fails health checks, Route53 marks it unhealthy and redirects DNS traffic to the ALB in the secondary region.&lt;/p&gt;

&lt;p&gt;The failover process works like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Route53 detects the failed health check in the primary region&lt;/li&gt;
&lt;li&gt;DNS failover policy marks the primary record unhealthy&lt;/li&gt;
&lt;li&gt;Traffic is routed to the secondary ALB&lt;/li&gt;
&lt;li&gt;Users continue accessing the application with minimal downtime&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This failover typically takes about &lt;strong&gt;1 to 2 minutes&lt;/strong&gt;, depending on DNS TTL and health check intervals.&lt;/p&gt;

&lt;p&gt;This approach provides automatic disaster recovery without manual intervention.&lt;/p&gt;
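
&lt;p&gt;As a hedged sketch of what the Route53 module provisions, the failover pair looks something like this. The domain, zone, thresholds, and resource names are placeholders, not the exact values from my project.&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route53_health_check" "primary" {
  fqdn              = module.alb_primary.dns_name
  type              = "HTTP"
  port              = 80
  resource_path     = "/"
  failure_threshold = 3
  request_interval  = 30
}

# PRIMARY record: served while the health check passes
resource "aws_route53_record" "primary" {
  zone_id         = var.zone_id
  name            = "app.example.com"
  type            = "A"
  set_identifier  = "primary"
  health_check_id = aws_route53_health_check.primary.id

  failover_routing_policy {
    type = "PRIMARY"
  }

  alias {
    name                   = module.alb_primary.dns_name
    zone_id                = module.alb_primary.zone_id
    evaluate_target_health = true
  }
}

# SECONDARY record: answered only when the primary is unhealthy
resource "aws_route53_record" "secondary" {
  zone_id        = var.zone_id
  name           = "app.example.com"
  type           = "A"
  set_identifier = "secondary"

  failover_routing_policy {
    type = "SECONDARY"
  }

  alias {
    name                   = module.alb_secondary.dns_name
    zone_id                = module.alb_secondary.zone_id
    evaluate_target_health = true
  }
}
&lt;/code&gt;&lt;/pre&gt;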

&lt;h2&gt;
  
  
  Multi-AZ vs Cross-Region Replication
&lt;/h2&gt;

&lt;p&gt;The database layer uses both &lt;strong&gt;Multi-AZ&lt;/strong&gt; and &lt;strong&gt;cross-region replication&lt;/strong&gt;, but they serve different purposes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-AZ
&lt;/h3&gt;

&lt;p&gt;Multi-AZ creates a synchronously replicated standby database in another Availability Zone within the same region. If the primary database instance fails, AWS automatically fails over to the standby.&lt;/p&gt;

&lt;p&gt;This protects against:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AZ failures&lt;/li&gt;
&lt;li&gt;hardware failures&lt;/li&gt;
&lt;li&gt;maintenance downtime&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cross-Region Read Replica
&lt;/h3&gt;

&lt;p&gt;Cross-region replication copies data asynchronously to another AWS region.&lt;/p&gt;

&lt;p&gt;This protects against:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;regional outages&lt;/li&gt;
&lt;li&gt;disaster recovery scenarios&lt;/li&gt;
&lt;li&gt;geographic redundancy needs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these strategies provide both &lt;strong&gt;high availability&lt;/strong&gt; and &lt;strong&gt;regional resilience&lt;/strong&gt;.&lt;/p&gt;
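
&lt;p&gt;In Terraform, the two strategies come down to two arguments on &lt;code&gt;aws_db_instance&lt;/code&gt;. The sketch below assumes a provider alias named &lt;code&gt;aws.secondary&lt;/code&gt; for the second region; identifiers, engine, and instance class are placeholders, and credentials, storage, and subnet groups are omitted.&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Primary region: Multi-AZ standby within the same region
resource "aws_db_instance" "primary" {
  identifier     = "app-db-primary"
  engine         = "mysql"
  instance_class = "db.t3.medium"
  multi_az       = true
  # ... credentials, storage, and subnet group omitted
}

# Secondary region: cross-region read replica,
# created from the primary's ARN
resource "aws_db_instance" "replica" {
  provider            = aws.secondary
  identifier          = "app-db-replica"
  instance_class      = "db.t3.medium"
  replicate_source_db = aws_db_instance.primary.arn
}
&lt;/code&gt;&lt;/pre&gt;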

&lt;h2&gt;
  
  
  Benefits of This Architecture
&lt;/h2&gt;

&lt;p&gt;This deployment provided several important benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High availability&lt;/strong&gt; through Multi-AZ EC2 and RDS deployment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disaster recovery&lt;/strong&gt; through cross-region redundancy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic failover&lt;/strong&gt; using Route53 health checks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; through Auto Scaling Groups&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reusability&lt;/strong&gt; through modular Terraform design&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency&lt;/strong&gt; through infrastructure as code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of manually configuring resources in AWS, Terraform made it possible to define the entire infrastructure in reusable modules and deploy it consistently across regions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This project was an excellent demonstration of how Terraform modules can be combined to build a &lt;strong&gt;production-style multi-region high availability architecture&lt;/strong&gt; on AWS.&lt;/p&gt;

&lt;p&gt;By separating the infrastructure into reusable modules and wiring them together with outputs and inputs, I created an environment that is scalable, fault tolerant, and easy to manage.&lt;/p&gt;

&lt;p&gt;The most valuable takeaway from this project was understanding how &lt;strong&gt;Route53 failover, Auto Scaling Groups, Multi-AZ RDS, and cross-region replicas&lt;/strong&gt; work together to provide resilience at every layer of the application stack.&lt;/p&gt;

&lt;p&gt;This is the kind of architecture that forms the foundation for real-world production systems where uptime and fault tolerance are critical.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Building a Scalable Web Application on AWS with EC2, ALB, and Auto Scaling using Terraform</title>
      <dc:creator>Stephanie Makori</dc:creator>
      <pubDate>Thu, 16 Apr 2026 16:54:34 +0000</pubDate>
      <link>https://forem.com/stephanie_makori_845bb2c0/building-a-scalable-web-application-on-aws-with-ec2-alb-and-auto-scaling-using-terraform-31h1</link>
      <guid>https://forem.com/stephanie_makori_845bb2c0/building-a-scalable-web-application-on-aws-with-ec2-alb-and-auto-scaling-using-terraform-31h1</guid>
      <description>&lt;p&gt;On Day 26 of the Terraform Challenge, I moved from deploying static infrastructure to building a scalable web application architecture on AWS using Terraform. This project brought together EC2 Launch Templates, an Application Load Balancer, an Auto Scaling Group, and CloudWatch alarms into a modular infrastructure design that can automatically respond to changes in demand.&lt;/p&gt;

&lt;p&gt;This was one of the most practical labs in the challenge because it demonstrated how multiple Terraform modules can work together to create a production-style environment where traffic is distributed across healthy instances and scaling decisions happen automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Architecture
&lt;/h2&gt;

&lt;p&gt;The infrastructure was split into three reusable Terraform modules:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;EC2 Module&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
This module created the launch template and security group for the web application instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ALB Module&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
This module provisioned the Application Load Balancer, listener, target group, and the ALB security group.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ASG Module&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
This module created the Auto Scaling Group, attached instances to the ALB target group, and configured CPU-based scaling policies with CloudWatch alarms.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By separating the resources into modules, the deployment stayed organized and reusable. Each module focused on one responsibility, and outputs from one module were passed as inputs to another.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Modular Design Matters
&lt;/h2&gt;

&lt;p&gt;Instead of placing all AWS resources in one Terraform file, I separated them into modules so that each part of the infrastructure could be reused independently.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;EC2 module&lt;/strong&gt; outputs the launch template ID&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;ALB module&lt;/strong&gt; outputs the target group ARN&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;ASG module&lt;/strong&gt; consumes both outputs to launch instances and attach them behind the load balancer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This modular structure keeps the environment configuration clean and makes it easier to maintain or expand the infrastructure later.&lt;/p&gt;
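
&lt;p&gt;In the root configuration this wiring is just attribute references between module calls. A simplified, hypothetical version:&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "ec2" {
  source = "./modules/ec2"
  ami_id = var.ami_id
}

module "alb" {
  source     = "./modules/alb"
  vpc_id     = var.vpc_id
  subnet_ids = var.subnet_ids
}

module "asg" {
  source             = "./modules/asg"
  launch_template_id = module.ec2.launch_template_id
  target_group_arn   = module.alb.target_group_arn
  subnet_ids         = var.subnet_ids
}
&lt;/code&gt;&lt;/pre&gt;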

&lt;p&gt;The &lt;code&gt;envs/dev&lt;/code&gt; configuration only needed to define environment-specific variables like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AMI ID&lt;/li&gt;
&lt;li&gt;desired capacity&lt;/li&gt;
&lt;li&gt;subnet IDs&lt;/li&gt;
&lt;li&gt;VPC ID&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All the infrastructure logic remained inside the modules.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment Workflow
&lt;/h2&gt;

&lt;p&gt;After building the modules, I deployed the infrastructure using the normal Terraform workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;terraform init&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;terraform validate&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;terraform plan&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;terraform apply&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Terraform created:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 Launch Template&lt;/li&gt;
&lt;li&gt;Application Load Balancer&lt;/li&gt;
&lt;li&gt;Target Group&lt;/li&gt;
&lt;li&gt;Auto Scaling Group&lt;/li&gt;
&lt;li&gt;CPU scale-out and scale-in policies&lt;/li&gt;
&lt;li&gt;CloudWatch alarms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After deployment, Terraform returned the ALB DNS endpoint, which served as the public URL for the application.&lt;/p&gt;

&lt;p&gt;When I opened the ALB URL in the browser, the application returned:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Deployed with Terraform — environment: dev&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This confirmed that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the EC2 instances launched successfully&lt;/li&gt;
&lt;li&gt;the load balancer was routing traffic correctly&lt;/li&gt;
&lt;li&gt;the Auto Scaling Group registered healthy targets&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Auto Scaling Works
&lt;/h2&gt;

&lt;p&gt;The Auto Scaling Group was configured with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;minimum capacity:&lt;/strong&gt; 1 instance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;desired capacity:&lt;/strong&gt; 2 instances&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;maximum capacity:&lt;/strong&gt; 4 instances&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CloudWatch alarms monitored average CPU utilization:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If average CPU reached &lt;strong&gt;70%&lt;/strong&gt;, a CloudWatch alarm triggered the &lt;strong&gt;scale-out policy&lt;/strong&gt; to add one instance&lt;/li&gt;
&lt;li&gt;If average CPU dropped below &lt;strong&gt;30%&lt;/strong&gt;, a CloudWatch alarm triggered the &lt;strong&gt;scale-in policy&lt;/strong&gt; to remove one instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This creates elasticity in the infrastructure, allowing the application to handle increased load while reducing costs during low usage.&lt;/p&gt;

&lt;p&gt;One important configuration in the Auto Scaling Group was:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;health_check_type = "ELB"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This setting ensures that scaling decisions are based on the Application Load Balancer health checks rather than only EC2 instance status.&lt;/p&gt;

&lt;p&gt;Without it, an EC2 instance could remain "healthy" from AWS's perspective even if the web server application had failed. Using ELB health checks ensures only working instances receive traffic.&lt;/p&gt;
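
&lt;p&gt;Putting the pieces together, here is a pared-down sketch of the ASG module's core resources. The names, cooldown, and alarm periods are illustrative; the scale-in policy and its alarm follow the same pattern with the signs reversed.&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_autoscaling_group" "web" {
  min_size            = 1
  desired_capacity    = 2
  max_size            = 4
  vpc_zone_identifier = var.subnet_ids
  target_group_arns   = [var.target_group_arn]
  health_check_type   = "ELB"  # use ALB health checks, not just EC2 status

  launch_template {
    id      = var.launch_template_id
    version = "$Latest"
  }
}

resource "aws_autoscaling_policy" "scale_out" {
  name                   = "cpu-scale-out"
  autoscaling_group_name = aws_autoscaling_group.web.name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = 1
  cooldown               = 300
}

resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "cpu-above-70"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  threshold           = 70
  evaluation_periods  = 2
  period              = 120
  alarm_actions       = [aws_autoscaling_policy.scale_out.arn]

  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.web.name
  }
}
&lt;/code&gt;&lt;/pre&gt;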

&lt;h2&gt;
  
  
  Benefits of This Architecture
&lt;/h2&gt;

&lt;p&gt;This infrastructure design provides several key advantages:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. High Availability&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Application Load Balancer distributes requests across multiple instances, reducing the risk of downtime.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Elastic Scaling&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Auto Scaling Group increases or decreases capacity based on CPU demand automatically.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Modular Reusability&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Each module can be reused in other environments such as staging or production.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Maintainability&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Because the modules are separated by function, updates can be made without affecting unrelated components.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;5. Cost Efficiency&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Scaling policies ensure resources are only added when needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;p&gt;Once the deployment was verified, I ran:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform destroy&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This removed the Auto Scaling Group, load balancer, target group, launch template, and CloudWatch alarms.&lt;/p&gt;

&lt;p&gt;Cleaning up after testing is important because EC2 instances and ALBs continue incurring charges if left running.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This project was a major step forward in understanding how scalable infrastructure is built on AWS with Terraform.&lt;/p&gt;

&lt;p&gt;It was not just about provisioning EC2 instances, but about connecting multiple services into a self-managing system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Launch Templates define compute&lt;/li&gt;
&lt;li&gt;ALB distributes traffic&lt;/li&gt;
&lt;li&gt;ASG manages instance count&lt;/li&gt;
&lt;li&gt;CloudWatch triggers scaling actions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The biggest lesson was understanding how Terraform modules can model real infrastructure relationships while keeping the code reusable and organized.&lt;/p&gt;

&lt;p&gt;This project felt like the first truly production-style deployment in the challenge, combining modularity, scalability, automation, and resilience into one infrastructure workflow.&lt;/p&gt;

&lt;p&gt;Day 26 showed how Infrastructure as Code moves beyond provisioning resources into designing systems that can adapt automatically to real demand.&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>automation</category>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Deploying a Static Website on AWS S3 with Terraform: A Beginner’s Guide</title>
      <dc:creator>Stephanie Makori</dc:creator>
      <pubDate>Wed, 15 Apr 2026 09:23:42 +0000</pubDate>
      <link>https://forem.com/stephanie_makori_845bb2c0/deploying-a-static-website-on-aws-s3-with-terraform-a-beginners-guide-koi</link>
      <guid>https://forem.com/stephanie_makori_845bb2c0/deploying-a-static-website-on-aws-s3-with-terraform-a-beginners-guide-koi</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this project, I deployed a fully functional static website using AWS S3 and CloudFront with Terraform. The goal was to apply everything learned throughout the Terraform challenge, including modular design, environment separation, remote state, and Infrastructure as Code best practices.&lt;/p&gt;

&lt;p&gt;This project represents a complete real-world workflow where infrastructure is defined, reviewed, deployed, and destroyed in a controlled and repeatable manner.&lt;/p&gt;




&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;p&gt;The architecture of the solution follows a simple but powerful flow:&lt;/p&gt;

&lt;p&gt;User requests website → CloudFront distributes content globally → S3 bucket serves static files&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Components
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;S3 bucket for static website hosting&lt;/li&gt;
&lt;li&gt;CloudFront distribution for global content delivery and HTTPS support&lt;/li&gt;
&lt;li&gt;Terraform module for reusable infrastructure design&lt;/li&gt;
&lt;li&gt;Environment-based configuration for dev deployment&lt;/li&gt;
&lt;li&gt;Remote backend for state management and locking&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;

&lt;p&gt;The project was organized using a modular approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A modules directory containing reusable infrastructure logic for the static website&lt;/li&gt;
&lt;li&gt;An envs directory containing environment-specific configurations&lt;/li&gt;
&lt;li&gt;A backend configuration for remote state storage&lt;/li&gt;
&lt;li&gt;A provider configuration for AWS setup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This structure ensures a clear separation between reusable infrastructure components and environment-specific deployment settings.&lt;/p&gt;
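
&lt;p&gt;The remote backend mentioned above is declared once per environment. A typical S3 backend with DynamoDB state locking looks like this; the bucket, key, and table names are placeholders.&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "static-website/dev/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"  # provides state locking
    encrypt        = true
  }
}
&lt;/code&gt;&lt;/pre&gt;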




&lt;h2&gt;
  
  
  Module Design Decisions
&lt;/h2&gt;

&lt;p&gt;The module was designed to encapsulate all infrastructure complexity related to the static website.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key design choices:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The bucket name is required with no default because it must be globally unique in AWS&lt;/li&gt;
&lt;li&gt;The environment variable is restricted to predefined values to prevent misconfiguration&lt;/li&gt;
&lt;li&gt;Tags are optional to allow flexibility while still supporting cost tracking and resource organization&lt;/li&gt;
&lt;li&gt;Default values are used for index and error documents because most static websites follow standard conventions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The module also centralizes tagging, security configuration, and CloudFront setup to ensure consistency across deployments.&lt;/p&gt;
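
&lt;p&gt;These design choices map directly onto the module's input variables. A hedged sketch of what they might look like (the allowed environment values are examples):&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "bucket_name" {
  description = "Globally unique S3 bucket name"
  type        = string
  # no default: the caller must supply a unique name
}

variable "environment" {
  description = "Deployment environment"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be one of: dev, staging, prod."
  }
}

variable "index_document" {
  type    = string
  default = "index.html"
}

variable "tags" {
  type    = map(string)
  default = {}
}
&lt;/code&gt;&lt;/pre&gt;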




&lt;h2&gt;
  
  
  Why Modules Were Used
&lt;/h2&gt;

&lt;p&gt;Modules were introduced to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Avoid duplication of infrastructure code&lt;/li&gt;
&lt;li&gt;Promote reusability across environments&lt;/li&gt;
&lt;li&gt;Improve maintainability of the codebase&lt;/li&gt;
&lt;li&gt;Enforce consistent architecture patterns&lt;/li&gt;
&lt;li&gt;Support scaling to multiple environments such as staging and production&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without modules, each environment would require repeated configuration, increasing the risk of inconsistency and errors.&lt;/p&gt;




&lt;h2&gt;
  
  
  Calling Configuration (Dev Environment)
&lt;/h2&gt;

&lt;p&gt;The dev environment configuration acts as a lightweight wrapper around the module.&lt;/p&gt;

&lt;p&gt;It only defines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The bucket name&lt;/li&gt;
&lt;li&gt;The environment type&lt;/li&gt;
&lt;li&gt;Any overrides for defaults&lt;/li&gt;
&lt;li&gt;Basic tagging information&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All infrastructure logic remains inside the module, keeping the environment configuration clean and easy to manage.&lt;/p&gt;
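
&lt;p&gt;Concretely, the dev wrapper reduces to a single module call with values like these (the names are examples only):&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "static_website" {
  source = "../../modules/static-website"

  bucket_name = "my-dev-static-site-bucket"
  environment = "dev"

  tags = {
    Project = "terraform-challenge"
  }
}
&lt;/code&gt;&lt;/pre&gt;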




&lt;h2&gt;
  
  
  Deployment Workflow
&lt;/h2&gt;

&lt;p&gt;The deployment followed a structured Terraform workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Initialization of the working directory and backend configuration&lt;/li&gt;
&lt;li&gt;Validation of configuration correctness&lt;/li&gt;
&lt;li&gt;Planning of infrastructure changes to preview modifications&lt;/li&gt;
&lt;li&gt;Application of the planned changes to create real AWS resources&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each step ensured that infrastructure changes were predictable and reviewable before being applied.&lt;/p&gt;




&lt;h2&gt;
  
  
  Deployment Output
&lt;/h2&gt;

&lt;p&gt;After successful execution, Terraform output provided a CloudFront distribution domain.&lt;/p&gt;

&lt;p&gt;This domain represents the live endpoint of the deployed static website, served globally through AWS edge locations.&lt;/p&gt;

&lt;p&gt;The deployment confirmed that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 bucket was created and configured correctly&lt;/li&gt;
&lt;li&gt;CloudFront distribution was provisioned successfully&lt;/li&gt;
&lt;li&gt;Static website content was accessible via HTTPS&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Live Website Confirmation
&lt;/h2&gt;

&lt;p&gt;The deployed website was successfully accessed using the CloudFront URL.&lt;/p&gt;

&lt;p&gt;The site displayed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A simple static HTML page&lt;/li&gt;
&lt;li&gt;A confirmation message indicating deployment via Terraform&lt;/li&gt;
&lt;li&gt;Dynamic values such as environment and bucket information&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This confirmed that both S3 and CloudFront were correctly integrated.&lt;/p&gt;




&lt;h2&gt;
  
  
  Cleanup Process
&lt;/h2&gt;

&lt;p&gt;After verification, all infrastructure was destroyed using &lt;code&gt;terraform destroy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This ensured:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No unnecessary AWS costs were incurred&lt;/li&gt;
&lt;li&gt;All resources were properly removed&lt;/li&gt;
&lt;li&gt;State was updated to reflect the destroyed infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This step reinforces the importance of infrastructure lifecycle management in real-world environments.&lt;/p&gt;




&lt;h2&gt;
  
  
  DRY Principle in Practice
&lt;/h2&gt;

&lt;p&gt;The DRY (Don’t Repeat Yourself) principle was applied through module usage.&lt;/p&gt;

&lt;p&gt;Instead of repeating S3 and CloudFront configurations across environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All logic was centralized in a single module&lt;/li&gt;
&lt;li&gt;Environments only supplied configuration values&lt;/li&gt;
&lt;li&gt;Infrastructure changes could be made in one place and applied everywhere&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This significantly reduces complexity and improves long-term maintainability.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Learnings
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;CloudFront requires propagation time before the site becomes fully available globally&lt;/li&gt;
&lt;li&gt;S3 static websites require careful configuration of public access policies&lt;/li&gt;
&lt;li&gt;Modules are essential for scalable infrastructure design&lt;/li&gt;
&lt;li&gt;Remote state improves collaboration and prevents configuration drift&lt;/li&gt;
&lt;li&gt;Terraform outputs are critical for validating successful deployments&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project demonstrates how Terraform transforms simple infrastructure into a scalable and maintainable system. A static website deployment becomes more than just hosting files; it becomes a fully automated, version-controlled, and reproducible cloud architecture.&lt;/p&gt;

&lt;p&gt;By combining S3, CloudFront, modules, and remote state, infrastructure is treated like software — predictable, reusable, and safe to evolve over time.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>My Final Preparation for the Terraform Associate Exam</title>
      <dc:creator>Stephanie Makori</dc:creator>
      <pubDate>Wed, 15 Apr 2026 05:43:19 +0000</pubDate>
      <link>https://forem.com/stephanie_makori_845bb2c0/my-final-preparation-for-the-terraform-associate-exam-ie8</link>
      <guid>https://forem.com/stephanie_makori_845bb2c0/my-final-preparation-for-the-terraform-associate-exam-ie8</guid>
      <description>&lt;p&gt;After 24 days of consistent hands-on practice and study, I shifted fully into exam preparation mode. This stage was not about learning new tools, but about refining precision and confidence under pressure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exam Simulation Results
&lt;/h2&gt;

&lt;p&gt;I completed a full 60-minute simulation with 57 questions and scored &lt;strong&gt;44 out of 57 (77%)&lt;/strong&gt;. This gave me a realistic view of my readiness. While the score is above the passing mark, it exposed specific weak areas that needed focused attention.&lt;/p&gt;

&lt;p&gt;The main gaps were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform CLI edge cases&lt;/li&gt;
&lt;li&gt;State management scenarios&lt;/li&gt;
&lt;li&gt;Terraform Cloud features&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most mistakes came from confusing similar commands and misunderstanding how state behaves in real situations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Focus Areas and Improvements
&lt;/h2&gt;

&lt;p&gt;I spent time drilling the highest-weight domains:&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform Basics
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Clear understanding of &lt;strong&gt;state as the source of truth&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Difference between &lt;strong&gt;resources and data sources&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Proper use of lifecycle rules like &lt;code&gt;prevent_destroy&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
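
&lt;p&gt;For example, &lt;code&gt;prevent_destroy&lt;/code&gt; is a lifecycle argument set on the resource itself (the bucket name here is a placeholder):&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "state" {
  bucket = "example-critical-bucket"

  lifecycle {
    # Any plan that would destroy this resource fails with an error
    prevent_destroy = true
  }
}
&lt;/code&gt;&lt;/pre&gt;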

&lt;h3&gt;
  
  
  Terraform CLI
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Knowing exactly what each command does in practice&lt;/li&gt;
&lt;li&gt;Understanding flags like &lt;code&gt;-target&lt;/code&gt; and &lt;code&gt;-auto-approve&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Distinguishing between &lt;code&gt;apply&lt;/code&gt;, &lt;code&gt;destroy&lt;/code&gt;, and &lt;code&gt;refresh-only&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Infrastructure as Code Concepts
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Idempotency ensures consistent results&lt;/li&gt;
&lt;li&gt;Declarative approach defines desired state&lt;/li&gt;
&lt;li&gt;Drift detection highlights real vs expected infrastructure differences&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Terraform Purpose
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Provider-agnostic design&lt;/li&gt;
&lt;li&gt;Workflow: &lt;strong&gt;Write → Plan → Apply&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Importance of state in tracking infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Common Exam Traps
&lt;/h2&gt;

&lt;p&gt;A few patterns stood out during practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;terraform state rm&lt;/code&gt; does not delete resources, only removes them from state&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sensitive = true&lt;/code&gt; hides output but still stores values in state&lt;/li&gt;
&lt;li&gt;Referencing module sources by branch instead of a version tag breaks reproducibility&lt;/li&gt;
&lt;/ul&gt;
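
&lt;p&gt;The &lt;code&gt;sensitive = true&lt;/code&gt; trap is worth seeing in code:&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "random_password" "db" {
  length = 16
}

output "db_password" {
  value     = random_password.db.result
  # Masked as (sensitive value) in CLI output,
  # but still stored in plain text in the state file.
  sensitive = true
}
&lt;/code&gt;&lt;/pre&gt;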

&lt;h2&gt;
  
  
  My Exam Strategy
&lt;/h2&gt;

&lt;p&gt;To stay efficient during the exam:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spend no more than &lt;strong&gt;60–90 seconds per question&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Flag difficult questions and revisit later&lt;/li&gt;
&lt;li&gt;Eliminate wrong answers first to improve accuracy&lt;/li&gt;
&lt;li&gt;Pay close attention to keywords like &lt;em&gt;state&lt;/em&gt;, &lt;em&gt;destroy&lt;/em&gt;, and &lt;em&gt;drift&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Follow instructions strictly for multi-select questions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This preparation phase helped me move from general understanding to precise execution. The biggest shift was learning to think in terms of Terraform’s behavior, not just memorizing commands.&lt;/p&gt;

&lt;p&gt;At this point, I am confident in both my knowledge and my approach. The goal is not just to pass the exam, but to truly understand how Terraform works in real-world scenarios.&lt;/p&gt;

&lt;p&gt;Next step: exam day.&lt;/p&gt;

</description>
      <category>career</category>
      <category>devops</category>
      <category>learning</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Preparing for the Terraform Associate Exam: Key Resources and Tips</title>
      <dc:creator>Stephanie Makori</dc:creator>
      <pubDate>Mon, 13 Apr 2026 05:16:19 +0000</pubDate>
      <link>https://forem.com/stephanie_makori_845bb2c0/preparing-for-the-terraform-associate-exam-key-resources-and-tips-45mc</link>
      <guid>https://forem.com/stephanie_makori_845bb2c0/preparing-for-the-terraform-associate-exam-key-resources-and-tips-45mc</guid>
<description>&lt;p&gt;As I moved deeper into Terraform exam preparation, I realized that success is not just about building infrastructure, but about understanding how Terraform behaves under different scenarios. Day 23 focused on auditing my knowledge against the official exam domains and identifying gaps early enough to fix them before the exam.&lt;/p&gt;

&lt;p&gt;The first step was reviewing all exam domains and honestly rating my confidence level. I found that I am strongest in core Terraform concepts, modules, and general workflow, which I use consistently in real projects. However, my weaker areas are Terraform CLI commands, state management, and Terraform Cloud internals. These are not difficult concepts, but they require deliberate hands-on repetition rather than passive reading.&lt;/p&gt;

&lt;p&gt;One of the most important parts of my preparation is the Terraform CLI. Commands like &lt;code&gt;plan&lt;/code&gt;, &lt;code&gt;apply&lt;/code&gt;, &lt;code&gt;init&lt;/code&gt;, &lt;code&gt;state mv&lt;/code&gt;, &lt;code&gt;state rm&lt;/code&gt;, and &lt;code&gt;import&lt;/code&gt; are heavily tested. The exam asks not only what they do, but also what happens to infrastructure when they are executed. This means understanding the difference between modifying state and modifying real resources is critical.&lt;/p&gt;
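&lt;p&gt;A sketch of that distinction, using hypothetical resource addresses:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Renames the address in state only; no real infrastructure changes
terraform state mv aws_instance.web aws_instance.web_server

# Forgets the resource in state; the real instance keeps running, unmanaged
terraform state rm aws_instance.web_server

# Brings an existing, manually created instance under management
terraform import aws_instance.web_server i-0abcd1234
&lt;/code&gt;&lt;/pre&gt;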

&lt;p&gt;I also reviewed non-cloud providers such as &lt;code&gt;random&lt;/code&gt; and &lt;code&gt;local&lt;/code&gt;. These are often overlooked but appear frequently in exam questions because they test understanding of Terraform beyond AWS or cloud infrastructure.&lt;/p&gt;
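&lt;p&gt;A minimal configuration exercising both providers might look like this (resource names are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;terraform {
  required_providers {
    random = { source = "hashicorp/random" }
    local  = { source = "hashicorp/local" }
  }
}

# Generates a two-word name on apply, with no cloud account involved
resource "random_pet" "name" {
  length = 2
}

# Writes a file on the machine running Terraform
resource "local_file" "greeting" {
  filename = "${path.module}/greeting.txt"
  content  = "Hello, ${random_pet.name.id}!"
}
&lt;/code&gt;&lt;/pre&gt;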

&lt;p&gt;Based on my audit, I created a structured study plan focusing on three key areas: CLI commands, state management, and Terraform Cloud features. Each topic has a clear practice method such as running commands in a test environment or writing out scenarios from memory.&lt;/p&gt;

&lt;p&gt;The most valuable takeaway from this stage is that Terraform mastery is not about memorizing syntax. It is about understanding the lifecycle of infrastructure, especially how state connects configuration to real-world resources.&lt;/p&gt;

&lt;p&gt;Moving forward, my focus is repetition, practice questions, and reinforcing weak areas until they become automatic.&lt;/p&gt;

</description>
      <category>career</category>
      <category>devops</category>
      <category>learning</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Putting It All Together: Application and Infrastructure Workflows with Terraform</title>
      <dc:creator>Stephanie Makori</dc:creator>
      <pubDate>Mon, 13 Apr 2026 04:50:48 +0000</pubDate>
      <link>https://forem.com/stephanie_makori_845bb2c0/putting-it-all-together-application-and-infrastructure-workflows-with-terraform-3c0e</link>
      <guid>https://forem.com/stephanie_makori_845bb2c0/putting-it-all-together-application-and-infrastructure-workflows-with-terraform-3c0e</guid>
      <description>&lt;p&gt;Over the past three weeks, I have moved from writing basic Terraform code to building a complete, production-ready workflow that combines application and infrastructure deployment into one unified system.&lt;/p&gt;

&lt;p&gt;The biggest takeaway is that &lt;strong&gt;Infrastructure as Code must follow the same discipline as software engineering&lt;/strong&gt;. Version control, testing, code reviews, and CI/CD pipelines are not optional. They are essential for safe and scalable infrastructure.&lt;/p&gt;

&lt;p&gt;In this final stage, I built an &lt;strong&gt;integrated CI pipeline&lt;/strong&gt; using GitHub Actions. Every pull request triggers formatting checks, validation, and a Terraform plan. That plan is saved as an immutable artifact, ensuring that what gets reviewed is exactly what gets applied. This removes uncertainty and prevents unexpected changes during deployment.&lt;/p&gt;
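&lt;p&gt;The core of that pattern is Terraform's &lt;code&gt;-out&lt;/code&gt; flag, roughly:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# In the pull request job: save the plan and upload it as a build artifact
terraform plan -out=tfplan.binary

# In the apply job: apply exactly the plan that was reviewed
terraform apply tfplan.binary
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If the configuration or state has drifted since the plan was saved, the apply fails instead of silently doing something different.&lt;/p&gt;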

&lt;p&gt;I also implemented &lt;strong&gt;Sentinel policies&lt;/strong&gt; in Terraform Cloud to enforce rules across all deployments. Restricting instance types prevents costly mistakes, while mandatory tagging ensures every resource is traceable and properly managed. These policies act as guardrails, allowing teams to move quickly without compromising safety.&lt;/p&gt;

&lt;p&gt;Another key addition is the &lt;strong&gt;cost estimation gate&lt;/strong&gt;. Terraform Cloud calculates the expected monthly cost before deployment and blocks changes that exceed a defined threshold. This introduces financial accountability directly into the workflow.&lt;/p&gt;

&lt;p&gt;What makes this approach powerful is the concept of &lt;strong&gt;immutable infrastructure promotion&lt;/strong&gt;. Instead of rebuilding environments differently, the same reviewed Terraform plan is promoted across environments. This ensures consistency, reduces drift, and aligns infrastructure workflows with modern application deployment practices.&lt;/p&gt;

&lt;p&gt;Reflecting on this journey, the most important shift for me was thinking of infrastructure as a &lt;strong&gt;controlled, versioned system&lt;/strong&gt; rather than manual configuration. This mindset is what enables teams to scale safely and confidently.&lt;/p&gt;

&lt;p&gt;This is no longer just about writing Terraform. It is about building reliable systems.&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>devops</category>
      <category>github</category>
      <category>terraform</category>
    </item>
    <item>
      <title>A Workflow for Deploying Infrastructure Code with Terraform</title>
      <dc:creator>Stephanie Makori</dc:creator>
      <pubDate>Sun, 12 Apr 2026 16:37:40 +0000</pubDate>
      <link>https://forem.com/stephanie_makori_845bb2c0/a-workflow-for-deploying-infrastructure-code-with-terraform-6f3</link>
      <guid>https://forem.com/stephanie_makori_845bb2c0/a-workflow-for-deploying-infrastructure-code-with-terraform-6f3</guid>
<description>&lt;p&gt;Yesterday, I mapped the standard application deployment workflow. Today, I applied the same seven-step process to infrastructure code, and the difference is clear: deploying infrastructure is not just riskier; it demands stricter discipline.&lt;/p&gt;

&lt;p&gt;I implemented a real change by adding a CloudWatch CPU alarm to a webserver instance using Terraform.&lt;/p&gt;
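&lt;p&gt;A sketch of that change, assuming a hypothetical &lt;code&gt;aws_instance.webserver&lt;/code&gt; resource:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Alarm when average CPU stays above 80% for two 5-minute periods
resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "webserver-cpu-high"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = 300
  statistic           = "Average"
  threshold           = 80

  dimensions = {
    InstanceId = aws_instance.webserver.id
  }
}
&lt;/code&gt;&lt;/pre&gt;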

&lt;p&gt;The workflow began with &lt;strong&gt;version control&lt;/strong&gt;, enforcing protected branches and pull request reviews. This is familiar from application development, but far more critical when changes can affect live infrastructure.&lt;/p&gt;

&lt;p&gt;Next, I ran &lt;code&gt;terraform plan&lt;/code&gt; locally. This is where the workflows begin to diverge. Instead of running code, Terraform generates a &lt;strong&gt;diff against the current state&lt;/strong&gt;, showing exactly what will change. This step is non-negotiable because even a small misconfiguration can have a large impact.&lt;/p&gt;

&lt;p&gt;After creating a feature branch and committing the change, I opened a pull request and included the full plan output. Unlike application code reviews, this is not just about logic. Reviewers must evaluate &lt;strong&gt;cost, security, and potential blast radius&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Once approved, the change was merged and tagged. Deployment was then executed using a &lt;strong&gt;saved plan file&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform apply day21.tfplan&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This ensures that what was reviewed is exactly what gets applied, eliminating drift between plan and execution.&lt;/p&gt;
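&lt;p&gt;For completeness, that plan file is produced earlier in the workflow with the &lt;code&gt;-out&lt;/code&gt; flag:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Capture the exact plan for review and later application
terraform plan -out=day21.tfplan
&lt;/code&gt;&lt;/pre&gt;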

&lt;p&gt;To make the workflow safe, I implemented key safeguards: plan file pinning, blast radius documentation, and approval gates for changes. I also explored Sentinel policies, which act as a guardrail by enforcing rules before any deployment can proceed.&lt;/p&gt;

&lt;p&gt;The biggest lesson is this: infrastructure deployments are not forgiving. Application failures can often be rolled back quickly. Infrastructure mistakes can cascade across systems.&lt;/p&gt;

&lt;p&gt;Infrastructure as Code only works at scale when it is treated with the same rigor as software engineering, plus an extra layer of caution.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>A Workflow for Deploying Application Code with Terraform</title>
      <dc:creator>Stephanie Makori</dc:creator>
      <pubDate>Sun, 12 Apr 2026 11:18:12 +0000</pubDate>
      <link>https://forem.com/stephanie_makori_845bb2c0/a-workflow-for-deploying-application-code-with-terraform-10i3</link>
      <guid>https://forem.com/stephanie_makori_845bb2c0/a-workflow-for-deploying-application-code-with-terraform-10i3</guid>
<description>&lt;p&gt;Modern Infrastructure as Code becomes truly effective when it follows the same disciplined workflows used in application development. In this exercise, I mapped the standard seven-step software delivery pipeline to a Terraform-based infrastructure workflow.&lt;/p&gt;

&lt;p&gt;The process begins with &lt;strong&gt;version control&lt;/strong&gt;, where all infrastructure code is stored in Git with a protected main branch. This ensures that no changes are applied directly without review, maintaining consistency and control across the system.&lt;/p&gt;

&lt;p&gt;Next, I worked &lt;strong&gt;locally&lt;/strong&gt; by modifying the application user data script to update the deployed web response to version 3. I validated this change using &lt;code&gt;terraform plan&lt;/code&gt;, which provides a safe preview of all infrastructure modifications before execution.&lt;/p&gt;
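&lt;p&gt;The edit itself is small. A sketch, assuming the instance serves a static page via busybox (script details are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#!/bin/bash
# user-data.sh: bump the served response to version 3
echo "Hello, World v3" &amp;gt; index.html
nohup busybox httpd -f -p 8080 &amp;amp;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because user data changes typically force instance replacement, the &lt;code&gt;terraform plan&lt;/code&gt; preview is what confirms the blast radius before anything runs.&lt;/p&gt;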

&lt;p&gt;A feature branch was created to isolate the change, and standard Git practices were followed for commit and push operations. This ensures traceability and clean collaboration.&lt;/p&gt;

&lt;p&gt;During the &lt;strong&gt;review stage&lt;/strong&gt;, a pull request was opened and the Terraform plan output was attached. This allows reviewers to assess infrastructure impact without needing to execute Terraform, improving both safety and transparency.&lt;/p&gt;

&lt;p&gt;Automated validation was handled through CI pipelines using GitHub Actions, ensuring that formatting and configuration checks passed before merging.&lt;/p&gt;

&lt;p&gt;Once approved, the change was merged into the main branch and tagged for version tracking. Deployment was then executed using &lt;code&gt;terraform apply&lt;/code&gt;, and the updated application was verified through the browser.&lt;/p&gt;

&lt;p&gt;Terraform Cloud enhanced this workflow by introducing remote state management, secure variable storage, and detailed audit logs. The private registry further enables reusable, versioned infrastructure modules across teams.&lt;/p&gt;

&lt;p&gt;The key insight from this exercise is that Infrastructure as Code must follow structured engineering workflows. When teams skip planning, review, or automation, they introduce unnecessary risk, drift, and instability.&lt;/p&gt;

&lt;p&gt;When properly implemented, Terraform transforms infrastructure into a predictable, versioned, and collaborative engineering system aligned with modern software delivery practices.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>devops</category>
      <category>terraform</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Day 19 - Adopting Infrastructure as Code in Practice</title>
      <dc:creator>Stephanie Makori</dc:creator>
      <pubDate>Sun, 12 Apr 2026 10:30:25 +0000</pubDate>
      <link>https://forem.com/stephanie_makori_845bb2c0/day-19-adopting-infrastructure-as-code-in-practice-4ekd</link>
      <guid>https://forem.com/stephanie_makori_845bb2c0/day-19-adopting-infrastructure-as-code-in-practice-4ekd</guid>
      <description>&lt;p&gt;Infrastructure as Code (IaC) is often presented as a technical skill, but in real environments, it is primarily an &lt;strong&gt;adoption and culture problem&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Today focused on understanding how teams transition from manual infrastructure management to a fully version-controlled and automated approach using Terraform.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I worked on
&lt;/h2&gt;

&lt;p&gt;I started by setting up a secure Terraform backend using an S3 bucket and DynamoDB for state locking. This ensured safe remote state management and collaboration.&lt;/p&gt;
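&lt;p&gt;The backend wiring is only a few lines of configuration (bucket and table names here are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # enables state locking
    encrypt        = true
  }
}
&lt;/code&gt;&lt;/pre&gt;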

&lt;p&gt;Next, I provisioned infrastructure using Terraform by creating an S3 bucket. This reinforced the standard workflow of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing configuration&lt;/li&gt;
&lt;li&gt;Running &lt;code&gt;terraform plan&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Applying infrastructure changes safely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also practiced importing existing infrastructure using &lt;code&gt;terraform import&lt;/code&gt;, which demonstrated how real-world systems can be gradually brought under Terraform management without disruption.&lt;/p&gt;
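&lt;p&gt;The import flow, with hypothetical names: write a matching resource block first, then map the real object onto it:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# main.tf must already contain a matching block, e.g.:
#   resource "aws_s3_bucket" "legacy" {}
terraform import aws_s3_bucket.legacy my-existing-bucket
&lt;/code&gt;&lt;/pre&gt;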

&lt;p&gt;After importing, I verified the state using:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform plan&lt;/code&gt; and &lt;code&gt;terraform show&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This confirmed that Terraform correctly recognized the existing infrastructure.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key takeaway
&lt;/h2&gt;

&lt;p&gt;The biggest challenge in Infrastructure as Code adoption is not technical — it is &lt;strong&gt;organizational and cultural&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Common blockers include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lack of trust in automation&lt;/li&gt;
&lt;li&gt;Dependence on manual cloud operations&lt;/li&gt;
&lt;li&gt;Resistance to workflow change&lt;/li&gt;
&lt;li&gt;Unclear migration strategy&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Adoption strategy that works
&lt;/h2&gt;

&lt;p&gt;A successful IaC rollout should be incremental:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start with new infrastructure
&lt;/li&gt;
&lt;li&gt;Gradually import existing resources
&lt;/li&gt;
&lt;li&gt;Establish team standards (PR reviews, CI checks, version control)
&lt;/li&gt;
&lt;li&gt;Introduce automation last, after trust is established
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Final thought
&lt;/h2&gt;

&lt;p&gt;Infrastructure as Code is not just about automation tools like Terraform.&lt;/p&gt;

&lt;p&gt;It is about building &lt;strong&gt;repeatable, reliable, and collaborative infrastructure systems&lt;/strong&gt;.&lt;/p&gt;




</description>
      <category>automation</category>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Automating Terraform Testing: From Unit Tests to End-to-End Validation</title>
      <dc:creator>Stephanie Makori</dc:creator>
      <pubDate>Tue, 07 Apr 2026 11:03:15 +0000</pubDate>
      <link>https://forem.com/stephanie_makori_845bb2c0/automating-terraform-testing-from-unit-tests-to-end-to-end-validation-51mk</link>
      <guid>https://forem.com/stephanie_makori_845bb2c0/automating-terraform-testing-from-unit-tests-to-end-to-end-validation-51mk</guid>
      <description>&lt;p&gt;Infrastructure as code (IaC) is powerful, but deploying untested changes can be risky. On Day 18 of my 30-Day Terraform Challenge, I focused on automating testing for Terraform code, covering unit tests, integration tests, and end-to-end tests, all tied together in a CI/CD pipeline.&lt;/p&gt;




&lt;h2&gt;
  
  
  Unit Tests
&lt;/h2&gt;

&lt;p&gt;Unit tests are fast, cheap, and safe because they test your module plan only—no real resources are created. Each unit test ensures your resources are configured correctly, such as validating cluster names, instance types, and open ports. These tests catch configuration errors and bad variables before anything reaches production.&lt;/p&gt;

&lt;p&gt;Unit tests run on pull requests, giving developers &lt;strong&gt;fast feedback&lt;/strong&gt; and confidence that changes won’t break the module.&lt;/p&gt;
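&lt;p&gt;A minimal plan-only test in Terraform's native test framework might look like this (resource names and values are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# tests/unit.tftest.hcl
run "instance_type_is_expected" {
  command = plan # evaluate the plan only; nothing is deployed

  assert {
    condition     = aws_instance.web.instance_type == "t2.micro"
    error_message = "web instance should default to t2.micro"
  }
}
&lt;/code&gt;&lt;/pre&gt;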




&lt;h2&gt;
  
  
  Integration Tests
&lt;/h2&gt;

&lt;p&gt;Integration tests deploy real infrastructure, assert behavior, then destroy everything. They check how modules interact with actual cloud resources, like verifying that the application load balancer responds correctly and that EC2 instances are running as expected.&lt;/p&gt;

&lt;p&gt;Integration tests run only on pushes to the main branch, because they are slower and use real AWS resources. Deferring the destroy step (in Terratest, a Go &lt;code&gt;defer terraform.Destroy(t, terraformOptions)&lt;/code&gt; call) ensures all resources are cleaned up even when a test fails, preventing cost leaks.&lt;/p&gt;




&lt;h2&gt;
  
  
  End-to-End Tests
&lt;/h2&gt;

&lt;p&gt;End-to-end (E2E) tests validate the entire stack, from networking and databases to applications. They ensure that the full system works as a whole. E2E tests are &lt;strong&gt;slower and more expensive&lt;/strong&gt;, so they are run less frequently.&lt;/p&gt;




&lt;h2&gt;
  
  
  CI/CD Test Strategy
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Unit tests run on pull requests (fast, free)
&lt;/li&gt;
&lt;li&gt;Integration tests run only on push to main (slower, real AWS)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Test Type&lt;/th&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Deploys Real Infra&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;th&gt;What It Catches&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Unit&lt;/td&gt;
&lt;td&gt;terraform test&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Seconds&lt;/td&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;Config errors, bad variables&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Integration&lt;/td&gt;
&lt;td&gt;Terratest&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Minutes&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Resource behavior&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;End-to-End&lt;/td&gt;
&lt;td&gt;Terratest&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;15–30 min&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Full system issues&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Integration vs End-to-End: Integration tests focus on a module in isolation, while E2E tests validate the full stack.
&lt;/li&gt;
&lt;li&gt;Unit tests on PRs → fast feedback
&lt;/li&gt;
&lt;li&gt;E2E tests less frequent → expensive &amp;amp; slower&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Challenges &amp;amp; Fixes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Missing required variables → added dummy values for unit tests
&lt;/li&gt;
&lt;li&gt;Go module errors → used &lt;code&gt;go mod tidy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Terraform syntax mistakes → corrected &lt;code&gt;.tftest.hcl&lt;/code&gt; content
&lt;/li&gt;
&lt;li&gt;Application Load Balancer slow startup → added retry logic
&lt;/li&gt;
&lt;li&gt;AWS credentials setup → properly configured GitHub secrets
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Apply Screenshot
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwintkw1v12es4lrz1rxq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwintkw1v12es4lrz1rxq.png" alt=" " width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Automated testing with Terraform ensures infrastructure deploys reliably and safely. Combining unit, integration, and E2E tests gives full confidence while minimizing cost and risk. With CI/CD, every commit is validated, enabling rapid and safe iteration.&lt;/p&gt;




&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Terraform Test Documentation
&lt;/li&gt;
&lt;li&gt;Terratest Documentation
&lt;/li&gt;
&lt;li&gt;GitHub Actions Terraform Setup
&lt;/li&gt;
&lt;li&gt;Go Testing Package
&lt;/li&gt;
&lt;li&gt;AWS Documentation&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>automation</category>
      <category>devops</category>
      <category>terraform</category>
      <category>testing</category>
    </item>
    <item>
      <title>The Importance of Manual Testing in Terraform</title>
      <dc:creator>Stephanie Makori</dc:creator>
      <pubDate>Mon, 06 Apr 2026 07:20:34 +0000</pubDate>
      <link>https://forem.com/stephanie_makori_845bb2c0/the-importance-of-manual-testing-in-terraform-pn6</link>
      <guid>https://forem.com/stephanie_makori_845bb2c0/the-importance-of-manual-testing-in-terraform-pn6</guid>
      <description>&lt;p&gt;Manual testing is often overlooked in infrastructure as code workflows, especially with powerful tools like Terraform. However, before introducing automated tests, manual testing is essential to fully understand how your infrastructure behaves in real-world conditions.&lt;/p&gt;

&lt;p&gt;On Day 17 of my 30-Day Terraform Challenge, I focused on building a structured manual testing process for my webserver cluster (Application Load Balancer + Auto Scaling Group + EC2 instances). This experience reinforced one key idea: &lt;strong&gt;you cannot automate what you do not understand.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Manual Testing Matters
&lt;/h2&gt;

&lt;p&gt;Manual testing helps answer critical questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does the infrastructure deploy correctly?&lt;/li&gt;
&lt;li&gt;Does it behave as expected under real conditions?&lt;/li&gt;
&lt;li&gt;Are there hidden misconfigurations that validation tools miss?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While &lt;code&gt;terraform validate&lt;/code&gt; and &lt;code&gt;terraform plan&lt;/code&gt; ensure correctness at a configuration level, they do not guarantee real-world functionality. Manual testing bridges that gap.&lt;/p&gt;




&lt;h2&gt;
  
  
  Building a Structured Test Checklist
&lt;/h2&gt;

&lt;p&gt;Instead of randomly clicking around the AWS Console, I created a structured checklist to guide my testing process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Provisioning Verification
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;terraform init&lt;/code&gt; and confirm initialization completes successfully
&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;terraform validate&lt;/code&gt; and ensure configuration is valid
&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;terraform plan&lt;/code&gt; and verify expected resources
&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;terraform apply&lt;/code&gt; and confirm successful provisioning
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Resource Correctness
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Verify resources exist in AWS Console (EC2, ALB, Target Groups)
&lt;/li&gt;
&lt;li&gt;Confirm names, tags, and region match configuration
&lt;/li&gt;
&lt;li&gt;Ensure security group rules are correctly applied
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Functional Verification
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Retrieve ALB DNS using &lt;code&gt;terraform output&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;curl http://&amp;lt;alb-dns&amp;gt;&lt;/code&gt; and verify response
&lt;/li&gt;
&lt;li&gt;Confirm all instances pass health checks
&lt;/li&gt;
&lt;li&gt;Terminate one instance and verify ASG replaces it
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  State Consistency
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;terraform plan&lt;/code&gt; after apply → expect “No changes”
&lt;/li&gt;
&lt;li&gt;Confirm Terraform state matches actual infrastructure
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Regression Check
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Make a small configuration change (e.g., add a tag)
&lt;/li&gt;
&lt;li&gt;Ensure only intended changes appear in &lt;code&gt;terraform plan&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Apply and verify the plan is clean afterward
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Test Execution: What Worked and What Didn’t
&lt;/h2&gt;

&lt;p&gt;Running the checklist revealed both successes and failures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Successful Tests
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Terraform Initialization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform init&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Result: PASS  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform Apply&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform apply -auto-approve&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Result: PASS&lt;br&gt;&lt;br&gt;
All resources were successfully created (see screenshots).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ALB Functional Test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl http://my-app-alb-123456.us-east-1.elb.amazonaws.com&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Result: PASS&lt;br&gt;&lt;br&gt;
Returned: &lt;code&gt;"Hello World v1"&lt;/code&gt; (confirmed via browser)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Auto Scaling Self-Healing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws ec2 terminate-instances --instance-ids i-xxxx&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Result: PASS&lt;br&gt;&lt;br&gt;
A replacement instance was automatically launched.&lt;/p&gt;




&lt;h3&gt;
  
  
  Failure and Fix
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Test: Terraform Plan Consistency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform plan&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Expected: No changes
&lt;/li&gt;
&lt;li&gt;Actual: 1 resource change detected
&lt;/li&gt;
&lt;li&gt;Result: FAIL
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Root Cause:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A missing tag in the security group configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Added the missing tag in the Terraform code and re-applied:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform apply&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retest Result:&lt;/strong&gt; PASS  &lt;/p&gt;

&lt;p&gt;This failure highlighted how manual testing uncovers real issues that static validation cannot.&lt;/p&gt;




&lt;h2&gt;
  
  
  Testing Across Environments
&lt;/h2&gt;

&lt;p&gt;I ran the same tests in both development and production environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Differences:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Dev used &lt;code&gt;t2.micro&lt;/code&gt;, production used &lt;code&gt;t3.medium&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Production had stricter security group rules
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Unexpected Issue:
&lt;/h3&gt;

&lt;p&gt;The application initially failed in production because HTTP (port 80) was blocked.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Updated the security group to allow inbound HTTP traffic.&lt;/p&gt;

&lt;p&gt;This demonstrated a common real-world problem: &lt;strong&gt;something works in dev but fails in production due to configuration differences.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Importance of Cleanup
&lt;/h2&gt;

&lt;p&gt;After testing, I destroyed all resources to avoid unnecessary costs.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform destroy -auto-approve&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;aws ec2 describe-instances --filters "Name=tag:ManagedBy,Values=terraform"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Output: &lt;code&gt;[]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws elbv2 describe-load-balancers&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Output: &lt;code&gt;[]&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Cleanup Matters
&lt;/h3&gt;

&lt;p&gt;Cleaning up sounds simple, but it is often where engineers fail. Terraform may partially destroy resources, leaving orphaned infrastructure behind.&lt;/p&gt;

&lt;p&gt;If cleanup is ignored:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS costs can accumulate quickly
&lt;/li&gt;
&lt;li&gt;Old resources can interfere with future tests
&lt;/li&gt;
&lt;li&gt;Infrastructure drift becomes harder to manage
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Lessons from Terraform Import
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;terraform import&lt;/code&gt; lab introduced a critical concept: bringing existing infrastructure under Terraform management.&lt;/p&gt;

&lt;h3&gt;
  
  
  What It Solves
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Allows Terraform to manage manually created resources
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What It Does NOT Solve
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;It does not generate Terraform configuration
&lt;/li&gt;
&lt;li&gt;You must manually write &lt;code&gt;.tf&lt;/code&gt; files
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This reinforces that Terraform is not just a tool — it requires discipline and understanding.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Challenges and Fixes
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Issue&lt;/th&gt;
&lt;th&gt;Root Cause&lt;/th&gt;
&lt;th&gt;Fix&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Missing tag&lt;/td&gt;
&lt;td&gt;Not defined in config&lt;/td&gt;
&lt;td&gt;Added tag block&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ALB not accessible&lt;/td&gt;
&lt;td&gt;Port 80 blocked&lt;/td&gt;
&lt;td&gt;Updated security group&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Plan inconsistency&lt;/td&gt;
&lt;td&gt;Config drift&lt;/td&gt;
&lt;td&gt;Re-applied configuration&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Manual testing is not optional — it is the foundation of reliable infrastructure.&lt;/p&gt;

&lt;p&gt;It helps you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand your system deeply
&lt;/li&gt;
&lt;li&gt;Catch real-world failures early
&lt;/li&gt;
&lt;li&gt;Build confidence before automation
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every failure discovered during manual testing becomes a future automated test case.&lt;/p&gt;

&lt;p&gt;As I move forward in this challenge, the next step is clear: &lt;strong&gt;turn these manual checks into automated tests.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Day 17 Summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Built a structured manual testing checklist
&lt;/li&gt;
&lt;li&gt;Tested both dev and production environments
&lt;/li&gt;
&lt;li&gt;Identified and fixed real infrastructure issues
&lt;/li&gt;
&lt;li&gt;Practiced strict cleanup discipline
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Manual testing isn’t just a step — it’s a mindset.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
      <category>testing</category>
    </item>
    <item>
      <title>Refactoring Terraform Toward Production-Grade Standards</title>
      <dc:creator>Stephanie Makori</dc:creator>
      <pubDate>Tue, 31 Mar 2026 13:00:58 +0000</pubDate>
      <link>https://forem.com/stephanie_makori_845bb2c0/refactoring-terraform-toward-production-grade-standards-il</link>
      <guid>https://forem.com/stephanie_makori_845bb2c0/refactoring-terraform-toward-production-grade-standards-il</guid>
      <description>&lt;p&gt;Day 16 of my &lt;strong&gt;30-Day Terraform Challenge&lt;/strong&gt; was all about improving infrastructure quality rather than simply adding more resources.&lt;/p&gt;

&lt;p&gt;Today I took an existing Terraform setup and refactored it to make it more &lt;strong&gt;production-ready&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I improved
&lt;/h2&gt;

&lt;p&gt;I focused on several key areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reusable module structure&lt;/li&gt;
&lt;li&gt;consistent tagging&lt;/li&gt;
&lt;li&gt;lifecycle protection&lt;/li&gt;
&lt;li&gt;input validation&lt;/li&gt;
&lt;li&gt;CloudWatch monitoring&lt;/li&gt;
&lt;li&gt;basic automated testing with Terratest&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Biggest Refactors&lt;/h2&gt;

&lt;p&gt;One of the most useful improvements was introducing a shared &lt;code&gt;common_tags&lt;/code&gt; block so I could apply consistent metadata across resources without repeating the same tag definitions everywhere.&lt;/p&gt;
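&lt;p&gt;Roughly, the pattern looks like this (the variable and resource names here are illustrative placeholders, not the exact ones from my modules):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;locals {
  common_tags = {
    Project     = var.project_name
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}

resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = var.instance_type

  # merge() lets each resource layer its own tags on top of the shared set
  tags = merge(local.common_tags, { Name = "web-server" })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because the shared tags live in one &lt;code&gt;locals&lt;/code&gt; block, changing a tag later means editing one place instead of every resource.&lt;/p&gt;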

&lt;p&gt;I also added lifecycle rules like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;create_before_destroy&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;prevent_destroy&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are small changes in code, but they make a huge difference in real environments where accidental deletion or downtime can be expensive.&lt;/p&gt;
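&lt;p&gt;As a sketch (resource names are placeholders), the two rules look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Stateful resources: refuse to destroy, even on a careless apply
resource "aws_db_instance" "main" {
  # ...

  lifecycle {
    prevent_destroy = true
  }
}

# Replaceable resources: bring the new one up before tearing the old one down
resource "aws_launch_template" "web" {
  # ...

  lifecycle {
    create_before_destroy = true
  }
}
&lt;/code&gt;&lt;/pre&gt;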

&lt;h2&gt;Monitoring and Validation&lt;/h2&gt;

&lt;p&gt;I added a &lt;strong&gt;CloudWatch CPU alarm&lt;/strong&gt; and input validation rules to make the infrastructure safer and easier to operate.&lt;/p&gt;
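&lt;p&gt;A simplified sketch of both (the allowed instance types, alarm name, and 80% threshold are illustrative, not my exact values):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;variable "instance_type" {
  type = string

  # Fail at plan time instead of discovering a bad value at apply time
  validation {
    condition     = contains(["t3.micro", "t3.small"], var.instance_type)
    error_message = "instance_type must be t3.micro or t3.small."
  }
}

resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "web-cpu-high"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 2
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80
}
&lt;/code&gt;&lt;/pre&gt;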

&lt;p&gt;That helped shift my thinking from:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Will this deploy?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;to:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Will this still be safe, maintainable, and observable later?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Real Challenge I Hit&lt;/h2&gt;

&lt;p&gt;The most realistic issue today was with &lt;strong&gt;ALB access logging&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Terraform failed because the Application Load Balancer didn’t have permission to write logs to my S3 bucket. I had to fix that by adding the correct bucket policy.&lt;/p&gt;
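&lt;p&gt;The fix, roughly, was a bucket policy granting the regional ELB service account write access to the log prefix. A sketch with placeholder names (note that regions launched after August 2022 use the &lt;code&gt;logdelivery.elasticloadbalancing.amazonaws.com&lt;/code&gt; service principal instead of the service account ARN):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Looks up the ELB service account for the current region
data "aws_elb_service_account" "main" {}

resource "aws_s3_bucket_policy" "alb_logs" {
  bucket = aws_s3_bucket.alb_logs.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = data.aws_elb_service_account.main.arn }
      Action    = "s3:PutObject"
      Resource  = "${aws_s3_bucket.alb_logs.arn}/*"
    }]
  })
}
&lt;/code&gt;&lt;/pre&gt;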

&lt;p&gt;That was a great reminder that “working Terraform” and “production-grade Terraform” are not the same thing.&lt;/p&gt;

&lt;h2&gt;Key Takeaway&lt;/h2&gt;

&lt;p&gt;Today showed me that strong infrastructure is not just about provisioning resources; it is about designing for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;safety&lt;/li&gt;
&lt;li&gt;maintainability&lt;/li&gt;
&lt;li&gt;observability&lt;/li&gt;
&lt;li&gt;operational reliability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;#Terraform #IaC #AWS #DevOps #CloudComputing #30DayTerraformChallenge #TerraformChallenge&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
