<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Filbert Nana Blessing</title>
    <description>The latest articles on Forem by Filbert Nana Blessing (@filbert_nanablessing_1ae).</description>
    <link>https://forem.com/filbert_nanablessing_1ae</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3093000%2Fa2ffcabe-7447-4b05-b809-0b0dee9fbc35.png</url>
      <title>Forem: Filbert Nana Blessing</title>
      <link>https://forem.com/filbert_nanablessing_1ae</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/filbert_nanablessing_1ae"/>
    <language>en</language>
    <item>
      <title>How I Built a Production-Grade DevOps Project From Scratch</title>
      <dc:creator>Filbert Nana Blessing</dc:creator>
      <pubDate>Sat, 14 Mar 2026 12:49:18 +0000</pubDate>
      <link>https://forem.com/filbert_nanablessing_1ae/how-i-built-a-production-grade-devops-project-from-scratch-obg</link>
      <guid>https://forem.com/filbert_nanablessing_1ae/how-i-built-a-production-grade-devops-project-from-scratch-obg</guid>
      <description>&lt;p&gt;&lt;em&gt;A walkthrough of building a real CI/CD pipeline, AWS infrastructure, and containerised app — the way it's actually done in production.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why I Built This
&lt;/h2&gt;

&lt;p&gt;Most DevOps tutorials show you how to deploy a "Hello World" app to a single EC2 instance with hardcoded AWS keys. That's fine for learning the basics, but it doesn't reflect what production engineering actually looks like.&lt;/p&gt;

&lt;p&gt;I wanted to build something I could point to and say — this is how I would do it at a real company. No shortcuts, no tutorial hand-holding.&lt;/p&gt;

&lt;p&gt;The result: a fully automated pipeline that takes code from a git push to a live HTTPS endpoint on AWS, with security scanning, infrastructure as code, observability, and zero static credentials anywhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live URL:&lt;/strong&gt; &lt;a href="https://tasks.therealblessing.com" rel="noopener noreferrer"&gt;https://tasks.therealblessing.com&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/nanafilbert/cicd-aws-terraform-deploy" rel="noopener noreferrer"&gt;github.com/nanafilbert/cicd-aws-terraform-deploy&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;A Node.js task manager API with a Kanban dashboard UI, deployed to AWS behind an Application Load Balancer with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;8-stage GitHub Actions CI/CD pipeline&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OIDC keyless AWS authentication&lt;/strong&gt; — no static credentials&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modular Terraform&lt;/strong&gt; — VPC, ALB, ASG, EC2, all in reusable modules&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-stage Docker build&lt;/strong&gt; — tests run inside the build, broken images can't be pushed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trivy CVE scanning&lt;/strong&gt; — pipeline fails on HIGH/CRITICAL vulnerabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ACM SSL certificate&lt;/strong&gt; with custom domain&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prometheus + Grafana&lt;/strong&gt; observability stack locally&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ASG instance refresh&lt;/strong&gt; for zero-downtime deployments&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GitHub Actions (OIDC) → Docker Hub → AWS
                                      │
                              ALB (HTTPS:443)
                              ACM Certificate
                              HTTP → HTTPS redirect
                                      │
                          Auto Scaling Group (EC2 t3.small)
                                      │
                              Docker Container
                              Node.js API :3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The pipeline authenticates to AWS using OIDC short-lived tokens — no &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; or &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt; anywhere. Every deploy triggers an ASG instance refresh that replaces EC2 instances with fresh ones pulling the latest image.&lt;/p&gt;


&lt;h2&gt;
  
  
  The CI/CD Pipeline
&lt;/h2&gt;

&lt;p&gt;Eight stages, every one intentional:&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Lint
&lt;/h3&gt;

&lt;p&gt;ESLint checks code quality. Fails fast before any expensive steps.&lt;/p&gt;
&lt;h3&gt;
  
  
  2. Test
&lt;/h3&gt;

&lt;p&gt;Jest runs 19 integration tests with coverage enforced at 80%. If tests fail, nothing gets built or deployed.&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Security Scan
&lt;/h3&gt;

&lt;p&gt;Trivy scans the filesystem for known CVEs in dependencies. Fails the pipeline on any HIGH or CRITICAL unfixed vulnerability. This caught real issues during development — Alpine CVEs and npm transitive dependency vulnerabilities that needed pinning.&lt;/p&gt;
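&lt;p&gt;A sketch of what that stage can look like in the workflow; the step name is illustrative, but the flags match the behaviour described (assuming the Trivy CLI is installed on the runner):&lt;/p&gt;

```yaml
# Hypothetical security-scan step: exit non-zero on unfixed
# HIGH/CRITICAL CVEs so the pipeline stops here.
- name: Scan filesystem with Trivy
  run: trivy fs --severity HIGH,CRITICAL --exit-code 1 --ignore-unfixed .
```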
&lt;h3&gt;
  
  
  4. Build &amp;amp; Push
&lt;/h3&gt;

&lt;p&gt;Multi-stage Docker build. Three stages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;deps&lt;/code&gt; — installs only production dependencies&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;test&lt;/code&gt; — runs Jest inside the build process&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;production&lt;/code&gt; — minimal Alpine 3.21, non-root user, only what's needed to run&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The test stage is critical. If your tests fail, the image doesn't get built, so a broken image can never reach the registry.&lt;/p&gt;
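&lt;p&gt;A skeletal version of such a Dockerfile, with the three stages named as above. This is a sketch, not the project's exact file; base image tags and paths are illustrative:&lt;/p&gt;

```dockerfile
# deps: production dependencies only
FROM node:22-alpine3.21 AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# test: full install plus Jest; a failure here aborts the whole build
FROM node:22-alpine3.21 AS test
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm test

# production: minimal runtime, non-root user
FROM node:22-alpine3.21 AS production
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
# Copying from the test stage forces it to build (and pass) first,
# even when BuildKit only targets the production stage.
COPY --from=test /app/package.json ./package.json
COPY src ./src
USER node
CMD ["node", "src/app.js"]
```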
&lt;h3&gt;
  
  
  5. Terraform Plan
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;terraform plan&lt;/code&gt; runs and saves the plan as an artifact. This is what gets applied in the next stage — no variables re-injected, no drift between plan and apply.&lt;/p&gt;
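&lt;p&gt;A sketch of how the plan can be saved and handed to the deploy job unchanged (step names and the artifact name are illustrative):&lt;/p&gt;

```yaml
# Hypothetical plan-job steps: write the plan to a file and publish it
# as an artifact; the deploy job downloads and applies this exact file.
- name: Terraform plan
  run: terraform plan -input=false -out=tfplan
- name: Upload plan artifact
  uses: actions/upload-artifact@v4
  with:
    name: tfplan
    path: tfplan
```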
&lt;h3&gt;
  
  
  6. Deploy
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;terraform apply&lt;/code&gt; using the saved plan. Followed immediately by an explicit ASG instance refresh:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws autoscaling start-instance-refresh &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--auto-scaling-group-name&lt;/span&gt; &lt;span class="nv"&gt;$ASG_NAME&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--preferences&lt;/span&gt; &lt;span class="s1"&gt;'{"MinHealthyPercentage": 50, "InstanceWarmup": 60}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what actually gets new code onto the instances. Without triggering the refresh, the ASG would keep running the old image indefinitely.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Smoke Test
&lt;/h3&gt;

&lt;p&gt;Polls &lt;code&gt;/health/ready&lt;/code&gt; for up to 6 minutes after deploy. If the app never becomes healthy, the pipeline fails and you know immediately.&lt;/p&gt;
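&lt;p&gt;The polling logic can be sketched as a small shell helper. This illustrates the pattern rather than reproducing the pipeline's exact script; endpoint, timeout, and interval are parameters:&lt;/p&gt;

```shell
# Poll a readiness endpoint until it answers 2xx or a deadline passes.
# Defaults mirror the 6-minute window described above.
wait_for_ready() {
  url=$1
  timeout=${2:-360}     # total seconds to wait
  interval=${3:-10}     # seconds between polls
  deadline=$(( $(date +%s) + timeout ))
  until curl -sf -o /dev/null "$url"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "smoke test failed: $url never became ready"
      return 1
    fi
    sleep "$interval"
  done
  echo "smoke test passed"
}
```

&lt;p&gt;For example, &lt;code&gt;wait_for_ready https://tasks.therealblessing.com/health/ready&lt;/code&gt; after the instance refresh completes.&lt;/p&gt;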

&lt;h3&gt;
  
  
  8. Summary
&lt;/h3&gt;

&lt;p&gt;A pass/fail table written to the GitHub Actions job summary. Clean, visible at a glance.&lt;/p&gt;
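&lt;p&gt;Under the hood this is just Markdown appended to a special file. A minimal sketch; the table rows are placeholders, and the fallback path only exists so the snippet runs outside Actions:&lt;/p&gt;

```shell
# GitHub Actions renders Markdown written to $GITHUB_STEP_SUMMARY on the
# job page. The fallback path is only for running this locally.
: "${GITHUB_STEP_SUMMARY:=/tmp/step-summary.md}"
{
  echo "| Stage | Result |"
  echo "| ----- | ------ |"
  echo "| Lint | pass |"
  echo "| Smoke test | pass |"
} >> "$GITHUB_STEP_SUMMARY"
```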




&lt;h2&gt;
  
  
  The Terraform Setup
&lt;/h2&gt;

&lt;p&gt;Three independent modules:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;networking&lt;/strong&gt; — VPC, public subnets across two AZs, internet gateway, route tables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;security&lt;/strong&gt; — Security groups. The ALB accepts traffic from anywhere on 80 and 443. The app security group only accepts traffic from the ALB security group — EC2 instances are never directly reachable from the internet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;compute&lt;/strong&gt; — ALB with HTTP redirect to HTTPS and an HTTPS listener with ACM certificate, launch template with IMDSv2 required, ASG with rolling instance refresh, IAM role for EC2 with SSM access.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;bootstrap/&lt;/code&gt; folder handles the one-time setup that must exist before the pipeline can run — the OIDC provider, IAM role, S3 state bucket, and DynamoDB lock table.&lt;/p&gt;

&lt;p&gt;Remote state in S3 with DynamoDB locking means the pipeline and local Terraform commands never conflict.&lt;/p&gt;
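&lt;p&gt;The backend wiring is a few lines of HCL. A sketch with placeholder names; the real bucket and lock table are the ones created by &lt;code&gt;bootstrap/&lt;/code&gt;:&lt;/p&gt;

```hcl
# Placeholder names; the bootstrap step creates the real bucket and table.
terraform {
  backend "s3" {
    bucket         = "example-tfstate-bucket"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-tf-locks"
    encrypt        = true
  }
}
```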




&lt;h2&gt;
  
  
  OIDC — The Right Way to Authenticate
&lt;/h2&gt;

&lt;p&gt;This was the most important decision in the project.&lt;/p&gt;

&lt;p&gt;The traditional approach is to create an IAM user, generate access keys, and store them as GitHub secrets. This works but creates long-lived credentials that can be leaked, rotated incorrectly, or forgotten.&lt;/p&gt;

&lt;p&gt;OIDC works differently. GitHub Actions requests a short-lived token from GitHub's OIDC provider. AWS verifies that token against a trust policy and issues temporary credentials. The whole exchange happens in seconds and the credentials expire when the job ends.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;
  &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v4&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;role-to-assume&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ROLE_ARN }}&lt;/span&gt;
    &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_REGION }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The trust policy on the IAM role restricts assumption to this specific GitHub repo only:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"Condition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"StringLike"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"token.actions.githubusercontent.com:sub"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; 
      &lt;/span&gt;&lt;span class="s2"&gt;"repo:nanafilbert/cicd-aws-terraform-deploy:*"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No static credentials. Nothing to rotate. Nothing to leak.&lt;/p&gt;




&lt;h2&gt;
  
  
  Bugs I Hit (And What They Taught Me)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The Docker permission bug&lt;/strong&gt; — The production container was crashing with &lt;code&gt;Cannot find module '/app/src/app.js'&lt;/code&gt; even though the file clearly existed in the image. Took a while to figure out: I had set &lt;code&gt;chmod -R 550&lt;/code&gt; on the app directory, which grants read and execute only to the owner and group. The non-root user the container runs as fell into "others", which had no permissions at all, and without the execute bit on a directory a process can't traverse into it. Changed to &lt;code&gt;755&lt;/code&gt; and it worked immediately. The lesson: file permission bugs are silent and confusing — always verify what your non-root user can actually access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The HSTS loop&lt;/strong&gt; — After adding HTTPS, all API calls from the browser were being upgraded to HTTPS even when I explicitly typed &lt;code&gt;http://&lt;/code&gt;. Helmet's default configuration sets a Strict-Transport-Security header, which tells browsers to remember to always use HTTPS for this origin. Even clearing the cache wasn't enough — had to explicitly clear the HSTS policy in Chrome's &lt;code&gt;chrome://net-internals/#hsts&lt;/code&gt; and disable the header in Helmet for the HTTP-only ALB endpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The instance refresh gap&lt;/strong&gt; — After every deploy the new Docker image was pushed to Docker Hub, but the EC2 instance kept running the old one. Terraform saw no infrastructure changes so it didn't replace anything. The fix was to explicitly trigger an ASG instance refresh in the pipeline after every apply. Without that step, automation is an illusion — you're just pushing images that never get deployed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Terraform state lock&lt;/strong&gt; — A failed pipeline run left a lock on the state file. Subsequent runs couldn't acquire the lock and failed immediately. Learned that &lt;code&gt;terraform force-unlock -force &amp;lt;lock-id&amp;gt;&lt;/code&gt; from the correct working directory resolves this, and added auto-unlock logic to the plan job for future failures.&lt;/p&gt;




&lt;h2&gt;
  
  
  Observability
&lt;/h2&gt;

&lt;p&gt;The app exposes Prometheus metrics via &lt;code&gt;prom-client&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;promClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;prom-client&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;promClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;collectDefaultMetrics&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;register&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/health/metrics&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;register&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;contentType&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;register&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Locally, &lt;code&gt;docker-compose up&lt;/code&gt; starts the full stack — app, nginx, Prometheus, and Grafana. Prometheus scrapes &lt;code&gt;/health/metrics&lt;/code&gt; every 15 seconds. Grafana visualizes CPU usage, heap memory, event loop lag, and active handles in real time.&lt;/p&gt;
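&lt;p&gt;The scrape side of that is a short &lt;code&gt;prometheus.yml&lt;/code&gt; entry. A sketch assuming the app's docker-compose service is named &lt;code&gt;app&lt;/code&gt;:&lt;/p&gt;

```yaml
# Hypothetical prometheus.yml excerpt: scrape the metrics endpoint
# every 15 seconds, as described above.
scrape_configs:
  - job_name: tasks-api
    metrics_path: /health/metrics
    scrape_interval: 15s
    static_configs:
      - targets: ["app:3000"]
```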

&lt;p&gt;Running a load test against the local API makes the graphs spike visibly — useful for demonstrating the observability story to anyone reviewing the project.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Would Add Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RDS PostgreSQL&lt;/strong&gt; — tasks currently live in memory and reset on deploy. A real database would make this production-ready in a deeper sense.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudWatch alarms&lt;/strong&gt; — alert on unhealthy host count and high CPU before users notice.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WAF&lt;/strong&gt; — Web Application Firewall in front of the ALB for rate limiting and bot protection at the infrastructure level.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;The most valuable part of this project wasn't the technology — it was the debugging. Every bug I hit taught me something real: how file permissions work in containers, how browsers cache security policies, how Terraform state locking works, how ASG instance refresh interacts with deploy automation.&lt;/p&gt;

&lt;p&gt;That's the difference between following a tutorial and building something yourself. The tutorial gives you the happy path. Building it yourself gives you everything else.&lt;/p&gt;

&lt;p&gt;If you're building a DevOps portfolio, don't copy a tutorial. Pick a problem, build something real, and let it break. That's where the learning actually happens.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The full source code is at &lt;a href="https://github.com/nanafilbert/cicd-aws-terraform-deploy" rel="noopener noreferrer"&gt;github.com/nanafilbert/cicd-aws-terraform-deploy&lt;/a&gt; and the live app is running at &lt;a href="https://tasks.therealblessing.com" rel="noopener noreferrer"&gt;https://tasks.therealblessing.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




</description>
      <category>devops</category>
      <category>aws</category>
      <category>terraform</category>
      <category>docker</category>
    </item>
    <item>
      <title>How I Got Charged for an Elastic IP — And What I Learned About Cleaning Up AWS Resources</title>
      <dc:creator>Filbert Nana Blessing</dc:creator>
      <pubDate>Tue, 29 Apr 2025 19:22:44 +0000</pubDate>
      <link>https://forem.com/filbert_nanablessing_1ae/how-i-got-charged-for-an-elastic-ip-and-what-i-learned-about-cleaning-up-aws-resources-22kj</link>
      <guid>https://forem.com/filbert_nanablessing_1ae/how-i-got-charged-for-an-elastic-ip-and-what-i-learned-about-cleaning-up-aws-resources-22kj</guid>
<description>&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://follow-my-journey-in-devops.hashnode.dev/how-i-got-charged-for-an-elastic-ip-and-what-i-learned-about-cleaning-up-aws-resources" rel="noopener noreferrer"&gt;follow-my-journey-in-devops.hashnode.dev&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;When I started experimenting with AWS as part of my journey into cloud and DevOps, I was focused on learning how to launch EC2 instances, SSH into them, and set up web servers. What I didn't expect was to get charged for something I wasn't even actively using: an Elastic IP. In this post, I want to share what happened, what I learned, and how you can avoid this common beginner mistake.&lt;/p&gt;

&lt;p&gt;If you're new to AWS, check out &lt;a href="https://dev.to/filbert_nanablessing_1ae/launching-your-first-ec2-instance-a-beginners-guide-b2f"&gt;my first post&lt;/a&gt; where I share how I got started experimenting with EC2 and the AWS Free Tier.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is an Elastic IP?
&lt;/h2&gt;

&lt;p&gt;An Elastic IP (EIP) is a static public IPv4 address provided by AWS. Unlike regular public IPs (which change when you stop and start an instance), an Elastic IP remains the same until you explicitly release it. It's particularly useful if you want to ensure a consistent IP address for your application or server.&lt;/p&gt;

&lt;p&gt;However, there's a catch: while Elastic IPs are free when attached to a running EC2 instance, they incur charges when they're not in use.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Mistake
&lt;/h2&gt;

&lt;p&gt;I tried using the AWS CLI to create an EC2 instance from my terminal. I wrote a script and ran it, but it didn’t seem to work. I tried again, and the result was the same. So I left it and instead used the AWS Management Console, which worked immediately.&lt;/p&gt;

&lt;p&gt;What I didn’t realize was that even though my CLI attempts failed, they somehow triggered the allocation of two Elastic IPs to my account. The next day, a little over 24 hours later, I received a billing alert. I was surprised and curious because I was still using the Free Tier and hadn’t expected any charges. That’s when I discovered the Elastic IPs were the cause.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why AWS Charges for Idle Elastic IPs
&lt;/h2&gt;

&lt;p&gt;AWS offers one Elastic IP per running EC2 instance for free. But once that IP is no longer attached to a running instance, it’s considered idle. The reason behind the charge is simple: AWS wants to prevent users from hoarding public IP addresses, which are a limited resource.&lt;/p&gt;

&lt;p&gt;Here are the key points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Free:&lt;/strong&gt; when the Elastic IP is associated with a running instance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Charged:&lt;/strong&gt; if the instance is stopped or terminated but the Elastic IP remains allocated.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I Learned About Resource Cleanup
&lt;/h2&gt;

&lt;p&gt;This experience taught me a lot about how AWS billing works and why it's so important to clean up unused resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Releasing Elastic IPs:&lt;/strong&gt; terminating an instance does not automatically release the EIP. You have to do it manually.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring costs:&lt;/strong&gt; even if your instance is off, your billing might still increase due to leftover resources. Check the VPC section of your billing dashboard — that’s where Elastic IP charges show up.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Billing alarms:&lt;/strong&gt; set up CloudWatch billing alarms to alert you if your usage goes above $0.01 or any threshold you’re comfortable with.&lt;/li&gt;
&lt;/ul&gt;
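&lt;p&gt;The cleanup itself is two AWS CLI calls. A hedged sketch, assuming AWS CLI v2 with credentials configured; always review the describe output before releasing anything:&lt;/p&gt;

```shell
# List allocation ids of Elastic IPs with no association (idle and
# therefore billable), then release one by id.
list_idle_eips() {
  aws ec2 describe-addresses \
    --query 'Addresses[?AssociationId==`null`].AllocationId' \
    --output text
}

release_eip() {
  # $1 is an allocation id such as eipalloc-0123456789abcdef0 (placeholder)
  aws ec2 release-address --allocation-id "$1"
}
```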

&lt;h2&gt;
  
  
  Tips to Avoid Elastic IP Charges
&lt;/h2&gt;

&lt;p&gt;If you're a beginner using AWS Free Tier, here are a few tips to help avoid charges like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Elastic IPs only when necessary.&lt;/li&gt;
&lt;li&gt;Prefer auto-assigned public IPs for short-term or test projects.&lt;/li&gt;
&lt;li&gt;Always release unused Elastic IPs manually.&lt;/li&gt;
&lt;li&gt;Set billing alerts early.&lt;/li&gt;
&lt;li&gt;Regularly review the Cost Explorer or Billing Dashboard to track your usage.&lt;/li&gt;
&lt;/ul&gt;
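&lt;p&gt;For the billing-alert tip, one CloudWatch alarm is enough. A hedged sketch: billing metrics only exist in &lt;code&gt;us-east-1&lt;/code&gt;, billing alerts must be enabled in your account preferences first, and the SNS topic ARN is a placeholder you would create and subscribe your email to:&lt;/p&gt;

```shell
# Alarm when estimated charges exceed $1; the topic ARN is a placeholder.
create_billing_alarm() {
  aws cloudwatch put-metric-alarm \
    --region us-east-1 \
    --alarm-name billing-above-one-dollar \
    --namespace "AWS/Billing" \
    --metric-name EstimatedCharges \
    --dimensions Name=Currency,Value=USD \
    --statistic Maximum \
    --period 21600 \
    --evaluation-periods 1 \
    --threshold 1 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions "arn:aws:sns:us-east-1:123456789012:billing-alerts"
}
```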

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Getting charged for an idle Elastic IP was frustrating at first, but it became a valuable lesson in AWS resource management. Now I know to double-check every service I'm using and to always clean up after myself. If you're just starting out with AWS, I hope this post saves you from a similar surprise.&lt;/p&gt;

&lt;p&gt;Have you had a similar experience with AWS costs? Share your story or tips in the comments below!&lt;/p&gt;


</description>
      <category>aws</category>
      <category>devops</category>
      <category>beginners</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Launching Your First EC2 Instance: A Beginner's Guide 🚀</title>
      <dc:creator>Filbert Nana Blessing</dc:creator>
      <pubDate>Sat, 26 Apr 2025 10:15:01 +0000</pubDate>
      <link>https://forem.com/filbert_nanablessing_1ae/launching-your-first-ec2-instance-a-beginners-guide-b2f</link>
      <guid>https://forem.com/filbert_nanablessing_1ae/launching-your-first-ec2-instance-a-beginners-guide-b2f</guid>
      <description>&lt;h1&gt;
  
  
  Launching Your First EC2 Instance: A Beginner's Guide 🚀
&lt;/h1&gt;

&lt;p&gt;If you're just getting started with AWS, one of the first services you'll use is &lt;strong&gt;Amazon EC2 (Elastic Compute Cloud)&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
EC2 lets you create virtual machines (called &lt;strong&gt;instances&lt;/strong&gt;) that you can access and run just like real computers.&lt;/p&gt;

&lt;p&gt;In this guide, I'll walk you through the simple steps to create your very first EC2 instance — &lt;strong&gt;no complicated jargon&lt;/strong&gt;!&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 1: Open the EC2 Dashboard
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Log in to your &lt;strong&gt;AWS Management Console&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;At the top of the page, type &lt;strong&gt;EC2&lt;/strong&gt; into the search bar and click on the EC2 service.&lt;/li&gt;
&lt;li&gt;On the left-hand menu, click &lt;strong&gt;Instances&lt;/strong&gt;, then click &lt;strong&gt;Launch Instance&lt;/strong&gt; to start the creation process.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 2: Name Your Instance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Give your instance a &lt;strong&gt;name&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;It can be anything you want — something that makes it easy to recognize, especially when you have multiple instances later on.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 3: Choose an Amazon Machine Image (AMI)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Select an &lt;strong&gt;Amazon Machine Image (AMI)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The AMI includes:

&lt;ul&gt;
&lt;li&gt;An &lt;strong&gt;Operating System&lt;/strong&gt; (like Ubuntu, Amazon Linux, or Windows)&lt;/li&gt;
&lt;li&gt;Optional &lt;strong&gt;application software&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Custom configurations&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Think of the AMI as the "template" for your virtual machine.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 4: Select the Instance Type
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Choose your &lt;strong&gt;instance type&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For beginners and those using the &lt;strong&gt;AWS Free Tier&lt;/strong&gt;, the best choice is &lt;strong&gt;t2.micro&lt;/strong&gt; — it gives you enough resources without any cost.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 5: Create a Key Pair
&lt;/h2&gt;

&lt;p&gt;You need a &lt;strong&gt;key pair&lt;/strong&gt; to securely connect (SSH) to your instance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new &lt;strong&gt;key pair&lt;/strong&gt; and give it a name.&lt;/li&gt;
&lt;li&gt;AWS will generate:

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;public key&lt;/strong&gt; (AWS stores this)&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;private key&lt;/strong&gt; (you download as a &lt;code&gt;.pem&lt;/code&gt; file)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Important:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Download the &lt;code&gt;.pem&lt;/code&gt; file immediately! AWS will not allow you to download it again.&lt;br&gt;&lt;br&gt;
Keep the file safe — you’ll need it when connecting to your instance.&lt;/p&gt;
&lt;/blockquote&gt;
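&lt;p&gt;On macOS and Linux there is one extra step: &lt;code&gt;ssh&lt;/code&gt; refuses a private key that other users can read, so tighten the file's permissions after downloading it (the filename here is an example):&lt;/p&gt;

```shell
# Restrict the key to owner read-only before first use; without this,
# ssh exits with an "UNPROTECTED PRIVATE KEY FILE" warning.
KEY="your-keypair.pem"   # example filename; use your downloaded .pem
if [ -f "$KEY" ]; then
  chmod 400 "$KEY"
fi
```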




&lt;h2&gt;
  
  
  Step 6: Configure Network Settings
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Under &lt;strong&gt;Network Settings&lt;/strong&gt;, click &lt;strong&gt;Edit&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;AWS automatically assigns you a &lt;strong&gt;VPC&lt;/strong&gt; and &lt;strong&gt;Subnet&lt;/strong&gt; — you can leave the default options.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then, configure your &lt;strong&gt;Security Group&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allow &lt;strong&gt;SSH traffic&lt;/strong&gt; (port 22) — to connect to your instance.&lt;/li&gt;
&lt;li&gt;Allow &lt;strong&gt;HTTP (port 80)&lt;/strong&gt; and &lt;strong&gt;HTTPS (port 443)&lt;/strong&gt; traffic — if you plan to host a website.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 7: Configure Storage
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS offers &lt;strong&gt;8 GiB&lt;/strong&gt; of storage by default with the Free Tier.&lt;/li&gt;
&lt;li&gt;This is usually enough to get started, but you can adjust it based on your needs.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 8: Review and Launch
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Take a moment to &lt;strong&gt;review all your settings&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;If everything looks good, click &lt;strong&gt;Launch&lt;/strong&gt;!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your instance will move to the &lt;strong&gt;Running&lt;/strong&gt; state.&lt;br&gt;&lt;br&gt;
Wait until you see &lt;strong&gt;Status Checks: 2/2 passed&lt;/strong&gt; before connecting.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 9: Connect to Your Instance
&lt;/h2&gt;

&lt;p&gt;Once your instance is running:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;Connect&lt;/strong&gt; in the AWS Console and follow the provided instructions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or manually SSH from your terminal or PowerShell:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh -i /path/to/your-keypair.pem ubuntu@your-ec2-public-ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Final Thoughts 🎉
&lt;/h2&gt;

&lt;p&gt;Congratulations! You've just launched and connected to your very first EC2 instance!&lt;/p&gt;

&lt;p&gt;This skill lays the foundation for everything else you’ll build in the cloud. Keep exploring, try different instance types, set up a basic web server — and most importantly — enjoy the learning journey! 🚀&lt;/p&gt;

&lt;p&gt;💬 Got questions or stuck on a step? Drop a comment below — I’d love to help!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published on my blog at &lt;a href="https://follow-my-journey-in-devops.hashnode.dev/launching-your-first-ec2-instance-a-beginners-guide" rel="noopener noreferrer"&gt;Hashnode&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
