<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: asim-makes</title>
    <description>The latest articles on Forem by asim-makes (@asimmakes).</description>
    <link>https://forem.com/asimmakes</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3423790%2F8b528168-a832-43f9-9cc2-f5216d55f3f0.png</url>
      <title>Forem: asim-makes</title>
      <link>https://forem.com/asimmakes</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/asimmakes"/>
    <language>en</language>
    <item>
      <title>Decoupling at Scale: My Deep Dive into AWS Event-Driven Architecture (API Gateway, EventBridge, SQS)</title>
      <dc:creator>asim-makes</dc:creator>
      <pubDate>Wed, 22 Oct 2025 17:04:56 +0000</pubDate>
      <link>https://forem.com/asimmakes/decoupling-at-scale-my-deep-dive-into-aws-event-driven-architecture-api-gateway-eventbridge-sqs-4a90</link>
      <guid>https://forem.com/asimmakes/decoupling-at-scale-my-deep-dive-into-aws-event-driven-architecture-api-gateway-eventbridge-sqs-4a90</guid>
      <description>&lt;p&gt;Hey everyone!!!  &lt;/p&gt;

&lt;p&gt;Tihar is here in Nepal 🎉. With the holiday break on, I decided to wrap up another portfolio project: &lt;em&gt;designing and deploying a complete, event-driven e-commerce order processing system on AWS&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I’ve previously built a few monolithic REST APIs, but this time I wanted to &lt;em&gt;challenge myself&lt;/em&gt; and understand how &lt;em&gt;microservices differ from monolithic&lt;/em&gt; systems in practice. So I chose a &lt;em&gt;microservices architecture centered around an Event Bus (Amazon EventBridge).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;While the project follows a microservices pattern, my main goal wasn’t to build a fancy UI or user-facing backend — instead, I focused on &lt;em&gt;AWS architecture, infrastructure, and operational maturity.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
To push myself deeper into the IaC world, I decided to deploy everything using &lt;em&gt;raw CloudFormation YAML&lt;/em&gt; — no SAM, no CDK, no Terraform.&lt;/p&gt;

&lt;p&gt;Contrary to popular opinion, I actually found CloudFormation to be a &lt;em&gt;super fun tool&lt;/em&gt; once you get used to its structure and declarative nature.&lt;/p&gt;




&lt;h2&gt;🧱 Tech Stack Overview&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;AWS Service(s)&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;em&gt;IaC&lt;/em&gt;&lt;/td&gt;
&lt;td&gt;CloudFormation (raw YAML)&lt;/td&gt;
&lt;td&gt;Full IaC deployment — no SAM/CDK/Terraform&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;em&gt;Compute&lt;/em&gt;&lt;/td&gt;
&lt;td&gt;AWS Lambda (x5 microservices)&lt;/td&gt;
&lt;td&gt;Each service is independent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;em&gt;Data Storage&lt;/em&gt;&lt;/td&gt;
&lt;td&gt;DynamoDB (x4 tables)&lt;/td&gt;
&lt;td&gt;One per service for isolation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;em&gt;Integration &amp;amp; Routing&lt;/em&gt;&lt;/td&gt;
&lt;td&gt;API Gateway, EventBridge&lt;/td&gt;
&lt;td&gt;Event-driven communication&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;em&gt;Buffering &amp;amp; Resilience&lt;/em&gt;&lt;/td&gt;
&lt;td&gt;SQS (x4 + DLQs)&lt;/td&gt;
&lt;td&gt;Protects against message loss/failure&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;em&gt;Notifications&lt;/em&gt;&lt;/td&gt;
&lt;td&gt;SNS (x1)&lt;/td&gt;
&lt;td&gt;Sends real-time alerts to users&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;em&gt;Observability&lt;/em&gt;&lt;/td&gt;
&lt;td&gt;CloudWatch&lt;/td&gt;
&lt;td&gt;Logs, metrics, alarms for all services&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;🏗️ Architecture Diagram&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekbwmec2z4t1c3vjexiw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekbwmec2z4t1c3vjexiw.png" alt="Architecture Diagram" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;High-level architecture showing all AWS services and their interactions&lt;/p&gt;&lt;/center&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;em&gt;API Gateway&lt;/em&gt; receives the customer's order and passes it to the &lt;em&gt;Order Service Lambda.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;Order Service&lt;/em&gt; simply logs the initial order and publishes an event to &lt;em&gt;EventBridge.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;EventBridge&lt;/em&gt; immediately routes this event to the &lt;em&gt;Inventory Service:&lt;/em&gt;

&lt;ul&gt;
&lt;li&gt;i. If &lt;em&gt;in stock&lt;/em&gt;, Inventory Service publishes its own event.
&lt;/li&gt;
&lt;li&gt;ii. If &lt;em&gt;out of stock&lt;/em&gt;, it notifies the user via &lt;em&gt;SNS.&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;Payment Service&lt;/em&gt; listens for events sent by &lt;em&gt;Inventory Service:&lt;/em&gt;

&lt;ul&gt;
&lt;li&gt;i. A &lt;em&gt;successful payment&lt;/em&gt; results in a "Payment Successful" event, triggering the &lt;em&gt;Shipping&lt;/em&gt; flow.
&lt;/li&gt;
&lt;li&gt;ii. A &lt;em&gt;failed payment&lt;/em&gt; triggers a compensation action (like restocking the item and notifying the user via &lt;em&gt;SNS&lt;/em&gt;).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Finally, the &lt;em&gt;Shipping Service&lt;/em&gt; processes the paid order using &lt;em&gt;SQS&lt;/em&gt; and &lt;em&gt;DLQ&lt;/em&gt; for reliability.&lt;/li&gt;
&lt;/ol&gt;
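&lt;p&gt;To make the flow above concrete, here is a tiny, AWS-free sketch of it in Python: a dict stands in for EventBridge, and plain functions stand in for the Lambdas. The handler logic and payload fields are illustrative, not the project's actual code.&lt;/p&gt;

```python
# Hypothetical simulation of the event flow: a dict-based "event bus"
# plays EventBridge; functions play the Lambdas. Event names follow the post.
from collections import defaultdict

bus = defaultdict(list)      # detail-type -> list of subscriber callables
orders = {}                  # stand-in for the Orders DynamoDB table
stock = {"sku-1": 2}         # stand-in for the Inventory table

def publish(detail_type, detail):
    for handler in bus[detail_type]:
        handler(detail)

def order_service(detail):   # steps 1-2: log the order, emit OrderPlaced
    orders[detail["orderId"]] = "PLACED"
    publish("OrderPlaced", detail)

def inventory_service(detail):   # step 3: check stock, confirm or reject
    in_stock = stock.get(detail["sku"], 0) >= detail["qty"]
    if in_stock:
        stock[detail["sku"]] -= detail["qty"]
    publish("StockConfirmation", {**detail, "stockConfirmed": in_stock})

def payment_service(detail):     # step 4: only the happy path shown here
    if detail["stockConfirmed"]:
        publish("PaymentConfirmation", {**detail, "paid": True})

def shipping_service(detail):    # step 5: finalize the order
    if detail.get("paid"):
        orders[detail["orderId"]] = "SHIPPED"

bus["OrderPlaced"].append(inventory_service)
bus["StockConfirmation"].append(payment_service)
bus["PaymentConfirmation"].append(shipping_service)

order_service({"orderId": "o-1", "sku": "sku-1", "qty": 1})
```

&lt;p&gt;Running it walks one order through the happy path, ending in a shipped state; the failure branches (out of stock, failed payment) would publish their own events instead.&lt;/p&gt;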




&lt;h2&gt;⚙️ Building the microservice stack&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcebqvomwl0u5oymxe364.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcebqvomwl0u5oymxe364.png" alt="Microservice Stacks" width="403" height="695"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;All microservice stacks forming the complete event-driven e-commerce system&lt;/p&gt;&lt;/center&gt;




&lt;h3&gt;a) Base Infra Stack&lt;/h3&gt;

&lt;p&gt;The three core pieces defined in this stack are a Customer Managed Key for encryption (CMK), a central Event Bus, and an Operations Alerting Topic.&lt;/p&gt;

&lt;p&gt;I defined a Customer Managed Key (CMK) using &lt;em&gt;AWS::KMS::Key.&lt;/em&gt; I chose a CMK over the default AWS-managed keys because I wanted hands-on practice with KMS. I had read about CMKs during my SAA-C03 preparation, but the concept never stuck, and I made plenty of mistakes on the encryption practice questions. Implementing one in this project is what finally made it click. Owning the CMK gives me complete control over its usage and policy. Creating the key also requires defining a key policy; mine allows the root user full control, grants general usage (encrypt, decrypt, etc.) to all principals within the account, and explicitly authorizes specific AWS services (SNS, SQS, DynamoDB, EventBridge, and CloudWatch) to use the key for their encryption. To follow best practices, I also enabled automatic key rotation.&lt;/p&gt;
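&lt;p&gt;A key policy of that shape looks roughly like the structure below, shown as the Python/JSON document that CloudFormation's KeyPolicy property serializes to. The account ID, Sids, and the exact action list are placeholders; my actual template differs in detail.&lt;/p&gt;

```python
# Illustrative key policy matching the three statements described above.
# "123456789012" is a placeholder account ID, not a real one.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # 1. Root user keeps full administrative control of the key
            "Sid": "AllowRootAdmin",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # 2. General usage for principals within this account only
            "Sid": "AllowAccountUsage",
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
            "Condition": {"StringEquals": {"kms:CallerAccount": "123456789012"}},
        },
        {   # 3. Service principals that encrypt their data with this key
            "Sid": "AllowServiceUse",
            "Effect": "Allow",
            "Principal": {"Service": [
                "sns.amazonaws.com", "sqs.amazonaws.com",
                "dynamodb.amazonaws.com", "events.amazonaws.com",
                "cloudwatch.amazonaws.com",
            ]},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
        },
    ],
}
```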

&lt;p&gt;Next, I established the communication backbone for the microservices architecture with an &lt;em&gt;AWS::Events::EventBus&lt;/em&gt; named &lt;strong&gt;EcomEventBus&lt;/strong&gt;. In an event-driven world, this bus acts like the main switchboard. Services don't communicate directly; instead, they publish events to the event bus, and other services define rules to subscribe only to the events relevant to them. This decouples the e-commerce services so that they can evolve independently without breaking each other.&lt;/p&gt;

&lt;p&gt;Finally, for the alerting mechanism, I created the &lt;em&gt;OpsAlertTopic,&lt;/em&gt; an SNS topic. I integrated the custom KMS key here, ensuring every message published to this topic is encrypted at rest, which matters for sensitive operations data. To make these resources accessible to the other stacks, I used the Outputs section to export the ARNs of the KMS key, the Event Bus, and the Ops Topic. This completed the foundation of the project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrmbly5eo73iyf0khyhn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrmbly5eo73iyf0khyhn.png" alt="KMS Dashboard" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;Custom KMS key configured for encryption with automatic rotation&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0bofamrz6pkkz400bq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj0bofamrz6pkkz400bq6.png" alt="EventBus Dashboard" width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;Custom EventBridge bus (EcomEventBus) for event routing&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foruscavkg9g805p3xsqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foruscavkg9g805p3xsqb.png" alt="SNSTopic" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;OpsAlert SNS topic with KMS encryption for secure notifications&lt;/p&gt;&lt;/center&gt;




&lt;h3&gt;b) Order Stack&lt;/h3&gt;

&lt;p&gt;This stack defines the complete Orders Service of my e-commerce platform. The stack is a self-contained microservice and is exposed to both internal events and external HTTP requests.&lt;/p&gt;

&lt;p&gt;The service's data layer is the &lt;em&gt;OrdersTable (DynamoDB),&lt;/em&gt; keyed by &lt;em&gt;orderId.&lt;/em&gt; PITR and CMK-based SSE are enabled via &lt;code&gt;!ImportValue MyKmsKeyArn&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The primary part of the service is the &lt;em&gt;OrderLambda&lt;/em&gt; function (Python), handling two different input types:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HTTP Request Handling&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Handles POST requests from API Gateway, validates input, stores order in DynamoDB, and publishes &lt;em&gt;OrderPlaced&lt;/em&gt; event to EventBridge.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;EventBridge Event Handling&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Updates order status based on events: &lt;code&gt;StockConfirmation&lt;/code&gt;, &lt;code&gt;PaymentConfirmation&lt;/code&gt;, and &lt;code&gt;ShipmentCreated&lt;/code&gt; events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Access Control and API Exposure&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
IAM Role grants minimal privileges. Exposed via API Gateway POST &lt;code&gt;/orders&lt;/code&gt;.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;EventBridge Rules&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Rules trigger Lambda for failure events and shipments using content filtering.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
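&lt;p&gt;Handling two input types in one Lambda works because the two event shapes are easy to tell apart: API Gateway proxy events carry an &lt;em&gt;httpMethod&lt;/em&gt; field, while EventBridge events carry &lt;em&gt;detail-type&lt;/em&gt; and &lt;em&gt;source&lt;/em&gt;. Here is a minimal sketch of that dispatch, not the article's actual handler; the DynamoDB and EventBridge calls are elided as comments.&lt;/p&gt;

```python
# Minimal sketch of a dual-input Lambda handler: dispatch on event shape.
import json

def handler(event, context=None):
    if "httpMethod" in event:           # API Gateway proxy event (POST /orders)
        order = json.loads(event["body"])
        # ...validate input, write to DynamoDB, publish OrderPlaced...
        return {"statusCode": 201, "body": json.dumps({"orderId": order["orderId"]})}
    if "detail-type" in event:          # EventBridge event: update order status
        # ...update the order based on StockConfirmation / PaymentConfirmation /
        # ShipmentCreated...
        return {"handled": event["detail-type"]}
    raise ValueError("unrecognized event shape")
```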

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8qszm7in4bj5yoeu85s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8qszm7in4bj5yoeu85s.png" alt=" " width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;Order microservice architecture — Lambda, DynamoDB, and API Gateway setup&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8ucrdn26c2ya47o8hfo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8ucrdn26c2ya47o8hfo.png" alt=" " width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;EventBridge rules linking Order service with other microservices&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsefpip8vxj0snu5d0iok.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsefpip8vxj0snu5d0iok.png" alt=" " width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;OrdersTable in DynamoDB showing stored order data&lt;/p&gt;&lt;/center&gt;




&lt;h3&gt;c) Inventory Stack&lt;/h3&gt;

&lt;p&gt;The Inventory Service ensures stock checks and rollback via asynchronous messaging. It includes &lt;em&gt;InventoryTable, SQS queues,&lt;/em&gt; and a &lt;em&gt;Lambda&lt;/em&gt; for stock management and compensation.&lt;/p&gt;

&lt;p&gt;Messages are first routed to &lt;em&gt;InventoryQueue (SQS)&lt;/em&gt; for buffering. The Lambda then processes messages to decrement or increment stock quantities. Conditional updates prevent negative inventory.  &lt;/p&gt;

&lt;p&gt;On stock failure, a &lt;em&gt;StockConfirmation&lt;/em&gt; event is published back to EventBridge; on payment failure, rollback happens via &lt;em&gt;InventoryCompensationQueue.&lt;/em&gt;&lt;/p&gt;
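&lt;p&gt;The "conditional updates prevent negative inventory" idea can be sketched without DynamoDB: the semantics are those of an UpdateExpression guarded by a ConditionExpression, where the whole write is rejected if the condition fails. This is an illustrative stand-in, not the stack's actual code.&lt;/p&gt;

```python
# Pure-Python stand-in for a DynamoDB conditional update on stock quantity.
class ConditionFailed(Exception):
    """Plays the role of DynamoDB's ConditionalCheckFailedException."""

def decrement_stock(table, sku, qty):
    # Reject the update outright when stock would go negative, mirroring a
    # ConditionExpression guard on the UpdateExpression
    if qty > table.get(sku, 0):
        raise ConditionFailed(f"insufficient stock for {sku}")
    table[sku] = table.get(sku, 0) - qty

def restock(table, sku, qty):
    # Compensation path: put the reserved quantity back after a payment failure
    table[sku] = table.get(sku, 0) + qty
```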

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhavog3eh4k1e4ld7ulyj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhavog3eh4k1e4ld7ulyj.png" alt="Inventory Table" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;InventoryTable storing product stock and metadata&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6mlzjefb4sz18zx350pj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6mlzjefb4sz18zx350pj.png" alt="Inventory Queue" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;Primary InventoryQueue used for processing OrderPlaced events&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm3pr7ppeylrajw77s5td.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm3pr7ppeylrajw77s5td.png" alt="Inventory Compensation Queue" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;InventoryCompensationQueue used for handling rollback after payment failures&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fml4pkbwsb4cpmrvnt2m3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fml4pkbwsb4cpmrvnt2m3.png" alt="Event Bridge Rule: StockConfirmation" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;EventBridge rule routing stock confirmation events to the Inventory Service&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphrb8g20rn9xd508n72i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fphrb8g20rn9xd508n72i.png" alt="StockConfirmation to SNS Rule" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;Rule forwarding StockConfirmation events to SNS for customer notifications&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgetxr7eprtgomh5liy4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgetxr7eprtgomh5liy4.png" alt="Inventory Lambda" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;InventoryLambda implementation handling stock decrement and compensation logic&lt;/p&gt;&lt;/center&gt;




&lt;h3&gt;d) Payment Stack&lt;/h3&gt;

&lt;p&gt;The Payment Service manages mock payment transactions and communicates success or failure. It uses &lt;em&gt;PaymentTable (DynamoDB)&lt;/em&gt; with PITR and KMS encryption.&lt;/p&gt;

&lt;p&gt;It starts only after &lt;code&gt;stockConfirmed: true&lt;/code&gt; and processes messages from &lt;em&gt;PaymentQueue (SQS)&lt;/em&gt;. The Lambda simulates payment outcomes and publishes &lt;em&gt;PaymentConfirmation&lt;/em&gt; events.  &lt;/p&gt;

&lt;p&gt;A &lt;em&gt;CloudWatch Alarm&lt;/em&gt; monitors the Lambda for errors and sends notifications via &lt;em&gt;OpsNotificationTopic.&lt;/em&gt;  &lt;/p&gt;

&lt;p&gt;Failures are routed to &lt;em&gt;PaymentFailureTopic (SNS)&lt;/em&gt;, which notifies customers with personalized messages.&lt;/p&gt;
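&lt;p&gt;The "starts only after stockConfirmed: true" behaviour corresponds to an EventBridge event pattern with content filtering on the event detail. Below is a sketch of that pattern plus a mock payment decision; the bus source name and the tiny matcher are assumptions for illustration, not the real rule.&lt;/p&gt;

```python
# Sketch: (1) an EventBridge-style event pattern that fires only when
# stockConfirmed is true, and (2) a mock payment outcome.
# "ecom.inventory" is an assumed source name.
import random

payment_trigger_pattern = {
    "source": ["ecom.inventory"],
    "detail-type": ["StockConfirmation"],
    "detail": {"stockConfirmed": [True]},   # content filtering on the detail
}

def matches(pattern, event):
    # Toy matcher covering only the equality filters used in this pattern
    return (event["source"] in pattern["source"]
            and event["detail-type"] in pattern["detail-type"]
            and event["detail"]["stockConfirmed"] in pattern["detail"]["stockConfirmed"])

def mock_payment(order_id, fail_rate=0.2):
    # Simulated outcome; the real Lambda would also persist to PaymentTable
    paid = random.random() >= fail_rate
    return {"detail-type": "PaymentConfirmation",
            "detail": {"orderId": order_id, "paid": paid}}
```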

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7u7oba60vhgcx35tgr2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7u7oba60vhgcx35tgr2f.png" alt="Payment Table" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;PaymentTable holding transaction status and metadata&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyj0ioj83fuhxqvpy51d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyj0ioj83fuhxqvpy51d.png" alt="PaymentStockConfirmation: True Rule" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;EventBridge rule triggering payment service upon stock confirmation&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq64rptf57uq4af7celn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq64rptf57uq4af7celn.png" alt="PaymentFailure Topic" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;SNS topic used for customer notifications on payment failure&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb83yxvw3wynsl14xcnui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb83yxvw3wynsl14xcnui.png" alt="PaymentFailure mail" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;Sample payment failure email notification sent via SNS&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8e38o0lrazu3jynmxkzz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8e38o0lrazu3jynmxkzz.png" alt="CloudWatch Alarm for lambda" width="800" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;CloudWatch alarm monitoring PaymentLambda errors and notifying via SNS&lt;/p&gt;&lt;/center&gt;




&lt;h3&gt;e) Shipping Stack&lt;/h3&gt;

&lt;p&gt;The Shipping Service finalizes order fulfillment. It uses &lt;em&gt;ShippingTable (DynamoDB)&lt;/em&gt;, &lt;em&gt;SQS queues,&lt;/em&gt; &lt;em&gt;DLQ&lt;/em&gt;, and &lt;em&gt;Lambdas&lt;/em&gt; for shipping and reprocessing.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;PaymentConfirmedToShippingRule&lt;/em&gt; routes successful payment events to &lt;em&gt;ShippingQueue&lt;/em&gt;.&lt;br&gt;&lt;br&gt;
ShippingLambda simulates an external API call — on success, it stores shipment data and publishes a &lt;em&gt;ShipmentCreated&lt;/em&gt; event. On failure, SQS retries and finally moves messages to &lt;em&gt;ShippingDLQ.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A &lt;em&gt;CloudWatch Alarm&lt;/em&gt; watches the DLQ and notifies the Ops topic if any failed shipments exist.&lt;br&gt;&lt;br&gt;
A second Lambda (&lt;em&gt;DLQProcessorLambda&lt;/em&gt;) polls the DLQ, reprocesses stuck shipments, and publishes a &lt;em&gt;ShipmentCreatedByDLQ&lt;/em&gt; event — ensuring reliability.&lt;/p&gt;
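&lt;p&gt;The retry-then-dead-letter behaviour comes from the queue's redrive policy: after &lt;em&gt;maxReceiveCount&lt;/em&gt; failed receives, SQS moves the message to the DLQ. An AWS-free sketch of those semantics (the count of 3 and the message shape are illustrative):&lt;/p&gt;

```python
# Simulation of SQS redrive: failed messages are retried, then dead-lettered.
from collections import deque

MAX_RECEIVE_COUNT = 3   # mirrors the RedrivePolicy maxReceiveCount (assumed value)

def consume(queue, dlq, process):
    """Drain `queue`; failed messages are retried, then moved to `dlq`."""
    while queue:
        msg = queue.popleft()
        msg["receiveCount"] = msg.get("receiveCount", 0) + 1
        try:
            process(msg)
        except Exception:
            if msg["receiveCount"] >= MAX_RECEIVE_COUNT:
                dlq.append(msg)      # give up: dead-letter it for the DLQ processor
            else:
                queue.append(msg)    # retry later

queue, dlq = deque([{"orderId": "o-1"}]), deque()

def flaky_shipping_api(msg):
    raise RuntimeError("external API down")   # always fails in this sketch

consume(queue, dlq, flaky_shipping_api)
```

&lt;p&gt;In the real stack, the DLQProcessorLambda then reads what lands in the DLQ and republishes a ShipmentCreatedByDLQ event once reprocessing succeeds.&lt;/p&gt;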

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0mp12pmzwujhdr3jmm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0mp12pmzwujhdr3jmm6.png" alt="Shipping Queue" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;ShippingQueue buffering messages for the shipping service&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf0xr9xjmtxa7j6d2ot4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf0xr9xjmtxa7j6d2ot4.png" alt="Shipping DLQ" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;ShippingDLQ for handling failed shipment messages&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpceheuc7xd0dl7ee4hn9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpceheuc7xd0dl7ee4hn9.png" alt="Shipping DLQ Alarm" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;CloudWatch alarm monitoring DLQ message count&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55ehabkzq60ehwd0k60d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55ehabkzq60ehwd0k60d.png" alt="Shipping DLQ Alarm Mail" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;Example alert email for a shipping DLQ trigger&lt;/p&gt;&lt;/center&gt;




&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This project stands as a significant milestone in my cloud journey. The most memorable takeaway from this deep dive wasn't the final architecture but the process of building it. Contrary to the popular notion that YAML-based Infrastructure as Code (IaC) is tedious or overly complex, I found writing CloudFormation templates in YAML surprisingly enjoyable and engaging.&lt;/p&gt;

&lt;p&gt;My previous infrastructure work with AWS CDK felt abstracted and high-level. Working directly with CloudFormation felt like building with fundamental, plain-English blocks. I had no choice but to rely on the AWS documentation, which taught me the individual properties of each resource. Explicitly defining resource relationships gave me a visual, low-level appreciation for how these services connect. This project reflects my commitment to understanding the how and why behind every cloud resource.&lt;/p&gt;

&lt;p&gt;Now that I have worked with two of the IaC tools offered by AWS, I'm looking forward to working with ECS and Docker.&lt;/p&gt;




&lt;h2&gt;Credits:&lt;/h2&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@aviosly?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Avi Waxman&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/birds-eye-view-of-asphalt-road-upaJhH2bd8Y?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>cloud</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Building Production-Grade Multi-Tier Web Infrastructure on AWS with CDK &amp; CLI Only</title>
      <dc:creator>asim-makes</dc:creator>
      <pubDate>Mon, 13 Oct 2025 09:22:39 +0000</pubDate>
      <link>https://forem.com/asimmakes/building-production-grade-multi-tier-web-infrastructure-on-aws-with-cdk-cli-only-4ej8</link>
      <guid>https://forem.com/asimmakes/building-production-grade-multi-tier-web-infrastructure-on-aws-with-cdk-cli-only-4ej8</guid>
      <description>&lt;p&gt;This post details my journey of designing and deploying a secure, highly available, and scalable **3-Tier Web Application Infrastructure **entirely on AWS using the Cloud Development Kit (CDK) and the AWS CLI. I used a code-only approach to ensure repeatability and automate all operational tasks.&lt;/p&gt;

&lt;h3&gt; Architecture Diagram &lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flfbws2s07wa5qqhfc3mc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flfbws2s07wa5qqhfc3mc.png" alt=" " width="761" height="861"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt; 3 Tier Architecture Diagram &lt;/p&gt;&lt;/center&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuihli76yablosndsculw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuihli76yablosndsculw.png" alt=" " width="727" height="637"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt; VPC Architecture Diagram &lt;/p&gt;&lt;/center&gt;

&lt;h3&gt;Prerequisites: Security and Control&lt;/h3&gt;

&lt;p&gt;Before writing a single line of infrastructure code, I focused on securing the deployment environment and controlling costs.&lt;/p&gt;

&lt;h4&gt;💰 Cost Control: Billing Alarms as Guardrails&lt;/h4&gt;

&lt;p&gt;For someone with just over a month of AWS experience, overspending is a common fear. I implemented two essential &lt;strong&gt;CloudWatch Billing Alarms&lt;/strong&gt; to monitor estimated AWS charges:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Safety Net Buffer ($5): A high-level alert created easily via the AWS Console for moderate costs.

- Tighter Threshold ($1): A critical, early-warning alert deployed using the AWS CLI.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This dual approach pairs an easy, console-created alarm with a scripted, automatable one, giving an immediate warning of any unexpected spending spike.&lt;/p&gt;
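&lt;p&gt;For reference, the scripted $1 alarm takes a parameter set of roughly this shape (shown as the structure the CLI's put-metric-alarm call maps to); the alarm name, topic ARN, and period here are placeholders, not my actual values.&lt;/p&gt;

```python
# Illustrative parameters for the CLI-deployed $1 early-warning billing alarm.
# Note: AWS publishes the EstimatedCharges billing metric only in us-east-1.
billing_alarm = {
    "AlarmName": "billing-early-warning-1usd",          # placeholder name
    "Namespace": "AWS/Billing",
    "MetricName": "EstimatedCharges",
    "Dimensions": [{"Name": "Currency", "Value": "USD"}],
    "Statistic": "Maximum",
    "Period": 6 * 60 * 60,       # the billing metric updates a few times a day
    "EvaluationPeriods": 1,
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanThreshold",
    # Placeholder SNS topic for the alert email:
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
}
```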

&lt;h4&gt;👤 Principle of Least Privilege: Dedicated IAM User&lt;/h4&gt;

&lt;p&gt;To ensure secure, auditable deployments, I created a dedicated IAM User with the minimum permissions required.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Custom IAM Policy: Based on a mental model of required resources (VPC, EC2, RDS, etc.), I defined a custom 3-tier-deployment-policy.

- CLI Access: Generated Access Keys using aws iam create-access-key and securely stored them in a local CLI profile instead of exposing them in scripts.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;While running the script, I encountered a frustrating error when attaching the policy to the user: &lt;strong&gt;An error occurred (ValidationError) when calling the AttachUserPolicy operation: Invalid ARN: Could not be parsed!&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The root cause was a shell scripting issue: when capturing the policy ARN using command substitution ($()), the tee command in my logging function was writing log messages to Standard Output (stdout), which were then accidentally prepended to the ARN.&lt;/p&gt;

&lt;p&gt;To fix this, I modified the logging function to redirect the tee command's output to Standard Error (stderr) (&amp;gt;&amp;amp;2). This ensured only the valid ARN was captured on stdout for the AWS CLI command.&lt;/p&gt;
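&lt;p&gt;A minimal reproduction of the fix (the log file name and ARN are placeholders): tee still appends to the log file, but its copy of each message goes to stderr, so command substitution captures only the real output:&lt;/p&gt;

```shell
#!/bin/bash
# Minimal sketch of the fix; log file and ARN are placeholders.
LOG_FILE=deploy.log

log() {
    # tee appends to the log file, but its stdout copy is redirected to
    # stderr so that $() substitution never captures log lines.
    echo "[INFO] $1" | tee -a "$LOG_FILE" > /dev/stderr
}

create_policy() {
    log "Creating IAM policy..."
    # Real script: aws iam create-policy ... --query Policy.Arn --output text
    echo "arn:aws:iam::123456789012:policy/3-tier-deployment-policy"
}

POLICY_ARN=$(create_policy 2>/dev/null)
echo "$POLICY_ARN"   # only the ARN, no log noise
```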



&lt;h3&gt;Infra-as-Code: Writing the Entire Stack in AWS CDK&lt;/h3&gt;

&lt;p&gt;The AWS Cloud Development Kit (CDK) allowed me to define the entire cloud infrastructure using Python.&lt;/p&gt;

&lt;p&gt;Before deploying, I executed the one-time setup command:&lt;br&gt;
&lt;code&gt;cdk bootstrap aws://&amp;lt;account-id&amp;gt;/&amp;lt;region&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This command creates the CDKToolkit CloudFormation Stack, which provisions the resources (like an S3 bucket and IAM roles) that the CDK CLI needs to deploy assets and templates. This process is known as &lt;strong&gt;bootstrapping&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Modular Design: Constructs, Stacks, and App&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I structured the project for clarity and reusability:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Component&lt;/th&gt;
      &lt;th&gt;Role&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;b&gt;Constructs&lt;/b&gt;&lt;/td&gt;
      &lt;td&gt;The &lt;b&gt;basic building blocks&lt;/b&gt;; defines how an individual resource or group of resources is created. Example: &lt;code&gt;VpcConstruct&lt;/code&gt; (defines subnets, NATs, etc.)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;b&gt;Stacks&lt;/b&gt;&lt;/td&gt;
      &lt;td&gt;
&lt;b&gt;High-level groupings&lt;/b&gt;; instantiates Constructs with environment-specific configurations and parameters. Example: a stack instantiating &lt;code&gt;VpcConstruct&lt;/code&gt; with specific CIDRs&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;b&gt;App&lt;/b&gt;&lt;/td&gt;
      &lt;td&gt;The &lt;b&gt;entry point&lt;/b&gt;; connects all stacks, defines their deployment order, and specifies the target AWS account/region. Example: &lt;code&gt;cdk deploy&lt;/code&gt; executes the App&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;h3&gt;The 3-Tier Architecture Breakdown&lt;/h3&gt;

&lt;p&gt;The infrastructure is divided into three distinct tiers, each with a defined role and strict security groups.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Network Tier: The Private Cloud (VPC)&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is the foundational tier: a single VPC spanning two Availability Zones (AZs) for high availability and fault tolerance.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Subnet Type&lt;/th&gt;
      &lt;th&gt;Role&lt;/th&gt;
      &lt;th&gt;Purpose/Hosts&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;b&gt;Public Subnets (3)&lt;/b&gt;&lt;/td&gt;
      &lt;td&gt;For &lt;b&gt;internet-facing resources&lt;/b&gt; that require an Internet Gateway (IGW).&lt;/td&gt;
      &lt;td&gt;Hosts the &lt;b&gt;Application Load Balancer (ALB)&lt;/b&gt; and &lt;b&gt;NAT Gateway(s)&lt;/b&gt;.&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;b&gt;Private Subnets (3)&lt;/b&gt;&lt;/td&gt;
      &lt;td&gt;For &lt;b&gt;backend application resources&lt;/b&gt; that are isolated from the internet.&lt;/td&gt;
      &lt;td&gt;Hosts the &lt;b&gt;EC2 Auto Scaling Group (ASG)&lt;/b&gt; and &lt;b&gt;RDS&lt;/b&gt; database.&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;One thing that I loved about CDK is the way it abstracts away manual networking setup: it automatically defines the VPC subnets with the proper routes, so I don't have to update route tables by hand unless I have a specific requirement to do so. And by specifying &lt;code&gt;nat_gateways=1&lt;/code&gt;, it automatically &lt;strong&gt;provisions a NAT Gateway in a public subnet&lt;/strong&gt; and updates the &lt;strong&gt;private subnet route tables to direct all outbound traffic through it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs34juf7a5fhxuo9i8kz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs34juf7a5fhxuo9i8kz.png" alt=" " width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt; Network Stack CloudFormation Dashboard &lt;/p&gt;&lt;/center&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Web Tier: The Public Entry Point&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Application Load Balancer (ALB) serves as the internet-facing entry point and is configured as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Security Group: Only permits inbound traffic on HTTP (port 80).

- Target Group: Directs traffic to the application instances on port 8080.

- Health Check: Configured to check the root path (/) to ensure only healthy instances receive traffic.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0diezswsw1qu0zc7i0wd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0diezswsw1qu0zc7i0wd.png" alt=" " width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt; Web Stack CloudFormation Dashboard &lt;/p&gt;&lt;/center&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;App Tier: The Application Logic&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The application runs on Amazon Linux 2 EC2 instances managed by an Auto Scaling Group (ASG), deployed entirely within the private subnets (with egress).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Security: The ASG instances are private with egress. They can only be reached by the ALB's security group on port 8080.

- Outbound Egress: Traffic for updates or external API calls is routed securely through the NAT Gateway.

- Identity &amp;amp; Credentials: An attached IAM role grants permissions for SSM, CloudWatch Logs/Metrics. It has read access to the database credentials stored in Secrets Manager, ensuring no plaintext credentials are ever stored on the instance.

- Observability Config: The CloudWatch Agent configuration is stored in an SSM Parameter Store. The instance user data script fetches the config at startup, making monitoring adjustments easy without ASG redeployment.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9olq099zo3mzzckgxb72.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9olq099zo3mzzckgxb72.png" alt=" " width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt; App Stack CloudFormation Dashboard &lt;/p&gt;&lt;/center&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Database Tier: The Isolated Backend&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A PostgreSQL instance on Amazon RDS is deployed in the PRIVATE ISOLATED subnets.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Isolation: The database has no internet access, inbound or outbound.

- Strict Access Control: Access is only permitted from the application’s ASG Security Group to the RDS Security Group on TCP port 5432.

- Secret Management: RDS credentials are automatically provisioned and stored in AWS Secrets Manager, and only the App Tier's IAM role is authorized to retrieve them.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Operational Excellence: CLI and Observability&lt;/h3&gt;

&lt;p&gt;For observability, I built a set of Bash scripts utilizing the AWS CLI.&lt;/p&gt;

&lt;p&gt;I created scripts for:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Instance/Target Health Checks: Quick validation of the environment state.

- Resource Information: Fetching details on RDS and EC2 instances.

- SSH via SSM: Secure, tunnel-less access to private instances.

- Application Log Retrieval: Centralized access to application logs.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
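&lt;p&gt;For example, the target health check boils down to a single describe call. A sketch of the helper (the function name is mine, and the target-group ARN is passed in):&lt;/p&gt;

```shell
#!/bin/bash
# Sketch of a target-health helper; the target group ARN is an argument.
check_target_health() {
    local tg_arn="$1"
    aws elbv2 describe-target-health \
        --target-group-arn "$tg_arn" \
        --query 'TargetHealthDescriptions[].[Target.Id,TargetHealth.State]' \
        --output table
}

# Usage: check_target_health arn:aws:elasticloadbalancing:...:targetgroup/...
```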

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1t9eaeo33xdx00awvzzn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1t9eaeo33xdx00awvzzn.png" alt=" " width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt; Script for getting infra info &lt;/p&gt;&lt;/center&gt;

&lt;h4&gt;Leveraging EC2 Metadata&lt;/h4&gt;

&lt;p&gt;One thing that I learned from this project: to check the local health and configuration of an instance from within the instance itself, the application can use EC2 instance metadata. This data is always available locally via the link-local, non-routable IP address 169.254.169.254.&lt;/p&gt;
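&lt;p&gt;A sketch of querying the metadata service with the token-based IMDSv2 flow (this only produces output from inside an EC2 instance, since the address is link-local):&lt;/p&gt;

```shell
#!/bin/bash
# Sketch of an IMDSv2 metadata query; only works on an EC2 instance,
# because 169.254.169.254 is a link-local address.
get_metadata() {
    local path="$1"
    local token
    # IMDSv2: fetch a session token first, then use it for the actual query.
    token=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $token" \
        "http://169.254.169.254/latest/meta-data/${path}"
}

# Usage (on an instance): get_metadata instance-id
```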



&lt;h3&gt;Complete Observability with CloudWatch&lt;/h3&gt;

&lt;p&gt;The deployed infrastructure is fully observable using the CloudWatch Agent and Alarms.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Agent Configuration: The agent collects system metrics (like CPU) and combined application logs (including NGINX logs and a custom log for RDS connectivity checks).

- CloudWatch Alarms:

    1. Performance: Notifies if EC2 CPU utilization exceeds 80%.

    2. Capacity: Warns when RDS storage capacity is nearing its limit.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiq1mwtc46gtgtb2qbj71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiq1mwtc46gtgtb2qbj71.png" alt=" " width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;p&gt;CloudWatch Alarm Dashboard&lt;/p&gt;&lt;/center&gt;

&lt;p&gt;This setup ensures that the entire environment, from deployment to scaling and health checks, is fully automated, secure, and easily monitored from the terminal.&lt;/p&gt;

&lt;h3&gt;Conclusion&lt;/h3&gt;

&lt;p&gt;This project demonstrated the deployment of a multi-tier web app on AWS using a purely code-based approach. It helped me clear up my doubts about Availability Zones, ASGs, and target groups, and it was a great lesson in crafting security groups to build completely isolated subnets. I had a lot of fun doing this project, especially writing the modular code in Python. Up next, I will explore Terraform, which uses a declarative configuration language (HCL) and differs from the CDK's general-purpose programming model. Onward to the next challenge, the cloud awaits! ☁️&lt;/p&gt;

&lt;p&gt;Feel free to check my github repo on this project:&lt;br&gt;
&lt;a href="https://github.com/asim-makes/3-tier-infra" rel="noopener noreferrer"&gt;https://github.com/asim-makes/3-tier-infra&lt;/a&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>devops</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Serverless Dashboard Architecture Using AWS Lambda, API Gateway, and GitHub Actions</title>
      <dc:creator>asim-makes</dc:creator>
      <pubDate>Wed, 24 Sep 2025 02:16:55 +0000</pubDate>
      <link>https://forem.com/asimmakes/serverless-dashboard-architecture-using-aws-lambda-api-gateway-and-github-actions-3enn</link>
      <guid>https://forem.com/asimmakes/serverless-dashboard-architecture-using-aws-lambda-api-gateway-and-github-actions-3enn</guid>
      <description>&lt;p&gt;In my ongoing cloud journey, I decided to build a multi-functional serverless dashboard using AWS. The dashboard includes multiple applications like ExpenseApp, WeatherApp, NewsApp, and GitHubApp. Each of these apps runs serverlessly with AWS Lambda, uses API Gateway for routing, and stores data in DynamoDB. For easier deployment, I’ve implemented CI/CD pipelines using GitHub Actions, automating the entire process from code commit to deployment.&lt;/p&gt;

&lt;h3&gt;Architecture Diagram&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxj3gwulul1ojj18t4uek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxj3gwulul1ojj18t4uek.png" alt="Simple Architecture Diagram of Dashboard Project" width="790" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Tech Stack Breakdown&lt;/h3&gt;



&lt;h4&gt;Frontend:&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  The dashboard's frontend, with its various apps, is built using &lt;strong&gt;TypeScript&lt;/strong&gt;. Since I wanted to focus on the infrastructure, I used AI tools to handle this part for me.&lt;/li&gt;
&lt;/ul&gt;



&lt;h4&gt;Backend:&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;AWS Lambda&lt;/strong&gt;: The logic for each app (Expense, Weather, News, GitHub) runs on &lt;strong&gt;AWS Lambda&lt;/strong&gt; functions, written in Python. I also used AI to generate the code for these functions.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;API Gateway&lt;/strong&gt;: This service acts as the router, directing requests to the correct Lambda function based on the API path, such as &lt;code&gt;/ExpenseApp&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;DynamoDB&lt;/strong&gt;: I used this NoSQL database specifically for the ExpenseApp to store user data. To keep the project simple, I decided not to store API keys for the other apps here.&lt;/li&gt;
&lt;/ul&gt;



&lt;h4&gt;CI/CD:&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;GitHub Actions&lt;/strong&gt;: I automated the deployment of both the frontend and backend using &lt;strong&gt;GitHub Actions&lt;/strong&gt;, ensuring that any changes are automatically pushed to AWS without any manual steps.&lt;/li&gt;
&lt;/ul&gt;



&lt;h3&gt;IAM Roles &amp;amp; Permissions&lt;/h3&gt;



&lt;p&gt;I created a dedicated IAM user to manage the various resources required by the dashboard. The user is limited to the resources necessary for each app. This user’s permissions include creating and updating Lambda functions, managing API Gateway configurations, accessing DynamoDB, and interacting with S3 for frontend assets. Using a dedicated IAM user is a good security practice, as it follows the &lt;strong&gt;principle of least privilege&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Policy Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateRole",
                "iam:GetRole",
                "iam:AttachRolePolicy",
                "iam:ListAttachedRolePolicies",
                "iam:PassRole"
            ],
            "Resource": [
                "arn:aws:iam::123456789012:role/LambdaDynamoDBRole",
                "arn:aws:iam::123456789012:role/LambdaDynamoDBCloudWatchRole"
            ]
        },
      ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h3&gt;Database Design For ExpenseApp&lt;/h3&gt;



&lt;p&gt;
  For the ExpenseApp, I used DynamoDB as the database. I designed the table with
  the following schema:
&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;strong&gt;Table Name&lt;/strong&gt;: &lt;code&gt;expenses-table&lt;/code&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;strong&gt;Partition Key&lt;/strong&gt;: &lt;code&gt;expenseId&lt;/code&gt; (Number)
  &lt;/li&gt;
  &lt;li&gt;
    &lt;strong&gt;Sort Key&lt;/strong&gt;: &lt;code&gt;timestamp&lt;/code&gt; (Number)
  &lt;/li&gt;
  &lt;li&gt;
    &lt;strong&gt;Attributes&lt;/strong&gt;: &lt;code&gt;description&lt;/code&gt;, &lt;code&gt;amount&lt;/code&gt;,
    &lt;code&gt;category&lt;/code&gt;, &lt;code&gt;date&lt;/code&gt;, and &lt;code&gt;timestamp&lt;/code&gt;.
  &lt;/li&gt;
&lt;/ul&gt;
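&lt;p&gt;As a sketch, the same schema can be expressed as a boto3 &lt;code&gt;create_table&lt;/code&gt; call; the call itself is commented out below since it requires AWS credentials:&lt;/p&gt;

```python
# Table definition matching the schema above; the boto3 call is commented
# out because it needs AWS credentials to run.
def expenses_table_definition():
    return {
        "TableName": "expenses-table",
        "KeySchema": [
            {"AttributeName": "expenseId", "KeyType": "HASH"},   # partition key
            {"AttributeName": "timestamp", "KeyType": "RANGE"},  # sort key
        ],
        "AttributeDefinitions": [
            {"AttributeName": "expenseId", "AttributeType": "N"},
            {"AttributeName": "timestamp", "AttributeType": "N"},
        ],
        "BillingMode": "PAY_PER_REQUEST",  # On-Demand capacity mode
    }

# import boto3
# boto3.client("dynamodb").create_table(**expenses_table_definition())
```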

&lt;p&gt;
  I used DynamoDB On-Demand capacity mode because this model is ideal for
  staying within the &lt;strong&gt;AWS free tier&lt;/strong&gt;, as I only pay for what I
  use, and my simple application's low traffic will likely remain far below the
  free limits.
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frplyrkzrpe5qwoctgge0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frplyrkzrpe5qwoctgge0.png" alt="Dynamo DB" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;Setting Up the Lambda Functions &amp;amp; API Gateway Integration&lt;/h3&gt;



&lt;p&gt;
  Each app (Expense, Weather, News, GitHub) has its own Lambda function. Here’s
  an overview of the integration and automation:
&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;strong&gt;Lambda Function Deployment:&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;
        Each app has its own set of Lambda functions deployed automatically using
        a &lt;strong&gt;deployment&lt;/strong&gt; script.
      &lt;/li&gt;
      &lt;li&gt;
        The functions are connected to the API Gateway endpoints:
        &lt;code&gt;/ExpenseApp&lt;/code&gt;, &lt;code&gt;/WeatherApp&lt;/code&gt;,
        &lt;code&gt;/NewsApp&lt;/code&gt;, and &lt;code&gt;/GitHubApp&lt;/code&gt;.
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;


&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sb5mpudm13pee654g1y6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sb5mpudm13pee654g1y6.png" alt="My Lambda Functions"&gt;&lt;/a&gt;&lt;/p&gt;

  &lt;li&gt;
    &lt;strong&gt;API Gateway Routes:&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;
        I used API Gateway as the main router for my dashboard. I set up a
        dedicated path for each part of my app. For instance, I created a
        specific route called &lt;code&gt;/ExpenseApp&lt;/code&gt; for the ExpenseApp. On
        that route, I added a &lt;code&gt;GET&lt;/code&gt; method. Then, I linked it
        directly to my Lambda function. I used something called
        &lt;strong&gt;Lambda Proxy integration&lt;/strong&gt;, which is a simple way to make
        sure the API Gateway sends everything from the request straight to my
        code so it can handle it.
      &lt;/li&gt;
      &lt;li&gt;
        &lt;strong&gt;API Gateway Configuration Error&lt;/strong&gt;: A challenge I
        encountered was an integration error between API Gateway and Lambda
        functions, which was caused by a mismatch in expected resource paths.
        The Lambda functions expected paths like &lt;code&gt;/ExpenseApp&lt;/code&gt; but
        were defined under the root path &lt;code&gt;/&lt;/code&gt;. This was a frustrating
issue because I just could not get API Gateway to align with my Lambda
        no matter how permissive my permissions were. Once I adjusted the
        paths to match, everything worked seamlessly.
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatpmfnzmaxyjai31w8ul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatpmfnzmaxyjai31w8ul.png" alt="API Gateway Console" width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Handling CORS (Cross-Origin Resource Sharing)&lt;/h3&gt;

&lt;p&gt;Another frustrating issue that I ran into was getting &lt;strong&gt;CORS&lt;/strong&gt; (Cross-Origin Resource Sharing) to work. My dashboard's frontend had to talk to different backends, and for some reason the requests kept getting blocked. I thought I'd solved it by enabling CORS on the main API Gateway resource (the &lt;strong&gt;root resource&lt;/strong&gt;), but that wasn't enough. It turns out I had to go into every child resource (like &lt;code&gt;/ExpenseApp&lt;/code&gt; and &lt;code&gt;/WeatherApp&lt;/code&gt;) and enable it there too. And that was just the start: I also had to make sure my Lambda functions returned the correct &lt;code&gt;Access-Control&lt;/code&gt; headers in their responses. The final challenge was doing this with the AWS CLI. I couldn't pass the headers on a single line because the shell kept mangling the quotes, so I had to put the headers in a separate JSON file and reference that file in my CLI command. It was a tedious trial-and-error process.&lt;/p&gt;
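&lt;p&gt;The Lambda side of the CORS fix looks roughly like this (a sketch, with a wildcard origin for simplicity; a real deployment should pin the S3 website origin instead):&lt;/p&gt;

```python
import json

# Sketch of the CORS headers each Lambda behind a proxy integration must
# return. "*" keeps the example simple; a real deployment should pin the
# S3 website origin instead.
CORS_HEADERS = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "Content-Type",
    "Access-Control-Allow-Methods": "GET,OPTIONS",
}

def lambda_handler(event, context):
    # Preflight: the browser sends OPTIONS before the real request.
    if event.get("httpMethod") == "OPTIONS":
        return {"statusCode": 200, "headers": CORS_HEADERS, "body": ""}
    return {
        "statusCode": 200,
        "headers": CORS_HEADERS,
        "body": json.dumps({"message": "ok"}),
    }
```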

&lt;h3&gt;CI/CD Pipeline with GitHub Actions&lt;/h3&gt;



&lt;h4&gt;Backend CI/CD:&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;strong&gt;GitHub Actions&lt;/strong&gt; automates the deployment of Lambda functions
    whenever code is pushed to the repository.
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3wm1rqavqs0gjb3f3mz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3wm1rqavqs0gjb3f3mz.png" alt="Backend Deployment" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h4&gt;Frontend CI/CD:&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;
    The &lt;strong&gt;frontend&lt;/strong&gt; is deployed to &lt;strong&gt;S3&lt;/strong&gt; buckets
    automatically whenever changes are committed. The GitHub Actions pipeline
    builds the frontend assets and uploads them to the appropriate S3 bucket for
    public access.
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcx7sc0lnaq4a2orzllo8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcx7sc0lnaq4a2orzllo8.png" alt="Frontend Deployment" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;AWS CLI for Automation&lt;/h3&gt;



&lt;p&gt;After the completion of my dashboard, I wanted to challenge myself: how do I get all of this stuff onto AWS without touching the AWS console? Manually creating all those Lambda functions, IAM roles, and API Gateway endpoints would be a tedious and hectic process. That's when I used the &lt;strong&gt;AWS Command Line Interface (CLI)&lt;/strong&gt; to automate the entire process.&lt;/p&gt;

&lt;p&gt;I wrote a single &lt;code&gt;deploy_infra.sh&lt;/code&gt; script to handle everything. The script is smart enough to check if a resource already exists before trying to create it. For each of my apps, the script first creates the necessary &lt;strong&gt;IAM roles&lt;/strong&gt; and gives each app the basic permissions it needs. For the ExpenseApp, it adds extra access to DynamoDB.&lt;/p&gt;

&lt;p&gt;Once the roles are set up, the script moves to function deployment. It takes my Python code, bundles it with any required libraries, and then uploads it to create a new Lambda function. It also pulls any private API keys from a separate file so I don't accidentally expose them.&lt;/p&gt;

&lt;p&gt;The final, and perhaps most complex, part was automating &lt;strong&gt;API Gateway&lt;/strong&gt;. The script not only creates a new API for each app but also builds out the routes, like &lt;code&gt;/ExpenseApp&lt;/code&gt; and &lt;code&gt;/WeatherApp&lt;/code&gt;. It connects these routes to their corresponding Lambda functions using the &lt;strong&gt;Lambda Proxy integration&lt;/strong&gt;. And because &lt;strong&gt;CORS&lt;/strong&gt; was such a pain to configure, I built a special function into the script to handle all of those headers and pre-flight &lt;code&gt;OPTIONS&lt;/code&gt; requests automatically.&lt;/p&gt;

&lt;p&gt;Finally, the script deploys the entire API and gives me the public endpoint for each app, so I can immediately start testing. This entire process, which would have taken me more than 15 minutes of manual clicking and configuring, is now done with a single command. It's a huge win for me and I learned so much when playing with it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yygjw29ax9kq2fqr95m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yygjw29ax9kq2fqr95m.png" alt="Final AWS CLI Output" width="800" height="118"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;Website Image&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf750jqjcz99fpdzgwxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf750jqjcz99fpdzgwxh.png" alt="Image of my dashboard website" width="800" height="782"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Conclusion&lt;/h3&gt;



&lt;p&gt;This multi-app dashboard project gave me a good learning experience in designing and implementing serverless architectures with AWS. I was able to focus on &lt;strong&gt;cloud engineering&lt;/strong&gt; aspects (like IAM roles, Lambda functions, and API Gateway), while using &lt;strong&gt;GitHub Actions&lt;/strong&gt; to automate deployment processes. This project was a learning exercise and a portfolio piece to showcase my cloud engineering skills. Moving forward, I’m excited to explore Infrastructure-as-Code tools like AWS CDK or Terraform to further streamline the setup and deployment of serverless applications.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>cloudcomputing</category>
      <category>aws</category>
      <category>cloudstorage</category>
    </item>
    <item>
      <title>My Cloud Resume: Built on Azure</title>
      <dc:creator>asim-makes</dc:creator>
      <pubDate>Sat, 06 Sep 2025 07:48:24 +0000</pubDate>
      <link>https://forem.com/asimmakes/my-cloud-resume-built-on-azure-47o0</link>
      <guid>https://forem.com/asimmakes/my-cloud-resume-built-on-azure-47o0</guid>
      <description>&lt;p&gt;So a few weeks ago, I decided to take on the Cloud Resume Challenge that I saw online. I skimmed over the challenge and thought it would be interesting for beginner like me to try out. And that is way better than just watching or reading tutorials. &lt;/p&gt;

&lt;p&gt;Before going on how I completed the challenge, I will quickly summarize what the challenge is all about:&lt;br&gt;
1) Build a resume using HTML, and CSS.&lt;br&gt;
2) Host the static resume in cloud storage.&lt;br&gt;
3) Make a visitor counter to track how many times my resume is visited with JavaScript, Python, and Database.&lt;br&gt;
4) Make a template to automatically deploy the resources (IaC).&lt;br&gt;
5) CI/CD pipelines for both frontend and backend with GitHub Actions.&lt;br&gt;
6) Set up a custom DNS name and use HTTPS to visit the website.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;Simple Architecture Diagram of the project&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6wgf179y4nbm53bpnbv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6wgf179y4nbm53bpnbv.png" alt="Simple Architecture Diagram of the project" width="800" height="639"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;Part 1: Creating a static resume&lt;/h2&gt;

&lt;p&gt;The very first part was challenging for me as I am not someone who is comfortable with frontend. So for that, I learned basic HTML and CSS and then used a tutorial YouTube video with AI tools to build my resume. The resume was good enough for me.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;Part 2: Making the resume a static website&lt;/h2&gt;

&lt;p&gt;After finishing my resume, I created a storage account in Azure. Azure Storage serves files directly, much like a CDN, which makes it a simple way to host a static website. First, I needed to enable static website hosting on the storage account. When I do that, a special container called $web is created, and by convention the service serves my site only from $web.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;Part 3: Frontend Visitor Counter&lt;/h2&gt;

&lt;p&gt;Before writing any JS, I first went to the static resume I had created, added HTML for displaying the visitor counter, and then linked the class to the JS. The job of the JS is simple: it calls an API and dynamically displays the number it gets back. Since I am not into frontend, I leveraged AI tools to write the code for me and made sure I understood the snippets they gave me.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;Part 4: CosmosDB Database&lt;/h2&gt;

&lt;p&gt;For the database, the challenge suggests using the Azure Cosmos DB Table API in serverless mode. Let me briefly explain what that is. The Table API is a schema-less NoSQL option and the simplest of the many API flavors Cosmos DB offers (e.g., SQL, MongoDB). To add new data, I just throw in entities (rows) with whatever properties (columns) I want. Each entity has:&lt;br&gt;
a) PartitionKey: Groups related data&lt;br&gt;
b) RowKey: Uniquely identifies the entity within its partition&lt;br&gt;
Serverless means I only pay for what I consume instead of reserving capacity. With the normal model, &lt;em&gt;provisioned throughput&lt;/em&gt;, I would be billed for that capacity whether I use it or not. I created a new Cosmos DB resource and made a table for the visitor counter.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
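The PartitionKey/RowKey model above can be sketched in plain Python. This is a hypothetical illustration: a dict stands in for the real table, and the function names are made up for this sketch (real code would use the azure-data-tables package instead).

```python
# In-memory stand-in for the Cosmos DB table; entities are keyed by
# (PartitionKey, RowKey), mirroring the Table API's data model.
table = {}

def upsert_entity(entity):
    """Insert or replace an entity, keyed by (PartitionKey, RowKey)."""
    table[(entity["PartitionKey"], entity["RowKey"])] = entity

def get_entity(partition_key, row_key):
    """Look up one entity by its two keys."""
    return table[(partition_key, row_key)]

# One entity (row) for the visitor counter: PartitionKey groups data,
# RowKey uniquely identifies the entity within its partition.
upsert_entity({"PartitionKey": "counter", "RowKey": "visits", "count": 0})

print(get_entity("counter", "visits")["count"])  # 0
```

Because the entity is schema-less, adding a new property later (say, a `lastVisit` timestamp) would just mean including it in the next upsert.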

&lt;h2&gt;Part 5: Backend Visitor Counter&lt;/h2&gt;

&lt;p&gt;For security, the browser should not talk to the database directly, because any key or connection string shipped to the browser is public and users can read it. Just as the challenge suggests, I built a small API using Azure Functions with an HTTP trigger. When the JS makes a request, the function reads the current visitor count from Cosmos DB, increments it, writes it back, and returns the new count to the browser. &lt;/p&gt;

&lt;p&gt;The part that really had me banging my head was deploying Azure Functions with Python. I was on Linux and started using Python 3.12 as the runtime stack. I wrote the code, debugged it, checked my host file, even hit the forums and leaned on AI tools but no matter what, I just couldn’t get the function to deploy to my Function App.&lt;/p&gt;

&lt;p&gt;After some digging, I learned that Python 3.12 support for Azure Functions is still new and has some dependency conflicts. The build would show as successful in VS Code, but the function itself just wouldn’t show up after deployment. Switching back to Python 3.11 solved the issue immediately.&lt;/p&gt;

&lt;p&gt;I finally had an HTTP trigger up and running that connected to my database, got the counter value, incremented it, and stored it back. That little victory felt huge.&lt;/p&gt;

&lt;p&gt;Finally, all that was left for this step was to write some test cases. I just was not in the mood to write them myself, so I used AI tools to generate them.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;Part 6: Infrastructure as Code (IaC)&lt;/h2&gt;

&lt;p&gt;The normal way to create resources in Azure is to go to the Azure portal, click on the resource I want to create, fill out the information, and hit "Review + create". This is a manual process: if I need to recreate the same setup, I have to click through everything again, and if I forget a setting, good luck tracking it down. &lt;/p&gt;

&lt;p&gt;IaC solves this by describing the infrastructure in code (JSON, YAML, etc.). I used Bicep to deploy my resources, which is just a way to declare &lt;strong&gt;&lt;em&gt;I need a Storage Account in this resource group&lt;/em&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;em&gt;I need a Function App with this plan and runtime&lt;/em&gt;&lt;/strong&gt;, etc. A Bicep file generally has two main sections:&lt;br&gt;
a) Parameters: the variables used throughout the file&lt;br&gt;
b) Resources: the infrastructure to create&lt;br&gt;
The deployment itself is not part of the file; it happens with a simple command:&lt;br&gt;
&lt;code&gt;az deployment group create --resource-group RESOURCE_GROUP_NAME --template-file BICEP_FILE&lt;/code&gt;&lt;/p&gt;
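As a rough illustration of those two sections, a parameter block plus a storage-account resource in Bicep looks something like this (the names, default values, and API version are placeholders, not from my actual template):

```bicep
// Parameters: variables used throughout the file
param location string = resourceGroup().location
param storageName string = 'mystorageacct'

// Resources: the infrastructure to create
resource stg 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: storageName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```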

&lt;p&gt;This part was not frustrating, as there were plenty of resources out there that walk through exactly this, so I got through it easily.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;Part 7: CI/CD for Backend&lt;/h2&gt;

&lt;p&gt;First, let us understand some theory. When I make a change to my code and push it, that's it. If I want the change to be reflected in my Azure resource, I have to apply it manually by going to that resource and uploading the new version again. However, I can deploy changes automatically, and this is where &lt;strong&gt;CI/CD&lt;/strong&gt; comes in.&lt;/p&gt;

&lt;p&gt;CI stands for &lt;strong&gt;Continuous Integration&lt;/strong&gt;. Every time I make a change and push it to GitHub, my code is automatically tested.&lt;/p&gt;

&lt;p&gt;CD stands for &lt;strong&gt;Continuous Deployment&lt;/strong&gt;. If the test passes, my code and infra changes are automatically deployed to Azure.&lt;/p&gt;

&lt;p&gt;For this, I need to use GitHub Actions. It is a way to write automation scripts that run whenever certain events happen in my repository like a change is pushed to a repo.&lt;/p&gt;

&lt;p&gt;I first attempted to deploy both the infra and the trigger code using a single Bicep file, but that approach was wrong. Bicep is for infra, and its sole purpose is to provision and configure resources. Embedding the application code in the Bicep template means every run of the template would redeploy the code, which makes the code harder to debug and manage.&lt;/p&gt;

&lt;p&gt;So to deploy the code, I used GitHub Actions. This pipeline runs every time I push code to the repository. To build it, I created a single workflow file, .github/workflows/main.yml, and added two jobs:&lt;br&gt;
a) build-and-test: This job is the CI stage. The tests I wrote earlier run here, and the job must pass before the pipeline can move on. If the tests fail, the whole deployment process stops.&lt;br&gt;
b) deploy-to-azure: This job is the CD stage. A dependency is declared so that it only runs after the first job succeeds. A secret is created in the project repo so that sensitive information is not hardcoded in the workflow file:&lt;br&gt;
&lt;code&gt;az ad sp create-for-rbac --name "myGitHubActionsServicePrincipal" --role contributor --scopes /subscriptions/sub_id --sdk-auth&lt;/code&gt;&lt;br&gt;
This secret is used to log in to Azure. After logging in, the Bicep template is deployed; then the function (the HTTP trigger) is zipped and deployed to the Function App. &lt;/p&gt;
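For illustration, the two-job shape of such a workflow could look roughly like this (the step details, resource-group name, and secret name are placeholders, not my exact file):

```yaml
name: backend-ci-cd
on:
  push:
    branches: [main]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: pip install -r requirements.txt
      - run: pytest          # CI gate: the pipeline stops here if tests fail

  deploy-to-azure:
    needs: build-and-test    # CD only runs after the CI job succeeds
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}  # JSON from az ad sp create-for-rbac
      - run: az deployment group create --resource-group MY_RG --template-file main.bicep
```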

&lt;p&gt;Like Part 5, this was the most frustrating part of the project, and it took me a day to solve. No matter how many times I debugged the code, the Azure Function just wouldn't deploy. I searched online forums, used AI tools to check my syntax, and tried different deployment methods, but the function just wouldn't appear. As a last resort, I installed the requirements like this:&lt;br&gt;
&lt;code&gt;pip install -r requirements.txt --target=".python_packages/lib/site-packages"&lt;/code&gt;&lt;br&gt;
and the function finally deployed. Even though I explicitly specified Python 3.11 in my workflow, the GitHub Actions runner ignored it and defaulted to Python 3.12. The command above sidesteps that by installing all dependencies into the exact location Azure Functions expects, so the deployed package is self-contained. After this change, deployment finally worked.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;Part 8: CI/CD for Frontend&lt;/h2&gt;

&lt;p&gt;Once the backend was running, the next step was to automate pushing frontend updates to the storage account. I created a new GitHub repository just for the static website files and wrote a small workflow that runs automatically whenever I push new code. The workflow takes the new files and uploads them to the Azure Storage $web container, replacing the old version. Finally, it purges the Azure CDN cache so that the domain serves the latest changes instead of the old cached version.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
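The deploy steps of such a frontend workflow might look roughly like this (the account, resource group, CDN profile, and endpoint names are placeholders):

```yaml
- uses: azure/login@v2
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: Upload site files to the $web container
  run: az storage blob upload-batch --account-name MYSTORAGE -s . -d '$web' --overwrite
- name: Purge the CDN cache so visitors get the new version
  run: az cdn endpoint purge -g MY_RG --profile-name MY_PROFILE -n MY_ENDPOINT --content-paths '/*'
```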

&lt;h2&gt;Part 9: HTTPS and DNS&lt;/h2&gt;

&lt;p&gt;This was the final step for me. The main goal here is to make my resume website publicly available at an address of my own. The very first thing I needed was a custom domain, which means registering one with a registrar for a small fee. Upon searching, I chose the cheapest option, a .cloud domain from Hostinger. I then created an Azure Front Door and added the domain to it. After adding the domain, Azure provides a TXT and a CNAME record, which need to be copied into my domain's DNS settings so that the custom domain points to Azure Front Door. I could make the domain point directly at the storage account, but then I would need to buy a TLS/SSL certificate for HTTPS from a third-party CA; Azure Front Door with an AFD-managed certificate handles that automatically.&lt;br&gt;
Finally, I created a routing rule so that when someone visits my custom domain, they are forwarded to the storage account. &lt;br&gt;
&lt;br&gt;&lt;br&gt;
So this is the final result of the challenge.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymqmcb13rlfvvcxbgqi2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymqmcb13rlfvvcxbgqi2.png" alt="My Resume Website" width="800" height="1083"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;For a long time, I was caught in the cycle of "learning by reading", where I would read theories and concepts but never apply them. I had plenty of knowledge in my head, but no real experience to back it up.&lt;/p&gt;

&lt;p&gt;This challenge forced me to break free from that cycle. I didn't start by reading about Azure Functions, CI/CD pipelines, or Infrastructure as Code; I dove in head first, actually built them, and then forced myself to learn the underlying theory. I admit the process was messy, full of debugging and unexpected errors, but it was in those moments that the theoretical concepts finally made sense to me. &lt;/p&gt;

&lt;p&gt;This project also pushed me to explore a wide range of services, from Azure Storage to CosmosDB, without getting lost in a single one. While my knowledge lacks depth right now, I am starting to have a mental map of how all the pieces of a modern web application fit together.&lt;/p&gt;

&lt;p&gt;I had a lot of fun, plenty of frustration, and learned many things doing this project. I highly recommend that any newcomers like me complete it too.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@anikeevxo?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Vladimir Anikeev&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/white-sky-photography-IM8ZyYaSW6g?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>cloudskills</category>
      <category>azure</category>
      <category>learning</category>
    </item>
    <item>
      <title>🐧Week 1 of my Cloud Journey - Building a Strong Linux Foundation</title>
      <dc:creator>asim-makes</dc:creator>
      <pubDate>Sat, 16 Aug 2025 12:18:29 +0000</pubDate>
      <link>https://forem.com/asimmakes/week-1-of-my-cloud-journey-building-a-strong-linux-foundation-55dh</link>
      <guid>https://forem.com/asimmakes/week-1-of-my-cloud-journey-building-a-strong-linux-foundation-55dh</guid>
      <description>&lt;p&gt;It’s been an exciting first week in my cloud journey. While I’ve had some past experience with Linux and basic scripting, it’s been a while since I last explored the cloud. So, I dedicated this week to revisiting the fundamentals and bridging the gap between my existing skills and the next phase of my learning.&lt;/p&gt;

&lt;p&gt;I started by moving around directories, installing software, and managing files. I first learned about package management — a tool to find software in repositories, install it, update it, and remove it. Package management taught me that every distribution has its own way of handling software, but the core principle remains the same.&lt;/p&gt;

&lt;p&gt;Then came process management. At first, “killing” a process was confusing because I had to read about signals and what they actually did. But I soon realized they’re simply different ways of telling the system to stop something that’s not working as intended. Running commands like ps, top, and kill gave me a real snapshot of how the system works in Linux. I knew how to do this in Windows, but in Linux, I had no idea until now.&lt;/p&gt;

&lt;p&gt;Next, I dove into user management — creating new users and groups, assigning permissions, and switching between accounts. It was so simple in Linux. In Windows, it’s such a time-consuming process, but in Linux, one or two quick commands and my new user is ready.&lt;/p&gt;

&lt;p&gt;I also explored scheduling with cron. Automating simple tasks, like running backups at a specific time, made me realize how much I enjoy automation. But I soon discovered cron's limitation: it assumes the machine is on at the scheduled time. I wanted to run a script every day, but what if the machine was off that day? Cron doesn't handle that, so I had to use Anacron, which handles exactly those edge cases.&lt;/p&gt;
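For example, a crontab entry that runs a hypothetical backup script at 2:00 AM every day, and its Anacron counterpart, look like this (the paths and job name are made up for illustration):

```cron
# crontab: minute hour day-of-month month day-of-week  command
0 2 * * * /home/asim/scripts/backup.sh

# /etc/anacrontab: period(days) delay(minutes) job-id  command
# anacron runs the job later if the machine was off at the scheduled time
1       10      daily-backup    /home/asim/scripts/backup.sh
```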

&lt;p&gt;On the surface, these might look like random Linux commands and scripts. But in reality, this week gave me important lessons that apply directly to cloud engineering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Automation is key. If I repeat something, script it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Know what’s running. Process and service management is the backbone of troubleshooting in the cloud.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For Week 2, I'll be switching tools a bit. My focus will be on Python and boto3, the AWS SDK for Python, so I can start interacting with cloud services. I'll also try to set up my AWS account (I can't set it up right now because I don't have a dollar card yet) and get it ready for hands-on practice.&lt;/p&gt;

&lt;p&gt;Step by step, the journey continues 🚀.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>ubuntu</category>
      <category>cloud</category>
      <category>bash</category>
    </item>
    <item>
      <title>🚀 Day 0 of my Cloud Journey - Cutting through the Noise</title>
      <dc:creator>asim-makes</dc:creator>
      <pubDate>Sat, 09 Aug 2025 15:26:50 +0000</pubDate>
      <link>https://forem.com/asimmakes/day-0-of-my-cloud-journey-cutting-through-the-noise-5abl</link>
      <guid>https://forem.com/asimmakes/day-0-of-my-cloud-journey-cutting-through-the-noise-5abl</guid>
      <description>&lt;p&gt;Hi, I’m Asim Baral.&lt;br&gt;
From tomorrow, I’m starting my journey toward becoming a Cloud Engineer. I’ve decided to stop overthinking, stop scrolling through endless “is this field safe?” posts, and instead focus on building real skills and taking consistent action.&lt;/p&gt;

&lt;p&gt;I currently work as an IT Support Engineer (troubleshooting, M365, and more). Now, I’m ready to level up into cloud engineering.&lt;/p&gt;

&lt;p&gt;So this is my high level overview of the roadmap that I have created:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Phase 1: Foundational Level&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Deadline: 1-2 months&lt;/p&gt;

&lt;p&gt;In this phase, I will focus on these areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS Fundamentals&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Focus on Linux and Networking&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Python for basic cloud and automation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scripting and Text Processing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic Portfolio&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Phase 2: Core Level&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Deadline: 3-5 months&lt;/p&gt;

&lt;p&gt;In this phase, I will focus heavily on certs and building portfolio projects.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS Associate Level Cert (SAA-C03)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Hands on Projects&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Phase 3: Specialization&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Deadline: 6-12 months&lt;/p&gt;

&lt;p&gt;In this phase, I will focus on deepening my skills in the most in-demand technologies and build a more complex project that showcases my readiness for a professional role.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Terraform, Docker or Kubernetes (Any one of them)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Basic CI/CD pipelines&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Advanced Project&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end of 1 year, here are my deliverables:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;SAA-C03 cert&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;4-5 portfolio projects&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Intermediate on Linux&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Know how to use Terraform, Docker or Kubernetes&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To stay accountable, I will post weekly updates on dev.to with my progress, challenges, and the lessons I've learned. Soon (maybe in 1 or 2 months), I will also move to LinkedIn, not only to post updates but to actively engage with the community. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No career path is safe or immune to change. Every field has endless noise: "Is this field safe?", "Can I get an entry-level job with this?", "Will AI take over my job?", "I got rejected with such a good portfolio, so this field is cooked", and so on. The only thing I can do is keep learning new skills, stay adaptable, and remain willing to grow. &lt;/p&gt;

&lt;p&gt;This post marks my &lt;strong&gt;Day 0&lt;/strong&gt;. Let there be noise. It stays outside. My growth happens here.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>cloudskills</category>
    </item>
  </channel>
</rss>
