<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: nash9</title>
    <description>The latest articles on Forem by nash9 (@nash9).</description>
    <link>https://forem.com/nash9</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3655622%2F732bec26-df90-46f3-90cf-65430a9e6509.png</url>
      <title>Forem: nash9</title>
      <link>https://forem.com/nash9</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/nash9"/>
    <language>en</language>
    <item>
      <title>VPC Part 2 : AWS Site-to-Site VPN (On-Prem Simulation)</title>
      <dc:creator>nash9</dc:creator>
      <pubDate>Fri, 26 Dec 2025 15:23:20 +0000</pubDate>
      <link>https://forem.com/nash9/vpc-part-2-aws-site-to-site-vpn-on-prem-simulation-4eia</link>
      <guid>https://forem.com/nash9/vpc-part-2-aws-site-to-site-vpn-on-prem-simulation-4eia</guid>
      <description>&lt;p&gt;Connecting an "on-premise" network to an AWS VPC is the most common real-world enterprise scenario.However, you cannot use VPC Peering for this. In the real world,you use &lt;strong&gt;AWS Site-to-Site VPN&lt;/strong&gt; or AWS Direct Connect.&lt;/p&gt;

&lt;p&gt;Peering is for AWS-to-AWS only.&lt;br&gt;
VPN is for Anything-to-AWS. You use it to connect your home office, a physical data center, or even a different cloud provider (like Azure or Google Cloud) to your AWS VPC.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Concept&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VPC-A (The Cloud):&lt;/strong&gt; Uses an AWS Virtual Private Gateway (VGW).&lt;br&gt;
&lt;strong&gt;VPC-B (The On-Premise):&lt;/strong&gt; Uses an EC2 instance running open-source IPsec VPN software (strongSwan or Libreswan) to act as your "Corporate Firewall/Router." This is called a Customer Gateway (CGW).&lt;br&gt;
Using static routing or BGP (Border Gateway Protocol), you create a secure IPsec VPN tunnel between the two gateways over the public internet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Cost Analysis (Still &amp;lt; $2)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Site-to-Site VPN Connection&lt;/strong&gt;: ~$0.05 per hour.&lt;br&gt;
&lt;strong&gt;EC2 (t3.micro)&lt;/strong&gt;: ~$0.01 per hour.&lt;br&gt;
&lt;strong&gt;Public IP&lt;/strong&gt;: ~$0.005 per hour.&lt;/p&gt;

&lt;p&gt;Total: at roughly $0.065 per hour combined, a 2-hour session costs about $0.13 to $0.15.&lt;/p&gt;
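To keep the math honest, the rates above can be tallied with a quick script (the rates are approximate and region-dependent):

```python
# Approximate hourly rates from the list above (region-dependent).
rates = {
    "site_to_site_vpn": 0.05,
    "ec2_t3_micro": 0.01,
    "public_ipv4": 0.005,
}

hours = 2
total = sum(rates.values()) * hours
print(f"${total:.2f} for a {hours}-hour session")  # prints "$0.13 for a 2-hour session"
```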

&lt;p&gt;&lt;strong&gt;2. The Terraform Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To simulate this, we need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create VPC-A (Cloud) and VPC-B (On-Prem).&lt;/li&gt;
&lt;li&gt;In VPC-A, create a Virtual Private Gateway (VGW).&lt;/li&gt;
&lt;li&gt;In VPC-B, create an EC2 instance. This instance needs an Elastic IP.&lt;/li&gt;
&lt;li&gt;Tell AWS that the "Customer Gateway" is the Public IP of that EC2 instance.&lt;/li&gt;
&lt;li&gt;Create the VPN Connection.&lt;/li&gt;
&lt;li&gt;Update Route Tables in both VPCs to allow traffic flow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Concepts this setup teaches:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Route Propagation&lt;/strong&gt;: Unlike Peering, where you manually add routes, in a VPN setup you can enable "Route Propagation." This allows the VGW to automatically tell the VPC about the on-prem routes it learns via BGP (Border Gateway Protocol).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Encryption (IPsec)&lt;/strong&gt;: Traffic between on-prem and AWS is encrypted in transit over the public internet, unlike Peering, which stays on the private AWS backbone.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The "Customer Gateway" concept&lt;/strong&gt;: AWS doesn't "reach out" to on-prem; you have to define the entry point (CGW) and establish a tunnel.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Security Groups / NACLs&lt;/strong&gt;: You will have to allow UDP port 500 and UDP port 4500 (ISAKMP/IPsec NAT-T) for the tunnel to even start.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Project Agenda&lt;/strong&gt;: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apply Terraform: Run &lt;code&gt;terraform apply&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Get the Config: Go to AWS Console &amp;gt; VPC &amp;gt; Site-to-Site VPN Connections. Select your new connection and click Download Configuration. Select Vendor: Generic.&lt;/li&gt;
&lt;li&gt;Find the Tunnel Info: Open the text file and look for Tunnel 1. You will see an Outside IP Address (AWS side) and a Pre-Shared Key.&lt;/li&gt;
&lt;li&gt;Prepare the Bash Script: Use the sample bash script (refer to the bash.sh file) to configure the VPN software on your On-Prem EC2 instance. Replace the placeholders with the actual values from the text file.&lt;/li&gt;
&lt;li&gt;Run the Bash Script: SSH into your "OnPrem-Router" EC2 and run the prepared script.&lt;/li&gt;
&lt;li&gt;Check Status: In the AWS Console, the VPN Tunnel 1 status should change from Down to Up (green) after about 1-2 minutes.&lt;/li&gt;
&lt;li&gt;Test the connection. If it works, congratulations! You have successfully connected your on-prem network to your AWS VPC using Site-to-Site VPN.&lt;/li&gt;
&lt;/ul&gt;
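Step 3 (finding the tunnel info) can be scripted. Below is a hypothetical helper that pulls Tunnel 1's Pre-Shared Key and the AWS-side Outside IP out of the downloaded Generic configuration; the label patterns in the regexes are assumptions based on the Generic vendor template, so verify them against your actual file:

```python
import re

def parse_tunnel1(config_text):
    """Grab the first Pre-Shared Key and the Virtual Private Gateway
    (AWS-side "Outside IP") from a downloaded Site-to-Site VPN config.
    The label patterns are assumptions -- check your actual download."""
    psk = re.search(r"Pre-Shared Key\s*:\s*(\S+)", config_text)
    vgw = re.search(r"Virtual Private Gateway\s*:\s*(\d+\.\d+\.\d+\.\d+)", config_text)
    return {
        "psk": psk.group(1) if psk else None,
        "aws_outside_ip": vgw.group(1) if vgw else None,
    }
```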

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpzv0osg4p3t3d37gxh7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpzv0osg4p3t3d37gxh7h.png" alt="HLD" width="738" height="687"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Cost Management Checklist&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
   VPN Connection: $0.05/hour (delete immediately after testing).&lt;br&gt;
   EC2 t3.micro: $0.01/hour.&lt;br&gt;
   Public IP: $0.005/hour per IP.&lt;br&gt;
   Total: if you finish this in 2 hours, you will spend roughly $0.15.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps to Run the Project and Bring the VPN Up&lt;/strong&gt;&lt;br&gt;
Note: Amazon Linux 2023 (AL2023) uses dnf and recommends Libreswan instead of strongSwan. Once you can connect to your instance, run the commands below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Download Config&lt;/strong&gt; : &lt;br&gt;
Go to AWS Console &amp;gt; VPN &amp;gt; Site-to-Site VPN &amp;gt; Download Configuration. Select Vendor: Generic.&lt;br&gt;
Get Tunnel 1 Data: Find the Pre-Shared Key and the Virtual Private Gateway IP (called "Outside IP").&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8e93r17ic1te3x1bktv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8e93r17ic1te3x1bktv.png" alt="download" width="800" height="90"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run this on the On-Prem EC2&lt;/strong&gt; :&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install VPN software&lt;br&gt;
&lt;code&gt;sudo dnf install libreswan -y&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable IP Forwarding (Essential for a Router)&lt;br&gt;
&lt;code&gt;sudo sysctl -w net.ipv4.ip_forward=1&lt;br&gt;
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the Secrets file (Replace placeholders)&lt;br&gt;
&lt;code&gt;##Format: &amp;lt;OnPrem_Public_IP&amp;gt; &amp;lt;AWS_VPN_Outside_IP&amp;gt; : PSK "&amp;lt;Your_Pre_Shared_Key&amp;gt;"&lt;br&gt;
sudo vi /etc/ipsec.d/aws.secrets&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the Tunnel config&lt;br&gt;
&lt;code&gt;sudo vi /etc/ipsec.d/aws.conf&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Paste this into aws.conf (Replace the bracketed IPs):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;conn tunnel1&lt;br&gt;
authby=secret&lt;br&gt;
auto=start&lt;br&gt;
left=%defaultroute&lt;br&gt;
leftid=[YOUR_ONPREM_EIP_PUBLIC_IP]&lt;br&gt;
leftsubnet=192.168.0.0/16&lt;br&gt;
right=[AWS_TUNNEL_OUTSIDE_IP]&lt;br&gt;
rightsubnet=10.10.0.0/16&lt;br&gt;
ike=aes128-sha1;modp2048&lt;br&gt;
phase2alg=aes128-sha1;modp2048&lt;br&gt;
keyexchange=ike&lt;br&gt;
ikev2=no&lt;br&gt;
type=tunnel&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Run the following commands to restart Libreswan and bring up the tunnel:&lt;br&gt;
&lt;code&gt;sudo systemctl restart ipsec&lt;br&gt;
sudo ipsec auto --add tunnel1&lt;br&gt;
sudo ipsec auto --up tunnel1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Finally, enable the service at boot and verify:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo systemctl enable ipsec&lt;br&gt;
sudo ipsec status  # Check if "tunnel1" is loaded&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From your On-Prem EC2, try to ping a private EC2 in the Cloud VPC (10.10.x.x).&lt;br&gt;
You should see replies if everything is set up correctly!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7k4x00otx1khf8ykq8ti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7k4x00otx1khf8ykq8ti.png" alt="vpn" width="800" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning Outcome&lt;/strong&gt; : You will finally understand how packets know where to go when they leave a private subnet. You'll see how the "Virtual Private Gateway" handles the cloud side and how a "Customer Gateway" handles the on-prem side.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>aws</category>
      <category>tutorial</category>
      <category>networking</category>
    </item>
    <item>
      <title>VPC Part 1 : AWS VPC Peering</title>
      <dc:creator>nash9</dc:creator>
      <pubDate>Fri, 26 Dec 2025 15:23:04 +0000</pubDate>
      <link>https://forem.com/nash9/vpc-part-1-aws-vpc-peering-je4</link>
      <guid>https://forem.com/nash9/vpc-part-1-aws-vpc-peering-je4</guid>
      <description>&lt;p&gt;The "Mergers &amp;amp; Acquisitions" Scenario (VPC Peering)&lt;/p&gt;

&lt;h2&gt;
  
  
  Description
&lt;/h2&gt;

&lt;p&gt;This project demonstrates how to set up VPC Peering between two Virtual Private Clouds (VPCs) in a cloud environment. VPC Peering allows resources in different VPCs to communicate with each other as if they were within the same network. This project simulates two different companies (or departments) needing to share resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Goal to achieve :&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create VPC-A (10.1.0.0/16) and VPC-B (10.2.0.0/16).&lt;/li&gt;
&lt;li&gt;Deploy an EC2 in each.&lt;/li&gt;
&lt;li&gt;Set up a VPC Peering Connection.&lt;/li&gt;
&lt;li&gt;Update Route Tables in both VPCs to point to the other’s CIDR.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Challenge&lt;/strong&gt;: Try to peer VPC-A with a VPC-C that has an overlapping CIDR (10.1.0.0/16) and see why it fails.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Test connectivity by pinging between the EC2 instances in both VPCs.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Concepts You’ll Learn:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Peering Connections:&lt;/strong&gt; Request/Accept workflow.&lt;br&gt;
&lt;strong&gt;Transitive Routing:&lt;/strong&gt; Learning that VPC Peering is not transitive (If A peers with B, and B with C, A cannot talk to C).&lt;br&gt;
&lt;strong&gt;Overlapping CIDRs:&lt;/strong&gt; The importance of IP planning.&lt;/p&gt;
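The overlapping-CIDR failure from the challenge is easy to check up front with Python's standard-library ipaddress module (the CIDRs below are the ones from this project):

```python
import ipaddress

vpc_a = ipaddress.ip_network("10.1.0.0/16")
vpc_b = ipaddress.ip_network("10.2.0.0/16")
vpc_c = ipaddress.ip_network("10.1.0.0/16")  # same range as VPC-A

print(vpc_a.overlaps(vpc_b))  # False -- peering A and B is fine
print(vpc_a.overlaps(vpc_c))  # True  -- AWS rejects this peering request
```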
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI installed and configured.&lt;/li&gt;
&lt;li&gt;Terraform installed.&lt;/li&gt;
&lt;li&gt;Basic understanding of VPCs, subnets, and networking concepts.&lt;/li&gt;
&lt;li&gt;An AWS account with necessary permissions.&lt;/li&gt;
&lt;li&gt;Two VPCs created in the same or different regions.&lt;/li&gt;
&lt;li&gt;Instances or resources deployed in each VPC for testing connectivity.&lt;/li&gt;
&lt;li&gt;Familiarity with security groups and route tables.&lt;/li&gt;
&lt;li&gt;Permissions to create and manage VPCs and peering connections.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── README.md
├──terraform
│   ├── variables.tf
│   ├── outputs.tf
│   └── provider.tf
│   ├── ec2.tf
│   ├── security_group.tf
│   ├── vpc_peering.tf
│   ├── vpc_A.tf
│   └── vpc_B.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Steps to Set Up VPC Peering
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create VPCs&lt;/strong&gt;: Set up two separate VPCs in your cloud environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure Subnets&lt;/strong&gt;: Create subnets within each VPC to host your resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set Up VPC Peering&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Initiate a VPC peering connection request from one VPC to the other.&lt;/li&gt;
&lt;li&gt;Accept the peering request in the target VPC.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Update Route Tables&lt;/strong&gt;: Modify the route tables in both VPCs to allow traffic to flow between them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure Security Groups&lt;/strong&gt;: Adjust security group rules to permit traffic between resources in the peered VPCs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test Connectivity&lt;/strong&gt;: Launch instances in both VPCs and verify that they can communicate with each other.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The VPC peering Terraform will look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc_peering_connection" "peer" {
  vpc_id      = aws_vpc.vpc_a.id
  peer_vpc_id = aws_vpc.vpc_b.id
  auto_accept = true

  tags = {
    Name    = "VPC-A-to-VPC-B"
    Project = var.project_name
  }
}

resource "aws_route" "route_a_to_b" {
  route_table_id            = aws_route_table.rt_a.id
  destination_cidr_block    = aws_vpc.vpc_b.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.peer.id
}

resource "aws_route" "route_b_to_a" {
  route_table_id            = aws_route_table.rt_b.id
  destination_cidr_block    = aws_vpc.vpc_a.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.peer.id
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;or simply run the following commands in the terraform directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;terraform&lt;span class="o"&gt;(&lt;/span&gt;your infra code directory&lt;span class="o"&gt;)&lt;/span&gt;
terraform init
terraform plan&lt;span class="o"&gt;(&lt;/span&gt;just see what will be created&lt;span class="o"&gt;)&lt;/span&gt;
terraform apply &lt;span class="nt"&gt;--auto-approve&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;always verfiy the plan before applying &lt;span class="k"&gt;in &lt;/span&gt;production&lt;span class="o"&gt;)&lt;/span&gt;
terraform destroy &lt;span class="nt"&gt;--auto-approve&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;to clean up the resources&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa7n1sm1bj79iaz16hw48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa7n1sm1bj79iaz16hw48.png" alt="vpc" width="800" height="58"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By following these steps, you can successfully set up VPC Peering between two VPCs, enabling seamless communication between resources in different networks. This setup is useful for various scenarios, including multi-region architectures and resource sharing across different environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Critical Learning Points
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The Peering Handshake&lt;/strong&gt;: In this code, we used auto_accept = true. In real life, if peering with another AWS account, they must manually "Accept" the request.&lt;br&gt;
&lt;strong&gt;Route Tables&lt;/strong&gt;: Peering creates the "tunnel," but without the aws_route resource, the VPC doesn't know to send traffic through that tunnel.&lt;br&gt;
&lt;strong&gt;Security Groups:&lt;/strong&gt; Notice we allowed "All Traffic." In a real project, you should only allow the Private CIDR of the other VPC.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html" rel="noopener noreferrer"&gt;AWS VPC Peering Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/vpc/docs/vpc-peering" rel="noopener noreferrer"&gt;GCP VPC Peering Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview" rel="noopener noreferrer"&gt;Azure VNet Peering Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>networking</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Building a Scalable RAG System for Repository Intelligence</title>
      <dc:creator>nash9</dc:creator>
      <pubDate>Sun, 21 Dec 2025 11:27:32 +0000</pubDate>
      <link>https://forem.com/nash9/building-a-scalable-rag-system-for-repository-intelligence-jn7</link>
      <guid>https://forem.com/nash9/building-a-scalable-rag-system-for-repository-intelligence-jn7</guid>
      <description>&lt;p&gt;&lt;strong&gt;# 🧠 CodeSense AI: Building a Scalable RAG System for Repository Intelligence&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem:&lt;/strong&gt; Navigating large, unfamiliar codebases is slow.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;The Solution:&lt;/strong&gt; CodeSense AI—a sophisticated RAG (Retrieval-Augmented Generation) engine that lets you "talk" to your code using AWS Bedrock and Pinecone.&lt;/p&gt;


&lt;h2&gt;
  
  
  📖 Overview
&lt;/h2&gt;

&lt;p&gt;CodeSense AI isn't just a chatbot; it's a semantic code navigator. Most code search tools rely on keyword matching (Grepping). CodeSense AI uses &lt;strong&gt;Vector Embeddings&lt;/strong&gt; to understand the &lt;em&gt;intent&lt;/em&gt; and &lt;em&gt;logic&lt;/em&gt; behind your code.&lt;/p&gt;
&lt;h3&gt;
  
  
  Core Value Props:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Instant Architecture Mapping&lt;/strong&gt;: Ask "How does the auth flow work?" and get a cross-file explanation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contextual Debugging&lt;/strong&gt;: Share an error and find exactly where that logic resides in a 100-file repo.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seamless Ingestion&lt;/strong&gt;: Point to a GitHub URL, and the pipeline handles the rest—from cloning to vectorization.&lt;/li&gt;
&lt;/ol&gt;


&lt;h2&gt;
  
  
  🛠️ The Modern AI Stack
&lt;/h2&gt;
&lt;h3&gt;
  
  
  The Frontend (The User Experience)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;React 18.3 &amp;amp; TypeScript&lt;/strong&gt;: A type-safe foundation for handling complex UI states during long indexing processes.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Tailwind CSS &amp;amp; shadcn/ui&lt;/strong&gt;: For a high-fidelity, developer-centric aesthetic.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;TanStack Query&lt;/strong&gt;: Manages the server state for real-time indexing progress updates.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  The Intelligence (The Reasoning)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;AWS Bedrock (Amazon Titan Text Express)&lt;/strong&gt;: Chosen for its high-throughput, low-latency reasoning capabilities.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Titan Embeddings v2&lt;/strong&gt;: Generates &lt;strong&gt;1024-dimensional&lt;/strong&gt; vectors, optimized for technical documentation and source code.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Pinecone&lt;/strong&gt;: A serverless vector database that provides sub-100ms similarity search using &lt;strong&gt;Cosine Similarity&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  🏗️ Architecture &amp;amp; System Design
&lt;/h2&gt;
&lt;h3&gt;
  
  
  High-Level Design (HLD)
&lt;/h3&gt;

&lt;p&gt;The architecture follows a &lt;strong&gt;Decoupled Proxy Pattern&lt;/strong&gt;. To ensure maximum security, the frontend never communicates directly with AWS or Pinecone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu934y8df6pk2oy505ubo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu934y8df6pk2oy505ubo.png" alt="hld" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  Low-Level Design (LLD)
&lt;/h3&gt;
&lt;h4&gt;
  
  
  1. The Code-Aware Indexing Pipeline
&lt;/h4&gt;

&lt;p&gt;Standard text chunking fails for code because it breaks logical blocks. CodeSense AI implements a &lt;strong&gt;Sliding Window Chunking&lt;/strong&gt; strategy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Chunk Size&lt;/strong&gt;: 1000 characters.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Overlap&lt;/strong&gt;: 200 characters (ensures variable declarations aren't cut off from their usage).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Metadata Enrichment&lt;/strong&gt;: Every vector is tagged with its &lt;code&gt;filePath&lt;/code&gt;, &lt;code&gt;repoOwner&lt;/code&gt;, and &lt;code&gt;lineRange&lt;/code&gt; to ensure the AI can cite its sources.&lt;/li&gt;
&lt;/ul&gt;
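The strategy above can be sketched in a few lines. Note that chunk_file, the 1000/200 defaults, and the metadata keys mirror the description but are illustrative, not the project's actual code:

```python
# Sliding-window chunker: fixed-size chunks with a 200-character overlap
# so logical blocks spanning a boundary appear intact in at least one chunk.
def chunk_file(path, text, size=1000, overlap=200):
    step = size - overlap
    chunks, start = [], 0
    while True:
        chunks.append({
            "filePath": path,   # metadata enrichment so the AI can cite sources
            "start": start,
            "text": text[start:start + size],
        })
        if start + size >= len(text):
            break
        start += step
    return chunks
```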
&lt;h4&gt;
  
  
  2. Secure Edge Orchestration
&lt;/h4&gt;

&lt;p&gt;Using &lt;strong&gt;Supabase Edge Functions&lt;/strong&gt; as an orchestration layer allows us to implement &lt;strong&gt;AWS Signature V4&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Example: The signature-v4 process ensures your AWS_SECRET_KEY &lt;/span&gt;
&lt;span class="c1"&gt;// never leaves the server-side environment.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;headers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;signRequest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
  &lt;span class="nx"&gt;bedrockUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
  &lt;span class="nx"&gt;requestBody&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
  &lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  3. Multi-Tenant Vector Isolation
&lt;/h4&gt;

&lt;p&gt;To prevent data leakage between repositories, we utilize &lt;strong&gt;Pinecone Namespaces&lt;/strong&gt;. Each repository is assigned a unique namespace derived from its GitHub path.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Query Filtering&lt;/strong&gt;: &lt;code&gt;namespace: "zumerlab-zumerbox"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Security&lt;/strong&gt;: No user can query code outside of their current repository context.&lt;/li&gt;
&lt;/ul&gt;
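One plausible way to derive that namespace, assuming (as the "zumerlab-zumerbox" example suggests) it is simply owner-repo in lowercase:

```python
# Hypothetical helper: derive a Pinecone namespace from a GitHub "owner/repo" path.
def repo_namespace(github_path):
    owner, repo = github_path.strip("/").split("/")[:2]
    return f"{owner}-{repo}".lower()

print(repo_namespace("zumerlab/zumerbox"))  # prints "zumerlab-zumerbox"
```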




&lt;h2&gt;
  
  
  🚀 Data Flow: The Lifecycle of a Query
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Input&lt;/strong&gt;: User types: &lt;em&gt;"Where is the database connection initialized?"&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Vectorization&lt;/strong&gt;: The query is converted into a 1024-dim vector using AWS Bedrock.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Retrieval&lt;/strong&gt;: Pinecone identifies the &lt;strong&gt;Top-5&lt;/strong&gt; most relevant code chunks within that specific repo's namespace.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Augmentation&lt;/strong&gt;: The system builds a prompt: 
&amp;gt; &lt;em&gt;"System: You are an expert. Context: [Snippet 1], [Snippet 2]. Question: Where is the database...?"&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Generation&lt;/strong&gt;: Titan Express synthesizes the context and generates a markdown-formatted answer.&lt;/li&gt;
&lt;/ol&gt;
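Step 4 (Augmentation) is just careful string assembly. This sketch shows the idea; build_prompt and the snippet format are illustrative, not the project's actual code:

```python
# Assemble the RAG prompt from the Top-K retrieved chunks and their metadata.
def build_prompt(question, snippets):
    context = "\n\n".join(
        f"[{s['filePath']}]\n{s['text']}" for s in snippets
    )
    return (
        "System: You are an expert on this codebase. "
        "Answer using only the context below and cite file paths.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```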

&lt;p&gt;Lovable UI &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqs4qsxidd2g2ugcyikn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqs4qsxidd2g2ugcyikn.png" alt="UI" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pinecone UI&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq1yvqs9n0s2mvuz6asg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq1yvqs9n0s2mvuz6asg.png" alt="vector db" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚙️ Engineering Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Environment Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;AWS Bedrock Model Access&lt;/strong&gt;: Ensure &lt;code&gt;amazon.titan-text-express-v1&lt;/code&gt; and &lt;code&gt;amazon.titan-embed-text-v2:0&lt;/code&gt; are enabled.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Pinecone Index&lt;/strong&gt;: 1024 dimensions, Cosine metric.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pinecone Index Setup&lt;/strong&gt;&lt;br&gt;
Create a new index in Pinecone Console&lt;br&gt;
Set dimensions to 1024 (Titan Embed v2 output)&lt;br&gt;
Use cosine similarity metric&lt;br&gt;
Note the index URL for configuration&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Bedrock Setup&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable Amazon Bedrock in your AWS account&lt;/li&gt;
&lt;li&gt;Request access to: preferred model (Chat)&lt;/li&gt;
&lt;li&gt;Create IAM credentials with Bedrock access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Development Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js 18+ and npm (Used Lovable for building UI)&lt;/li&gt;
&lt;li&gt;Supabase project (or Lovable Cloud)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Edge Function Secrets
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;supabase secrets &lt;span class="nb"&gt;set &lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;xxx
supabase secrets &lt;span class="nb"&gt;set &lt;/span&gt;&lt;span class="nv"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;xxx
supabase secrets &lt;span class="nb"&gt;set &lt;/span&gt;&lt;span class="nv"&gt;PINECONE_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;xxx
supabase secrets &lt;span class="nb"&gt;set &lt;/span&gt;&lt;span class="nv"&gt;PINECONE_INDEX_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://your-index.svc.pinecone.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🔐 Security Standards
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;JWT-Locked APIs&lt;/strong&gt;: All Edge Functions require a valid Supabase Auth header.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Secret Management&lt;/strong&gt;: Zero hardcoded keys. No client-side exposure.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Rate Limiting&lt;/strong&gt;: Implemented at the Edge Function layer to protect AWS Bedrock quotas.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📈 Future Roadmap
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Language Support&lt;/strong&gt;: Expanding AST-based parsing for better semantic chunking.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Repo Chat&lt;/strong&gt;: Aggregating context across microservices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local LLM Support&lt;/strong&gt;: Integrating Ollama for on-premise deployments.&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>rag</category>
      <category>aws</category>
      <category>ai</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Building an "Unstoppable" Serverless Payment System on AWS (Circuit Breaker Pattern)</title>
      <dc:creator>nash9</dc:creator>
      <pubDate>Sat, 13 Dec 2025 10:30:38 +0000</pubDate>
      <link>https://forem.com/nash9/building-an-unstoppable-serverless-payment-system-on-aws-circuit-breaker-pattern-4iba</link>
      <guid>https://forem.com/nash9/building-an-unstoppable-serverless-payment-system-on-aws-circuit-breaker-pattern-4iba</guid>
      <description>&lt;p&gt;&lt;strong&gt;Building an "Unstoppable" Serverless Payment System on AWS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What happens when your payment gateway goes down? In a traditional app, the user sees a spinner, then a "500 Server Error," and you lose the sale.&lt;br&gt;
I wanted to build a system that refuses to crash. Even if the backend database is on fire, the user's order should be accepted, queued, and processed automatically when the system heals.&lt;/p&gt;

&lt;p&gt;Here is how I implemented the Circuit Breaker Pattern using AWS Step Functions, Java Lambda, and Event-Driven Architecture—without provisioning a single server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Tech Stack&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I chose a hybrid, cloud-native stack to enforce strict decoupling between the Frontend and the Backend.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend&lt;/strong&gt;: Python (Streamlit) – Acts as the Store &amp;amp; Admin Dashboard.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orchestration&lt;/strong&gt;: AWS Step Functions – The "Brain" handling the logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compute&lt;/strong&gt;: AWS Lambda (Java 11) – The "Worker" handling business logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State Store&lt;/strong&gt;: Amazon DynamoDB – Stores circuit status (Open/Closed) and Order History.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resiliency&lt;/strong&gt;: Amazon SQS – The "Parking Lot" for failed orders.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability&lt;/strong&gt;: Grafana Cloud (Loki) – Log aggregation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure&lt;/strong&gt;: Terraform – Complete IaC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: manage all resources with Terraform, and as a best practice keep each resource in its own Terraform file so creation, deletion, and updates stay isolated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem: Cascading Failures&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In microservices, if Service A calls Service B, and Service B hangs, Service A eventually hangs too. If thousands of users click "Pay," your database gets hammered with retries, effectively DDoS-ing yourself.&lt;br&gt;
The Solution? A Circuit Breaker.&lt;br&gt;
Just like in your house: if there is a surge, the breaker trips to save the house from burning down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High Level Architecture&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;I designed the system to handle three distinct states:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Green Path (Closed): The backend is healthy. Orders process immediately.&lt;/li&gt;
&lt;li&gt;Red Path (Open): The backend is crashing. The system detects this, "Trips" the circuit, and stops sending traffic to the backend.&lt;/li&gt;
&lt;li&gt;Yellow Path (Recovery): Orders are routed to a Queue (SQS) to be retried later automatically.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The HLD may look intimidating, but it is what makes the app unstoppable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq49lafgpcg6m5253var9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq49lafgpcg6m5253var9.png" alt="HLD" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How It Works: The Logic Flow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The core of this project is an AWS Step Functions State Machine. It acts as a traffic controller.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The Check&lt;/strong&gt;
Every time a user clicks "Pay," the workflow first checks DynamoDB.
Is the Circuit Status OPEN?
If YES: Skip the backend entirely.
If NO: Proceed to the Java Lambda.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Execution&lt;/strong&gt;
The workflow invokes a Java Lambda to process the payment.
&lt;strong&gt;Success&lt;/strong&gt;: It updates the Order History to COMPLETED and emits an event to EventBridge (triggering a customer email via SNS).
&lt;strong&gt;Failure&lt;/strong&gt;: It catches the error and retries with Exponential Backoff (wait 1s, then 2s).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The "Trip"&lt;/strong&gt;
If the backend fails repeatedly, the Step Function:
Writes Status: OPEN to DynamoDB.
Alerts the SysAdmin via SNS ("Critical: Circuit Tripped").
Marks the order as FAILED in the dashboard.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Self-Healing (Auto-Retry)&lt;/strong&gt;
This is the coolest part. If the circuit is Open, new orders are not rejected. They are marked as QUEUED and sent to Amazon SQS.
A "Retry Handler" Lambda listens to this queue.
It waits for a delay (e.g., 30s).It re-submits the order to the Step Function.If the backend is fixed, the order processes. If not, it goes back to the queue.&lt;/li&gt;
&lt;/ol&gt;
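&lt;p&gt;The routing logic above can be sketched in plain Python. This is a minimal in-memory illustration only: the class name and the trip threshold are made up, and in the real system the status check lives in DynamoDB, the queue is SQS, and the orchestration is the Step Functions state machine.&lt;/p&gt;

```python
# Minimal in-memory sketch of the circuit-breaker routing; illustrative only.
FAILURE_THRESHOLD = 3  # assumed trip threshold, not from the project


class CircuitBreaker:
    def __init__(self):
        self.status = "CLOSED"  # mirrors the circuit Status item in DynamoDB
        self.failures = 0
        self.queue = []         # stands in for the SQS "parking lot"

    def submit(self, order_id, backend):
        if self.status == "OPEN":
            self.queue.append(order_id)  # route to SQS; order is marked QUEUED
            return "QUEUED"
        try:
            backend(order_id)            # stands in for the Java Lambda call
            self.failures = 0
            return "COMPLETED"
        except Exception:
            self.failures += 1
            if self.failures == FAILURE_THRESHOLD:
                self.status = "OPEN"     # the "Trip": write Status OPEN
            return "FAILED"
```

&lt;p&gt;Once the circuit is OPEN, new orders skip the failing backend entirely and land in the queue, which is exactly the behavior the Retry Handler later drains.&lt;/p&gt;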

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbx5oum10rbyo6ul6bwlu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbx5oum10rbyo6ul6bwlu.png" alt="lld" width="588" height="717"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tested Data Scenarios&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;SUCCESS&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y7132vr6bo7vvzidajb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9y7132vr6bo7vvzidajb.png" alt="success" width="554" height="738"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CHAOS Mode&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9o10ol0o5svp014agvl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9o10ol0o5svp014agvl.png" alt="chaos" width="554" height="738"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Observability &amp;amp; Monitoring&lt;/strong&gt;: I integrated Grafana Cloud (Loki) to ingest logs from CloudWatch.&lt;br&gt;
&lt;strong&gt;Streamlit Dashboard&lt;/strong&gt;: Shows live status of orders (PENDING → COMPLETED or FAILED).&lt;br&gt;
&lt;strong&gt;Grafana Explore&lt;/strong&gt;: Allows deep searching of logs using {service="order-processor"} to find specific stack traces.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Learnings &amp;amp; Trade-offs&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Complexity vs. Reliability
This architecture is more complex than a simple API call. You have more moving parts (Queues, State Machines). However, the trade-off is High Availability. The frontend never sees a crash.&lt;/li&gt;
&lt;li&gt;The "Ghost" Data
When using Catch blocks in Step Functions, the original input (Order ID) is replaced by the Error Message. I learned to use ResultPath to preserve the original data so I could update the database even after a crash.&lt;/li&gt;
&lt;li&gt;Cost Optimization
Step Functions Standard Workflows are expensive at scale. For production, I would switch this to Express Workflows and use ARM64 (Graviton) for the Lambdas to reduce costs by ~40%.&lt;/li&gt;
&lt;/ol&gt;
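&lt;p&gt;The ResultPath fix for the "ghost data" problem can be shown as a Catch clause, written here as a Python dict for illustration (the state names are made up). Without ResultPath, the error object replaces the whole input; with &lt;code&gt;"ResultPath": "$.error"&lt;/code&gt;, the error is attached under &lt;code&gt;$.error&lt;/code&gt; and the original input, such as the Order ID, survives into the failure-handling state.&lt;/p&gt;

```python
# A Step Functions Catch clause expressed as a Python dict; state names are
# illustrative. The key point is ResultPath, which preserves the original input.
process_payment_state = {
    "Type": "Task",
    "Resource": "arn:aws:states:::lambda:invoke",
    "Catch": [
        {
            "ErrorEquals": ["States.ALL"],
            "ResultPath": "$.error",  # keep input intact; add error under $.error
            "Next": "MarkOrderFailed",
        }
    ],
    "Next": "MarkOrderCompleted",
}
```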

&lt;p&gt;&lt;strong&gt;What the application looks like&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Order placing UI reference&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdyhcouwox2q38crmn19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdyhcouwox2q38crmn19.png" alt="Order placing ui" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Admin UI &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fse54qtj6bszrafuaa04w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fse54qtj6bszrafuaa04w.png" alt=" " width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
This project demonstrates how Event-Driven Architecture allows you to build systems that degrade gracefully. Instead of losing revenue during a crash, we simply "pause" the traffic and process it when the storm passes.&lt;br&gt;
&lt;strong&gt;Technologies used:&lt;/strong&gt; AWS, Java, Python, Terraform, Grafana.&lt;/p&gt;

&lt;p&gt;Follow for more. Thanks for reading!&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>java</category>
      <category>aws</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Supabase with PowerBI Dashboard</title>
      <dc:creator>nash9</dc:creator>
      <pubDate>Thu, 11 Dec 2025 14:54:14 +0000</pubDate>
      <link>https://forem.com/nash9/supabase-with-powerbi-dashboard-p44</link>
      <guid>https://forem.com/nash9/supabase-with-powerbi-dashboard-p44</guid>
      <description>&lt;h2&gt;
  
  
  Real-Time Sales &amp;amp; Inventory Dashboard
&lt;/h2&gt;

&lt;p&gt;A full-stack Business Intelligence demo integrating &lt;strong&gt;Supabase&lt;/strong&gt; (PostgreSQL) as the cloud backend and &lt;strong&gt;Microsoft Power BI&lt;/strong&gt; for frontend data visualization. This project demonstrates how to track sales revenue and monitor inventory levels in real-time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Overview
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Backend:&lt;/strong&gt; Supabase (PostgreSQL) hosted on AWS.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Frontend:&lt;/strong&gt; Microsoft Power BI Desktop.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Goal:&lt;/strong&gt; Visualize sales trends, category performance, and low-stock alerts using cloud data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://supabase.com/" rel="noopener noreferrer"&gt;Supabase&lt;/a&gt; - Open Source Firebase alternative (PostgreSQL Database).&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://powerbi.microsoft.com/" rel="noopener noreferrer"&gt;Power BI Desktop&lt;/a&gt; - Data Visualization Tool.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Setup Instructions
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Part 1: Supabase Setup (Backend)
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt; Create a new project on Supabase.&lt;/li&gt;
&lt;li&gt; Navigate to the &lt;strong&gt;SQL Editor&lt;/strong&gt; in the left sidebar.&lt;/li&gt;
&lt;li&gt; Run the following SQL script (schema.sql) in the SQL Editor to create the schema and seed dummy data:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- 1. Create Tables
CREATE TABLE public.products (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    category TEXT NOT NULL,
    unit_price DECIMAL(10, 2) NOT NULL,
    stock_quantity INT NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

CREATE TABLE public.orders (
    id SERIAL PRIMARY KEY,
    product_id INT REFERENCES public.products(id),
    quantity INT NOT NULL,
    total_amount DECIMAL(10, 2) NOT NULL,
    order_date DATE NOT NULL DEFAULT CURRENT_DATE
);

-- 2. Insert Dummy Data
INSERT INTO public.products (name, category, unit_price, stock_quantity)
VALUES 
    ('Wireless Mouse', 'Electronics', 25.50, 150),
    ('Mechanical Keyboard', 'Electronics', 85.00, 40),
    ('Gaming Monitor', 'Electronics', 300.00, 15),
    ('Ergonomic Chair', 'Furniture', 150.00, 10),
    ('Desk Lamp', 'Furniture', 45.00, 80),
    ('USB-C Cable', 'Accessories', 12.00, 200);

INSERT INTO public.orders (product_id, quantity, total_amount, order_date)
VALUES 
    (1, 2, 51.00, CURRENT_DATE - INTERVAL '3 days'),
    (2, 1, 85.00, CURRENT_DATE - INTERVAL '3 days'),
    (1, 1, 25.50, CURRENT_DATE - INTERVAL '2 days'),
    (3, 1, 300.00, CURRENT_DATE - INTERVAL '2 days'),
    (5, 4, 180.00, CURRENT_DATE - INTERVAL '1 day'),
    (4, 1, 150.00, CURRENT_DATE),
    (6, 10, 120.00, CURRENT_DATE);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Part 2: Power BI Connection (Frontend)
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;To connect Power BI to Supabase, use the Connection Pooler (IPv4) to avoid connectivity issues. Get the credentials under Supabase &amp;gt; Project Settings &amp;gt; Database.&lt;/li&gt;
&lt;li&gt;Enable "Use connection pooling" and set Mode to "Session".&lt;/li&gt;
&lt;li&gt;Copy the Host (e.g., aws-0-us-east-1.pooler.supabase.com) and User.&lt;/li&gt;
&lt;li&gt;Connect in Power BI:
Get Data &amp;gt; PostgreSQL database.
Server: Paste the Pooler Host URL.
Database: postgres.
Data Connectivity Mode: Import.
Auth: Use Database authentication (User/Password).&lt;/li&gt;
&lt;/ol&gt;
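&lt;p&gt;A hypothetical helper showing how the pooler connection settings fit together. The host pattern and the &lt;code&gt;postgres.PROJECT_REF&lt;/code&gt; username follow Supabase's pooler convention; every concrete value below is a placeholder.&lt;/p&gt;

```python
# Hypothetical helper assembling the connection fields a PostgreSQL client
# (Power BI included) needs for the Supabase session pooler. Placeholders only.
def pooler_settings(project_ref, region, password, port=5432):
    return {
        "Server": f"aws-0-{region}.pooler.supabase.com:{port}",
        "Database": "postgres",
        # Pooler usernames take the form postgres.PROJECT_REF
        "User": f"postgres.{project_ref}",
        "Password": password,
    }
```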

&lt;h3&gt;
  
  
  Dashboard Visuals :
&lt;/h3&gt;

&lt;p&gt;KPI Card: Displays Total Sales Revenue (Sum of total_amount).&lt;br&gt;
Pie Chart: Sales distribution by Category (Electronics vs. Furniture).&lt;br&gt;
Table: A specific list for Low Stock Alerts (Items with stock_quantity &amp;lt; 20).&lt;/p&gt;
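&lt;p&gt;The three measures can be sanity-checked in plain Python against rows shaped like the seed data above (the values below are illustrative):&lt;/p&gt;

```python
# Recomputing the three dashboard measures over sample rows; data is illustrative.
orders = [
    {"total_amount": 51.00, "category": "Electronics"},
    {"total_amount": 85.00, "category": "Electronics"},
    {"total_amount": 25.50, "category": "Electronics"},
    {"total_amount": 150.00, "category": "Furniture"},
]
products = [
    {"name": "Gaming Monitor", "stock_quantity": 15},
    {"name": "Ergonomic Chair", "stock_quantity": 10},
    {"name": "Desk Lamp", "stock_quantity": 80},
]

# KPI card: total sales revenue (sum of total_amount)
total_revenue = sum(o["total_amount"] for o in orders)

# Pie chart: revenue distribution by category
by_category = {}
for o in orders:
    by_category[o["category"]] = by_category.get(o["category"], 0) + o["total_amount"]

# Low-stock alert table: items with stock_quantity below 20
low_stock = [p["name"] for p in products if p["stock_quantity"] in range(20)]
```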

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc00hfde29z6zn1x7dzaa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc00hfde29z6zn1x7dzaa.png" alt="first load" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvdfp2mg7k41ejygrc4f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvdfp2mg7k41ejygrc4f.png" alt="schema" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Troubleshooting (issues I faced, with errors and fixes)
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;"The remote certificate is invalid" Error&lt;br&gt;
Supabase uses SSL, and Power BI may reject the certificate by default.&lt;br&gt;
Fix: Go to File &amp;gt; Options and settings &amp;gt; Data source settings. Select the data source, click Edit Permissions, and uncheck "Encrypt connections".&lt;br&gt;
"Host not found"&lt;br&gt;
Ensure you are using the Pooler URL (port 5432 or 6543) found in Supabase Database settings, not the direct connection string.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>microsoft</category>
      <category>postgres</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>AWS Bedrock with LangChain</title>
      <dc:creator>nash9</dc:creator>
      <pubDate>Thu, 11 Dec 2025 14:44:45 +0000</pubDate>
      <link>https://forem.com/nash9/aws-bedrock-with-langchain-4lbe</link>
      <guid>https://forem.com/nash9/aws-bedrock-with-langchain-4lbe</guid>
      <description>&lt;p&gt;&lt;strong&gt;Demo AWS Bedrock Integration with LangChain , Streamlit ,Titan model along with docker setup (Free tier)&lt;/strong&gt;&lt;br&gt;
To demonstrates how to integrate AWS Bedrock with LangChain and Streamlit using the Titan model with docker setup .&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites/Project Structure&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;requirements.txt &lt;/li&gt;
&lt;li&gt;Docker file&lt;/li&gt;
&lt;li&gt;Local AWS config (~/.aws/config) and credentials (~/.aws/credentials)&lt;/li&gt;
&lt;li&gt;AWS Bedrock access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Code base&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Main Python file&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import streamlit as st
import boto3
from botocore.exceptions import ClientError
from langchain_aws import ChatBedrock
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

st.set_page_config(page_title=" AWS Bedrock Docker", layout="wide")

st.title("🐳 AWS Bedrock + Docker + LangChain + Streamlit")
st.caption("Connected to AWS Region: `ap-south-1` via local AWS Config")
# ------------------------------------------------------------------
try:
    boto_session = boto3.Session()
    if not boto_session.get_credentials():
        st.error("❌ No AWS Credentials found. Did you mount ~/.aws in docker-compose?")
        st.stop()

    st.sidebar.success(f"AWS Profile Loaded: {boto_session.profile_name or 'default'}")

except Exception as e:
    st.error(f"AWS Config Error: {e}")
    st.stop()

model_id = st.sidebar.selectbox(
    "Select Model",
    ["anthropic.claude-3-sonnet-20240229-v1:0", "anthropic.claude-v2:1", "amazon.titan-text-express-v1"]
)

llm = ChatBedrock(
    model_id=model_id,
    region_name="ap-south-1",
    model_kwargs={"temperature": 0.5, "max_tokens": 512}
)

#----------------------------------------------
user_input = st.text_area("Enter your prompt:", "Explain how Docker containers work in 3 sentences.")

if st.button("Generate Response"):
    if not user_input:
        st.warning("Please enter a prompt.")
    else:
        try:
            with st.spinner("Calling AWS Bedrock API..."):
                prompt = ChatPromptTemplate.from_messages([
                    ("system", "You are a helpful AI assistant."),
                    ("user", "{input}")
                ])
                output_parser = StrOutputParser()

                chain = prompt | llm | output_parser

                response = chain.invoke({"input": user_input})

                st.subheader("AI Response:")
                st.write(response)

        except ClientError as e:
            st.error(f"AWS API Error: {e}")
            if "AccessDenied" in str(e):
                st.warning("👉 Hint: Did you enable this specific Model ID in the AWS Console &amp;gt; Bedrock &amp;gt; Model Access?")
        except Exception as e:
            st.error(f"An unexpected error occurred: {e}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Docker file&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.11-slim
LABEL authors="naush"

WORKDIR /chatgpt-bedrock-langchain-demo
COPY requirements.txt .
RUN pip install --upgrade pip &amp;amp;&amp;amp; pip install torch --index-url https://download.pytorch.org/whl/cpu &amp;amp;&amp;amp; pip install -r requirements.txt
COPY main.py .
EXPOSE 8501
CMD ["streamlit","run","main.py"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Requirements File&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;streamlit
boto3
langchain-aws
langchain-community
langchain-core

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Setup Instructions&lt;/strong&gt;&lt;br&gt;
Clone the repository:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Clone an existing repo and &lt;code&gt;cd repository_name&lt;/code&gt;, or&lt;br&gt;
create a new folder and add the files above (main.py, requirements.txt, Dockerfile).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the required packages:&lt;br&gt;
&lt;code&gt;pip install -r requirements.txt&lt;/code&gt; or run&lt;br&gt;
&lt;code&gt;pip install streamlit boto3 langchain-aws langchain-community langchain-core&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure your AWS credentials:&lt;br&gt;
Make sure your AWS credentials are set up in ~/.aws/credentials and ~/.aws/config.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the Streamlit app:&lt;br&gt;
&lt;code&gt;streamlit run main.py&lt;/code&gt;&lt;br&gt;
Open your browser and navigate to &lt;a href="http://localhost:8501" rel="noopener noreferrer"&gt;http://localhost:8501&lt;/a&gt; to access the Streamlit app.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker setup. To run the application in Docker, build the image with&lt;br&gt;
&lt;code&gt;docker build -t bedrock-langchain-streamlit .&lt;/code&gt;&lt;br&gt;
then run the container with &lt;code&gt;docker run -p 8501:8501 bedrock-langchain-streamlit&lt;/code&gt;&lt;br&gt;
and open http://localhost:8501 to access the Streamlit app running in the Docker container.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppmd0jnf7zqu088qem9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fppmd0jnf7zqu088qem9m.png" alt="hld" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter your input in the Streamlit app interface and interact with the Titan model powered by AWS Bedrock.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxe5sinaelxtxdeb2gom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxe5sinaelxtxdeb2gom.png" alt="demo" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>llm</category>
      <category>docker</category>
      <category>python</category>
      <category>aws</category>
    </item>
    <item>
      <title>Monolith app to Cloud-Native (Re-platforming)</title>
      <dc:creator>nash9</dc:creator>
      <pubDate>Thu, 11 Dec 2025 14:15:58 +0000</pubDate>
      <link>https://forem.com/nash9/monolith-app-to-cloud-native-re-platforming-157h</link>
      <guid>https://forem.com/nash9/monolith-app-to-cloud-native-re-platforming-157h</guid>
      <description>&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This project demonstrates the migration of an on-premise Java Spring Boot application to AWS using a "Replatforming" strategy. The application is containerized using Docker and deployed to Amazon ECS (Fargate) for serverless compute, utilizing Amazon RDS for managed persistence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem Statement&lt;/strong&gt;&lt;br&gt;
An on-premise Java Spring Boot application with a MySQL database needs to be migrated to AWS to leverage cloud scalability, reliability, and managed services while minimising operational overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Migration Strategy: Replatforming&lt;/strong&gt;&lt;br&gt;
Replatforming involves moving the application to a new platform with minimal changes to the application code.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;App: Java Spring Boot "Customer Management Service."&lt;/li&gt;
&lt;li&gt; Hosting: Running on a physical Linux server (or VM) using java -jar app.jar. &lt;/li&gt;
&lt;li&gt; Database: Local MySQL instance installed on the same server.&lt;/li&gt;
&lt;li&gt; Issues: Scaling is hard (manual hardware upgrades), single point of failure (if the server dies, the app dies), maintenance overhead (patching OS).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Goals &amp;amp; Objectives:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Containerize the existing Java Spring Boot application using Docker.&lt;/li&gt;
&lt;li&gt;Deploy the containerized application to AWS using ECS Fargate.&lt;/li&gt;
&lt;li&gt;Use Amazon RDS for the MySQL database to offload database management tasks.&lt;/li&gt;
&lt;li&gt;Implement Infrastructure as Code (IaC) using Terraform for reproducible deployments.&lt;/li&gt;
&lt;li&gt;Set up a CI/CD pipeline for automated deployments using GitHub Actions (or AWS CodePipeline).&lt;/li&gt;
&lt;li&gt;Ensure security best practices, including network isolation and secrets management.&lt;/li&gt;
&lt;li&gt;Optimize for cost and performance.&lt;/li&gt;
&lt;li&gt;Document the architecture and decisions made during the migration.&lt;/li&gt;
&lt;li&gt;Provide clear instructions for running the application locally and deploying to AWS.&lt;/li&gt;
&lt;li&gt;Include monitoring and logging for operational visibility.&lt;/li&gt;
&lt;li&gt;Facilitate easy rollback and updates through deployment strategies.&lt;/li&gt;
&lt;li&gt;Ensure high availability and fault tolerance in the deployed architecture.&lt;/li&gt;
&lt;li&gt;Leverage AWS managed services to reduce operational overhead.&lt;/li&gt;
&lt;li&gt;Promote best practices in cloud architecture and DevOps.&lt;/li&gt;
&lt;li&gt;Create a modular and reusable Terraform codebase for future projects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🏗 &lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pattern:&lt;/strong&gt; Microservices-ready Containerization on Serverless Infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traffic Flow:&lt;/strong&gt; Internet -&amp;gt; Application Load Balancer (Public Subnet) -&amp;gt; ECS Fargate Tasks (Private Subnet).&lt;br&gt;
Data Flow: ECS Tasks -&amp;gt; RDS MySQL (Private Subnet).&lt;br&gt;
&lt;strong&gt;Security:&lt;/strong&gt;&lt;br&gt;
Database is strictly isolated in private subnets.&lt;br&gt;
Security Groups act as a virtual firewall (App SG -&amp;gt; DB SG on port 3306).&lt;br&gt;
Secrets managed via Environment Variables (in prod, use AWS Parameter Store).&lt;br&gt;
With this, we achieve the following:&lt;/p&gt;

&lt;p&gt;Traffic enters through the Application Load Balancer (ALB) over HTTPS/443.&lt;br&gt;
ALB distributes traffic across ECS Tasks (Docker containers) running in private subnets across two Availability Zones (AZ).&lt;br&gt;
ECS Tasks talk to Amazon RDS (MySQL) located in a private database subnet.&lt;br&gt;
ECS Tasks pull database credentials securely from AWS Systems Manager Parameter Store (No hardcoded passwords!).&lt;/p&gt;
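&lt;p&gt;Fetching a credential from Parameter Store can be sketched as follows. The parameter name and helper are illustrative, and the client is injected so the logic can be exercised without AWS; in production you would pass &lt;code&gt;boto3.client("ssm")&lt;/code&gt;, and in ECS the same value is usually injected via the task definition's &lt;code&gt;secrets&lt;/code&gt; field rather than application code.&lt;/p&gt;

```python
# Hedged sketch: pull a decrypted secret from AWS Systems Manager Parameter Store.
# The parameter name is a placeholder; pass a real boto3 SSM client in production.
def get_db_password(ssm_client, name="/customer-service/db/password"):
    resp = ssm_client.get_parameter(Name=name, WithDecryption=True)
    return resp["Parameter"]["Value"]
```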

&lt;p&gt;🧠 &lt;strong&gt;Architectural Decisions &amp;amp; Trade-offs&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Compute&lt;/strong&gt;: ECS Fargate vs. EC2
Decision: Fargate.
Reasoning: Removing the burden of OS patching and server management allows the team to focus on code.
Trade-off: Fargate is slightly more expensive per vCPU than raw EC2, but saves significant operational hours (Human cost &amp;gt; Infrastructure cost).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database&lt;/strong&gt;: Amazon RDS vs. EC2 Hosted DB
Decision: Amazon RDS.
Reasoning: Automated backups, point-in-time recovery, and easy Multi-AZ setup for High Availability.
Trade-off: Less control over the underlying OS configuration, but guarantees 99.95% uptime.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimization Strategy&lt;/strong&gt;
Cost: Used Linux-based containers (cheaper than Windows) and auto-scaling based on CPU load. (Here I used t3.micro for the DB and low CPU for Fargate; in a real scenario, I would implement ECS Service Auto Scaling based on CloudWatch CPU metrics to handle traffic spikes automatically.)
Performance: The application is stateless, allowing horizontal scaling behind the Load Balancer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;
&lt;strong&gt;Network Isolation:&lt;/strong&gt; Placed ECS tasks and RDS instances in private subnets.
&lt;strong&gt;Access Control:&lt;/strong&gt; Used Security Groups to restrict traffic between components.
&lt;strong&gt;Secrets Management:&lt;/strong&gt; AWS Parameter Store or Secrets Manager for production.
&lt;strong&gt;IAM Roles:&lt;/strong&gt; Least privilege principle applied to the ECS Task Execution Role.
&lt;strong&gt;Data Encryption:&lt;/strong&gt; Enabled encryption at rest for RDS and enforced TLS for data in transit.
&lt;strong&gt;Monitoring &amp;amp; Logging:&lt;/strong&gt; Integrated AWS CloudWatch for logs and metrics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code Tool&lt;/strong&gt;: Terraform
Reasoning: Enables version control, repeatability, and easy collaboration.
Trade-off: Initial learning curve, but long-term benefits outweigh the costs.
Modules: Used Terraform modules for VPC, ECS, RDS, and Load Balancer to promote reusability.
State Management: Used remote state storage in S3 with state locking via DynamoDB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Pipeline (GitHub Actions) Tool&lt;/strong&gt;: GitHub Actions (or AWS CodePipeline).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Reasoning: Automates build, test, and deployment processes.&lt;/p&gt;

&lt;p&gt;Trade-off: Initial setup time, but reduces manual errors and speeds up deployments.&lt;/p&gt;

&lt;p&gt;Steps: On push to main branch, build Docker image, push to ECR, and deploy to ECS.&lt;/p&gt;
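&lt;p&gt;The final deployment step can be sketched with boto3. The cluster and service names are placeholders, and the client is injected here so the call shape can be shown without AWS credentials; in the pipeline this is typically an &lt;code&gt;aws ecs update-service --force-new-deployment&lt;/code&gt; CLI call.&lt;/p&gt;

```python
# Hedged sketch of the pipeline's deploy step: force ECS to roll out new tasks.
def force_new_deployment(ecs_client, cluster="prod-cluster", service="customer-service"):
    # forceNewDeployment makes ECS start fresh tasks from the newly pushed image
    # and perform its native rolling update behind the load balancer.
    return ecs_client.update_service(
        cluster=cluster, service=service, forceNewDeployment=True
    )
```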

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhieuxhx3ilah9uv7pjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhieuxhx3ilah9uv7pjc.png" alt="Architecture-hld" width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration&lt;/strong&gt;: On every push to main, Maven builds the JAR and runs unit tests to ensure code quality.&lt;br&gt;
&lt;strong&gt;Delivery:&lt;/strong&gt;&lt;br&gt;
Authenticates with AWS using IAM credentials stored in GitHub Secrets.&lt;br&gt;
Builds a Docker image and pushes it to Amazon ECR.&lt;br&gt;
&lt;strong&gt;Deployment:&lt;/strong&gt;&lt;br&gt;
Triggers an ECS Service Update (--force-new-deployment).&lt;br&gt;
ECS pulls the new image from ECR and performs a rolling update (starts new container, health check passes, drains old container).&lt;br&gt;
Design Choice: 'Rolling Update'&lt;br&gt;
I utilized ECS's native rolling update mechanism. This ensures zero downtime during deployment.&lt;/p&gt;

&lt;p&gt;The Load Balancer keeps sending traffic to the old version until the new version is healthy (Green/Blue deployment can be added for advanced scenarios).&lt;/p&gt;

&lt;p&gt;🚀 &lt;strong&gt;How to Run Locally (Docker)&lt;/strong&gt;&lt;br&gt;
Build: &lt;code&gt;docker build -t aws-poc-app .&lt;/code&gt;&lt;br&gt;
Run: &lt;code&gt;docker run -p 8080:8080 -e DB_URL=jdbc:mysql://host.docker.internal:3306/db -e DB_USER=root -e DB_PASS=root aws-poc-app&lt;/code&gt; (alternatively, use Docker Compose if a docker-compose.yml is provided).&lt;br&gt;
Access: Open http://localhost:8080 in your browser.&lt;br&gt;
Database: Ensure a local MySQL instance is running and accessible.&lt;br&gt;
Stop: &lt;code&gt;docker stop &amp;lt;container_id&amp;gt;&lt;/code&gt;&lt;br&gt;
Remove: &lt;code&gt;docker rm &amp;lt;container_id&amp;gt;&lt;/code&gt;&lt;br&gt;
Logs: &lt;code&gt;docker logs &amp;lt;container_id&amp;gt;&lt;/code&gt;&lt;br&gt;
Debug: Use &lt;code&gt;docker exec -it &amp;lt;container_id&amp;gt; /bin/bash&lt;/code&gt; to access the container shell.&lt;br&gt;
Environment Variables: Set DB_URL, DB_USER, and DB_PASS for database connectivity.&lt;br&gt;
Network: Use host.docker.internal to connect to host services from Docker on Windows/Mac; Docker Desktop with the WSL2 backend is a beginner-friendly alternative.&lt;/p&gt;

&lt;p&gt;☁️ &lt;strong&gt;Deployment Steps (Terraform)&lt;/strong&gt;&lt;br&gt;
Navigate to infrastructure/.&lt;br&gt;
Run &lt;code&gt;terraform init&lt;/code&gt;.&lt;br&gt;
Run &lt;code&gt;terraform apply&lt;/code&gt;.&lt;br&gt;
Terraform will output the Load Balancer URL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📂 Project Structure&lt;/strong&gt;&lt;br&gt;
aws-migration-poc/&lt;br&gt;
├── src/                    # Java Spring Boot Application Source Code&lt;br&gt;
├── Dockerfile              # Dockerfile for containerizing the application&lt;br&gt;
├── infrastructure/         # Terraform scripts for AWS infrastructure&lt;br&gt;
│   ├── provider.tf         # Provider configuration for AWS&lt;br&gt;
│   ├── vpc.tf              # VPC and Subnets configuration&lt;br&gt;
│   ├── ecs-cluster.tf      # ECS Cluster&lt;br&gt;
│   ├── ecs-service.tf      # ECS Fargate Service configuration&lt;br&gt;
│   ├── ecs-task-def.tf     # ECS Task Definition&lt;br&gt;
│   ├── ecr.tf              # ECR Repository configuration&lt;br&gt;
│   ├── network.tf          # Security Groups and networking configuration&lt;br&gt;
│   ├── database.tf         # RDS Instance configuration&lt;br&gt;
│   ├── iam.tf              # IAM Roles and Policies&lt;br&gt;
│   ├── variables.tf        # Input variables for Terraform&lt;br&gt;
│   ├── outputs.tf          # Output values from Terraform&lt;br&gt;
├── README.md               # Project documentation&lt;br&gt;
└── .gitignore              # Git ignore file&lt;/p&gt;

&lt;p&gt;Follow my GitHub account for the codebase!&lt;/p&gt;

</description>
      <category>java</category>
      <category>docker</category>
      <category>architecture</category>
      <category>aws</category>
    </item>
    <item>
      <title>Migrating a Legacy Monolith to Serverless on AWS(Strangler Fig Pattern)</title>
      <dc:creator>nash9</dc:creator>
      <pubDate>Wed, 10 Dec 2025 14:40:30 +0000</pubDate>
      <link>https://forem.com/nash9/migrating-a-legacy-monolith-to-serverless-on-aws-free-tier-864</link>
      <guid>https://forem.com/nash9/migrating-a-legacy-monolith-to-serverless-on-aws-free-tier-864</guid>
      <description>&lt;p&gt;🚀 &lt;strong&gt;Overview&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This project demonstrates how to migrate a legacy monolithic application to a serverless architecture using the &lt;strong&gt;Strangler Fig Pattern&lt;/strong&gt; on AWS, leveraging:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform for Infrastructure as Code&lt;/li&gt;
&lt;li&gt;Python for application logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is to show how new features can be carved out from a monolith and replaced with serverless components—without breaking the existing system.&lt;/p&gt;

&lt;p&gt;🏗 &lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The solution consists of the following major components:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Legacy Monolith&lt;/strong&gt;&lt;br&gt;
Python Flask app running on an EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;New Serverless Components&lt;/strong&gt;&lt;br&gt;
AWS Lambda functions, DynamoDB tables, and API Gateway (the Strangler Facade), which&lt;br&gt;
routes traffic to the appropriate backend (legacy or new).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔎 Route Behavior&lt;/strong&gt;&lt;br&gt;
/users → Legacy Route&lt;br&gt;
Proxies traffic to EC2 running the Flask app.&lt;/p&gt;

&lt;p&gt;/products → Migrated Route&lt;br&gt;
Handled by Lambda + DynamoDB.&lt;/p&gt;

&lt;p&gt;/products/restock → Enhanced Route&lt;br&gt;
A POST request that simulates a failure 60% of the time to demonstrate resiliency handling.&lt;/p&gt;
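&lt;p&gt;To make the facade's decision concrete, here is a minimal Python sketch of the routing rule the three routes above imply. API Gateway performs this natively via its route configuration; the function and the backend labels are purely illustrative:&lt;/p&gt;

```python
# Illustrative sketch of the Strangler Facade's routing decision
# (API Gateway does this via route config; shown here as plain logic).

LEGACY = "ec2-flask"   # proxy integration to the EC2 Flask monolith
MIGRATED = "lambda"    # Lambda + DynamoDB integration

def route(path: str) -> str:
    """Return which backend serves a given request path."""
    if path.startswith("/products"):
        return MIGRATED   # carved out of the monolith
    return LEGACY         # everything else still hits the monolith

print(route("/users"))             # legacy route
print(route("/products"))          # migrated route
print(route("/products/restock"))  # enhanced route, also served by Lambda
```

&lt;p&gt;The strength of the pattern is visible here: adding another migrated prefix is one new branch in the facade, with no change to the monolith.&lt;/p&gt;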

&lt;p&gt;&lt;strong&gt;🖼 Architecture Diagram&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5e1kxxhea3ke4b8jxhp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5e1kxxhea3ke4b8jxhp.png" alt=" " width="652" height="652"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📦 &lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before deploying, ensure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Account (Free Tier eligible)&lt;/li&gt;
&lt;li&gt;AWS CLI installed &amp;amp; configured (aws configure)&lt;/li&gt;
&lt;li&gt;Terraform v1.0+&lt;/li&gt;
&lt;li&gt;Postman or curl for testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;📁 Project Structure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;strangler-fig-aws-migration-demo/&lt;br&gt;
│&lt;br&gt;
├── README.md&lt;br&gt;
├── infra-terraform/&lt;br&gt;
│   ├── provider.tf&lt;br&gt;
│   ├── variables.tf&lt;br&gt;
│   ├── ec2.tf&lt;br&gt;
│   ├── dynamodb.tf&lt;br&gt;
│   ├── lambda.tf&lt;br&gt;
│   ├── apigateway.tf&lt;br&gt;
│   ├── outputs.tf&lt;br&gt;
│   ├── lambda_function/&lt;br&gt;
│   │    └── lambda_function.py&lt;br&gt;
│   └── legacy_app/&lt;br&gt;
│        └── legacy_server.py&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🚀 Deployment Instructions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1️⃣ Initialize Terraform&lt;/p&gt;

&lt;p&gt;Inside infra-terraform:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform init&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
2️⃣ Deploy Infrastructure&lt;br&gt;
&lt;code&gt;terraform apply&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Type yes when prompted.&lt;/p&gt;

&lt;p&gt;⏳ Deployment takes ~2 minutes, plus an additional 3 minutes for the EC2 instance to install dependencies and start the legacy server.&lt;/p&gt;

&lt;p&gt;3️⃣ Retrieve the API URL&lt;/p&gt;

&lt;p&gt;Terraform will output something like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;api_url = "https://&amp;lt;random-id&amp;gt;.execute-api.us-east-1.amazonaws.com"&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Copy this value for testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔬 Testing Scenarios&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Scenario 1: Legacy Route (/users)&lt;br&gt;
Traffic routed to EC2 Flask app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnauwaqdtfjp87pgbvh6l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnauwaqdtfjp87pgbvh6l.png" alt=" " width="800" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scenario 2: Migrated Route (/products)&lt;br&gt;
Traffic routed to Lambda + DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fes2ezeyllxp7s6m7aw8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fes2ezeyllxp7s6m7aw8y.png" alt=" " width="800" height="659"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scenario 3: Chaos Route (/products/restock)&lt;br&gt;
A POST endpoint simulating 60% failure rate.&lt;/p&gt;

&lt;p&gt;Request:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -X POST &amp;lt;api_url&amp;gt;/products/restock&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro1huyzsmovi8f25p81h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro1huyzsmovi8f25p81h.png" alt=" " width="800" height="724"&gt;&lt;/a&gt;&lt;br&gt;
Expected Behavior:&lt;/p&gt;

&lt;p&gt;40% chance: 200 OK → {"status": "Restock Successful"}&lt;br&gt;
60% chance: 500 Internal Server Error → simulated external failure&lt;/p&gt;
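&lt;p&gt;The chaos behavior can be sketched in Python with the random draw injected as a parameter, which keeps the failure logic deterministic in tests. The function name and response shapes below are illustrative, not the exact code in lambda_function.py:&lt;/p&gt;

```python
import random

# Illustrative sketch of the restock chaos logic: fail ~60% of the time.
# rng is injectable so each branch can be forced deterministically.

def restock(fail_rate=0.6, rng=random.random):
    """Simulate restocking against a flaky external dependency."""
    if rng() < fail_rate:
        # Simulated external failure -> API Gateway surfaces a 500
        return {"statusCode": 500, "body": '{"error": "Simulated external failure"}'}
    return {"statusCode": 200, "body": '{"status": "Restock Successful"}'}

# Forcing each branch by injecting the draw:
print(restock(rng=lambda: 0.9)["statusCode"])  # 200 (0.9 is not below 0.6)
print(restock(rng=lambda: 0.1)["statusCode"])  # 500 (0.1 is below 0.6)
```

&lt;p&gt;Injecting the randomness is also how you would unit-test retry or circuit-breaker logic built on top of this endpoint.&lt;/p&gt;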

&lt;p&gt;&lt;strong&gt;🛠 Troubleshooting Guide&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;❌ 1. 502 Bad Gateway&lt;/p&gt;

&lt;p&gt;Cause: API Gateway couldn't reach the EC2 instance.&lt;br&gt;
Fix:&lt;br&gt;
Wait 2–3 minutes after deployment&lt;br&gt;
Ensure EC2 is running&lt;br&gt;
Re-run terraform apply&lt;/p&gt;

&lt;p&gt;❌ 2. 400 Bad Request&lt;/p&gt;

&lt;p&gt;Cause: Payload version mismatch.&lt;br&gt;
Fix:&lt;br&gt;
Ensure this is inside your Lambda integration block:&lt;br&gt;
payload_format_version = "2.0"&lt;/p&gt;
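&lt;p&gt;For context, payload format 2.0 changes where API Gateway puts the request path and method in the Lambda event, which is why a mismatch surfaces as a 400. A hedged sketch of reading both formats (the sample events are abbreviated to the relevant fields):&lt;/p&gt;

```python
# Where API Gateway places the path/method in each payload format.
# Sample events are trimmed to the fields this sketch needs.

V1_EVENT = {"path": "/products/restock", "httpMethod": "POST"}  # payload 1.0
V2_EVENT = {"rawPath": "/products/restock",
            "requestContext": {"http": {"method": "POST"}}}     # payload 2.0

def path_and_method(event):
    """Extract (path, method) from either payload format."""
    if "rawPath" in event:  # payload format 2.0
        return event["rawPath"], event["requestContext"]["http"]["method"]
    return event["path"], event["httpMethod"]  # payload format 1.0

print(path_and_method(V1_EVENT))  # ('/products/restock', 'POST')
print(path_and_method(V2_EVENT))  # ('/products/restock', 'POST')
```

&lt;p&gt;If your handler reads only one of these shapes, pinning payload_format_version in the Terraform integration block keeps the event and the handler in agreement.&lt;/p&gt;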

&lt;p&gt;❌ 3. 500 Internal Server Error&lt;/p&gt;

&lt;p&gt;Cause: Expected behavior for chaos testing.&lt;br&gt;
Fix:&lt;br&gt;
Run the command multiple times or remove the failure logic in lambda_function.py.&lt;/p&gt;

&lt;p&gt;❌ 4. 405 Method Not Allowed&lt;/p&gt;

&lt;p&gt;Cause: Using GET on a POST-only route.&lt;br&gt;
Fix:&lt;br&gt;
Use the correct method:&lt;br&gt;
&lt;code&gt;curl -X POST &amp;lt;api_url&amp;gt;/products/restock&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
🧹 &lt;strong&gt;Cleanup (To Avoid Charges)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When finished, destroy all resources:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform destroy&lt;/code&gt;&lt;br&gt;
Double-check the AWS Console to ensure everything is deleted.&lt;/p&gt;

&lt;p&gt;Follow me for more. Thanks!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>terraform</category>
      <category>microservices</category>
    </item>
  </channel>
</rss>
