<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Irfan Satrio</title>
    <description>The latest articles on Forem by Irfan Satrio (@irfansatrio).</description>
    <link>https://forem.com/irfansatrio</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3431291%2F0fd9f29b-eff0-44d4-ae43-7ae26de03c9f.jpg</url>
      <title>Forem: Irfan Satrio</title>
      <link>https://forem.com/irfansatrio</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/irfansatrio"/>
    <language>en</language>
    <item>
      <title>Exploring the New IPv6 and Dual Stack Connectivity in Amazon ElastiCache Serverless</title>
      <dc:creator>Irfan Satrio</dc:creator>
      <pubDate>Sun, 19 Apr 2026 11:56:09 +0000</pubDate>
      <link>https://forem.com/aws-builders/exploring-the-new-ipv6-and-dual-stack-connectivity-in-amazon-elasticache-serverless-3ofl</link>
      <guid>https://forem.com/aws-builders/exploring-the-new-ipv6-and-dual-stack-connectivity-in-amazon-elasticache-serverless-3ofl</guid>
      <description>&lt;p&gt;On April 2, 2026, Amazon Web Services introduced &lt;a href="https://aws.amazon.com/about-aws/whats-new/2026/04/amazon-elasticache-serverless-ipv6-dual-stack/" rel="noopener noreferrer"&gt;Amazon ElastiCache Serverless now supports IPv6 and dual stack connectivity&lt;/a&gt;, adding support for IPv6 and dual stack connectivity on ElastiCache Serverless. This expands beyond the previous IPv4-only model and allows a cache to accept connections over both IPv4 and IPv6 simultaneously, enabling more flexible connectivity patterns.&lt;/p&gt;

&lt;p&gt;In this post, I put the new dual stack capability to the test by verifying IPv4 and IPv6 connectivity on ElastiCache Serverless.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;I started by enabling IPv6 at the VPC level by attaching an Amazon-provided IPv6 CIDR block, allowing resources inside the VPC to communicate over IPv6.&lt;/p&gt;
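
&lt;p&gt;For reference, the same step can be done from the AWS CLI. This is a minimal sketch with hypothetical VPC and subnet IDs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Attach an Amazon-provided IPv6 CIDR block to the VPC
aws ec2 associate-vpc-cidr-block \
    --vpc-id vpc-0123456789abcdef0 \
    --amazon-provided-ipv6-cidr-block

# Assign a /64 from that block to each subnet the cache will use
aws ec2 associate-subnet-cidr-block \
    --subnet-id subnet-0123456789abcdef0 \
    --ipv6-cidr-block 2600:1f18:1234:5600::/64
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;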

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft93m5pchopuxg04z7diq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft93m5pchopuxg04z7diq.png" alt=" " width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then deployed an ElastiCache Serverless instance and selected dual stack as the Network Type during creation. This option was introduced in the April 2 update and allows the cache to handle both IPv4 and IPv6 connections at the same time. The selected subnets must support both IPv4 and IPv6 address space for this configuration to work.&lt;/p&gt;
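
&lt;p&gt;The same deployment can be sketched with the AWS CLI. The resource IDs below are hypothetical, and I am assuming the new option uses the same &lt;code&gt;--network-type&lt;/code&gt; convention (&lt;code&gt;ipv4&lt;/code&gt;, &lt;code&gt;ipv6&lt;/code&gt;, &lt;code&gt;dual_stack&lt;/code&gt;) as node-based ElastiCache clusters:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create a serverless cache that accepts IPv4 and IPv6 connections
# (subnets must have both IPv4 and IPv6 address space)
aws elasticache create-serverless-cache \
    --serverless-cache-name demo-dual-stack \
    --engine valkey \
    --subnet-ids subnet-0123456789abcdef0 subnet-0fedcba9876543210 \
    --security-group-ids sg-0123456789abcdef0 \
    --network-type dual_stack
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;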

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajw0gdgc6ddv2t74uuk4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajw0gdgc6ddv2t74uuk4.png" alt=" " width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Verification
&lt;/h2&gt;

&lt;p&gt;From an EC2 instance with IPv6 enabled, I first verified that the cache resolves to an IPv6 address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nslookup &lt;span class="nt"&gt;-type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;AAAA &amp;lt;cache-endpoint&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzi8mgmx6imzp8bcys9v3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzi8mgmx6imzp8bcys9v3.png" alt=" " width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The result shows AAAA records, indicating that the cache is reachable over IPv6.&lt;/p&gt;

&lt;p&gt;Next, I validated connectivity using the approach recommended by Amazon Web Services for ElastiCache Serverless, using &lt;code&gt;openssl s_client&lt;/code&gt; with filtered output for clarity.&lt;/p&gt;

&lt;h3&gt;
  
  
  IPv6 Connectivity
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl s_client &lt;span class="nt"&gt;-connect&lt;/span&gt; &amp;lt;cache-endpoint&amp;gt;:6379 &lt;span class="nt"&gt;-6&lt;/span&gt; 2&amp;gt;&amp;amp;1 | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s2"&gt;"Connecting|CONNECTED|Verification|Protocol"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmolw380fceyduje6lcm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmolw380fceyduje6lcm.png" alt=" " width="573" height="84"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;CONNECTED&lt;/code&gt; status confirms that a TCP connection is successfully established, while &lt;code&gt;Verification: OK&lt;/code&gt; indicates that the TLS certificate is valid.&lt;/p&gt;

&lt;h3&gt;
  
  
  IPv4 Connectivity
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl s_client &lt;span class="nt"&gt;-connect&lt;/span&gt; &amp;lt;cache-endpoint&amp;gt;:6379 &lt;span class="nt"&gt;-4&lt;/span&gt; 2&amp;gt;&amp;amp;1 | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s2"&gt;"Connecting|CONNECTED|Verification|Protocol"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7l51xmi6asxdunqoh091.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7l51xmi6asxdunqoh091.png" alt=" " width="567" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The IPv4 test also succeeds, showing that the same cache is reachable over IPv4 with a valid TLS session.&lt;/p&gt;

&lt;h2&gt;
  
  
  Analysis
&lt;/h2&gt;

&lt;p&gt;From this test, the dual stack capability in Amazon ElastiCache Serverless works exactly as described by Amazon Web Services. The cache resolves to an IPv6 address and accepts TLS connections over both IPv6 and IPv4 from the same endpoint. This supports a gradual migration path where IPv6 can be introduced alongside existing IPv4 traffic without impacting application connectivity.&lt;/p&gt;

&lt;p&gt;Beyond dual stack, IPv6-only configuration is also supported as a separate option, allowing workloads that fully transition to IPv6 to operate without relying on IPv4 addressing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Based on this hands-on test, the dual stack capability in ElastiCache Serverless performs well in real usage. The same cache can be accessed over IPv4 and IPv6, with both paths functioning as expected. The capability is available at no additional charge across all AWS Regions, making it easy to adopt IPv6 in existing ElastiCache Serverless workloads as part of a gradual transition.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>networking</category>
      <category>serverless</category>
    </item>
    <item>
      <title>How CloudFront Delivers Traffic to AWS Workloads</title>
      <dc:creator>Irfan Satrio</dc:creator>
      <pubDate>Mon, 02 Mar 2026 10:11:28 +0000</pubDate>
      <link>https://forem.com/irfansatrio/how-cloudfront-delivers-traffic-to-aws-workloads-3m38</link>
      <guid>https://forem.com/irfansatrio/how-cloudfront-delivers-traffic-to-aws-workloads-3m38</guid>
      <description>&lt;p&gt;Traffic delivery on AWS often starts at the edge and moves inward toward application resources. In many architectures, &lt;strong&gt;Amazon CloudFront&lt;/strong&gt; acts as the entry point, handling client requests before they ever reach your VPC. To design these setups correctly, it helps to look at how CloudFront actually connects to backend services and what role networking plays in that path.&lt;/p&gt;

&lt;p&gt;This article walks through how CloudFront forwards requests to AWS workloads, how common origin configurations work, and how newer options like VPC origins change the picture.&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudFront’s Position in the Architecture
&lt;/h2&gt;

&lt;p&gt;CloudFront is a global content delivery network that operates outside your VPC. Requests from users are received at edge locations and then forwarded to an origin when needed. That origin can be a storage service, a load balancer, or another AWS-managed endpoint.&lt;/p&gt;

&lt;p&gt;Even though CloudFront integrates tightly with &lt;strong&gt;Amazon Web Services&lt;/strong&gt;, it does not run inside your VPC. This separation is intentional. CloudFront focuses on edge delivery, caching, and security, while your VPC remains responsible for networking, routing, and workload isolation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Origin Types Behind CloudFront
&lt;/h2&gt;

&lt;p&gt;CloudFront supports several origin types, each with different networking implications.&lt;/p&gt;

&lt;p&gt;When using object storage as an origin, CloudFront retrieves content from a regional endpoint and caches it at the edge. This model works well for static assets and removes the need for compute resources to handle delivery traffic.&lt;/p&gt;

&lt;p&gt;For application workloads, CloudFront often forwards requests to a load balancer. The load balancer then distributes traffic to backend services such as EC2 instances or container-based workloads. In this setup, CloudFront handles edge-level concerns, while the VPC manages routing, security groups, and subnet placement.&lt;/p&gt;

&lt;p&gt;The key point is that CloudFront never forwards traffic directly to private instances. There is always an intermediary origin endpoint that CloudFront can reach.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Traffic Reaches the VPC
&lt;/h2&gt;

&lt;p&gt;When CloudFront forwards a request, it does so over AWS-managed networking. The request enters the VPC through the origin endpoint, not through random ingress points.&lt;/p&gt;

&lt;p&gt;For load balancer–based architectures, this typically means that CloudFront forwards requests to a public-facing endpoint, the load balancer applies routing logic, and backend services receive traffic inside private subnets.&lt;/p&gt;

&lt;p&gt;Inbound access is controlled at multiple layers. Security groups restrict which sources can reach the origin, and application routing determines how traffic is handled once inside the VPC. CloudFront’s IP ranges are often explicitly allowed to limit exposure and keep access paths predictable.&lt;/p&gt;
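
&lt;p&gt;One common way to express this is a security group rule that allows HTTPS only from CloudFront's origin-facing managed prefix list. The security group ID below is hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Look up the CloudFront origin-facing managed prefix list in this Region
PL_ID=$(aws ec2 describe-managed-prefix-lists \
    --filters Name=prefix-list-name,Values=com.amazonaws.global.cloudfront.origin-facing \
    --query 'PrefixLists[0].PrefixListId' --output text)

# Allow HTTPS to the origin only from CloudFront's origin-facing ranges
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions IpProtocol=tcp,FromPort=443,ToPort=443,PrefixListIds="[{PrefixListId=$PL_ID}]"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Because the prefix list is managed by AWS, the rule stays current as CloudFront's address ranges change.&lt;/p&gt;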

&lt;h2&gt;
  
  
  Why CloudFront Is Kept Separate from the VPC
&lt;/h2&gt;

&lt;p&gt;Keeping CloudFront outside the VPC simplifies both scaling and security. The edge layer can absorb traffic spikes, cache responses, and apply protections before requests ever reach your network.&lt;/p&gt;

&lt;p&gt;From a networking perspective, this separation also keeps VPC design consistent. Subnets, route tables, and gateways behave the same way regardless of whether traffic originates from CloudFront or another external client. The difference lies in how traffic is filtered and controlled before it arrives.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Note on CloudFront VPC Origins
&lt;/h2&gt;

&lt;p&gt;AWS has also introduced CloudFront VPC Origins, which allow CloudFront to connect privately to origins in private subnets without exposing them to the public internet.&lt;/p&gt;

&lt;p&gt;In this model, CloudFront still operates outside the VPC, but it forwards traffic to selected private resources using AWS-managed connectivity. This reduces the need for internet-facing origins and helps tighten access control for sensitive workloads.&lt;/p&gt;

&lt;p&gt;VPC origins do not change CloudFront’s role as an edge service, but they provide more flexibility in how backend connectivity is designed, especially for architectures that prioritize private access paths.&lt;/p&gt;
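
&lt;p&gt;As a rough sketch, a VPC origin is created as its own resource and then referenced from the distribution. The internal load balancer ARN here is hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Register an internal ALB in a private subnet as a CloudFront VPC origin
aws cloudfront create-vpc-origin \
    --vpc-origin-endpoint-config \
    Name=internal-alb-origin,Arn=arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/internal-alb/0123456789abcdef,HTTPPort=80,HTTPSPort=443,OriginProtocolPolicy=https-only
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;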

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;CloudFront plays a distinct role in AWS networking by handling edge delivery while relying on well-defined origin paths into the VPC. Whether traffic flows through public endpoints or newer VPC origin integrations, the underlying principle remains the same: CloudFront delivers requests to controlled entry points, and the VPC governs what happens next.&lt;/p&gt;

&lt;p&gt;In a follow-up article, I will explore this topic further by discussing CloudFront VPC Origins conceptually and how they compare with traditional public origin designs in AWS architectures.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>networking</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How DNS Works Inside an AWS VPC</title>
      <dc:creator>Irfan Satrio</dc:creator>
      <pubDate>Sat, 21 Feb 2026 13:09:41 +0000</pubDate>
      <link>https://forem.com/irfansatrio/how-dns-works-inside-an-aws-vpc-1jb6</link>
      <guid>https://forem.com/irfansatrio/how-dns-works-inside-an-aws-vpc-1jb6</guid>
      <description>&lt;p&gt;In AWS networking, resources resolve endpoints, services communicate, and applications run as expected. Within a VPC, DNS plays an important role in how services discover each other and how traffic is route. Looking at how DNS actually works inside AWS helps explain why traffic flows the way it does and why certain connections succeed or fail.&lt;/p&gt;

&lt;p&gt;This article walks through DNS inside an AWS VPC from a networking perspective, focusing on resolution flow rather than application logic.&lt;/p&gt;

&lt;h3&gt;
  
  
  DNS as a Core VPC Service
&lt;/h3&gt;

&lt;p&gt;Every VPC comes with a built-in DNS resolver provided by AWS. This resolver is available at a reserved IP address within the VPC and is automatically used by resources unless configured otherwise.&lt;/p&gt;

&lt;p&gt;When an EC2 instance makes a DNS query, the request does not go directly to the internet. Instead, it is handled internally by the VPC DNS resolver, which decides how and where the name should be resolved.&lt;/p&gt;

&lt;p&gt;This design allows AWS to integrate DNS tightly with networking, compute, and managed services.&lt;/p&gt;
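
&lt;p&gt;You can see this from any instance. In a VPC with the (hypothetical) CIDR 10.0.0.0/16, the resolver sits at the VPC base address plus two, and the link-local address 169.254.169.253 reaches the same resolver:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Query the VPC resolver directly at the reserved .2 address
dig +short example.com @10.0.0.2

# The link-local resolver address works from any VPC, regardless of CIDR
dig +short example.com @169.254.169.253
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;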

&lt;h3&gt;
  
  
  The Role of VPC DNS Settings
&lt;/h3&gt;

&lt;p&gt;DNS behavior in a VPC is controlled by two main settings: DNS resolution and DNS hostnames.&lt;/p&gt;

&lt;p&gt;DNS resolution determines whether resources in the VPC can resolve domain names at all. When enabled, instances can query the VPC resolver for both internal and external domains. DNS hostnames determine whether AWS assigns DNS names to resources such as EC2 instances and load balancers.&lt;/p&gt;

&lt;p&gt;In most cases, both settings are enabled by default. Disabling them is uncommon and usually reserved for specialized networking setups.&lt;/p&gt;
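
&lt;p&gt;Both settings can be inspected and changed from the CLI. The VPC ID below is hypothetical, and note that each attribute must be modified in a separate call:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Check the current value of one attribute
aws ec2 describe-vpc-attribute \
    --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport

# Enable DNS resolution and DNS hostnames (one attribute per call)
aws ec2 modify-vpc-attribute \
    --vpc-id vpc-0123456789abcdef0 --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute \
    --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames "{\"Value\":true}"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;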

&lt;h3&gt;
  
  
  Resolving Public Domain Names from a VPC
&lt;/h3&gt;

&lt;p&gt;When an instance inside a VPC resolves a public domain name, the request is first sent to the VPC DNS resolver. The resolver then queries public DNS infrastructure on behalf of the instance and returns the result.&lt;/p&gt;

&lt;p&gt;From the instance’s perspective, DNS resolution works as expected, even if the subnet is private. The key point is that DNS resolution itself does not require internet access. Only the subsequent network traffic does.&lt;/p&gt;

&lt;p&gt;This is why private instances can resolve external domain names even when outbound connectivity is restricted or routed through a NAT Gateway.&lt;/p&gt;

&lt;h3&gt;
  
  
  Internal DNS Names and AWS Resources
&lt;/h3&gt;

&lt;p&gt;AWS automatically creates DNS records for many resources inside a VPC. EC2 instances, load balancers, and certain managed services are assigned internal DNS names that resolve to private IP addresses.&lt;/p&gt;

&lt;p&gt;When one resource communicates with another using these names, the resolution happens entirely within the VPC. Traffic stays internal and does not involve the internet.&lt;/p&gt;

&lt;p&gt;This internal DNS behavior is what enables service-to-service communication without hardcoding IP addresses, which would otherwise change over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Private DNS and Service Integration
&lt;/h3&gt;

&lt;p&gt;DNS inside a VPC becomes more powerful when private DNS is involved. With private hosted zones, domain names can be resolved only within one or more VPCs.&lt;/p&gt;

&lt;p&gt;This allows teams to use familiar domain naming patterns for internal services while keeping them inaccessible from outside. Applications can rely on stable names even as infrastructure scales or changes.&lt;/p&gt;

&lt;p&gt;Private DNS is commonly used for internal APIs, microservices, and shared services across multiple environments.&lt;/p&gt;
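
&lt;p&gt;Creating a private hosted zone is a single Route 53 call that associates the zone with a VPC. The domain name and VPC ID below are hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# A hosted zone associated with a VPC is resolvable only inside that VPC
aws route53 create-hosted-zone \
    --name internal.example.com \
    --caller-reference "$(date +%s)" \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0 \
    --hosted-zone-config Comment="internal services",PrivateZone=true
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;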

&lt;h3&gt;
  
  
  How DNS Works with Managed AWS Services
&lt;/h3&gt;

&lt;p&gt;Many AWS services rely heavily on DNS to function correctly. Endpoints for storage, databases, and messaging services are exposed as DNS names rather than fixed IPs.&lt;/p&gt;

&lt;p&gt;When accessed from within a VPC, these names often resolve to internal addresses, especially when VPC endpoints are used. This keeps traffic inside the AWS network and avoids unnecessary exposure to the internet.&lt;/p&gt;

&lt;p&gt;From a networking standpoint, DNS acts as the glue that connects routing, endpoints, and service access together.&lt;/p&gt;

&lt;h3&gt;
  
  
  DNS Resolution and Network Design
&lt;/h3&gt;

&lt;p&gt;DNS decisions influence how traffic flows, even though they do not move packets themselves. A resolved IP address determines whether traffic stays within the VPC, goes through a NAT Gateway, or exits via an Internet Gateway.&lt;/p&gt;

&lt;p&gt;Because of this, DNS should be considered part of network design rather than an afterthought. Clear domain naming, consistent use of private DNS, and an understanding of resolution paths make architectures easier to reason about and troubleshoot.&lt;/p&gt;

&lt;h3&gt;
  
  
  Common Sources of DNS Confusion
&lt;/h3&gt;

&lt;p&gt;DNS issues inside a VPC often come from assumptions rather than misconfigurations. Expecting private instances to resolve names without DNS resolution enabled, confusing public and private DNS records, or assuming DNS queries require internet access are common examples.&lt;/p&gt;

&lt;p&gt;When troubleshooting, checking VPC DNS settings and understanding which resolver is being used often leads to quicker answers than inspecting security rules or routes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;DNS inside a VPC is simple by design, but deeply integrated with AWS networking. The VPC DNS resolver handles both internal and external name resolution in a controlled and predictable way. Once you understand where DNS queries go and how results are returned, it becomes much easier to reason about connectivity, service access, and network behavior across AWS environments.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>networking</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How Traffic Leaves a Private Subnet in AWS</title>
      <dc:creator>Irfan Satrio</dc:creator>
      <pubDate>Sun, 08 Feb 2026 05:48:35 +0000</pubDate>
      <link>https://forem.com/irfansatrio/how-traffic-leaves-a-private-subnet-in-aws-4770</link>
      <guid>https://forem.com/irfansatrio/how-traffic-leaves-a-private-subnet-in-aws-4770</guid>
      <description>&lt;p&gt;Understanding how traffic leaves a private subnet is an important part of AWS networking. Private subnets are often described as “not connected to the internet,” yet instances inside them can still download updates, call external APIs, or access managed AWS services. This behavior is fully intentional and driven by routing decisions inside the VPC. In this article, we’ll walk through how outbound traffic from a private subnet actually works, step by step, without jumping straight into complex architectures.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Makes a Subnet “Private”
&lt;/h3&gt;

&lt;p&gt;A subnet in AWS is considered private not because of a special flag, but because of its routing behavior. A private subnet simply does not have a route that sends internet-bound traffic directly to an Internet Gateway (IGW).&lt;/p&gt;

&lt;p&gt;Most private subnets have a route table that looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A local route for communication within the VPC (for example, 10.0.0.0/16 local)&lt;/li&gt;
&lt;li&gt;A default route (0.0.0.0/0) pointing to a NAT Gateway, or no default route at all&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This small difference in routing has a big impact. Without a direct route to an IGW, resources in the subnet cannot accept inbound connections from the internet, even if security groups allow it.&lt;/p&gt;
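
&lt;p&gt;A quick way to confirm which kind of subnet you are looking at is to list its routes. With a hypothetical route table ID:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# A private route table shows only the local route, or a 0.0.0.0/0
# route whose target is a NAT Gateway rather than an Internet Gateway
aws ec2 describe-route-tables \
    --route-table-ids rtb-0123456789abcdef0 \
    --query 'RouteTables[0].Routes[*].[DestinationCidrBlock,GatewayId,NatGatewayId]' \
    --output table
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;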

&lt;h3&gt;
  
  
  The Role of the NAT Gateway
&lt;/h3&gt;

&lt;p&gt;The most common way for traffic to leave a private subnet is through a NAT Gateway. A NAT Gateway acts as a controlled exit point for outbound traffic while preventing unsolicited inbound access.&lt;/p&gt;

&lt;p&gt;The key idea is simple. Private instances never talk to the internet directly. They send traffic to the NAT Gateway, which then communicates with the internet on their behalf.&lt;/p&gt;

&lt;p&gt;A typical setup looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Private subnet route table: 0.0.0.0/0 pointing to a NAT Gateway&lt;/li&gt;
&lt;li&gt;NAT Gateway placed in a public subnet&lt;/li&gt;
&lt;li&gt;The public subnet has a route to the IGW&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The NAT Gateway performs network address translation, replacing the private source IP with its own public IP. Responses from the internet return to the NAT Gateway, which then forwards them back to the original private instance.&lt;/p&gt;

&lt;p&gt;From the instance’s perspective, it can reach the internet, but it is never directly exposed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Private Subnets Cannot Receive Inbound Internet Traffic
&lt;/h3&gt;

&lt;p&gt;Even though private instances can send outbound requests, they cannot be reached from the internet. This is because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They do not have public IP addresses&lt;/li&gt;
&lt;li&gt;There is no route from the IGW back to their subnet&lt;/li&gt;
&lt;li&gt;The NAT Gateway only allows response traffic for connections initiated from inside the VPC&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This design is intentional. It ensures that private subnets remain protected by default while still being useful for tasks like software updates, external API calls, or license validation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fully Isolated Private Subnets
&lt;/h3&gt;

&lt;p&gt;Not all private subnets need outbound access. Some are intentionally fully isolated.&lt;/p&gt;

&lt;p&gt;In these cases, the route table contains only the local VPC route. With no default route at all, instances in these subnets cannot leave the VPC.&lt;/p&gt;

&lt;p&gt;This pattern is common for database tiers, internal batch jobs, and highly sensitive workloads. Isolation at the routing level provides a strong security boundary even before security groups or network ACLs are considered.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using VPC Endpoints Instead of the Internet
&lt;/h3&gt;

&lt;p&gt;Another way traffic can leave a private subnet, without actually leaving the AWS network, is through VPC endpoints.&lt;/p&gt;

&lt;p&gt;With gateway or interface endpoints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traffic to services like Amazon S3 or DynamoDB stays within AWS&lt;/li&gt;
&lt;li&gt;No NAT Gateway or internet access is required&lt;/li&gt;
&lt;li&gt;Routing remains private and predictable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this case, the private subnet still does not have internet access, but it can communicate with specific AWS services efficiently and securely.&lt;/p&gt;
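
&lt;p&gt;A gateway endpoint for S3, for example, is created against the VPC and attached to the private route tables. The IDs and Region below are hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Add an S3 gateway endpoint; AWS inserts the matching routes
# into the listed route tables automatically
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;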

&lt;h3&gt;
  
  
  End-to-End Traffic Flow Examples
&lt;/h3&gt;

&lt;p&gt;Seeing the full flow helps connect the pieces.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outbound internet access from a private subnet&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An application in a private subnet makes an HTTP request to an external API.&lt;/li&gt;
&lt;li&gt;The route table sends traffic to the NAT Gateway.&lt;/li&gt;
&lt;li&gt;The NAT Gateway forwards the request through the IGW.&lt;/li&gt;
&lt;li&gt;The response returns to the NAT Gateway.&lt;/li&gt;
&lt;li&gt;The NAT Gateway sends it back to the private instance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Accessing S3 through a VPC endpoint&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An instance sends traffic to the S3 service.&lt;/li&gt;
&lt;li&gt;The route table matches the endpoint route.&lt;/li&gt;
&lt;li&gt;Traffic stays inside the AWS network.&lt;/li&gt;
&lt;li&gt;No NAT Gateway or IGW is involved.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These flows show that private does not mean cut off. It means controlled and intentional.&lt;/p&gt;

&lt;h3&gt;
  
  
  Common Routing Patterns for Private Subnets
&lt;/h3&gt;

&lt;p&gt;As environments grow, consistent routing patterns help avoid confusion:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use separate route tables for private subnets with NAT access and fully isolated subnets&lt;/li&gt;
&lt;li&gt;Place NAT Gateways in dedicated public subnets&lt;/li&gt;
&lt;li&gt;Avoid mixing IGW and NAT routes in the same route table&lt;/li&gt;
&lt;li&gt;Name route tables clearly to reflect their purpose&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These practices make troubleshooting and scaling much easier over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Traffic leaving a private subnet in AWS follows clear and deliberate rules. Whether it exits through a NAT Gateway, stays within AWS via a VPC endpoint, or does not leave the VPC at all depends entirely on routing decisions. Once you understand that private subnets are defined by how traffic flows, not by hidden restrictions, designing secure and predictable networks becomes much easier.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>networking</category>
    </item>
    <item>
      <title>Content Delivery Patterns on AWS: CloudFront, ALB, and S3</title>
      <dc:creator>Irfan Satrio</dc:creator>
      <pubDate>Sat, 27 Dec 2025 03:17:08 +0000</pubDate>
      <link>https://forem.com/irfansatrio/content-delivery-patterns-on-aws-cloudfront-alb-and-s3-23i7</link>
      <guid>https://forem.com/irfansatrio/content-delivery-patterns-on-aws-cloudfront-alb-and-s3-23i7</guid>
      <description>&lt;p&gt;Delivering content reliably and at scale is a fundamental requirement for modern applications. As user bases grow and traffic patterns become increasingly global, a simple server-centric delivery model is no longer sufficient. Latency, availability, and security concerns demand architectures that can distribute content efficiently while maintaining strong control over access and traffic flow.&lt;/p&gt;

&lt;p&gt;On AWS, content delivery patterns commonly revolve around three core services: Amazon S3, Application Load Balancer (ALB), and Amazon CloudFront. Each plays a distinct role in how content is stored, processed, and delivered to end users. Understanding how these components interact is essential for designing scalable, performant, and secure systems.&lt;/p&gt;

&lt;p&gt;This article examines the theory behind common content delivery patterns using CloudFront, ALB, and S3, explains when each pattern is appropriate, and highlights the architectural trade-offs involved.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Role of Content Delivery in Cloud Architectures
&lt;/h2&gt;

&lt;p&gt;Content delivery refers to the process of serving static or dynamic content to users with minimal latency and high reliability. This includes assets such as images, videos, JavaScript files, APIs, and even full web applications.&lt;/p&gt;

&lt;p&gt;In cloud environments, content delivery is not just about speed. It also involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reducing load on origin systems&lt;/li&gt;
&lt;li&gt;Absorbing traffic spikes and DDoS attacks&lt;/li&gt;
&lt;li&gt;Enforcing security controls close to the user&lt;/li&gt;
&lt;li&gt;Ensuring global availability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS achieves these goals by separating content storage, request handling, and edge delivery into specialized services that can be composed into flexible patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon S3 as the Content Origin
&lt;/h2&gt;

&lt;p&gt;Amazon S3 is often the starting point for content delivery architectures. It provides highly durable object storage designed for static content such as images, CSS, JavaScript, documents, and media files.&lt;/p&gt;

&lt;p&gt;S3 is inherently scalable and does not require capacity planning. However, when accessed directly from clients, S3 endpoints may introduce higher latency for users located far from the bucket’s region. Additionally, direct access limits the ability to apply advanced request routing, caching logic, or application-layer security.&lt;/p&gt;

&lt;p&gt;For these reasons, S3 is most effective when used as an origin rather than a direct delivery endpoint.&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudFront as the Global Delivery Layer
&lt;/h2&gt;

&lt;p&gt;Amazon CloudFront is AWS’s content delivery network (CDN) designed to cache and serve content from edge locations close to end users. CloudFront sits in front of origins such as S3 buckets or ALBs and handles incoming requests at the edge.&lt;/p&gt;

&lt;p&gt;By caching content geographically closer to users, CloudFront significantly reduces latency and origin load. It also integrates natively with AWS security services, including AWS Shield, AWS WAF, and IAM-based access controls.&lt;/p&gt;

&lt;p&gt;CloudFront is not limited to static content. It can also front dynamic origins, making it a central component in many delivery patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 1: CloudFront + S3 for Static Content Delivery
&lt;/h2&gt;

&lt;p&gt;The simplest and most common pattern is CloudFront in front of an S3 bucket. In this model, S3 stores static assets, while CloudFront acts as the global entry point.&lt;/p&gt;

&lt;p&gt;Requests from users are routed to the nearest CloudFront edge location. If the content is cached, it is served immediately. If not, CloudFront retrieves the object from S3, caches it, and then delivers it to the user.&lt;/p&gt;

&lt;p&gt;This pattern offers several advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Low latency global delivery&lt;/li&gt;
&lt;li&gt;Reduced direct exposure of the S3 bucket&lt;/li&gt;
&lt;li&gt;Cost-efficient scaling for high traffic volumes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Security is typically enhanced by restricting S3 bucket access so that objects can only be retrieved via CloudFront, using mechanisms such as Origin Access Control (OAC).&lt;/p&gt;
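
&lt;p&gt;As an illustration, an OAC-style bucket policy has roughly this shape. It allows the CloudFront service principal to read objects only when the request is signed for a specific distribution (the bucket name, account ID, and distribution ID below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAC",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;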

&lt;p&gt;This pattern is ideal for static websites, asset hosting, and media distribution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 2: CloudFront + ALB for Dynamic Content
&lt;/h2&gt;

&lt;p&gt;While S3 excels at static content, dynamic applications require request processing, routing, and compute. In these cases, an Application Load Balancer (ALB) becomes the origin behind CloudFront.&lt;/p&gt;

&lt;p&gt;ALB distributes incoming requests to backend services such as EC2 instances, ECS tasks, or EKS pods. CloudFront sits in front, terminating client connections at the edge and forwarding requests to the ALB when necessary.&lt;/p&gt;

&lt;p&gt;This pattern allows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Edge-level caching for selected dynamic responses&lt;/li&gt;
&lt;li&gt;TLS termination and security enforcement close to users&lt;/li&gt;
&lt;li&gt;Path-based or host-based routing at the ALB layer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although dynamic responses are often less cacheable, CloudFront still provides benefits such as connection reuse, DDoS protection, and consistent global entry points.&lt;/p&gt;

&lt;p&gt;This pattern is commonly used for APIs, web applications, and microservice-based backends.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 3: Hybrid Content Delivery (CloudFront + S3 + ALB)
&lt;/h2&gt;

&lt;p&gt;Many real-world architectures combine both static and dynamic delivery into a single CloudFront distribution. In this hybrid pattern, CloudFront routes requests to different origins based on path patterns.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requests to &lt;code&gt;/static/*&lt;/code&gt; are routed to an S3 origin&lt;/li&gt;
&lt;li&gt;Requests to &lt;code&gt;/api/*&lt;/code&gt; are routed to an ALB origin&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach centralizes content delivery under a single domain while allowing each type of content to be served by the most appropriate backend.&lt;/p&gt;
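
&lt;p&gt;In CloudFormation terms, this path-based routing can be sketched as a trimmed &lt;code&gt;DistributionConfig&lt;/code&gt;. The origin domain names are placeholders, and required properties such as cache policies and origin configs are omitted for brevity:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Trimmed sketch, not a complete template
DistributionConfig:
  Origins:
    - Id: s3-origin
      DomainName: example-bucket.s3.us-east-1.amazonaws.com
    - Id: alb-origin
      DomainName: example-alb-123456.us-east-1.elb.amazonaws.com
  DefaultCacheBehavior:
    TargetOriginId: s3-origin
    ViewerProtocolPolicy: redirect-to-https
  CacheBehaviors:
    - PathPattern: /static/*
      TargetOriginId: s3-origin
      ViewerProtocolPolicy: redirect-to-https
    - PathPattern: /api/*
      TargetOriginId: alb-origin
      ViewerProtocolPolicy: https-only
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;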

&lt;p&gt;Hybrid delivery improves operational simplicity and performance. Static assets are cached aggressively at the edge, while dynamic requests are forwarded efficiently to application services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security and Access Control Considerations
&lt;/h2&gt;

&lt;p&gt;Content delivery patterns must be designed with security in mind. CloudFront plays a critical role by acting as a protective layer in front of origins.&lt;/p&gt;

&lt;p&gt;Common security practices include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Restricting S3 bucket access to CloudFront only&lt;/li&gt;
&lt;li&gt;Using AWS WAF at the CloudFront level to filter malicious traffic&lt;/li&gt;
&lt;li&gt;Enforcing HTTPS and modern TLS policies&lt;/li&gt;
&lt;li&gt;Limiting ALB exposure to CloudFront IP ranges or private networks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By ensuring that origins are not directly accessible from the internet, architectures reduce attack surfaces and enforce consistent access policies.&lt;/p&gt;
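
&lt;p&gt;For the ALB restriction, one common approach is the AWS-managed prefix list of CloudFront origin-facing IPs. A sketch with the AWS CLI (the security group ID and prefix list ID are placeholders; look up the actual prefix list ID in your region first):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Find the CloudFront origin-facing managed prefix list in this region
aws ec2 describe-managed-prefix-lists \
  --filters Name=prefix-list-name,Values=com.amazonaws.global.cloudfront.origin-facing

# Allow HTTPS to the ALB security group only from that prefix list
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions '[{"IpProtocol":"tcp","FromPort":443,"ToPort":443,"PrefixListIds":[{"PrefixListId":"pl-example1234"}]}]'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;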

&lt;h2&gt;
  
  
  Performance and Scalability Implications
&lt;/h2&gt;

&lt;p&gt;CloudFront offloads a significant portion of traffic from origin systems. This reduces compute load, improves response times, and allows backend services to scale more predictably.&lt;/p&gt;

&lt;p&gt;ALB scales automatically with traffic volume, while S3 requires no scaling management at all. Together, these services enable architectures that can handle sudden traffic spikes without manual intervention.&lt;/p&gt;

&lt;p&gt;Caching behavior, TTL settings, and invalidation strategies become important tuning parameters to balance freshness and performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Content delivery on AWS requires selecting the right service for the right workload. CloudFront, ALB, and S3 each address different aspects of delivering content at scale. S3 provides durable and scalable storage, ALB handles intelligent request routing and application traffic, and CloudFront delivers content globally with low latency and strong security.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>networking</category>
      <category>cloud</category>
    </item>
    <item>
      <title>From Network Segmentation to Micro-segmentation on AWS</title>
      <dc:creator>Irfan Satrio</dc:creator>
      <pubDate>Fri, 19 Dec 2025 11:05:37 +0000</pubDate>
      <link>https://forem.com/irfansatrio/from-network-segmentation-to-micro-segmentation-on-aws-1jc8</link>
      <guid>https://forem.com/irfansatrio/from-network-segmentation-to-micro-segmentation-on-aws-1jc8</guid>
      <description>&lt;p&gt;Building secure and scalable systems on AWS begins with a clearly defined network architecture. As applications grow in size and complexity, security can no longer rely on a single perimeter. Instead, protection depends on how workloads are isolated, how traffic is segmented, and how access between components is explicitly controlled. Network segmentation and micro-segmentation are foundational principles for achieving strong security, reducing blast radius, and improving operational resilience in cloud environments.&lt;/p&gt;

&lt;p&gt;In this article, we examine the concepts of network segmentation and micro-segmentation on AWS, how they differ, why both are necessary, and how AWS networking constructs enable granular traffic control aligned with modern cloud security best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Network Segmentation
&lt;/h2&gt;

&lt;p&gt;Network segmentation is the practice of dividing a network into multiple isolated segments to control traffic flow and establish clear trust boundaries. Rather than allowing unrestricted east-west communication, segmentation ensures that workloads can only communicate across defined paths.&lt;/p&gt;

&lt;p&gt;In AWS, segmentation starts at the Virtual Private Cloud (VPC) level. A VPC provides a logically isolated network where IP addressing, routing, and access control are fully configurable. Inside a VPC, subnets act as the primary mechanism for segmentation by separating workloads based on exposure level, function, or security requirements.&lt;/p&gt;

&lt;p&gt;Segmentation is not solely a security measure. It also improves operational clarity by making traffic flows predictable and reducing unintended coupling between services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Subnets as the First Layer of Segmentation
&lt;/h2&gt;

&lt;p&gt;Subnets are the fundamental building blocks of network segmentation within a VPC. Each subnet is bound to a single Availability Zone and associated with a route table that defines how traffic enters and exits that segment.&lt;/p&gt;

&lt;p&gt;Public subnets are commonly used for internet-facing components such as Application Load Balancers or bastion hosts. These subnets include a route to an Internet Gateway, allowing controlled inbound and outbound internet traffic. Private subnets, in contrast, host internal workloads such as application servers or databases and do not permit direct inbound access from the internet.&lt;/p&gt;

&lt;p&gt;By separating public and private subnets, architectures establish a clear perimeter where internet traffic is terminated at well-defined entry points, while sensitive workloads remain isolated from direct exposure.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Segmentation to Micro-segmentation
&lt;/h2&gt;

&lt;p&gt;While subnet-level segmentation provides coarse-grained isolation, it assumes a level of trust among resources within the same segment. In modern cloud environments, this assumption is increasingly risky. Micro-segmentation addresses this limitation by enforcing security controls at the workload or service level.&lt;/p&gt;

&lt;p&gt;Micro-segmentation ensures that even resources within the same subnet are not implicitly trusted. Each component is allowed to communicate only with the specific services it depends on, following the principle of least privilege.&lt;/p&gt;

&lt;p&gt;On AWS, micro-segmentation is primarily implemented using Security Groups, which act as stateful, resource-level firewalls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Groups as the Core Enforcement Mechanism
&lt;/h2&gt;

&lt;p&gt;Security Groups define exactly which traffic is allowed to reach a resource. They are stateful, meaning that return traffic is automatically permitted, and they deny all traffic by default unless explicitly allowed.&lt;/p&gt;

&lt;p&gt;A key advantage of Security Groups is their ability to reference other Security Groups instead of static IP ranges. This enables intent-based security policies that scale dynamically with the environment.&lt;/p&gt;

&lt;p&gt;For example, consider an internal application composed of multiple services with different trust levels. A frontend service may expose a limited set of ports to accept incoming requests, while backend services only accept traffic from specific upstream components. A data store then allows inbound connections exclusively from designated application services.&lt;/p&gt;

&lt;p&gt;Even if these components reside within the same subnet, unauthorized lateral movement is prevented because each interaction must be explicitly allowed through Security Group rules.&lt;/p&gt;
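
&lt;p&gt;As a rough sketch with the AWS CLI (the group IDs and ports are placeholders), the chain described above can be expressed as security group references rather than IP ranges:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Backend accepts traffic only from the frontend security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0backend111111111 \
  --protocol tcp --port 8080 \
  --source-group sg-0frontend11111111

# Data store accepts connections only from the backend security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0datastore1111111 \
  --protocol tcp --port 5432 \
  --source-group sg-0backend111111111
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;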

&lt;h2&gt;
  
  
  Network ACLs and Defense in Depth
&lt;/h2&gt;

&lt;p&gt;In addition to Security Groups, AWS provides Network Access Control Lists (NACLs), which operate at the subnet level and are stateless. NACLs are typically used to enforce coarse-grained security controls, such as blocking known malicious IP ranges or restricting certain protocols across an entire subnet.&lt;/p&gt;

&lt;p&gt;While Security Groups are the primary tool for micro-segmentation, NACLs add an additional layer of protection. Together, they support a defense-in-depth strategy where both subnet-level and resource-level rules must permit traffic for communication to succeed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Controlled Traffic Flow in a Segmented Architecture
&lt;/h2&gt;

&lt;p&gt;In a well-segmented AWS architecture, traffic flows in a strict and predictable manner. Internet users interact only with resources in public subnets, usually through a load balancer. Requests are then forwarded to application workloads in private subnets, which in turn communicate with data stores in more tightly restricted subnets.&lt;/p&gt;

&lt;p&gt;Each hop is governed by route tables, Security Groups, and optionally NACLs. Outbound internet access from private workloads is typically routed through NAT Gateways, ensuring that internal resources are never directly exposed.&lt;/p&gt;
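
&lt;p&gt;The NAT-based default route for a private subnet can be sketched with the AWS CLI (the route table and NAT Gateway IDs are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Private route table: send internet-bound traffic through the NAT Gateway
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0123456789abcdef0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;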

&lt;p&gt;This controlled flow simplifies monitoring, auditing, and incident response, as communication paths are intentional and clearly defined.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security and Operational Benefits
&lt;/h2&gt;

&lt;p&gt;Network segmentation and micro-segmentation significantly reduce the blast radius of security incidents. If a workload is compromised, the attacker’s ability to move laterally is limited by explicit access rules.&lt;/p&gt;

&lt;p&gt;From an operational standpoint, segmentation improves maintainability and scalability. Teams can modify or scale individual components without unintentionally exposing other parts of the system. Security policies remain consistent even as workloads are added or removed.&lt;/p&gt;

&lt;p&gt;These practices strongly align with zero trust principles, where no resource is trusted by default, regardless of its network location.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alignment with AWS Well-Architected Framework
&lt;/h2&gt;

&lt;p&gt;Within the AWS Well-Architected Framework, segmentation and micro-segmentation are key elements of the Security Pillar. They support identity-aware access control, reduce reliance on network perimeter defenses, and help organizations meet compliance and audit requirements.&lt;/p&gt;

&lt;p&gt;By combining VPC isolation, subnet segmentation, and Security Group–based micro-segmentation, architectures become resilient not only to failures but also to misconfigurations and security threats.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Network segmentation and micro-segmentation on AWS are essential design principles for modern cloud architectures. Segmentation establishes clear trust boundaries at the network level, while micro-segmentation enforces least-privilege communication at the workload level. When applied together, these mechanisms create a secure, scalable, and auditable environment where traffic flows only as intended.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>networking</category>
      <category>security</category>
    </item>
    <item>
      <title>Quick Start: Build a Ready-to-Use AWS VPC in Minutes</title>
      <dc:creator>Irfan Satrio</dc:creator>
      <pubDate>Sat, 13 Dec 2025 12:01:29 +0000</pubDate>
      <link>https://forem.com/irfansatrio/quick-start-launch-a-ready-to-use-vpc-setup-in-minutes-1449</link>
      <guid>https://forem.com/irfansatrio/quick-start-launch-a-ready-to-use-vpc-setup-in-minutes-1449</guid>
      <description>&lt;p&gt;In this hands-on guide, you’ll build a ready-to-use AWS networking environment in just a few minutes. You will create a fully functional VPC with public and private subnets, internet access, routing, and security controls, then launch an EC2 instance to verify that everything works as expected. This is one of the fastest ways to get a working VPC layout, especially if you're just starting to explore AWS networking and want something reliable to build on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Create the VPC
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;strong&gt;VPC Console&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Create VPC&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;VPC and more&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Set:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt;: my-quickstart-vpc&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Number of AZs&lt;/strong&gt;: 2 or 3 (default is fine)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customize subnets&lt;/strong&gt;: optional&lt;/li&gt;
&lt;li&gt;Leave other defaults as-is&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Create the VPC.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkdv6t62yze8qezr7qeu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkdv6t62yze8qezr7qeu.png" alt=" " width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS will automatically provision all required networking components.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Review Your Subnets
&lt;/h2&gt;

&lt;p&gt;After creation, go to &lt;strong&gt;Subnets&lt;/strong&gt;.&lt;br&gt;
You’ll see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public subnets (routed to the IGW)&lt;/li&gt;
&lt;li&gt;Private subnets (no direct internet route; S3 traffic stays internal via the S3 Gateway Endpoint)&lt;/li&gt;
&lt;li&gt;Subnets distributed across two Availability Zones (for example, us-east-1a and us-east-1b)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This gives you a practical multi-AZ layout without manual planning.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fery3pw08h3pu567cn3hj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fery3pw08h3pu567cn3hj.png" alt=" " width="800" height="127"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3: Check the Internet-Enabled Path
&lt;/h2&gt;

&lt;p&gt;Open &lt;strong&gt;Internet Gateways&lt;/strong&gt;.&lt;br&gt;
You should see the IGW attached to your new VPC.&lt;br&gt;
This is what gives resources in public subnets a route to and from the internet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmb3ob0d5zszcgk19d4ok.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmb3ob0d5zszcgk19d4ok.png" width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 4: Launch an EC2 Instance
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Go to the &lt;strong&gt;EC2 Console → Launch instance&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Configure:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AMI&lt;/strong&gt;: Amazon Linux 2023&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instance type&lt;/strong&gt;: t3.micro (free tier eligible)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key pair&lt;/strong&gt;: Create a new key pair named "hands-on-key" (RSA, .pem)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network&lt;/strong&gt;: my-quickstart-vpc&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subnet&lt;/strong&gt;: choose one of the &lt;em&gt;public&lt;/em&gt; subnets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-assign public IP&lt;/strong&gt;: Enabled&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Group&lt;/strong&gt;: create one that allows SSH (port 22) from your IP; the default SG does not permit inbound SSH&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Launch the instance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7kwnk4r06ij3nf2tqbn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7kwnk4r06ij3nf2tqbn.png" alt=" " width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 5: Connect and Test Connectivity
&lt;/h2&gt;

&lt;p&gt;Connect to the instance using SSH:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i hands-on-key.pem ec2-user@&amp;lt;public-ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A successful connection confirms that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The instance has a public IP&lt;/li&gt;
&lt;li&gt;The route table is correctly configured&lt;/li&gt;
&lt;li&gt;Port 22 is allowed by the Security Group&lt;/li&gt;
&lt;li&gt;Return traffic flows back out through the Internet Gateway&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From inside the instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://www.google.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A successful response confirms outbound internet connectivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No public IP&lt;/strong&gt;&lt;br&gt;
Ensure the selected subnet is public and auto-assign is enabled.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SSH blocked&lt;/strong&gt;&lt;br&gt;
Check your SG inbound rule for port 22.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;curl fails&lt;/strong&gt;&lt;br&gt;
Make sure the public route points to the IGW.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tips for Using This VPC Setup
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use clear and consistent naming for VPCs, subnets, and route tables.&lt;/li&gt;
&lt;li&gt;Place internet-facing resources in public subnets only.&lt;/li&gt;
&lt;li&gt;Keep backend workloads in private subnets to reduce exposure.&lt;/li&gt;
&lt;li&gt;Treat this VPC as a baseline that can be extended as your architecture grows.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You now have a functional multi-AZ VPC with clearly separated public and private subnets and internet connectivity through an Internet Gateway. This setup illustrates how AWS networking components work together to manage traffic flow and resource isolation, and it provides a solid foundation for future architectural extensions.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>networking</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How a 3-Tier Architecture Achieves High Availability on AWS</title>
      <dc:creator>Irfan Satrio</dc:creator>
      <pubDate>Sat, 13 Dec 2025 02:31:43 +0000</pubDate>
      <link>https://forem.com/irfansatrio/how-a-3-tier-architecture-achieves-high-availability-on-aws-2901</link>
      <guid>https://forem.com/irfansatrio/how-a-3-tier-architecture-achieves-high-availability-on-aws-2901</guid>
      <description>&lt;p&gt;Building reliable applications in AWS starts with a clear architectural foundation. Availability depends on how workloads are structured, how traffic flows between layers, and how failures are isolated. The 3-tier architecture is a widely used pattern for achieving scalability and high availability in cloud environments.&lt;/p&gt;

&lt;p&gt;In this article, we’ll examine how a 3-tier architecture achieves high availability on AWS, the purpose of each layer, and how AWS networking and services support resilient designs.&lt;/p&gt;

&lt;h2&gt;
  
  
  What High Availability Really Means
&lt;/h2&gt;

&lt;p&gt;High availability refers to the ability of a system to remain operational even when individual components fail. Rather than relying on a single server or a single network path, workloads are distributed across multiple failure domains so that failures do not immediately result in downtime.&lt;/p&gt;

&lt;p&gt;In AWS, the primary building block for high availability is the Availability Zone (AZ). Each AZ is physically separate and designed with independent power, cooling, and networking. Deploying across multiple AZs significantly reduces the impact of hardware or data center failures.&lt;/p&gt;

&lt;p&gt;High availability is not about preventing failures entirely. It is about designing systems that continue to serve traffic when failures inevitably occur.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 3-Tier Architecture Concept
&lt;/h2&gt;

&lt;p&gt;A 3-tier architecture separates an application into three logical layers: the presentation tier, the application tier, and the database tier. Each tier has a distinct responsibility and different availability and security requirements.&lt;/p&gt;

&lt;p&gt;On AWS, these tiers are typically mapped to different subnets and services inside a VPC. This separation allows traffic and access to be tightly controlled, making the architecture easier to scale, secure, and operate over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Presentation Tier (Web Layer)
&lt;/h2&gt;

&lt;p&gt;The presentation tier handles incoming user traffic and is usually the first point where HTTP or HTTPS requests arrive.&lt;/p&gt;

&lt;p&gt;In a typical AWS setup, this layer consists of an Application Load Balancer placed in public subnets and multiple web servers running on EC2 instances or containers. These resources are deployed across multiple Availability Zones to avoid dependency on a single failure domain.&lt;/p&gt;

&lt;p&gt;The load balancer continuously checks the health of its targets and distributes traffic only to healthy instances. If one instance or even an entire Availability Zone becomes unavailable, traffic is automatically routed elsewhere without user impact. Because this tier must accept traffic from the internet, it is placed in public subnets and commonly exposes ports 80 and 443.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Tier (Logic Layer)
&lt;/h2&gt;

&lt;p&gt;The application tier processes business logic and coordinates communication between the web and database layers.&lt;/p&gt;

&lt;p&gt;This tier is typically deployed in private subnets with no direct internet access. Requests are received only from the web tier, often through an internal load balancer. By keeping this layer private, the overall attack surface of the application is reduced.&lt;/p&gt;

&lt;p&gt;High availability at this layer is achieved by running multiple application instances across different Availability Zones and using Auto Scaling to replace unhealthy instances automatically. This allows the system to remain responsive during failures as well as during traffic spikes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Database Tier (Data Layer)
&lt;/h2&gt;

&lt;p&gt;The database tier stores persistent application data and requires the highest level of stability and protection.&lt;/p&gt;

&lt;p&gt;On AWS, this tier is commonly implemented using Amazon RDS with Multi-AZ enabled. The database runs in private, isolated subnets and only allows inbound access from the application tier. AWS maintains a standby replica in another Availability Zone to support automatic failover.&lt;/p&gt;

&lt;p&gt;If the primary database fails, AWS promotes the standby instance automatically, minimizing downtime without manual intervention. This tier typically has no outbound internet access and is tightly restricted through Security Groups and routing rules.&lt;/p&gt;
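
&lt;p&gt;As an illustrative sketch, a Multi-AZ database of this kind might be provisioned like the following with the AWS CLI (the identifier, instance class, and subnet group name are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws rds create-db-instance \
  --db-instance-identifier example-db \
  --engine postgres \
  --db-instance-class db.t3.medium \
  --allocated-storage 20 \
  --master-username dbadmin \
  --manage-master-user-password \
  --multi-az \
  --db-subnet-group-name example-private-subnets \
  --no-publicly-accessible
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;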

&lt;h2&gt;
  
  
  How Availability Zones Work Together
&lt;/h2&gt;

&lt;p&gt;In a high-availability 3-tier architecture, each tier spans multiple Availability Zones, but traffic flows in a strict and predictable order.&lt;/p&gt;

&lt;p&gt;Users access the system through a public load balancer in the web tier. Requests are forwarded to the application tier, which processes business logic and communicates with the database tier as needed. Each tier only interacts with the tier directly above or below it.&lt;/p&gt;

&lt;p&gt;If an entire Availability Zone fails, the load balancer stops routing traffic to that zone. Application instances in healthy zones continue serving requests, and the database layer fails over to a standby instance if required. This layered design is what gives the architecture its resilience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Networking and Traffic Flow
&lt;/h2&gt;

&lt;p&gt;Networking plays a critical role in maintaining high availability. Public subnets host internet-facing components, while private subnets host internal workloads such as application and database tiers.&lt;/p&gt;

&lt;p&gt;Route tables control which layers can access the internet, and Security Groups define which tiers are allowed to communicate. Traffic flows downward through the tiers in a controlled manner, which simplifies troubleshooting and reduces the risk of accidental exposure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling and Fault Tolerance
&lt;/h2&gt;

&lt;p&gt;High availability works best when combined with automated scaling mechanisms. Load balancers distribute traffic evenly, health checks detect failures early, and Auto Scaling ensures capacity adjusts based on demand and instance health.&lt;/p&gt;

&lt;p&gt;Instead of manually fixing failed servers, the system replaces unhealthy resources automatically. This approach reduces downtime and lowers operational overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Use Cases
&lt;/h2&gt;

&lt;p&gt;A high-availability 3-tier architecture is commonly used in scenarios such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public-facing web applications&lt;/li&gt;
&lt;li&gt;Backend APIs with complex business logic&lt;/li&gt;
&lt;li&gt;E-commerce platforms with strict uptime requirements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These workloads benefit most from tier isolation and multi-AZ deployments to maintain availability during failures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design Tips
&lt;/h2&gt;

&lt;p&gt;A few practical principles help ensure a high-availability architecture behaves as expected under stress:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep each tier in separate subnets&lt;/li&gt;
&lt;li&gt;Always deploy across at least two Availability Zones&lt;/li&gt;
&lt;li&gt;Use load balancers instead of direct instance access&lt;/li&gt;
&lt;li&gt;Restrict traffic using Security Group references rather than static IP ranges&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;A high-availability 3-tier architecture on AWS is built by combining logical separation, multi-AZ deployment, and controlled traffic flow. Each tier plays a specific role, and AWS services work together to isolate failures and maintain uptime. With this structure in place, applications can continue serving traffic even when individual components or Availability Zones fail.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>architecture</category>
      <category>networking</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Gateway Endpoints vs Interface Endpoints: What’s the Difference?</title>
      <dc:creator>Irfan Satrio</dc:creator>
      <pubDate>Mon, 01 Dec 2025 09:51:05 +0000</pubDate>
      <link>https://forem.com/irfansatrio/gateway-endpoints-vs-interface-endpoints-whats-the-difference-10kh</link>
      <guid>https://forem.com/irfansatrio/gateway-endpoints-vs-interface-endpoints-whats-the-difference-10kh</guid>
      <description>&lt;p&gt;AWS provides several ways to keep your workloads connected without exposing them to the public internet. One of the most useful tools for this is the &lt;strong&gt;VPC Endpoint&lt;/strong&gt;, which enables private access from your VPC to AWS services over the AWS internal network. There are two main types: &lt;strong&gt;Gateway Endpoints&lt;/strong&gt; and &lt;strong&gt;Interface Endpoints&lt;/strong&gt;. Each endpoint type serves a different purpose, so choosing the right one matters for security, performance, and cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  What VPC Endpoints Actually Do
&lt;/h2&gt;

&lt;p&gt;A VPC Endpoint creates a private path so your resources can reach specific AWS services without requiring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public IPs&lt;/li&gt;
&lt;li&gt;Internet Gateways&lt;/li&gt;
&lt;li&gt;NAT Gateways&lt;/li&gt;
&lt;li&gt;Direct internet routing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both endpoint types enable private connectivity, but they operate differently inside the VPC.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gateway Endpoints (for S3 and DynamoDB)
&lt;/h2&gt;

&lt;p&gt;Gateway Endpoints are the simpler option. They work by adding routes to your route tables so that traffic to S3 or DynamoDB stays within AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key characteristics&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Attached to route tables — not subnets or resources&lt;/li&gt;
&lt;li&gt;No hourly cost&lt;/li&gt;
&lt;li&gt;Supports only S3 and DynamoDB&lt;/li&gt;
&lt;li&gt;Scales automatically with no bandwidth limits&lt;/li&gt;
&lt;li&gt;Works at the subnet level through routing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes Gateway Endpoints ideal for workloads that frequently interact with S3 or DynamoDB and need a predictable, low-cost way to stay private.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example use case&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A private application uploading logs to S3 can use a Gateway Endpoint to avoid NAT Gateway charges and keep all traffic on the internal AWS network.&lt;/p&gt;
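
&lt;p&gt;A Gateway Endpoint of this kind can be sketched with the AWS CLI (the VPC and route table IDs are placeholders, and the service name assumes us-east-1):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;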

&lt;h2&gt;
  
  
  Interface Endpoints (AWS PrivateLink)
&lt;/h2&gt;

&lt;p&gt;Interface Endpoints work differently. Instead of modifying routes, they create &lt;strong&gt;Elastic Network Interfaces (ENIs)&lt;/strong&gt; in your subnets. These ENIs act as private entry points for AWS services using &lt;strong&gt;PrivateLink&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important traits&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates ENIs with private IP addresses&lt;/li&gt;
&lt;li&gt;Supports many AWS services (SSM, Secrets Manager, ECR, KMS, CloudWatch, etc.)&lt;/li&gt;
&lt;li&gt;Charges per hour and per GB processed&lt;/li&gt;
&lt;li&gt;Uses Security Groups for traffic filtering&lt;/li&gt;
&lt;li&gt;Provides fine-grained, resource-level control&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes Interface Endpoints ideal when you need controlled access to a wide range of AWS services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example use case&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An EC2 instance can retrieve secrets from AWS Secrets Manager through an Interface Endpoint, with Security Groups enforcing access restrictions.&lt;/p&gt;

&lt;h2&gt;
  
  
  How They Work Together
&lt;/h2&gt;

&lt;p&gt;Both endpoint types enable private access, but through different mechanisms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gateway Endpoints&lt;/strong&gt; use route tables to redirect S3/DynamoDB traffic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interface Endpoints&lt;/strong&gt; expose AWS services as private IPs through ENIs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;strong&gt;Gateway Endpoints&lt;/strong&gt; for large, cost-sensitive workloads that rely heavily on S3 or DynamoDB.&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Interface Endpoints&lt;/strong&gt; when you need granular control or must access services beyond S3 and DynamoDB.&lt;/li&gt;
&lt;/ul&gt;
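As a quick sanity check, that decision can be condensed into a small helper. This is a simplification of the guidance above, not an AWS API:

```python
# The only services Gateway Endpoints support.
GATEWAY_SERVICES = {"s3", "dynamodb"}

def choose_endpoint(service, needs_security_groups=False):
    """Pick an endpoint type following the rules above (a simplification)."""
    if service.lower() in GATEWAY_SERVICES and not needs_security_groups:
        return "gateway"   # free, route-table based
    return "interface"     # PrivateLink ENI, per-hour + per-GB pricing

print(choose_endpoint("s3"))                               # gateway
print(choose_endpoint("secretsmanager"))                   # interface
print(choose_endpoint("s3", needs_security_groups=True))   # interface
```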

&lt;h2&gt;
  
  
  Choosing the Right Type
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use Gateway Endpoints when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You only need S3 or DynamoDB&lt;/li&gt;
&lt;li&gt;You want zero hourly cost&lt;/li&gt;
&lt;li&gt;You need high throughput&lt;/li&gt;
&lt;li&gt;You prefer subnet-wide behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use Interface Endpoints when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need access to services like SSM, ECR, KMS, CloudWatch, or Secrets Manager&lt;/li&gt;
&lt;li&gt;You want Security Group filtering&lt;/li&gt;
&lt;li&gt;You need strict network isolation or compliance&lt;/li&gt;
&lt;li&gt;You use PrivateLink for cross-VPC or third-party connectivity&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Practical Examples
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Private subnet accessing S3&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Gateway Endpoint&lt;/li&gt;
&lt;li&gt;Result: no internet exposure, no NAT cost&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;EC2 accessing Secrets Manager&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Interface Endpoint&lt;/li&gt;
&lt;li&gt;Result: controlled access through Security Groups&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Microservices across VPCs&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Interface Endpoint + PrivateLink&lt;/li&gt;
&lt;li&gt;Result: no internet or VPC peering required&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Fully isolated environment with no internet&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Gateway Endpoint for S3&lt;/li&gt;
&lt;li&gt;Result: workloads remain isolated but functional&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Operational Notes
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Gateway Endpoints&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Very little maintenance&lt;/li&gt;
&lt;li&gt;No Security Groups to configure&lt;/li&gt;
&lt;li&gt;Easy to troubleshoot&lt;/li&gt;
&lt;li&gt;Ideal for high-volume S3/DynamoDB traffic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Interface Endpoints&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requires correct Security Group configuration&lt;/li&gt;
&lt;li&gt;Adds cost per AZ and per GB&lt;/li&gt;
&lt;li&gt;DNS overrides may affect applications&lt;/li&gt;
&lt;li&gt;Creates multiple ENIs, adding resource-management overhead&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tips for Working with VPC Endpoints
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use Gateway Endpoints whenever possible for S3 and DynamoDB&lt;/li&gt;
&lt;li&gt;Keep SG rules simple for Interface Endpoints&lt;/li&gt;
&lt;li&gt;Monitor the cost of multiple Interface Endpoints&lt;/li&gt;
&lt;li&gt;Enable Private DNS for easier service access&lt;/li&gt;
&lt;li&gt;Use clear naming conventions for all endpoints&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Gateway Endpoints and Interface Endpoints both enable private access to AWS services, but they operate differently. Gateway Endpoints offer a simple, free, route-based option for S3 and DynamoDB, while Interface Endpoints provide ENI-based, security-controlled access to a wide range of AWS services.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>networking</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Security Groups vs NACLs: A Simple Breakdown</title>
      <dc:creator>Irfan Satrio</dc:creator>
      <pubDate>Mon, 01 Dec 2025 06:49:34 +0000</pubDate>
      <link>https://forem.com/irfansatrio/security-groups-vs-nacls-a-simple-breakdown-391j</link>
      <guid>https://forem.com/irfansatrio/security-groups-vs-nacls-a-simple-breakdown-391j</guid>
      <description>&lt;p&gt;AWS networking includes multiple layers of traffic control, and two of the most important components are Security Groups (SGs) and Network ACLs (NACLs). They’re often compared because both filter traffic, but they operate at different layers and behave differently. Understanding how each one works helps you build clean and predictable VPC architectures. In this article, we’ll look at how they differ and how they work together inside a VPC.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Security Groups Actually Do
&lt;/h2&gt;

&lt;p&gt;Security Groups act as &lt;strong&gt;virtual firewalls for individual resources&lt;/strong&gt;, such as EC2 instances, load balancers, and RDS databases. They operate at the &lt;strong&gt;instance level&lt;/strong&gt;, meaning the rules apply directly to the resource they’re attached to.&lt;/p&gt;

&lt;p&gt;A few key characteristics define how they behave:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stateful rules&lt;/strong&gt;: If inbound traffic is allowed, the response is automatically allowed outbound. You don’t need to create a matching outbound rule.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attachment-based&lt;/strong&gt;: You attach SGs to resources, not subnets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Allow-only model&lt;/strong&gt;: SGs do not support explicit deny rules.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fine-grained filtering&lt;/strong&gt;: Perfect for controlling ports and traffic sources per service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes Security Groups ideal for defining application-level behavior. For example, allowing only ports 80 and 443 from the internet for a web server, or allowing a private app server to reach a database on port 3306.&lt;/p&gt;

&lt;p&gt;Because they are stateful and resource-specific, SGs are usually the first layer people think about when securing workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  What NACLs Do in a VPC
&lt;/h2&gt;

&lt;p&gt;Network ACLs (Network Access Control Lists) operate at the &lt;strong&gt;subnet level&lt;/strong&gt;, meaning every instance inside that subnet is affected by the NACL rules. Unlike Security Groups, NACLs behave more like &lt;strong&gt;traditional network firewalls&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Important traits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stateless rules&lt;/strong&gt;: Inbound and outbound rules must be defined separately.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Allow and deny options&lt;/strong&gt;: You can explicitly block traffic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subnet-wide enforcement&lt;/strong&gt;: All resources in the subnet follow the same NACL behavior.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rule evaluation with numbering&lt;/strong&gt;: Lower-numbered rules are evaluated first.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because NACLs act at a broader layer, they make sense when you need consistent filtering across many instances or when you want to explicitly block certain traffic patterns.&lt;/p&gt;

&lt;p&gt;For example, blocking a malicious IP at the subnet level or enforcing that a subnet can only accept HTTP/HTTPS traffic regardless of the SGs attached to its resources.&lt;/p&gt;
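That first-match behavior is easy to simulate in a few lines; the rule set and addresses below are invented for illustration:

```python
from ipaddress import ip_address, ip_network

# Stateless NACL evaluation: rules are checked in ascending rule-number
# order and the FIRST match wins; the implicit final rule (*) denies.
# Simplified rule format: (number, action, cidr, port_range).
rules = [
    (90,  "deny",  "203.0.113.0/24", (0, 65535)),  # block a bad range first
    (100, "allow", "0.0.0.0/0",      (80, 80)),
    (110, "allow", "0.0.0.0/0",      (443, 443)),
]

def evaluate(src_ip, port):
    for _number, action, cidr, (lo, hi) in sorted(rules):
        if ip_address(src_ip) in ip_network(cidr) and lo <= port <= hi:
            return action
    return "deny"  # implicit default rule (*)

print(evaluate("198.51.100.7", 443))  # allow
print(evaluate("203.0.113.9", 80))    # deny: rule 90 matches before rule 100
print(evaluate("198.51.100.7", 22))   # deny: falls through to the default
```

Note how the deny rule only works because its number is lower than the broad allow rules; renumbering it to 120 would silently let that traffic through.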

&lt;h2&gt;
  
  
  How They Work Together
&lt;/h2&gt;

&lt;p&gt;Even though both control traffic, they don’t replace each other. Instead, they &lt;strong&gt;stack&lt;/strong&gt;, and each layer has a role:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security Groups determine which traffic can reach a specific instance.&lt;/li&gt;
&lt;li&gt;NACLs determine which traffic can reach the subnet where the instance lives.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Traffic must pass &lt;em&gt;both&lt;/em&gt; to reach the instance.&lt;/p&gt;

&lt;p&gt;In practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SGs handle &lt;strong&gt;application-level access&lt;/strong&gt; (ports and services).&lt;/li&gt;
&lt;li&gt;NACLs handle &lt;strong&gt;broader network boundaries&lt;/strong&gt; (subnet-level policies).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keeping the logic clear helps avoid misconfigurations like accidentally blocking return traffic or denying traffic twice in different layers.&lt;/p&gt;
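The return-traffic pitfall can be made concrete with a tiny sketch; the port numbers and the 1024-65535 ephemeral range here are illustrative assumptions:

```python
# Stateful vs stateless in miniature: a client connects from an ephemeral
# source port, and the server's reply must go back to that port.
def sg_allows_response(inbound_was_allowed):
    # Stateful: if the request got in, the reply is allowed out automatically.
    return inbound_was_allowed

def nacl_allows_response(outbound_rules, client_port):
    # Stateless: the reply is evaluated like any other outbound packet.
    return any(lo <= client_port <= hi for lo, hi in outbound_rules)

client_port = 50321  # ephemeral port chosen by the client (assumed)
print(sg_allows_response(True))                                   # True
print(nacl_allows_response([(80, 80), (443, 443)], client_port))  # False: reply blocked
print(nacl_allows_response([(1024, 65535)], client_port))         # True: ephemeral range open
```

A NACL whose outbound rules only list 80 and 443 will silently break every inbound connection, which is one of the most common NACL debugging surprises.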

&lt;h2&gt;
  
  
  Choosing When to Use Which
&lt;/h2&gt;

&lt;p&gt;In most AWS architectures, &lt;strong&gt;Security Groups do the heavy lifting&lt;/strong&gt;. They’re flexible, easy to update, and precise.&lt;/p&gt;

&lt;p&gt;NACLs are typically used when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want an extra guardrail at the subnet level.&lt;/li&gt;
&lt;li&gt;You need explicit deny rules.&lt;/li&gt;
&lt;li&gt;You’re segmenting environments (e.g., dev, staging, prod) with strict separation.&lt;/li&gt;
&lt;li&gt;You're mitigating a known bad IP or port pattern.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For everyday use, SGs are more intuitive. NACLs become more useful in regulatory, high-security, or tightly segmented networks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Examples
&lt;/h2&gt;

&lt;p&gt;Here are some common patterns you’ll see:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Public web server&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security Group: Allow inbound 80/443 from anywhere.&lt;/li&gt;
&lt;li&gt;NACL: Allow ephemeral ports for return traffic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Private application server&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security Group: Allow inbound 8080 only from public web tier.&lt;/li&gt;
&lt;li&gt;NACL: Block all inbound traffic from internet ranges at the subnet level.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Database subnet&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security Group: Allow inbound 3306 only from app SG.&lt;/li&gt;
&lt;li&gt;NACL: No internet-bound rules; keep the subnet isolated.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Blocking a suspicious IP&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NACL: Deny inbound from 203.x.x.x directly at subnet level.&lt;/li&gt;
&lt;li&gt;SG: No need to modify instance-level rules — the subnet is already protected.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Operational Considerations
&lt;/h2&gt;

&lt;p&gt;Managing SGs and NACLs affects operations in different ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stateful SGs simplify troubleshooting because return traffic doesn’t need explicit rules.&lt;/li&gt;
&lt;li&gt;NACLs require careful planning since one missing outbound ephemeral rule can break everything.&lt;/li&gt;
&lt;li&gt;Too many SGs can become messy without naming conventions.&lt;/li&gt;
&lt;li&gt;NACLs with hundreds of rules become difficult to maintain and audit.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keeping both clean and well-documented saves time when debugging connectivity issues or scaling your VPC.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tips for Working with SGs and NACLs
&lt;/h2&gt;

&lt;p&gt;A few simple habits make daily work much smoother:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name SGs and NACLs clearly based on purpose.&lt;/li&gt;
&lt;li&gt;Keep NACLs simple unless you have a strong reason to do otherwise.&lt;/li&gt;
&lt;li&gt;Review inbound and outbound flows regularly.&lt;/li&gt;
&lt;li&gt;Document port usage for each tier or service.&lt;/li&gt;
&lt;li&gt;Avoid overlapping or conflicting NACL rules.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These practices reduce mistakes and make both layers predictable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Security Groups and Network ACLs both influence how traffic moves through your VPC, but they solve different problems. SGs secure individual resources through flexible, stateful rules, while NACLs enforce subnet-level boundaries with stateless filtering. Once you understand how the two layers interact, building secure and well-structured VPC networks becomes much easier to manage.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>networking</category>
    </item>
    <item>
      <title>A Simple Guide to Route Tables and Internet Gateways in AWS</title>
      <dc:creator>Irfan Satrio</dc:creator>
      <pubDate>Wed, 19 Nov 2025 22:11:03 +0000</pubDate>
      <link>https://forem.com/irfansatrio/a-simple-guide-to-route-tables-and-internet-gateways-in-aws-3b1k</link>
      <guid>https://forem.com/irfansatrio/a-simple-guide-to-route-tables-and-internet-gateways-in-aws-3b1k</guid>
      <description>&lt;p&gt;Exploring how traffic moves inside a VPC can make AWS networking feel much more approachable. Route tables and Internet Gateways (IGWs) quietly influence how your subnets function across your network. In this article, we’ll walk through them step by step so the concepts stay clear and grounded.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Role of Route Tables in a VPC
&lt;/h3&gt;

&lt;p&gt;Route tables are essentially instruction sets that guide traffic within your VPC. Each subnet connects to exactly one route table, which determines what that subnet can reach. The default local route (for example, 10.0.0.0/16 local) lets all subnets talk to each other without extra configuration. Anything beyond that depends on the routes you add.&lt;/p&gt;

&lt;p&gt;Think of routes like road signs: “If traffic wants to go to X, send it to Y.”&lt;br&gt;
For instance, a 0.0.0.0/0 route pointing to an IGW makes a subnet public. Pointing the same destination to a NAT Gateway makes it private with controlled outbound access. The logic is simple, but the design impact is significant.&lt;/p&gt;

&lt;p&gt;Route tables also become more interesting in multi-VPC environments. You might add a route to another VPC via peering or a route to an on-premises network through a VPN. Without clear routes, traffic may fail to reach its destination or take inefficient paths.&lt;/p&gt;
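The road-sign logic is longest-prefix matching: when several routes cover a destination, the most specific one wins. A toy lookup with Python's `ipaddress` module (the route targets are invented IDs):

```python
from ipaddress import ip_address, ip_network

# A toy route table: AWS picks the MOST SPECIFIC (longest-prefix) route
# that matches the destination address.
routes = {
    "10.0.0.0/16":    "local",        # default intra-VPC route
    "192.168.0.0/16": "vgw-onprem",   # VPN to on-premises (assumed target)
    "0.0.0.0/0":      "igw-12345",    # everything else -> Internet Gateway
}

def next_hop(dest):
    matches = [net for net in routes if ip_address(dest) in ip_network(net)]
    best = max(matches, key=lambda net: ip_network(net).prefixlen)
    return routes[best]

print(next_hop("10.0.4.25"))      # local (matches /16 and /0; /16 wins)
print(next_hop("192.168.1.9"))    # vgw-onprem
print(next_hop("93.184.216.34"))  # igw-12345 (only the default route matches)
```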

&lt;h3&gt;
  
  
  Public Subnets and Internet Gateways
&lt;/h3&gt;

&lt;p&gt;An Internet Gateway provides a path between your VPC and the internet. It doesn’t automatically expose your resources; the route table determines traffic flow, and instances need public IPs or Elastic IPs. Without those, the IGW sits idle.&lt;/p&gt;

&lt;p&gt;Only subnets explicitly configured with routes to the IGW and instances with public IPs become public. Everything else remains internal by default.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; Your web server subnet is public, allowing external traffic. Meanwhile, your backend API or database subnet remains private and unreachable from the internet.&lt;/p&gt;

&lt;p&gt;This separation ensures intentional design and helps prevent accidental exposure of internal systems. It also means you can confidently design hybrid architectures without worrying about default internet access.&lt;/p&gt;

&lt;h3&gt;
  
  
  Private Subnets and Controlled Outbound Traffic
&lt;/h3&gt;

&lt;p&gt;Private subnets typically route outbound traffic through a NAT Gateway or sometimes VPC endpoints. Their route tables may have a 0.0.0.0/0 → NAT route, allowing them to reach the internet without being reachable from outside.&lt;/p&gt;

&lt;p&gt;Subnets can also remain fully isolated. If the route table only contains the local VPC route, they cannot leave the VPC. This is common for internal services, like database layers or batch processing workloads, where internet access isn’t needed.&lt;/p&gt;

&lt;p&gt;You can even mix private subnets: some with NAT access for updates or API calls, and some fully isolated for sensitive workloads. This flexibility is one of the reasons AWS networking scales well for different application needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Traffic Flows End-to-End
&lt;/h3&gt;

&lt;p&gt;Understanding route tables in isolation is useful, but seeing the end-to-end traffic flow makes networking clearer. For example:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A client requests data from a public web server in your VPC.&lt;/li&gt;
&lt;li&gt;The server’s subnet route table directs traffic through the IGW.&lt;/li&gt;
&lt;li&gt;Responses travel back via the IGW to the client.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For a private subnet:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An internal application needs to call an external API.&lt;/li&gt;
&lt;li&gt;Traffic goes to the NAT Gateway.&lt;/li&gt;
&lt;li&gt;The NAT translates the private IP and sends the request out.&lt;/li&gt;
&lt;li&gt;Responses return via the NAT, preserving internal IP addresses.&lt;/li&gt;
&lt;/ol&gt;
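The private-subnet flow above can be sketched as a toy source-NAT table; the public IP and port numbers are invented for illustration:

```python
# Toy source-NAT table: the NAT Gateway rewrites (private_ip, port) to its
# own public IP and a unique port, then reverses the mapping for replies.
nat_public_ip = "54.0.0.10"  # assumed Elastic IP of the NAT Gateway
nat_table = {}               # (private_ip, src_port) -> nat_port
next_port = 20000

def outbound(private_ip, src_port):
    global next_port
    key = (private_ip, src_port)
    if key not in nat_table:          # reuse the mapping for one connection
        nat_table[key] = next_port
        next_port += 1
    return (nat_public_ip, nat_table[key])

def inbound(nat_port):
    # Reverse lookup so the reply reaches the original private address.
    for (priv_ip, src_port), port in nat_table.items():
        if port == nat_port:
            return (priv_ip, src_port)
    return None  # unsolicited traffic has no mapping and is dropped

print(outbound("10.0.2.15", 43210))  # ('54.0.0.10', 20000)
print(inbound(20000))                # ('10.0.2.15', 43210)
```

The last line is the key property: an outside host can only reach the private instance through an existing mapping, which is why NAT allows outbound calls without inbound exposure.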

&lt;p&gt;Visualizing these flows—either in a diagram or with mental models—makes it easier to predict how changes in routes affect the network.&lt;/p&gt;

&lt;h3&gt;
  
  
  Route Table Patterns That Keep Things Predictable
&lt;/h3&gt;

&lt;p&gt;As VPCs grow, predictable routing becomes essential. Some patterns to follow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Separate route tables for public and private subnets.&lt;/li&gt;
&lt;li&gt;Avoid mixing IGW and NAT routes unnecessarily.&lt;/li&gt;
&lt;li&gt;Keep routes minimal and descriptive.&lt;/li&gt;
&lt;li&gt;Consider how peering, Transit Gateway, or multi-account architectures will interact with your tables.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another useful approach is documenting all route table associations in a simple table or spreadsheet. When you add subnets or connect new accounts, this reference helps avoid misconfigurations.&lt;/p&gt;

&lt;p&gt;These habits reduce mistakes and make debugging easier as the network grows.&lt;/p&gt;

&lt;h3&gt;
  
  
  How IGWs Fit Into Larger Architectures
&lt;/h3&gt;

&lt;p&gt;In hybrid setups, shared services, or multi-account designs, IGWs maintain their role as the gateway to the internet. The route table determines which traffic exits through the IGW versus staying internal through Direct Connect or VPNs. AWS doesn’t forward traffic between these paths automatically, so each destination needs a clear route.&lt;/p&gt;

&lt;p&gt;Large architectures benefit from keeping IGW routes simple. Complex configurations can introduce unexpected routing loops or unintended internet exposure. Even small changes—like adding an IGW route to a newly created subnet—can have cascading effects if not planned.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitoring and Troubleshooting Route Tables and IGWs
&lt;/h3&gt;

&lt;p&gt;Managing route tables and Internet Gateways is not just about creating the correct routes; it is also about making sure everything works as intended. VPC Flow Logs are a useful tool to monitor traffic within the VPC and verify whether packets reach their destinations or are blocked along the way.&lt;/p&gt;

&lt;p&gt;It is important to regularly check route table associations. Make sure each subnet is connected to the correct route table and that routes to the IGW or NAT Gateway are properly configured. Even small mistakes, such as a missing default route or an unassociated subnet, can cause instances to lose internet connectivity.&lt;/p&gt;

&lt;p&gt;Practical tests using ping or curl from instances in public and private subnets help confirm that traffic flows as expected. Keeping documentation of route tables, routes, and subnet associations in a spreadsheet or diagram helps teams understand the network, reduce errors when adding new subnets, and identify problems more quickly. These practices make VPC management more predictable and easier to scale as the infrastructure grows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Operational Considerations
&lt;/h3&gt;

&lt;p&gt;Routing decisions impact both operations and cost:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NAT Gateways incur charges, so overuse in private subnets can increase costs.&lt;/li&gt;
&lt;li&gt;Public subnets reduce NAT reliance but require careful security configurations.&lt;/li&gt;
&lt;li&gt;Troubleshooting connectivity issues often begins by checking route tables and subnet associations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Clear labeling of route tables and consistent patterns make ongoing management simpler. Over time, this makes scaling, auditing, and monitoring much more manageable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Route tables and Internet Gateways define how your VPC connects to the outside world and how your subnets behave internally. Once you understand how a single default route can shift a subnet’s role, AWS networking starts to feel much more manageable. And once that idea becomes clear, it’s easier to reason about how everything in your VPC fits together.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>networking</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Getting Started with CIDR and Subnetting in AWS</title>
      <dc:creator>Irfan Satrio</dc:creator>
      <pubDate>Mon, 17 Nov 2025 06:38:04 +0000</pubDate>
      <link>https://forem.com/irfansatrio/getting-started-with-cidr-and-subnetting-in-aws-1cb9</link>
      <guid>https://forem.com/irfansatrio/getting-started-with-cidr-and-subnetting-in-aws-1cb9</guid>
      <description>&lt;p&gt;Understanding AWS networking can feel tricky at first, especially when it comes to organizing IP addresses. Concepts like CIDR and subnetting are the tools that help shape your VPC and manage traffic, and in this article, we’ll go through them step by step so you can follow along more easily.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding CIDR in AWS
&lt;/h3&gt;

&lt;p&gt;CIDR (Classless Inter-Domain Routing) defines the size of an IP address range. A block like &lt;code&gt;10.0.0.0/16&lt;/code&gt; shows which part of the IP is the network and how many addresses are available. A smaller prefix length (such as /16) means a larger address space, while a larger prefix length (such as /28) means a smaller one.&lt;/p&gt;

&lt;p&gt;In AWS, your VPC’s CIDR defines the total address space you have to work with. Choosing wisely is important. Picking a range too small can lead to running out of IPs as more subnets and services are added. Overly large ranges can cause overlap with other networks and complicate peering or hybrid connections.&lt;/p&gt;

&lt;p&gt;Suppose a VPC &lt;code&gt;10.0.0.0/16&lt;/code&gt; is divided into three /24 subnets across different Availability Zones for public, application, and database workloads. This provides enough addresses for medium workloads while leaving room for future growth.&lt;/p&gt;

&lt;p&gt;It’s also useful to leave spare address space for unexpected growth or new services. For example, if you plan to add containerized applications later, remember that they can consume many IP addresses, since each task or pod receives its own VPC address (ECS tasks in &lt;code&gt;awsvpc&lt;/code&gt; networking mode, or EKS pods with the VPC CNI).&lt;/p&gt;
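The split described above is easy to sanity-check with Python's `ipaddress` module. Note that AWS reserves 5 addresses in every subnet, so usable counts are slightly lower than the raw size:

```python
import ipaddress

# A /16 VPC carved into /24 subnets, one per tier in the example above.
vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)  # 65536 addresses in the VPC

subnets = list(vpc.subnets(new_prefix=24))[:3]
for tier, net in zip(["public", "app", "db"], subnets):
    usable = net.num_addresses - 5  # AWS reserves 5 addresses per subnet
    print(f"{tier:6} {net}  ({usable} usable addresses)")
```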

&lt;h3&gt;
  
  
  Subnetting and VPC Layout
&lt;/h3&gt;

&lt;p&gt;Subnets divide your VPC into smaller segments. Each subnet belongs to a single Availability Zone, helping isolate failure domains and making traffic behavior more predictable.&lt;/p&gt;

&lt;p&gt;Subnets are tied to routing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Public subnets&lt;/strong&gt; route through an Internet Gateway for external access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private subnets&lt;/strong&gt; route through a NAT Gateway or VPC endpoint for controlled outbound traffic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database subnets&lt;/strong&gt; remain isolated, often without direct internet access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keeping subnets simple and well-labeled makes your architecture easier to understand and operate. Once subnets are defined, it is important to consider traffic flow and security rules.&lt;/p&gt;

&lt;p&gt;Additionally, naming conventions can help a lot. For example, naming subnets like &lt;code&gt;public-us-east-1a-web&lt;/code&gt;, &lt;code&gt;private-us-east-1b-app&lt;/code&gt;, or &lt;code&gt;db-us-east-1c&lt;/code&gt; immediately communicates their purpose and AZ, reducing mistakes when applying security groups or route tables later.&lt;/p&gt;

&lt;h3&gt;
  
  
  Routing and Security Considerations
&lt;/h3&gt;

&lt;p&gt;CIDR directly affects routing. Route tables rely on clear, non-overlapping ranges to deliver traffic correctly. As environments grow with Transit Gateway, PrivateLink, or multi-account setups, predictable CIDR allocations prevent confusion.&lt;/p&gt;

&lt;p&gt;Security controls also depend on CIDR:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security Groups&lt;/strong&gt; define which IP ranges can reach your instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network ACLs&lt;/strong&gt; apply rules at the subnet level.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interface Endpoints&lt;/strong&gt; consume IP addresses from the subnets they are placed in (one ENI per subnet), so leave room for future connections.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Leaving buffer IPs in each subnet prevents future deployments from being blocked.&lt;/p&gt;
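When sizing that buffer, it helps to remember that AWS reserves five addresses in every subnet (the network address, VPC router, DNS, a future-use address, and the broadcast address). A quick calculation:

```python
AWS_RESERVED_PER_SUBNET = 5  # network, VPC router, DNS, future use, broadcast

def usable_ips(prefixlen):
    """Usable host addresses AWS leaves in an IPv4 subnet of this size."""
    return 2 ** (32 - prefixlen) - AWS_RESERVED_PER_SUBNET

print(usable_ips(24))  # 251
print(usable_ips(28))  # 11
```

A /28 looks like 16 addresses on paper but only yields 11 usable ones, which is why very small subnets fill up faster than expected.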

&lt;p&gt;It is also a good practice to visualize which subnets need which level of access. For example, a public web server might only allow inbound HTTP/HTTPS, while a database subnet may only allow inbound traffic from private application subnets. This planning reduces accidental exposure and simplifies troubleshooting later.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scaling and Connectivity
&lt;/h3&gt;

&lt;p&gt;Good CIDR and subnet planning simplifies hybrid and multi-VPC environments. Direct Connect or VPNs need non-overlapping ranges. VPC peering and Transit Gateway connections also rely on clear boundaries. Poor planning can lead to workarounds such as NAT routing or IP translation.&lt;/p&gt;

&lt;p&gt;Standardizing CIDR patterns across accounts makes automation and monitoring easier, reduces mistakes, and simplifies route propagation when adding new workloads or Availability Zones. For instance, using a consistent scheme like &lt;code&gt;10.X.Y.0/24&lt;/code&gt; for each subnet type makes it much easier to predict where new subnets should go and reduces the risk of overlapping addresses in multi-account setups.&lt;/p&gt;
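A scheme like this is also easy to audit programmatically, for example by checking candidate allocations for overlap before peering or attaching them to a Transit Gateway (the VPC names and ranges below are made up):

```python
import ipaddress

# Hypothetical per-VPC CIDR allocations to validate before connecting them.
allocations = {
    "vpc-prod":    "10.1.0.0/16",
    "vpc-staging": "10.2.0.0/16",
    "vpc-dev":     "10.1.128.0/17",  # mistake: sits inside vpc-prod's range
}

nets = {name: ipaddress.ip_network(cidr) for name, cidr in allocations.items()}
names = list(nets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if nets[a].overlaps(nets[b]):
            print(f"OVERLAP: {a} ({nets[a]}) and {b} ({nets[b]})")
```

Running a check like this in CI catches overlapping ranges before they become a peering or routing problem.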

&lt;h3&gt;
  
  
  Common VPC Use Cases and Examples
&lt;/h3&gt;

&lt;p&gt;Here are some practical scenarios to illustrate how CIDR and subnets work together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Web server on a public subnet&lt;/strong&gt;: Needs a route to the IGW and public IP for internet access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend application on a private subnet&lt;/strong&gt;: Uses NAT Gateway to reach external APIs while remaining hidden from the internet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database server&lt;/strong&gt;: Fully isolated; route table only includes the local VPC range.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid cloud access&lt;/strong&gt;: Some subnets connect to on-prem via VPN or Direct Connect, while others rely on IGW for internet connectivity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Seeing these patterns repeatedly in labs or real-world projects helps internalize routing behavior and subnet design principles.&lt;/p&gt;

&lt;h3&gt;
  
  
  Operational and Cost Implications
&lt;/h3&gt;

&lt;p&gt;CIDR choices impact operations and cost in subtle ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Poorly sized subnets can increase cross-AZ NAT traffic, raising costs.&lt;/li&gt;
&lt;li&gt;Fragmented ranges may require extra endpoints or Transit Gateway attachments.&lt;/li&gt;
&lt;li&gt;Clear address planning reduces operational overhead for monitoring, logging, and troubleshooting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another consideration is automation. Many teams use Infrastructure as Code tools like Terraform or AWS CloudFormation. Well-planned CIDR ranges and subnet layouts make it much easier to template VPCs, reducing errors and deployment time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tips for Planning
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Visualize your VPC&lt;/strong&gt;: Even a simple diagram helps understand where subnets and routes interact.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Leave buffer IPs&lt;/strong&gt;: Reserve extra addresses for future scaling, new workloads, or temporary needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use descriptive naming&lt;/strong&gt;: Makes it easier to track subnets and route table associations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document everything&lt;/strong&gt;: Keep a spreadsheet or diagram of CIDR blocks, subnets, and their purpose.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These small practices save a lot of time when scaling, auditing, or troubleshooting.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;CIDR and subnetting are the foundation of AWS networking. Choosing the right VPC range and organizing subnets carefully ensures smooth routing, security, and scalability. Planning these fundamentals early makes future growth and operations much easier.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>networking</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
