<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Gergo Vadasz</title>
    <description>The latest articles on Forem by Gergo Vadasz (@gergovadasz).</description>
    <link>https://forem.com/gergovadasz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3821728%2Fd5a8abe6-b1bf-406d-870c-9439752e608b.jpg</url>
      <title>Forem: Gergo Vadasz</title>
      <link>https://forem.com/gergovadasz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gergovadasz"/>
    <language>en</language>
    <item>
      <title>No More Middlemen: Native AWS to Google Cloud Connectivity Explained</title>
      <dc:creator>Gergo Vadasz</dc:creator>
      <pubDate>Fri, 24 Apr 2026 21:42:23 +0000</pubDate>
      <link>https://forem.com/gergovadasz/no-more-middlemen-native-aws-to-google-cloud-connectivity-explained-1nli</link>
      <guid>https://forem.com/gergovadasz/no-more-middlemen-native-aws-to-google-cloud-connectivity-explained-1nli</guid>
      <description>&lt;p&gt;Until now, connecting AWS and Google Cloud meant stitching together VPNs over the public internet, colocating in the same facility with two separate cross-connects, or paying a third-party network provider to bridge the gap. These approaches work - plenty of companies rely on them daily - but they all come with operational overhead and added complexity that make multi-cloud connectivity harder to set up and maintain than it needs to be.&lt;/p&gt;

&lt;p&gt;That changed on April 14, 2026, when AWS launched &lt;strong&gt;AWS Interconnect - multicloud&lt;/strong&gt; with Google Cloud as the first partner. For the first time, you can provision a dedicated, private connection between AWS and GCP directly from the console - no middleman, no colo, no VPN tunnels. I got hands-on with it the same week it went GA, and in this post I'll walk you through exactly how to set it up, step by step.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS Interconnect - multicloud?
&lt;/h2&gt;

&lt;p&gt;AWS Interconnect - multicloud is a new service under the AWS Direct Connect family that lets you create private, high-bandwidth connections directly to other cloud providers. At launch, Google Cloud is the only supported partner, with Microsoft Azure and Oracle Cloud Infrastructure coming later in 2026.&lt;/p&gt;

&lt;p&gt;The key things to know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It lives under &lt;strong&gt;Direct Connect&lt;/strong&gt; in the AWS console&lt;/li&gt;
&lt;li&gt;Connections are region-to-region (e.g., AWS eu-central-1 to GCP europe-west3)&lt;/li&gt;
&lt;li&gt;It uses a Direct Connect gateway on the AWS side and Partner Cross-Cloud Interconnect on the GCP side&lt;/li&gt;
&lt;li&gt;Bandwidth ranges from 1 Gbps to 100 Gbps, with granular sizing - unlike traditional Cross-Cloud Interconnect which only offers 10G/100G increments&lt;/li&gt;
&lt;li&gt;Redundancy is built into the underlying resources - no need to manually configure redundant connections like with traditional Cross-Cloud Interconnect&lt;/li&gt;
&lt;li&gt;The connection can be initiated from either the AWS or the GCP side&lt;/li&gt;
&lt;li&gt;Each GCP project is limited to one transport resource per region&lt;/li&gt;
&lt;li&gt;Pricing is based on bandwidth and geographic scope&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Free tier&lt;/strong&gt;: &lt;a href="https://aws.amazon.com/about-aws/whats-new/2026/04/aws-announces-ga-AWS-interconnect-multicloud/" rel="noopener noreferrer"&gt;One free local 500 Mbps interconnect per region, starting May 2026&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;Before diving into the setup, here's what the end-to-end architecture looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7kayjic5ll14f820ij8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7kayjic5ll14f820ij8.png" alt="AWS to Google Cloud Interconnect architecture" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the AWS side, traffic flows from your EC2 instances through a &lt;strong&gt;VPC Attachment&lt;/strong&gt; into a &lt;strong&gt;Transit Gateway&lt;/strong&gt;, then via a &lt;strong&gt;Direct Connect Attachment&lt;/strong&gt; to a &lt;strong&gt;Direct Connect Gateway&lt;/strong&gt;. The Direct Connect Gateway connects to GCP through the &lt;strong&gt;Partner Cross-Cloud Interconnect&lt;/strong&gt; - this is the actual cross-cloud link that AWS provisions behind the scenes. On the Google Cloud side, the interconnect attaches to your &lt;strong&gt;VPC&lt;/strong&gt; via &lt;strong&gt;GCP VPC Network Peering&lt;/strong&gt; - to be clear, this is not a peering between the AWS VPC and the GCP VPC. It's a standard GCP VPC Network Peering used to connect the interconnect's managed network to your own GCP VPC, giving your &lt;strong&gt;Compute Engine&lt;/strong&gt; instances direct reachability to AWS resources.&lt;/p&gt;

&lt;p&gt;VPC Network Peering is not the only option on the GCP side - you can also use &lt;strong&gt;Network Connectivity Center (NCC)&lt;/strong&gt; to connect the Partner Cross-Cloud Interconnect to your Google Cloud environment. NCC is the better choice if you need to connect multiple VPCs or integrate this into a broader hub-and-spoke topology on the GCP side. In this walkthrough, I'm using VPC Network Peering for simplicity.&lt;/p&gt;

&lt;p&gt;The key takeaway from this diagram is that both sides use familiar networking primitives - there's no new proprietary overlay. If you've worked with AWS Transit Gateway or GCP Partner Interconnect before, the building blocks will feel familiar. What makes this work is that AWS and Google maintain pre-established network links between their regions. This service automates the provisioning of a dedicated connection over that shared infrastructure - no manual cross-connects or third-party involvement needed.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin, you'll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with access to Direct Connect in a supported region&lt;/li&gt;
&lt;li&gt;A Google Cloud project with the Network Connectivity API enabled (see the command after this list)&lt;/li&gt;
&lt;li&gt;Appropriate IAM permissions on both sides&lt;/li&gt;
&lt;/ul&gt;
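
&lt;p&gt;If the Network Connectivity API isn't enabled on your project yet, a one-liner takes care of it (assuming an authenticated &lt;code&gt;gcloud&lt;/code&gt; session pointed at the right project):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud services enable networkconnectivity.googleapis.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;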

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Create the Multicloud Interconnect in AWS
&lt;/h2&gt;

&lt;p&gt;Navigate to &lt;strong&gt;Direct Connect &amp;gt; AWS Interconnect - multicloud&lt;/strong&gt; in the AWS console. You'll see the interconnect dashboard - click &lt;strong&gt;Create Multicloud Interconnect&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8gbi9z2uk27ypjthb89.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8gbi9z2uk27ypjthb89.png" alt="AWS Interconnect multicloud dashboard" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Select a provider
&lt;/h3&gt;

&lt;p&gt;The first step is selecting your cloud provider. Currently, only &lt;strong&gt;Google Cloud&lt;/strong&gt; is available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8byjodjha9eu2cfzhmzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8byjodjha9eu2cfzhmzu.png" alt="Select Google Cloud as the provider" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Select regions
&lt;/h3&gt;

&lt;p&gt;Choose the AWS region and the corresponding Google Cloud region for your interconnect. Your region choices determine the physical connection path, which affects latency and performance. In my setup, I'm using &lt;strong&gt;eu-central-1&lt;/strong&gt; (Frankfurt) on the AWS side and &lt;strong&gt;europe-west3&lt;/strong&gt; (Frankfurt) on the GCP side - keeping both in the same metro for the lowest latency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46g9ih6sb5df55isd455.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46g9ih6sb5df55isd455.png" alt="Select AWS and GCP regions" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure options
&lt;/h3&gt;

&lt;p&gt;Configure the interconnect details: give it a description, select a &lt;strong&gt;Direct Connect gateway&lt;/strong&gt; (or create one), choose your bandwidth, and add any tags. On the right side, you'll see the option to create a new Direct Connect gateway if you don't have one yet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7tyhg59xxaz2ealw0ck.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7tyhg59xxaz2ealw0ck.png" alt="Configure interconnect options and Direct Connect gateway" width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once submitted, AWS provisions the interconnect and generates an &lt;strong&gt;activation key&lt;/strong&gt;. This key is what ties the AWS side to the GCP side - copy it, you'll need it in the next step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkusjbu8082ihx3w4gsv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkusjbu8082ihx3w4gsv.png" alt="Activation key generated by AWS" width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Create the Transport in Google Cloud
&lt;/h2&gt;

&lt;p&gt;Now switch to the &lt;strong&gt;Google Cloud Console&lt;/strong&gt;. Navigate to &lt;strong&gt;Network Connectivity &amp;gt; Partner Cross-Cloud Interconnect&lt;/strong&gt; and click &lt;strong&gt;Create Transport&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdhi1qxd5uwspyzy4tpr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdhi1qxd5uwspyzy4tpr.png" alt="GCP Partner Cross-Cloud Interconnect dashboard" width="800" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Connection start point
&lt;/h3&gt;

&lt;p&gt;Paste the &lt;strong&gt;activation key&lt;/strong&gt; from AWS. Google Cloud will automatically detect the remote cloud provider and region. A transport profile is pre-provisioned for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ot4qjyvwcqttsf1uc0x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ot4qjyvwcqttsf1uc0x.png" alt="Paste activation key and transport profile" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Transport profile
&lt;/h3&gt;

&lt;p&gt;The transport profile confirms the connection details: the remote cloud service provider (Amazon Web Services), region, description, and bandwidth.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5r6onwkhq21z1av73b2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5r6onwkhq21z1av73b2w.png" alt="Transport profile details" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic configuration
&lt;/h3&gt;

&lt;p&gt;Configure the transport name, bandwidth, IP stack type (IPv4 single stack), and transport connectivity settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qlwh2gbl4p2uoikpf92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qlwh2gbl4p2uoikpf92.png" alt="Basic configuration" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Connection
&lt;/h3&gt;

&lt;p&gt;Select the appropriate VPC on the GCP side, and specify which IP ranges GCP should advertise towards AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkhzjum3m9mxqps0u613.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkhzjum3m9mxqps0u613.png" alt="Connection configuration" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Verify the transport via CLI
&lt;/h3&gt;

&lt;p&gt;You can also verify and manage the transport using &lt;code&gt;gcloud&lt;/code&gt;. Note that at the time of writing, the transport commands are only available in the &lt;code&gt;beta&lt;/code&gt; track:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud beta network-connectivity transports list

NAME: gcp-to-aws
REGION: europe-west3
REMOTE_PROFILE: aws-eu-central-1
BANDWIDTH: BPS_1G
STATE: ACTIVE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more details, use &lt;code&gt;describe&lt;/code&gt; with the &lt;code&gt;--region&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud beta network-connectivity transports describe gcp-to-aws &lt;span class="nt"&gt;--region&lt;/span&gt; europe-west3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Set up VPC Network Peering via CLI
&lt;/h3&gt;

&lt;p&gt;Once the transport is active, create the VPC Network Peering between your VPC and the transport's managed network. You can find the peering network URI in the transport's &lt;code&gt;peeringNetwork&lt;/code&gt; field from the describe output above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute networks peerings create &lt;span class="s2"&gt;"gcp-to-aws"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--network&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gcp-vpc"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--peer-network&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"projects/n088e7d12bbcf2d64p-tp/global/networks/transport-5c75b1ed8bc1eeec-vpc"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--stack-type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;IPV4_ONLY &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--import-custom-routes&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--export-custom-routes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure to enable &lt;code&gt;--import-custom-routes&lt;/code&gt; and &lt;code&gt;--export-custom-routes&lt;/code&gt; so that routes are exchanged between your VPC and the interconnect.&lt;/p&gt;
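
&lt;p&gt;To double-check the peering from the CLI, list the peerings on your VPC; the one created above should report an &lt;code&gt;ACTIVE&lt;/code&gt; state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute networks peerings list --network=gcp-vpc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;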

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Wait for the Connection to come up
&lt;/h2&gt;

&lt;p&gt;Since we initiated the connection from the AWS side and pasted the activation key into GCP, there's no additional key exchange needed. The GCP side confirms there is no pairing key to share back - the activation key from AWS was sufficient.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg7scq7mf0hi8smauwpa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg7scq7mf0hi8smauwpa.png" alt="GCP transport - no pairing key needed" width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Back in the AWS console, the interconnect status will transition from &lt;strong&gt;Pending&lt;/strong&gt; to &lt;strong&gt;Available&lt;/strong&gt; automatically once both sides have completed their configuration. No manual acceptance is required when the connection is initiated from AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ru0ihwj31isbz4hpstt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ru0ihwj31isbz4hpstt.png" alt="AWS interconnect status changed to available" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: AWS Networking Setup
&lt;/h2&gt;

&lt;p&gt;With the interconnect link established, you now need to wire up the AWS networking side to make traffic flow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a Transit Gateway
&lt;/h3&gt;

&lt;p&gt;Create a Transit Gateway to act as the central hub for routing between your VPCs and the interconnect.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcf76ev6c4qbm9u8h0a6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcf76ev6c4qbm9u8h0a6y.png" alt="Create Transit Gateway" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
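
&lt;p&gt;If you prefer to script this step, the Transit Gateway is a standard EC2 resource; a minimal AWS CLI sketch (the description and ASN are illustrative placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ec2 create-transit-gateway \
    --description "hub for the GCP interconnect" \
    --options AmazonSideAsn=64513 \
    --region eu-central-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;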

&lt;h3&gt;
  
  
  Direct Connect Gateway
&lt;/h3&gt;

&lt;p&gt;The Direct Connect gateway bridges the interconnect and your AWS networking. Link it to the Transit Gateway so traffic can flow between your VPCs and GCP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqah84k6hwx9h978ml80.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqah84k6hwx9h978ml80.png" alt="Direct Connect gateway configuration" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
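
&lt;p&gt;This step can be scripted too; a sketch for creating the gateway (the name and ASN are placeholders, and note that the gateway's Amazon-side ASN must differ from the Transit Gateway's):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws directconnect create-direct-connect-gateway \
    --direct-connect-gateway-name gcp-multicloud-gw \
    --amazon-side-asn 64512
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;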

&lt;h3&gt;
  
  
  Configure Allowed Prefixes
&lt;/h3&gt;

&lt;p&gt;Finally, specify the &lt;strong&gt;allowed prefixes&lt;/strong&gt; - these are the AWS-side CIDR ranges that will be advertised towards GCP over the interconnect.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5qa4c2rvtx7oin7rqvk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5qa4c2rvtx7oin7rqvk.png" alt="Associate gateway with allowed prefixes" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;
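
&lt;p&gt;From the CLI, the allowed prefixes are passed when you associate the Direct Connect gateway with the Transit Gateway; a sketch with placeholder IDs and an example CIDR matching my lab's AWS range:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws directconnect create-direct-connect-gateway-association \
    --direct-connect-gateway-id 11112222-3333-4444-5555-666677778888 \
    --gateway-id tgw-0123456789abcdef0 \
    --add-allowed-prefixes-to-direct-connect-gateway cidr=192.168.0.0/16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;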

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Verify the Connection
&lt;/h2&gt;

&lt;h3&gt;
  
  
  AWS side: Transit Gateway route table
&lt;/h3&gt;

&lt;p&gt;Once everything is associated, check the Transit Gateway route table. You should see routes learned from the GCP side via the interconnect.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yf6a5zyoyftz3cui0v2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yf6a5zyoyftz3cui0v2.png" alt="Transit Gateway route table with learned routes" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;
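
&lt;p&gt;The same check works from the CLI; a sketch that lists the routes propagated into a Transit Gateway route table (the route table ID is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ec2 search-transit-gateway-routes \
    --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
    --filters "Name=type,Values=propagated" \
    --region eu-central-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;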

&lt;h3&gt;
  
  
  GCP side: Transport details
&lt;/h3&gt;

&lt;p&gt;Back in Google Cloud, the transport details should show an &lt;strong&gt;Active&lt;/strong&gt; status, confirming the connection is up and running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xywtqntyrvtescynsz0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xywtqntyrvtescynsz0.png" alt="GCP transport details showing active status" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  GCP side: VPC routes
&lt;/h3&gt;

&lt;p&gt;Finally, check your GCP VPC routes. You should see the AWS prefixes appearing in the route table, learned through the cross-cloud interconnect.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckjmbwv1o9fnqan7000a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckjmbwv1o9fnqan7000a.png" alt="GCP VPC routes showing AWS prefixes" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;
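
&lt;p&gt;Because the AWS prefixes arrive as custom routes over the VPC peering, you can also inspect them with &lt;code&gt;peerings list-routes&lt;/code&gt;, using the peering and network names from earlier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute networks peerings list-routes gcp-to-aws \
    --network=gcp-vpc \
    --region=europe-west3 \
    --direction=INCOMING
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;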

&lt;h3&gt;
  
  
  Ping test
&lt;/h3&gt;

&lt;p&gt;With routes in place on both sides, let's verify end-to-end connectivity. From the AWS EC2 instance (192.168.0.10), pinging the GCP VM (10.0.0.2):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu@ip-192-168-0-10:~$ ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=61 time=4.16 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=61 time=1.60 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=61 time=1.62 ms
--- 10.0.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.598/2.456/4.155/1.201 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And from the GCP VM (10.0.0.2), pinging back to the AWS instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gergo@gcp-test-vm:~$ ping 192.168.0.10
PING 192.168.0.10 (192.168.0.10) 56(84) bytes of data.
64 bytes from 192.168.0.10: icmp_seq=1 ttl=61 time=2.42 ms
64 bytes from 192.168.0.10: icmp_seq=2 ttl=61 time=1.33 ms
64 bytes from 192.168.0.10: icmp_seq=3 ttl=61 time=1.44 ms
64 bytes from 192.168.0.10: icmp_seq=4 ttl=61 time=1.36 ms
--- 192.168.0.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 1.334/1.636/2.419/0.453 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sub-2ms latency between Frankfurt regions - that's what you'd expect from a dedicated interconnect within the same metro.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  First Impressions
&lt;/h2&gt;

&lt;p&gt;After setting this up, a few things stand out:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I liked:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The activation key / pairing key exchange is straightforward - it's similar to how Partner Interconnect works in GCP today&lt;/li&gt;
&lt;li&gt;End-to-end setup took under an hour, which is remarkable compared to traditional cross-connect provisioning&lt;/li&gt;
&lt;li&gt;The integration with existing AWS networking primitives (Direct Connect gateway, Transit Gateway) means you can plug this into an existing hub-and-spoke architecture without redesigning anything&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What to watch out for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Direct Connect gateway and Transit Gateway association is an extra step that could trip up users who are new to AWS networking. A virtual private gateway can also be used instead of a Transit Gateway.&lt;/li&gt;
&lt;li&gt;Terraform support is not yet available, though it's &lt;a href="https://github.com/hashicorp/terraform-provider-aws/issues/47458" rel="noopener noreferrer"&gt;being tracked&lt;/a&gt; - for now, it's console/CLI only&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What I'm curious about:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How the free 500 Mbps tier (coming May 2026) will work in practice&lt;/li&gt;
&lt;li&gt;Performance characteristics compared to VPN-over-internet approaches&lt;/li&gt;
&lt;li&gt;How Azure and Oracle Cloud integrations will look when they launch later this year - cross-cloud connectivity is already possible between &lt;a href="https://gergovadasz.hu/secure-and-private-connectivity-between-azure-and-oracle-cloud/" rel="noopener noreferrer"&gt;Azure and Oracle Cloud&lt;/a&gt;, so it will be interesting to see how the AWS Interconnect approach compares&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS Interconnect - multicloud is a significant step forward for multi-cloud networking. It removes the biggest friction point - the physical connectivity - and turns what used to be a weeks-long procurement process into something you can set up in an afternoon. If you're running workloads across AWS and Google Cloud, this is worth evaluating immediately, especially with the free 500 Mbps tier on the horizon.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2026/04/aws-announces-ga-AWS-interconnect-multicloud/" rel="noopener noreferrer"&gt;AWS Interconnect - multicloud GA announcement&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.cloud.google.com/network-connectivity/docs/interconnect/concepts/partner-cci-for-aws-overview" rel="noopener noreferrer"&gt;Partner Cross-Cloud Interconnect for AWS overview - Google Cloud&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/hashicorp/terraform-provider-aws/issues/47458" rel="noopener noreferrer"&gt;Terraform AWS provider - AWS Interconnect support request (GitHub)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://gergovadasz.hu/no-more-middlemen-native-aws-to-google-cloud-connectivity-explained/" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>googlecloud</category>
      <category>networking</category>
      <category>multicloud</category>
    </item>
    <item>
      <title>Deploy a Private Website with Cloudflare Zero Trust and Terraform</title>
      <dc:creator>Gergo Vadasz</dc:creator>
      <pubDate>Thu, 23 Apr 2026 13:00:00 +0000</pubDate>
      <link>https://forem.com/gergovadasz/deploy-a-private-website-with-cloudflare-zero-trust-and-terraform-5gcn</link>
      <guid>https://forem.com/gergovadasz/deploy-a-private-website-with-cloudflare-zero-trust-and-terraform-5gcn</guid>
      <description>&lt;p&gt;Cloudflare Zero Trust is a security platform that lets you control who can access your internal or private applications — without using a traditional VPN. It authenticates users through methods like email or Google/Microsoft accounts before granting access.&lt;/p&gt;

&lt;p&gt;In this post, I'll show you how to deploy a private website behind Cloudflare Zero Trust using Terraform, with a VM hosted on Google Cloud.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  What You'll Need
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A free Cloudflare account&lt;/li&gt;
&lt;li&gt;A domain managed by Cloudflare&lt;/li&gt;
&lt;li&gt;Cloudflare Zero Trust activated&lt;/li&gt;
&lt;li&gt;Infrastructure to host the website (VM, PaaS, etc.)&lt;/li&gt;
&lt;li&gt;For this guide: a Google Cloud project with VPC network access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Collect Cloudflare Account Details
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;terraform.tfvars&lt;/code&gt; file with your Cloudflare and GCP details:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;cloudflare_zone&lt;/span&gt;           &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"yourdomain.com"&lt;/span&gt;
&lt;span class="nx"&gt;cloudflare_zone_id&lt;/span&gt;        &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ce...."&lt;/span&gt;
&lt;span class="nx"&gt;cloudflare_account_id&lt;/span&gt;     &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"7a...."&lt;/span&gt;
&lt;span class="nx"&gt;cloudflare_email&lt;/span&gt;          &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"[email protected]"&lt;/span&gt;
&lt;span class="nx"&gt;cloudflare_token&lt;/span&gt;          &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"b6...."&lt;/span&gt;
&lt;span class="nx"&gt;gcp_project_id&lt;/span&gt;            &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"your-gcp-project"&lt;/span&gt;
&lt;span class="nx"&gt;zone&lt;/span&gt;                      &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"europe-west4-a"&lt;/span&gt;
&lt;span class="nx"&gt;machine_type&lt;/span&gt;              &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"e2-small"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;API Token Requirements:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloudflare Tunnel: Edit&lt;/li&gt;
&lt;li&gt;Access: Apps and Policies: Edit&lt;/li&gt;
&lt;li&gt;DNS: Edit&lt;/li&gt;
&lt;li&gt;Zero Trust: Edit&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy with Terraform
&lt;/h2&gt;

&lt;p&gt;The Terraform code is available in my public repository: &lt;a href="https://github.com/vadaszgergo/terraform-public/tree/main/cloudflare-zero-trust-web-application" rel="noopener noreferrer"&gt;github.com/vadaszgergo/terraform-public/tree/main/cloudflare-zero-trust-web-application&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Deployment involves:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Cloudflare Zero Trust resource creation (takes seconds)&lt;/li&gt;
&lt;li&gt;VM provisioning in Google Cloud&lt;/li&gt;
&lt;li&gt;Auto-installation via cloud-init script (5-6 minutes):

&lt;ul&gt;
&lt;li&gt;OS updates and package installation&lt;/li&gt;
&lt;li&gt;Static website creation&lt;/li&gt;
&lt;li&gt;Cloudflared tunnel configuration and startup&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
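
&lt;p&gt;A minimal sketch of the full deployment, assuming you clone the repository and have your &lt;code&gt;terraform.tfvars&lt;/code&gt; from above in place:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/vadaszgergo/terraform-public.git
cd terraform-public/cloudflare-zero-trust-web-application
terraform init
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;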

&lt;p&gt;Once complete, your website is accessible at &lt;code&gt;http_app.yourdomain.com&lt;/code&gt; — but only after email authentication through Cloudflare's access policy.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes This Powerful
&lt;/h2&gt;

&lt;p&gt;This setup can serve as a secure entry point for both private and public websites. The flexibility is what makes it interesting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Private applications:&lt;/strong&gt; Internal dashboards, admin panels, staging environments — accessible only to authenticated users&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Public with protection:&lt;/strong&gt; Your production site behind DDoS protection and WAF&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Any hosting backend:&lt;/strong&gt; Works with VMs, containers, home labs, or any environment that can run &lt;code&gt;cloudflared&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You get all of this without opening any inbound ports on your server, without configuring a VPN, and without managing certificates manually.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It's surprisingly simple to protect your applications using Cloudflare's powerful policies and authentication features, without relying on a traditional VPN. The Terraform code handles everything — from the Cloudflare tunnel and access policies to the GCP VM and website setup.&lt;/p&gt;

&lt;p&gt;You can extend this further with multi-user policies, device posture checks, and Cloudflare's analytics dashboard.&lt;/p&gt;

&lt;p&gt;Check out the full guide and Terraform code at &lt;a href="https://gergovadasz.hu/deploy-a-private-website-with-cloudflare-zero-trust-and-terraform/?utm_source=devto&amp;amp;utm_medium=crosspost" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://gergovadasz.hu" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;. I write hands-on cloud networking guides with production-ready Terraform code for AWS, Azure, and GCP. &lt;a href="https://gergovadasz.hu/#/portal/signup?utm_source=devto&amp;amp;utm_medium=crosspost" rel="noopener noreferrer"&gt;Subscribe for more&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cloudflare</category>
      <category>terraform</category>
      <category>security</category>
      <category>networking</category>
    </item>
    <item>
      <title>Save NAT Gateway Costs by Using an EC2 - Terraform Code Included</title>
      <dc:creator>Gergo Vadasz</dc:creator>
      <pubDate>Tue, 21 Apr 2026 13:00:00 +0000</pubDate>
      <link>https://forem.com/gergovadasz/save-nat-gateway-costs-by-using-an-ec2-terraform-code-included-507a</link>
      <guid>https://forem.com/gergovadasz/save-nat-gateway-costs-by-using-an-ec2-terraform-code-included-507a</guid>
      <description>&lt;p&gt;AWS NAT Gateways cost at least $33/month before you even send a byte of data. For dev environments, small startups, or personal projects, that's a lot of money for something a $3.50/month EC2 instance can handle.&lt;/p&gt;

&lt;p&gt;In this post, I'll show you how to use a small EC2 instance as a NAT device — and provide the complete Terraform code to deploy it.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  The Cost Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Solution&lt;/th&gt;
&lt;th&gt;Monthly Base Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AWS NAT Gateway&lt;/td&gt;
&lt;td&gt;~$33 + data processing fees&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;EC2 NAT Instance (t2.micro)&lt;/td&gt;
&lt;td&gt;~$3.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Elastic IP (per instance)&lt;/td&gt;
&lt;td&gt;$3.60&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The trade-off: an EC2 NAT instance introduces a potential single point of failure without an HA setup. For production, you'd want multiple instances. But even two EC2 NAT instances (one per AZ) are more economical than a single NAT Gateway.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;The concept is simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deploy a Linux server in a public subnet&lt;/li&gt;
&lt;li&gt;Create a route table for private subnets pointing &lt;code&gt;0.0.0.0/0&lt;/code&gt; to the EC2 instance's network interface&lt;/li&gt;
&lt;li&gt;Disable source/destination check on the EC2 instance (see the CLI sketch after this list)&lt;/li&gt;
&lt;li&gt;Configure IP forwarding and iptables NAT rules on the instance&lt;/li&gt;
&lt;/ol&gt;
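
&lt;p&gt;Steps 2 and 3 map to two AWS CLI calls; a sketch with placeholder resource IDs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# step 2: point the private route table's default route at the instance's ENI
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --network-interface-id eni-0123456789abcdef0

# step 3: disable the source/destination check on the NAT instance
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-source-dest-check
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;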

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration Commands
&lt;/h2&gt;

&lt;p&gt;The EC2 instance needs IP forwarding enabled and iptables configured for NAT:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Enable IP forwarding&lt;/span&gt;
&lt;span class="nb"&gt;echo &lt;/span&gt;1 | &lt;span class="nb"&gt;tee&lt;/span&gt; /proc/sys/net/ipv4/ip_forward

&lt;span class="c"&gt;# Make it persistent&lt;/span&gt;
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/net.ipv4.ip_forward=1/s/^#//g'&lt;/span&gt; /etc/sysctl.conf
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/net.ipv4.conf.all.accept_redirects=0/s/^#//g'&lt;/span&gt; /etc/sysctl.conf
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'/net.ipv4.conf.all.send_redirects=0/s/^#//g'&lt;/span&gt; /etc/sysctl.conf
sysctl &lt;span class="nt"&gt;-p&lt;/span&gt;

&lt;span class="c"&gt;# Configure iptables for NAT masquerading&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/iptables
iptables &lt;span class="nt"&gt;-t&lt;/span&gt; nat &lt;span class="nt"&gt;-A&lt;/span&gt; POSTROUTING &lt;span class="nt"&gt;-o&lt;/span&gt; eth0 &lt;span class="nt"&gt;-j&lt;/span&gt; MASQUERADE
iptables-save &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/iptables/rules.v4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Implementation
&lt;/h2&gt;

&lt;p&gt;The Terraform code handles everything:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC with CIDR &lt;code&gt;10.0.0.0/16&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Internet Gateway&lt;/li&gt;
&lt;li&gt;2 public subnets (&lt;code&gt;10.0.1.0/24&lt;/code&gt;, &lt;code&gt;10.0.2.0/24&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;2 private subnets (&lt;code&gt;10.0.3.0/24&lt;/code&gt;, &lt;code&gt;10.0.4.0/24&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;EC2 NAT instance (t2.micro, Ubuntu 22.04 LTS)&lt;/li&gt;
&lt;li&gt;Route tables and security groups&lt;/li&gt;
&lt;li&gt;Disabled source/destination checking&lt;/li&gt;
&lt;li&gt;Cloud-init script for automated NAT configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deployment is straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The cloud-init script handles all the NAT configuration automatically on first boot — no SSH required.&lt;/p&gt;
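
&lt;p&gt;A quick way to verify the setup: from an instance in one of the private subnets, check connectivity and which public IP the outside world sees; the latter should be the NAT instance's Elastic IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# run from a VM in a private subnet
ping -c 3 google.com
curl -s ifconfig.me
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;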

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use This
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Good for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Development and testing environments&lt;/li&gt;
&lt;li&gt;Startups and small businesses watching cloud costs&lt;/li&gt;
&lt;li&gt;Personal projects and labs&lt;/li&gt;
&lt;li&gt;Any environment where $33/month per NAT Gateway adds up&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Stick with NAT Gateway for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Production workloads requiring high availability out of the box&lt;/li&gt;
&lt;li&gt;High-throughput scenarios (NAT Gateway scales to 45 Gbps)&lt;/li&gt;
&lt;li&gt;Environments where operational simplicity outweighs cost savings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;An EC2 NAT instance is a simple, inexpensive substitute for a NAT Gateway in cost-sensitive environments. Even if you need high availability, deploying multiple EC2 NAT instances (one per AZ) remains more economical than multiple NAT Gateways while keeping private resources' internet access secure.&lt;/p&gt;

&lt;p&gt;Get the complete Terraform code at &lt;a href="https://gergovadasz.hu/save-nat-gateway-costs-by-using-an-ec2-terraform-code-included/?utm_source=devto&amp;amp;utm_medium=crosspost" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://gergovadasz.hu" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;. I write hands-on cloud networking guides with production-ready Terraform code for AWS, Azure, and GCP. &lt;a href="https://gergovadasz.hu/#/portal/signup?utm_source=devto&amp;amp;utm_medium=crosspost" rel="noopener noreferrer"&gt;Subscribe for more&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Hub-and-Spoke Topology with Azure Firewall - Deployment Guide with Terraform</title>
      <dc:creator>Gergo Vadasz</dc:creator>
      <pubDate>Thu, 16 Apr 2026 13:00:00 +0000</pubDate>
      <link>https://forem.com/gergovadasz/hub-and-spoke-topology-with-azure-firewall-deployment-guide-with-terraform-4cci</link>
      <guid>https://forem.com/gergovadasz/hub-and-spoke-topology-with-azure-firewall-deployment-guide-with-terraform-4cci</guid>
      <description>&lt;p&gt;Enterprise organizations frequently employ hub-and-spoke network architectures. This design streamlines network administration, enables centralized traffic inspection, and establishes a single connectivity gateway to and from the internet. In Azure, Azure Firewall is frequently the go-to solution for implementing such a centralized security model.&lt;/p&gt;

&lt;p&gt;In this post, I'll walk through a simplified hub-and-spoke configuration where Azure Firewall manages both north-south (internet-bound) and east-west (inter-spoke) traffic filtering.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  What is Hub and Spoke Topology?
&lt;/h2&gt;

&lt;p&gt;A hub-and-spoke topology consists of a central hub serving as the primary connection point, with all other networks (spokes) connecting to it. In Azure, the hub hosts shared services — firewalls, VPN gateways, monitoring tools — while spokes contain workloads dependent on the hub for connectivity and security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key advantages:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplifies management across multiple networks&lt;/li&gt;
&lt;li&gt;Centralizes shared services deployment&lt;/li&gt;
&lt;li&gt;Reduces operational complexity as scale increases&lt;/li&gt;
&lt;li&gt;Improves security posture through unified policy enforcement&lt;/li&gt;
&lt;li&gt;Enhances scalability — adding spokes doesn't require reworking multiple peerings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Azure Firewall as a Central Security Solution
&lt;/h2&gt;

&lt;p&gt;Azure Firewall is a managed, cloud-native network security service controlling and logging traffic across Azure environments. In hub-and-spoke topologies, it serves as the enforcement point for both inbound/outbound connections and lateral spoke-to-spoke traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of central firewall placement:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Centralized traffic inspection with consistent policy enforcement&lt;/li&gt;
&lt;li&gt;Automatic scalability and built-in high availability&lt;/li&gt;
&lt;li&gt;Single location for rule management instead of per-spoke deployments&lt;/li&gt;
&lt;li&gt;Advanced threat protection via Threat Intelligence filtering, FQDN filtering, and Application Rules&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9jgjluc5vr3t9scv73t.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9jgjluc5vr3t9scv73t.jpg" alt="Azure Hub-Spoke with Azure Firewall" width="800" height="767"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hub VNet:&lt;/strong&gt; &lt;code&gt;10.0.0.0/16&lt;/code&gt; containing &lt;code&gt;AzureFirewallSubnet&lt;/code&gt; with dedicated public IP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spoke 1:&lt;/strong&gt; &lt;code&gt;192.168.0.0/16&lt;/code&gt; with subnet &lt;code&gt;192.168.1.0/24&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spoke 2:&lt;/strong&gt; &lt;code&gt;172.16.0.0/16&lt;/code&gt; with subnet &lt;code&gt;172.16.1.0/24&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Components:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;VNet Peerings:&lt;/strong&gt; Hub connects to each spoke with forwarded traffic enabled&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User-Defined Routes:&lt;/strong&gt; Each spoke subnet routes &lt;code&gt;0.0.0.0/0&lt;/code&gt; to firewall private IP (Virtual Appliance)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exception Routes:&lt;/strong&gt; SSH access from specific external IPs bypasses firewall&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Firewall Rules:&lt;/strong&gt; Permit ICMP/TCP (22/80/443) between spokes and to internet&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing VMs:&lt;/strong&gt; Ubuntu VMs in each spoke with public IPs for connectivity testing&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt; Direct spoke-to-internet and spoke-to-spoke communication is blocked; all traffic routes through the firewall for inspection and logging.&lt;/p&gt;
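
&lt;p&gt;To make the UDR piece concrete, here's a hedged Azure CLI sketch for steering a spoke's default route to the firewall. Resource and subnet names are placeholders, and the next-hop IP assumes the firewall received the first usable address in &lt;code&gt;AzureFirewallSubnet&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# route table with a default route towards the firewall's private IP
az network route-table create \
    --resource-group hub-spoke-rg \
    --name spoke1-rt

az network route-table route create \
    --resource-group hub-spoke-rg \
    --route-table-name spoke1-rt \
    --name default-via-firewall \
    --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address 10.0.0.4

# attach the route table to the spoke's workload subnet
az network vnet subnet update \
    --resource-group hub-spoke-rg \
    --vnet-name spoke1-vnet \
    --name workload-subnet \
    --route-table spoke1-rt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;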

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Challenge: Azure Firewall SNAT Port Exhaustion
&lt;/h2&gt;

&lt;p&gt;A critical limitation worth knowing about: an Azure Firewall with a single public IP provides 2,496 SNAT ports per backend virtual machine instance, which comes to roughly 4,992 ports in total with the default two instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scaling solutions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Attach up to 250 public IPs (expensive; operationally complex for external whitelisting)&lt;/li&gt;
&lt;li&gt;Integrate NAT Gateway for 64,512 ports per IP supporting up to 16 IPs (1+ million pooled ports)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is something that often catches people off guard in production, so plan for it early if you expect significant outbound connections.&lt;/p&gt;
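
&lt;p&gt;If you opt for the NAT Gateway integration, it comes down to a subnet association; a sketch, assuming a NAT gateway named &lt;code&gt;hub-natgw&lt;/code&gt; already exists in the hub resource group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az network vnet subnet update \
    --resource-group hub-spoke-rg \
    --vnet-name hub-vnet \
    --name AzureFirewallSubnet \
    --nat-gateway hub-natgw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;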

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;The complete Terraform code deploys this entire hub-and-spoke architecture with Azure Firewall — VNets, peerings, UDRs, firewall rules, and test VMs. A single &lt;code&gt;terraform apply&lt;/code&gt; gets you a working lab.&lt;/p&gt;

&lt;p&gt;Check out the full code at &lt;a href="https://gergovadasz.hu/hub-and-spoke-topology-with-azure-firewall-deployment-guide-with-terraform/?utm_source=devto&amp;amp;utm_medium=crosspost" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://gergovadasz.hu" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;. I write hands-on cloud networking guides with production-ready Terraform code for AWS, Azure, and GCP. &lt;a href="https://gergovadasz.hu/#/portal/signup?utm_source=devto&amp;amp;utm_medium=crosspost" rel="noopener noreferrer"&gt;Subscribe for more&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>terraform</category>
      <category>networking</category>
      <category>security</category>
    </item>
    <item>
      <title>Deploying an AWS Multi-Region Hub-Spoke Architecture with Terraform</title>
      <dc:creator>Gergo Vadasz</dc:creator>
      <pubDate>Tue, 14 Apr 2026 13:00:00 +0000</pubDate>
      <link>https://forem.com/gergovadasz/deploying-an-aws-multi-region-hub-spoke-architecture-with-terraform-3jp0</link>
      <guid>https://forem.com/gergovadasz/deploying-an-aws-multi-region-hub-spoke-architecture-with-terraform-3jp0</guid>
      <description>&lt;p&gt;I recently tested a multi-region hub-and-spoke architecture in AWS using Transit Gateways. While this architecture may not suit all production scenarios, it effectively demonstrates Transit Gateway functionality and scalable multi-region networking principles.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  High-Level Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgb8wnhyx3oz0z58dk779.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgb8wnhyx3oz0z58dk779.jpg" alt="AWS Hub-Spoke Transit Gateway Architecture" width="800" height="990"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Components:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 Hub VPC in eu-west-1&lt;/li&gt;
&lt;li&gt;1 Spoke VPC in eu-central-1&lt;/li&gt;
&lt;li&gt;1 Spoke VPC in us-east-1&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dedicated Transit Gateways per region facilitate inter-region routing&lt;/li&gt;
&lt;li&gt;Spoke TGWs connect to Hub TGW, enabling spoke-to-spoke traffic through the hub&lt;/li&gt;
&lt;li&gt;Hub VPC provides internet outbound access via NAT gateways&lt;/li&gt;
&lt;li&gt;Deployment includes test Ubuntu servers: one in EU spoke public subnet, one in US spoke private subnet&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Walkthrough
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Deploying VPCs and Subnets
&lt;/h3&gt;

&lt;p&gt;Three VPCs deployed across regions with public/private subnets and route tables. These require later modifications for proper cross-region routing.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  2. Setting Up Internet Access
&lt;/h3&gt;

&lt;p&gt;Internet Gateways deployed in Hub and EU spoke VPCs to enable SSH access for testing.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  3. Deploying Transit Gateways and Attachments
&lt;/h3&gt;

&lt;p&gt;Three Transit Gateways created with two attachment types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VPC Attachments:&lt;/strong&gt; Connect VPCs to regional TGWs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Peering Attachments:&lt;/strong&gt; Connect Transit Gateways across regions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Communication flow: &lt;code&gt;VPC 1 → VPC Attachment → Transit GW 1 → Peering Attachment → Transit GW 2 → VPC Attachment → VPC 2&lt;/code&gt;&lt;/p&gt;
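
&lt;p&gt;The peering attachments are ordinary Transit Gateway peerings; a CLI sketch for one hub-to-spoke pair (IDs and the account ID are placeholders, and the accepter side has to approve the request):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# request the peering from the hub region
aws ec2 create-transit-gateway-peering-attachment \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --peer-transit-gateway-id tgw-0abcdef0123456789 \
    --peer-account-id 111122223333 \
    --peer-region eu-central-1 \
    --region eu-west-1

# accept it on the spoke side
aws ec2 accept-transit-gateway-peering-attachment \
    --transit-gateway-attachment-id tgw-attach-0123456789abcdef0 \
    --region eu-central-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;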

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  4. Configuring Transit Gateway Route Tables
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Spoke TGW Route Tables:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Default route (&lt;code&gt;0.0.0.0/0&lt;/code&gt;) directs traffic to peering attachment connecting spoke to hub&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Hub TGW Route Table:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EU Spoke VPC CIDR → EU Spoke Peering Attachment&lt;/li&gt;
&lt;li&gt;US Spoke VPC CIDR → US Spoke Peering Attachment&lt;/li&gt;
&lt;li&gt;Default Route (&lt;code&gt;0.0.0.0/0&lt;/code&gt;) → Hub VPC Attachment with NAT Gateways&lt;/li&gt;
&lt;/ul&gt;
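
&lt;p&gt;Peering attachments don't propagate routes dynamically, so each of these entries is a static route; a sketch for one hub-side entry with placeholder IDs (repeat for the other spoke CIDR and the default route):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# one static route per spoke CIDR towards its peering attachment
aws ec2 create-transit-gateway-route \
    --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
    --destination-cidr-block 10.0.0.0/16 \
    --transit-gateway-attachment-id tgw-attach-0123456789abcdef0 \
    --region eu-west-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;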

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  5. Modify VPC Route Tables
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Spoke VPCs:&lt;/strong&gt; Add default route (&lt;code&gt;0.0.0.0/0&lt;/code&gt;) to local Transit Gateway&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hub VPC:&lt;/strong&gt; Add routes for spoke VPC CIDRs (&lt;code&gt;10.0.0.0/16&lt;/code&gt; and &lt;code&gt;192.168.0.0/16&lt;/code&gt;) to local Hub TGW&lt;/p&gt;
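
&lt;p&gt;Each of these is a single &lt;code&gt;create-route&lt;/code&gt; call; a sketch for a spoke's default route, with placeholder IDs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --transit-gateway-id tgw-0123456789abcdef0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;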

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  6. Deploying NAT Gateways for Internet Access
&lt;/h3&gt;

&lt;p&gt;NAT Gateways deployed in Hub VPC public subnets route spoke VPC internet traffic. Hub private subnet route tables configured with &lt;code&gt;0.0.0.0/0&lt;/code&gt; pointing to NAT Gateways.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  7. Deploying and Testing Servers
&lt;/h3&gt;

&lt;p&gt;Testing procedure:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;SSH into EU Spoke public server via its public IP&lt;/li&gt;
&lt;li&gt;From EU server, SSH into US private server (verifies spoke-to-spoke communication)&lt;/li&gt;
&lt;li&gt;From US server, test internet connectivity:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ping google.com
curl ifconfig.me
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The public IP should match the Hub NAT Gateway IP, confirming correct outbound routing through the hub.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This architecture demonstrates how Transit Gateways enable scalable multi-region networking in AWS. Terraform automates the entire deployment, ensuring consistency and repeatability versus manual configuration.&lt;/p&gt;

&lt;p&gt;The complete Terraform code provisions everything — VPCs, Transit Gateways, peering attachments, route tables, NAT Gateways, and test servers across all three regions.&lt;/p&gt;

&lt;p&gt;Check out the full code at &lt;a href="https://gergovadasz.hu/deploying-an-aws-multi-region-hub-spoke-architecture-with-terraform/?utm_source=devto&amp;amp;utm_medium=crosspost" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://gergovadasz.hu" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;. I write hands-on cloud networking guides with production-ready Terraform code for AWS, Azure, and GCP. &lt;a href="https://gergovadasz.hu/#/portal/signup?utm_source=devto&amp;amp;utm_medium=crosspost" rel="noopener noreferrer"&gt;Subscribe for more&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>networking</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Free Uptime Monitoring Using a Self-Hosted Solution</title>
      <dc:creator>Gergo Vadasz</dc:creator>
      <pubDate>Sat, 11 Apr 2026 09:06:35 +0000</pubDate>
      <link>https://forem.com/gergovadasz/free-uptime-monitoring-using-a-self-hosted-solution-cpf</link>
      <guid>https://forem.com/gergovadasz/free-uptime-monitoring-using-a-self-hosted-solution-cpf</guid>
      <description>&lt;p&gt;I recently encountered a peculiar issue with one of my clients. They were experiencing intermittent connectivity problems with a specific website—random occurrences a few times a day where the site would become unreachable. They suspected the issue might be within their Azure virtual network environment. However, after a thorough review of their setup, I was fairly confident that the problem wasn't on the Azure side. The challenge was proving it definitively.&lt;/p&gt;

&lt;p&gt;What caught my attention was the website's unusual rate-limiting behavior. If I tested the site multiple times in quick succession, I would get blocked for 15 minutes. This behavior initially sidetracked me, but I eventually realized I needed a way to monitor the website's uptime to determine if it was going down intermittently. That's when I started exploring uptime monitoring solutions.&lt;/p&gt;

&lt;p&gt;There are several free or trial-based SaaS solutions available in the market, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;UptimeRobot&lt;/li&gt;
&lt;li&gt;Pingdom&lt;/li&gt;
&lt;li&gt;Site24x7&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While these tools are great, I wanted something more flexible and self-hosted since I already own a few servers. That's when I discovered &lt;a href="https://github.com/louislam/uptime-kuma" rel="noopener noreferrer"&gt;&lt;strong&gt;Uptime Kuma&lt;/strong&gt;&lt;/a&gt;, a free, open-source uptime monitoring tool. Its code is available on GitHub, and it offers both Docker and non-Docker installation options. I opted for the Docker installation since I already had other containers running.&lt;/p&gt;

&lt;p&gt;Getting started with Uptime Kuma is incredibly straightforward. With just a single Docker command, you can have it up and running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always &lt;span class="nt"&gt;-p&lt;/span&gt; 3001:3001 &lt;span class="nt"&gt;-v&lt;/span&gt; uptime-kuma:/app/data &lt;span class="nt"&gt;--name&lt;/span&gt; uptime-kuma louislam/uptime-kuma:1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
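
&lt;p&gt;A quick sanity check that the container came up, assuming the defaults from the command above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Container state and recent logs
docker ps --filter name=uptime-kuma
docker logs --tail 20 uptime-kuma

# The web UI should answer on port 3001
curl -I http://localhost:3001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;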



&lt;p&gt;Uptime Kuma boasts a clean, responsive user interface and is packed with features. It supports monitoring for a wide range of services, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTTP/HTTPS&lt;/li&gt;
&lt;li&gt;TCP&lt;/li&gt;
&lt;li&gt;Ping&lt;/li&gt;
&lt;li&gt;gRPC&lt;/li&gt;
&lt;li&gt;DNS&lt;/li&gt;
&lt;li&gt;MQTT&lt;/li&gt;
&lt;li&gt;Various databases (MySQL, PostgreSQL, etc.)&lt;/li&gt;
&lt;li&gt;RADIUS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can configure monitors with specific HTTP methods, expected response codes, headers, and even response bodies, making it highly customizable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixp3l2e38q7hdt27xw7m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fixp3l2e38q7hdt27xw7m.png" alt="Uptime Kuma monitor configuration interface" width="800" height="668"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you've added a monitor, Uptime Kuma provides detailed charts and logs showing the service's uptime and downtime. This makes it easy to identify patterns or recurring issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4cstct4m9os10tfwfi9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4cstct4m9os10tfwfi9m.png" alt="Uptime Kuma uptime charts and logs" width="800" height="668"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To ensure comprehensive monitoring, I set up Uptime Kuma on two different servers located in separate countries. Additionally, my client is planning to deploy it in their own environment. This means we'll soon have three independent monitoring sources. My idea is simple: if the client's application running in Azure starts experiencing errors, we can cross-check the logs from all three monitoring systems. If the website is down across all three, we can conclusively prove that the issue lies with the website itself, not the client's infrastructure.&lt;/p&gt;

&lt;p&gt;So far, Uptime Kuma has proven to be an excellent tool. Its extensive feature set, ease of use, and the fact that it's completely free make it a standout choice for anyone in need of a reliable uptime monitoring solution. The open-source community behind it deserves immense appreciation for offering such a powerful tool at no cost.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://gergovadasz.hu/free-uptime-monitoring-using-a-self-hosted-solution-2/" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>devops</category>
      <category>docker</category>
      <category>selfhosted</category>
    </item>
    <item>
      <title>Azure Route Server and NVA: Enforcing VNet Traffic - plus Terraform Code</title>
      <dc:creator>Gergo Vadasz</dc:creator>
      <pubDate>Thu, 09 Apr 2026 13:00:00 +0000</pubDate>
      <link>https://forem.com/gergovadasz/azure-route-server-and-nva-enforcing-vnet-traffic-plus-terraform-code-3j42</link>
      <guid>https://forem.com/gergovadasz/azure-route-server-and-nva-enforcing-vnet-traffic-plus-terraform-code-3j42</guid>
      <description>&lt;p&gt;I recently discovered some knowledge gaps regarding Azure Route Server (ARS) during a discussion with a cloud architect, so I decided to explore it in depth using my personal lab environment.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Lab Topology
&lt;/h2&gt;

&lt;p&gt;The test environment includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hub Virtual Network containing both an NVA (Network Virtual Appliance) and Azure Route Server&lt;/li&gt;
&lt;li&gt;Two Spoke VNets, each peered only with the hub&lt;/li&gt;
&lt;li&gt;BGP enabled between the NVA and Azure Route Server&lt;/li&gt;
&lt;li&gt;Spoke-to-spoke traffic routed through the NVA&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal: understand how Azure Route Server can dynamically manage routing and its advantages over static approaches.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Azure Route Server?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Limitations of Standard VNet Peering
&lt;/h3&gt;

&lt;p&gt;Default VNet peering provides full connectivity but lacks traffic filtering capabilities. Enterprises often require inspection or filtering of inter-VNet traffic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Traditional Solutions
&lt;/h3&gt;

&lt;p&gt;Traffic can be redirected through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure Firewall (Microsoft's native solution)&lt;/li&gt;
&lt;li&gt;Third-party NVAs: Cisco ASR/ASA, Palo Alto NGFW, FortiGate NGFW, F5 Load Balancer, or Linux VMs running iptables/UFW&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Manual implementation requires creating User-Defined Routes (UDRs), assigning next-hop values to the NVA, and updating routes when topology changes. This becomes complex and error-prone — especially in environments with dozens of VNets and subnets.&lt;/p&gt;
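
&lt;p&gt;To make that concrete, here's a minimal sketch of the manual approach with the Azure CLI. Resource names are hypothetical, while the NVA IP and spoke prefix match this lab; this is the per-subnet work that has to be repeated for every spoke and kept current as the topology changes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# UDR sending spoke-2-bound traffic through the NVA
az network route-table create -g lab-rg -n spoke1-rt
az network route-table route create -g lab-rg --route-table-name spoke1-rt \
  -n to-spoke2 --address-prefix 10.2.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4

# Associate it with the spoke subnet, then repeat for every spoke/subnet pair
az network vnet subnet update -g lab-rg --vnet-name spoke1-vnet -n default \
  --route-table spoke1-rt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;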

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  How Azure Route Server Solves This
&lt;/h2&gt;

&lt;p&gt;ARS simplifies routing through dynamic BGP-based route injection:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Eliminates manual UDR management&lt;/strong&gt; — routes learned dynamically from NVAs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatically updates routes&lt;/strong&gt; — no manual next-hop modifications needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scales efficiently&lt;/strong&gt; — supports multiple NVAs for redundancy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provides high availability&lt;/strong&gt; — deploys two redundant BGP peering IPs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once an NVA establishes BGP peering with ARS, the routes it advertises are injected automatically into Azure's effective route tables. Resources in the connected VNets then route through the NVA without any manual UDRs.&lt;/p&gt;
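
&lt;p&gt;For reference, a sketch of the ARS side with the Azure CLI. Resource names are hypothetical; the NVA BGP IP (&lt;code&gt;10.0.1.4&lt;/code&gt;) and AS number (65020) match the lab output below, and ARS itself always peers as AS 65515:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Route Server lives in a dedicated RouteServerSubnet
az network routeserver create -g lab-rg -n lab-ars \
  --hosted-subnet ROUTE_SERVER_SUBNET_ID --public-ip-address ars-pip

# BGP peering towards the NVA
az network routeserver peering create -g lab-rg --routeserver lab-ars \
  -n nva-peer --peer-ip 10.0.1.4 --peer-asn 65020
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;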

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Testing Spoke-to-Spoke Routing via the NVA
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Lab Configuration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Hub VNet hosting NVA and Azure Route Server&lt;/li&gt;
&lt;li&gt;Spoke VNets peered only with hub (no direct spoke-to-spoke peering)&lt;/li&gt;
&lt;li&gt;BGP session established between NVA and ARS&lt;/li&gt;
&lt;li&gt;Spoke-to-spoke traffic constrained through NVA only&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frr151nvixtdlh35uw331.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frr151nvixtdlh35uw331.jpg" alt="Azure Route Server spoke-to-spoke diagram" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Validation Results
&lt;/h3&gt;

&lt;p&gt;ARS received the routes advertised by the NVA, injected them into the spoke VNets, and spoke-to-spoke traffic was successfully routed via the NVA.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;BGP Neighborship Summary:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nva-test# sh ip bgp summary

IPv4 Unicast Summary (VRF default):
BGP router identifier 10.0.1.4, local AS number 65020 vrf-id 0
BGP table version 6
RIB entries 8, using 1472 bytes of memory
Peers 2, using 1446 KiB of memory

Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt Desc
10.0.3.4        4      65515       175       160        0    0    0 00:01:37            3        1 N/A
10.0.3.5        4      65515       173       160        0    0    0 00:01:37            3        1 N/A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;BGP Routes Table:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nva-test# sh ip bgp

   Network          Next Hop            Metric LocPrf Weight Path
   10.0.0.0/8       0.0.0.0                  0         32768 i
   10.0.0.0/16      10.0.3.5                               0 65515 i
                    10.0.3.4                               0 65515 i
   10.1.0.0/16      10.0.3.5                               0 65515 i
                    10.0.3.4                               0 65515 i
   10.2.0.0/16      10.0.3.5                               0 65515 i
                    10.0.3.4                               0 65515 i
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Spoke VNet Route Evidence
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9aswxii25uw0bsyzywi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9aswxii25uw0bsyzywi.png" alt="Spoke VNet effective routes" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Connectivity Validation
&lt;/h3&gt;

&lt;p&gt;Ping from spoke 1 to spoke 2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gergo@spoke1-vm:~$ ping 10.2.1.4
PING 10.2.1.4 (10.2.1.4) 56(84) bytes of data.
64 bytes from 10.2.1.4: icmp_seq=1 ttl=63 time=5.87 ms
64 bytes from 10.2.1.4: icmp_seq=2 ttl=63 time=2.48 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Traceroute confirms traffic goes through the NVA:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gergo@spoke1-vm:~$ traceroute 10.2.1.4
traceroute to 10.2.1.4 (10.2.1.4), 30 hops max, 60 byte packets
 1  10.0.1.4 (10.0.1.4)  1.528 ms  1.503 ms  1.490 ms  # NVA IP
 2  10.2.1.4 (10.2.1.4)  2.736 ms *  2.706 ms           # Spoke-02 server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Azure Route Server is a powerful tool for simplifying routing and enforcing network policies in cloud environments. By leveraging BGP-based dynamic routing, it eliminates manual UDR management, making it ideal for large-scale enterprise hub-and-spoke architectures in Azure.&lt;/p&gt;

&lt;p&gt;Have you used Azure Route Server in production? Share your thoughts or challenges in the comments!&lt;/p&gt;

&lt;p&gt;The complete Terraform code for this lab is available at &lt;a href="https://gergovadasz.hu/azure-route-server-and-nva-enforcing-vnet-traffic-plus-terraform-code/?utm_source=devto&amp;amp;utm_medium=crosspost" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://gergovadasz.hu" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;. I write hands-on cloud networking guides with production-ready Terraform code for AWS, Azure, and GCP. &lt;a href="https://gergovadasz.hu/#/portal/signup?utm_source=devto&amp;amp;utm_medium=crosspost" rel="noopener noreferrer"&gt;Subscribe for more&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>terraform</category>
      <category>networking</category>
      <category>bgp</category>
    </item>
    <item>
      <title>Connecting Your Hybrid Cloud with GCP Connectivity Center and Router Appliance - with Terraform</title>
      <dc:creator>Gergo Vadasz</dc:creator>
      <pubDate>Mon, 06 Apr 2026 20:28:01 +0000</pubDate>
      <link>https://forem.com/gergovadasz/connecting-your-hybrid-cloud-with-gcp-connectivity-center-and-router-appliance-with-terraform-iop</link>
      <guid>https://forem.com/gergovadasz/connecting-your-hybrid-cloud-with-gcp-connectivity-center-and-router-appliance-with-terraform-iop</guid>
      <description>&lt;p&gt;In hybrid cloud environments, connecting on-premises networks to Google Cloud in a scalable and manageable way is a common challenge. Google Cloud's Network Connectivity Center (NCC) implements a hub-and-spoke model for streamlined network management.&lt;/p&gt;

&lt;p&gt;In this post, I'll walk through deploying two VPCs — one serving as a network hub with a Router Appliance VM, and the other for testing connectivity with a standard VM instance.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  What is GCP Network Connectivity Center?
&lt;/h2&gt;

&lt;p&gt;Network Connectivity Center (NCC) is Google Cloud's centralized network connectivity management solution. It enables a hub-and-spoke topology, simplifying connections across VPCs, VPNs, interconnects, and SD-WAN gateways in hybrid and multi-cloud settings.&lt;/p&gt;

&lt;p&gt;Key benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduced complexity in peering and route configuration&lt;/li&gt;
&lt;li&gt;Enhanced visibility into hybrid network topology&lt;/li&gt;
&lt;li&gt;Scalable network expansion through spoke attachment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gslfcivlxzqgq21f2zj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gslfcivlxzqgq21f2zj.png" alt="GCP NCC Architecture" width="800" height="617"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The environment consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Connectivity Center hub&lt;/li&gt;
&lt;li&gt;Internal VPC spoke with test VM&lt;/li&gt;
&lt;li&gt;Router appliance VPC&lt;/li&gt;
&lt;li&gt;Router appliance VM attached as spoke&lt;/li&gt;
&lt;li&gt;Cloud Router for BGP route exchange&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Step-by-Step Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Create the Internal VPC
&lt;/h3&gt;

&lt;p&gt;Create an internal VPC with a &lt;code&gt;10.0.0.0/24&lt;/code&gt; subnet and deploy a Linux VM in it.&lt;/p&gt;
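
&lt;p&gt;If you want to follow along with gcloud instead of the console, a minimal sketch (names and region are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute networks create internal-vpc --subnet-mode=custom
gcloud compute networks subnets create internal-subnet \
  --network=internal-vpc --range=10.0.0.0/24 --region=europe-west3
gcloud compute instances create internal-vm \
  --zone=europe-west3-a --subnet=internal-subnet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;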

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblc7weux6oe572kcxdvl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblc7weux6oe572kcxdvl.png" alt="Internal VPC" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw5v4t0h5fdz1gbageq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw5v4t0h5fdz1gbageq1.png" alt="Internal VM" width="800" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  2. Create the Router Appliance VPC
&lt;/h3&gt;

&lt;p&gt;Create a router appliance VPC with a &lt;code&gt;10.1.0.0/24&lt;/code&gt; subnet and deploy a Linux VM that will serve as the router appliance.&lt;/p&gt;
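
&lt;p&gt;The same sketch for the router appliance side. The one detail that matters is enabling IP forwarding at creation time, since this VM will route traffic it didn't originate (names and region again hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute networks create ra-vpc --subnet-mode=custom
gcloud compute networks subnets create ra-subnet \
  --network=ra-vpc --range=10.1.0.0/24 --region=europe-west3
gcloud compute instances create nva-instance \
  --zone=europe-west3-a --subnet=ra-subnet --can-ip-forward
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;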

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwyh3f08y0b745tsr5l5v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwyh3f08y0b745tsr5l5v.png" alt="Router Appliance VPC" width="800" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frm05gputrwjw0hfpy8o0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frm05gputrwjw0hfpy8o0.png" alt="Router Appliance VM" width="800" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  3. Create the Cloud Router
&lt;/h3&gt;

&lt;p&gt;Create a GCP Cloud Router in the router appliance subnet with AS number 64512.&lt;/p&gt;
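
&lt;p&gt;The equivalent gcloud sketch (router name hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute routers create ncc-cr \
  --network=ra-vpc --region=europe-west3 --asn=64512
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;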

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14wmj40ynssz7x9qzl5q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14wmj40ynssz7x9qzl5q.png" alt="Cloud Router" width="800" height="731"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  4. Create the NCC Hub and Spokes
&lt;/h3&gt;

&lt;p&gt;Create an NCC Hub, then attach two spokes: one VPC spoke (internal VPC) and one Router Appliance spoke.&lt;/p&gt;
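
&lt;p&gt;A gcloud sketch of the hub and the VPC spoke (names hypothetical; the router appliance spoke and its BGP settings follow below):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud network-connectivity hubs create ncc-hub
gcloud network-connectivity spokes linked-vpc-network create internal-spoke \
  --hub=ncc-hub --vpc-network=internal-vpc --global
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;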

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxttbwej4pq4phiew7df.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxttbwej4pq4phiew7df.png" alt="NCC Hub" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffe50mt0cyzriasuvuins.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffe50mt0cyzriasuvuins.png" alt="NCC Spokes" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Configure the Router Appliance spoke with BGP using AS number 65001.&lt;/p&gt;
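
&lt;p&gt;On the Cloud Router side, this peering amounts to two redundant interfaces (&lt;code&gt;10.1.0.4&lt;/code&gt; and &lt;code&gt;10.1.0.5&lt;/code&gt;, which the FRR configuration in step 6 peers with) plus one BGP peer per interface pointing at the appliance. A hedged gcloud sketch, with &lt;code&gt;NVA_IP&lt;/code&gt; standing in for the appliance's address:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Two redundant BGP interfaces for the Cloud Router
gcloud compute routers add-interface ncc-cr --region=europe-west3 \
  --interface-name=if-0 --subnetwork=ra-subnet --ip-address=10.1.0.4
gcloud compute routers add-interface ncc-cr --region=europe-west3 \
  --interface-name=if-1 --subnetwork=ra-subnet --ip-address=10.1.0.5 \
  --redundant-interface=if-0

# One BGP peer per interface, towards the router appliance VM (AS 65001)
gcloud compute routers add-bgp-peer ncc-cr --region=europe-west3 \
  --peer-name=nva-peer-0 --interface=if-0 --peer-asn=65001 \
  --peer-ip-address=NVA_IP \
  --instance=nva-instance --instance-zone=europe-west3-a
gcloud compute routers add-bgp-peer ncc-cr --region=europe-west3 \
  --peer-name=nva-peer-1 --interface=if-1 --peer-asn=65001 \
  --peer-ip-address=NVA_IP \
  --instance=nva-instance --instance-zone=europe-west3-a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;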

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsj5wv1whbsnc2q96nbl1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsj5wv1whbsnc2q96nbl1.png" alt="BGP Configuration" width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  5. Configure the Router Appliance VM
&lt;/h3&gt;

&lt;p&gt;SSH into the Router Appliance VM. To demonstrate a hybrid connection, we need to simulate routing towards a remote location. In real-life scenarios, this could be a remote data center, branch office, SD-WAN solution, or even another cloud environment.&lt;/p&gt;

&lt;p&gt;Create a loopback interface with a &lt;code&gt;192.168.0.0/24&lt;/code&gt; IP to simulate the remote network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create loopback ip address&lt;/span&gt;
ip addr add 192.168.0.1/24 dev lo

&lt;span class="c"&gt;# Add entry to the route table&lt;/span&gt;
ip route add 192.168.0.0/24 dev lo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gergo@nva-instance:~$ ip a
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 192.168.0.1/24 scope global lo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  6. Install FRR and Configure BGP
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install FRR&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;frr

&lt;span class="c"&gt;# Enable BGP in FRR&lt;/span&gt;
&lt;span class="nb"&gt;sudo sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s/bgpd=no/bgpd=yes/'&lt;/span&gt; /etc/frr/daemons

&lt;span class="c"&gt;# Restart FRR to take effect&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart frr

&lt;span class="c"&gt;# Configure BGP&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;vtysh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'conf t'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'route-map ACCEPT-ALL permit 10'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'exit'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'router bgp 65001'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'neighbor 10.1.0.4 remote-as 64512'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'neighbor 10.1.0.4 description "GCP Peer 1"'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'neighbor 10.1.0.4 ebgp-multihop'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'neighbor 10.1.0.4 disable-connected-check'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'neighbor 10.1.0.5 remote-as 64512'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'neighbor 10.1.0.5 description "GCP 2"'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'neighbor 10.1.0.5 ebgp-multihop'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'neighbor 10.1.0.5 disable-connected-check'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'address-family ipv4 unicast'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'network 192.168.0.0/24'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'neighbor 10.1.0.4 soft-reconfiguration inbound'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'neighbor 10.1.0.4 route-map ACCEPT-ALL in'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'neighbor 10.1.0.4 route-map ACCEPT-ALL out'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'neighbor 10.1.0.5 soft-reconfiguration inbound'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'neighbor 10.1.0.5 route-map ACCEPT-ALL in'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'neighbor 10.1.0.5 route-map ACCEPT-ALL out'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'end'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'write'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Verifying Route Exchange
&lt;/h2&gt;

&lt;p&gt;After configuration, BGP neighborship with Cloud Router should be established:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nva-instance# show ip bgp summary

IPv4 Unicast Summary:
BGP router identifier 192.168.0.1, local AS number 65001 VRF default vrf-id 0
BGP table version 2
RIB entries 3, using 384 bytes of memory
Peers 2, using 47 KiB of memory

Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt Desc
10.1.0.4        4      64512       101       104        2    0    0 00:32:29            1        2 "GCP Cloud Router
10.1.0.5        4      64512       101       104        2    0    0 00:32:29            1        2 "GCP Cloud Router
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The BGP table shows the routes learned from the Cloud Router, plus the &lt;code&gt;192.168.0.0/24&lt;/code&gt; loopback advertised locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nva-instance# show ip bgp
BGP table version is 2, local router ID is 192.168.0.1, vrf id 0

     Network          Next Hop            Metric LocPrf Weight Path
 *&amp;gt;  10.1.0.0/24      10.1.0.1               100             0 64512 ?
 *=                   10.1.0.1               100             0 64512 ?
 *&amp;gt;  192.168.0.0/24   0.0.0.0                  0         32768 i
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Route Tables in Google Cloud Console
&lt;/h3&gt;

&lt;p&gt;The Router Appliance VPC routing table shows the NCC Hub advertising the internal VPC, and the Router Appliance advertising &lt;code&gt;192.168.0.0/24&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7oktzh31dhk7300ouie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7oktzh31dhk7300ouie.png" alt="Router Appliance VPC routes" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Internal VPC routing table shows &lt;code&gt;192.168.0.0/24&lt;/code&gt; being advertised with the NCC Hub as the next hop:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14ydirpbshu81p2lbw03.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14ydirpbshu81p2lbw03.png" alt="Internal VPC routes" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Connectivity Test
&lt;/h3&gt;

&lt;p&gt;With firewall rules permitting traffic, connectivity testing succeeds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gergo@internal-vm:~$ ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.823 ms
64 bytes from 192.168.0.1: icmp_seq=2 ttl=64 time=0.336 ms
64 bytes from 192.168.0.1: icmp_seq=3 ttl=64 time=0.268 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;NCC is a powerful tool that can simplify networking setup and operation in Google Cloud. From here, you could expand the demo with additional VPCs, more Cloud Routers, or failover scenario testing.&lt;/p&gt;

&lt;p&gt;The complete Terraform code provisions everything automatically — NCC hub, spokes, Router Appliance, Cloud Router, VMs, and even the BGP routing. No manual steps required in the Google Cloud Console.&lt;/p&gt;

&lt;p&gt;Check out the full Terraform code and guide at &lt;a href="https://gergovadasz.hu/connecting-your-hybrid-cloud-with-gcp-connectivity-center-and-router-appliance-with-terraform/?utm_source=devto&amp;amp;utm_medium=crosspost" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://gergovadasz.hu" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;. I write hands-on cloud networking guides with production-ready Terraform code for AWS, Azure, and GCP. &lt;a href="https://gergovadasz.hu/#/portal/signup?utm_source=devto&amp;amp;utm_medium=crosspost" rel="noopener noreferrer"&gt;Subscribe for more&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>gcp</category>
      <category>terraform</category>
      <category>bgp</category>
      <category>networking</category>
    </item>
    <item>
      <title>Hybrid DNS with GCP Network Connectivity Center and Enterprise IPAM</title>
      <dc:creator>Gergo Vadasz</dc:creator>
      <pubDate>Sat, 04 Apr 2026 19:29:34 +0000</pubDate>
      <link>https://forem.com/gergovadasz/hybrid-dns-with-gcp-network-connectivity-center-and-enterprise-ipam-3g3e</link>
      <guid>https://forem.com/gergovadasz/hybrid-dns-with-gcp-network-connectivity-center-and-enterprise-ipam-3g3e</guid>
      <description>&lt;p&gt;I recently worked through a hybrid DNS design for a Google Cloud environment with some interesting constraints that I think are worth writing up.&lt;/p&gt;

&lt;p&gt;The setup involved implementing a company-wide on-premises DNS system built on enterprise IPAM platforms (Infoblox, EfficientIP, or BlueCat) with two critical requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Security policies prohibit DNS queries originating from Google's public IP ranges&lt;/li&gt;
&lt;li&gt;The IPAM must remain the authoritative source for all DNS records, including GCP-hosted zones&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The solution involved deploying virtual machines within GCP to bridge these constraints.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  How DNS Works in Google Cloud
&lt;/h2&gt;

&lt;p&gt;By default, Compute Engine instances use the VPC-internal DNS resolver at &lt;code&gt;169.254.169.254&lt;/code&gt;, handled by Cloud DNS based on the VPC network configuration.&lt;/p&gt;
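
&lt;p&gt;You can see this resolver in action from any instance; for example, using a hostname from the traffic flows later in this post:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Answered by Cloud DNS in this VPC's context, not by a DNS server you manage
dig +short app-server.gcp.example.com @169.254.169.254
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;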

&lt;h3&gt;
  
  
  Cloud DNS Zone Types
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Private zones:&lt;/strong&gt; Cloud DNS hosts records directly and is authoritative&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Forwarding zones:&lt;/strong&gt; Cloud DNS forwards queries to target name servers; with private routing, source IPs originate from &lt;code&gt;35.199.192.0/19&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Peering zones:&lt;/strong&gt; Cloud DNS delegates resolution to another VPC network's DNS context via metadata-plane operations (no actual DNS packets exchanged between VPCs)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The 35.199.192.0/19 Source IP Challenge
&lt;/h3&gt;

&lt;p&gt;Cloud DNS forwarding zones support standard and private routing modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Standard forwarding:&lt;/strong&gt; Source IP depends on target (public IPs use Google ranges; RFC 1918 addresses use &lt;code&gt;35.199.192.0/19&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private forwarding:&lt;/strong&gt; Forces all queries through the VPC network using &lt;code&gt;35.199.192.0/19&lt;/code&gt;, regardless of target IP type&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Critical characteristics of this range:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Google Cloud automatically installs a non-removable route for &lt;code&gt;35.199.192.0/19&lt;/code&gt; in every VPC network. This route is not visible in route tables and cannot be modified or exchanged between VPCs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Enterprise firewalls typically block this range, since it is Google-owned public address space rather than private RFC 1918 space.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Why This Solution Is Necessary
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem 1:&lt;/strong&gt; On-premises firewalls reject Cloud DNS forwarding queries arriving from &lt;code&gt;35.199.192.0/19&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem 2:&lt;/strong&gt; Organizations require IPAM platforms to remain the single authoritative source for all DNS records across hybrid environments&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Solution:&lt;/strong&gt; Deploy an IPAM grid member (simulated as a BIND VM) within GCP that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is authoritative for GCP zones (e.g., &lt;code&gt;gcp.example.com&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Forwards on-premises zone queries to on-prem DNS servers using its private IP&lt;/li&gt;
&lt;li&gt;Receives all GCP workload DNS queries via Cloud DNS forwarding&lt;/li&gt;
&lt;li&gt;Receives GCP zone queries from on-premises via conditional forwarding&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Network Topology
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3c3duyyg0a9rmmkls08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3c3duyyg0a9rmmkls08.png" alt="GCP Cloud DNS Topology" width="800" height="904"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  NCC Hub-and-Spoke Setup
&lt;/h3&gt;

&lt;p&gt;The design uses three VPC networks connected through Google Cloud's Network Connectivity Center:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infra VPC Spoke:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hosts the DNS VM (IPAM grid member)&lt;/li&gt;
&lt;li&gt;Hosts the SD-WAN VM&lt;/li&gt;
&lt;li&gt;Central VPC where DNS forwarding zones reside&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;App VPC Spoke:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hosts application workloads&lt;/li&gt;
&lt;li&gt;VMs query DNS through Cloud DNS, which peers to Infra VPC&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;SD-WAN Router Appliance Spoke:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The SD-WAN VM (physically in Infra VPC) registered as a router appliance in NCC&lt;/li&gt;
&lt;li&gt;Runs FRR (Free Range Routing)&lt;/li&gt;
&lt;li&gt;Peers BGP with NCC Cloud Router (ASN 64515 ↔ 64514)&lt;/li&gt;
&lt;li&gt;Advertises on-premises subnet &lt;code&gt;192.168.1.0/24&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The SD-WAN VM includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NIC0 in Infra VPC (for BGP peering with NCC)&lt;/li&gt;
&lt;li&gt;NIC1 in On-Prem VPC (simulating WAN link to data center)&lt;/li&gt;
&lt;li&gt;IP forwarding enabled for inter-network routing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;NCC route exchange ensures all VPCs learn about each other's subnets, enabling the App VPC to reach on-premises resources via the SD-WAN VM.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud DNS Peering and Forwarding Configuration
&lt;/h3&gt;

&lt;p&gt;The design uses four Cloud DNS managed zones, none of which are private authoritative zones:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Peering Zones (in App VPC):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Zone&lt;/th&gt;
&lt;th&gt;DNS Name&lt;/th&gt;
&lt;th&gt;From&lt;/th&gt;
&lt;th&gt;To&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;gcp-dns-peering&lt;/td&gt;
&lt;td&gt;gcp.example.com&lt;/td&gt;
&lt;td&gt;App VPC&lt;/td&gt;
&lt;td&gt;Infra VPC&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;onprem-dns-peering&lt;/td&gt;
&lt;td&gt;on-prem.example.com&lt;/td&gt;
&lt;td&gt;App VPC&lt;/td&gt;
&lt;td&gt;Infra VPC&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Forwarding Zones (in Infra VPC):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Zone&lt;/th&gt;
&lt;th&gt;DNS Name&lt;/th&gt;
&lt;th&gt;Target&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;gcp-dns-forwarding&lt;/td&gt;
&lt;td&gt;gcp.example.com&lt;/td&gt;
&lt;td&gt;DNS VM (10.0.1.10)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;onprem-dns-forwarding&lt;/td&gt;
&lt;td&gt;on-prem.example.com&lt;/td&gt;
&lt;td&gt;DNS VM (10.0.1.10)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Both forwarding zones use &lt;code&gt;forwarding_path = "private"&lt;/code&gt; to ensure VPC routing rather than internet routing.&lt;/p&gt;
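
&lt;p&gt;As a sketch, here's one peering zone and one forwarding zone from the tables above expressed with gcloud. The network and project names are hypothetical, while the zone names, DNS names, and the DNS VM target (&lt;code&gt;10.0.1.10&lt;/code&gt;) come from this design:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Peering zone in the App VPC: delegate gcp.example.com to the Infra VPC's DNS context
gcloud dns managed-zones create gcp-dns-peering \
  --description="peer gcp zone to infra vpc" --dns-name="gcp.example.com." \
  --visibility=private --networks=app-vpc \
  --target-network=infra-vpc --target-project=MY_PROJECT

# Forwarding zone in the Infra VPC: private routing to the DNS VM
gcloud dns managed-zones create gcp-dns-forwarding \
  --description="forward gcp zone to ipam" --dns-name="gcp.example.com." \
  --visibility=private --networks=infra-vpc \
  --private-forwarding-targets=10.0.1.10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;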

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Why Peering Instead of Cross-VPC Forwarding?
&lt;/h3&gt;

&lt;p&gt;Cloud DNS forwarding cannot work across VPCs because of the &lt;code&gt;35.199.192.0/19&lt;/code&gt; return path behavior.&lt;/p&gt;

&lt;p&gt;When Cloud DNS uses a forwarding zone with private routing, queries originate from &lt;code&gt;35.199.192.0/19&lt;/code&gt;. Every VPC contains a special, non-removable route for this range pointing to that VPC's own Cloud DNS context. This route is not exchanged between VPCs through any mechanism (NCC, VPC peering, etc.).&lt;/p&gt;

&lt;p&gt;If the App VPC forwards a query to the DNS VM in the Infra VPC:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The query arrives with a source IP from &lt;code&gt;35.199.192.0/19&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The DNS VM responds to that source IP&lt;/li&gt;
&lt;li&gt;The Infra VPC's special route for &lt;code&gt;35.199.192.0/19&lt;/code&gt; points to its own Cloud DNS, not back to the App VPC&lt;/li&gt;
&lt;li&gt;The response is silently dropped&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;DNS peering solves this entirely by evaluating queries in the target VPC's DNS context without sending packets between VPCs, eliminating cross-VPC return path issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Referenced Documentation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/vpc/docs/routes#cloud-dns" rel="noopener noreferrer"&gt;VPC Routes - Cloud DNS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/dns/docs/zones/forwarding-zones" rel="noopener noreferrer"&gt;DNS Forwarding Zones&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Traffic Flows
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Flow 1: GCP Workload Resolves a GCP Hostname
&lt;/h3&gt;

&lt;p&gt;Query: &lt;code&gt;app-server.gcp.example.com&lt;/code&gt; from App VM (10.0.2.10)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;App VM (10.0.2.10)
  ↓ dig app-server.gcp.example.com
Cloud DNS (169.254.169.254) -- App VPC context
  ↓ Peering zone: gcp.example.com → Infra VPC (metadata-plane)
Cloud DNS -- Infra VPC context
  ↓ Forwarding zone: gcp.example.com → 10.0.1.10, Source: 35.199.192.0/19
DNS VM / IPAM (10.0.1.10)
  ↓ Authoritative zone: gcp.example.com
  ↓ app-server = 10.0.2.10
Response: 10.0.2.10
  ↓ Returns to 35.199.192.0/19 (Infra VPC route, correct context)
Cloud DNS returns answer to App VM
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The query never leaves GCP: Cloud DNS peers into the Infra VPC's context, then forwards to the DNS VM.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Flow 2: GCP Workload Resolves an On-Premises Hostname
&lt;/h3&gt;

&lt;p&gt;Query: &lt;code&gt;app1.on-prem.example.com&lt;/code&gt; from App VM (10.0.2.10)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;App VM (10.0.2.10)
  ↓ dig app1.on-prem.example.com
Cloud DNS (169.254.169.254) -- App VPC context
  ↓ Peering zone: on-prem.example.com → Infra VPC
Cloud DNS -- Infra VPC context
  ↓ Forwarding zone: on-prem.example.com → 10.0.1.10, Source: 35.199.192.0/19
DNS VM / IPAM (10.0.1.10)
  ↓ Forward zone: on-prem.example.com → 192.168.1.10
  ↓ Source IP: 10.0.1.10 (private IP, on-prem firewall allows)
  ↓ Path: via SD-WAN VM (NCC router appliance, BGP-learned route)
On-Prem DNS (192.168.1.10)
  ↓ Authoritative zone: on-prem.example.com
  ↓ app1 = 192.168.1.50
Response travels back the same path
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The on-premises DNS server sees the query from &lt;code&gt;10.0.1.10&lt;/code&gt;, a private RFC1918 address, never from &lt;code&gt;35.199.192.0/19&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Flow 3: On-Premises Resolves a GCP Hostname
&lt;/h3&gt;

&lt;p&gt;Query: &lt;code&gt;app-server.gcp.example.com&lt;/code&gt; from on-premises client&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;On-Prem Client
  ↓ dig app-server.gcp.example.com
On-Prem DNS (192.168.1.10)
  ↓ Conditional forwarder: gcp.example.com → 10.0.1.10
  ↓ Path: via SD-WAN VM (static route in on-prem VPC)
DNS VM / IPAM (10.0.1.10)
  ↓ Authoritative zone: gcp.example.com
  ↓ app-server = 10.0.2.10
Response: 10.0.2.10
  ↓ Returns to on-prem DNS via SD-WAN VM
On-prem client receives answer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The DNS VM resolves this locally as it is authoritative for &lt;code&gt;gcp.example.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Traffic Flow Summary
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;From&lt;/th&gt;
&lt;th&gt;To&lt;/th&gt;
&lt;th&gt;Zone&lt;/th&gt;
&lt;th&gt;Path&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;App VM&lt;/td&gt;
&lt;td&gt;GCP hostname&lt;/td&gt;
&lt;td&gt;gcp.example.com&lt;/td&gt;
&lt;td&gt;Cloud DNS peer → forward → DNS VM (authoritative)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;App VM&lt;/td&gt;
&lt;td&gt;On-prem hostname&lt;/td&gt;
&lt;td&gt;on-prem.example.com&lt;/td&gt;
&lt;td&gt;Cloud DNS peer → forward → DNS VM → on-prem DNS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;On-prem&lt;/td&gt;
&lt;td&gt;GCP hostname&lt;/td&gt;
&lt;td&gt;gcp.example.com&lt;/td&gt;
&lt;td&gt;On-prem DNS → DNS VM (authoritative)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;On-prem&lt;/td&gt;
&lt;td&gt;On-prem hostname&lt;/td&gt;
&lt;td&gt;on-prem.example.com&lt;/td&gt;
&lt;td&gt;On-prem DNS (authoritative, local)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Key Design Decisions
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No Cloud DNS private zones:&lt;/strong&gt; The IPAM remains the single authoritative source; Cloud DNS only peers and forwards&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DNS peering is mandatory for cross-VPC resolution:&lt;/strong&gt; The &lt;code&gt;35.199.192.0/19&lt;/code&gt; return path constraint makes forwarding zones across VPCs non-functional&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DNS VM uses private IP for on-premises queries:&lt;/strong&gt; Queries originate from &lt;code&gt;10.0.1.10&lt;/code&gt; rather than &lt;code&gt;35.199.192.0/19&lt;/code&gt;, allowing enterprise firewalls to accept them into trusted zones&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;NCC provides hybrid routing:&lt;/strong&gt; The SD-WAN VM advertises on-premises routes via BGP to NCC, which propagates them to all spoke VPCs&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;The complete Terraform code for this setup is available on my blog — it provisions the entire environment including the NCC hub, DNS VM with BIND, SD-WAN VM with FRR, and all Cloud DNS zones. A single &lt;code&gt;terraform apply&lt;/code&gt; gets you a working lab.&lt;/p&gt;

&lt;p&gt;Check it out at &lt;a href="https://gergovadasz.hu/hybrid-dns-with-gcp-network-connectivity-center-and-enterprise-ipam/" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://gergovadasz.hu" rel="noopener noreferrer"&gt;gergovadasz.hu&lt;/a&gt;. I write hands-on cloud networking guides with production-ready Terraform code for AWS, Azure, and GCP. &lt;a href="https://gergovadasz.hu/#/portal/signup" rel="noopener noreferrer"&gt;Subscribe for more&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>gcp</category>
      <category>terraform</category>
      <category>dns</category>
      <category>networking</category>
    </item>
  </channel>
</rss>
