<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kentaro Matsumoto</title>
    <description>The latest articles on Forem by Kentaro Matsumoto (@mksamba).</description>
    <link>https://forem.com/mksamba</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3194054%2F90c5c82d-e883-4cf3-a633-047398931cea.jpg</url>
      <title>Forem: Kentaro Matsumoto</title>
      <link>https://forem.com/mksamba</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mksamba"/>
    <language>en</language>
    <item>
      <title>Trying Out AWS VPC Encryption Control</title>
      <dc:creator>Kentaro Matsumoto</dc:creator>
      <pubDate>Sun, 22 Mar 2026 06:08:28 +0000</pubDate>
      <link>https://forem.com/aws-builders/trying-out-aws-vpc-encryption-control-2bfg</link>
      <guid>https://forem.com/aws-builders/trying-out-aws-vpc-encryption-control-2bfg</guid>
      <description>&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;While reviewing recent AWS feature updates, I came across an article about "VPC Encryption Control." It was released in November 2025 and is set to become a paid feature starting March 2026.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I was curious about how exactly it "enforces" encryption, so I decided to test its behavior myself.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. What is VPC Encryption Control? (My Understanding)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Initially, I wondered: "Does this mean all traffic within the VPC must be encrypted? Will it inspect packets to detect whether I'm using SSH/HTTPS (allowed) versus Telnet/HTTP (not allowed)?"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As it turns out, that’s not quite how it works. Instead, it monitors or enforces whether the resources within the VPC (e.g., EC2 instances, RDS) run on Nitro-based hardware that supports transparent encryption at the AWS infrastructure layer.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3. What I Did
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Created a VPC with VPC Encryption Control enabled (Monitor mode).&lt;/li&gt;
&lt;li&gt;Set up VPC Flow Logs with specific fields required to identify whether traffic is encrypted.&lt;/li&gt;
&lt;li&gt;Verified how the following traffic patterns are judged by Encryption Control:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;#&lt;/th&gt;
&lt;th&gt;SRC&lt;/th&gt;
&lt;th&gt;DST&lt;/th&gt;
&lt;th&gt;Protocol&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Local PC&lt;/td&gt;
&lt;td&gt;nginx(t3.micro)&lt;/td&gt;
&lt;td&gt;http&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Local PC&lt;/td&gt;
&lt;td&gt;nginx(m7i.large)&lt;/td&gt;
&lt;td&gt;http&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Local PC&lt;/td&gt;
&lt;td&gt;nginx(t3.micro)&lt;/td&gt;
&lt;td&gt;https&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Local PC&lt;/td&gt;
&lt;td&gt;nginx(m7i.large)&lt;/td&gt;
&lt;td&gt;https&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;in-VPC curl client(t3.micro)&lt;/td&gt;
&lt;td&gt;nginx(t3.micro)&lt;/td&gt;
&lt;td&gt;http&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;in-VPC curl client(m7i.large)&lt;/td&gt;
&lt;td&gt;nginx(m7i.large)&lt;/td&gt;
&lt;td&gt;http&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;in-VPC curl client(t3.micro)&lt;/td&gt;
&lt;td&gt;nginx(t3.micro)&lt;/td&gt;
&lt;td&gt;https&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;in-VPC curl client(m7i.large)&lt;/td&gt;
&lt;td&gt;nginx(m7i.large)&lt;/td&gt;
&lt;td&gt;https&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Switched the VPC Encryption Control mode to Enforce mode.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  4. Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuayzvrg18cymu3mb5rc6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuayzvrg18cymu3mb5rc6.png" alt="image.png" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Procedure
&lt;/h2&gt;

&lt;h3&gt;
  
  
  5.1 Creating a VPC with Encryption Control
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a VPC with VPC Encryption Control enabled (start with Monitor mode). This can be specified simply during the VPC creation process.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbbkhjdfehohflpqck38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbbkhjdfehohflpqck38.png" alt="image.png" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm that the created VPC has an Encryption Control ID and is set to Monitor mode.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rx1vthpl89rbw36w2f8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rx1vthpl89rbw36w2f8.png" alt="image.png" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5.2 Creating Test Instances
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Launch two instances with nginx installed (t3.micro and m7i.large).&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure nginx with a server certificate to accept HTTPS (Reference: &lt;a href="https://blog.serverworks.co.jp/acm-exported-certificate-nginx-ssl-auto-renewal-on-ec2" rel="noopener noreferrer"&gt;"Automatic SSL Certificate Renewal on EC2 using ACM Exported Certificates(in Japanese)"&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Launch two instances for the curl client (t3.micro and m7i.large).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Note: Not all Nitro-based instances support automatic encryption. There is a specific list of supported instance types. For example, while t3 is Nitro-based, it is not supported for this feature.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
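&lt;p&gt;One way to check support up front is the EC2 describe-instance-types API, which exposes an EncryptionInTransitSupported flag under NetworkInfo. A minimal sketch, assuming AWS credentials and a default region are already configured:&lt;/p&gt;

```shell
# List whether each instance type supports encryption in transit at the Nitro layer
aws ec2 describe-instance-types \
  --instance-types t3.micro m7i.large \
  --query 'InstanceTypes[].[InstanceType, NetworkInfo.EncryptionInTransitSupported]' \
  --output table
```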

&lt;h3&gt;
  
  
  5.3 Creating VPC Flow Logs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;To determine if the traffic is judged as "encrypted," configure VPC Flow Logs using a custom format that includes the ${encryption-status} field.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfe768bs9ysqsttgcvlv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfe768bs9ysqsttgcvlv.png" alt="image.png" width="800" height="637"&gt;&lt;/a&gt;&lt;/p&gt;
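&lt;p&gt;For reference, the same flow logs can also be created from the CLI. A sketch with placeholder values (the VPC ID, log group name, and IAM role ARN below are hypothetical):&lt;/p&gt;

```shell
# Create VPC Flow Logs with a custom format that includes the encryption-status field
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-xxxxxxxxxxxx \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name my-vpc-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/my-flow-logs-role \
  --log-format '${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${encryption-status}'
```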

&lt;h3&gt;
  
  
  5.4 Test Traffic and Results
&lt;/h3&gt;

&lt;p&gt;Run curl from the local PC and the in-VPC instances to the nginx servers.&lt;br&gt;
Example commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; curl http://x.x.x.x
&amp;gt; curl -k https://x.x.x.x   # -k skips certificate validation when accessing by IP address
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Results for the encryption-status field:

&lt;ul&gt;
&lt;li&gt;0: Not encrypted at the infrastructure layer.&lt;/li&gt;
&lt;li&gt;1: Encrypted by the Nitro hardware.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;#&lt;/th&gt;
&lt;th&gt;SRC&lt;/th&gt;
&lt;th&gt;DST&lt;/th&gt;
&lt;th&gt;Protocol&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Local PC&lt;/td&gt;
&lt;td&gt;nginx(t3.micro)&lt;/td&gt;
&lt;td&gt;http&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Local PC&lt;/td&gt;
&lt;td&gt;nginx(m7i.large)&lt;/td&gt;
&lt;td&gt;http&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Local PC&lt;/td&gt;
&lt;td&gt;nginx(t3.micro)&lt;/td&gt;
&lt;td&gt;https&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Local PC&lt;/td&gt;
&lt;td&gt;nginx(m7i.large)&lt;/td&gt;
&lt;td&gt;https&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;in-VPC curl client(t3.micro)&lt;/td&gt;
&lt;td&gt;nginx(t3.micro)&lt;/td&gt;
&lt;td&gt;http&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;in-VPC curl client(m7i.large)&lt;/td&gt;
&lt;td&gt;nginx(m7i.large)&lt;/td&gt;
&lt;td&gt;http&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;in-VPC curl client(t3.micro)&lt;/td&gt;
&lt;td&gt;nginx(t3.micro)&lt;/td&gt;
&lt;td&gt;https&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;in-VPC curl client(m7i.large)&lt;/td&gt;
&lt;td&gt;nginx(m7i.large)&lt;/td&gt;
&lt;td&gt;https&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Key Takeaway:

&lt;ul&gt;
&lt;li&gt;Only traffic between two supported Nitro instances is flagged as 1. &lt;/li&gt;
&lt;li&gt;Even if you use HTTPS, if the underlying infrastructure doesn't support the Nitro-level encryption, the VPC Encryption Control check does not consider it "encrypted."&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
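&lt;p&gt;With a custom format like the one above, the per-flow judgment can be tallied straight from the log lines. A minimal sketch, using hypothetical sample records in place of real CloudWatch Logs output (the last field is encryption-status):&lt;/p&gt;

```shell
# Tally flows by the encryption-status field (0 = not infra-encrypted, 1 = Nitro-encrypted)
summary=$(printf '%s\n' \
  "10.0.10.8 10.0.11.191 51678 80 6 0" \
  "10.0.11.191 10.0.10.8 80 51678 6 1" \
  "10.0.10.8 10.0.11.191 51680 443 6 1" | \
  awk '{ count[$NF]++ } END { for (s in count) printf "encryption-status=%s: %d\n", s, count[s] }' | sort)
echo "$summary"
```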

&lt;h3&gt;
  
  
  5.5 Switching to Enforce Mode
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To switch to Enforce mode, you must first address any non-compliant resources. These include the Internet Gateway and any ENIs attached to unsupported instance types (such as the t3.micro instances).&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp719uj5gde0rqjccsvon.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp719uj5gde0rqjccsvon.png" alt="image.png" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By upgrading instances to m7i.large and setting exclusion rules for the Internet Gateway, you can successfully enable Enforce mode.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0aumxme88a8ohibddtvf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0aumxme88a8ohibddtvf.png" alt="image.png" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  6. Reference Articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Official AWS Blog: Provides a solid overview of the feature (in Japanese).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/jp/blogs/news/introducing-vpc-encryption-controls-enforce-encryption-in-transit-within-and-across-vpcs-in-a-region/" rel="noopener noreferrer"&gt;https://aws.amazon.com/jp/blogs/news/introducing-vpc-encryption-controls-enforce-encryption-in-transit-within-and-across-vpcs-in-a-region/&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deep Dive Verification: An article exploring what happens when you switch from Monitor to Enforce mode (in Japanese).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://persol-serverworks.co.jp/blog/vpc/vpcvpc.html" rel="noopener noreferrer"&gt;https://persol-serverworks.co.jp/blog/vpc/vpcvpc.html&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Final Thoughts
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;While I don't see myself using this for my current systems anytime soon, I was impressed by the Nitro system's ability to transparently encrypt all inter-instance traffic. It's a powerful tool for high-compliance environments.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>vpc</category>
    </item>
    <item>
      <title>Trying out Amazon CloudWatch Network Flow Monitor in EKS</title>
      <dc:creator>Kentaro Matsumoto</dc:creator>
      <pubDate>Tue, 25 Nov 2025 14:20:25 +0000</pubDate>
      <link>https://forem.com/aws-builders/trying-out-amazon-cloudwatch-network-flow-monitor-in-eks-7pc</link>
      <guid>https://forem.com/aws-builders/trying-out-amazon-cloudwatch-network-flow-monitor-in-eks-7pc</guid>
      <description>&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;Amazon CloudWatch Network Flow Monitor, a service for monitoring the network performance between resources within AWS, was released in December 2024.&lt;br&gt;
In this post, we walk through the setup procedure and usability of the EKS variant, where the agent runs as a DaemonSet.&lt;/p&gt;
&lt;h2&gt;
  
  
  2. What We Did
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Prepare a barebones EKS cluster.&lt;/li&gt;
&lt;li&gt;Add the Network Flow Monitor (for EKS) add-on to the EKS cluster.&lt;/li&gt;
&lt;li&gt;Configure the Network Flow Monitor "monitors".&lt;/li&gt;
&lt;li&gt;Access an Nginx pod launched inside EKS from an external client and verify that Network Flow Monitor metrics are collected.&lt;/li&gt;
&lt;li&gt;Introduce packet loss to one of the Nginx pods and confirm that the Network Flow Monitor metrics change.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  3. Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmt2lghrrruvyakc37kjh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmt2lghrrruvyakc37kjh.jpg" alt=" " width="800" height="531"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  4. Configuration Steps
&lt;/h2&gt;
&lt;h3&gt;
  
  
  4.1 Preparing the Environment
&lt;/h3&gt;

&lt;p&gt;We build a VPC and an EKS cluster for this evaluation (detailed steps omitted), using the Management Console with mostly default settings.&lt;/p&gt;
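&lt;p&gt;For readers who prefer the CLI over the console, a roughly equivalent cluster can be sketched with eksctl (the cluster name here is hypothetical):&lt;/p&gt;

```shell
# Create a small evaluation cluster: Kubernetes 1.33, two t3.medium worker nodes
eksctl create cluster \
  --name nfm-demo \
  --region ap-northeast-3 \
  --version 1.33 \
  --node-type t3.medium \
  --nodes 2
```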

&lt;ul&gt;
&lt;li&gt;The Kubernetes version is 1.33.&lt;/li&gt;
&lt;li&gt;We prepared two t3.medium worker nodes as the node group.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The add-on "Amazon EKS Pod Identity Agent" was added (necessary for Network Flow Monitor; automatically added by default).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The pod status after environment construction is as follows:
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-10-0-0-60 mysample]$ kubectl get pod -A -o wide
NAMESPACE      NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE                                             NOMINATED NODE   READINESS GATES
external-dns   external-dns-754cf78755-ks8nc     1/1     Running   0          19h   10.0.10.131   ip-10-0-10-159.ap-northeast-3.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    aws-node-bwshx                    2/2     Running   0          19h   10.0.11.7     ip-10-0-11-7.ap-northeast-3.compute.internal     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    aws-node-ckvdl                    2/2     Running   0          19h   10.0.10.159   ip-10-0-10-159.ap-northeast-3.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    coredns-bdbfddcf5-54sbq           1/1     Running   0          19h   10.0.10.13    ip-10-0-10-159.ap-northeast-3.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    coredns-bdbfddcf5-zvdr9           1/1     Running   0          19h   10.0.10.115   ip-10-0-10-159.ap-northeast-3.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    eks-node-monitoring-agent-ltzc9   1/1     Running   0          19h   10.0.10.159   ip-10-0-10-159.ap-northeast-3.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    eks-node-monitoring-agent-nckx7   1/1     Running   0          19h   10.0.11.7     ip-10-0-11-7.ap-northeast-3.compute.internal     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    eks-pod-identity-agent-jz6kc      1/1     Running   0          19h   10.0.11.7     ip-10-0-11-7.ap-northeast-3.compute.internal     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    eks-pod-identity-agent-khq8q      1/1     Running   0          19h   10.0.10.159   ip-10-0-10-159.ap-northeast-3.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    kube-proxy-569w9                  1/1     Running   0          19h   10.0.10.159   ip-10-0-10-159.ap-northeast-3.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    kube-proxy-cm94v                  1/1     Running   0          19h   10.0.11.7     ip-10-0-11-7.ap-northeast-3.compute.internal     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    metrics-server-fdccf8449-2b2sj    1/1     Running   0          19h   10.0.10.110   ip-10-0-10-159.ap-northeast-3.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    metrics-server-fdccf8449-5584h    1/1     Running   0          19h   10.0.10.55    ip-10-0-10-159.ap-northeast-3.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  4.2 Adding the Network Flow Monitor Add-on
&lt;/h3&gt;

&lt;p&gt;We add the Network Flow Monitor add-on using the Management Console. The official procedure is &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-NetworkFlowMonitor-agents-kubernetes-eks.html" rel="noopener noreferrer"&gt;"Install the EKS AWS Network Flow Monitor Agent add-on."&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From the Add-ons section of the constructed EKS cluster, select "Get more add-ons."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Few9d77dhwnksedehus7e.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Few9d77dhwnksedehus7e.jpg" alt=" " width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the AWS Network Flow Monitor Agent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5dypti8xlc9lvp5u0u8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5dypti8xlc9lvp5u0u8.jpg" alt=" " width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7yp3bk009knd13wlc9h7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7yp3bk009knd13wlc9h7.jpg" alt=" " width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create the necessary IAM role to be attached to the Network Flow Monitor Agent pods by selecting "Create Recommended Role."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4teppnh6aeh01rihpnuj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4teppnh6aeh01rihpnuj.jpg" alt=" " width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create the IAM role with the default settings and configure it as the role to be attached to the pods.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fue0pcltx8we6wyx0v6yc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fue0pcltx8we6wyx0v6yc.jpg" alt=" " width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1lqjayhdkybbxyowgdh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1lqjayhdkybbxyowgdh.jpg" alt=" " width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After being added as an add-on, confirm that it is running as a DaemonSet:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-10-0-0-60 mysample]$ kubectl get daemonsets -A
NAMESPACE                     NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
amazon-network-flow-monitor   aws-network-flow-monitor-agent   2         2         2       2            2           kubernetes.io/os=linux   75s
kube-system                   aws-node                         2         2         2       2            2           &amp;lt;none&amp;gt;                   19h
kube-system                   dcgm-server                      0         0         0       0            0           kubernetes.io/os=linux   19h
kube-system                   eks-node-monitoring-agent        2         2         2       2            2           kubernetes.io/os=linux   19h
kube-system                   eks-pod-identity-agent           2         2         2       2            2           &amp;lt;none&amp;gt;                   19h
kube-system                   kube-proxy                       2         2         2       2            2           &amp;lt;none&amp;gt;                   19h
[ec2-user@ip-10-0-0-60 mysample]$ kubectl get pod -A -o wide
NAMESPACE                     NAME                                   READY   STATUS    RESTARTS   AGE   IP            NODE                                             NOMINATED NODE   READINESS GATES
amazon-network-flow-monitor   aws-network-flow-monitor-agent-7v24v   1/1     Running   0          64s   10.0.11.7     ip-10-0-11-7.ap-northeast-3.compute.internal     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
amazon-network-flow-monitor   aws-network-flow-monitor-agent-rpqr6   1/1     Running   0          64s   10.0.10.159   ip-10-0-10-159.ap-northeast-3.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
... (other pods)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  4.3 Configuring the Network Flow Monitor "Monitors"
&lt;/h3&gt;

&lt;p&gt;We create three "monitors": for the entire VPC, for the AZ-3a side of the VPC, and for the AZ-3b side of the VPC.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From CloudWatch - Flow Monitors, select "Create Monitor."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5veb48mdmtt214c9jdxj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5veb48mdmtt214c9jdxj.jpg" alt=" " width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a monitor targeting the entire EKS VPC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqf7tifumpuy7wk2dglo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqf7tifumpuy7wk2dglo.jpg" alt=" " width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a monitor selecting the AZ-3a subnet of the EKS VPC (and similarly for AZ-3b).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5r8k0auf7gtgkj6xzdqf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5r8k0auf7gtgkj6xzdqf.jpg" alt=" " width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  4.4 Preparing Nginx
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Preparing Nginx with tc
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;To introduce packet loss to a pod later, we prepare an Nginx container image that can use the tc (Traffic Control) command and register it in ECR (steps omitted). The Dockerfile is as follows:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use the official Nginx image as the base image
FROM nginx:alpine

# Install the iproute2 package, which includes the tc command
RUN apk update &amp;amp;&amp;amp; apk add iproute2

# Start nginx when the container launches
CMD ["nginx", "-g", "daemon off;"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
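&lt;p&gt;The omitted build-and-push steps typically look like the following; the account ID is a placeholder, and the repository name mirrors the one referenced in the deployment manifest:&lt;/p&gt;

```shell
# Build the image and push it to ECR (account ID below is a placeholder)
REGISTRY=123456789012.dkr.ecr.ap-northeast-3.amazonaws.com
aws ecr get-login-password --region ap-northeast-3 | \
  docker login --username AWS --password-stdin "$REGISTRY"
docker build -t mynginx-with-tc .
docker tag mynginx-with-tc:latest "$REGISTRY/mksamba/mynginx-with-tc-repo:latest"
docker push "$REGISTRY/mksamba/mynginx-with-tc-repo:latest"
```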

&lt;h4&gt;
  
  
  Deploying Nginx
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;We deploy Nginx so that one pod runs on each of the two worker nodes and expose the HTTP port externally. The NET_ADMIN capability is required to run the tc command, and podAntiAffinity ensures the two pods are scheduled on different worker nodes.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx-with-tc-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginx-with-tc
  template:
    metadata:
      labels:
        app: mynginx-with-tc
    spec:
      containers:
      - name: mynginx-with-tc-container
        image: xxxxxxxxxxxx.dkr.ecr.ap-northeast-3.amazonaws.com/mksamba/mynginx-with-tc-repo:latest
        ports:
        - containerPort: 80
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - mynginx-with-tc
            topologyKey: "kubernetes.io/hostname"
---
apiVersion: v1
kind: Service
metadata:
  name: mynginx-with-tc-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing

spec:
  type: LoadBalancer
  selector:
    app: mynginx-with-tc
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Confirm that pods are running on each worker node:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-10-0-0-60 mysample]$ kubectl apply -f mynginx-with-tc.yaml 
deployment.apps/mynginx-with-tc-deployment created
service/mynginx-with-tc-service created

[ec2-user@ip-10-0-0-60 mysample]$ kubectl get pod -A -o wide | grep nginx
default                       mynginx-with-tc-deployment-68cb4fff79-qjw9q   1/1     Running   0          71s   10.0.10.8     ip-10-0-10-159.ap-northeast-3.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default                       mynginx-with-tc-deployment-68cb4fff79-tfc8s   1/1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  4.5 Accessing Nginx from an External Client
&lt;/h3&gt;

&lt;p&gt;We access Nginx via the CLB (Classic Load Balancer) about 10,000 times using curl from the Internet. Traffic is distributed to the two pods by the CLB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Specify the target URL
URL="http://xxxxxxxxxx.ap-northeast-3.elb.amazonaws.com"

# Loop
for ((i=1; i&amp;lt;=10000; i++))
do
  echo "Request #$i"
  curl -o /dev/null -s -w "%{http_code}\n" "$URL"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
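&lt;p&gt;When hunting for loss-induced failures, it helps to tally the status codes rather than read 10,000 individual lines (curl reports 000 when no HTTP response arrives at all). A minimal sketch, with hypothetical sample codes standing in for the loop's captured output:&lt;/p&gt;

```shell
# Summarize status codes; the sample data stands in for the curl loop's output
codes="200
200
000
200"
tally=$(printf '%s\n' "$codes" | sort | uniq -c | awk '{ printf "%s x%d\n", $2, $1 }')
echo "$tally"
```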



&lt;h3&gt;
  
  
  4.6 Introducing Packet Loss to Nginx
&lt;/h3&gt;

&lt;p&gt;We introduce a 3% packet loss to only one pod (the AZ-3b side).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Identify the pod name to configure
[ec2-user@ip-10-0-0-60 ~]$ kubectl get pod -A -o wide |grep mynginx
default                       mynginx-with-tc-deployment-68cb4fff79-qjw9q   1/1     Running   0          27m   10.0.10.8     ip-10-0-10-159.ap-northeast-3.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default                       mynginx-with-tc-deployment-68cb4fff79-tfc8s   1/1     Running   0          27m   10.0.11.191   ip-10-0-11-7.ap-northeast-3.compute.internal     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# Introduce packet loss to the AZ-3b side pod
[ec2-user@ip-10-0-0-60 ~]$ kubectl exec -it mynginx-with-tc-deployment-68cb4fff79-tfc8s -- tc qdisc add dev eth0 root netem loss 3%

# (Reference) Command to revert the packet loss setting
[ec2-user@ip-10-0-0-60 ~]$ kubectl exec -it mynginx-with-tc-deployment-68cb4fff79-tfc8s -- tc qdisc del dev eth0 root
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4.7 Checking the Network Flow Monitor "Monitors"
&lt;/h3&gt;

&lt;p&gt;We check the monitor values during normal operation and after packet loss is introduced on one pod.&lt;br&gt;
Traffic around 21:20 is normal operation; traffic around 21:40 has the packet loss introduced.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC-wide Monitor: Around 21:40, retransmissions are occurring.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwlddxewur9rk2ibxwvh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwlddxewur9rk2ibxwvh.jpg" alt=" " width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1i05oxzh9epqc8ncuie.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1i05oxzh9epqc8ncuie.jpg" alt=" " width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqk9xc79zt613ij9o5b55.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqk9xc79zt613ij9o5b55.jpg" alt=" " width="800" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AZ-3a Monitor: Traffic is half of the VPC-wide total, but since the pod is normal, there are no retransmissions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfvvbeahfarnl3tv80nw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfvvbeahfarnl3tv80nw.jpg" alt=" " width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhulci13mmiuj1fmp5sql.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhulci13mmiuj1fmp5sql.jpg" alt=" " width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsle2orpt6ponjqrkruv9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsle2orpt6ponjqrkruv9.jpg" alt=" " width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AZ-3b Monitor: Traffic is half of the VPC-wide total, and around 21:40 a large number of retransmissions occurs due to the packet loss introduced in the pod.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl30f9n3ct5wks5leg6zj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl30f9n3ct5wks5leg6zj.jpg" alt=" " width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopxmfm4wg0rvbr3kz57i.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopxmfm4wg0rvbr3kz57i.jpg" alt=" " width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpful3g21fvz9p22wmu5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpful3g21fvz9p22wmu5.jpg" alt=" " width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In this example, the Network Flow Monitor lets us narrow the investigation step by step: "Increased retransmissions across the entire VPC" -&amp;gt; "No issue on the AZ-3a side" -&amp;gt; "Retransmissions only on the AZ-3b side" -&amp;gt; "The network health indicator for AZ-3b is Healthy, so it is not an AWS infrastructure issue" -&amp;gt; "Perhaps an anomaly within the user's scope of responsibility, such as the AZ-3b worker node or pod."&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. Impressions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The setup process was extremely easy: we simply added the add-on via the Management Console.&lt;/li&gt;
&lt;li&gt;This time we confirmed the metric differences by introducing packet loss to generate retransmissions; going forward, we want to explore how this service can raise our level of monitoring.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>cloudwatch</category>
    </item>
  </channel>
</rss>
