<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Hazmei</title>
    <description>The latest articles on Forem by Hazmei (@hazmei).</description>
    <link>https://forem.com/hazmei</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F175688%2F76a41bf3-1507-4b75-b1d7-d237edf32917.jpg</url>
      <title>Forem: Hazmei</title>
      <link>https://forem.com/hazmei</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/hazmei"/>
    <language>en</language>
    <item>
      <title>[EKS] Pods stuck in Init/ContainerCreating state</title>
      <dc:creator>Hazmei</dc:creator>
      <pubDate>Tue, 19 Dec 2023 12:34:19 +0000</pubDate>
      <link>https://forem.com/hazmei/eks-pods-stuck-in-initcontainercreating-state-14ch</link>
      <guid>https://forem.com/hazmei/eks-pods-stuck-in-initcontainercreating-state-14ch</guid>
      <description>&lt;h2&gt;
  
  
  What is EKS?
&lt;/h2&gt;

&lt;p&gt;EKS, or Elastic Kubernetes Service, is a managed Kubernetes service offered by AWS: AWS manages the control plane of the Kubernetes cluster while you manage the data plane. Here at &lt;a href="https://www.ascenda.com/" rel="noopener noreferrer"&gt;Ascenda Loyalty&lt;/a&gt;, we have been running our applications on EKS for more than a year.&lt;/p&gt;

&lt;h2&gt;Some background info&lt;/h2&gt;

&lt;p&gt;Recently we observed a couple of pods stuck in the ContainerCreating state for more than 10 minutes. For context, our application pods use security groups for pods, and our worker nodes are m5a.xlarge EC2 instances.&lt;/p&gt;

&lt;p&gt;If you are familiar with EKS, you may know that security groups for pods are only supported on most Nitro-based Amazon EC2 instance families, and that using them lowers the maximum number of pods per node (if all the pods use security groups).&lt;/p&gt;
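&lt;p&gt;To make that trade-off concrete, here is a rough sketch using AWS's published max-pods formula. The branch-interface limit of 18 for m5a.xlarge matches the figure we mention later in this post; the per-instance ENI and IP counts (4 ENIs with 15 IPv4 addresses each) are taken from the AWS instance-type documentation, so treat them as assumptions rather than something verified here:&lt;/p&gt;

```python
# AWS max-pods formula (from the eni-max-pods reference):
# maxPods = ENIs * (IPv4 addresses per ENI - 1) + 2
# Each ENI reserves one IP for itself; the +2 covers host-network pods.

def max_pods_standard(enis: int, ips_per_eni: int) -> int:
    """Max pods when pods draw secondary IPs from regular ENIs."""
    return enis * (ips_per_eni - 1) + 2

# m5a.xlarge figures (assumed from AWS docs):
M5A_XLARGE_ENIS = 4
M5A_XLARGE_IPS_PER_ENI = 15
M5A_XLARGE_BRANCH_ENIS = 18  # limit when every pod gets its own security group

print(max_pods_standard(M5A_XLARGE_ENIS, M5A_XLARGE_IPS_PER_ENI))  # 58
print(M5A_XLARGE_BRANCH_ENIS)                                      # 18
```

&lt;p&gt;So on the same m5a.xlarge node, 58 "ordinary" pods shrink to at most 18 pods once every pod needs its own branch network interface.&lt;/p&gt;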

&lt;h2&gt;What happened?&lt;/h2&gt;

&lt;p&gt;Recently we increased the pod replica counts and started seeing more frequent deployment failures due to pods staying in the Init/ContainerCreating state for a long, long time (sometimes beyond 10 minutes).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdq36laes4b0phq33wbj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdq36laes4b0phq33wbj.png" alt="kubectl all pods output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So... What gives?&lt;/p&gt;

&lt;p&gt;From the initial look, it seems that the pods are not getting a private IPv4 address from the controller. This causes them to stay in the &lt;code&gt;Init&lt;/code&gt;/&lt;code&gt;ContainerCreating&lt;/code&gt; state until they are assigned one. We can rule out a scheduling issue, as the pods were successfully scheduled onto the nodes.&lt;/p&gt;

&lt;p&gt;The first thing that came to mind was to check the available private IPv4 addresses in the subnet, in case we had exhausted the whole IP range allocated. This was not the case, so let's move on.&lt;/p&gt;

&lt;p&gt;The other thing that came to mind was that we had run out of branch network interfaces (pod ENIs) on the affected worker nodes, so off we went to run the following commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;kubectl get pods -A -o wide&lt;/code&gt;
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx102q3cnt0xbhgvwdxyg.png" alt="kubectl affected node"&gt;
Check which pods are affected and which nodes they are scheduled on.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl describe -n &amp;lt;namespace&amp;gt; pods &amp;lt;pod name&amp;gt;&lt;/code&gt;
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1s81cw69egk4z7o6crl.png" alt="kubectl describe pod"&gt;
Check the status of the pod. If it's due to the pod ENI limit being hit, it will show up in the status.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl describe nodes &amp;lt;node name&amp;gt;&lt;/code&gt;
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pqnlcjxi4qdygt7bj4j.png" alt="kubectl describe node"&gt;
Check the allocated resources for &lt;code&gt;vpc.amazonaws.com/pod-eni&lt;/code&gt;. We know that on an m5a.xlarge instance, the maximum is 18 pod ENIs per instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It doesn't seem that we've maxed out our branch ENI usage. 🤔&lt;/p&gt;

&lt;p&gt;Let's dig a little further elsewhere, since this is related to the pods not getting an IP address. One thing that came to mind was the AWS VPC CNI plugin that we use. The version deployed at the time was v1.7.10, and there might have been a bug in that version causing these random failures.&lt;/p&gt;

&lt;p&gt;A quick Google search brought us &lt;a href="https://github.com/aws/amazon-vpc-cni-k8s/issues/1245" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Most of the solutions point to upgrading the AWS CNI to version ≥ v1.7.7 (which we were already on). There were also comments stating that certain environment variables needed to be set to use security groups for pods (which we had set correctly). Newer AWS CNI releases were available at the time, the latest being v1.9.0, and with no options left, we upgraded to the latest CNI version.&lt;/p&gt;

&lt;p&gt;Everything seemed fine for a few hours, until the same error returned to haunt us.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5vw4exnqdzzl6rd9mhv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5vw4exnqdzzl6rd9mhv.gif" alt="Enraged panda"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Fast forward&lt;/h2&gt;

&lt;p&gt;After opening an AWS support ticket and going back and forth with the AWS engineer, we found that it was indeed due to the pod ENI limit. Our use of security groups for pods was ultimately causing this error: &lt;code&gt;failed to assign an IP address to container&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84vk553qq79r0gav1x3h.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84vk553qq79r0gav1x3h.gif" alt="facepalm"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although there are shortfalls in using security groups for pods in EKS (fewer pods per node), we're still using them to maintain a high level of security between different AWS resources such as RDS and ElastiCache for Memcached.&lt;/p&gt;

&lt;h3&gt;Why didn’t we notice that we ran out of pod ENIs in the first place?&lt;/h3&gt;

&lt;p&gt;For each application, we deploy a Kubernetes Job that runs a db migration step before deploying a set of webapp and worker pods. These consume pod ENIs, as they use security groups for pods.&lt;/p&gt;

&lt;p&gt;When we first checked whether we were hitting the pod ENI limit, we executed these commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;kubectl get pods -A -o wide&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl describe -n &amp;lt;namespace&amp;gt; pods &amp;lt;pod name&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl describe nodes &amp;lt;node name&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Upon further inspection of the output from &lt;code&gt;kubectl describe nodes&lt;/code&gt;, there’s a discrepancy between the reported allocated resources for &lt;code&gt;vpc.amazonaws.com/pod-eni&lt;/code&gt; and the number of pods that use a pod ENI. We can verify this by running &lt;code&gt;kubectl get pods -o wide -A | grep &amp;lt;node name&amp;gt;&lt;/code&gt; and counting the number of pods that use security groups for pods. The discrepancy was somewhere between 1 and 6 pods compared with what the describe node command reported.&lt;/p&gt;
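&lt;p&gt;Here is a small sketch of that comparison. The pod names and counts below are hypothetical canned samples standing in for real &lt;code&gt;kubectl&lt;/code&gt; output, so the text processing can be shown end to end:&lt;/p&gt;

```shell
# `kubectl describe node` counts only *running* pods under "Allocated resources":
describe_sample='vpc.amazonaws.com/pod-eni  3  3'

# Listing pods on the node (kubectl get pods -o wide -A piped through grep for
# the node name) also shows Completed job pods, whose branch ENIs were never
# detached. All names below are made up for illustration:
pods_sample='webapp-7d4f8     Running
worker-6c9b2     Running
webapp-2f1a0     Running
db-migrate-abc   Completed
db-migrate-def   Completed'

reported=$(echo "$describe_sample" | awk '{print $2}')   # branch ENIs reported
total=$(echo "$pods_sample" | grep -c '')                # pods actually listed
echo "reported=$reported total=$total discrepancy=$((total - reported))"
```

&lt;p&gt;With the sample data this prints a discrepancy of 2: the two Completed migration pods still hold branch ENIs that the describe output no longer accounts for.&lt;/p&gt;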

&lt;h3&gt;What’s causing this discrepancy?&lt;/h3&gt;

&lt;p&gt;It’s the db migration jobs. These use a Kubernetes Job together with security groups for pods. On completion, the allocated pod ENI does not get detached, and this is not reflected properly in the output of &lt;code&gt;kubectl describe node &amp;lt;node name&amp;gt;&lt;/code&gt;: that command only reports running pods and does not include completed pods.&lt;/p&gt;

&lt;h2&gt;What now?&lt;/h2&gt;

&lt;p&gt;These are some of the possible solutions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Specify &lt;code&gt;.spec.ttlSecondsAfterFinished&lt;/code&gt; in the Job manifest.&lt;br&gt;
This is not possible for us at the moment: the feature is in the alpha stage as of Kubernetes v1.19, and EKS does not enable pre-beta features.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Have the CI/CD system delete the Kubernetes Job after it completes successfully.&lt;br&gt;
This is the suitable solution for us. We can remove the successful job, since keeping it around serves no purpose and consumes one pod ENI per pod.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the db migration job as part of the webapp's init container.&lt;br&gt;
We would free up one pod ENI per application, since the migration would run in the same pod. However, this requires a bit of work on our CI/CD and Helm charts, and we would have some uncertainty about the impact on the system.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
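&lt;p&gt;For illustration, option 1 would look something like the manifest below once the TTL feature is available. This is a minimal sketch with hypothetical names (&lt;code&gt;db-migrate&lt;/code&gt;, the image, and the migration command are all placeholders), not our actual manifest:&lt;/p&gt;

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate                  # hypothetical job name
spec:
  ttlSecondsAfterFinished: 120      # delete the Job ~2 min after it finishes,
                                    # releasing its branch ENI with it
  template:
    spec:
      containers:
        - name: migrate
          image: registry.example.com/webapp:latest  # hypothetical image
          command: ["rake", "db:migrate"]            # hypothetical migration command
      restartPolicy: Never
```

&lt;p&gt;Option 2 amounts to running &lt;code&gt;kubectl delete job db-migrate&lt;/code&gt; (or the equivalent API call) as a final CI/CD step once the job succeeds, which frees the branch ENI right away.&lt;/p&gt;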




&lt;p&gt;&lt;em&gt;Posted in 2020.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>eks</category>
      <category>aws</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>A year in review as a DevOps Engineer</title>
      <dc:creator>Hazmei</dc:creator>
      <pubDate>Sun, 07 Jun 2020 07:30:03 +0000</pubDate>
      <link>https://forem.com/hazmei/a-year-in-review-as-a-devops-engineer-5g1j</link>
      <guid>https://forem.com/hazmei/a-year-in-review-as-a-devops-engineer-5g1j</guid>
      <description>&lt;p&gt;Last week marks my 1 year as a DevOps Engineer in Ascenda. It's been a great 1 year here from being someone fresh out of graduation to refining my foundations and picking up new skills.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xT6U5vgK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/8k388fkrw2vl0qc2dx3h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xT6U5vgK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/8k388fkrw2vl0qc2dx3h.jpg" alt="asc-allenby-house" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This was my desk back at the previous office in Allenby House. It's basically the third floor of a shophouse that had been renovated into an office. The worst part has to be the fact that it was directly beside the road; every now and then you'd hear vehicles passing by. Thankfully I only had to endure that for the first 2 weeks before we moved to a new office within walking distance down the road.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6p2-okXO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/5ewrv3m44664uvn9l0yf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6p2-okXO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/5ewrv3m44664uvn9l0yf.jpg" alt="asc-arc380" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just look at that beauty. 😍&lt;/p&gt;

&lt;p&gt;Okay, back to reflecting on the past year as a DevOps Engineer. I came in as a junior in my role (duh) and did not expect to be given so many opportunities to take charge and even implement things.&lt;/p&gt;

&lt;p&gt;Here are some of the things that I picked up over the past year:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform&lt;/li&gt;
&lt;li&gt;Ansible&lt;/li&gt;
&lt;li&gt;CircleCI, Jenkins&lt;/li&gt;
&lt;li&gt;AWS&lt;/li&gt;
&lt;li&gt;Cloudflare&lt;/li&gt;
&lt;li&gt;PostgreSQL&lt;/li&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;Kubernetes (Currently)&lt;/li&gt;
&lt;li&gt;Helm (Currently)&lt;/li&gt;
&lt;li&gt;Golang&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This might seem like a lot, but I would not call myself an expert in all things DevOps. There are still many more things to learn (god, Kubernetes is full of stuff and it's confusing as hell to someone who is new to it).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x5kjQpSP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/0shbu3cw7p4onzgo9ox5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x5kjQpSP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/0shbu3cw7p4onzgo9ox5.jpg" alt="asc-engnr" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PSA to those who are looking for jobs: we are hiring, and if you are situated in Singapore, Manila, Sydney, or Vietnam, have a look at our &lt;a href="https://ascendaloyalty.recruitee.com"&gt;openings here&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
