<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Adedamola Ajibola</title>
    <description>The latest articles on Forem by Adedamola Ajibola (@damola12345).</description>
    <link>https://forem.com/damola12345</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F808740%2F5323167c-9da6-4174-a323-03a1851a6351.JPG</url>
      <title>Forem: Adedamola Ajibola</title>
      <link>https://forem.com/damola12345</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/damola12345"/>
    <language>en</language>
    <item>
      <title>The EKS 1.32 to 1.33 Upgrade That Broke Everything (And How I Fixed It)</title>
      <dc:creator>Adedamola Ajibola</dc:creator>
      <pubDate>Fri, 26 Dec 2025 15:40:55 +0000</pubDate>
      <link>https://forem.com/damola12345/the-eks-132-133-upgrade-that-broke-everything-and-how-i-fixed-it-5fe9</link>
      <guid>https://forem.com/damola12345/the-eks-132-133-upgrade-that-broke-everything-and-how-i-fixed-it-5fe9</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6epi8xitcc4c4cr999mo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6epi8xitcc4c4cr999mo.png" alt=" " width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Upgrading Kubernetes should be boring.&lt;/p&gt;

&lt;p&gt;This one wasn’t.&lt;/p&gt;

&lt;p&gt;I recently upgraded a production &lt;strong&gt;Amazon EKS cluster&lt;/strong&gt; from &lt;strong&gt;1.32 to 1.33&lt;/strong&gt;, expecting a routine change. Instead, it triggered a cascading failure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nodes went NotReady&lt;/li&gt;
&lt;li&gt;Add-ons stalled indefinitely&lt;/li&gt;
&lt;li&gt;Karpenter stopped provisioning capacity&lt;/li&gt;
&lt;li&gt;The cluster deadlocked itself&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This post walks through what broke, why it broke, and the exact steps that stabilized the cluster so you don’t repeat my mistakes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Critical Issues&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you're upgrading to EKS 1.33, know this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Amazon Linux 2 is NOT supported - you must migrate to AL2023 first&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Anonymous auth is restricted - New RBAC required for kube-apiserver&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Karpenter needs &lt;code&gt;eks:DescribeCluster&lt;/code&gt; permission - Missing this breaks everything&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add-ons can get stuck in "Updating" - managed node groups are your escape hatch&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Part 1: The Failed First Attempt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I Did Wrong&lt;/strong&gt;&lt;br&gt;
I started with what looked like a standard Terraform upgrade:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "damola_eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~&amp;gt; 20.0"

  cluster_name    = local.name
  cluster_version = "1.33"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What happened&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: AMI Type AL2_x86_64 is only supported for kubernetes versions 1.32 or earlier
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The Root Cause&lt;/strong&gt;&lt;br&gt;
EKS 1.33 drops Amazon Linux 2 completely:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AL2 reaches end of support on Nov 26, 2025, and no AL2 AMIs exist for 1.33.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;The Fix: AL2 → AL2023 Migration&lt;/strong&gt;&lt;br&gt;
For Karpenter users, this is actually simple. Update your EC2NodeClass:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# EC2NodeClass
spec:
  amiSelectorTerms:
    - alias: al2023@latest

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Managed node groups (Terraform)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ami_type = "AL2023_x86_64_STANDARD"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait until all nodes are AL2023, then upgrade the control plane.&lt;/p&gt;
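&lt;p&gt;A quick sanity check before touching the control plane: list every node’s OS image and fail loudly if anything is still on AL2 (a sketch; &lt;code&gt;osImage&lt;/code&gt; is the standard kubelet nodeInfo field):&lt;br&gt;
&lt;/p&gt;

```shell
# Show which OS image each node is running; every row should read "Amazon Linux 2023..."
kubectl get nodes -o custom-columns=NAME:.metadata.name,OS:.status.nodeInfo.osImage

# -w matches "Amazon Linux 2" as a whole word, so "Amazon Linux 2023" does not count
if kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.osImage}' | grep -qw 'Amazon Linux 2'; then
  echo 'AL2 nodes still present - do not upgrade the control plane yet'
fi
```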

&lt;p&gt;&lt;strong&gt;Part 2: The Karpenter Catastrophe&lt;/strong&gt;&lt;br&gt;
After migrating to AL2023, I cordoned the old nodes, but no new nodes came up.&lt;/p&gt;

&lt;p&gt;Karpenter was completely stuck.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Error&lt;/strong&gt;&lt;br&gt;
Checking Karpenter logs revealed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
{
  "message": "failed to detect the cluster CIDR",
  "error": "not authorized to perform: eks:DescribeCluster"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The Root Cause&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Starting with Karpenter v1.0, the controller requires &lt;code&gt;eks:DescribeCluster&lt;/code&gt; to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detect cluster networking (CIDR)&lt;/li&gt;
&lt;li&gt;Discover API endpoint configuration&lt;/li&gt;
&lt;li&gt;Validate authentication mode&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without this permission, provisioning fails before a single node is launched.&lt;/p&gt;
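&lt;p&gt;You can check for this exact gap before upgrading. This sketch assumes the AWS CLI and uses a placeholder role ARN; swap in your Karpenter controller role:&lt;br&gt;
&lt;/p&gt;

```shell
# Placeholder ARN - replace with your Karpenter controller IAM role
ROLE_ARN="arn:aws:iam::123456789012:role/karpenter-controller"

# Simulate the call without touching anything; prints "allowed" or "implicitDeny"
aws iam simulate-principal-policy \
  --policy-source-arn "$ROLE_ARN" \
  --action-names eks:DescribeCluster \
  --query 'EvaluationResults[0].EvalDecision' \
  --output text
```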

&lt;p&gt;&lt;strong&gt;The Fix&lt;/strong&gt;&lt;br&gt;
Add the permission to your Karpenter controller IAM role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Effect": "Allow",
  "Action": "eks:DescribeCluster",
  "Resource": "*"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Then restart:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl rollout restart deployment/karpenter -n karpenter
kubectl rollout status deployment/karpenter -n karpenter

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Karpenter recovered, but the cluster still wasn’t healthy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part 3: The Addon Deadlock&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After the control plane upgraded:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add-ons started updating (&lt;code&gt;vpc-cni&lt;/code&gt;, &lt;code&gt;kube-proxy&lt;/code&gt;, &lt;code&gt;coredns&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;They got stuck in Updating&lt;/li&gt;
&lt;li&gt;All nodes went NotReady&lt;/li&gt;
&lt;li&gt;No new nodes could join&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Classic deadlock:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Nodes need add-ons → add-ons need healthy nodes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;The Error&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
# All showed: NotReady

kubectl logs -n kube-system -l k8s-app=kube-dns
# Error: Authorization error (user=kube-apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The Root Causes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anonymous auth restricted (EKS 1.33)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Anonymous API access is now limited to health endpoints only. &lt;/li&gt;
&lt;li&gt;The kube-apiserver requires explicit RBAC to communicate with kubelet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Add-on update deadlock&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add-ons need healthy nodes to update. &lt;/li&gt;
&lt;li&gt;Nodes need working add-ons to become Ready. &lt;/li&gt;
&lt;li&gt;When all nodes are NotReady, everything gets stuck.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Fix Part 1: RBAC for kube-apiserver&lt;/strong&gt;&lt;br&gt;
Create the missing RBAC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups: [""]
  resources: ["nodes/proxy","nodes/stats","nodes/log","nodes/spec","nodes/metrics"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- kind: User
  name: kube-apiserver-kubelet-client
  apiGroup: rbac.authorization.k8s.io
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The errors stopped, but the add-ons were still stuck.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Fix Part 2: Breaking the Deadlock with Managed Nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With broken Karpenter nodes, I had no way out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; temporarily scale up managed node groups.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;desired_size = 1
ami_type     = "AL2023_x86_64_STANDARD"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why this works&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managed nodes bootstrap independently&lt;/li&gt;
&lt;li&gt;They come up with working VPC CNI&lt;/li&gt;
&lt;li&gt;Add-ons get healthy replicas&lt;/li&gt;
&lt;li&gt;Karpenter recovers&lt;/li&gt;
&lt;li&gt;Broken nodes can be safely deleted&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Within ~10 minutes, the cluster recovered.&lt;/p&gt;
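&lt;p&gt;If you prefer not to wait for a Terraform run mid-incident, the same scale-up can be done directly with the AWS CLI (cluster and node group names below are placeholders):&lt;br&gt;
&lt;/p&gt;

```shell
# Scale the fallback managed node group up by one node
aws eks update-nodegroup-config \
  --cluster-name my-cluster \
  --nodegroup-name fallback-nodes \
  --scaling-config minSize=1,maxSize=2,desiredSize=1

# Block until the node group reports ACTIVE again
aws eks wait nodegroup-active --cluster-name my-cluster --nodegroup-name fallback-nodes
```

&lt;p&gt;Remember to mirror the change in Terraform afterwards, or the next apply will scale the group back down.&lt;/p&gt;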

&lt;p&gt;&lt;strong&gt;Part 4: Final Cleanup &amp;amp; Validation&lt;/strong&gt;&lt;br&gt;
Once the cluster is stable, verify that all nodes are healthy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes -o custom-columns=\
NAME:.metadata.name,\
STATUS:.status.conditions[-1].type,\
OS:.status.nodeInfo.osImage,\
VERSION:.status.nodeInfo.kubeletVersion

# All showed:
# Ready | Amazon Linux 2023.9.20251208 | v1.33.5-eks-ecaa3a6

# Verify addons
kubectl get daemonset -n kube-system
# All showed READY = DESIRED

# Clean up stuck terminating pods
kubectl delete pod -n kube-system --force --grace-period=0 &amp;lt;stuck-pod-names&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Recommended Add-on Versions for EKS 1.33&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CoreDNS: &lt;code&gt;v1.12.4-eksbuild.1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;kube-proxy: &lt;code&gt;v1.33.5-eksbuild.2&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;VPC CNI: &lt;code&gt;v1.21.1-eksbuild.1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;EBS CSI: &lt;code&gt;v1.54.0-eksbuild.1&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
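&lt;p&gt;Pinning the versions above can be scripted with the AWS CLI (a sketch; the cluster name is a placeholder, and &lt;code&gt;OVERWRITE&lt;/code&gt; tells EKS to overwrite any local config changes):&lt;br&gt;
&lt;/p&gt;

```shell
CLUSTER="my-cluster"   # placeholder cluster name

aws eks update-addon --cluster-name "$CLUSTER" --addon-name coredns            --addon-version v1.12.4-eksbuild.1 --resolve-conflicts OVERWRITE
aws eks update-addon --cluster-name "$CLUSTER" --addon-name kube-proxy         --addon-version v1.33.5-eksbuild.2 --resolve-conflicts OVERWRITE
aws eks update-addon --cluster-name "$CLUSTER" --addon-name vpc-cni            --addon-version v1.21.1-eksbuild.1 --resolve-conflicts OVERWRITE
aws eks update-addon --cluster-name "$CLUSTER" --addon-name aws-ebs-csi-driver --addon-version v1.54.0-eksbuild.1 --resolve-conflicts OVERWRITE

# Confirm none of them are stuck in UPDATING
aws eks describe-addon --cluster-name "$CLUSTER" --addon-name coredns --query 'addon.status'
```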

&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AL2023 is mandatory for EKS 1.33&lt;/li&gt;
&lt;li&gt;Karpenter needs &lt;code&gt;eks:DescribeCluster&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;kube-apiserver &lt;strong&gt;RBAC&lt;/strong&gt; must be updated&lt;/li&gt;
&lt;li&gt;Keep managed node groups as a safety net&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;&lt;br&gt;
Looking back, the main issue wasn’t just missing permissions; it was configuration drift.&lt;/p&gt;

&lt;p&gt;While the cluster was still running &lt;code&gt;EKS 1.32&lt;/code&gt;, I manually added &lt;code&gt;eks:DescribeCluster&lt;/code&gt; during the &lt;code&gt;AL2023&lt;/code&gt; migration. Everything worked, so I forgot to codify it in Terraform.&lt;/p&gt;

&lt;p&gt;During the upgrade to &lt;code&gt;EKS 1.33&lt;/code&gt;, Terraform re-applied the IAM role and &lt;strong&gt;removed the permission&lt;/strong&gt; right when Karpenter started requiring it.&lt;/p&gt;
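&lt;p&gt;The durable fix is to codify the permission. A minimal Terraform sketch (the role reference and policy name are illustrative; attach it to whatever resource manages your Karpenter controller role):&lt;br&gt;
&lt;/p&gt;

```terraform
# Illustrative names - point "role" at your actual Karpenter controller role.
resource "aws_iam_role_policy" "karpenter_describe_cluster" {
  name = "karpenter-eks-describe-cluster"
  role = aws_iam_role.karpenter_controller.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "eks:DescribeCluster"
      Resource = "*"
    }]
  })
}
```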

&lt;p&gt;The upgrade didn’t introduce the bug; it only exposed it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Environment:&lt;/strong&gt; EKS, Terraform, Karpenter v1.x&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resources&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html" rel="noopener noreferrer"&gt;AWS EKS Version Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/linux/al2023/ug/compare-with-al2.html" rel="noopener noreferrer"&gt;Amazon Linux 2023 Migration Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions-standard.html" rel="noopener noreferrer"&gt;EKS Kubernetes Versions (Standard Support)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>aws</category>
      <category>eks</category>
    </item>
    <item>
      <title>How to Fix Karpenter Migration Issues During Upgrade (v0.25.0 → v1.5.0)</title>
      <dc:creator>Adedamola Ajibola</dc:creator>
      <pubDate>Thu, 04 Dec 2025 13:51:27 +0000</pubDate>
      <link>https://forem.com/damola12345/how-to-fix-karpenter-migration-issues-during-upgrade-v0250-v150-l8p</link>
      <guid>https://forem.com/damola12345/how-to-fix-karpenter-migration-issues-during-upgrade-v0250-v150-l8p</guid>
      <description>&lt;p&gt;This blog post covers the real issues I ran into while upgrading Karpenter from v0.25.0 → v1.5.0 in production, why they happened, and the exact fixes. If you're planning this upgrade, this guide will save you hours of debugging.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;Upgrading Karpenter from 0.25.0 to 1.5.0 is not a simple version bump. It requires migrating from the v1alpha5 APIs to the new v1 APIs, a breaking change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Starting point:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Karpenter v0.25.0&lt;/li&gt;
&lt;li&gt;EKS 1.31&lt;/li&gt;
&lt;li&gt;v1alpha5 CRDs (Provisioner, AWSNodeTemplate)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Target:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Karpenter v1.5.0&lt;/li&gt;
&lt;li&gt;v1 CRDs (NodePool, EC2NodeClass)&lt;/li&gt;
&lt;li&gt;EKS 1.32 compatibility&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Skipping the CRD migration step leads to controller crashes, stuck resources, and broken uninstalls, all of which I learned the hard way.&lt;/p&gt;
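&lt;p&gt;Before starting, it’s worth checking which API versions your Karpenter CRDs actually serve (a sketch, assuming &lt;code&gt;kubectl&lt;/code&gt; access):&lt;br&gt;
&lt;/p&gt;

```shell
# Old install: the legacy CRD exists and serves only v1alpha5
kubectl get crd provisioners.karpenter.sh -o jsonpath='{.spec.versions[*].name}'

# New CRDs: a "NotFound" error here means the karpenter-crd chart has not been installed yet
kubectl get crd nodepools.karpenter.sh -o jsonpath='{.spec.versions[*].name}'
```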

&lt;h2&gt;
  
  
  Problems I Faced During Migration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Problem 1: Chart not found in Helm repository
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Error:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade karpenter karpenter/karpenter &lt;span class="nt"&gt;--version&lt;/span&gt; 1.5.0
&lt;span class="c"&gt;# Error: chart "karpenter" matching 1.5.0 not found&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why:&lt;/strong&gt; The old Helm repo only contains versions up to 0.16.3. Karpenter v1.x was moved to an OCI registry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade karpenter oci://public.ecr.aws/karpenter/karpenter &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--version&lt;/span&gt; 1.5.0 &lt;span class="nt"&gt;--namespace&lt;/span&gt; karpenter &lt;span class="nt"&gt;--wait&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Problem 2: OCI registry tag not found
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Error:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade karpenter oci://public.ecr.aws/karpenter/karpenter &lt;span class="nt"&gt;--version&lt;/span&gt; v1.5.0
&lt;span class="c"&gt;# Error: ... v1.5.0: not found&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why:&lt;/strong&gt; Starting in v0.35.0, OCI tags no longer use the &lt;code&gt;v&lt;/code&gt; prefix.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nt"&gt;--version&lt;/span&gt; 1.5.0   &lt;span class="c"&gt;# NOT v1.5.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Problem 3: Controller crash on startup (missing CRDs)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Error:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# ERROR: no matches for kind "NodeClaim" in version "karpenter.sh/v1"&lt;/span&gt;
&lt;span class="c"&gt;# panic: unable to retrieve the complete list of server APIs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why:&lt;/strong&gt; The v1.5.0 controller requires new v1 CRDs (NodePool, NodeClaim, EC2NodeClass). Your cluster still contains only v1alpha5 CRDs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Install CRDs first&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; karpenter-crd &lt;span class="se"&gt;\&lt;/span&gt;
  oci://public.ecr.aws/karpenter/karpenter-crd &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--version&lt;/span&gt; 1.5.0 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt; karpenter &lt;span class="nt"&gt;--create-namespace&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then upgrade the controller:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade karpenter oci://public.ecr.aws/karpenter/karpenter &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--version&lt;/span&gt; 1.5.0 &lt;span class="nt"&gt;--namespace&lt;/span&gt; karpenter &lt;span class="nt"&gt;--wait&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;This is the most common migration failure:&lt;/strong&gt; the CRDs must be upgraded first.&lt;/p&gt;

&lt;h3&gt;
  
  
  Problem 4: IAM permission denied
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Error:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"not authorized to perform: ec2:DescribeImages"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why:&lt;/strong&gt; Karpenter v1 introduces new instance profile and AMI discovery workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Add this to the Karpenter controller IAM role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"ec2:DescribeImages"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"iam:GetInstanceProfile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"iam:CreateInstanceProfile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"iam:DeleteInstanceProfile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"iam:AddRoleToInstanceProfile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"iam:RemoveRoleFromInstanceProfile"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl rollout restart deployment karpenter &lt;span class="nt"&gt;-n&lt;/span&gt; karpenter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step-by-Step Migration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Backup existing resources
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get provisioners &lt;span class="nt"&gt;-A&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; yaml &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; provisioners-backup.yaml
kubectl get awsnodetemplates &lt;span class="nt"&gt;-A&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; yaml &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; awsnodetemplates-backup.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Install v1 CRDs
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; karpenter-crd &lt;span class="se"&gt;\&lt;/span&gt;
  oci://public.ecr.aws/karpenter/karpenter-crd &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--version&lt;/span&gt; 1.5.0 &lt;span class="nt"&gt;--namespace&lt;/span&gt; karpenter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Update IAM permissions
&lt;/h3&gt;

&lt;p&gt;(Add the policy from Problem 4.)&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Upgrade the controller
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade karpenter oci://public.ecr.aws/karpenter/karpenter &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--version&lt;/span&gt; 1.5.0 &lt;span class="nt"&gt;--namespace&lt;/span&gt; karpenter &lt;span class="nt"&gt;--wait&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Convert v1alpha5 → v1 resources
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Provisioner → NodePool&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter.sh/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePool&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requirements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter.sh/capacity-type&lt;/span&gt;
        &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
        &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;on-demand"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;nodeClassRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter.k8s.aws&lt;/span&gt;
        &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;EC2NodeClass&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;disruption&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;consolidationPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WhenEmptyOrUnderutilized&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;AWSNodeTemplate → EC2NodeClass&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;karpenter.k8s.aws/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;EC2NodeClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;amiSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alias&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;al2@latest&lt;/span&gt;
  &lt;span class="na"&gt;subnetSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;karpenter.sh/discovery&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-cluster&lt;/span&gt;
  &lt;span class="na"&gt;securityGroupSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;karpenter.sh/discovery&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-cluster&lt;/span&gt;
  &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;karpenter-node-role-name"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  6. Apply and verify
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ec2nodeclass.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; nodepool.yaml

kubectl get ec2nodeclass
kubectl get nodepools
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  7. Migrate nodes
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Test provisioning&lt;/span&gt;
kubectl run &lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--requests&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;cpu&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1,memory&lt;span class="o"&gt;=&lt;/span&gt;1Gi

&lt;span class="c"&gt;# Drain old nodes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  8. Clean up old resources
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete provisioner default
kubectl delete awsnodetemplate default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Karpenter v1.5.0 running smoothly&lt;/li&gt;
&lt;li&gt;All nodes migrated to NodePools&lt;/li&gt;
&lt;li&gt;Cluster ready for EKS 1.32&lt;/li&gt;
&lt;li&gt;Zero downtime&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Useful Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://karpenter.sh/docs/upgrading/upgrade-guide/" rel="noopener noreferrer"&gt;Official Upgrade Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://karpenter.sh/v1.0/upgrading/v1-migration/" rel="noopener noreferrer"&gt;v1 Migration Guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Monitor logs during upgrade:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;-n&lt;/span&gt; karpenter &lt;span class="nt"&gt;-l&lt;/span&gt; app.kubernetes.io/name&lt;span class="o"&gt;=&lt;/span&gt;karpenter &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>karpenter</category>
      <category>eks</category>
    </item>
    <item>
      <title>W.TEC’S EARLY INNOVATORS CAMP: Where Creativity Meets Technology</title>
      <dc:creator>Adedamola Ajibola</dc:creator>
      <pubDate>Thu, 11 Sep 2025 07:30:47 +0000</pubDate>
      <link>https://forem.com/damola12345/wtecs-early-innovators-camp-where-creativity-meets-technology-4na5</link>
      <guid>https://forem.com/damola12345/wtecs-early-innovators-camp-where-creativity-meets-technology-4na5</guid>
      <description>&lt;p&gt;When I first signed up to volunteer at W.TEC’s Early Innovators Camp, I thought it would simply be a chance to give back. What I didn’t realize was how much it would give back to me in return. This camp isn’t just about summer activities. It’s about nurturing creativity, problem-solving skills, and confidence in technology for the next generation.&lt;/p&gt;

&lt;p&gt;For two weeks, I had the privilege of stepping into a dual role as both facilitator and sponsor, and it turned out to be one of the most meaningful experiences of my journey in tech so far.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2ll6c0uxlxlrbmvygis.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2ll6c0uxlxlrbmvygis.png" alt="wtec camp" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;GIVING BACK IN MY OWN WAY&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Volunteering has always been important to me, but this year I wanted to take it a step further. I decided to sponsor two children to attend the camp, giving them the opportunity to be part of an environment they might not have accessed otherwise.&lt;/p&gt;

&lt;p&gt;Watching them dive into activities, raise their hands eagerly during sessions, and share their excitement with their peers was incredibly rewarding. It reminded me that impact doesn’t always have to come from grand gestures; sometimes, it’s the small steps that create ripples of change. Seeing those children grow in confidence over the two weeks reaffirmed for me that access to opportunities is often the difference between potential left dormant and potential unlocked.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;HANDS-ON LEARNING AND MY ROLE&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I also got the chance to anchor a few fun, hands-on sessions. Two of my favorites were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;The Magic Tissue Paper Flower Experiment&lt;/em&gt;&lt;/strong&gt;: Blending science and art, the activity encouraged kids to think about how everyday materials can transform into something unexpected, a simple reminder that science is everywhere if we pay attention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;The Balloon Rocket Experiment&lt;/em&gt;&lt;/strong&gt;: A playful demonstration of Newton’s third law of motion. As balloons zoomed across the room, the children laughed, shouted, and chased after them, all while unknowingly grasping a key principle of physics.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The excitement on the children’s faces as the balloon rockets zoomed across the room was priceless. Their endless &lt;code&gt;why&lt;/code&gt; and &lt;code&gt;how&lt;/code&gt; questions reminded me why curiosity is the engine of innovation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;LEARNING FROM OTHERS&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the most enriching parts of the camp was learning alongside the kids. I got to sit in on sessions led by my colleagues &lt;code&gt;Nifesimi&lt;/code&gt; and &lt;code&gt;Abraham&lt;/code&gt;, who taught robotics, remote-controlled cars, and art drawing. Watching them break down complex concepts into digestible, engaging lessons was inspiring.&lt;/p&gt;

&lt;p&gt;Even outside structured sessions, learning continued through anime watch-and-discuss gatherings, spontaneous debates about which superhero had the best powers, and laughter-filled walks after lunch. The camp created a community where kids could explore ideas in a relaxed and open environment. These moments reminded me that some of the most powerful lessons aren’t formally taught; they’re experienced and shared together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;THE W.TEC COMMUNITY&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4j72xjw0munrvyw4wae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq4j72xjw0munrvyw4wae.png" alt="early innovators" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Behind the scenes, the camp was powered by a team of passionate volunteers: &lt;code&gt;Abraham&lt;/code&gt;, &lt;code&gt;Nifesimi&lt;/code&gt;, &lt;code&gt;Joy&lt;/code&gt;, &lt;code&gt;Debbie&lt;/code&gt;, and &lt;code&gt;Stella&lt;/code&gt;, who poured their energy into creating a safe, fun, and inspiring space. Each person brought something unique: from patience in guiding the kids through challenges, to humour that kept the atmosphere light, to creativity that turned ordinary lessons into memorable experiences.&lt;/p&gt;

&lt;p&gt;Working alongside them reminded me how much stronger we are when united by a shared mission. Together, we weren’t just facilitators; we were mentors, role models, cheerleaders, and sometimes even students ourselves. That spirit of collaboration made the camp feel less like work and more like a family.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;A LASTING IMPACT&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As the camp ended, I found myself feeling deeply nostalgic. I even sent a message to my fellow facilitators:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;I’ve been feeling quite nostalgic since the program ended. I truly miss the kids already. It was such a meaningful two weeks&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And I meant every word. Volunteering at the camp was a privilege, but the real reward was seeing the spark of possibility light up in young minds. Witnessing their growth reaffirmed the importance of creating access to opportunities like this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;CLOSING THOUGHTS&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;W.TEC’s Early Innovators Camp proves that when you blend fun, technology, and mentorship, you don’t just create summer activities, you shape futures.&lt;/p&gt;

&lt;p&gt;I left the camp with a heart full of gratitude, inspired by the children, my fellow facilitators, and the W.TEC team. And I’m already looking forward to being part of this journey again.&lt;/p&gt;

&lt;p&gt;To the W.TEC community, the facilitators, and most importantly, the kids, thank you for making it the best two weeks ever.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Fixing “Invalid Credentials” in AWS SSM Fleet Manager RDP</title>
      <dc:creator>Adedamola Ajibola</dc:creator>
      <pubDate>Sun, 31 Aug 2025 06:29:21 +0000</pubDate>
      <link>https://forem.com/damola12345/fixing-invalid-credentials-in-aws-ssm-fleet-manager-rdp-bld</link>
      <guid>https://forem.com/damola12345/fixing-invalid-credentials-in-aws-ssm-fleet-manager-rdp-bld</guid>
      <description>&lt;p&gt;When granting RDP access to a Windows EC2 instance, it’s tempting to open port &lt;code&gt;3389&lt;/code&gt; to the world &lt;code&gt;0.0.0.0/0&lt;/code&gt;. That’s a major security risk. Instead, AWS SSM Fleet Manager lets you connect over a secure channel without exposing RDP to the internet.&lt;/p&gt;

&lt;p&gt;Recently, I ran into an issue where Fleet Manager failed with this error:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Unable to establish Remote Desktop connection. Verify that valid credentials were provided, and that the user you specified has been granted permission to log in through Remote Desktop&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Root Cause&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On the Windows Server 2022 Base AMI, the default Administrator account was present, but its password had already expired. Since RDP connections, including those tunneled through SSM Fleet Manager, require a valid and active password, the expired credentials caused the login failure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvgimq85godxg2t0565ta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvgimq85godxg2t0565ta.png" alt="Ssm fleetmanager" width="596" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;The Fix&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Reset the Administrator password via SSM Run Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;net user Administrator "xxxxx28xx@xxxx!73"
net localgroup "Remote Desktop Users" Administrator /add
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then log in through Fleet Manager RDP with your username and the new password.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Best Practices&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Never expose &lt;strong&gt;RDP&lt;/strong&gt; port &lt;code&gt;3389&lt;/code&gt; to &lt;code&gt;0.0.0.0/0&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Use SSM Fleet Manager for secure access&lt;/li&gt;
&lt;li&gt;Enforce strong passwords and rotate them regularly&lt;/li&gt;
&lt;li&gt;Ensure EC2 has the IAM role: &lt;code&gt;AmazonSSMManagedInstanceCore&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Takeaway&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
If Fleet Manager RDP shows &lt;em&gt;&lt;code&gt;Invalid credentials&lt;/code&gt;&lt;/em&gt;, it’s usually not an SSM issue but a Windows password problem. Just reset the password through SSM and you’re good to go.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;References&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://repost.aws/knowledge-center/systems-manager-ec2-windows-connection-rdp?" rel="noopener noreferrer"&gt;AWS Knowledge Center – Connect to Windows EC2 using Systems Manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/fleet-manager-remote-desktop-connections.html?" rel="noopener noreferrer"&gt;AWS Docs – Fleet Manager Remote Desktop Connections&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=M-HKTcpt_Xg" rel="noopener noreferrer"&gt;YouTube – AWS Fleet Manager Walkthrough&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Demystifying AWS Security: IAM Password Policies vs. Automated Access Key Rotation</title>
      <dc:creator>Adedamola Ajibola</dc:creator>
      <pubDate>Mon, 10 Jun 2024 12:42:39 +0000</pubDate>
      <link>https://forem.com/damola12345/demystifying-aws-security-iam-password-policies-vs-automated-access-key-rotation-4l70</link>
      <guid>https://forem.com/damola12345/demystifying-aws-security-iam-password-policies-vs-automated-access-key-rotation-4l70</guid>
      <description>&lt;p&gt;Are you new to managing security in your AWS environment? Navigating the intricacies of AWS Identity and Access Management (IAM) can be overwhelming, especially when it comes to ensuring strong security practices. In this beginner-friendly blog post, we'll explore two fundamental aspects of AWS security: IAM password policies and automatically rotating IAM access keys using a Lambda function.&lt;/p&gt;

&lt;h2&gt;
  
  
  IAM Password Policy: Strengthening Your Authentication
&lt;/h2&gt;

&lt;p&gt;Let's start with IAM password policies. These policies define the rules and requirements for user passwords within your AWS account. By enforcing strong password policies, you can significantly enhance the security of your AWS environment. Here's what you need to know:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Complexity Requirements:&lt;/em&gt;&lt;/strong&gt; IAM password policies allow you to specify complexity requirements such as minimum length, the inclusion of special characters, and the prohibition of common passwords.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Password Expiry:&lt;/em&gt;&lt;/strong&gt; You can set password expiry periods to ensure that users regularly update their passwords. This helps mitigate the risk of compromised credentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Preventing Password Reuse:&lt;/em&gt;&lt;/strong&gt; IAM password policies can also prevent users from reusing previous passwords, further bolstering security.&lt;/p&gt;

&lt;p&gt;By configuring a robust IAM password policy, you establish a strong foundation for authentication security within your AWS account.&lt;/p&gt;
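
&lt;p&gt;If you manage your account with Terraform, such a policy can be codified so it is versioned and reviewable. Below is a minimal sketch using the &lt;code&gt;aws_iam_account_password_policy&lt;/code&gt; resource; the specific values are illustrative, not recommendations:&lt;/p&gt;

```hcl
resource "aws_iam_account_password_policy" "strict" {
  # Complexity requirements
  minimum_password_length      = 14
  require_lowercase_characters = true
  require_uppercase_characters = true
  require_numbers              = true
  require_symbols              = true

  # Password expiry and reuse prevention
  max_password_age          = 90
  password_reuse_prevention = 24

  allow_users_to_change_password = true
}
```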

&lt;h2&gt;
  
  
  Automatically Rotating IAM Access Keys: Enhancing Key Security
&lt;/h2&gt;

&lt;p&gt;In addition to strong password policies, it's essential to regularly rotate IAM access keys. Access keys are used to authenticate programmatic access to AWS services, and regularly rotating them helps mitigate the risk of unauthorized access. Here's how you can automate this process using a Lambda function:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Lambda Function:&lt;/em&gt;&lt;/strong&gt; AWS Lambda allows you to run code in response to various triggers. By creating a custom Lambda function, you can automate the rotation of IAM access keys.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Key Rotation Logic:&lt;/em&gt;&lt;/strong&gt; The Lambda function checks the age of existing access keys associated with IAM users. If a key exceeds a specified age threshold, the function generates a new access key and deactivates the old one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Scheduled Execution:&lt;/em&gt;&lt;/strong&gt; You can schedule the Lambda function to run regularly, ensuring that access keys are rotated at predefined intervals without manual intervention.&lt;/p&gt;
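
&lt;p&gt;The heart of such a Lambda function is simple date arithmetic. Here is a minimal sketch of the decision logic; the 90-day threshold and the function name are illustrative, and in a real handler you would feed it the &lt;code&gt;CreateDate&lt;/code&gt; values returned by IAM’s &lt;code&gt;list_access_keys&lt;/code&gt; call:&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

# Illustrative threshold: rotate any key older than 90 days.
MAX_KEY_AGE_DAYS = 90

def key_needs_rotation(create_date, now=None):
    """Return True when an access key created at `create_date` exceeds the age threshold."""
    now = now or datetime.now(timezone.utc)
    return (now - create_date) > timedelta(days=MAX_KEY_AGE_DAYS)

# Inside the Lambda handler you would iterate over each user's keys
# (e.g. via boto3's iam.list_access_keys), and for every key where
# key_needs_rotation(key["CreateDate"]) is True, create a replacement
# key and deactivate the old one.
```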

&lt;p&gt;By automatically rotating IAM access keys, you maintain a higher level of security in your AWS environment and reduce the risk of unauthorized access due to compromised credentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Conclusion&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;IAM password policies and automated access key rotation are essential components of AWS security. By enforcing strong password policies and regularly rotating access keys, you significantly reduce the risk of security breaches and unauthorized access in your AWS environment.&lt;/p&gt;

</description>
      <category>security</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>Upgrading an EKS Cluster: A Step-by-Step Guide</title>
      <dc:creator>Adedamola Ajibola</dc:creator>
      <pubDate>Sat, 18 May 2024 00:02:19 +0000</pubDate>
      <link>https://forem.com/damola12345/upgrading-an-eks-cluster-a-step-by-step-guide-2alk</link>
      <guid>https://forem.com/damola12345/upgrading-an-eks-cluster-a-step-by-step-guide-2alk</guid>
      <description>&lt;p&gt;Upgrading an Amazon EKS (Elastic Kubernetes Service) cluster can seem daunting, especially in a production environment. However, with a well-defined strategy and the right tools, the process can be smooth and minimally disruptive. In this post, I'll walk you through the upgrade process from version 1.27 to 1.28 using Terraform, ensuring your EKS cluster remains functional and resilient throughout.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Upgrading EKS Matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EKS clusters need to stay updated to leverage the latest features, security patches, and performance improvements. However, EKS-managed clusters can only be upgraded one minor version at a time, making a systematic approach essential.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;The Upgrade Process&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you're managing your EKS cluster configuration with Terraform, the &lt;a href="https://github.com/terraform-aws-modules/terraform-aws-eks/blob/0e3cb9a0afcea87e1da0c87ecc969ffb963619ea/main.tf#L406-L412" rel="noopener noreferrer"&gt;terraform-aws-eks&lt;/a&gt; module is specifically designed to apply upgrades in the correct order when you change the cluster version.&lt;/p&gt;

&lt;p&gt;Here's a simplified overview of what we'll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upgrading the EKS Cluster Version&lt;/li&gt;
&lt;li&gt;Updating EKS Add-ons&lt;/li&gt;
&lt;li&gt;Upgrading EKS Managed Node Groups&lt;/li&gt;
&lt;li&gt;Upgrading Other Resources (e.g., Karpenter)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 1: Upgrading the Control Plane&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The control plane is the brain of your Kubernetes cluster, managing all the operations within the cluster. To upgrade:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Update the Control Plane&lt;/em&gt;&lt;/strong&gt;: This is the first step in the upgrade process. The control plane version dictates the compatibility of the cluster's components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Initiate the Upgrade&lt;/em&gt;&lt;/strong&gt;: Use Terraform to apply the new configuration. This ensures that the control plane updates to the desired version without manual intervention.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~&amp;gt; 20.0"

  cluster_name    = local.name
  cluster_version = "1.28" # bumped from "1.27"

  cluster_endpoint_public_access = true

  cluster_addons = {
    coredns = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yrfj32hgldaaqx4cmvn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yrfj32hgldaaqx4cmvn.png" alt="eks-console" width="800" height="122"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 2: Updating EKS Add-ons&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Add-ons in EKS are additional features that enhance your cluster's capabilities, such as DNS management or monitoring tools. These add-ons are tightly coupled with the cluster's version.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Automatic Compatibility&lt;/em&gt;&lt;/strong&gt;: When the control plane is updated, EKS automatically aligns the add-on versions with the new cluster version.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Verify Add-on Versions&lt;/em&gt;&lt;/strong&gt;: Ensure that each add-on is updated and compatible with the new version of the control plane.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 3: Upgrading EKS Managed Node Groups&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Node groups are the workers in your cluster, running your applications. These groups need to be in sync with the control plane.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Node Group Update Process:&lt;/em&gt;&lt;/strong&gt; After the control plane is updated, node groups are updated to match the new version. This ensures that all nodes run the compatible Kubernetes version.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;&lt;strong&gt;Minimize Disruption:&lt;/strong&gt;&lt;/em&gt; EKS handles node group updates in a way that minimizes disruption. By default, it limits the number of unavailable nodes during the upgrade to 33%, ensuring that most of your applications remain operational.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
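
&lt;p&gt;If you want to control that rollout behaviour explicitly, the managed node group's update settings can be tuned. One way to express this with the terraform-aws-eks module is sketched below; the group name and percentage are illustrative:&lt;/p&gt;

```hcl
eks_managed_node_groups = {
  default = {
    update_config = {
      # Allow at most 33% of the group's nodes to be
      # unavailable while a version upgrade rolls out.
      max_unavailable_percentage = 33
    }
  }
}
```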

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv77bwra4dbjntlro4399.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv77bwra4dbjntlro4399.png" alt="modules" width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Ensuring Minimal Disruption&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our upgrade approach prioritizes minimal disruption, making it suitable for production environments. By systematically updating the control plane, add-ons, and node groups, you ensure that your cluster remains functional and efficient throughout the upgrade process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Upgrade Other Resources&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you are using additional tools like Karpenter, an open-source cluster autoscaler, you will need to upgrade these as well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify the compatibility of these resources with the new EKS version.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Upgrading your EKS cluster is essential for maintaining security and stability and for accessing the latest Kubernetes features. Follow the recommended procedure to transition smoothly from v1.27 to v1.28 while mitigating disruption, and test thoroughly in a staging environment before touching production. Leveraging the terraform-aws-eks module streamlines the process, ensuring efficiency and accuracy. Stay informed about Kubernetes releases and adhere to best practices to uphold the success of your AWS EKS deployment.&lt;/p&gt;

</description>
      <category>eks</category>
      <category>kubernetes</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>Solving Kubernetes CronJob Stuck on Pending with Pod Affinity</title>
      <dc:creator>Adedamola Ajibola</dc:creator>
      <pubDate>Tue, 06 Feb 2024 07:00:53 +0000</pubDate>
      <link>https://forem.com/damola12345/solving-kubernetes-cronjob-stuck-on-pending-with-pod-affinity-41eg</link>
      <guid>https://forem.com/damola12345/solving-kubernetes-cronjob-stuck-on-pending-with-pod-affinity-41eg</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhs4zkqcvzij3sfpwthvu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhs4zkqcvzij3sfpwthvu.png" alt="cronjob" width="256" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Running cron jobs on a Kubernetes cluster provides an automated way to perform periodic tasks. However, when your cluster is dynamically managed, and new nodes come into play, you might encounter an issue where your cron job pods get stuck in the Pending state. In such cases, applying pod affinity can be a powerful solution to ensure successful pod scheduling. This blog post delves into how to address this issue using pod affinity and the advantages it offers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up a CronJob in a Dynamic Kubernetes Cluster
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;CronJob Configuration:&lt;/em&gt;&lt;/strong&gt; Define a CronJob resource in your Kubernetes manifest, specifying the schedule and the container to run the job. Ensure you have your data and resources correctly configured.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Dynamic Node Scaling:&lt;/em&gt;&lt;/strong&gt; In dynamic Kubernetes clusters, tools like Karpenter automatically provision new nodes based on resource demands. This scaling mechanism is efficient but can lead to pod scheduling issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;When a new node spins up close to the time of your cron job execution, the scheduler may place the job on the newly created node. This can lead to pods getting stuck in a Pending state because the resources required by the CronJob are not immediately available.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;To address this problem, you can leverage pod affinity rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Using Pod Affinity to Ensure Successful Scheduling:&lt;/em&gt;&lt;/strong&gt; Pod affinity allows you to influence where your pods are scheduled, making it a valuable tool in managing the scheduling of your cron jobs in dynamic clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Affinity Rules:&lt;/em&gt;&lt;/strong&gt; Define pod affinity rules that indicate where the cron job pods should be scheduled. For example, you can specify that they should be scheduled on nodes with specific labels or on nodes already running pods of a particular type.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Topology Constraints:&lt;/em&gt;&lt;/strong&gt; You can also apply topology constraints, ensuring that your cron job pods are only scheduled on nodes that meet certain criteria. For instance, you can specify that they should run on nodes in a different availability zone to improve high availability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Node Selection:&lt;/em&gt;&lt;/strong&gt; Utilize node selectors and node affinity to further fine-tune the selection of nodes for your pods.&lt;/p&gt;
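
&lt;p&gt;Putting these pieces together, a CronJob with a pod affinity block looks roughly like the sketch below. The schedule, image, and the &lt;code&gt;app: web&lt;/code&gt; label are illustrative; the rule pins the job’s pods to nodes that are already running pods carrying that label:&lt;/p&gt;

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          affinity:
            podAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchLabels:
                      app: web   # schedule next to pods carrying this label
                  topologyKey: kubernetes.io/hostname
          containers:
            - name: report
              image: example/report:latest
          restartPolicy: OnFailure
```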

&lt;p&gt;&lt;strong&gt;Advantages of Pod Affinity&lt;/strong&gt;&lt;br&gt;
By applying pod affinity in your Kubernetes cluster, you gain several advantages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Predictable Scheduling:&lt;/em&gt;&lt;/strong&gt; Pod affinity ensures that your cron job pods are scheduled in a way that aligns with your requirements, even in dynamic clusters. This leads to predictable and reliable execution of your tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Resource Utilization:&lt;/em&gt;&lt;/strong&gt; It optimizes resource utilization by avoiding overloading new nodes that might not have sufficient resources for your cron jobs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Improved High Availability:&lt;/em&gt;&lt;/strong&gt; Through topology constraints and affinity rules, you can enhance fault tolerance and high availability by spreading your pods across different nodes or zones.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In dynamic Kubernetes clusters where nodes can be spun up and down automatically, scheduling cron jobs without proper consideration can lead to pod stuck-in-pending issues. Applying pod affinity rules allows you to regain control over pod placement, ensuring your cron jobs run reliably and efficiently. This approach offers improved high availability, resource utilization, and predictable scheduling in your Kubernetes environment, ultimately enhancing the overall stability and performance of your workloads.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cronjob</category>
    </item>
    <item>
      <title>Kubernetes Affinity Basics</title>
      <dc:creator>Adedamola Ajibola</dc:creator>
      <pubDate>Fri, 19 Jan 2024 07:23:59 +0000</pubDate>
      <link>https://forem.com/damola12345/kubernetes-affinity-basics-24fh</link>
      <guid>https://forem.com/damola12345/kubernetes-affinity-basics-24fh</guid>
      <description>&lt;p&gt;In Kubernetes, affinity is like a set of rules that decide where your pods go. Think of pods as small units of your app. There are two types: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Node Affinity:&lt;/em&gt;&lt;/strong&gt; Decides which nodes a pod runs on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Pod Affinity:&lt;/em&gt;&lt;/strong&gt; Decides which pods are grouped together.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These rules help you control how pods are placed based on specific conditions.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;em&gt;Node Affinity vs. Pod Affinity&lt;/em&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Node Affinity&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;RequiredDuringSchedulingIgnoredDuringExecution:&lt;/em&gt;&lt;/strong&gt; Pods must follow these rules to be scheduled on a node. If the rules are not met, the pod won't be placed on that node. But once scheduled, the pod won't be moved even if the rules are broken.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;PreferredDuringSchedulingIgnoredDuringExecution:&lt;/strong&gt;&lt;/em&gt; These rules are preferred, but not mandatory. The scheduler will try to follow them, but if it can't, the pod might still be placed on a node that breaks the rules.&lt;/p&gt;

&lt;p&gt;Here's an example of a node affinity rule that requires a pod to be scheduled on a node with a specific label. Think of node affinity like choosing where your pod should live.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0j8ou5pr840xwe8p1yg4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0j8ou5pr840xwe8p1yg4.png" alt="node-affinity" width="800" height="635"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pod Affinity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;RequiredDuringSchedulingIgnoredDuringExecution:&lt;/em&gt;&lt;/strong&gt; Similar to node affinity, but it's about pods wanting to be on the same node as other pods. Rules must be satisfied for the pods to be scheduled together on a node.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;PreferredDuringSchedulingIgnoredDuringExecution:&lt;/strong&gt;&lt;/em&gt; Similar to preferred node affinity, but at the pod level.&lt;/p&gt;

&lt;p&gt;Here's an illustration of a pod affinity guideline below&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3q0yufdbp5tkfe46q6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3q0yufdbp5tkfe46q6x.png" alt="pod-affinity" width="657" height="689"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Affinity rules in Kubernetes use labels (key-value pairs) to guide where pods should be placed in the cluster. By setting these rules, you enhance efficiency and performance, essentially telling Kubernetes where you prefer your pods to be based on the labels you've defined. This optimization helps in better resource utilization, improves performance, and ensures that related pods share the same nodes for efficient communication. For more details, check out the &lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;. It's a powerful tool for managing workload placement in Kubernetes.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>beginners</category>
      <category>aws</category>
    </item>
    <item>
      <title>Deploying a Next.js Static Site on DigitalOcean's App Platform</title>
      <dc:creator>Adedamola Ajibola</dc:creator>
      <pubDate>Wed, 29 Nov 2023 09:10:03 +0000</pubDate>
      <link>https://forem.com/damola12345/deploying-a-nextjs-static-site-on-digitaloceans-app-platform-306g</link>
      <guid>https://forem.com/damola12345/deploying-a-nextjs-static-site-on-digitaloceans-app-platform-306g</guid>
      <description>&lt;p&gt;Wondering if deploying a Next.js static site on DigitalOcean can deliver a seamless experience? 😂&lt;/p&gt;

&lt;p&gt;I was assigned the task of deploying a static site and initially, I assumed it would be as simple as deploying a React Native app. I diligently followed the steps outlined in the &lt;a href="https://docs.digitalocean.com/developer-center/deploy-a-next.js-app-to-app-platform/#deploying-nextjs-as-a-static-site" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After successfully completing the build process, I eagerly clicked on the URL, only to encounter a frustrating 404 error. It was then that I delved into the build logs to troubleshoot the underlying issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Identifying the Issue
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9b11u2n3eixg7gcinxsd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9b11u2n3eixg7gcinxsd.png" alt="nextjs" width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The root cause of the problem lay in the absence of a clear definition for the static site's output directory. As a result, the App Platform could not find the static files and fell back to its standard 404 error document.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding a Solution
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3qawdsfbm94brpauax8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3qawdsfbm94brpauax8.png" alt="settings" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To resolve this issue, head to the settings and modify the &lt;code&gt;appspec&lt;/code&gt; file by adding the line &lt;code&gt;output_dir: /out&lt;/code&gt;. Don't forget to clear the build cache before initiating a redeployment.&lt;/p&gt;
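&lt;p&gt;For reference, the relevant part of the spec looks roughly like this; the site name, repo, and build command are placeholders for your own values, and only &lt;code&gt;output_dir&lt;/code&gt; is the actual fix:&lt;/p&gt;

```yaml
# Hypothetical App Platform spec fragment.
static_sites:
  - name: my-next-site
    github:
      repo: your-org/your-repo
      branch: main
    build_command: npm run build
    output_dir: /out
```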

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bxgw1nrfxsg448u1c4e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bxgw1nrfxsg448u1c4e.png" alt="spec" width="800" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodz6e2ry05yds7kqomyq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodz6e2ry05yds7kqomyq.png" alt="appspec" width="800" height="683"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The build process concluded successfully, and the static files were exported to the designated &lt;code&gt;/out&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5uw4gnujlas1x5eat6cp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5uw4gnujlas1x5eat6cp.png" alt="logs" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing URL Path Rewrites and Redirects
&lt;/h2&gt;

&lt;p&gt;If your application involves URL path rewrites or redirects, consult the &lt;a href="https://docs.digitalocean.com/products/app-platform/how-to/url-rewrites/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; for the specific steps. You'll either need to manually configure these routes or include them in the &lt;code&gt;appspec&lt;/code&gt; file under the &lt;code&gt;ingress&lt;/code&gt; rules section.&lt;/p&gt;
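&lt;p&gt;A redirect rule in the spec might be sketched like this (the paths and status code are illustrative):&lt;/p&gt;

```yaml
# Hypothetical ingress fragment: permanently redirect /blog to /articles.
ingress:
  rules:
    - match:
        path:
          prefix: /blog
      redirect:
        uri: /articles
        redirect_code: 301
```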

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6sjxwwvw2wnymhbqhjt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6sjxwwvw2wnymhbqhjt.png" alt="ingress" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling Domain Management in the App Platform
&lt;/h2&gt;

&lt;p&gt;For guidance on effectively managing domains while using the App Platform, refer to the &lt;a href="https://docs.digitalocean.com/products/app-platform/how-to/manage-domains/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, which offers a clear and straightforward guide.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Deploying a Next.js static site on DigitalOcean's App Platform is easy once the output directory is configured correctly, and I hope this guide makes the process smooth for you. Additionally, the GitHub integration automates the build and deployment process, making it even more convenient.&lt;/p&gt;

&lt;p&gt;App platforms offer many benefits like simplified deployment, automatic scaling, managed infrastructure, security features, CI/CD support, isolation, monitoring, and more. They boost developer productivity and reduce costs. The choice of the right platform depends on your project's needs and your organization's goals.&lt;/p&gt;

</description>
      <category>appplatform</category>
      <category>digitalocean</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>Resolving "The Node Had Condition [Node Disk-Pressure]" &amp; "The Node Was Low On Resource: Ephemeral-storage" In Kubernetes</title>
      <dc:creator>Adedamola Ajibola</dc:creator>
      <pubDate>Mon, 30 Oct 2023 01:01:22 +0000</pubDate>
      <link>https://forem.com/damola12345/resolving-the-node-had-condition-node-disk-pressure-the-node-was-low-on-resource-ephemeral-storage-in-kubernetes-3ncb</link>
      <guid>https://forem.com/damola12345/resolving-the-node-had-condition-node-disk-pressure-the-node-was-low-on-resource-ephemeral-storage-in-kubernetes-3ncb</guid>
      <description>&lt;p&gt;Kubernetes is a powerful platform for managing containerized workloads, but it can be challenging to ensure that your applications are running efficiently and without issue. One common issue that you may encounter is &lt;code&gt;Node Disk-Pressure &amp;amp; Node Was Low On Resource&lt;/code&gt;, which occurs when a node in your Kubernetes cluster runs out of disk space. In this blog post, we'll explore how to identify and resolve Node Disk-Pressure issues in your  Kubernetes environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fry4weozut18vuu7pe6dm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fry4weozut18vuu7pe6dm.png" alt="Node-evi img" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identifying The Issue&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzigfl8oa91d1inli2qf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzigfl8oa91d1inli2qf.jpg" alt="DiskPressure" width="666" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3thuz1t003cqag205xy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3thuz1t003cqag205xy.jpg" alt="eviction pod" width="800" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A pod was evicted because of DiskPressure issues, and upon inspecting the specific pod, two prevalent issues were observed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions" rel="noopener noreferrer"&gt;Node conditions&lt;/a&gt;: [DiskPressure]&lt;/li&gt;
&lt;li&gt;The node was low on resources: ephemeral-storage.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To identify the source of the Node Disk-Pressure issue, there are a couple of different methods you can use. One way is to run &lt;code&gt;kubectl describe node NODE_NAME&lt;/code&gt; to inspect the node's conditions and allocated resources (note that &lt;code&gt;kubectl top&lt;/code&gt; reports CPU and memory usage, not disk). Another way is to SSH into the node and run the &lt;code&gt;df -h&lt;/code&gt; command to check the disk usage on the node and identify which directories are consuming the most disk space.&lt;/p&gt;
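&lt;p&gt;As a minimal sketch, the node-side checks look like this; the paths are typical for EKS nodes and may differ on your AMI:&lt;/p&gt;

```shell
# Overall usage on the root filesystem
df -h /
# Common culprits: logs and container image layers (paths vary by runtime)
du -sh /var/log 2>/dev/null || true
du -sh /var/lib/containerd 2>/dev/null || true
```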

&lt;p&gt;Once you've identified the processes or directories that are causing the disk usage, you can determine whether they are essential to the system or whether they can be deleted or moved to another location.  By removing unnecessary files and directories or moving them to a different disk or node, you can free up disk space and resolve the Node Disk-Pressure issue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resolving the Issue&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To resolve the Node Disk-Pressure issue, you can take several steps, such as increasing the EBS volume size, deleting unused resources, implementing resource limits, implementing pod anti-affinity, or adding more nodes to your cluster.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In my case, I resolved the issue by deleting unused resources and increasing the EBS volume size&lt;/em&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;One way to resolve the issue is to increase the EBS volume size. To do this, you can follow the steps outlined in the &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt;. Essentially, you will need to modify the volume size in the EC2 console or using the AWS CLI or Terraform, and then extend the file system on the node to utilize the additional space.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Another way to free up disk space and resolve the Node &lt;code&gt;Disk-Pressure&lt;/code&gt; issue is to delete any unused resources on your node. This can include pods, images, or volumes that are no longer required. By removing these resources, you can free up additional disk space on the node and prevent future &lt;code&gt;Disk Pressure&lt;/code&gt; issues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, if none of the above steps works, you may need to consider adding more nodes to your cluster to distribute the workload across a larger number of machines. This will reduce the likelihood of any one node experiencing &lt;code&gt;Disk-Pressure&lt;/code&gt; issues.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
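&lt;p&gt;The resolution steps above can be sketched as follows; the volume ID, device names, and size are hypothetical placeholders for your environment:&lt;/p&gt;

```shell
# 1. Grow the EBS volume (gp2/gp3 volumes can be resized online):
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100

# 2. On the node, grow the partition and the filesystem:
sudo growpart /dev/nvme0n1 1
sudo resize2fs /dev/nvme0n1p1   # or: sudo xfs_growfs -d /  for XFS

# 3. Reclaim space by deleting failed/evicted pods and pruning unused images:
kubectl delete pods --field-selector=status.phase=Failed --all-namespaces
sudo crictl rmi --prune
```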

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Node Disk-Pressure can be a challenging issue to resolve, but by following the steps outlined in this blog post, you can ensure that your Kubernetes cluster is running efficiently and that your applications can run successfully without being evicted due to Disk-Pressure issues. It's important to monitor your  Kubernetes environment and take proactive steps to prevent Disk-Pressure issues from occurring in the first place.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>aws</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Reducing Data Transfer Costs with S3 Gateway Endpoint</title>
      <dc:creator>Adedamola Ajibola</dc:creator>
      <pubDate>Mon, 22 May 2023 11:00:43 +0000</pubDate>
      <link>https://forem.com/damola12345/reducing-data-transfer-costs-with-s3-gateway-endpoint-2nmg</link>
      <guid>https://forem.com/damola12345/reducing-data-transfer-costs-with-s3-gateway-endpoint-2nmg</guid>
      <description>&lt;p&gt;In today's world, data is king. Businesses across all industries rely on the storage, retrieval, and analysis of data to make informed decisions and remain competitive. However, with the explosion of big data, cloud storage has become a necessary tool for businesses to store and analyze large volumes of data. Amazon S3 is a popular cloud storage service used by millions of businesses worldwide due to its scalability, high durability, and reliability.&lt;/p&gt;

&lt;p&gt;However, using S3 from within a VPC can incur significant data transfer costs. To access S3 from within a VPC, traffic must be routed over the internet, typically through a NAT gateway that charges per gigabyte of data processed, and those charges can add up quickly for businesses with large volumes of data.&lt;/p&gt;

&lt;p&gt;Fortunately, Amazon has a solution to this problem - the S3 Gateway Endpoint. In this blog post, we will explore what an S3 Gateway Endpoint is, how it works, and how it has helped reduce data transfer costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is an S3 Gateway Endpoint?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8r1pkilrrf2iuow1i5th.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8r1pkilrrf2iuow1i5th.jpg" alt="s3-gateway" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An S3 Gateway Endpoint is a service that allows you to access S3 from within a VPC without incurring any data transfer costs. It provides a secure and private connection between your VPC and S3 over the AWS network, bypassing the public internet. This allows you to access your S3 buckets and objects directly from your VPC as if they were hosted within your VPC.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Does An S3 Gateway Endpoint Work?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you create an S3 Gateway Endpoint, AWS creates an endpoint in your VPC that maps to the S3 service. Traffic between the S3 Gateway Endpoint and S3 is routed through the AWS network, avoiding the public internet. This provides a secure and private connection between your VPC and S3, ensuring that your data is protected from unauthorized access.&lt;/p&gt;

&lt;p&gt;To use the S3 Gateway Endpoint, you simply need to update your routing tables to direct traffic destined for S3 to the endpoint. This ensures that any traffic to S3 from within your VPC is sent over the AWS network to the S3 Gateway Endpoint and then on to the S3 service.&lt;/p&gt;

&lt;p&gt;Note that gateway endpoints only work when the VPC and the S3 buckets are in the same AWS region.&lt;/p&gt;
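&lt;p&gt;Creating the endpoint is a single CLI call; the region, VPC ID, and route table ID below are hypothetical placeholders:&lt;/p&gt;

```shell
# Create a gateway endpoint for S3 and attach it to the VPC's route table.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0
```

&lt;p&gt;Once the route table entry exists, traffic to S3 from the associated subnets flows over the AWS network automatically; no application changes are needed.&lt;/p&gt;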

&lt;p&gt;For a visual demonstration and detailed setup instructions, I recommend watching this video &lt;a href="https://www.youtube.com/watch?v=i7aIsvch1y8" rel="noopener noreferrer"&gt;tutorial&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How S3 Gateway Endpoint Helped Reduce Data Transfer Costs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Using S3 extensively to store and analyze data can incur significant data transfer costs. Routing traffic over the internet and back to access S3 buckets can add up to high data transfer costs. However, implementing an S3 Gateway Endpoint can significantly reduce data transfer costs.&lt;/p&gt;

&lt;p&gt;By implementing an S3 Gateway Endpoint, we were able to access S3 buckets and objects directly from within a VPC without incurring any data transfer costs. This allowed us to store and access data more efficiently, reducing overall data transfer costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, Amazon's S3 Gateway Endpoint is an excellent solution to the problem of data transfer costs when accessing S3 from within a VPC. It provides a secure and private connection between your VPC and S3, ensuring that your data is protected from unauthorized access, while also allowing you to access your S3 buckets and objects directly from within your VPC, without incurring any data transfer costs.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Software Documentation Part2</title>
      <dc:creator>Adedamola Ajibola</dc:creator>
      <pubDate>Mon, 17 Oct 2022 05:24:07 +0000</pubDate>
      <link>https://forem.com/damola12345/software-documentation-part2-24o</link>
      <guid>https://forem.com/damola12345/software-documentation-part2-24o</guid>
      <description>&lt;p&gt;This post will discuss the importance, limitations, and different types of software documentation created for different audiences.&lt;/p&gt;

&lt;h2&gt;
  
  
  Software Documentation Limitation
&lt;/h2&gt;

&lt;p&gt;The majority of process documents are tailored to a specific moment or phase of the process. As a result, these documents quickly become out of date and obsolete. However, they should be kept after development ends, because they may be useful in the future for similar tasks or maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Documentation IS Essential
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;It allows new users to quickly learn how to use the software, simplifies the product, and reduces support costs.&lt;/li&gt;
&lt;li&gt;It aids in knowledge transfer to other developers who may wish to modify or maintain the software.&lt;/li&gt;
&lt;li&gt;It assists in keeping track of all aspects of an application and improves software product quality.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here are some examples of software documentation types created for various audiences:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;End User Documentation&lt;/strong&gt;: This refers to documentation written specifically for end users. It should explain in as few words as possible how the software can help users solve their problems. Many large customer-based products replace some parts of user documentation, such as tutorials, with onboarding training. Furthermore, online delivery of user documentation is becoming increasingly popular, so technical writers must be more creative when creating user documentation for the web. The following sections should be included in online end-user documentation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Video demonstrations &lt;/li&gt;
&lt;li&gt;In-built assistance &lt;/li&gt;
&lt;li&gt;Help Portals&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical documentation&lt;/strong&gt;: Entails the documentation of software source codes, algorithms, APIs, and so on. It is typically written for a technical audience, such as software developers, technicians, and maintenance engineers. Technical documentation in software engineering is an umbrella term for all written documents and materials dealing with the development of software products. All software development products, whether developed by a small team or a large corporation, necessitate some level of documentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture Documentation&lt;/strong&gt;: Specifies the high-level architecture of the software system under development. It may describe the system's main components, their roles and functions, and the data and control flow between those components. Software architecture design documents capture the main architectural decisions; we do not recommend listing everything, but rather focusing on the most important and difficult ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Requirements Documentation&lt;/strong&gt;: Usually created at the start of a software development project. The goal is to clearly and precisely specify the expectations for the software being developed. During the analysis phase of the SDLC, the requirements for software development are created and documented. The requirements are a description of the functionality of a software application that is used throughout the software development process to explain how the software is supposed to work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Documentation is an important part of the software development process because it makes it easier for users to use new software and also helps transfer knowledge to another developer who may want to modify the software.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
