<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Isreal Urephu</title>
    <description>The latest articles on Forem by Isreal Urephu (@isreal_urephu).</description>
    <link>https://forem.com/isreal_urephu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2450494%2Fa9470e03-ef5c-45d4-bb89-4824ff9b8e6d.png</url>
      <title>Forem: Isreal Urephu</title>
      <link>https://forem.com/isreal_urephu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/isreal_urephu"/>
    <language>en</language>
    <item>
      <title>Why Kubernetes Ingress Is Broken (and How Gateway API Fixes It)</title>
      <dc:creator>Isreal Urephu</dc:creator>
      <pubDate>Thu, 28 Aug 2025 17:53:10 +0000</pubDate>
      <link>https://forem.com/isreal_urephu/why-kubernetes-ingress-is-broken-and-how-gateway-api-fixes-it-488i</link>
      <guid>https://forem.com/isreal_urephu/why-kubernetes-ingress-is-broken-and-how-gateway-api-fixes-it-488i</guid>
      <description>&lt;p&gt;The 𝗞𝟴𝘀 𝗜𝗻𝗴𝗿𝗲𝘀𝘀 has been one of (if not the) major ways of exposing applications running in a Kubernetes cluster to external traffic. Other options include 𝗡𝗼𝗱𝗲𝗣𝗼𝗿𝘁, 𝗟𝗼𝗮𝗱𝗕𝗮𝗹𝗮𝗻𝗰𝗲𝗿, or using a 𝘀𝗲𝗿𝘃𝗶𝗰𝗲 𝗺𝗲𝘀𝗵. But Ingress still comes out on top and remains one of the most widely adopted ways to expose applications.&lt;/p&gt;

&lt;p&gt;That said, Ingress has plenty of limitations that make it tricky to manage. Let’s break it down:&lt;/p&gt;

&lt;p&gt;In a typical Ingress resource, the &lt;strong&gt;spec&lt;/strong&gt; section is where you configure the host, the route to the backend service, and the service itself. That’s pretty much it. But to make an Ingress &lt;em&gt;production-ready&lt;/em&gt;, we usually need more: &lt;strong&gt;traffic splitting&lt;/strong&gt;, &lt;strong&gt;TLS client authentication&lt;/strong&gt;, &lt;strong&gt;rate limiting&lt;/strong&gt;, &lt;strong&gt;CORS&lt;/strong&gt;, &lt;strong&gt;header manipulation&lt;/strong&gt;, and so on. None of these can be expressed in the spec directly.&lt;/p&gt;

&lt;p&gt;The workaround? &lt;strong&gt;Annotations&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The problem? Annotations are specific to the &lt;strong&gt;implementation&lt;/strong&gt; (NGINX, Traefik, Kong, etc.). This becomes a mess if you ever migrate to a different Ingress controller, because you essentially have to rewrite everything.&lt;/p&gt;
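
&lt;p&gt;As a concrete sketch, here is what the annotation workaround typically looks like with ingress-nginx (the hostnames and service names are illustrative):&lt;/p&gt;

```yaml
# Ingress relying on controller-specific annotations (ingress-nginx shown).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app
  annotations:
    # These annotation keys are only understood by ingress-nginx;
    # Traefik or Kong need a completely different set for the same behavior.
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/enable-cors: "true"
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-app
            port:
              number: 80
```

&lt;p&gt;Everything outside the &lt;strong&gt;spec&lt;/strong&gt; block is controller-specific, which is exactly the portability problem.&lt;/p&gt;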

&lt;p&gt;This is where the &lt;strong&gt;Gateway API&lt;/strong&gt; comes in.&lt;/p&gt;

&lt;p&gt;With Gateway API, the core features are &lt;strong&gt;standardized&lt;/strong&gt;: routing, traffic splitting, and header manipulation live in the resource spec itself, with well-defined extension points (policy attachment) for implementation-specific features such as rate limiting. And the best part: the same resources work across implementations, so a setup written against &lt;strong&gt;NGINX Gateway Fabric&lt;/strong&gt; would largely carry over to &lt;strong&gt;Traefik&lt;/strong&gt; without rewriting annotations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core Components of Gateway API:&lt;/strong&gt;&lt;br&gt;
1️⃣ &lt;strong&gt;GatewayClass&lt;/strong&gt; – Points to the implementation (e.g., Istio, NGINX Gateway Fabric, HAProxy Ingress).&lt;br&gt;
2️⃣ &lt;strong&gt;Gateway&lt;/strong&gt; – The endpoint that accepts and processes traffic (filtering, routing, load balancing). This could be a cloud load balancer, a proxy, or a server with a load balancer installed.&lt;br&gt;
3️⃣ &lt;strong&gt;Routes&lt;/strong&gt; – Define how traffic flows from the Gateway to backend Services, which then forward it to the pods.&lt;/p&gt;
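
&lt;p&gt;A minimal sketch of how these components wire together (resource names, the hostname, and the &lt;strong&gt;nginx&lt;/strong&gt; GatewayClass name are illustrative; the class itself is installed by the implementation):&lt;/p&gt;

```yaml
# Gateway: the traffic-processing endpoint, bound to an implementation
# via its GatewayClass.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: demo-gateway
spec:
  gatewayClassName: nginx      # provided by e.g. NGINX Gateway Fabric
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# HTTPRoute: how traffic flows from the Gateway to backend Services.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: demo-route
spec:
  parentRefs:
  - name: demo-gateway
  hostnames:
  - "demo.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: demo-app
      port: 80
```

&lt;p&gt;Note that routing lives in the spec of a standard resource, not in controller-specific annotations.&lt;/p&gt;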

&lt;p&gt;This is just a high-level overview. In my upcoming posts, I’ll share how to migrate from Ingress to Gateway API.&lt;br&gt;
👉 Consider a repost if you found this useful.&lt;/p&gt;

&lt;p&gt;Useful links:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://gateway-api.sigs.k8s.io/" rel="noopener noreferrer"&gt;Kubernetes Gateway API&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://gateway-api.sigs.k8s.io/implementations/#nginx-gateway-fabric" rel="noopener noreferrer"&gt;NGINX Gateway Fabric&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>ingress</category>
      <category>docker</category>
      <category>cloud</category>
    </item>
    <item>
      <title>💙💚 What Does Blue-Green Deployment Look Like to You?</title>
      <dc:creator>Isreal Urephu</dc:creator>
      <pubDate>Tue, 19 Aug 2025 15:32:18 +0000</pubDate>
      <link>https://forem.com/isreal_urephu/--25gl</link>
      <guid>https://forem.com/isreal_urephu/--25gl</guid>
      <description>&lt;p&gt;Blue-green deployment is a deployment strategy where we have two identical environments running one hosting the current (blue) version of our application, and the other running the new (green) version.&lt;/p&gt;

&lt;p&gt;When it’s time to release, we simply switch traffic from blue → green. If something goes wrong, we can roll back instantly by sending traffic back to blue.&lt;/p&gt;

&lt;p&gt;But here’s the big question:&lt;br&gt;
&lt;strong&gt;How do we architect this in the real world when we need high availability and fault tolerance?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;🛠️ &lt;strong&gt;Let’s start with a few common approaches:&lt;/strong&gt;&lt;br&gt;
1️⃣ &lt;strong&gt;Two deployments in the same cluster&lt;/strong&gt;&lt;br&gt;
 • Each with its own Kubernetes Service&lt;br&gt;
 • Your Ingress routes traffic to either blue or green, depending on which one should serve customers.&lt;/p&gt;
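
&lt;p&gt;For approach 1, the cutover can be as small as editing one field. A minimal sketch, assuming two Services named &lt;strong&gt;demo-app-blue&lt;/strong&gt; and &lt;strong&gt;demo-app-green&lt;/strong&gt; already exist:&lt;/p&gt;

```yaml
# One Ingress fronting both environments; switching the backend service
# name flips all traffic from blue to green (and back, for rollback).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-app-blue   # change to demo-app-green to cut over
            port:
              number: 80
```

&lt;p&gt;Rollback is just reverting that one line, which is what makes this strategy attractive.&lt;/p&gt;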

&lt;p&gt;2️⃣ &lt;strong&gt;Blue and green separated by namespaces&lt;/strong&gt;&lt;br&gt;
 • Deploy both environments into different namespaces&lt;br&gt;
 • Direct traffic to the target namespace using routing rules or a service mesh (e.g., Istio).&lt;/p&gt;

&lt;p&gt;3️⃣ &lt;strong&gt;Separate clusters in the same region&lt;/strong&gt;&lt;br&gt;
 • Blue and green run on completely different clusters for stronger isolation.&lt;/p&gt;

&lt;p&gt;4️⃣ &lt;strong&gt;Separate clusters in different regions&lt;/strong&gt; 🌍&lt;br&gt;
 • Adds disaster recovery capability on top of deployment isolation.&lt;/p&gt;

&lt;p&gt;⚖️ &lt;strong&gt;Which one should you choose?&lt;/strong&gt;&lt;br&gt;
It depends on:&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Traffic volume&lt;/strong&gt; 🚦&lt;br&gt;
 • &lt;strong&gt;Release frequency&lt;/strong&gt; ⏱️&lt;br&gt;
 • &lt;strong&gt;Budget&lt;/strong&gt; 💰&lt;br&gt;
 • &lt;strong&gt;Tolerance for downtime&lt;/strong&gt; 🛑&lt;br&gt;
 • &lt;strong&gt;Need for disaster recovery&lt;/strong&gt; 🌎&lt;/p&gt;

&lt;p&gt;I’m curious how you’re implementing blue-green deployments in your environment. Feel free to share!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgz63t6uxhe598425e8nd.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgz63t6uxhe598425e8nd.gif" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Why many modern web apps run on Event-Driven Architecture (EDA) 🚀</title>
      <dc:creator>Isreal Urephu</dc:creator>
      <pubDate>Sun, 17 Aug 2025 18:07:46 +0000</pubDate>
      <link>https://forem.com/isreal_urephu/--2gi2</link>
      <guid>https://forem.com/isreal_urephu/--2gi2</guid>
      <description>&lt;p&gt;𝗪𝗵𝘆 𝗺𝗮𝗻𝘆 𝗺𝗼𝗱𝗲𝗿𝗻 𝘄𝗲𝗯 𝗮𝗽𝗽𝘀 𝗿𝘂𝗻 𝗼𝗻 𝗘𝘃𝗲𝗻𝘁-𝗗𝗿𝗶𝘃𝗲𝗻 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 (𝗘𝗗𝗔) 🚀&lt;/p&gt;

&lt;p&gt;Think about Amazon or any big e-commerce site. Behind the scenes, dozens of different services (inventory, payments, notifications, billing) all need to work together smoothly.&lt;/p&gt;

&lt;p&gt;In a traditional setup, this communication happens directly between services. For example:&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Service A&lt;/strong&gt; calls &lt;strong&gt;Service B&lt;/strong&gt; (like a client talking to a server).&lt;br&gt;
 • Service B processes the request and sends a response back.&lt;br&gt;
 • This is &lt;em&gt;synchronous&lt;/em&gt;: Service A waits until Service B finishes.&lt;/p&gt;

&lt;p&gt;The problem? If Service B is slow or goes down, Service A suffers too. This creates &lt;strong&gt;tight coupling&lt;/strong&gt; between services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Event-Driven Architecture&lt;/strong&gt; 🎯&lt;/p&gt;

&lt;p&gt;Instead of talking directly, services use a broker (think of it as a post office for events).&lt;/p&gt;

&lt;p&gt;Here’s how it works:&lt;/p&gt;

&lt;p&gt;1. Service A becomes a &lt;strong&gt;producer&lt;/strong&gt;: it sends an event to the broker.&lt;br&gt;
 2. The broker holds the event until someone is ready to process it.&lt;br&gt;
 3. Service B (the &lt;strong&gt;consumer&lt;/strong&gt;) subscribes to the broker and processes events when it’s ready.&lt;/p&gt;
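
&lt;p&gt;A minimal sketch of those three steps, with Python’s standard-library queue standing in for the broker (a real system would use RabbitMQ, Kafka, SQS, etc.; the event fields are illustrative):&lt;/p&gt;

```python
import queue

# In-memory queue standing in for the broker; it holds events (step 2).
broker = queue.Queue()

def producer(event):
    """Service A (step 1): publish the event and return immediately."""
    broker.put(event)

def consumer():
    """Service B (step 3): drain whatever events are waiting, when ready."""
    processed = []
    while not broker.empty():
        processed.append(broker.get())
    return processed

# Service A keeps working even though nothing has been consumed yet.
producer({"order_id": 1, "status": "placed"})
producer({"order_id": 2, "status": "placed"})

# Later, Service B drains the backlog in order.
print(consumer())
```

&lt;p&gt;The key property: the producer never blocks on the consumer, and the events survive in the broker until the consumer is ready.&lt;/p&gt;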

&lt;p&gt;The benefits?&lt;br&gt;
✅ Services are &lt;strong&gt;decoupled&lt;/strong&gt;: they don’t depend on each other’s availability.&lt;br&gt;
✅ Events can be processed &lt;strong&gt;asynchronously&lt;/strong&gt; and even in parallel.&lt;br&gt;
✅ If one service goes down, the broker holds its events until it comes back.&lt;/p&gt;

&lt;p&gt;This approach is why you can order something online, get an instant confirmation, and still receive updates later even if some backend services were temporarily offline.&lt;/p&gt;

&lt;p&gt;Event-Driven Architecture = &lt;strong&gt;scalable&lt;/strong&gt;, &lt;strong&gt;resilient&lt;/strong&gt;, and &lt;strong&gt;ready&lt;/strong&gt; for the unpredictable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxuqdq3vo43vk58nlegis.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxuqdq3vo43vk58nlegis.gif" alt=" " width="1021" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>🔒 Storing Kubernetes Secrets Securely with External Secrets Operator</title>
      <dc:creator>Isreal Urephu</dc:creator>
      <pubDate>Thu, 07 Aug 2025 14:00:15 +0000</pubDate>
      <link>https://forem.com/isreal_urephu/secret-manager-1khf</link>
      <guid>https://forem.com/isreal_urephu/secret-manager-1khf</guid>
      <description>&lt;p&gt;🔒𝗦𝘁𝗼𝗿𝗶𝗻𝗴 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗦𝗲𝗰𝗿𝗲𝘁𝘀 𝗦𝗲𝗰𝘂𝗿𝗲𝗹𝘆 𝘄𝗶𝘁𝗵 𝗘𝘅𝘁𝗲𝗿𝗻𝗮𝗹 𝗦𝗲𝗰𝗿𝗲𝘁𝘀 𝗢𝗽𝗲𝗿𝗮𝘁𝗼𝗿&lt;/p&gt;

&lt;p&gt;With the growing adoption of GitOps, where a single source of truth is maintained for everything, all application manifests (Deployments, ConfigMaps, Ingresses, etc.) are stored in a version control system.&lt;/p&gt;

&lt;p&gt;However, &lt;strong&gt;sensitive information&lt;/strong&gt; such as credentials stored in Kubernetes Secrets cannot be committed to version control in plain text. Even Base64 encoding isn’t safe, since it can be decoded trivially (it’s encoding, not encryption).&lt;/p&gt;

&lt;p&gt;Here are a few ways to store Kubernetes secrets in a Git repository without compromising them:&lt;/p&gt;

&lt;p&gt;1️⃣ Using External Secrets Operator backed by a secret manager such as AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault.&lt;br&gt;
2️⃣ Using a Sealed Secrets controller running in your Kubernetes cluster.&lt;br&gt;
3️⃣ Using SOPS, which encrypts secrets using AWS KMS, GCP KMS, and other key management services.&lt;/p&gt;

&lt;p&gt;In this post, I’ll focus on the &lt;strong&gt;External Secrets Operator (ESO)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;🔑 &lt;strong&gt;External Secrets Operator&lt;/strong&gt;&lt;br&gt;
Instead of storing your Kubernetes Secrets directly in Git, you store them in a secure backend (AWS Secrets Manager, Parameter Store, HashiCorp Vault, Azure Key Vault, etc.) and let the External Secrets Operator sync them into your cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works&lt;/strong&gt;&lt;br&gt;
1️⃣ Store your secret in AWS Secrets Manager, Parameter Store, Vault, or another supported backend.&lt;br&gt;
2️⃣ Create a &lt;strong&gt;SecretStore&lt;/strong&gt; or &lt;strong&gt;ClusterSecretStore&lt;/strong&gt;. This Custom Resource Definition (CRD) specifies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The backend type (AWS, Vault, Azure, etc.)&lt;/li&gt;
&lt;li&gt;The authentication method used to access the backend.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3️⃣ Create an &lt;strong&gt;ExternalSecret&lt;/strong&gt;. This CRD is stored in Git and references:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;SecretStore&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;The name of the secret in the backend&lt;/li&gt;
&lt;li&gt;The name of the Kubernetes Secret to be created&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;4️⃣ The External Secrets Operator reads the &lt;strong&gt;ExternalSecret&lt;/strong&gt;, authenticates to the backend using the credentials defined in the &lt;strong&gt;SecretStore&lt;/strong&gt;, fetches the secret, and creates the Kubernetes Secret in your cluster.&lt;br&gt;
5️⃣ &lt;strong&gt;ESO continuously syncs&lt;/strong&gt; secrets from the backend to Kubernetes, keeping them up to date according to the sync configuration.&lt;/p&gt;
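
&lt;p&gt;A minimal sketch of the two CRDs described above, using AWS Secrets Manager as the backend (the region, resource names, and secret key are illustrative, and the store’s auth block is omitted, assuming the controller’s own credentials, e.g. IRSA):&lt;/p&gt;

```yaml
# SecretStore: which backend to use and how to reach it.
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-backend
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-west-1
---
# ExternalSecret: safe to commit to Git, since it only *references*
# the secret in the backend.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h          # step 5: periodic re-sync from the backend
  secretStoreRef:
    name: aws-backend
    kind: SecretStore
  target:
    name: db-credentials       # Kubernetes Secret the operator creates
  data:
  - secretKey: password
    remoteRef:
      key: prod/db/password    # name of the secret in the backend
```

&lt;p&gt;Only the reference lives in Git; the actual credential never leaves the backend except as a Secret created inside the cluster.&lt;/p&gt;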

&lt;p&gt;How are you managing Kubernetes secrets in your GitOps workflow?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmcfpqtjz28t3m7anjhpo.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmcfpqtjz28t3m7anjhpo.gif" alt=" " width="760" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>kubernetes</category>
      <category>cloudnative</category>
    </item>
  </channel>
</rss>
