<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Devtron</title>
    <description>The latest articles on Forem by Devtron (@devtron_inc).</description>
    <link>https://forem.com/devtron_inc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F877932%2F8d58dacb-ad92-47b8-b003-8c35392e0cd9.jpg</url>
      <title>Forem: Devtron</title>
      <link>https://forem.com/devtron_inc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/devtron_inc"/>
    <language>en</language>
    <item>
      <title>How to Backup and Restore Kubernetes clusters using Velero</title>
      <dc:creator>Devtron</dc:creator>
      <pubDate>Mon, 14 Oct 2024 11:02:00 +0000</pubDate>
      <link>https://forem.com/devtron_inc/how-to-backup-and-restore-kubernetes-clusters-using-velero-4f6o</link>
      <guid>https://forem.com/devtron_inc/how-to-backup-and-restore-kubernetes-clusters-using-velero-4f6o</guid>
      <description>&lt;p&gt;Ensuring the security and recovery of applications and data is important in the Kubernetes world. A powerful tool that can help achieve this is Velero, a versatile backup and recovery solution designed specifically for Kubernetes clusters. In this guide, we'll cover the process of securing your Kubernetes Cluster with Velero, providing you with peace of mind and protection against unforeseen events.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Understanding Velero&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Velero, formerly known as Heptio Ark, is an open-source tool that simplifies backup, recovery, and migration of Kubernetes cluster resources and persistent volumes. It allows users to create scheduled or on-demand backups of their resources and applications, ensuring data integrity and recovery in the event of failure, corruption, or accidental deletion.&lt;/p&gt;

&lt;p&gt;All Velero operations (on-demand backup, scheduled backup, restore) are custom resources, defined using Kubernetes Custom Resource Definitions (CRDs) and stored in etcd. Velero also includes controllers that process these custom resources to perform backups, restores, and all related tasks.&lt;/p&gt;

&lt;p&gt;Velero is ideal for implementing disaster recovery and taking snapshots of the application state before performing cluster operations such as upgrades.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why Backup Your Kubernetes Cluster?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;It is important to implement a backup strategy to reduce the risks associated with data loss or corruption. By using Velero to regularly back up your Kubernetes application stack, you can benefit from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improved Testability:&lt;/strong&gt; Regularly scheduled backups with Velero allow you to create isolated environments for testing purposes. You can restore a specific point-in-time backup of your Application stack to test new features, configurations, or disaster recovery procedures without impacting your production environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduced Downtime:&lt;/strong&gt; In the event of an accidental deletion, configuration drift, or even a security breach, restoring from a recent Velero backup can significantly reduce downtime compared to rebuilding your K8s Application stack from scratch. This translates to faster recovery times and minimized disruptions for your users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Granular Backups and Restores:&lt;/strong&gt; Velero offers flexibility when it comes to backups. You can choose to back up the entire Application stack or specific components like the Devtron CRDs (Custom Resource Definitions) or specific namespaces and their configuration data. This allows for granular restores, where you can recover only the affected part of the stack instead of the entire system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compliance and Auditability:&lt;/strong&gt; For organizations with strict compliance requirements, Velero backups provide a verifiable audit trail. You can track backup versions, timestamps, and success/failure logs, demonstrating adherence to data retention policies and regulations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Disaster Recovery Across Environments:&lt;/strong&gt; Velero supports backups to various cloud providers and on-premises storage solutions. This enables you to restore your K8s Cluster to a completely different environment in case of a disaster that renders your primary cluster unusable. This provides an additional layer of protection and ensures business continuity.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Workflow of Cluster Backup using Velero
&lt;/h3&gt;

&lt;p&gt;Velero is installed on the cluster with the given configuration. When backing up, Velero creates .tar files of the backup and pushes them to the storage provider. When restoring, Velero looks for the .tar files on the given storage provider, pulls them into the target cluster, and applies them to the cluster, as you can see in [Fig 1].&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faa7dhsrnjjq8j37pmibv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faa7dhsrnjjq8j37pmibv.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Prerequisites for Cluster Backup Using Velero&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let’s take the example of a cluster where Devtron is running. We would like to take a backup of the cluster and restore it in case of a disaster. For this demo, we will use AWS S3 to store the backup and then restore it in the target cluster from the backup pushed to S3.&lt;/p&gt;

&lt;p&gt;Before getting our hands dirty, we need to install the CLI first.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Linux:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install the Velero CLI client from the official GitHub releases page of Velero, i.e., &lt;a href="https://github.com/vmware-tanzu/velero/releases/tag/v1.13.2" rel="noopener noreferrer"&gt;https://github.com/vmware-tanzu/velero&lt;/a&gt;, on the machine from which you have access to the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;After downloading, extract the tar file and move the &lt;code&gt;velero&lt;/code&gt; binary to a directory on your PATH.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv /Users/demo-user/velero-v1.14.0-linux-amd64/velero  /usr/local/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
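&lt;p&gt;End to end, the Linux install can be sketched as below. The version and release URL pattern are assumptions based on the Velero GitHub releases page, so substitute the release you actually downloaded.&lt;/p&gt;

```shell
# Sketch: download, extract, and install the Velero CLI on Linux.
# VELERO_VERSION is an assumption; pick the release you need.
VELERO_VERSION="v1.14.0"
curl -fsSL -o velero.tar.gz \
  "https://github.com/vmware-tanzu/velero/releases/download/${VELERO_VERSION}/velero-${VELERO_VERSION}-linux-amd64.tar.gz"
tar -xzf velero.tar.gz
sudo mv "velero-${VELERO_VERSION}-linux-amd64/velero" /usr/local/bin/
velero version --client-only   # verify the CLI is on your PATH
```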



&lt;p&gt;&lt;strong&gt;For macOS:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can use homebrew - &lt;code&gt;brew install velero&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure Velero for Cluster Backup &amp;amp; Restore
&lt;/h2&gt;

&lt;p&gt;Let's configure Velero, which we will use to take the backup of the Kubernetes cluster; the same CLI will be used to restore the backup in the target cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-1:&lt;/strong&gt; Create a directory named &lt;code&gt;velero&lt;/code&gt; and navigate into it with the commands below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir velero
cd ./velero
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a file named &lt;code&gt;velero-creds&lt;/code&gt; and add the following&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[default]
aws_access_key_id = &amp;lt;s3_storage_access_key_id&amp;gt;
aws_secret_access_key = &amp;lt;s3_storage_secret_access_key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
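&lt;p&gt;If you prefer not to paste secrets by hand, the file can also be generated from environment variables. A minimal sketch, assuming &lt;code&gt;AWS_KEY_ID&lt;/code&gt; and &lt;code&gt;AWS_SECRET&lt;/code&gt; are placeholder names you have exported yourself:&lt;/p&gt;

```shell
# Sketch: write velero-creds from environment variables.
# AWS_KEY_ID and AWS_SECRET are placeholder names, not real AWS CLI variables.
printf '[default]\naws_access_key_id = %s\naws_secret_access_key = %s\n' \
  "$AWS_KEY_ID" "$AWS_SECRET" > velero-creds
chmod 600 velero-creds   # keep the credentials file readable only by you
```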



&lt;p&gt;&lt;strong&gt;Step-2:&lt;/strong&gt; Install Velero into the cluster with the following configuration&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.10.0 \
--bucket k8s-backup \
--backup-location-config region=ap-southeast-1 \
--snapshot-location-config region=ap-southeast-1 \
--secret-file ./velero-creds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb65k93mpy1180rh077be.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb65k93mpy1180rh077be.png" alt="Image description" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can check the installation by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get all -n velero
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwtosdtna4tfuidz8it6s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwtosdtna4tfuidz8it6s.png" alt="Image description" width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Velero has now been installed and configured successfully on the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cluster Backup &amp;amp; Restore
&lt;/h2&gt;

&lt;p&gt;Now that Velero is configured, let's take a backup of the cluster and restore it in the target cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Backup
&lt;/h3&gt;

&lt;p&gt;With Velero configured, we only need to run a single command to take a backup of our Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;velero backup create k8s-backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will create a backup of the Kubernetes cluster and store it in the configured storage location under the given name, i.e., &lt;code&gt;k8s-backup&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can also take backups of specific namespaces using the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;velero backup create &amp;lt;backup-name&amp;gt; --include-namespaces &amp;lt;namespace&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the backup is complete, you can see the .tar files of your backup in your S3 bucket. Inspect the backup by running the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Velero backup describe &amp;lt;backup name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnewhhuwgbx765ftf798.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnewhhuwgbx765ftf798.png" alt="Image description" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Restore
&lt;/h3&gt;

&lt;p&gt;Now the backup is ready, and it can be restored by going through the same process of installing and configuring Velero on the target cluster, then running the following command.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: If you restore the backup on the same cluster where Velero is already configured, there is no need to configure it again.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;velero restore create –from-backup devtroncd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
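&lt;p&gt;Restores don't have to land in the original namespace. A hedged sketch using the &lt;code&gt;--namespace-mappings&lt;/code&gt; flag; the backup and namespace names are assumptions for this example:&lt;/p&gt;

```shell
# Sketch: restore the backup into a different namespace.
# Requires a cluster where Velero is configured with the same bucket.
MAPPING="devtroncd:devtroncd-restored"   # source-ns:target-ns
velero restore create --from-backup k8s-backup \
  --namespace-mappings "$MAPPING"
velero restore get    # list restores and check their status afterwards
```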



&lt;p&gt;When PVCs are included in the backup via CSI volume snapshots, you can control how long Velero waits for snapshot creation with the &lt;code&gt;--csi-snapshot-timeout&lt;/code&gt; flag&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;velero backup create nginx-backup --include-namespacesnginx-example --csi-snapshot-timeout 20m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this way, you can recover the K8s cluster using Velero. For more operations, you can go to the &lt;a href="https://velero.io/docs/v1.8/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; or use the command: &lt;code&gt;velero --help&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you have any questions, feel free to join our vibrant &lt;a href="https://discord.devtron.ai/" rel="noopener noreferrer"&gt;Discord Community&lt;/a&gt; and share them there.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>backup</category>
      <category>velero</category>
      <category>restore</category>
    </item>
    <item>
      <title>12 Tools that will make Kubernetes management easier in 2024</title>
      <dc:creator>Devtron</dc:creator>
      <pubDate>Thu, 10 Oct 2024 12:01:00 +0000</pubDate>
      <link>https://forem.com/devtron_inc/12-tools-that-will-make-kubernetes-management-easier-in-2024-427m</link>
      <guid>https://forem.com/devtron_inc/12-tools-that-will-make-kubernetes-management-easier-in-2024-427m</guid>
      <description>&lt;p&gt;Kubernetes, the revolutionary container orchestration platform, has empowered developers to build, deploy, and scale applications with unparalleled flexibility and efficiency for a decade. But let's be honest, Kubernetes' sheer scale and complexity can sometimes feel overwhelming, even for seasoned DevOps engineers. You're still battling tangled deployments, resource conflicts, and security concerns frequently. Developers may face distractions from core application development due to manual operational tasks and the steep learning curve.  &lt;/p&gt;

&lt;p&gt;But fear not, intrepid cloud-native warriors! This blog post is your guide to navigating the intricate world of Kubernetes with ease. We'll explore 12 essential tools that streamline workflows, boost efficiency, and make managing your Kubernetes clusters easier in 2024.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But Wait! Why Simplify Kubernetes Management?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's face it, modern applications are demanding. You're juggling containerized workloads, scaling for peak usage, and ensuring everything runs smoothly. This is where efficient management tools come in, offering these crucial benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduced Complexity:&lt;/strong&gt; No more wrestling with Kubectl commands or endless configuration files. These tools offer intuitive interfaces and automated processes that make your life easier.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Increased Efficiency:&lt;/strong&gt; Let's be real, you're a busy person. These tools automate repetitive tasks and streamline workflows, freeing you up to focus on problem-solving &amp;amp; innovation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Security:&lt;/strong&gt; Security is non-negotiable. Dedicated security tools proactively scan for vulnerabilities, enforce policies, and help you keep your applications and infrastructure safe from harm.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improved Cost Optimization:&lt;/strong&gt; You want to make the most of your resources, and cost analyzers and autoscalers help you optimize resource utilization, preventing over-provisioning and saving you money.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Now that we've recognized the significance, let's delve into 12 essential tools that streamline Kubernetes management.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. KEDA
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://keda.sh/" rel="noopener noreferrer"&gt;Keda&lt;/a&gt; (Kubernetes Event-Driven Autoscaling) is an event-driven autoscale for Kubernetes workloads. Simply defined, it can scale an application based on the number of events needing to be handled.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing KEDA
&lt;/h3&gt;

&lt;p&gt;KEDA has well-documented installation steps in its documentation. You can install it with &lt;a href="https://keda.sh/docs/2.15/deploy/" rel="noopener noreferrer"&gt;Helm, Operator Hub, or YAML declarations&lt;/a&gt;. In this blog, let's go with Helm.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Add Helm Repo&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm repo add kedacore https://kedacore.github.io/charts


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Update Helm Repo&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm repo update


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Install KEDA in &lt;code&gt;keda&lt;/code&gt; Namespace&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm install keda kedacore/keda --namespace keda --create-namespace


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;p&gt;KEDA is the classic choice for autoscaling on event-driven metrics. When you want to autoscale your application beyond resource metrics like CPU/memory, you can use KEDA. It listens to specific events such as messages from message queues, HTTP requests, custom Prometheus metrics, Kafka lag, etc. &lt;a href="https://devtron.ai/blog/introduction-to-kubernetes-event-driven-autoscaling-keda/" rel="noopener noreferrer"&gt;To deep dive into KEDA, check out this blog&lt;/a&gt;.&lt;/p&gt;
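&lt;p&gt;As a concrete illustration, a KEDA &lt;code&gt;ScaledObject&lt;/code&gt; driven by a Prometheus query might look like the sketch below. The deployment name, Prometheus address, query, and threshold are all assumptions for this example:&lt;/p&gt;

```yaml
# Hypothetical example: scale "my-app" on a Prometheus request-rate metric.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler
spec:
  scaleTargetRef:
    name: my-app                  # Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring.svc:9090
      query: sum(rate(http_requests_total[2m]))
      threshold: "100"            # target value per replica
```

KEDA evaluates the query periodically and adjusts the replica count between the min and max bounds.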
&lt;h2&gt;
  
  
  2. Karpenter
&lt;/h2&gt;

&lt;p&gt;Built by AWS, &lt;a href="https://karpenter.sh/" rel="noopener noreferrer"&gt;Karpenter&lt;/a&gt; is a high-performance, flexible, open-source Kubernetes cluster autoscaler. One of its key features is the ability to launch EC2 instances based on specific workload requirements such as storage, compute, acceleration, and scheduling needs.&lt;/p&gt;
&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;Karpenter is installed in the Kubernetes cluster using Helm charts. Before doing this, you must ensure enough compute capacity is available. Karpenter also requires permissions to provision compute resources on the cloud provider you have chosen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Install Utilities&lt;/p&gt;

&lt;p&gt;Karpenter can be installed in clusters using a Helm chart. Install these tools before proceeding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kubectl - &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/" rel="noopener noreferrer"&gt;the Kubernetes CLI&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;eksctl (&amp;gt;= v0.180.0) - &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html" rel="noopener noreferrer"&gt;the CLI for AWS EKS&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;helm - &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;the package manager for Kubernetes&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html" rel="noopener noreferrer"&gt;Configure the AWS CLI&lt;/a&gt; with a user that has sufficient privileges to create an EKS cluster. Verify that the CLI can authenticate properly by running aws sts get-caller-identity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Set Environment Variables&lt;/p&gt;

&lt;p&gt;After installing the dependencies, set the Karpenter namespace, Karpenter version, and Kubernetes version as follows.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

export KARPENTER_NAMESPACE="kube-system"
export KARPENTER_VERSION="0.37.0"
export K8S_VERSION="1.30"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then set the following environment variables, which will be used later for creating an EKS cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

export AWS_PARTITION="aws" # if you are not using standard partitions, you may need to configure to aws-cn / aws-us-gov
export CLUSTER_NAME="${USER}-karpenter-demo"
export AWS_DEFAULT_REGION="us-west-2"
export AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)"
export TEMPOUT="$(mktemp)"
export ARM_AMI_ID="$(aws ssm get-parameter --name /aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2-arm64/recommended/image_id --query Parameter.Value --output text)"
export AMD_AMI_ID="$(aws ssm get-parameter --name /aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2/recommended/image_id --query Parameter.Value --output text)"
export GPU_AMI_ID="$(aws ssm get-parameter --name /aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2-gpu/recommended/image_id --query Parameter.Value --output text)"



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Create a Cluster&lt;/p&gt;

&lt;p&gt;The following commands deploy Karpenter's CloudFormation stack and create an EKS cluster, using the user configured in the AWS CLI, which must have the relevant permissions to create an EKS cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

curl -fsSL https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml  &amp;gt; "${TEMPOUT}" \
&amp;amp;&amp;amp; aws cloudformation deploy \
  --stack-name "Karpenter-${CLUSTER_NAME}" \
  --template-file "${TEMPOUT}" \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides "ClusterName=${CLUSTER_NAME}"

eksctl create cluster -f - &amp;lt;&amp;lt;EOF
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME}
  region: ${AWS_DEFAULT_REGION}
  version: "${K8S_VERSION}"
  tags:
    karpenter.sh/discovery: ${CLUSTER_NAME}

iam:
  withOIDC: true
  podIdentityAssociations:
  - namespace: "${KARPENTER_NAMESPACE}"
    serviceAccountName: karpenter
    roleName: ${CLUSTER_NAME}-karpenter
    permissionPolicyARNs:
    - arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}

iamIdentityMappings:
- arn: "arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}"
  username: system:node:{{EC2PrivateDNSName}}
  groups:
  - system:bootstrappers
  - system:nodes
  ## If you intend to run Windows workloads, the kube-proxy group should be specified.
  # For more information, see https://github.com/aws/karpenter/issues/5099.
  # - eks:kube-proxy-windows

managedNodeGroups:
- instanceType: m5.large
  amiFamily: AmazonLinux2
  name: ${CLUSTER_NAME}-ng
  desiredCapacity: 2
  minSize: 1
  maxSize: 10

addons:
- name: eks-pod-identity-agent
EOF

export CLUSTER_ENDPOINT="$(aws eks describe-cluster --name "${CLUSTER_NAME}" --query "cluster.endpoint" --output text)"
export KARPENTER_IAM_ROLE_ARN="arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/${CLUSTER_NAME}-karpenter"

echo "${CLUSTER_ENDPOINT} ${KARPENTER_IAM_ROLE_ARN}"



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Unless your AWS account has already been onboarded to EC2 Spot, you will need to create the service-linked role to avoid the &lt;a href="https://karpenter.sh/docs/troubleshooting/#missing-service-linked-role" rel="noopener noreferrer"&gt;&lt;code&gt;ServiceLinkedRoleCreationNotPermitted&lt;/code&gt; error&lt;/a&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

aws iam create-service-linked-role --aws-service-name spot.amazonaws.com || true
# If the role has already been successfully created, you will see:
# An error occurred (InvalidInput) when calling the CreateServiceLinkedRole operation: Service role name AWSServiceRoleForEC2Spot has been taken in this account, please try a different suffix.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Install Karpenter&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Logout of helm registry to perform an unauthenticated pull against the public ECR
helm registry logout public.ecr.aws

helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \
  --set "settings.clusterName=${CLUSTER_NAME}" \
  --set "settings.interruptionQueue=${CLUSTER_NAME}" \
  --set controller.resources.requests.cpu=1 \
  --set controller.resources.requests.memory=1Gi \
  --set controller.resources.limits.cpu=1 \
  --set controller.resources.limits.memory=1Gi \
  --wait


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once the installation is done, you can create NodePool and define the instance family, architecture, and start using Karpenter as your cluster autoscaler. For &lt;a href="https://karpenter.sh/docs/getting-started/?ref=devtron.ai" rel="noopener noreferrer"&gt;detailed installation information and its usage, feel free to refer to its documentation&lt;/a&gt;.&lt;/p&gt;
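&lt;p&gt;A minimal &lt;code&gt;NodePool&lt;/code&gt; might look like the sketch below. The requirements, CPU limit, and &lt;code&gt;EC2NodeClass&lt;/code&gt; name are assumptions for illustration, using the v1beta1 API that ships with the Karpenter version installed above:&lt;/p&gt;

```yaml
# Hypothetical example: a NodePool allowing amd64 spot/on-demand instances.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
      - key: kubernetes.io/arch
        operator: In
        values: ["amd64"]
      - key: karpenter.sh/capacity-type
        operator: In
        values: ["spot", "on-demand"]
      nodeClassRef:
        apiVersion: karpenter.k8s.aws/v1beta1
        kind: EC2NodeClass
        name: default
  limits:
    cpu: 100                      # cap total CPU provisioned by this pool
  disruption:
    consolidationPolicy: WhenUnderutilized
```

With this in place, Karpenter launches nodes matching the requirements whenever pods are unschedulable, and consolidates them when underutilized.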

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;p&gt;Karpenter is used to automatically provision and optimize Kubernetes cluster resources, ensuring efficient and cost-effective scaling. It dynamically adjusts node capacity based on workload demands, reducing over-provisioning and underutilization, thus enhancing performance and lowering cloud infrastructure costs. Check out &lt;a href="https://devtron.ai/blog/karpenter-vs-kubernetes-cluster-autoscaler-choosing-right-autoscaling-tool/" rel="noopener noreferrer"&gt;this blog to understand in-depth about Karpenter and Cluster Autoscaler&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Devtron
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://devtron.ai/" rel="noopener noreferrer"&gt;Devtron&lt;/a&gt; is a tool integration platform for Kubernetes and enables swift app containerization, seamless Kubernetes deployment, and peak performance optimization. It deeply integrates with products across the lifecycle of microservices i.e., CI/CD, security, cost, debugging, and observability via an intuitive web interface.&lt;br&gt;&lt;br&gt;
Devtron helps you to deploy, observe, manage &amp;amp; debug the existing Helm apps in all your clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;Run the following commands to install the latest version of Devtron along with the CI/CD module:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm repo add devtron https://helm.devtron.ai 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm repo update devtron


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm install devtron devtron/devtron-operator \
--create-namespace --namespace devtroncd \
--set installer.modules={cicd}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check out the complete &lt;a href="https://docs.devtron.ai/install" rel="noopener noreferrer"&gt;guide&lt;/a&gt; here. If you have questions, please let us know on our &lt;a href="https://rebrand.ly/devtron-discord" rel="noopener noreferrer"&gt;discord channel.&lt;/a&gt;&lt;/p&gt;
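&lt;p&gt;The installer runs asynchronously and can take a while. A hedged way to poll its status and fetch the initial admin password, based on the Devtron documentation; the resource and secret names may differ across versions:&lt;/p&gt;

```shell
# Poll the installer status; "Applied" indicates the install has finished.
kubectl -n devtroncd get installers installer-devtron \
  -o jsonpath='{.status.sync.status}'

# Fetch the initial admin password for the dashboard login.
kubectl -n devtroncd get secret devtron-secret \
  -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d
```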

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;p&gt;Devtron simplifies Kubernetes adoption by addressing key challenges, making it easier to deploy, monitor, observe, and debug applications at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here's how Devtron helps:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Simplifying the Adoption Process:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Single Pane of Glass:&lt;/strong&gt; Provides a unified view of all Kubernetes resources, enabling easy navigation and understanding of cluster components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-Time Application Status Monitoring:&lt;/strong&gt; Displays the health and status of applications in real-time, highlighting potential issues and unhealthy components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplified Debugging:&lt;/strong&gt; Offers tools like event logs, pod logs, and interactive shells for debugging issues within the Kubernetes environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Containerization Made Easy:&lt;/strong&gt; Provides templates and options for building container images, simplifying the containerization process for various frameworks and languages.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Streamlining Tool Integration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Helm Marketplace:&lt;/strong&gt; Integrates with the Helm chart repository to easily deploy and manage various Kubernetes tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Built-in Integrations:&lt;/strong&gt; Offers native integrations with popular tools like Grafana, Trivy, and Clair for enhanced functionality.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Simplifying Multi-Cluster/Cloud Workloads:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Centralized Visibility:&lt;/strong&gt; Provides a unified view of applications across multiple clusters and cloud environments, enabling consistent management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Environment-Specific Configurations:&lt;/strong&gt; Allows setting environment-specific configurations, making it easier to manage applications in diverse environments.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Simplifying DevSecOps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fine-Grained Access Control:&lt;/strong&gt; Enables granular control over user permissions for Kubernetes resources, ensuring secure access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security Scanning and Policies:&lt;/strong&gt; Offers built-in security scanning with Trivy and allows configuring policies to enforce security best practices.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;If you liked what Devtron is solving, do give it a &lt;a href="https://github.com/devtron-labs/devtron" rel="noopener noreferrer"&gt;Star ⭐️ on GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  4. K9s
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://k9scli.io/" rel="noopener noreferrer"&gt;K9s&lt;/a&gt; is a terminal-based UI to interact with your Kubernetes clusters. This project aims to make it easier to navigate, observe, and manage your deployed applications in the wild. K9s continually watch Kubernetes for changes and offer subsequent commands to interact with your observed resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;K9s is available on Linux, macOS, and Windows platforms. You can get the latest binaries for different architectures and operating systems from the &lt;a href="https://github.com/derailed/k9s/releases" rel="noopener noreferrer"&gt;releases on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MacOS/ Linux&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

 # Via Homebrew
 brew install derailed/k9s/k9s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Windows&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Via chocolatey
choco install k9s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For other ways of installation, feel free to check out the &lt;a href="https://k9scli.io/topics/install/" rel="noopener noreferrer"&gt;documentation of K9s&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;p&gt;Compared to other Kubernetes clients like kubectl, K9s makes it much easier to manage and orchestrate applications on Kubernetes. You get a terminal-based GUI that helps you manage your resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Monitoring and Visibility&lt;/strong&gt;: It provides continuous monitoring of your Kubernetes cluster, offering a clear view of resource statuses. It helps you understand your K8s cluster by displaying information about pods, deployments, services, nodes, and more. With K9s, you can easily navigate through cluster resources, ensuring better visibility and awareness.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Interaction and Management&lt;/strong&gt;: K9s allows you to interact with resources directly from the terminal. You can view, edit, and delete resources without switching to a separate management tool. Common operations include scaling deployments, restarting pods, and inspecting logs. You can also initiate port-forwarding to access services running within pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Namespace Management&lt;/strong&gt;: This lets you focus on specific namespaces within your cluster. You can switch between namespaces seamlessly, making it easier to work with isolated environments. By filtering resources based on namespaces, you can avoid clutter and stay organized.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Advanced Features&lt;/strong&gt;: It also offers advanced capabilities, such as opening a shell in a container directly from the UI. It supports context switching between different clusters, making it convenient for multi-cluster environments. Additionally, K9s integrates with vulnerability scanning tools, enhancing security practices.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
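&lt;p&gt;As a quick taste of the workflow, here are a few of K9s' default commands and key bindings (press &lt;code&gt;?&lt;/code&gt; inside K9s for the full, authoritative list):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

:pods      # jump to the Pods view (:deploy, :svc, :ns work the same way)
/web       # filter the current view for "web"
0          # show resources across all namespaces
d          # describe the selected resource
l          # view logs of the selected pod
s          # open a shell into the selected container
ctrl-d     # delete the selected resource
?          # show all available key bindings


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;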

&lt;h2&gt;
  
  
  5. Winter Soldier
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/devtron-labs/winter-soldier" rel="noopener noreferrer"&gt;Winter Soldier&lt;/a&gt; is an open-source tool from Devtron, it enables time-based scaling for Kubernetes workloads. The time-based scaling with Winter Soldier helps us to reduce the cloud cost, it can be deployed to execute things such as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Batch deletion of unused resources&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling of Kubernetes workloads&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
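&lt;p&gt;Winter Soldier is configured through a custom resource. The sketch below, based on the project's README, scales the Deployments in a hypothetical &lt;code&gt;dev&lt;/code&gt; namespace down over the weekend; exact field names may vary across versions, so treat this as illustrative and check the repository for the current spec:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: pincher.devtron.ai/v1alpha1
kind: Hibernator
metadata:
  name: weekend-sleep          # illustrative name
spec:
  timeRangesWithZone:
    timeZone: "Asia/Kolkata"
    timeRanges:
    - timeFrom: "00:00"
      timeTo: "23:59"
      weekdayFrom: Sat
      weekdayTo: Sun
  selectors:
  - inclusions:
    - objectSelector:
        type: Deployment
      namespaceSelector:
        name: dev              # hypothetical namespace
  action: sleep                # scale the matched workloads down


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;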

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;If you want to dive deeper into the setup, please check out this resource: &lt;/strong&gt;&lt;a href="https://devtron.ai/blog/winter-soldier-scale-down-your-infrastructure-in-the-easiest-possible-way/" rel="noopener noreferrer"&gt;https://devtron.ai/blog/winter-soldier-scale-down-your-infrastructure-in-the-easiest-possible-way&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give it a star on GitHub if you like the project: &lt;a href="https://github.com/devtron-labs/winter-soldier" rel="noopener noreferrer"&gt;https://github.com/devtron-labs/winter-soldier&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;p&gt;Winter Soldier is a valuable tool for anyone who wants to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimize Cloud Costs:&lt;/strong&gt; By automatically scaling Kubernetes workloads based on time, Winter Soldier helps reduce unnecessary resource usage, lowering cloud bills.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automate Routine Tasks:&lt;/strong&gt; Tasks like deleting unused resources or scaling workloads at specific times can be automated, freeing up time for other initiatives.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improve Resource Utilization:&lt;/strong&gt; By ensuring resources are only allocated when needed, Winter Soldier maximizes resource utilization and improves overall efficiency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Implement Time-Based Scaling:&lt;/strong&gt; It's ideal for scenarios where workloads have predictable usage patterns (e.g., a website that experiences heavy traffic during specific hours) or when resources need to be adjusted based on time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Examples:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;E-commerce Website:&lt;/strong&gt; Scale up resources during peak shopping hours and scale down during off-peak periods to reduce costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Processing Jobs:&lt;/strong&gt; Schedule resource scaling for batch processing jobs that run only during specific time windows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Development and Testing Environments:&lt;/strong&gt; Automatically scale down development and testing environments after hours to minimize resource usage.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Reduction:&lt;/strong&gt; Optimizing resource utilization translates to lower cloud bills.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improved Efficiency:&lt;/strong&gt; Automating resource management frees up time for other tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Reliability:&lt;/strong&gt; By ensuring resources are allocated appropriately, Winter Soldier helps improve the reliability of Kubernetes applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  6. Silver Surfer
&lt;/h2&gt;

&lt;p&gt;Currently, there is no easy way to upgrade Kubernetes objects ahead of a Kubernetes upgrade. It's a tedious task to determine whether the current ApiVersion of an object is removed, deprecated, or unchanged. Silver Surfer provides details of the issues Kubernetes objects will face when they are migrated to a cluster with a newer Kubernetes version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fniz9mmhp9cacuabs9a2s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fniz9mmhp9cacuabs9a2s.png" alt="Image description" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;Just with a few commands, it's ready to serve your cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

git clone https://github.com/devtron-labs/silver-surfer.git
cd silver-surfer
go mod vendor
go mod download
make


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That's it. A bin directory will have been created containing the ready-to-use ./kubedd binary.&lt;/p&gt;
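&lt;p&gt;Once built, kubedd can be pointed at a target Kubernetes version to validate the objects in your current cluster. The invocation below is a sketch based on the project's README; run the binary with &lt;code&gt;--help&lt;/code&gt; to confirm the exact flags in your version:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Validate objects in the current cluster against Kubernetes v1.22
./bin/kubedd --target-kubernetes-version 1.22


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;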

&lt;p&gt;It categorizes Kubernetes objects based on changes in ApiVersion. Categories are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Removed ApiVersion&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deprecated ApiVersion&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Newer ApiVersion&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Unchanged ApiVersion&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Within each category, it identifies the migration path to the newer ApiVersion. Possible paths are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;It cannot be migrated as there are no common ApiVersions between the source and target Kubernetes version&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can be migrated but has some issues which need to be resolved&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can be migrated with just an ApiVersion change&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This activity is performed for both current and new ApiVersion.&lt;/p&gt;

&lt;p&gt;Check out the Github repo and give it a star ⭐️: &lt;a href="https://github.com/devtron-labs/silver-surfer" rel="noopener noreferrer"&gt;https://github.com/devtron-labs/silver-surfer&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pre-Upgrade Planning:&lt;/strong&gt; Silver Surfer helps you identify potential issues before the upgrade, giving you time to plan and resolve them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Streamlined Upgrade Process:&lt;/strong&gt; The tool provides detailed guidance, minimizing downtime and errors during the upgrade.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kubernetes Object Management:&lt;/strong&gt; Silver Surfer provides greater visibility into the compatibility of your objects with different Kubernetes versions, aiding in managing your cluster.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduced Upgrade Complexity:&lt;/strong&gt; Simplifies the Kubernetes upgrade process, reducing stress and the potential for errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improved Uptime:&lt;/strong&gt; Minimizes downtime during the upgrade process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Cluster Management:&lt;/strong&gt; Provides a better understanding of your Kubernetes objects and their compatibility with different versions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  7. Trivy
&lt;/h2&gt;

&lt;p&gt;Trivy is a simple and comprehensive vulnerability scanner for containers and other artifacts. A software vulnerability is a glitch, flaw, or weakness present in the software or in an Operating System. Trivy detects vulnerabilities of OS packages (Alpine, RHEL, CentOS, etc.) and application dependencies (Bundler, Composer, npm, yarn, etc.). Trivy is easy to use. Just install the binary and you're ready to scan. All you need to do for scanning is to specify a target such as an image name of the container.&lt;/p&gt;
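&lt;p&gt;For example, scanning a container image is a single command; the image name and tag below are just illustrations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Scan an image for known vulnerabilities
trivy image nginx:1.27

# Limit the report to high and critical findings
trivy image --severity HIGH,CRITICAL nginx:1.27


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;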

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;Installing from the Aqua Chart Repository:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm repo add aquasecurity https://aquasecurity.github.io/helm-charts/ 
helm repo update 
helm search repo trivy 
helm install my-trivy aquasecurity/trivy


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Installing the Chart.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
To install the chart with the release name &lt;code&gt;my-release&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm install my-release .


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The command deploys Trivy on the Kubernetes cluster in the default configuration. The &lt;a href="https://aquasecurity.github.io/trivy/v0.18.3/installation/#parameters" rel="noopener noreferrer"&gt;Parameters&lt;/a&gt; section lists the parameters that can be configured during installation.&lt;/p&gt;

&lt;p&gt;Know more about Trivy installation &lt;a href="https://aquasecurity.github.io/trivy/v0.18.3/installation/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsabxdf2kuok6z1oa92z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsabxdf2kuok6z1oa92z.png" alt="Image description" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DevSecOps Integration:&lt;/strong&gt; Trivy seamlessly integrates into your CI/CD pipelines, identifying vulnerabilities early in the development process and enabling automated remediation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Secure Container Deployment:&lt;/strong&gt; It ensures only secure container images are deployed to production by scanning them before deployment and integrating with container registries for continuous scanning.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ongoing Security Monitoring:&lt;/strong&gt; Enables regular vulnerability scans and provides detailed reports, allowing for proactive security maintenance and tracking of remediation efforts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Beyond Containers:&lt;/strong&gt; Extend security assessments to operating systems, server configurations, and other infrastructure components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Software Supply Chain Analysis:&lt;/strong&gt; Analyze your entire software supply chain, from source to deployment, to identify and address vulnerabilities at every stage.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  8. Cert-Manager
&lt;/h2&gt;

&lt;p&gt;Cert Manager is an open-source tool designed to automate the management and provisioning of digital certificates in Kubernetes environments. It solves the challenge of handling TLS/SSL certificates for applications running on Kubernetes by simplifying the process of obtaining, renewing, and distributing certificates. Cert Manager enhances security and reduces operational complexity, ensuring that applications have valid and up-to-date certificates for secure communication.&lt;/p&gt;

&lt;p&gt;It automates the lifecycle of your TLS certificates! No more manual renewal!&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;No tweaking of the cert-manager install parameters is required.&lt;/p&gt;

&lt;p&gt;The default static configuration can be installed as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.15.1/cert-manager.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;📖 Read more about &lt;a href="https://cert-manager.io/docs/installation/kubectl/" rel="noopener noreferrer"&gt;installing cert-manager using kubectl apply and static manifests&lt;/a&gt; and &lt;a href="https://cert-manager.io/docs/installation/helm/" rel="noopener noreferrer"&gt;Installing with Helm&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check out our blog to learn how to set up cert-manager using Devtron: &lt;a href="https://devtron.ai/blog/kubernetes-ssl-certificate-automation-using-certmanager-part-1/" rel="noopener noreferrer"&gt;https://devtron.ai/blog/kubernetes-ssl-certificate-automation-using-certmanager-part-1&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated Certificate Acquisition &amp;amp; Renewal:&lt;/strong&gt; Effortlessly obtain and renew certificates from providers like Let's Encrypt, eliminating manual effort and ensuring continuous security.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Secure Ingress Controllers:&lt;/strong&gt; Automatically provision certificates for Ingress controllers, enabling secure HTTPS communication for services exposed through the Ingress.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Centralized Certificate Management:&lt;/strong&gt; Manage all your certificates from a single point of control, simplifying issuance, renewal, and revocation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Application Security:&lt;/strong&gt; Strengthen encryption and protect sensitive data by ensuring valid and up-to-date TLS/SSL certificates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Streamlined Operations:&lt;/strong&gt; Reduce operational overhead, minimize downtime, and ensure continuous application availability by automating certificate management.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
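&lt;p&gt;The automated acquisition flow above is typically configured with an Issuer or ClusterIssuer resource. The following sketch defines a ClusterIssuer for Let's Encrypt using the HTTP-01 challenge; the email address and ingress class are placeholders you would replace with your own:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com        # placeholder; use your own address
    privateKeySecretRef:
      name: letsencrypt-prod-key    # Secret storing the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx              # placeholder ingress class


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Certificates can then be requested automatically, for example by annotating an Ingress with &lt;code&gt;cert-manager.io/cluster-issuer: letsencrypt-prod&lt;/code&gt;.&lt;/p&gt;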

&lt;h2&gt;
  
  
  9. Istio
&lt;/h2&gt;

&lt;p&gt;Istio extends Kubernetes to establish a programmable, application-aware network. Working with both Kubernetes and traditional workloads, Istio brings standard, universal traffic management, telemetry, and security to complex deployments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;Check out the Istio docs for &lt;a href="https://istio.io/latest/docs/setup/getting-started/" rel="noopener noreferrer"&gt;Getting Started&lt;/a&gt; and a complete walkthrough of &lt;a href="https://devtron.ai/blog/canary-deployment-with-flagger-and-istio/" rel="noopener noreferrer"&gt;Canary Deployment with Flagger and Istio on Devtron&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Simplify Microservice Communication&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Istio abstracts away complex networking concerns like service discovery, routing, and load balancing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Developers can focus on business logic while Istio handles communication between microservices.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Enhance Security:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Implement consistent authentication, authorization, and encryption across all services using Istio.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It mitigates security risks by enforcing security policies throughout the mesh.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Traffic Management&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Istio enables A/B testing, canary deployments, and blue-green deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can control traffic routing, timeouts, and fault injection seamlessly.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
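&lt;p&gt;As an illustration of weighted routing, a canary split between two versions of a hypothetical &lt;code&gt;reviews&lt;/code&gt; service can be expressed with a VirtualService; the matching DestinationRule that defines the &lt;code&gt;v1&lt;/code&gt;/&lt;code&gt;v2&lt;/code&gt; subsets is omitted here for brevity:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews                # hypothetical in-mesh service
  http:
  - route:
    - destination:
        host: reviews
        subset: v1         # stable version
      weight: 90
    - destination:
        host: reviews
        subset: v2         # canary version
      weight: 10


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;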

&lt;p&gt;&lt;strong&gt;Observability and Monitoring&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Monitor service behavior, track performance metrics, and troubleshoot issues with Istio.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It integrates with observability tools for better insights into your microservices.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  10. KRR
&lt;/h2&gt;

&lt;p&gt;Robusta KRR (Kubernetes Resource Recommender) is a CLI tool for optimizing resource allocation in Kubernetes clusters. It gathers pod usage data from Prometheus and recommends requests and limits for CPU and memory. This reduces costs and improves performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;The installation is pretty straightforward. You can install the &lt;a href="https://github.com/robusta-dev/krr/releases/" rel="noopener noreferrer"&gt;binary directly from their releases&lt;/a&gt;. Depending on your operating system, you can &lt;a href="https://github.com/robusta-dev/krr?tab=readme-ov-file#installation-methods" rel="noopener noreferrer"&gt;install the KRR CLI&lt;/a&gt; and use it to optimize your resources. On macOS, you can use brew:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

brew tap robusta-dev/homebrew-krr

brew install krr


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
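&lt;p&gt;Once installed, a scan can be run against the cluster in your current kubeconfig context. The commands below follow the KRR README; the namespace name is illustrative, and flags may vary between versions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Scan the whole cluster using the default "simple" strategy
krr simple

# Limit the scan to a single namespace (name is illustrative)
krr simple -n my-namespace


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;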
&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Savings:&lt;/strong&gt; Reduce cloud bills by recommending optimal resource requests, and eliminating over-provisioning.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance Boost:&lt;/strong&gt; Improve application responsiveness by preventing resource contention and ensuring sufficient resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data-Driven Insights:&lt;/strong&gt; Gain insights into resource usage patterns for better planning and scaling decisions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated Optimization:&lt;/strong&gt; Integrate with CI/CD pipelines to automatically adjust resource allocation for continuous optimization.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  11. Kyverno
&lt;/h2&gt;

&lt;p&gt;Kyverno is a Kubernetes-native policy engine designed for Kubernetes platform engineering teams. It enables security, automation, compliance, and governance using policy-as-code. Kyverno can validate, mutate, generate, and cleanup configurations using Kubernetes admission controls, background scans, and source code repository scans in real time. Kyverno policies can be managed as Kubernetes resources and do not require learning a new language. Kyverno is designed to work nicely with tools you already use like kubectl, kustomize, and Git.&lt;/p&gt;
&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;To install Kyverno with Helm, first add the Kyverno Helm repository.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm repo add kyverno https://kyverno.github.io/kyverno/


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Scan the new repository for charts.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm repo update


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Optionally, show all available chart versions for Kyverno.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

helm search repo kyverno -l


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check the whole guide &lt;a href="https://kyverno.io/docs/installation/methods/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And be sure to check out our blog on&lt;/strong&gt; &lt;a href="https://devtron.ai/blog/how-to-secure-kubernetes-clusters-with-kyverno-policies" rel="noopener noreferrer"&gt;&lt;strong&gt;securing Kubernetes clusters with Kyverno policies&lt;/strong&gt;&lt;/a&gt;&lt;strong&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt; Enforce policies to prevent deployments with root privileges, restrict resource requests, control network access, and secure sensitive data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compliance:&lt;/strong&gt; Implement auditing, labeling, and access control policies to meet regulatory requirements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Best Practices:&lt;/strong&gt; Automate resource validation, enforce naming conventions, and manage resource lifecycles.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Extend Kubernetes:&lt;/strong&gt; Customize admission control and validate custom resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
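&lt;p&gt;As a small example of policy-as-code, the following ClusterPolicy (adapted from the Kyverno quickstart) rejects Pods that are missing an &lt;code&gt;app&lt;/code&gt; label:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
spec:
  validationFailureAction: Enforce   # reject non-compliant resources
  rules:
  - name: check-for-app-label
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "The label 'app' is required."
      pattern:
        metadata:
          labels:
            app: "?*"                # any non-empty value


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;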

&lt;h2&gt;
  
  
  12. Opencost
&lt;/h2&gt;

&lt;p&gt;OpenCost is a vendor-neutral open-source project for measuring and allocating cloud infrastructure and container costs. It’s built for Kubernetes cost monitoring to power real-time cost monitoring, showback, and chargeback. It is a sandbox project with the Cloud Native Computing Foundation (CNCF).&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;Check out the &lt;a href="https://www.opencost.io/docs/installation/install" rel="noopener noreferrer"&gt;Installation guide&lt;/a&gt; to start monitoring and managing your spend in minutes. Additional documentation is available for &lt;a href="https://www.opencost.io/docs/installation/prometheus" rel="noopener noreferrer"&gt;configuring Prometheus&lt;/a&gt; and managing your &lt;a href="https://www.opencost.io/docs/installation/helm" rel="noopener noreferrer"&gt;OpenCost with Helm&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Identifying Costly Workloads:&lt;/strong&gt; Identify specific pods or deployments consuming excessive resources and take steps to optimize their resource allocation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Allocating Costs to Teams:&lt;/strong&gt; Use OpenCost to generate detailed reports showing the cost incurred by each team or project using your Kubernetes cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Right-sizing Resources:&lt;/strong&gt; Optimize resource requests and limits for pods based on actual usage, reducing unnecessary resource allocation and saving costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Predictive Cost Management:&lt;/strong&gt; Forecast future costs based on historical data and identify potential spikes in resource consumption to proactively adjust resource allocation or budget.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Kubernetes has revolutionized the way we build and deploy applications. However, its complexity can be daunting. The 12 tools we've explored provide a comprehensive toolkit for simplifying and optimizing your Kubernetes management.&lt;/p&gt;

&lt;p&gt;The cloud-native landscape is overflowing with tools, each serving a specific purpose. Remember, there's no one-size-fits-all solution. Carefully evaluate your use case and choose the tools that best address your specific needs for a more efficient, secure, and cost-optimized Kubernetes experience.&lt;/p&gt;

&lt;p&gt;By leveraging these powerful tools, you can unlock greater efficiency, reduce costs, enhance security, and ultimately, focus on delivering innovative applications faster. So, embrace these powerful tools and build a better infrastructure with confidence!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you have any questions, or want to discuss any specific usecase, feel free to&lt;/em&gt; &lt;a href="https://rebrand.ly/devtron-demo?ref=blog" rel="noopener noreferrer"&gt;connect with us&lt;/a&gt; or &lt;em&gt;ask them in our actively growing&lt;/em&gt; &lt;a href="https://rebrand.ly/Devtron-Discord?ref=devtron.ai" rel="noopener noreferrer"&gt;&lt;em&gt;Discord Community&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Happy Deploying!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>productivity</category>
      <category>developers</category>
      <category>k9s</category>
    </item>
    <item>
      <title>Reserving Minimum IPs In EKS Cluster</title>
      <dc:creator>Devtron</dc:creator>
      <pubDate>Wed, 09 Oct 2024 13:02:00 +0000</pubDate>
      <link>https://forem.com/devtron_inc/reserving-minimum-ips-in-eks-cluster-487e</link>
      <guid>https://forem.com/devtron_inc/reserving-minimum-ips-in-eks-cluster-487e</guid>
      <description>&lt;p&gt;The popularity of AWS Elastic Kubernetes Service (EKS) is consistently rising as a managed Kubernetes solution. From resource management, to networking, and implementing new requirements, EKS really comes with an easy user-friendly approach to overseeing all components. An abundance of good documentation and regular updates offered by the AWS community further enhance user experience, simplifying operations for end-users.&lt;/p&gt;

&lt;p&gt;However, when it comes to the scalability of your workloads or Kubernetes cluster, challenges arise if proper planning was not undertaken during the initial phase of the cluster setup. A prominent issue arises in the management of IP addresses, a critical factor in scaling clusters. The insufficiency of available IP addresses within your subnets can precipitate an alarming shortage within your cluster. IP shortage can lead to operational challenges, impacting the deployment and functioning of applications. This article delves into this issue and its corresponding solution, providing an in-depth exploration of the matter.&lt;/p&gt;

&lt;h1&gt;
  
  
  Issues Arising from IP Shortages
&lt;/h1&gt;

&lt;p&gt;If your EKS cluster is facing an IP shortage, you have likely come across the following error message in the pod events when attempting to deploy a new application or scale an existing one:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

#  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "82d95ef9391fdfff08b86bbf6b8c4b6568b4ee7bb81fce" network for pod "example-pod_namespace": network plugin cni failed to set up pod "example-pod_namespace" network: add cmd: failed to assign an IP address to container


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Under such conditions, your pod will be stuck in the "ContainerCreating" state, unable to start until an IP address is assigned to it. Upon investigating the cluster's private subnets, you'll likely discover that the subnet where the worker node is placed, and from which the pod's IP would be assigned, has 0 available IP addresses.&lt;/p&gt;
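&lt;p&gt;A couple of kubectl commands help confirm the symptom; the pod and namespace names below are taken from the example error above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# List pods stuck waiting for an IP across all namespaces
kubectl get pods -A | grep ContainerCreating

# Inspect the pod's events for the CNI error shown above
kubectl describe pod example-pod -n namespace


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;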

&lt;h1&gt;
  
  
  What could be a possible solution?
&lt;/h1&gt;

&lt;p&gt;In order to understand the resolution for this issue, we have to first understand the functioning of the EKS cluster and how IP addresses get allocated to worker nodes and further to pods.&lt;/p&gt;

&lt;h1&gt;
  
  
  Understanding IP Allocation in AWS EKS Clusters
&lt;/h1&gt;

&lt;p&gt;Within an AWS EKS cluster, IP addresses play a vital role in facilitating communication among worker nodes, services, and external entities. By default, IP address management is handled by the aws-node DaemonSet. This DaemonSet is responsible for allocating distinct IP addresses to each worker node: it ensures that every worker node receives unique IP addresses by requesting them from the Amazon VPC IP address range associated with the cluster's subnet.&lt;/p&gt;

&lt;p&gt;For detailed information, check the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html" rel="noopener noreferrer"&gt;Amazon VPC CNI&lt;/a&gt; documentation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtzfipiw8q1qu1mu96w4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtzfipiw8q1qu1mu96w4.png" alt="Image description" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By default, when a new worker node joins the cluster, a specific number of IP addresses is attached to it, based on the associated network interfaces. For instance, consider a compute-optimized instance such as &lt;strong&gt;c5a.2xlarge&lt;/strong&gt; with 2 network interfaces attached: a total of 30 private IP addresses will be allocated from the subnet where this worker node resides. It does not matter whether the node is running 5 pods or 15; all 30 IP addresses stay attached to the worker node, which wastes a large number of IP addresses. In the screenshot below, you can see the number of IPs attached to an existing node.&lt;/p&gt;

&lt;p&gt;To view this, select the worker node in the EC2 instance console -&amp;gt; choose the Networking section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvda4cq9qbhlfx8mwvww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvda4cq9qbhlfx8mwvww.png" alt="Image description" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To check each instance type default IP allocation count, &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI" rel="noopener noreferrer"&gt;refer to the AWS official documentation&lt;/a&gt;.&lt;/p&gt;
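&lt;p&gt;As an illustrative sketch of the arithmetic above (the limit of 15 IPv4 addresses per ENI for c5a.2xlarge comes from the AWS documentation linked above; treat the instance-specific numbers as assumptions to verify for your own instance type), the default allocation can be estimated as:&lt;/p&gt;

```python
# Estimate how many private IPs a worker node reserves by default:
# each attached ENI pre-allocates its full complement of IPv4 addresses
# from the subnet, regardless of how many pods are actually running.

def default_ips_allocated(attached_enis: int, ips_per_eni: int) -> int:
    """Subnet IPs consumed by a node's attached ENIs."""
    return attached_enis * ips_per_eni

# A c5a.2xlarge supports up to 15 IPv4 addresses per ENI; with 2 ENIs
# attached, 30 subnet IPs are reserved up front, even for a handful of pods.
print(default_ips_allocated(2, 15))  # 30
```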

&lt;p&gt;To check the available IP addresses in the subnet:&lt;/p&gt;

&lt;p&gt;Open the VPC console -&amp;gt; choose Subnets -&amp;gt; select the cluster's subnets&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqcl5qg60c94o6z0qnwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqcl5qg60c94o6z0qnwb.png" alt="Image description" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above snapshot shows that this subnet has 1454 IP addresses available. So what happens if we run out of IP addresses? Let’s see how such a scenario can be resolved.&lt;/p&gt;

&lt;h1&gt;
  
  
  Avoiding IP Shortages
&lt;/h1&gt;

&lt;p&gt;There are multiple ways to avoid IP shortages in your EKS clusters. The following approaches can be helpful.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Adding ENI configuration in the cluster with different subnets. &lt;a href="https://aws.github.io/aws-eks-best-practices/networking/custom-networking/" rel="noopener noreferrer"&gt;Check out this detailed documentation&lt;/a&gt; which talks about adding custom ENI configurations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Updating the existing cni-plugin to assign a minimum number of IPs at boot time of a new worker node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prefix Mode for Linux&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this blog post, we will proceed with updating the existing cni-plugin, as this approach requires no extra objects: we only have to update the default cni-plugin provided by AWS EKS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Updating existing CNI Plugin
&lt;/h2&gt;

&lt;p&gt;To update the existing cni-plugin, we will add/configure 3 environment variables in the aws-node daemonset. For detailed information, &lt;a href="https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/eni-and-ip-target.md" rel="noopener noreferrer"&gt;check out the official documentation&lt;/a&gt; on these environment variables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. WARM_IP_TARGET&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The number of Warm IP addresses to be maintained. A Warm IP is available on an actively attached ENI but has not been assigned to a Pod. In other words, the number of Warm IPs available is the number of IPs that may be assigned to a Pod without requiring an additional ENI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Example&lt;/strong&gt;: Consider an instance with 1 ENI, each ENI supporting 20 IP addresses. WARM_IP_TARGET is set to 5. WARM_ENI_TARGET is set to 0. Only 1 ENI will be attached until a 16th IP address is needed. Then, the CNI will attach a second ENI, consuming 20 possible addresses from the subnet CIDR.&lt;/p&gt;

&lt;p&gt;2. &lt;strong&gt;MINIMUM_IP_TARGET&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The minimum number of IP addresses to be allocated at any time. This is commonly used to front-load the assignment of multiple ENIs at instance launch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Example&lt;/strong&gt;: Consider a newly launched instance. It has 1 ENI and each ENI supports 10 IP addresses. MINIMUM_IP_TARGET is set to 100. The CNI immediately attaches 9 more ENIs for a total of 100 addresses. This happens regardless of any WARM_IP_TARGET or WARM_ENI_TARGET values.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. WARM_ENI_TARGET&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The number of Warm ENIs to be maintained. An ENI is “warm” when it is attached as a secondary ENI to a node, but it is not in use by any Pod. More specifically, no IP addresses of the ENI have been associated with a Pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Example:&lt;/strong&gt; Consider an instance with 2 ENIs, each ENI supporting 5 IP addresses. WARM_ENI_TARGET is set to 1. If exactly 5 IP addresses are associated with the instance, the CNI maintains 2 ENIs attached to the instance. The first ENI is in use, and all 5 possible IP addresses of this ENI are used. The second ENI is “warm” with all 5 IP addresses in the pool. If another Pod is launched on the instance, a 6th IP address will be needed. The CNI will assign this 6th Pod an IP address from the 5 IPs in the second ENI’s pool. The second ENI is now in use, and no longer in a “warm” status. The CNI will then allocate a 3rd ENI to maintain at least 1 warm ENI.&lt;/p&gt;
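&lt;p&gt;To make the interplay of these three variables concrete, here is a minimal Python sketch of the allocation rules described above. This is a simplification modeled on the examples, not the actual VPC CNI implementation; the function name and the rounding behavior are assumptions.&lt;/p&gt;

```python
import math

def enis_attached(pods: int, eni_capacity: int,
                  warm_ip_target: int = 0,
                  minimum_ip_target: int = 0,
                  warm_eni_target: int = 0) -> int:
    """Simplified model of how many ENIs the CNI keeps attached to a node."""
    # IPs the node must hold: enough for running pods plus the warm IP pool,
    # but never fewer than MINIMUM_IP_TARGET.
    ips_needed = max(minimum_ip_target, pods + warm_ip_target)
    enis_for_ips = max(1, math.ceil(ips_needed / eni_capacity))
    # WARM_ENI_TARGET keeps whole spare ENIs beyond those already in use.
    enis_in_use = max(1, math.ceil(pods / eni_capacity))
    return max(enis_for_ips, enis_in_use + warm_eni_target)

# WARM_IP_TARGET example: 20 IPs/ENI, target 5 -> 2nd ENI only at the 16th IP
print(enis_attached(15, 20, warm_ip_target=5))      # 1
print(enis_attached(16, 20, warm_ip_target=5))      # 2
# MINIMUM_IP_TARGET example: 10 IPs/ENI, minimum 100 -> 10 ENIs at launch
print(enis_attached(0, 10, minimum_ip_target=100))  # 10
# WARM_ENI_TARGET example: 5 IPs/ENI, 5 pods -> 1 in use + 1 warm = 2 ENIs
print(enis_attached(5, 5, warm_eni_target=1))       # 2
print(enis_attached(6, 5, warm_eni_target=1))       # 3
```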

&lt;p&gt;You have the choice to implement this either by manually adding the required environment variables to the manifest or by using the patch command to configure the environment variables.&lt;/p&gt;

&lt;p&gt;Note: Opting for the patch option is advisable, as it ensures that other elements within the manifest remain unaffected and unaltered.&lt;/p&gt;

&lt;h4&gt;
  
  
  Option 1: Adding the environment variables in manifest
&lt;/h4&gt;

&lt;p&gt;Run the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl edit daemonset aws-node -n kube-system


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then add the environment variables in the env section of the container, i.e., spec.template.spec.containers.env, as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;env:
- name: WARM_IP_TARGET
  value: "2"
- name: MINIMUM_IP_TARGET
  value: "10"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Option 2: Adding env variables using patch
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl set env daemonset aws-node -n kube-system WARM_IP_TARGET=2 MINIMUM_IP_TARGET=10


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once this adjustment has taken effect across each pod of the aws-node daemonset, you will observe that when a new node joins the cluster, it starts with an allocation of only 12 IPs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1zwozbbfl1d3j61zqph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy1zwozbbfl1d3j61zqph.png" alt="Image description" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And it has consumed only 13 IPs from the subnet: one for the node's private IP, and 12 reserved for pods.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdgx0sz43qw7b2srinxb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdgx0sz43qw7b2srinxb.png" alt="Image description" width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;By reserving a specific number of warm IP addresses and ENIs, you ensure that each worker node has a minimum number of IPs available for pod assignment, reducing the risk of IP shortage. While the solution mentioned above offers advantages, it's important to be aware of a few tradeoffs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;If multiple pods are scheduled on a single node, exceeding the warm IP count, there might be a slight delay in the startup time for these pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In cases where multiple pods are scheduled on a single node, surpassing the warm IP count, and the subnet lacks any available IP addresses, the scheduling process will fail. In such scenarios, the remaining option involves utilizing extra ENI-configurations with distinct subnets.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;IP shortage can pose challenges to the scalability and smooth functioning of AWS EKS clusters. By configuring the aws-node daemonset using the WARM_IP_TARGET, MINIMUM_IP_TARGET, and WARM_ENI_TARGET environment variables, you can effectively mitigate IP shortage concerns. This approach ensures that each worker node has a minimum number of IP addresses reserved for pod assignment while dynamically allocating additional IPs.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Feel free to connect with us on our&lt;/em&gt; &lt;a href="https://rebrand.ly/Devtron-Discord?ref=devtron.ai" rel="noopener noreferrer"&gt;&lt;em&gt;Discord Community&lt;/em&gt;&lt;/a&gt; &lt;em&gt;if you have any queries. We would be more than happy to help you.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>kubernetes</category>
      <category>networking</category>
    </item>
    <item>
      <title>Kubernetes Adoption: Key Challenges in Migrating to Kubernetes</title>
      <dc:creator>Devtron</dc:creator>
      <pubDate>Mon, 07 Oct 2024 12:01:00 +0000</pubDate>
      <link>https://forem.com/devtron_inc/kubernetes-adoption-key-challenges-in-migrating-to-kubernetes-32c5</link>
      <guid>https://forem.com/devtron_inc/kubernetes-adoption-key-challenges-in-migrating-to-kubernetes-32c5</guid>
      <description>&lt;p&gt;Many organizations have spent many years building and refining their software delivery infrastructure within non-kubernetes environments. They might run their infrastructure on cloud-hosted VMs or bare metal servers using a virtualizing tool such as Proxmox. While these methods are useful and get the job done, they have limitations beyond a certain scale.&lt;/p&gt;

&lt;p&gt;To address this scaling issue, companies want to move their workloads over to a Kubernetes environment. Apart from providing improved scalability, Kubernetes also provides a lot of other benefits such as automation, efficiency, auto-healing, and flexibility. However, migrating the entire business workload to Kubernetes is a daunting task, and it has several challenges associated with it.&lt;/p&gt;

&lt;p&gt;The 2024 State of Production Kubernetes survey found that nearly 75% of respondents use Kubernetes for running their production applications, leaving only 25% of respondents using traditional infrastructure such as VMs for their production applications.&lt;/p&gt;

&lt;p&gt;Before we explore the challenges of adopting Kubernetes, let’s understand what the adoption journey might look like.&lt;/p&gt;

&lt;h1&gt;
  
  
  Kubernetes Adoption Journey
&lt;/h1&gt;

&lt;p&gt;Kubernetes adoption is one of the most difficult transitions most organizations will go through. The journey of adopting Kubernetes is no less than a roller coaster ride. Even before you create your very first Kubernetes cluster, you need to make sure your application is containerized, and to containerize your application, you need to make sure it is ready for containerization. It takes most organizations months or years to reach complete Kubernetes maturity, depending on multiple factors such as the size and scale of existing applications, technical expertise, existing infrastructure, and more. Let’s take a look at the 4 stages that every organization goes through to achieve Kubernetes maturity.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Setting up Kubernetes:&lt;/strong&gt; The first stage in any Kubernetes adoption journey is to create a Kubernetes cluster and get it ready for production deployments. You need to ensure that the cluster has the proper security and compliance before you start deploying your application to the cluster. All these tasks require Kubernetes expertise.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Migrating Workloads:&lt;/strong&gt; Making your application compliant with the Kubernetes environment and onboarding your first application can be cumbersome. Comparatively, migrating all applications might involve repetitive tasks and is a time-consuming activity. A process has to be created that helps you onboard applications quickly, without worrying about the configurations or writing Helm charts and Kubernetes manifests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Software Delivery Acceleration:&lt;/strong&gt; There needs to be a proper process in place, which enables developers to accelerate their software delivery speed, while also ensuring that the proper compliance policies are being followed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Day 2 Operations:&lt;/strong&gt; Once your applications are all deployed onto Kubernetes you would want to make sure that they are stable. This means ensuring that they can be updated to newer versions using deployment patterns such as blue-green or canary, without any significant downtime. This also involves getting visibility into the environments and checking if there are any resource constraints, ensuring dynamic resource scaling to meet workloads, etc.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Challenges with Getting Started
&lt;/h1&gt;

&lt;p&gt;Within the entire Kubernetes adoption journey, there are multiple different challenges that organizations face. Let’s look at some of these challenges, and understand why migrating to Kubernetes takes a significant amount of time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steep Learning Curve
&lt;/h2&gt;

&lt;p&gt;Kubernetes brings a lot of flexibility to the table, which enables developers and operations teams to have faster release cycles, while also ensuring maximum reliability. However, these advantages come with a lot of added complexity and nuances. In the pre-Kubernetes era, we were used to handling a lot of VM-level abstraction. This laid down a different foundation, mental model, and building model for the infrastructure components.&lt;/p&gt;

&lt;p&gt;Understanding these models in a Kubernetes context is a challenge, and it takes a lot of time to understand these nuances well. Even after understanding the Kubernetes nuances, developers lack the confidence needed to deploy their applications onto Kubernetes. You don’t want to take your applications to production if you can’t control your applications to the best of your ability, as it might affect the SLAs set in place. Within the &lt;a href="https://20518613.fs1.hubspotusercontent-na1.net/hubfs/20518613/Spectro%20Cloud%202024%20State%20of%20Production%20Kubernetes%20(1).pdf?utm_campaign=2024%20State%20of%20Production%20Kubernetes&amp;amp;utm_medium=email&amp;amp;_hsenc=p2ANqtz-8b-cWU5nqN9chXriNn-13vsvFUAcaEi_z6-NTLAgCWKiiWc0M_4ddql7yhViMvZvQzwXWLAoJSCsnUOc1uiH4n5tDCFo0QlcHCk3sD-MpPrzNIkYY&amp;amp;_hsmi=308611242&amp;amp;utm_content=308611242&amp;amp;utm_source=hs_automation" rel="noopener noreferrer"&gt;State of Production Kubernetes 2024&lt;/a&gt; report by Spectro Cloud, 77% of respondents said that Kubernetes complexities have inhibited their adoption journey.&lt;/p&gt;

&lt;p&gt;Before Kubernetes, developers had a very simple world, where they needed to know only a few technologies and develop software to solve the business concerns. They didn’t need to worry about different infrastructure components, and if the application’s design worked well with the existing infrastructure. Learning about Kubernetes for their application takes time, and it’s a new burden for the developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Containerization
&lt;/h2&gt;

&lt;p&gt;Kubernetes, by its very nature, is designed to run workloads as microservices. When migrating to Kubernetes, one of the first challenges you will face is containerizing your workloads. If you’ve previously run a monolithic application, i.e. every component bundled into a single large application, you will need to break these components down into smaller, self-contained applications, i.e. microservices, to obtain the maximum benefit from shifting to a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Kubernetes is a container orchestrator, which means that to run your workloads, each application will need to be containerized. Learning how to create, build, and use a container image adds to its learning curve. Moreover, depending on your application’s tech stack, the configurations needed for creating a container image would vary.&lt;/p&gt;

&lt;h1&gt;
  
  
  Challenges with Tool Integrations
&lt;/h1&gt;

&lt;p&gt;The cloud-native ecosystem is nothing less than an ocean: the deeper you go, the easier it is to get lost. There are hundreds of tools built for specific use cases, and the biggest problem is integrating and managing the resulting tool sprawl.&lt;/p&gt;

&lt;h2&gt;
  
  
  Broad Ecosystem
&lt;/h2&gt;

&lt;p&gt;Kubernetes has a huge and &lt;a href="https://devtron.ai/blog/elevating-cloud-native-development-kubernetes/" rel="noopener noreferrer"&gt;extensive ecosystem&lt;/a&gt;. If you look at the &lt;a href="https://landscape.cncf.io/" rel="noopener noreferrer"&gt;CNCF landscape&lt;/a&gt;, there are 100+ different tools, all solving different problems. To make your cluster production-ready, it is necessary to integrate some of these tools within your clusters. However, the sheer number of solutions in every category can be overwhelming. You have to evaluate the tools and select the ones that fit your needs.&lt;/p&gt;

&lt;p&gt;Even after you’ve evaluated the tools, and shortlisted a small number of tools that meet your requirements, there still is the question of how you are going to integrate these tools within your cluster. Some of the tools are straightforward to integrate, but many require a lot of additional configuration which adds a cognitive burden and introduces a learning curve for developers.&lt;/p&gt;

&lt;p&gt;While having the choice to pick your tools offers quite a lot of flexibility, &lt;a href="https://20518613.fs1.hubspotusercontent-na1.net/hubfs/20518613/Spectro%20Cloud%202024%20State%20of%20Production%20Kubernetes%20(1).pdf?utm_campaign=2024%20State%20of%20Production%20Kubernetes&amp;amp;utm_medium=email&amp;amp;_hsenc=p2ANqtz-8b-cWU5nqN9chXriNn-13vsvFUAcaEi_z6-NTLAgCWKiiWc0M_4ddql7yhViMvZvQzwXWLAoJSCsnUOc1uiH4n5tDCFo0QlcHCk3sD-MpPrzNIkYY&amp;amp;_hsmi=308611242&amp;amp;utm_content=308611242&amp;amp;utm_source=hs_automation" rel="noopener noreferrer"&gt;48% of survey respondents&lt;/a&gt; state that it is very difficult to choose the right tools from the broad ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-cluster &amp;amp; Multi-cloud Strategy
&lt;/h2&gt;

&lt;p&gt;A lot of different organizations try to adopt a multi-cluster or multi-cloud strategy for their workloads i.e. spreading their workloads across multiple Kubernetes clusters or multiple different cloud providers. There is also an increasing demand for adopting a Hybrid cloud strategy i.e. hosting some clusters on a public cloud such as AWS or GCP, and hosting other clusters in on-prem infrastructure. This can help with enhanced reliability for applications and ensures maximum uptime. However, managing a multi-cloud or hybrid workload is not easy.&lt;/p&gt;

&lt;p&gt;One of the most prevalent challenges with a multi-cloud setup is a lack of visibility across all the clusters. There isn’t a uniform way in which you can look at the Kubernetes objects of all clusters in a single place. There is constant context switching between multiple clusters which can make it difficult to debug an application if needed or understand how certain components are related to each other.&lt;/p&gt;

&lt;h1&gt;
  
  
  Challenges with Securing Kubernetes Clusters
&lt;/h1&gt;

&lt;p&gt;Every piece of software has a common shared struggle: being secure enough. Kubernetes is no different, and securing the cluster can become a huge challenge. Kubernetes has many features that help in security, but they can become overwhelming even for experienced K8s users. Let’s explore these challenges in detail.&lt;/p&gt;

&lt;h2&gt;
  
  
  Access Control Management
&lt;/h2&gt;

&lt;p&gt;Managing the right level of access control for aspects of your Kubernetes cluster is a big challenge that many Kubernetes adopters face. For example, you might want to allow your developers to deploy applications in a staging environment, but not in a production environment. Out of the box, Kubernetes provides a lot of mechanisms for creating fine-grained access control for multiple different users.&lt;/p&gt;

&lt;p&gt;The real challenge lies with managing access control and creating the right level of abstraction in real time without hindering the speed and agility of teams. Imagine if you accidentally gave super admin permissions to anyone within your organization. They would be able to do anything within the cluster, which might disrupt your services and lead to unhappy customers.&lt;/p&gt;

&lt;h2&gt;
  
  
  DevSecOps implementation
&lt;/h2&gt;

&lt;p&gt;When running on Kubernetes, you want to ensure that you have robust DevSecOps practices in place. Before deploying your application to any environment, you should run some security scans on the application code and containers.&lt;/p&gt;

&lt;p&gt;Evaluating tools, and integrating them within your CI/CD pipelines is quite a big challenge. Moreover, what if you want to set some governance policies based on the number of vulnerabilities found in the security scans? Setting these policies can also be a big challenge.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;The adoption and migration journey from a traditional VM-based workload to a Kubernetes environment is quite long and filled with challenges at different levels. The learning curve is especially high because Kubernetes itself is a distributed system, and there are many different tools within the Kubernetes ecosystem that you will need to learn about. Even after getting past the high learning curve, there are still many challenges with security, different tool stacks, setting up monitoring, etc, and then managing all of these different aspects within the cluster. This adoption journey can typically take anything from a few months, to even a few years depending on the scale of your organization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://devtron.ai/" rel="noopener noreferrer"&gt;Devtron&lt;/a&gt; is a modern Kubernetes dashboard that simplifies a lot of the challenges with Kubernetes. With the help of Devtron, you can reduce your adoption time from a couple of months to just a couple of weeks. It helps reduce a lot of the unnecessary complexities in Kubernetes so that you can focus on developing your applications, and not worry about the operations. To learn more about how Devtron addresses all the challenges mentioned above, check out the &lt;a href="https://devtron.ai/blog/how-devtron-helps-with-kubernetes-adoption/" rel="noopener noreferrer"&gt;second part of this article&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>cloud</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Setting up custom DNS routing on EKS Cluster</title>
      <dc:creator>Devtron</dc:creator>
      <pubDate>Wed, 02 Oct 2024 10:52:00 +0000</pubDate>
      <link>https://forem.com/devtron_inc/setting-up-custom-dns-routing-on-eks-cluster-4141</link>
      <guid>https://forem.com/devtron_inc/setting-up-custom-dns-routing-on-eks-cluster-4141</guid>
      <description>&lt;p&gt;Our one of the third party API URL was failing to resolve, so we figured out the solution to route through Google Public DNS, thus changing the routing of a particular domain from EKS Default DNS ( 10.100.0.10 ) to resolve using Google Public DNS.We used 8.8.8.8, the primary DNS server for Google DNS, in order to function it correctly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure Conditional Forwarder with CoreDNS in Amazon EKS cluster
&lt;/h2&gt;

&lt;h2&gt;
  
  
  What is CoreDNS?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;CoreDNS is a DNS server that is modular and pluggable, and each plugin adds new functionality to CoreDNS. This can be configured by maintaining a Corefile, which is the CoreDNS configuration file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As a cluster administrator, you can modify the ConfigMap for the CoreDNS Corefile to change how DNS service discovery behaves for that cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CoreDNS uses negative caching whereas kube-dns does not (this means CoreDNS can cache failed DNS queries as well as successful ones, which overall should equal better speed in name resolution).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can use CoreDNS to configure conditional forwarding for DNS queries that should be resolved by a customized DNS server (like the Google DNS server).&lt;/p&gt;

&lt;h2&gt;
  
  
  How does Amazon EKS use CoreDNS?
&lt;/h2&gt;

&lt;p&gt;Pods running inside the Amazon EKS cluster use the CoreDNS service’s cluster IP as the default name server for querying internal and external DNS records.&lt;/p&gt;
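&lt;p&gt;You can see this from inside any pod: its /etc/resolv.conf points at the CoreDNS service's cluster IP. As a small illustrative sketch (the sample file contents below are typical EKS values, not taken from a real cluster):&lt;/p&gt;

```python
def nameservers(resolv_conf: str) -> list:
    """Extract nameserver entries from resolv.conf-style text."""
    return [line.split()[1]
            for line in resolv_conf.splitlines()
            if line.strip().startswith("nameserver")]

# Typical resolv.conf as mounted into a pod on an EKS cluster:
# the single nameserver is the CoreDNS (kube-dns) service's cluster IP.
sample = """\
nameserver 10.100.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
"""
print(nameservers(sample))  # ['10.100.0.10']
```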

&lt;p&gt;Follow the steps below to modify the CoreDNS ConfigMap and add the conditional forwarder configuration.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Run the following command:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl -n kube-system edit configmap coredns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output of the command should be:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt; 
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt; 
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;eks.amazonaws.com/component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;coredns&lt;/span&gt; 
    &lt;span class="na"&gt;k8s-app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-dns&lt;/span&gt; 
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;coredns&lt;/span&gt; 
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt; 
&lt;span class="na"&gt;data: Corefile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt; 
        &lt;span class="s"&gt;.:53 { &lt;/span&gt;
            &lt;span class="s"&gt;errors &lt;/span&gt;
            &lt;span class="s"&gt;health &lt;/span&gt;
            &lt;span class="s"&gt;kubernetes cluster.local in-addr.arpa ip6.arpa { &lt;/span&gt;
              &lt;span class="s"&gt;pods insecure &lt;/span&gt;
              &lt;span class="s"&gt;upstream &lt;/span&gt;
              &lt;span class="s"&gt;fallthrough in-addr.arpa ip6.arpa &lt;/span&gt;
            &lt;span class="s"&gt;} &lt;/span&gt;
           &lt;span class="s"&gt;prometheus :9153 &lt;/span&gt;
           &lt;span class="s"&gt;proxy . /etc/resolv.conf &lt;/span&gt;
           &lt;span class="s"&gt;cache 30 &lt;/span&gt;
           &lt;span class="s"&gt;loop &lt;/span&gt;
           &lt;span class="s"&gt;reload&lt;/span&gt;
           &lt;span class="s"&gt;loadbalance &lt;/span&gt;
       &lt;span class="err"&gt;}&lt;/span&gt; 
       &lt;span class="s"&gt;domain-name:53 {&lt;/span&gt; 
           &lt;span class="s"&gt;errors&lt;/span&gt;
           &lt;span class="s"&gt;cache 30&lt;/span&gt; 
           &lt;span class="s"&gt;forward . custom-dns-server&lt;/span&gt; 
           &lt;span class="s"&gt;reload&lt;/span&gt; 
     &lt;span class="s"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; We have customized the above ConfigMap with the domain name &lt;a href="http://plapi.ecomexpress.in" rel="noopener noreferrer"&gt;&lt;strong&gt;plapi.ecomexpress.in&lt;/strong&gt;&lt;/a&gt;. Replace it with your own domain name.&lt;/p&gt;

&lt;p&gt;Here, the &lt;strong&gt;custom DNS server IP address&lt;/strong&gt; for Google DNS (8.8.8.8) is used. Replace it with your own custom DNS server’s IP address.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The final CoreDNS ConfigMap will look like:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Corefile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
         &lt;span class="s"&gt;.:53 {&lt;/span&gt;
               &lt;span class="s"&gt;errors&lt;/span&gt;
               &lt;span class="s"&gt;health&lt;/span&gt;
               &lt;span class="s"&gt;kubernetes cluster.local in-addr.arpa ip6.arpa {    &lt;/span&gt;
                   &lt;span class="s"&gt;pods insecure&lt;/span&gt;
                   &lt;span class="s"&gt;upstream&lt;/span&gt;
                   &lt;span class="s"&gt;fallthrough in-addr.arpa ip6.arpa&lt;/span&gt;
                 &lt;span class="s"&gt;}&lt;/span&gt;
                 &lt;span class="s"&gt;prometheus :9153&lt;/span&gt;
                 &lt;span class="s"&gt;forward . /etc/resolv.conf&lt;/span&gt;
                 &lt;span class="s"&gt;cache 30&lt;/span&gt;
                 &lt;span class="s"&gt;loop&lt;/span&gt;
                 &lt;span class="s"&gt;reload&lt;/span&gt;
                 &lt;span class="s"&gt;loadbalance&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;span class="s"&gt;plapi.ecomexpress.in:53 {&lt;/span&gt;
       &lt;span class="s"&gt;errors&lt;/span&gt;
       &lt;span class="s"&gt;cache &lt;/span&gt;&lt;span class="m"&gt;30&lt;/span&gt;
       &lt;span class="s"&gt;forward . 8.8.8.8&lt;/span&gt;
       &lt;span class="s"&gt;reload&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. To verify that domain-name resolution works, run the following command:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;prod@ip-192-168-X-XXX:/home/devtron$ kubectl exec busybox -- nslookup domain-name.in&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Replace the &lt;strong&gt;domain-name&lt;/strong&gt; with your &lt;strong&gt;domain name&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The output before updating custom route for CoreDNS:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;prod@ip-192-168-X-XXX:/home/devtron$ kubectl exec busybox -- nslookup plapi.ecomexpress.in&lt;/span&gt;

&lt;span class="na"&gt;Server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;    &lt;span class="s"&gt;10.100.0.10&lt;/span&gt;
&lt;span class="na"&gt;Address 1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.100.0.10 kube-dns.kube-system.svc.cluster.local&lt;/span&gt;
&lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;      &lt;span class="s"&gt;plapi.ecomexpress.in&lt;/span&gt;
&lt;span class="na"&gt;Address 1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;172.20.92.37 ip-172-20-92-37.ap-south-1.compute.internal&lt;/span&gt;
&lt;span class="na"&gt;Address 2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;172.20.54.52 ip-172-20-54-52.ap-south-1.compute.internal&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The output after updating custom route for CoreDNS:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;prod@ip-192-168-X-XXX:/home/devtron$ kubectl exec busybox -- nslookup plapi.ecomexpress.in&lt;/span&gt;

&lt;span class="na"&gt;Server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;    &lt;span class="s"&gt;10.100.0.10&lt;/span&gt;
&lt;span class="na"&gt;Address 1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.100.0.10 kube-dns.kube-system.svc.cluster.local&lt;/span&gt;
&lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;      &lt;span class="s"&gt;plapi.ecomexpress.in&lt;/span&gt;
&lt;span class="na"&gt;Address 1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;35.154.40.19 ec2-35-154-40-19.ap-south-1.compute.amazonaws.com&lt;/span&gt;
&lt;span class="na"&gt;Address 2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;3.6.218.14 ec2-3-6-218-14.ap-south-1.compute.amazonaws.com&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>Creating a Production grade EKS Cluster using EKSCTL</title>
      <dc:creator>Devtron</dc:creator>
      <pubDate>Mon, 30 Sep 2024 10:51:00 +0000</pubDate>
      <link>https://forem.com/devtron_inc/creating-a-production-grade-eks-cluster-using-eksctl-3lbk</link>
      <guid>https://forem.com/devtron_inc/creating-a-production-grade-eks-cluster-using-eksctl-3lbk</guid>
      <description>&lt;h4&gt;
  
  
  What Is EKSCTL?
&lt;/h4&gt;

&lt;p&gt;EKSCTL automates much of the work of creating an EKS cluster. It is written in Go, uses the AWS CloudFormation service under the hood, and is the official CLI for Amazon EKS. The current version of &lt;strong&gt;eksctl&lt;/strong&gt; allows you to create a number of clusters, list them, and delete them as well.&lt;/p&gt;

&lt;h4&gt;
  
  
  Amazon Production Grade EKS Cluster with One Command:
&lt;/h4&gt;

&lt;p&gt;When we look at creating a Production grade &lt;a href="https://devtron.ai/blog/upgrade-eks-1-16-cluster-to-eks-1-17-using-eksctl-in-6-steps/" rel="noopener noreferrer"&gt;EKS Cluster&lt;/a&gt;, we can create an EKS Cluster with the following command: &lt;strong&gt;eksctl create cluster&lt;/strong&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;When you run the above command, following things happen:&lt;/strong&gt;
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Sets up the AWS Identity and Access Management (IAM) role for the control plane to connect to EKS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Creates the Amazon VPC architecture and the control plane.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Brings up worker instances and deploys the ConfigMap so the nodes can join the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provides access to the cluster with a pre-defined kubeconfig file.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Create a Production Grade EKS Cluster Using Config Files
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;You can create a Production Grade EKS Cluster using a config file. Following are the steps:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, attach the following AWS Managed Policies to the role/user/group that will create an &lt;a href="https://devtron.ai/blog/aws-eks-vs-kops-what-to-chose/" rel="noopener noreferrer"&gt;EKS&lt;/a&gt; Cluster using EKSCTL.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AmazonEC2FullAccess&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IAMFullAccess&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AmazonVPCFullAccess&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWSCloudFormationFullAccess&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Second, create a &lt;strong&gt;cluster.yaml&lt;/strong&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Properties:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;onDemandBaseCapacity&lt;/strong&gt;: The minimum amount of the Auto Scaling group’s capacity that must be fulfilled by On-Demand Instances. The default value is 0; beyond this base, On-Demand Instances are launched as a percentage of the group’s additional desired capacity, as per the onDemandPercentageAboveBaseCapacity setting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;onDemandPercentageAboveBaseCapacity&lt;/strong&gt;: Controls the split between On-Demand Instances and Spot Instances for the capacity beyond onDemandBaseCapacity. The range is 0–100, and the default value is 100. With this property set to 50, half of the capacity above the base is On-Demand and the other half is Spot.&lt;/p&gt;
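&lt;p&gt;To make the interplay of these two settings concrete, here is a small illustrative calculation (the helper name and the exact rounding rule are assumptions for illustration, not eksctl’s or AWS’s exact algorithm):&lt;/p&gt;

```python
import math

def on_demand_split(desired, base_capacity, pct_above_base):
    """Sketch of how a mixed-instances Auto Scaling group divides capacity.

    base_capacity: instances always fulfilled by On-Demand.
    pct_above_base: percentage of the remaining capacity that is On-Demand
    (the rest is Spot). Rounding here favours On-Demand.
    """
    above_base = max(desired - base_capacity, 0)
    on_demand_above = math.ceil(above_base * pct_above_base / 100)
    on_demand = base_capacity + on_demand_above
    spot = desired - on_demand
    return on_demand, spot

# 10 desired instances, a base of 2, and 50% On-Demand above the base:
# 2 base + 4 of the remaining 8 are On-Demand, the other 4 are Spot.
print(on_demand_split(10, 2, 50))  # (6, 4)
```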

&lt;p&gt;&lt;strong&gt;vpc and subnets:&lt;/strong&gt; If you don’t define these two properties, AWS will automatically create a VPC and subnets and assign them their respective IDs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;attachPolicyARNs:&lt;/strong&gt; Attaches the specified managed policies to the node group’s IAM role. When you set this field, you must list custom and managed policies explicitly, because only the policies defined here are attached; if you leave it blank, AWS implicitly attaches its own policies for creating an EKS Cluster.&lt;/p&gt;
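&lt;p&gt;Putting these properties together, a &lt;strong&gt;cluster.yaml&lt;/strong&gt; might look roughly like the following sketch (the cluster name, region, IDs, and instance types are placeholders; consult the eksctl schema for your version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: prod-cluster        # placeholder
  region: ap-south-1

vpc:                        # omit this block to let eksctl create a VPC
  id: vpc-0example
  subnets:
    private:
      ap-south-1a: { id: subnet-0example1 }
      ap-south-1b: { id: subnet-0example2 }

nodeGroups:
  - name: ng-mixed
    minSize: 2
    maxSize: 10
    desiredCapacity: 4
    instancesDistribution:
      instanceTypes: ["m5.large", "m5a.large"]
      onDemandBaseCapacity: 2
      onDemandPercentageAboveBaseCapacity: 50
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;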

&lt;p&gt;Next, run this command to create the EKS cluster from your YAML file: &lt;strong&gt;eksctl create cluster -f cluster.yaml&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That’s it! Your Production Grade EKS Cluster is ready. For the eksctl documentation, check the following link: &lt;a href="https://eksctl.io/introduction/#getting-started/" rel="noopener noreferrer"&gt;https://eksctl.io/introduction/#getting-started/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To continue learning more about EKS, read this blog post on how to &lt;a href="https://devtron.ai/blog/setting-up-custom-dns-routing-on-eks-cluster/" rel="noopener noreferrer"&gt;set up custom DNS routing on an EKS cluster.&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Cost Optimization Parameters and Metrics: Part-2</title>
      <dc:creator>Devtron</dc:creator>
      <pubDate>Fri, 27 Sep 2024 12:51:00 +0000</pubDate>
      <link>https://forem.com/devtron_inc/aws-cost-optimization-parameters-and-metrics-part-2-3nim</link>
      <guid>https://forem.com/devtron_inc/aws-cost-optimization-parameters-and-metrics-part-2-3nim</guid>
      <description>&lt;p&gt;Are you trying to learn more about AWS cloud cost management? Is your monthly bill of AWS, surging and you're perplexed, if you are using all the resources that you have paid for? Here, we are back with an extended version of our previous blog on &lt;a href="https://devtron.ai/blog/aws-cost-optimization-parameters-and-metrics-part-1/" rel="noopener noreferrer"&gt;AWS Cost Optimization&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It is a well-known fact that moving to Amazon Web Services (AWS) can provide a huge number of benefits in terms of agility, responsiveness, much-simplified operations, and improved innovation. However, there is an assumption that migrating to the public cloud will automatically lead to cost savings. In reality, though, AWS cost optimization is tougher than you might think.&lt;/p&gt;

&lt;p&gt;To optimize your costs, you need to know exactly what your organization needs and how and when those resources are used, while adapting to the increased demands of agile, fast-paced technology.&lt;/p&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.blazeclan.com%2Fwp-content%2Fuploads%2F2018%2F12%2FCost-optimisation-in-AWS.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.blazeclan.com%2Fwp-content%2Fuploads%2F2018%2F12%2FCost-optimisation-in-AWS.png" alt="aws-cost-optimization"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog, we will cover 4 indispensable pillars of cost optimization.&lt;/p&gt;

&lt;h2&gt;Pillar 1: Cost effective resources&lt;/h2&gt;
&lt;br&gt;

&lt;h3&gt;1. Autoscaling&lt;/h3&gt;

&lt;ul&gt;
    &lt;li&gt;Automating the deployment of applications requires software tools and frameworks, in addition to proper infrastructure (with enough resources, such as servers and services).&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
    &lt;li&gt;You can provision test environments manually using AWS’s APIs or Command Line Interface (CLI) tools. However, manual intervention leads to lower productivity from the teams. Hence, there is an absolute need to automate infrastructure processes (like provisioning test environments), which will improve productivity and efficiency. Some of the ways in which you can automate processes are:
        &lt;ol&gt;
            &lt;li&gt;
&lt;strong&gt;Provisioning EC2 instances via Amazon Machine Images (AMI):&lt;/strong&gt; AMI encapsulates OS and other software/configuration files. When an instance starts, all the applications come pre-loaded from the AMI. AMIs enable the launch of standardized instances across multiple regions, by allowing the copying of AMIs from one region to another. &lt;/li&gt;
            &lt;li&gt;
&lt;strong&gt;Deploying platforms using Amazon Elastic Beanstalk:&lt;/strong&gt; With AWS Elastic Beanstalk, you can easily deploy and scale web applications and services developed with Node.js, Python etc, on familiar servers such as Apache, Nginx, and IIS. You just need to upload your code and Elastic Beanstalk will automatically handle the deployment, capacity provisioning, load balancing, auto-scaling, and application health monitoring. At the same time, you retain full control over all the cloud resources powering your application.&lt;/li&gt;
        &lt;/ol&gt;
    &lt;/li&gt;
&lt;/ul&gt;


&lt;h3&gt;2. Right sizing of instances&lt;/h3&gt;
&lt;blockquote&gt;
&lt;p&gt;Based on an analysis of OS instances in North America, it was found that about 84% of instances were not correctly sized. It was estimated that right-sizing these instances, by porting them to optimally sized AWS resources, could reduce costs by 36% (USD 55 million).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can achieve right-sizing by making sure:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;You use the correct granularity for the time period of analysis that is required to cover any system cycles. For example, &lt;em&gt;if a two-week analysis is performed, you might overlook a monthly cycle of high utilization, which could lead to under-provisioning.&lt;/em&gt;
&lt;/li&gt;
    &lt;li&gt;Right-sizing is an iterative process, triggered by changes in usage patterns and external factors like AWS price drops or new AWS resource types.
&lt;/li&gt;
&lt;/ul&gt;


&lt;h3&gt;3. Purchasing options to save cost&lt;/h3&gt;

&lt;p&gt;Amazon EC2 provides a number of purchasing models for instances. Using the Reserved Instance purchasing model can help you save up to 75% over On-Demand capacity. Spot Instances are another phenomenal way to save money for stateless workloads.&lt;/p&gt;


&lt;h4&gt;Reserved instances&lt;/h4&gt;

&lt;p&gt;A Reserved Instance is a 1- or 3-year commitment towards purchasing a reservation of capacity. In exchange, you pay a significantly lower hourly rate: Reserved Instances enable up to 75% savings over On-Demand capacity. Moreover, you get the chance to sell unused Reserved Instances.&lt;/p&gt;


&lt;h4&gt;Spot instances&lt;/h4&gt;

&lt;p&gt;Spot Instances provide a way to save by allowing you to use spare compute capacity at a significantly lower cost than On-Demand instances (up to 90%). You can also use Spot instances to increase your computing scale and throughput for the same budget.&lt;br&gt;&lt;br&gt;Spot instances can be used when you need large computing capacity, such as for batch processing, scientific research, financial analysis, and testing, and when your workload can tolerate interruptions, provided you have ways to deal with such interruptions.&lt;/p&gt;


&lt;h3&gt;4. Use of the correct AWS S3 storage class&lt;/h3&gt;

&lt;p&gt;Storage in AWS is cheap, but that’s not a good reason to keep your organization’s data there forever.&lt;br&gt;&lt;br&gt;It is necessary to clean data out of S3 from time to time, so as to optimize storage costs. Here are some of the storage options that can be used:&lt;/p&gt;


&lt;ul&gt;
    &lt;li&gt;
&lt;strong&gt;Amazon S3 Standard Infrequent Access (S3-IA):&lt;/strong&gt; You can use this type of storage for storing data that is accessed less frequently. The data can be retrieved rapidly whenever needed. The major drawbacks are a retrieval fee of $0.01 per GB and a 128 KB minimum billable object size; if you have smaller objects, it will be more expensive than standard storage.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Amazon S3 Glacier:&lt;/strong&gt; It can be used for archives, where a portion of the data might be required to be retrieved within minutes. The data stored in S3 Glacier has a minimum storage duration of 90 days and can be retrieved within 1-5 minutes.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Delete Policy:&lt;/strong&gt; For those files that you think might not be required anymore, set up a delete policy.&lt;/li&gt;
&lt;/ul&gt;
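&lt;p&gt;The transitions and delete policy described above are usually expressed as an S3 lifecycle configuration. A sketch (the prefix and day counts are illustrative) that moves objects to S3-IA after 30 days, to Glacier after 90, and deletes them after a year:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "Rules": [
    {
      "ID": "archive-then-expire-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;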

&lt;blockquote&gt;
&lt;p&gt;For pricing information on Amazon S3, click on
    &lt;a href="https://aws.amazon.com/s3/pricing/" rel="noopener noreferrer"&gt;&lt;strong&gt;Amazon S3 Pricing&lt;/strong&gt;&lt;/a&gt;
&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h3&gt;5. Geographic selection&lt;/h3&gt;

&lt;ul&gt;
    &lt;li&gt;AWS Cloud infrastructure is built around Regions and Availability Zones. As of March 3, 2020, AWS has 16 public regions and 2 non-public regions. Each region operates within local market conditions, and resource pricing can differ per region. You have to choose the specific region in which to architect your solution so that you can run at the lowest price globally.&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
    &lt;li&gt;Let's consider a sample workload - 5 c5.large, 20 GB gp2 EBS storage each, 1 ELB, 5.1 TB data processed. 1 ELB sends traffic to 5 c5.large instances running Amazon Linux in the same availability zone. Each instance has 20 GB of EBS SSD storage, and each instance receives 100 GB/month from the ELB and sends 1 TB/month back to the ELB. Therefore, the ELB processes 5.1 TB/month. If we consider this workload, there is a substantial cost difference in AWS pricing across different Regions. It costs 52% more to deploy this infrastructure in a location in South America, compared to a location in North America.&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
    &lt;li&gt;While you may choose the region that minimizes your cost, it’s a best practice to place computing resources closer to users, so as to provide lower latency and strong data sovereignty.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Pillar 2: Matching supply and demand&lt;/h2&gt;

&lt;p&gt;You can deliver services at a low cost when the infrastructure is optimized. This can be done using the following methods:&lt;/p&gt;

&lt;h3&gt;1. Demand-based approach&lt;/h3&gt;

&lt;p&gt;This approach leverages the &lt;strong&gt;Elasticity&lt;/strong&gt; (the ability to scale up or down, managing capacity and provisioning resources as demand changes) of the AWS Cloud. AWS provides APIs and services for the dynamic allocation of cloud resources to your application or solution. As per AWS best practices, you should use &lt;strong&gt;AWS Auto Scaling&lt;/strong&gt;, a service that makes scaling simple, with recommendations that allow you to optimize performance and cost.&lt;/p&gt;

&lt;h3&gt;2. Buffer-based approach&lt;/h3&gt;

&lt;p&gt;A buffer in AWS will allow your applications to communicate with each other when they are running at different rates over time. This approach involves decoupling components of a cloud application and creating a queue that accepts messages. A buffer will queue the request, until the resources are available.&lt;br&gt; This approach is suitable if you have workloads that are not predictable or time-sensitive. Some of the key AWS services that enable this approach are &lt;strong&gt;Amazon SQS&lt;/strong&gt; and &lt;strong&gt;Amazon Kinesis&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;If you have a workload that generates write load, and need not be processed immediately, you can use the buffer to smooth out demands on resources.&lt;/p&gt;

&lt;h3&gt;3. Time-based approach&lt;/h3&gt;

&lt;p&gt;This approach involves aligning resource capacity to demand that is predictable over specified time periods. If you know when resources are going to be required, you can time your system to make the right resources available at the right time. You can implement time-based resource allocation by timing your auto scaling. However, while using auto scaling for this approach, you need to be careful about the following:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Load-based auto scaling is not always appropriate in every situation, for example, cloud deployments for small startups that have fewer than 50 instances and witness unusual traffic patterns. In such cases, close matching of demand and supply may not be optimal.&lt;/li&gt;
    &lt;li&gt;Auto scaling can take around 5 minutes to add a new instance, and another 3-5 minutes for that instance to start serving. During this gap there may not be enough instances to handle the load, and existing instances can become overloaded. This, in turn, slows down health checks, and the ELB removes those instances, which can worsen the situation.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Pillar 3: Expenditure monitoring&lt;/h2&gt;

&lt;p&gt;One of the most crucial drivers for effective decision making in an organization is having a crystal-clear view of AWS resource metrics. AWS recommends the following approaches to achieve expenditure monitoring:&lt;/p&gt;

&lt;h3&gt;1. Stakeholders&lt;/h3&gt;

&lt;p&gt;It is a good practice to have the necessary stakeholders involved in expenditure awareness discussions, as it produces better outcomes. It is recommended to involve financial stakeholders such as CFOs, business unit owners, and any third parties that are directly involved in resource expenditure.&lt;br&gt;&lt;br&gt;This brings any hidden costs to the forefront, provides opportunities for cost optimization, and makes sure that costs are correctly allocated to the right business unit.&lt;br&gt;&lt;/p&gt;

&lt;h3&gt;2. Reserved instance reporting&lt;/h3&gt;

&lt;p&gt;The RI Utilization Report and the RI Coverage Report are the key tools that help you analyze costs. These reports visualize the percentage of running instance hours that are covered by Reserved Instances, either in aggregate or in detail (by account, instance type, region, availability zone, tags, platform, etc.).&lt;/p&gt;

&lt;p&gt;What is the &lt;strong&gt;Reserved Instance utilization report&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;It allows you to visualize RI utilization (the percentage of purchased RI hours consumed by instances during a period of time) and shows how much in savings has accrued due to the use of Reserved Instances.&lt;/p&gt;

&lt;p&gt;What is the &lt;strong&gt;Reserved Instance coverage report&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;It allows you to discover how much of the overall instance usage is covered by RIs, so you can make informed decisions about when to modify or purchase RIs to ensure maximum coverage.&lt;/p&gt;
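&lt;p&gt;Both reports boil down to simple ratios, which a toy calculation can make concrete (the helper names and numbers are made up for illustration):&lt;/p&gt;

```python
def ri_utilization(purchased_hours, used_hours):
    """Percent of purchased RI hours actually consumed by running instances."""
    return 100 * used_hours / purchased_hours

def ri_coverage(total_instance_hours, ri_covered_hours):
    """Percent of overall instance usage covered by Reserved Instances."""
    return 100 * ri_covered_hours / total_instance_hours

# 720 RI hours purchased this month, 648 consumed: 90% utilization.
print(ri_utilization(720, 648))   # 90.0
# 2000 total instance hours, 1200 of them on RIs: 60% coverage.
print(ri_coverage(2000, 1200))    # 60.0
```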


&lt;h2&gt;Pillar 4: Optimizing over time&lt;/h2&gt;
&lt;br&gt;

&lt;h3&gt;1. Establishing cost optimization function&lt;/h3&gt;

&lt;ul&gt;
    &lt;li&gt;You can establish a dedicated cost optimization function within your organization.&lt;/li&gt;
    &lt;li&gt;This function can be performed by an existing team, such as a Cloud Center of Excellence, or by a new team of key stakeholders from the appropriate business units in the organization.&lt;/li&gt;
    &lt;li&gt;This function will coordinate and manage all aspects of cost optimization, across your technical teams, your people, and all the processes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;2. Monitor, track and analyze your service usage&lt;/h3&gt;

&lt;ul&gt;
    &lt;li&gt;AWS recommends establishing strong goals and metrics for the organization to measure itself against. These goals should include costs, but should also surface the business output of your systems to quantify the impact of your improvements.&lt;/li&gt;
    &lt;li&gt;AWS suggests using tools like AWS Trusted Advisor and Amazon CloudWatch to monitor your usage and handle the workloads accordingly.&lt;/li&gt;
    &lt;li&gt;You can use &lt;strong&gt;Consolidated Billing&lt;/strong&gt; if you have multiple AWS accounts. This service has no additional charge and gives you a combined view of all the charges across all of your AWS accounts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that you know the four pillars, share your cost optimization experience with us in the comments below, or connect with us on our community &lt;a href="https://discord.devtron.ai/" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; server.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Adopting Kubernetes also plays a vital role in optimizing cost if implemented with the right approach. In addition to cost optimization, it also helps to &lt;a href="https://devtron.ai/blog/decrease-carbon-footprints-using-kubernetes/" rel="noopener noreferrer"&gt;reduce the carbon footprint released by the organization&lt;/a&gt;.
&lt;/p&gt;


&lt;/blockquote&gt;

</description>
    </item>
    <item>
      <title>How to deploy Kubernetes Secrets with AWS Secrets Manager</title>
      <dc:creator>Devtron</dc:creator>
      <pubDate>Mon, 23 Sep 2024 10:50:00 +0000</pubDate>
      <link>https://forem.com/devtron_inc/how-to-deploy-kubernetes-secrets-with-aws-secrets-manager-48dh</link>
      <guid>https://forem.com/devtron_inc/how-to-deploy-kubernetes-secrets-with-aws-secrets-manager-48dh</guid>
      <description>&lt;p&gt;In Kubernetes, external secrets refer to managing sensitive information, such as API keys, database passwords, or other credentials, outside of the Kubernetes cluster and then securely injecting them into the cluster when needed.&lt;/p&gt;

&lt;p&gt;This approach is crucial for several reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;: Storing sensitive information directly in Kubernetes manifests or configuration files is a security risk. External secrets reduce this risk by keeping the data separate from the application code and configuration, which lowers the chance of accidental exposure or misuse of credentials.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Separation of Concerns&lt;/strong&gt;: Externalizing secrets separates sensitive data from application code and configuration. It allows developers to focus on writing code without worrying about handling sensitive data, while the operations team can manage secrets separately, applying best practices without impacting application logic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Centralized Management:&lt;/strong&gt; External secrets facilitate centralized management of sensitive information. This means secrets can be stored, rotated, and audited in a centralized system outside the Kubernetes cluster. Centralized management simplifies the task of maintaining and updating credentials without the need to modify application code or configurations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dynamic Secrets&lt;/strong&gt;: Some external secret management systems support dynamic secrets, which are credentials that are generated on-demand and have a limited lifespan. This enhances security by minimizing the exposure window for sensitive information. Kubernetes workloads can request dynamic secrets as needed, reducing the risk of unauthorized access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration with Secret Management Tools&lt;/strong&gt;: External secrets can be integrated with different secret management tools like HashiCorp Vault, AWS Secrets Manager, etc. These tools provide advanced security and features such as encryption, access controls, and audit trails.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compliance&lt;/strong&gt;: Many organizations need to adhere to specific compliance standards that mandate secure handling of sensitive information. Externalizing secrets and leveraging external secret management tools can help meet these compliance requirements.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some common third-party secrets management tools include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HashiCorp Vault:&lt;/strong&gt; A tool for managing secrets and protecting sensitive data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Secrets Manager:&lt;/strong&gt; A service for managing sensitive information used by AWS services and applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Azure Key Vault:&lt;/strong&gt; A cloud service for securely storing and managing secrets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Google Secret Manager:&lt;/strong&gt; Managed service provided by Google for storing secrets and confidential data.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopvyoc4ok18obyq78mkw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopvyoc4ok18obyq78mkw.png" alt="Image description" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog, we will dive into AWS Secrets Manager and deploy secrets in Kubernetes using External Secrets Operator.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy secrets from AWS Secrets Manager in Kubernetes using Devtron
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Prerequisite: Create Secrets in AWS Secrets Manager&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To add secrets in AWS Secrets Manager, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the AWS Secrets Manager console&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Store a new secret&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add and save your secret&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuo85biq218f82pc4hoyr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuo85biq218f82pc4hoyr.png" alt="Image description" width="800" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy secrets in Kubernetes
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step-1: Deploy External Secret Operator (ESO)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;External Secrets Operator is a Kubernetes operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, and many more. The operator reads information from external APIs and automatically injects the values into a Kubernetes Secret.&lt;/p&gt;

&lt;p&gt;Helm chart link: &lt;a href="https://charts.external-secrets.io" rel="noopener noreferrer"&gt;https://charts.external-secrets.io&lt;/a&gt;&lt;/p&gt;
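&lt;p&gt;Outside of Devtron, the operator can also be installed directly from that chart repository with standard Helm commands (the release and namespace names below are arbitrary choices):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add external-secrets https://charts.external-secrets.io
helm repo update
helm install external-secrets external-secrets/external-secrets \
  --namespace external-secrets --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;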

&lt;p&gt;&lt;strong&gt;Step-2: Deploy the AWS auth credentials in a secret named&lt;/strong&gt; &lt;code&gt;aws-secret-auth&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Create a Kubernetes secret, in the namespace in which the application is to be deployed, using the base64-encoded AWS access key and secret access key. You can use Devtron's &lt;code&gt;generic-helm-chart&lt;/code&gt; for this.&lt;/p&gt;
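&lt;p&gt;As a plain manifest, the secret would look roughly like this (the key names &lt;code&gt;access-key&lt;/code&gt; and &lt;code&gt;secret-access-key&lt;/code&gt; are whatever your &lt;code&gt;SecretStore&lt;/code&gt; will reference later, and the values below are placeholders, not real credentials):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: aws-secret-auth
  namespace: my-app                        # namespace of your application
type: Opaque
data:
  access-key: QUtJQUV4YW1wbGU=             # echo -n 'AKIAExample' | base64
  secret-access-key: c2VjcmV0RXhhbXBsZQ==  # echo -n 'secretExample' | base64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;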

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1rn4ld83pnnxowxrt40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1rn4ld83pnnxowxrt40.png" alt="Image description" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-3: Create a secret for your application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to the &lt;code&gt;App Configuration&lt;/code&gt; &amp;gt;&amp;gt; &lt;code&gt;Secrets&lt;/code&gt; and click on &lt;code&gt;Add Secret&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyip1b3yl350co2wl51ks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyip1b3yl350co2wl51ks.png" alt="Image description" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select &lt;code&gt;AWS Secrets Manager&lt;/code&gt; under External Secret Operator (ESO) from the drop-down. You can also see the other options available; if you require a third-party secret store that is not listed as of now, it can also be supported, as long as it is supported by ESO.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS Secrets Manager&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Google Secret Manager&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HashiCorp Vault&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmcmibduaqkfbctqez6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmcmibduaqkfbctqez6q.png" alt="Image description" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-4: Configure the secret&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To configure the external secret that will be fetched from &lt;code&gt;AWS Secrets Manager&lt;/code&gt; for your application, you will need to provide specific details using the following key-value pairs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgikx8ik1le5etdpz5cnt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgikx8ik1le5etdpz5cnt.png" alt="Image description" width="800" height="570"&gt;&lt;/a&gt;&lt;/p&gt;
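&lt;p&gt;Devtron uses ESO under the hood, so the key-value pairs above correspond to an &lt;code&gt;ExternalSecret&lt;/code&gt; resource. As a rough sketch of what an equivalent raw manifest looks like (the store name, secret names, and key paths below are illustrative placeholders, not values from this setup):&lt;/p&gt;

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-external-secret        # hypothetical name
spec:
  refreshInterval: 1h              # how often ESO re-syncs from AWS
  secretStoreRef:
    name: aws-secretstore          # must match your SecretStore
    kind: SecretStore
  target:
    name: app-secret               # the Kubernetes Secret ESO will create
  data:
    - secretKey: DB_PASSWORD       # key inside the Kubernetes Secret
      remoteRef:
        key: prod/app/credentials  # secret name in AWS Secrets Manager
        property: password         # JSON field inside that secret
```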

&lt;p&gt;If you don't want to authenticate using the &lt;code&gt;access-key&lt;/code&gt; and &lt;code&gt;secret-access-key&lt;/code&gt;, attach the &lt;code&gt;SecretsManagerReadWrite&lt;/code&gt; policy to your node, and the system should automatically fetch the secrets from Secrets Manager and create them in your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhsina2vw9ncxfhrlb3o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhsina2vw9ncxfhrlb3o.png" alt="Image description" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the policies are attached, you can change the &lt;code&gt;SecretStore&lt;/code&gt; configs as specified below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29w3wbd0lboj6yymladm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29w3wbd0lboj6yymladm.png" alt="Image description" width="800" height="553"&gt;&lt;/a&gt;&lt;/p&gt;
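&lt;p&gt;For reference, a minimal &lt;code&gt;SecretStore&lt;/code&gt; that relies on the node's IAM role (no static credentials) looks roughly like the sketch below; the name and region are placeholders:&lt;/p&gt;

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secretstore      # hypothetical name
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1      # region where your secrets live
      # No auth block: the ESO controller falls back to its ambient
      # AWS credentials, e.g. a node IAM role with the
      # SecretsManagerReadWrite policy attached
```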

&lt;p&gt;Feel free to check out the official documentation of External Secrets Operator to find out how authentication can be done. &lt;a href="https://external-secrets.io/latest/provider/aws-secrets-manager/#aws-authentication" rel="noopener noreferrer"&gt;https://external-secrets.io/latest/provider/aws-secrets-manager/#aws-authentication&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Interested in mastering the use of HashiCorp Vault within Kubernetes? Dive into our insightful blog to grasp &lt;a href="https://devtron.ai/blog/how-to-deploy-hashicorp-vault-in-kubernetes/" rel="noopener noreferrer"&gt;how to deploy the Hashicorp Vault and integrate the fetched secrets into your Kubernetes applications.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As we wrap up, it's clear that unlocking the vault and harnessing the power of external secrets in Kubernetes is not just about &lt;a href="https://devtron.ai/blog/kubernetes-container-security-devsecops-best-practices/" rel="noopener noreferrer"&gt;securing your applications&lt;/a&gt;—it's about future-proofing your infrastructure, enabling innovation, and staying ahead of emerging threats.&lt;/p&gt;

&lt;p&gt;Our exploration across tools like AWS Secrets Manager and HashiCorp Vault aims to bring order to the often turbulent world of secret management. Investing in these secure secret management practices today is safeguarding your assets while fostering an environment prepared for consistent growth and success.&lt;/p&gt;

&lt;p&gt;If you have any questions feel free to reach out to us. Our thriving &lt;a href="https://rebrand.ly/Devtron-Discord?ref=devtron.ai" rel="noopener noreferrer"&gt;Discord Community&lt;/a&gt; is just a click away!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS EKS vs KOPS: Choosing the Right Kubernetes Solution</title>
      <dc:creator>Devtron</dc:creator>
      <pubDate>Fri, 20 Sep 2024 10:50:00 +0000</pubDate>
      <link>https://forem.com/devtron_inc/aws-eks-vs-kops-choosing-the-right-kubernetes-solution-kjj</link>
      <guid>https://forem.com/devtron_inc/aws-eks-vs-kops-choosing-the-right-kubernetes-solution-kjj</guid>
      <description>&lt;h3&gt;
  
  
  What is AWS EKS?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc12m7edei17227dnedye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc12m7edei17227dnedye.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Web Services (AWS) has revolutionized the way businesses handle their workloads with its Elastic Kubernetes Service (EKS). As a managed service, AWS EKS simplifies the process of deploying, managing, and scaling containerized applications. This is especially beneficial for DevOps teams, as it integrates seamlessly into their workflows, enhancing functionality and efficiency. The service provides a robust orchestration system, allowing teams to automate the deployment and scaling of applications. AWS EKS is designed to be highly available and scalable, ensuring that workloads are efficiently managed without sacrificing performance. By leveraging the power of Kubernetes, a widely adopted container-orchestration system, EKS offers an optimal solution for handling complex applications on the cloud, making it a cornerstone of modern cloud infrastructure. Although EKS fully manages the Kubernetes control plane, the disadvantage is that you cannot make changes to this control plane, and it does not give you access to the master nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is KOPS?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttlkuzms2e6v0lzktjb8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttlkuzms2e6v0lzktjb8.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kops is an acronym for “Kubernetes Operations”; it is a CLI tool that makes creating and managing Kubernetes clusters easy. Released in 2016, it predates AWS EKS. It gives you complete control over the Kubernetes environment: using kops, you can simplify Kubernetes cluster setup while retaining access to both master and worker nodes.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Kubernetes Cluster Setup
&lt;/h2&gt;

&lt;p&gt;The foremost point to consider when evaluating Kubernetes solutions on AWS is how difficult it is to &lt;a href="https://devtron.ai/blog/mistakes-to-avoid-when-configuring-a-kubernetes-cluster/" rel="noopener noreferrer"&gt;set up a working Kubernetes cluster&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Setting up a Kubernetes Cluster with EKS
&lt;/h4&gt;

&lt;p&gt;Setting up a cluster with EKS is fairly complicated and has some prerequisites. EKS does not actually create worker nodes automatically, so you must manage that process yourself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9uqx1p5pham1k8zqvu1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9uqx1p5pham1k8zqvu1.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes on AWS&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You must have the AWS CLI and aws-iam-authenticator set up as prerequisites.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Since EKS won’t set up worker nodes for you, you must manage that process yourself; this can be done using CloudFormation templates or EKSCTL. Check Creating Production clusters using EKSCTL.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To manage the process of setting up worker nodes, you can also use Terraform, which lets you set up a VPC and subnets and then use the EKS module.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
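&lt;p&gt;As an illustration, a minimal eksctl invocation that creates a cluster with a managed node group might look like this (cluster name, region, and instance sizes are placeholders):&lt;/p&gt;

```shell
# Create an EKS control plane plus a 2-node managed node group
eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --nodes 2 \
  --node-type t3.medium

# Verify that the worker nodes joined the cluster
kubectl get nodes
```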

&lt;h4&gt;
  
  
  Setting up a Kubernetes Cluster with KOPS
&lt;/h4&gt;

&lt;p&gt;Setting up a Kubernetes cluster with KOPS is simpler than with EKS, since kops manages most of the AWS resources required to run a Kubernetes cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It can create and run your Kubernetes cluster with the &lt;strong&gt;kops create cluster&lt;/strong&gt; command.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can manage most of the AWS resources that you need to set up a Kubernetes cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It will work with either a new or an existing VPC.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kops also allows you to generate Terraform configuration for the AWS resources instead of creating them directly.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
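&lt;p&gt;A sketch of that workflow (the state-store bucket and cluster name below are placeholders):&lt;/p&gt;

```shell
# kops keeps cluster state in an S3 bucket
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Generate the cluster configuration (masters and workers included)
kops create cluster \
  --name=demo.k8s.local \
  --zones=us-east-1a \
  --node-count=2 \
  --node-size=t3.medium

# Actually create the AWS resources
kops update cluster --name=demo.k8s.local --yes

# Wait until the cluster is healthy
kops validate cluster --name=demo.k8s.local --wait 10m
```

&lt;p&gt;Passing &lt;code&gt;--target=terraform&lt;/code&gt; to &lt;code&gt;kops update cluster&lt;/code&gt; emits Terraform configuration instead of creating the resources directly.&lt;/p&gt;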

&lt;h2&gt;
  
  
  2. Kubernetes Cluster Management
&lt;/h2&gt;

&lt;p&gt;After setting up a Kubernetes cluster, you must also consider what it is like to scale nodes, perform cluster upgrades, and integrate with other services.&lt;/p&gt;

&lt;h4&gt;
  
  
  Managing a Kubernetes Cluster with EKS
&lt;/h4&gt;

&lt;p&gt;Managing a cluster using EKS is easier compared to kops; the extra effort required to set up EKS using either CloudFormation or Terraform pays off when it comes to cluster maintenance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;With EKS, you don’t have to bring your entire cluster down for upgrades and updates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EKS is much more scalable: because of its highly available and fully managed control plane, you don’t have to worry as the cluster gets larger.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EKS also gives you a detailed view of internal pod management. You can easily see how pods communicate with each other, with the VPC, and with other AWS services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EKS allows you to add worker nodes by increasing the size of your Auto Scaling group.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EKS also allows you to replace worker nodes using &lt;strong&gt;kubectl drain&lt;/strong&gt; and then terminating the EC2 instance, so you can do most upgrades without disturbing the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
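&lt;p&gt;The drain-and-replace flow from the last bullet can be sketched as follows (the node name and instance ID are placeholders):&lt;/p&gt;

```shell
# Cordon the node and evict its pods onto other workers
kubectl drain ip-10-0-1-23.ec2.internal \
  --ignore-daemonsets \
  --delete-emptydir-data

# Once the pods have been rescheduled, terminate the backing instance;
# the Auto Scaling group brings up a fresh replacement node
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```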

&lt;h4&gt;
  
  
  Managing a Kubernetes Cluster with KOPS
&lt;/h4&gt;

&lt;p&gt;Though it is really easy to &lt;strong&gt;create&lt;/strong&gt; a Kubernetes cluster with kops, it’s a real pain when it comes to managing the cluster. This can be observed in the following points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You have to do a lot of work to upgrade and replace master nodes when moving to a newer version of Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It uses private networking for pods by default.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kops is usually a little behind EKS on supported Kubernetes versions, which is an added liability.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
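&lt;p&gt;For a sense of what an upgrade involves, the usual kops sequence is roughly the following (the cluster name is a placeholder):&lt;/p&gt;

```shell
# Bump the Kubernetes version in the cluster spec
kops upgrade cluster --name=demo.k8s.local --yes

# Push the updated configuration to the state store and cloud resources
kops update cluster --name=demo.k8s.local --yes

# Replace masters and nodes one by one with the new version
kops rolling-update cluster --name=demo.k8s.local --yes
```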

&lt;h2&gt;
  
  
  3. Configuration and Access
&lt;/h2&gt;

&lt;p&gt;One of the biggest differences between EKS and kops is how control and access are handled in your Kubernetes cluster.&lt;/p&gt;

&lt;h4&gt;
  
  
  Configuration and Access using EKS
&lt;/h4&gt;

&lt;p&gt;With EKS, managing the master nodes, configuring cloud environments, and similar tasks are handled by Amazon, leaving you with no control over them. This might suit developers, but not server administrators who appreciate more control over the entire environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Configuration and Access using KOPS
&lt;/h4&gt;

&lt;p&gt;Kops lets you configure cloud environments and include configurations the way you like them. This increases efficiency, but it also makes you responsible for ensuring the cluster is configured correctly, since you have complete control over the cloud environment. When you choose kops, you also have to keep the master nodes working properly and always up to date. It is a server administrator’s favorite, since they appreciate having complete control over the entire cloud environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Cost
&lt;/h2&gt;

&lt;p&gt;Cost is another one of the biggest differences when choosing between EKS and kops. After all, reducing existing infrastructure cost is an achievement for any organization.&lt;/p&gt;

&lt;h4&gt;
  
  
  Cost of running AWS EKS Kubernetes Cluster
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2phvo4fu0063vapzahj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2phvo4fu0063vapzahj.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cost and Pricing&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Kubernetes control plane is billed at a flat usage fee per EKS cluster (currently $0.10 per hour, or roughly $73 per month), regardless of cluster size.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You cannot run the control plane on spot instances (worker nodes can still use them), so this baseline cost is fixed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The control plane is replicated across multiple Availability Zones for high availability, without you having to run at least three master nodes yourself.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you wish to use the cluster for production, EKS will be cheaper; but for test and dev environments, EKS will be costlier than kops.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When you have large production clusters with high load, it is profitable to use EKS.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Cost of running KOPS Kubernetes Cluster
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It is an open-source tool and completely free to use, but you are responsible for maintaining the infrastructure created by kops to manage your Kubernetes cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You also have to run the master nodes yourself with KOPS, which adds additional cost.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is cheaper to use KOPS when you are running a small or temporary cluster, because then you don’t have a huge load on the master.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using 3 x t2.medium (or t3.medium, which are cheaper) instance types as master nodes costs roughly $100/month.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For dev and staging environments, you can reduce the running cost of the cluster by keeping a single master node per cluster, which can be reduced further if you use spot instances for master nodes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;You can significantly reduce your Dev/Staging cluster costs with KOPS by keeping a single master node, by running master nodes as spot instances, or both. However, for production-grade clusters that require a high-availability configuration, EKS is usually the cheaper option compared to running 3 master nodes (at least t3/m4/c4/r4.large instances) on-demand.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
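&lt;p&gt;To make the comparison concrete, here is a back-of-the-envelope calculation. It assumes the current $0.10/hour EKS control-plane fee and an on-demand t3.medium rate of about $0.0416/hour in us-east-1; both figures change over time and by region, so treat this as a sketch, not a quote:&lt;/p&gt;

```python
HOURS_PER_MONTH = 730

# EKS: flat control-plane fee, assumed $0.10/hour
eks_control_plane = 0.10 * HOURS_PER_MONTH

# kops: self-managed masters on assumed ~$0.0416/hour t3.medium instances
t3_medium_hourly = 0.0416
kops_ha_masters = 3 * t3_medium_hourly * HOURS_PER_MONTH   # HA: 3 masters
kops_single_master = t3_medium_hourly * HOURS_PER_MONTH    # dev/staging: 1 master

print(f"EKS control plane:    ${eks_control_plane:.0f}/month")
print(f"kops 3 masters (HA):  ${kops_ha_masters:.0f}/month")
print(f"kops single master:   ${kops_single_master:.0f}/month")
```

&lt;p&gt;Under these assumptions, an HA kops control plane (about $91/month) costs more than EKS (about $73/month), while a single master (about $30/month, less on spot) is cheaper, which matches the guidance above.&lt;/p&gt;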

&lt;h2&gt;
  
  
  5. Kubernetes Security
&lt;/h2&gt;

&lt;p&gt;Security should be a top concern for every Kubernetes administrator. As the Kubernetes ecosystem matures, more vulnerabilities will be found and this should not be ignored.&lt;/p&gt;

&lt;h4&gt;
  
  
  Security of EKS Cluster
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvf1lq7phfv3s6akbe4vt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvf1lq7phfv3s6akbe4vt.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Securing a Kubernetes cluster&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You get the benefit of AWS platform-level security for your Kubernetes cluster, and if you have issues with the control plane, those are also resolved by AWS support for EKS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Your cluster has an additional layer of protection, since your AWS Account doesn’t have root access to your master nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can also set up EKS with encrypted root volumes and private networking.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Also, EKS clusters are set up with limited administrator access via IAM.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Security of KOPS Cluster
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The security of the cluster while using kops is entirely up to you; you can increase it further since you have complete control over the master nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kops clusters do benefit from the Amazon Shared Responsibility Model, but without the extra benefits of AWS security expertise or support.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Private networking, encrypted root volumes, and security group controls are already included in most kops clusters.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AWS EKS&lt;/strong&gt; gives you easy and hassle-free management of &lt;strong&gt;Kubernetes control plane&lt;/strong&gt;, allows you to easily upgrade or update your Kubernetes cluster. It makes cluster maintenance easier and comes bundled with AWS Security and AWS support for your cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;KOPS&lt;/strong&gt;, on the other hand, gives you better command of your Kubernetes control plane, but is a little more complex to manage and upgrade when compared with Amazon EKS, though the KOPS community offers tutorials and support for using the tool.&lt;/p&gt;

&lt;p&gt;Regardless of which solution you end up with (based on your unique use case and requirements) you’ll want to &lt;a href="https://devtron.ai/blog/manage-kubernetes-like-a-pro-with-kubernetes-dashboard-by-devtron/" rel="noopener noreferrer"&gt;consider adding Devtron immediately&lt;/a&gt; to every K8s cluster you create. Using Devtron will &lt;a href="https://devtron.ai/blog/developers-guide-to-kubernetes/" rel="noopener noreferrer"&gt;abstract away the complexity of Kubernetes from your developers&lt;/a&gt; so they can efficiently &lt;a href="https://devtron.ai/blog/how-to-simplify-ci-with-jira-and-github-plugins/" rel="noopener noreferrer"&gt;build&lt;/a&gt; and deploy their applications across the clusters you manage.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>3-Minute Strategy to Efficient AWS S3 Storage Management</title>
      <dc:creator>Devtron</dc:creator>
      <pubDate>Wed, 18 Sep 2024 12:49:00 +0000</pubDate>
      <link>https://forem.com/devtron_inc/3-minute-strategy-to-efficient-aws-s3-storage-management-134</link>
      <guid>https://forem.com/devtron_inc/3-minute-strategy-to-efficient-aws-s3-storage-management-134</guid>
      <description>&lt;p&gt;In this quick read, let's understand about AWS S3 storage bucket retention policy and its benefits.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lifecycle policy
&lt;/h3&gt;

&lt;p&gt;A lifecycle policy is used to automatically move objects in your bucket from one storage class to another.&lt;/p&gt;

&lt;h3&gt;
  
  
  S3 storage classes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Standard:&lt;/strong&gt; S3 Standard offers high durability, availability, and performance object storage for frequently accessed data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Intelligent-Tiering:&lt;/strong&gt; The S3 Intelligent-Tiering storage class is designed to &lt;a href="https://devtron.ai/blog/aws-cost-optimization-parameters-and-metrics-part-1/" rel="noopener noreferrer"&gt;optimise costs&lt;/a&gt; by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Standard-IA:&lt;/strong&gt; S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 One Zone-IA:&lt;/strong&gt; S3 One Zone-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 One Zone-IA stores data in a single Availability Zone and costs 20% less than S3 Standard-IA.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 Glacier:&lt;/strong&gt; S3 Glacier is a secure, durable, and low-cost storage class for data archiving.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Benefits of retention policies
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://devtron.ai/blog/aws-cost-optimization-parameters-and-metrics/" rel="noopener noreferrer"&gt;&lt;strong&gt;Cost Optimisation&lt;/strong&gt;&lt;/a&gt;&lt;strong&gt;:&lt;/strong&gt; Rules / Policies will help manage your storage costs by controlling the lifecycle of your objects. Create a lifecycle rule to automatically transition your objects to Standard-IA storage class, archive them to Glacier storage class, and remove them after a specified time period.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Logs Lifecycle Automation:&lt;/strong&gt; You upload logs to an S3 bucket and need those logs only for a specific period of time, e.g., one month or three months. After that, you may want to archive or delete them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick Access:&lt;/strong&gt; To begin with, you place your files in a frequently accessed storage class. But after some time, you realise that the files will not be accessed frequently, and you want to archive them for a specific period of time. You might also decide to delete them later.&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps to set up the lifecycle / retention of objects
&lt;/h3&gt;

&lt;p&gt;Let's proceed with the assumption that you are already sending your application logs to S3. We will focus on configuring the lifecycle of the logs. In this case, let's move the logs from &lt;strong&gt;Standard&lt;/strong&gt; to &lt;strong&gt;One Zone-IA&lt;/strong&gt; class storage.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Log on to AWS console -&amp;gt; click on Services -&amp;gt; select S3&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the bucket you have created, go to the &lt;strong&gt;Management&lt;/strong&gt; section, select the &lt;strong&gt;Lifecycle&lt;/strong&gt; option, and click on &lt;strong&gt;Add lifecycle rule&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After clicking ‘Add lifecycle rule’, a window will appear&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(Add a rule name as per your requirement and choose the &lt;strong&gt;rule scope&lt;/strong&gt;.)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prefixes and Tags&lt;/strong&gt;: If your bucket has folders and tags, you can add their names here in the prefix and tag fields. This will help you differentiate between different folders’ lifecycle processes. If you don’t have any sub-folders, then select your whole bucket (in my case, I have selected the whole bucket).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now click on &lt;strong&gt;Next&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the transitioning step, we will add our lifecycle rules.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select your bucket version (if you already enabled &lt;strong&gt;versioning&lt;/strong&gt; while creating a bucket and you want to transition your logs according to versions, select the &lt;strong&gt;previous version,&lt;/strong&gt; if not select &lt;strong&gt;current version&lt;/strong&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now add the transition and enter your rules, e.g., in how many days you want to move your objects to different storage classes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Here, I am moving the files from Standard to One Zone-IA storage after 180 days and to Glacier after 365 days of creation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the next step, set the expiration of the objects, i.e., in how many days after the object’s creation date an object should get automatically deleted. Here, I am deleting all the files after 366 days of creation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As we upload a large number of files to S3, uploads sometimes fail partway. In such cases, we can delete the incomplete multipart uploads after 10 days.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on Next, and review the options you have entered/selected. Once reviewed, click on Create. Your rule will be created and attached to the bucket. You can also see the created life cycle policy under the management section in your bucket.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
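&lt;p&gt;The rules configured above can also be expressed as a lifecycle configuration document, such as the JSON accepted by &lt;code&gt;aws s3api put-bucket-lifecycle-configuration&lt;/code&gt;. Below is a sketch matching the timings used in this walkthrough (the rule ID is a placeholder):&lt;/p&gt;

```json
{
  "Rules": [
    {
      "ID": "log-retention",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 180, "StorageClass": "ONEZONE_IA" },
        { "Days": 365, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 366 },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 10 }
    }
  ]
}
```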

&lt;p&gt;With a few steps, you have learnt to configure your AWS S3 storage in an optimal and cost-effective manner.&lt;/p&gt;

&lt;p&gt;Also check out this blog post about &lt;a href="https://devtron.ai/blog/how-to-use-spot-to-achieve-cost-savings-on-kubernetes/" rel="noopener noreferrer"&gt;using spot to achieve cost savings on Kubernetes&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Unlocking the Power of Ephemeral Environments with Devtron</title>
      <dc:creator>Devtron</dc:creator>
      <pubDate>Mon, 16 Sep 2024 10:48:00 +0000</pubDate>
      <link>https://forem.com/devtron_inc/unlocking-the-power-of-ephemeral-environments-with-devtron-3bp7</link>
      <guid>https://forem.com/devtron_inc/unlocking-the-power-of-ephemeral-environments-with-devtron-3bp7</guid>
      <description>&lt;p&gt;In the world of software development, &lt;strong&gt;ephemeral environments&lt;/strong&gt; are temporary setups that serve specific purposes, such as testing or staging new features. These environments are short-lived, designed to exist only for the duration of their use case—like testing a feature branch—before being dismantled.&lt;/p&gt;

&lt;p&gt;Ephemeral environments contrast with traditional static environments, which are permanent and can lead to inefficiencies, especially when underutilized. They offer a dynamic approach, allowing developers to create an isolated environment on demand without affecting the main codebase or other ongoing development activities.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Technical and Business Value of Ephemeral Environments
&lt;/h2&gt;

&lt;p&gt;Ephemeral environments provide significant advantages in different sectors as mentioned below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Efficiency&lt;/strong&gt;: By creating environments only when needed and tearing them down afterward, organizations avoid the cost of maintaining idle resources. This is particularly beneficial for companies where lower-end environments such as &lt;code&gt;dev-env&lt;/code&gt;, and &lt;code&gt;non-prod&lt;/code&gt; can cost up to five times more than production environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Agility and Speed&lt;/strong&gt;: Developers can quickly spin up environments to test new features or bug fixes without waiting for access to a shared environment. This agility accelerates development cycles and time-to-market.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Risk Reduction&lt;/strong&gt;: Testing in isolated environments ensures that unstable code does not affect the rest of the system, reducing the risk of bugs in production.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Is an Ephemeral Environment Right for You?
&lt;/h2&gt;

&lt;p&gt;Deciding whether ephemeral environments are suitable for your organization involves considering your development needs and organizational goals. Key questions include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Do your development teams frequently need isolated environments for testing?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Are you looking to optimize costs associated with non-production environments?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is there a need to increase deployment speed and reduce risk in production?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you answered "yes" to any of these, ephemeral environments could be highly beneficial for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Traditional Approach to Ephemeral Environment
&lt;/h2&gt;

&lt;p&gt;Ephemeral environments, as mentioned above, are short-lived environments, created and destroyed once the task is completed. We can write scripts, perhaps in Terraform, Ansible, or Python/shell, to spin up a completely new environment, whether VM machines or Kubernetes clusters. Even though the automation can be achieved, there are a few disadvantages associated with this approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Delay in Releases:&lt;/strong&gt; The time taken to bring up the entire infrastructure can lead to delays in testing features or conducting sanity checks for bug fixes, resulting in a longer time to market.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High Complexity:&lt;/strong&gt; Creating and maintaining scripts to standardize environments across different stages can be complex and error-prone.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Manual Interventions:&lt;/strong&gt; Even with automation scripts, manual interventions are often required to configure and install dependencies based on the application's specific requirements, adding to the setup time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DevOps Dependencies:&lt;/strong&gt; Developers typically lack expertise in tools like Terraform or Ansible, making them dependent on DevOps or SRE teams to make changes and install dependencies for their applications, which can slow down the development process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Management:&lt;/strong&gt; Managing the lifecycle of ephemeral environments can be challenging. These environments need to be deleted once tasks are completed; otherwise, they lead to resource wastage and increased costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High Infra Cost:&lt;/strong&gt; The costs associated with spinning up and maintaining ephemeral environments, particularly in cloud-based setups, can add significantly to the overall infrastructure expenses.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Rethinking Ephemeral Environments
&lt;/h2&gt;

&lt;p&gt;When we talk in terms of Kubernetes, setting up ephemeral environments becomes a lot easier than with the traditional approach. Kubernetes has a beautiful construct called namespaces: a logical separation of groups of resources, providing isolation of workloads within the same cluster.&lt;/p&gt;

&lt;p&gt;By leveraging namespaces and some advanced autoscaling methods, it becomes much easier to create an ephemeral environment that is cost-effective and less complex, and that helps you dynamically bring up resources and hibernate them when not in use.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Set Up an Ephemeral Environment in K8s Manually?
&lt;/h3&gt;

&lt;p&gt;Setting up an ephemeral environment, especially within a Kubernetes ecosystem, involves several key steps that ensure agility, efficiency, and cost-effectiveness. Below, we detail a straightforward approach to creating and managing these temporary environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Define Your Infrastructure Requirements.&lt;/strong&gt; Before you create an ephemeral environment, it's essential to understand the specific requirements of the application or feature being tested. This includes the necessary computing resources, the required services, and any dependencies that need to be replicated from the production environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Automate the Environment Setup.&lt;/strong&gt; Automation is crucial in managing ephemeral environments to ensure they can be spun up and torn down efficiently. Tools like Terraform or Ansible can be used to script the creation of your infrastructure. In Kubernetes, you might automate setting up namespaces, deploying container images, and configuring network policies through CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using Kubernetes Namespaces.&lt;/strong&gt; In Kubernetes, namespaces provide a way to divide cluster resources between multiple users. Each ephemeral environment can be created in its own namespace, isolating its running processes and resources from other environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Deploy Your Application.&lt;/strong&gt; Once the namespace is ready, deploy your application using Kubernetes manifests or Helm charts. This step often involves setting up the necessary config maps and secrets to configure the application according to the environment.&lt;/p&gt;
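&lt;p&gt;As a sketch, a manifest-based deployment into the environment's namespace might look like this (the file paths and names are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Create environment-specific configuration, then apply the application manifests
kubectl create configmap app-config --from-env-file=dev.env -n feature-123-env
kubectl apply -f k8s/ -n feature-123-env
&lt;/code&gt;&lt;/pre&gt;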

&lt;p&gt;Alternatively, you can deploy the application using Helm.&lt;/p&gt;
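&lt;p&gt;A hypothetical Helm-based deployment of the same application could look like the following (chart path, release name, and values file are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Install the chart into the ephemeral namespace with environment-specific values
helm install my-app ./charts/my-app \
  --namespace feature-123-env \
  --values values-dev.yaml
&lt;/code&gt;&lt;/pre&gt;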

&lt;p&gt;&lt;strong&gt;Step 4: Configure Autoscaling and Monitoring.&lt;/strong&gt; To optimize costs and resource usage, configure autoscaling for your application workloads. Kubernetes Horizontal Pod Autoscaler (HPA) or a more advanced tool like KEDA can be used to automatically adjust the number of pods based on traffic or other metrics.&lt;/p&gt;
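&lt;p&gt;A minimal HPA sketch for such a workload might look like this; the deployment name, replica bounds, and CPU threshold are illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
  namespace: feature-123-env
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU crosses 70%
&lt;/code&gt;&lt;/pre&gt;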

&lt;p&gt;Monitoring is also essential to track the performance and health of your temporary environment. Tools like Prometheus for monitoring and Grafana for visualization can be integrated to monitor the environment's metrics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Implement Cleanup Procedures.&lt;/strong&gt; To ensure that resources are not wasted, set up automatic cleanup procedures to tear down the environment after use. This can be scheduled using cron jobs or integrated into your CI/CD pipeline to destroy the environment once the testing is complete.&lt;/p&gt;
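&lt;p&gt;The simplest cleanup is to delete the environment's namespace, which removes every resource inside it (the namespace name is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Deleting the namespace tears down everything in the ephemeral environment
kubectl delete namespace feature-123-env
&lt;/code&gt;&lt;/pre&gt;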

&lt;p&gt;Alternatively, perform a more controlled cleanup with Helm.&lt;/p&gt;
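&lt;p&gt;For example, a hypothetical Helm release could be removed first, followed by the now-empty namespace:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Remove the Helm release, then delete the namespace it lived in
helm uninstall my-app --namespace feature-123-env
kubectl delete namespace feature-123-env
&lt;/code&gt;&lt;/pre&gt;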

&lt;p&gt;&lt;strong&gt;Step 6: Documentation and Training.&lt;/strong&gt; Finally, document the entire process and provide training for your teams. This ensures that everyone understands how to efficiently use ephemeral environments, which helps in maximizing the benefits while minimizing potential disruptions or misuse.&lt;/p&gt;

&lt;p&gt;Manually creating and deleting namespaces and integrating them within pipelines can be a big pain when it comes to developer productivity. Integrating different tools such as Grafana, Prometheus, Jenkins, ArgoCD, KEDA, etc. can be a tedious task for DevOps/SRE engineers as well. With the involvement of custom scripting, the complexity increases further, along with a high risk of human error. With Devtron's simplified workflow, it becomes a lot easier to automate the process and improve developer productivity while reducing heavy dependencies on DevOps/SRE teams.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Devtron Simplifies Ephemeral Environments?
&lt;/h3&gt;

&lt;p&gt;Devtron enhances the management of ephemeral environments through its modern dashboard, simplified workflows, automation and effective cost-management strategies. Here are key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Namespace Utilization&lt;/strong&gt;: In Kubernetes, namespaces provide logical separation, allowing multiple ephemeral environments within the same cluster without additional cost. Devtron leverages this to minimize the overhead associated with setting up and tearing down environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Management&lt;/strong&gt;: Devtron implements strategies such as leveraging spot instances and right-sizing resources, ensuring that the infrastructure costs are kept to a minimum. For example, by using spot instances, organizations can save up to 70-90% compared to standard costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated Scaling&lt;/strong&gt;: Devtron employs tools like KEDA for event-driven autoscaling, ensuring resources are used efficiently. Environments can scale down automatically during inactivity and scale up when needed, further optimizing costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplified Workflow:&lt;/strong&gt; Devtron provides an intuitive dashboard for everyone operating on Kubernetes, with Kubernetes-native CI/CD pipelines that replace the heavy scripting and stitching together of different tools otherwise needed for an end-to-end workflow.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Along with that, many other factors make the entire process much more seamless, such as visibility of workloads, application metrics, configuration management, etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting-up Ephemeral Environments With Devtron
&lt;/h2&gt;

&lt;p&gt;Devtron is a Software Distribution Platform designed for Kubernetes. On its mission to democratize Kubernetes, ephemeral environments are one of the many features that make life easier. With Devtron’s intuitive dashboard, operations on Kubernetes become seamless, and the same goes for ephemeral environments. To get started with an ephemeral environment, follow the steps below.&lt;/p&gt;

&lt;p&gt;Step 1: Install the keda-add-on-http chart from the charts marketplace. Navigate to the &lt;a href="https://docs.devtron.ai/usage/deploy-chart/overview-of-charts" rel="noopener noreferrer"&gt;charts store&lt;/a&gt; and search for Keda; as shown in the below image, all charts related to Keda are listed. Select the appropriate Helm chart and deploy it. To add any Helm chart that is not listed on the charts store, &lt;a href="https://devtron.ai/blog/helm-chart-deployment/" rel="noopener noreferrer"&gt;feel free to check out this blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzvlql28bm2oyy6lin9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzvlql28bm2oyy6lin9f.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Once the controller has been successfully installed, you can see a consolidated view of the deployed helm chart, along with its resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4fokinuoddxu2ws19gc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4fokinuoddxu2ws19gc.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Now, let’s configure the ephemeral environment for my microservice called &lt;code&gt;payment-svc&lt;/code&gt;. The process remains the same for any application, and you can configure or clone the workflows for different applications. Navigate to &lt;code&gt;Workflow Editor&lt;/code&gt; and add a workflow for the environment where you want to deploy your application; in our case, it’s the &lt;code&gt;dev-testing&lt;/code&gt; environment, as you can see in the below image. To understand more about workflows in Devtron, feel free to refer to the &lt;a href="https://docs.devtron.ai/usage/applications/creating-application/workflow/ci-pipeline" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcoecmqfusobokxp73fpx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcoecmqfusobokxp73fpx.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 4: Once the workflow has been created, Devtron automatically creates &lt;code&gt;environment-overrides&lt;/code&gt; for the deployment environment. &lt;a href="https://docs.devtron.ai/usage/applications/creating-application/environment-overrides" rel="noopener noreferrer"&gt;Environment overrides&lt;/a&gt; help you manage your Kubernetes configuration for a specific environment more efficiently. Under &lt;code&gt;environment override&lt;/code&gt; &amp;gt; &lt;code&gt;dev-testing&lt;/code&gt;, we can add the relevant configuration to the deployment template to create the HTTPScaledObject, which is responsible for dynamically bringing the environment up as soon as it receives an HTTP request.&lt;/p&gt;
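&lt;p&gt;For reference, the resource created by the KEDA HTTP add-on looks roughly like the following sketch; the host, service, and port values are illustrative, and the exact schema may vary across add-on versions:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: http.keda.sh/v1alpha1
kind: HTTPScaledObject
metadata:
  name: payment-svc
  namespace: dev-testing
spec:
  hosts:
    - payment.dev.example.com   # hostname whose traffic triggers scale-up
  scaleTargetRef:
    name: payment-svc           # Deployment to scale
    kind: Deployment
    apiVersion: apps/v1
    service: payment-svc        # Service that routes the traffic
    port: 8080
  replicas:
    min: 0                      # scale to zero when idle
    max: 5
&lt;/code&gt;&lt;/pre&gt;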

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuibp54sznq1i3amwitwx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuibp54sznq1i3amwitwx.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 5: After providing the relevant configuration, navigate to the &lt;code&gt;Build &amp;amp; Deploy&lt;/code&gt; section, select the relevant image, and deploy it to the &lt;code&gt;dev-testing&lt;/code&gt; environment. Upon successful deployment, the application status shows as Healthy, along with all details about the deployment, as you can see in the below image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvw5d6df50t0e7i0c2zl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvw5d6df50t0e7i0c2zl.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also see all the resources deployed along with the deployment in a resource-grouped view, and perform operations such as checking logs, events, and manifests, or exec-ing into the terminals. Notice in the below image that we have a Deployment object but no pod running at the moment. This is because the workload was automatically scaled down, since no HTTP request is hitting the given hostname/service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81loepqaom31bblg9ze0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81loepqaom31bblg9ze0.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 6: The Devtron dashboard automatically picks up the ingress host and shows it in the &lt;code&gt;URLs&lt;/code&gt; section at the top right of the dashboard, as you can see in Fig. 5. When a request is made to that hostname, the pod automatically scales up and serves the traffic, as you can see in the below image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qp6mf8t3fb1odivjgi9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qp6mf8t3fb1odivjgi9.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Ephemeral environments offer a flexible, cost-effective solution for managing development stages, particularly in a dynamic and fast-paced software development landscape. Devtron's approach not only simplifies the management of these environments but also enhances cost efficiency and deployment agility.&lt;/p&gt;

&lt;p&gt;Organizations looking to streamline their development processes and reduce costs should consider implementing ephemeral environments, especially those already using Kubernetes. With Devtron, the transition is smoother, allowing teams to focus more on innovation and less on infrastructure management.&lt;/p&gt;

&lt;p&gt;Feel free to join our &lt;a href="https://discord.devtron.ai/" rel="noopener noreferrer"&gt;Discord Community&lt;/a&gt; if you have any questions; we would love to address them. If you liked Devtron, do give it a &lt;a href="https://github.com/devtron-labs/devtron" rel="noopener noreferrer"&gt;Star ⭐️ on GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Gatekeeper: Why Approval-Based Deployments are Essential for Production Environments on Kubernetes</title>
      <dc:creator>Devtron</dc:creator>
      <pubDate>Fri, 13 Sep 2024 09:48:00 +0000</pubDate>
      <link>https://forem.com/devtron_inc/the-gatekeeper-why-approval-based-deployments-are-essential-for-production-environments-on-kubernetes-289m</link>
      <guid>https://forem.com/devtron_inc/the-gatekeeper-why-approval-based-deployments-are-essential-for-production-environments-on-kubernetes-289m</guid>
      <description>&lt;p&gt;In the fast-paced world of software development, speed and efficiency are paramount. However, when it comes to deploying applications to production environments, particularly on Kubernetes, it’s crucial to prioritize control and security. One effective way to achieve this is through approval-based deployments. In this blog post, we’ll explore the importance of this practice and how it helps maintain the integrity and reliability of production systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Approval-Based Deployment?
&lt;/h2&gt;

&lt;p&gt;Approval-based deployment is a process where any changes or updates to the production environment must be reviewed and approved by authorized personnel before they are applied. This step acts as a safeguard, ensuring that only vetted and tested code makes its way into the live environment where it can impact end-users. Approvals can be applied at two different levels: approval for the configurations and approval for the container image.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Critical Role of Kubernetes in Modern Deployments
&lt;/h2&gt;

&lt;p&gt;Kubernetes has revolutionized how we deploy, scale, and manage containerized applications. Its flexibility and power make it a favorite among DevOps teams. However, with great power comes great responsibility. Kubernetes allows rapid changes and scaling, but without proper oversight, it can lead to significant risks, including downtime, security vulnerabilities, and performance issues. The adoption of Kubernetes has increased from 71% to 89% as per the &lt;a href="https://www.cncf.io/reports/cncf-annual-survey-2023/" rel="noopener noreferrer"&gt;CNCF 2023 Annual Survey&lt;/a&gt;, and of that 89%, 18% are still evaluating Kubernetes for production use. Though it’s known for its ability to orchestrate containers, Kubernetes does come with a lot of complexity, and using the right set of tools to create an abstraction is very important.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4rkb93fxrru22e6r7cg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4rkb93fxrru22e6r7cg.png" alt="Image description" width="498" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Approval-Based Deployments Matter
&lt;/h2&gt;

&lt;p&gt;Approval-based deployments are important for any critical environment. Here are some of the reasons why approval-based deployments matter:&lt;/p&gt;

&lt;h3&gt;
  
  
  Enhanced Security
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prevent Unauthorized Changes:&lt;/strong&gt; Approval-based deployments ensure that only authorized individuals can make changes to the production environment. This prevents malicious or accidental modifications that could compromise the production environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Review for Vulnerabilities:&lt;/strong&gt; Each deployment undergoes a thorough review, reducing the risk of introducing security vulnerabilities. This is especially important for Kubernetes environments, where misconfigurations and vulnerable images can lead to severe security breaches.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Improved Stability
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Quality Assurance:&lt;/strong&gt; By requiring approvals, teams can ensure that only stable, tested code reaches production. This minimizes the chances of deploying buggy or unstable applications, which can lead to downtime and user dissatisfaction.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Controlled Rollouts:&lt;/strong&gt; Approvals allow for controlled, staged rollouts, ensuring that any issues can be quickly identified and addressed without impacting the entire user base.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Compliance and Auditing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Regulatory Compliance:&lt;/strong&gt; Many industries, primarily Fintech organizations like Banks, are subject to strict regulatory requirements. Approval-based deployments provide an auditable trail of changes, making it easier to demonstrate compliance with these regulations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Accountability:&lt;/strong&gt; The approval process holds individuals accountable for the changes they authorize. This promotes a culture of responsibility and thoroughness within the development and operations teams.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Operational Efficiency
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Preventing Errors:&lt;/strong&gt; By catching potential issues before they reach production, teams can avoid costly and time-consuming rollbacks and fixes. This keeps the system running smoothly and efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Continuous Improvement:&lt;/strong&gt; The approval process often includes a review and feedback loop, fostering continuous improvement in the deployment process and overall code quality.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implementing Approval-Based Deployments on Kubernetes
&lt;/h2&gt;

&lt;p&gt;To effectively implement approval-based deployments in a Kubernetes environment, consider the following best practices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Define Clear Approval Policies:&lt;/strong&gt; Establish which users can approve deployments, grant them the relevant permissions, and clearly distinguish between users who can approve container images and those who can approve critical environment configurations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automate the Approval Workflow:&lt;/strong&gt; Use tools like Devtron to automate the deployment pipeline, integrating approval gates within the workflow that require manual sign-off for critical environments like production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitor and Audit:&lt;/strong&gt; Implement monitoring and auditing solutions to track changes and approvals. Regularly review these changes to ensure compliance, identify areas for improvement, and maintain easy rollback strategies in case anything goes wrong with the latest releases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Continuous Training:&lt;/strong&gt; Ensure that all team members are trained on the approval process and understand the importance of their role in maintaining production integrity.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Approval-Based Workflows with Devtron
&lt;/h2&gt;

&lt;p&gt;Devtron streamlines the platform engineering workflows for Kubernetes. It provides an intuitive dashboard to deploy and manage all your workloads on Kubernetes through Kubernetes-native CI/CD pipelines as well as Helm Charts, the K8s package manager. With the help of Devtron, users can easily create approval checks for the configurations as well as for the container images for critical environments. Let’s dive into the approval-based workflows with Devtron.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Devtron comes with granular RBAC, which allows you to define different roles for users who can approve the configurations and container images before they get deployed to critical environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qpbj0l8wrrvq8e20msk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qpbj0l8wrrvq8e20msk.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; After the access has been granted, users can easily create automated workflows with Devtron and integrate approval checks for configurations and container images. Creating workflows with Devtron is pretty straightforward: with just a few clicks you can create any type of workflow, be it sequential or parallel, as you can see in the below image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cakpb6hh6rteig0dquo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cakpb6hh6rteig0dquo.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Adding an approval check is also pretty straightforward. For the deployment pipeline of any critical environment, i.e., prod, simply toggle on the approval checks and define the number of approvals required before you can deploy. No custom scripting or external integrations are required, as you can see in the below image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cq5y79eama7jsnu82qx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cq5y79eama7jsnu82qx.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Devtron also comes with Protect Configurations, which allows you to protect the configurations of specific environments. For any workflow that you have created, Devtron automatically creates &lt;a href="https://docs.devtron.ai/usage/applications/creating-application/environment-overrides" rel="noopener noreferrer"&gt;Environment Overrides&lt;/a&gt; for you, as you can see in [Fig. 3], which ensures the isolation of configurations across multiple deployment environments. To protect the configuration of any environment, simply toggle on the button as you can see in the below image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhv2ybbz0zj0oz47jghj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhv2ybbz0zj0oz47jghj.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If any changes are made in that environment, they first need to be approved by the respective approver; only then can those changes be deployed to the respective environment. Additionally, you can see a diff of the changes made by the user along with the user details. The user asking for approval can also write comments explaining why the changes were made, providing proper context to the approver, as you can see in the below image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpa01red16tndjtzlzzbr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpa01red16tndjtzlzzbr.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; After the workflow has been created and the configurations are done, the user can move directly to the &lt;code&gt;Build &amp;amp; Deploy&lt;/code&gt; section and trigger the pipelines. If it is an automated pipeline, the pipeline triggers as soon as you make a new commit: it creates a container image, deploys it to the &lt;code&gt;utils&lt;/code&gt; environment, runs automated test cases (if any) in the pre/post deployment stages, and deploys the same image to the &lt;code&gt;stage&lt;/code&gt; environment; before it gets deployed to production, approval is required. To raise an approval request, click on the approval check button as described in [Fig. 3]. There you can see all images, and you can additionally give &lt;code&gt;Image labels&lt;/code&gt; and a &lt;code&gt;Comment&lt;/code&gt; for the change made with this image, as you can see in the below image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7u27076wgzhv22vd3xju.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7u27076wgzhv22vd3xju.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The user can select any of the people from the list to approve their container image. The requested approver is notified through email and can approve via email; additionally, the approver can see the &lt;code&gt;Approval Pending&lt;/code&gt; state in the dashboard itself, with the details of the user who raised the approval request along with labels and comments. In the below image, we can see requests that I can &lt;code&gt;Approve&lt;/code&gt;. For some images, I can only &lt;code&gt;Cancel Request&lt;/code&gt;, because I raised the approval request for those images: the one who raises an approval request cannot approve their own requests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fn94r5wswkzeq95ljw3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fn94r5wswkzeq95ljw3.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is important to note that with Devtron you get the entire audit trail: who approved the request, who raised it, and, in &lt;code&gt;Deployment History&lt;/code&gt;, who deployed it to the production environment. Additionally, you can see the details of the configurations of the last deployment, compare them with older releases, and if anything breaks, easily roll back with a single click.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrhwh8p0cu6m4bk76u59.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrhwh8p0cu6m4bk76u59.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; With Devtron’s single pane of glass, you can also check out the deployment metrics critical for your business such as &lt;code&gt;Deployment Frequency&lt;/code&gt;, &lt;code&gt;Change Failure Rate&lt;/code&gt;, &lt;code&gt;MTR&lt;/code&gt;, and &lt;code&gt;MLT&lt;/code&gt; for your production environment as you can see in the below image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1p2lyfbnm1i0fwd8m2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1p2lyfbnm1i0fwd8m2y.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In today’s dynamic software landscape, maintaining control over production deployments is more critical than ever. By implementing approval-based deployments for Kubernetes environments, organizations can enhance security, improve stability, ensure compliance, and boost operational efficiency. This practice not only protects the production environment but also fosters a culture of responsibility and continuous improvement within the team. With Devtron, it becomes much easier to set up guardrails around the Kubernetes ecosystem, natively integrating the gatekeeper within your workflows. As you embark on this journey, remember that the gatekeeper’s role is not to slow down progress but to safeguard the environment, ensuring that only the best, most secure code makes it to production.&lt;/p&gt;

&lt;p&gt;If you have any questions, feel free to join our &lt;a href="https://discord.devtron.ai" rel="noopener noreferrer"&gt;Community Discord Server&lt;/a&gt; and shoot your questions; we would be happy to answer them.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
