<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: James Maina</title>
    <description>The latest articles on Forem by James Maina (@mucheru).</description>
    <link>https://forem.com/mucheru</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F615805%2Ffbc6b3ab-6c89-4947-b0cd-41fe58b8cb45.jpeg</url>
      <title>Forem: James Maina</title>
      <link>https://forem.com/mucheru</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mucheru"/>
    <language>en</language>
    <item>
      <title>Getting started on MOCO, the MySQL Operator for Kubernetes Part 1</title>
      <dc:creator>James Maina</dc:creator>
      <pubDate>Tue, 14 Jan 2025 05:59:49 +0000</pubDate>
      <link>https://forem.com/aws-builders/getting-started-on-moco-the-mysql-operator-for-kubernetes-part-1-3kc7</link>
      <guid>https://forem.com/aws-builders/getting-started-on-moco-the-mysql-operator-for-kubernetes-part-1-3kc7</guid>
      <description>&lt;p&gt;MOCO (MySQL Operator for Kubernetes) is a robust, cloud-native solution designed to simplify the management of MySQL clusters in Kubernetes environments. It automates the provisioning, scaling, backup, and maintenance of MySQL instances while ensuring high availability and reliability. MOCO leverages Kubernetes resources to create, monitor, and manage MySQL clusters.&lt;/p&gt;

&lt;p&gt;MOCO supports specific versions of MySQL and Kubernetes. As of this writing, it supports MySQL versions 8.0.28, 8.0.37, 8.0.39, 8.0.40, and 8.4.3, and Kubernetes versions 1.29, 1.30, and 1.31.&lt;/p&gt;

&lt;h2&gt;
  
  
  How MOCO Works
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Provisioning Clusters
&lt;/h3&gt;

&lt;p&gt;MOCO provisions MySQL clusters by creating Kubernetes StatefulSets for each cluster. The process involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Defining a MySQLCluster custom resource (CR) with the desired configuration.&lt;/li&gt;
&lt;li&gt;The MOCO controller creating StatefulSets and persistent volume claims (PVCs) for the cluster nodes.&lt;/li&gt;
&lt;li&gt;Configuring MySQL instances with semi-synchronous replication for high availability.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Deployment Services
&lt;/h2&gt;

&lt;p&gt;MOCO deploys the following services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Primary Service: Routes traffic to the primary node.&lt;/li&gt;
&lt;li&gt;Replica Service: Routes traffic to replica nodes for read queries.&lt;/li&gt;
&lt;li&gt;Backup Service: Handles backups through sidecars integrated into the StatefulSet.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Types of Deployment
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Single Primary with Replicas: Default mode with one primary and multiple replicas.&lt;/li&gt;
&lt;li&gt;Multi-Region Clusters: For cross-region replication.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Backup and restore
&lt;/h2&gt;

&lt;p&gt;MOCO can take full and incremental backups on a regular schedule. Backup data is stored in Amazon S3-compatible object storage.&lt;br&gt;
MOCO supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scheduled backups to object storage (e.g., S3).&lt;/li&gt;
&lt;li&gt;On-demand backups triggered through the Kubernetes API.&lt;/li&gt;
&lt;li&gt;Restorations from backups via simple CR updates.&lt;/li&gt;
&lt;li&gt;Point-in-Time Recovery: Ensures robust data protection in disaster recovery scenarios.&lt;/li&gt;
&lt;/ul&gt;
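
&lt;p&gt;Backups are configured through a BackupPolicy custom resource that a MySQLCluster references via &lt;code&gt;spec.backupPolicyName&lt;/code&gt;. The following is a minimal sketch based on MOCO's BackupPolicy CRD; the schedule, bucket name, region, endpoint, and ServiceAccount are placeholders you would replace with your own values.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: moco.cybozu.com/v1beta2
kind: BackupPolicy
metadata:
  namespace: default
  name: daily
spec:
  # Cron-format schedule; this takes a backup every day at 03:00.
  schedule: "0 3 * * *"
  jobConfig:
    serviceAccountName: backup-owner   # ServiceAccount with access to the bucket
    bucketConfig:
      bucketName: moco-backups         # illustrative bucket name
      region: us-east-1
      endpointURL: https://s3.amazonaws.com
    workVolume:
      emptyDir: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;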

&lt;p&gt;&lt;strong&gt;Object storage bucket&lt;/strong&gt;&lt;br&gt;
A bucket is the unit in which S3 manages objects. MOCO stores backups in a specified bucket.&lt;/p&gt;

&lt;p&gt;MOCO does not remove backups. To remove old backups automatically, you can set a lifecycle configuration to the bucket.&lt;/p&gt;
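
&lt;p&gt;For example, with the AWS CLI you can attach a lifecycle rule that expires objects after a retention period (the bucket name and the 14-day retention below are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws s3api put-bucket-lifecycle-configuration \
    --bucket moco-backups \
    --lifecycle-configuration '{
      "Rules": [
        {
          "ID": "expire-old-backups",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "Expiration": {"Days": 14}
        }
      ]
    }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;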

&lt;p&gt;Read more &lt;a href="https://cybozu-go.github.io/moco/usage.html?highlight=errant#backup-and-restore" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Handling Errant Pods&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Errant pods are MySQL nodes with divergent data. MOCO detects and isolates such pods automatically, preventing replication issues. Manual intervention can also remove these pods if needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Replication Maintenance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MOCO uses semi-synchronous replication for consistency. Before committing a transaction, the primary waits for at least one replica to acknowledge that it has received the change. Failovers are handled by promoting a replica to primary, ensuring minimal disruption. MOCO also monitors replication delay, helping maintain sync and performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stateless or Stateful?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MOCO deployments are stateful because MySQL requires persistent data storage. StatefulSets in Kubernetes ensure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Persistent storage using PVCs.&lt;/li&gt;
&lt;li&gt;Stable network identities for MySQL nodes.&lt;/li&gt;
&lt;li&gt;Ordered scaling and rolling updates.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Quick setup
&lt;/h2&gt;

&lt;p&gt;You can choose between two installation methods.&lt;/p&gt;

&lt;p&gt;MOCO depends on cert-manager. If cert-manager is not yet installed on your cluster, install it first.&lt;/p&gt;
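
&lt;p&gt;A typical way to install cert-manager is to apply its release manifest (check the cert-manager documentation for the version recommended for your cluster):&lt;br&gt;
&lt;code&gt;$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml&lt;/code&gt;&lt;/p&gt;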

&lt;p&gt;&lt;strong&gt;Install using raw manifests:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;$ curl -fsLO https://github.com/cybozu-go/moco/releases/latest/download/moco.yaml&lt;br&gt;
$ kubectl apply -f moco.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install using Helm chart:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;$ helm repo add moco https://cybozu-go.github.io/moco/&lt;br&gt;
$ helm repo update&lt;br&gt;
$ helm install --create-namespace --namespace moco-system moco moco/moco&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Customize manifests&lt;/strong&gt;&lt;br&gt;
If you want to edit the manifests, the &lt;a href="https://github.com/cybozu-go/moco/tree/main/config" rel="noopener noreferrer"&gt;config/&lt;/a&gt; directory contains the source YAML for &lt;a href="https://kustomize.io/" rel="noopener noreferrer"&gt;kustomize&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Creating a Cluster
&lt;/h2&gt;

&lt;p&gt;A MySQLCluster always has one writable instance, called the primary. All other instances are called replicas. Replicas are read-only and replicate data from the primary.&lt;/p&gt;

&lt;p&gt;The following YAML creates a three-instance cluster. It sets Pod anti-affinity so that all instances are scheduled to different Nodes, and it sets memory and CPU limits to give the Pods the &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="noopener noreferrer"&gt;Guaranteed&lt;/a&gt; QoS class.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: moco.cybozu.com/v1beta2
kind: MySQLCluster
metadata:
  namespace: default
  name: test
spec:
  # replicas is the number of mysqld Pods.  The default is 1.
  replicas: 3
  podTemplate:
    spec:
      # Make the data directory writable. If moco-init fails with "Permission denied", uncomment the following settings.
      # securityContext:
      #   fsGroup: 10000
      #   fsGroupChangePolicy: "OnRootMismatch"  # available since k8s 1.20
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                - mysql
              - key: app.kubernetes.io/instance
                operator: In
                values:
                - test
            topologyKey: "kubernetes.io/hostname"
      containers:
      # At least a container named "mysqld" must be defined.
      - name: mysqld
        image: ghcr.io/cybozu-go/moco/mysql:8.4.3
        # By limiting CPU and memory, Pods will have Guaranteed QoS class.
        # requests can be omitted; it will be set to the same value as limits.
        resources:
          limits:
            cpu: "10"
            memory: "10Gi"
  volumeClaimTemplates:
  # At least a PVC named "mysql-data" must be defined.
  - metadata:
      name: mysql-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, MOCO uses &lt;code&gt;preferredDuringSchedulingIgnoredDuringExecution&lt;/code&gt; to avoid placing Pods on the same Node; the manifest above upgrades this to a hard requirement.&lt;br&gt;
There are other example manifests in their &lt;a href="https://github.com/cybozu-go/moco/tree/main/examples" rel="noopener noreferrer"&gt;examples directory&lt;/a&gt;.&lt;/p&gt;
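
&lt;p&gt;Assuming the manifest above is saved as &lt;code&gt;mycluster.yaml&lt;/code&gt; (an illustrative filename), you can apply it and watch the instances come up:&lt;br&gt;
&lt;code&gt;$ kubectl apply -f mycluster.yaml&lt;br&gt;
$ kubectl get pods -l app.kubernetes.io/instance=test -w&lt;/code&gt;&lt;/p&gt;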

&lt;h2&gt;
  
  
  Using the cluster
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;kubectl moco&lt;/strong&gt;&lt;br&gt;
From outside of your Kubernetes cluster, you can access MOCO MySQL instances using &lt;code&gt;kubectl-moco&lt;/code&gt;. &lt;code&gt;kubectl-moco&lt;/code&gt; is a plugin for kubectl. Pre-built binaries are available on GitHub releases.&lt;/p&gt;

&lt;p&gt;The following is an example of running the &lt;code&gt;mysql&lt;/code&gt; command interactively against the primary instance of the &lt;code&gt;test&lt;/code&gt; MySQLCluster in the &lt;code&gt;foo&lt;/code&gt; namespace.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ kubectl moco -n foo mysql -it test&lt;/code&gt;&lt;/p&gt;
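
&lt;p&gt;&lt;code&gt;kubectl moco mysql&lt;/code&gt; can also run one-off queries non-interactively and connect as a specific MOCO-managed user such as &lt;code&gt;moco-readonly&lt;/code&gt; or &lt;code&gt;moco-writable&lt;/code&gt;; the query below is only an illustration, and the exact flags may vary with the plugin version:&lt;br&gt;
&lt;code&gt;$ kubectl moco -n foo mysql -u moco-writable test -- -e "SELECT VERSION()"&lt;/code&gt;&lt;/p&gt;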

&lt;p&gt;&lt;strong&gt;Connecting to mysqld over the network&lt;/strong&gt;&lt;br&gt;
MOCO prepares two Services for each MySQLCluster. For example, a MySQLCluster named &lt;code&gt;test&lt;/code&gt; in the &lt;code&gt;foo&lt;/code&gt; Namespace has the following Services.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Service Name&lt;/th&gt;&lt;th&gt;DNS Name&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;moco-test-primary&lt;/td&gt;&lt;td&gt;moco-test-primary.foo.svc&lt;/td&gt;&lt;td&gt;Connect to the primary instance.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;moco-test-replica&lt;/td&gt;&lt;td&gt;moco-test-replica.foo.svc&lt;/td&gt;&lt;td&gt;Connect to replica instances.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
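
&lt;p&gt;From a Pod inside the cluster, a MySQL client can reach the primary through the Service DNS name. Credentials for MOCO's built-in users are stored in a Secret created by MOCO; the user below is an illustration:&lt;br&gt;
&lt;code&gt;$ mysql -h moco-test-primary.foo.svc -P 3306 -u moco-writable -p&lt;/code&gt;&lt;/p&gt;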

&lt;h2&gt;
  
  
  Cluster status
&lt;/h2&gt;

&lt;p&gt;You can see the health and availability status of MySQLCluster as follows:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ kubectl get mysqlcluster&lt;br&gt;
NAME   AVAILABLE   HEALTHY   PRIMARY   SYNCED REPLICAS   ERRANT REPLICAS&lt;br&gt;
test   True        True      0         3&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The cluster is available when the primary Pod is running and ready.&lt;/li&gt;
&lt;li&gt;The cluster is healthy when there are no problems.&lt;/li&gt;
&lt;li&gt;PRIMARY is the index of the current primary instance Pod.&lt;/li&gt;
&lt;li&gt;SYNCED REPLICAS is the number of ready Pods.&lt;/li&gt;
&lt;li&gt;ERRANT REPLICAS is the number of instances with errant transactions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also use &lt;code&gt;kubectl describe mysqlcluster&lt;/code&gt; to see recent events on the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logs
&lt;/h2&gt;

&lt;p&gt;Error logs from mysqld can be viewed as follows:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ kubectl logs moco-test-0 mysqld&lt;/code&gt;&lt;br&gt;
Slow logs from mysqld can be viewed as follows:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ kubectl logs moco-test-0 slow-log&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Switchover&lt;/strong&gt;&lt;br&gt;
Switchover is an operation to change the live primary to one of the replicas.&lt;/p&gt;

&lt;p&gt;MOCO automatically switches the primary when the Pod of the primary instance is about to be deleted.&lt;/p&gt;

&lt;p&gt;Users can manually trigger a switchover with &lt;code&gt;kubectl moco switchover CLUSTER_NAME&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Failover&lt;/strong&gt;&lt;br&gt;
Failover is an operation to replace the dead primary with the most advanced replica. MOCO automatically does this as soon as it detects that the primary is down.&lt;/p&gt;

&lt;p&gt;The most advanced replica is the replica that has received the most up-to-date transactions from the dead primary. Since MOCO configures lossless semi-synchronous replication, a failover is guaranteed not to lose any user data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Re-initializing an errant replica&lt;/strong&gt;&lt;br&gt;
Delete the PVC and Pod of the errant replica, like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ kubectl delete --wait=false pvc mysql-data-moco-test-0&lt;br&gt;
$ kubectl delete --grace-period=1 pods moco-test-0&lt;/code&gt;&lt;br&gt;
Depending on your Kubernetes version, the StatefulSet controller may create a pending Pod before the PVC gets deleted. Delete such pending Pods until the PVC is actually removed.&lt;/p&gt;

&lt;p&gt;Ref - &lt;a href="https://cybozu-go.github.io/moco/index.html" rel="noopener noreferrer"&gt;https://cybozu-go.github.io/moco/index.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k8s</category>
      <category>mysql</category>
      <category>moco</category>
    </item>
    <item>
      <title>What is Kured (KUbernetes REboot Daemon) in k8s?</title>
      <dc:creator>James Maina</dc:creator>
      <pubDate>Fri, 01 Mar 2024 11:34:08 +0000</pubDate>
      <link>https://forem.com/aws-builders/what-is-kured-kubernetes-reboot-daemon-in-k8s-40ab</link>
      <guid>https://forem.com/aws-builders/what-is-kured-kubernetes-reboot-daemon-in-k8s-40ab</guid>
      <description>&lt;p&gt;Defination from the &lt;a href="https://kured.dev/"&gt;official page&lt;/a&gt; states that &lt;strong&gt;kured&lt;/strong&gt; is a Kubernetes daemonset that performs safe automatic node reboots when the need to do so is indicated by the package management system of the underlying OS.&lt;/p&gt;

&lt;p&gt;By periodically rebooting nodes, Kured ensures that any pending updates or configuration changes take effect, resulting in a more efficient and reliable cluster.&lt;/p&gt;

&lt;p&gt;Here are the key points to note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kured monitors the operating system for security patches, kernel updates, and system-level changes in Kubernetes nodes. It proactively identifies the need for reboots to keep the cluster secure and up-to-date.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When a reboot is required, Kured gracefully cordons the node, marking it as unschedulable for new pods without disrupting existing ones. It then proceeds to drain the node, evicting existing pods in a controlled manner to ensure a smooth reboot process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kured includes built-in safety mechanisms to prevent unnecessary reboots and allows users to define maintenance windows for avoiding disruptions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The continuous monitoring by Kured ensures that the Kubernetes cluster operates with the latest updates, enhancing performance, security, and stability.&lt;br&gt;
Organizations can leverage Kubernetes clusters more effectively while minimizing risks associated with outdated software and configurations.&lt;/p&gt;

&lt;p&gt;Setting up Kured is a straightforward process that involves deploying it as a DaemonSet in the Kubernetes cluster. This deployment strategy ensures that Kured runs on every node within the cluster, effectively monitoring and managing the rebooting process for each individual node. &lt;/p&gt;

&lt;p&gt;Here is how you can do that:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ClusterRole for kured
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kured
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "patch"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "delete", "get"]
- apiGroups: ["apps"]
  resources: ["daemonsets"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods/eviction"]
  verbs: ["create"]

---
# ClusterRoleBinding for kured
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kured
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kured
subjects:
- kind: ServiceAccount
  name: kured
  namespace: kube-system

---
# Role for kured in kube-system namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: kube-system
  name: kured
rules:
- apiGroups: ["apps"]
  resources: ["daemonsets"]
  resourceNames: ["kured"]
  verbs: ["update"]

---
# RoleBinding for kured in kube-system namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: kube-system
  name: kured
subjects:
- kind: ServiceAccount
  namespace: kube-system
  name: kured
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kured

---
# ServiceAccount for kured
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kured
  namespace: kube-system

---
# DaemonSet for kured
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kured
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: kured
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: kured
    spec:
      serviceAccountName: kured
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      - key: "node-role.kubernetes.io/mysql"
        operator: "Equal"
        effect: "NoSchedule"
      hostPID: true
      restartPolicy: Always
      containers:
      - name: kured
        image: ghcr.io/kubereboot/kured:{{ kured_version }}
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        env:
        - name: KURED_NODE_ID
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        command:
        - /usr/bin/kured
        - --reboot-days=mon,tue,wed,thu
        - --reboot-delay=90s
        - --start-time=3am
        - --end-time=5am
        - --time-zone=UTC
        - --prometheus-url={{ prometheus_url }}
        - --alert-filter-regexp=^Watchdog$
        - --period=15m

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Explanation:&lt;br&gt;
The command-line flags below define the maintenance window and reboot behavior: &lt;code&gt;--reboot-days&lt;/code&gt; together with &lt;code&gt;--start-time&lt;/code&gt; and &lt;code&gt;--end-time&lt;/code&gt; restricts reboots to Monday through Thursday between 3am and 5am UTC, &lt;code&gt;--reboot-delay&lt;/code&gt; waits 90 seconds after draining before rebooting, &lt;code&gt;--period&lt;/code&gt; makes kured check for a pending reboot every 15 minutes, and &lt;code&gt;--prometheus-url&lt;/code&gt; with &lt;code&gt;--alert-filter-regexp&lt;/code&gt; blocks reboots while Prometheus alerts other than the always-firing &lt;code&gt;Watchdog&lt;/code&gt; are active.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  command:
        - /usr/bin/kured
        - --reboot-days=mon,tue,wed,thu
        - --reboot-delay=90s
        - --start-time=3am
        - --end-time=5am
        - --time-zone=UTC
        - --prometheus-url={{ prometheus_url }}
        - --alert-filter-regexp=^Watchdog$
        - --period=15m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
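
&lt;p&gt;On Debian/Ubuntu nodes, kured decides a reboot is needed when the sentinel file &lt;code&gt;/var/run/reboot-required&lt;/code&gt; exists. You can simulate a pending reboot on a node to verify the setup end to end:&lt;br&gt;
&lt;code&gt;$ sudo touch /var/run/reboot-required&lt;/code&gt;&lt;/p&gt;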



</description>
      <category>kubernetes</category>
      <category>kured</category>
      <category>devops</category>
    </item>
    <item>
      <title>Karpenter: The Better Autoscaling Solution for Kubernetes- Part 1</title>
      <dc:creator>James Maina</dc:creator>
      <pubDate>Sat, 25 Feb 2023 04:33:48 +0000</pubDate>
      <link>https://forem.com/aws-builders/karpenter-the-better-autoscaling-solution-for-kubernetes-part-1-4pd5</link>
      <guid>https://forem.com/aws-builders/karpenter-the-better-autoscaling-solution-for-kubernetes-part-1-4pd5</guid>
      <description>&lt;p&gt;If you're running Kubernetes, you're likely familiar with the standard cluster autoscaler. While it's a useful tool, it has its limitations. Enter Karpenter, an open-source autoscaling solution that offers many advantages over the standard cluster autoscaler. It is a flexible, high-performance Kubernetes cluster autoscaler built with AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Karpenter Works
&lt;/h2&gt;

&lt;p&gt;Karpenter takes a different approach to autoscaling than the standard cluster autoscaler. Instead of adding or removing nodes based on demand, Karpenter provisions nodes based on application requirements. This means that it can optimize resource utilization and reduce costs.&lt;/p&gt;

&lt;p&gt;Karpenter works by creating custom Kubernetes resources called "provisioners." Provisioners define the resources that Karpenter may provision, such as nodes. When pending pods need more capacity, Karpenter consults the provisioners to decide whether new nodes should be launched. If so, Karpenter creates the resources and adds them to the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7swc4blu56qrvxs7r7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7swc4blu56qrvxs7r7q.png" alt="karpenter."&gt;&lt;/a&gt;&lt;br&gt;
Image Courtesy.&lt;/p&gt;
&lt;h2&gt;
  
  
  Comparing Karpenter to Cluster Autoscaler
&lt;/h2&gt;

&lt;p&gt;Karpenter offers several advantages over the standard cluster autoscaler as detailed below.&lt;/p&gt;
&lt;h3&gt;
  
  
  Optimal Resource Utilization
&lt;/h3&gt;

&lt;p&gt;One of the most significant advantages of Karpenter is its ability to optimize resource utilization. It can do this by automatically provisioning nodes based on application needs. This means that you can avoid overprovisioning, which can lead to wasted resources and increased costs.&lt;/p&gt;
&lt;h3&gt;
  
  
  Customizable Scaling
&lt;/h3&gt;

&lt;p&gt;Karpenter offers the ability to customize scaling behaviors based on your specific needs. You can configure scaling based on metrics such as CPU or memory usage, or you can use your own custom metrics.&lt;/p&gt;
&lt;h3&gt;
  
  
  Cost Savings
&lt;/h3&gt;

&lt;p&gt;Because Karpenter optimizes resource utilization, it can lead to significant cost savings. By avoiding overprovisioning, you can reduce the number of nodes required to run your applications, which can result in lower cloud bills.&lt;/p&gt;
&lt;h3&gt;
  
  
  Ease of Use
&lt;/h3&gt;

&lt;p&gt;Karpenter is easy to use and deploy. It can be installed using Helm, and it integrates seamlessly with Kubernetes.&lt;/p&gt;

&lt;p&gt;The standard cluster autoscaler can be more challenging to set up, and it may require more manual configuration.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to Get Started with Karpenter
&lt;/h2&gt;

&lt;p&gt;There are different ways to get started with Karpenter. This article will just highlight the steps. &lt;strong&gt;(Watch out for part 2 with a step-by-step guide on how to install and configure Karpenter)&lt;/strong&gt; We will use a Helm chart to install Karpenter. Here are the steps to get it operational:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the KarpenterNode IAM Role - Instances launched by Karpenter must run with an InstanceProfile that grants permissions necessary to run containers and configure networking.&lt;/li&gt;
&lt;li&gt;Create the IAM role for Karpenter Controller -  Associate the Kubernetes Service Account and the IAM role using &lt;a href="https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-enable-IAM.html" rel="noopener noreferrer"&gt;IRSA&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Update aws-auth ConfigMap - to allow the nodes that use the KarpenterRole IAM Role to join the cluster&lt;/li&gt;
&lt;li&gt;Deploy Karpenter Helm Chart:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm template karpenter oci://public.ecr.aws/karpenter/karpenter --version ${KARPENTER_VERSION}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="5"&gt;
&lt;li&gt;Create a default Provisioner (example):
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  labels:
    intent: apps
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot"]
    - key: karpenter.k8s.aws/instance-size
      operator: NotIn
      values: [nano, micro, small, medium, large]
  limits:
    resources:
      cpu: 1000
      memory: 1000Gi
  ttlSecondsAfterEmpty: 30
  ttlSecondsUntilExpired: 2592000
  providerRef:
    name: default
---
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  subnetSelector:
    alpha.eksctl.io/cluster-name: ${CLUSTER_NAME}
  securityGroupSelector:
    alpha.eksctl.io/cluster-name: ${CLUSTER_NAME}
  tags:
    KarpenterProvisionerName: "default"
    NodeType: "karpenter-workshop"
    IntentLabel: "apps"
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once you've installed Karpenter, you can begin using it to optimize your Kubernetes cluster's resource utilization and reduce costs.&lt;/p&gt;
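
&lt;p&gt;A quick way to see Karpenter react is to scale up a deliberately oversized Deployment and watch it launch nodes. This &lt;code&gt;pause&lt;/code&gt;-image example follows the pattern used in Karpenter's getting-started guide; the name &lt;code&gt;inflate&lt;/code&gt; and the 1-CPU request are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
      - name: inflate
        # Pause container does nothing but reserve the requested CPU
        image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
        resources:
          requests:
            cpu: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then scale it up and watch the controller logs (adjust the namespace to wherever Karpenter is installed):&lt;br&gt;
&lt;code&gt;$ kubectl scale deployment inflate --replicas 5&lt;br&gt;
$ kubectl logs -f -n karpenter -l app.kubernetes.io/name=karpenter&lt;/code&gt;&lt;/p&gt;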
&lt;h2&gt;
  
  
  &lt;strong&gt;Provisioner configuration&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Karpenter configuration comes in the form of a Provisioner CRD (Custom Resource Definition). A single Karpenter provisioner can handle many different pod shapes, and you can tune it to your needs. For example:&lt;/p&gt;

&lt;p&gt;To limit Karpenter to either On-Demand or Spot instances, set the &lt;code&gt;karpenter.sh/capacity-type&lt;/code&gt; requirement in the &lt;code&gt;Provisioner&lt;/code&gt; definition. Here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  labels:
    type: karpenter
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]
    # - key: karpenter.sh/capacity-type
    #   operator: In
    #   values: ["spot"]
    - key: "node.kubernetes.io/instance-type"
      operator: In
      values: ["c5.large", "m5.large", "m5.xlarge"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example we're setting the &lt;code&gt;karpenter.sh/capacity-type&lt;/code&gt; requirement to limit Karpenter to provisioning On-Demand instances, and &lt;code&gt;node.kubernetes.io/instance-type&lt;/code&gt; to limit it to specific instance types.&lt;/p&gt;

&lt;p&gt;You can also limit Karpenter to specific instance types, regions, and zones using the &lt;code&gt;instanceTypes&lt;/code&gt; field in the &lt;code&gt;provisioner&lt;/code&gt; definition. Here's an example using the early &lt;code&gt;v1alpha1&lt;/code&gt; schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: karpenter.sh/v1alpha1
kind: Provisioner
metadata:
  name: example-provisioner
spec:
  constraints:
    - type: "awsec2"
      region: "us-west-2"
      zones:
        - "a"
      instanceTypes:
        - "t2.micro"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the &lt;code&gt;instanceTypes&lt;/code&gt; field is set to &lt;code&gt;t2.micro&lt;/code&gt;, which means that the provisioner will only use &lt;code&gt;t2.micro&lt;/code&gt; instances. You can add additional instance types to the list if you want to allow for more flexibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations
&lt;/h2&gt;

&lt;p&gt;While Karpenter offers many advantages over the standard cluster autoscaler, it also has some limitations. Some of the key limitations include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Limited support for custom metrics: While Karpenter does offer support for custom metrics, it is more limited than some other solutions. This can make it challenging to implement certain types of custom scaling behaviors.&lt;/li&gt;
&lt;li&gt;Lack of integration with some cloud providers: Karpenter is designed to work with Kubernetes, but it may not integrate seamlessly with all cloud providers. This can make it more challenging to deploy Karpenter in certain environments.&lt;/li&gt;
&lt;li&gt;Complexity: Karpenter is a powerful tool, but it can also be complex to configure and use. It may require more expertise and resources than some other scaling solutions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Despite these limitations, Karpenter is still an excellent choice for many Kubernetes users. Its ability to optimize resource utilization, customize scaling behaviors, and reduce costs make it a compelling option for many organizations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Karpenter is a powerful autoscaling solution that offers many advantages over the standard cluster autoscaler. With its ability to optimize resource utilization, customized scaling, and ease of use, Karpenter is a must-have tool for Kubernetes users. Follow the steps above to get started with Karpenter today!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stay tuned for Part 2 of this blog, where we'll provide a step-by-step guide on how to install and configure Karpenter. Part 2 will be more technical, so if you're interested in getting started with Karpenter, be sure to check it out!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt; &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/aws/introducing-karpenter-an-open-source-high-performance-kubernetes-cluster-autoscaler/" rel="noopener noreferrer"&gt;&lt;strong&gt;Introducing Karpenter an open source high performance Kubernetes cluster autoscaler&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://karpenter.sh/" rel="noopener noreferrer"&gt;&lt;strong&gt;Karpenter&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Follow me at &lt;a href="https://www.linkedin.com/in/mucheruj/" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt; and &lt;a href="https://twitter.com/mucheeru" rel="noopener noreferrer"&gt;&lt;strong&gt;Twitter&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>eks</category>
    </item>
  </channel>
</rss>
