<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: suin</title>
    <description>The latest articles on Forem by suin (@suin).</description>
    <link>https://forem.com/suin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F41504%2F86ac5e4c-57c1-483c-8d3c-4a4b4f889d55.jpg</url>
      <title>Forem: suin</title>
      <link>https://forem.com/suin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/suin"/>
    <language>en</language>
    <item>
      <title>Kubernetes: Should You Name Your Controller "foo-bar" or "foobar"? A Survey of 13 Open-Source Projects</title>
      <dc:creator>suin</dc:creator>
      <pubDate>Tue, 17 Feb 2026 23:56:55 +0000</pubDate>
      <link>https://forem.com/suin/kubernetes-should-you-name-your-controller-foo-bar-or-foobar-a-survey-of-13-open-source-h1c</link>
      <guid>https://forem.com/suin/kubernetes-should-you-name-your-controller-foo-bar-or-foobar-a-survey-of-13-open-source-h1c</guid>
      <description>&lt;p&gt;In this post, I'll share the results of surveying the source code of &lt;strong&gt;13 major open-source projects&lt;/strong&gt; to answer a question that often comes up when building a Kubernetes Operator: how should you name controllers for multi-word resource types?&lt;/p&gt;

&lt;p&gt;When you have a CRD (Custom Resource Definition) type like &lt;code&gt;CertificateRequest&lt;/code&gt; or &lt;code&gt;MachineDeployment&lt;/code&gt;, should the controller be called &lt;code&gt;foo-bar-controller&lt;/code&gt; (hyphenated) or &lt;code&gt;foobar-controller&lt;/code&gt; (concatenated lowercase)? There's no official guidance on this. So I dug into real-world, widely used OSS implementations to find out what the de facto standard actually looks like.&lt;/p&gt;

&lt;h2&gt;What you'll learn from this post&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;How multi-word CRD types are actually named across the ecosystem&lt;/li&gt;
&lt;li&gt;The conventions 13 OSS projects use for controller names, finalizers, field managers, and more&lt;/li&gt;
&lt;li&gt;Cross-project trend analysis revealing the most common patterns&lt;/li&gt;
&lt;li&gt;The convention adopted by a Kubernetes SIG project — the closest thing to an "official" standard&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Methodology and scope&lt;/h2&gt;

&lt;p&gt;For this survey, I cloned the repositories of 13 Kubernetes-related OSS projects and examined their naming conventions at the source code level. These findings are based on actual code, not documentation.&lt;/p&gt;

&lt;p&gt;The projects surveyed are listed below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/cert-manager/cert-manager" rel="noopener noreferrer"&gt;cert-manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/argoproj/argo-cd" rel="noopener noreferrer"&gt;Argo CD&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/istio/istio" rel="noopener noreferrer"&gt;Istio&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/crossplane/crossplane" rel="noopener noreferrer"&gt;Crossplane&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/knative/serving" rel="noopener noreferrer"&gt;Knative&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/strimzi/strimzi-kafka-operator" rel="noopener noreferrer"&gt;Strimzi Kafka Operator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/prometheus-operator/prometheus-operator" rel="noopener noreferrer"&gt;Prometheus Operator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/vmware-tanzu/velero" rel="noopener noreferrer"&gt;Velero&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/tektoncd/pipeline" rel="noopener noreferrer"&gt;Tekton Pipelines&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/fluxcd/source-controller" rel="noopener noreferrer"&gt;Flux CD&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes-sigs/cluster-api" rel="noopener noreferrer"&gt;Cluster API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubevirt/kubevirt" rel="noopener noreferrer"&gt;KubeVirt&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kyverno/kyverno" rel="noopener noreferrer"&gt;Kyverno&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I looked at six dimensions for each project.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go file and directory names&lt;/li&gt;
&lt;li&gt;Controller name strings&lt;/li&gt;
&lt;li&gt;Finalizer names (the mechanism that controls cleanup before a resource is deleted)&lt;/li&gt;
&lt;li&gt;Field manager names (identifiers for field ownership in Server-Side Apply)&lt;/li&gt;
&lt;li&gt;Logger context values&lt;/li&gt;
&lt;li&gt;Reconciler struct names&lt;/li&gt;
&lt;/ul&gt;
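&lt;p&gt;To make these dimensions concrete, here is a minimal Go sketch (my own illustration, not code from any surveyed project) that derives several of these artifact names from a CRD Kind under the concatenated-lowercase convention that, as we'll see, dominates the survey. The helper name and the naive "+s" pluralization for the finalizer are assumptions for illustration.&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// deriveNames sketches the concatenated-lowercase convention:
// the PascalCase Kind is lowered with no separators, and hyphens
// appear only before functional suffixes like "-controller".
// The finalizer uses a naive plural ("+s") joined to the API group,
// in the style Knative uses (e.g. "domainmappings.serving.knative.dev").
func deriveNames(kind, group string) map[string]string {
	flat := strings.ToLower(kind) // "DomainMapping" -> "domainmapping"
	return map[string]string{
		"directory":  flat + "/",
		"controller": flat + "-controller",
		"finalizer":  flat + "s." + group,
		"loggerKey":  flat,
	}
}

func main() {
	names := deriveNames("DomainMapping", "serving.knative.dev")
	fmt.Println(names["controller"]) // domainmapping-controller
	fmt.Println(names["finalizer"])  // domainmappings.serving.knative.dev
}
```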




&lt;h2&gt;Master comparison table&lt;/h2&gt;

&lt;p&gt;Let's start with a side-by-side comparison of all 13 projects.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Project&lt;/th&gt;
&lt;th&gt;Resource Type&lt;/th&gt;
&lt;th&gt;Go File / Dir Name&lt;/th&gt;
&lt;th&gt;Controller Name&lt;/th&gt;
&lt;th&gt;Finalizer&lt;/th&gt;
&lt;th&gt;Field Manager&lt;/th&gt;
&lt;th&gt;Logger Context&lt;/th&gt;
&lt;th&gt;Reconciler Struct&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;cert-manager&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CertificateRequest&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;certificaterequests/&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"certificaterequests-issuer-acme"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"finalizer.acme.cert-manager.io"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"cert-manager-&amp;lt;component&amp;gt;"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"certificaterequests-issuer-acme"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Controller&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Argo CD&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ApplicationSet&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;applicationset_controller.go&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"argocd-application-controller"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"resources-finalizer.argocd.argoproj.io"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Generic (engine-level)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"applicationset"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ApplicationSetReconciler&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Istio&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;VirtualService&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Custom framework&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"istio.io/gateway-controller"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"pilot-discovery"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"KubernetesGateway"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;autoServiceExportController&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Crossplane&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CompositeResourceDefinition&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;definition/&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"defined/compositeresourcedefinition..."&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"defined.apiextensions.crossplane.io"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"apiextensions.crossplane.io/composite"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Same as controller name&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Reconciler&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Knative&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;DomainMapping&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;domainmapping/&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"domainmapping-controller"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"domainmappings.serving.knative.dev"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Configurable (no default)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"serving.knative.dev.DomainMapping"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Reconciler&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Strimzi&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;KafkaConnect&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;KafkaConnectAssemblyOperator.java&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"KafkaConnect"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;N/A (uses owner refs)&lt;/td&gt;
&lt;td&gt;N/A (Java)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"KafkaConnect(ns/name)"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;KafkaConnectAssemblyOperator&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Prometheus Op&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;PrometheusAgent&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;pkg/prometheus/agent/operator.go&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"prometheusagent-controller"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"monitoring.coreos.com/status-cleanup"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"PrometheusOperator"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"prometheusagent-controller"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Operator&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Velero&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;BackupStorageLocation&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;backup_storage_location_controller.go&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"backup-storage-location"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"velero.io/pod-volume-finalizer"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"velero-cli"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"PodVolumeBackup"&lt;/code&gt; (mixed)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;backupStorageLocationReconciler&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Tekton&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;PipelineRun&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;pipelinerun/&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"PipelineRun"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"chains.tekton.dev/finalizer"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Knative defaults&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"pipelinerun-reconciler"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Reconciler&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Flux CD&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;HelmRelease&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;helmrelease_controller.go&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"helm-controller"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"finalizers.fluxcd.io"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"helm-controller"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Auto (controller-runtime)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;HelmReleaseReconciler&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cluster API&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;MachineDeployment&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;machinedeployment_controller.go&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"machinedeployment"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"cluster.x-k8s.io/machinedeployment"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"capi-machinedeployment"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"machinedeployment"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Reconciler&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;KubeVirt&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;VirtualMachine&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;vm/vm.go&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"virt-controller-vm"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"kubevirt.io/virtualMachineControllerFinalize"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;N/A (no SSA)&lt;/td&gt;
&lt;td&gt;N/A (custom)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Controller&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Kyverno&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ClusterCleanupPolicy&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;cleanup/controller.go&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"cleanup-controller"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"kyverno-{suffix}"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ControllerLogger(name)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;controller&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Even at a glance, the concatenated lowercase pattern (&lt;code&gt;foobar&lt;/code&gt;) stands out. Let's now look at each project in detail.&lt;/p&gt;




&lt;h2&gt;Project-by-project breakdown&lt;/h2&gt;

&lt;h3&gt;1. cert-manager&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/cert-manager/cert-manager" rel="noopener noreferrer"&gt;cert-manager&lt;/a&gt; automates TLS certificate management within Kubernetes clusters. Its multi-word CRD types include &lt;code&gt;CertificateRequest&lt;/code&gt;, &lt;code&gt;ClusterIssuer&lt;/code&gt;, and &lt;code&gt;CertificateSigningRequest&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Artifact&lt;/th&gt;
&lt;th&gt;Convention&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Go directory name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;certificaterequests/&lt;/code&gt;, &lt;code&gt;certificatesigningrequests/&lt;/code&gt;, &lt;code&gt;clusterissuers/&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase + hyphenated suffix&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"certificaterequests-issuer-acme"&lt;/code&gt;, &lt;code&gt;"certificaterequests-approver"&lt;/code&gt;, &lt;code&gt;"clusterissuers"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Finalizer&lt;/td&gt;
&lt;td&gt;Domain-based (not resource-specific)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"finalizer.acme.cert-manager.io"&lt;/code&gt;, &lt;code&gt;"acme.cert-manager.io/finalizer"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Field manager&lt;/td&gt;
&lt;td&gt;Derived from UserAgent&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"cert-manager-&amp;lt;component&amp;gt;"&lt;/code&gt; (e.g., &lt;code&gt;"cert-manager-test"&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logger context&lt;/td&gt;
&lt;td&gt;Same as controller name&lt;/td&gt;
&lt;td&gt;&lt;code&gt;logf.FromContext(ctx, "clusterissuers")&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reconciler struct&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;Controller&lt;/code&gt; per package&lt;/td&gt;
&lt;td&gt;Unexported &lt;code&gt;controller&lt;/code&gt; or exported &lt;code&gt;Controller&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The key takeaway here is that the multi-word CRD Kind (&lt;code&gt;CertificateRequest&lt;/code&gt;) is transformed into &lt;strong&gt;concatenated lowercase&lt;/strong&gt; (&lt;code&gt;certificaterequests&lt;/code&gt;). Hyphens only appear when separating functional suffixes like &lt;code&gt;-issuer-acme&lt;/code&gt; or &lt;code&gt;-approver&lt;/code&gt;. There is one exception: a directory named &lt;code&gt;certificate-shim/&lt;/code&gt; uses a hyphen.&lt;/p&gt;

&lt;h3&gt;2. Argo CD&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/argoproj/argo-cd" rel="noopener noreferrer"&gt;Argo CD&lt;/a&gt; enables GitOps-based continuous delivery to Kubernetes. &lt;code&gt;ApplicationSet&lt;/code&gt; and &lt;code&gt;AppProject&lt;/code&gt; are its multi-word CRD types.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Artifact&lt;/th&gt;
&lt;th&gt;Convention&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Go file name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase + &lt;code&gt;_controller&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;applicationset_controller.go&lt;/code&gt;, &lt;code&gt;appcontroller.go&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller name&lt;/td&gt;
&lt;td&gt;Hyphenated&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"argocd-application-controller"&lt;/code&gt;, &lt;code&gt;"argocd-applicationset-controller"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Finalizer&lt;/td&gt;
&lt;td&gt;Hyphenated + domain&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"resources-finalizer.argocd.argoproj.io"&lt;/code&gt;, &lt;code&gt;"pre-delete-finalizer.argocd.argoproj.io"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logger context&lt;/td&gt;
&lt;td&gt;Concatenated lowercase field key&lt;/td&gt;
&lt;td&gt;&lt;code&gt;log.WithField("applicationset", req.NamespacedName)&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reconciler struct&lt;/td&gt;
&lt;td&gt;PascalCase&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;ApplicationSetReconciler&lt;/code&gt;, &lt;code&gt;ApplicationController&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Deployment names use hyphenation (&lt;code&gt;argocd-application-controller&lt;/code&gt;), but Go file names and logger keys use concatenated lowercase (&lt;code&gt;applicationset&lt;/code&gt;).&lt;/p&gt;

&lt;h3&gt;3. Istio&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/istio/istio" rel="noopener noreferrer"&gt;Istio&lt;/a&gt; provides a service mesh. Its CRD types include &lt;code&gt;VirtualService&lt;/code&gt;, &lt;code&gt;DestinationRule&lt;/code&gt;, &lt;code&gt;ServiceEntry&lt;/code&gt;, and &lt;code&gt;WorkloadEntry&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Artifact&lt;/th&gt;
&lt;th&gt;Convention&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Go file name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;autoserviceexportcontroller.go&lt;/code&gt;, &lt;code&gt;deploymentcontroller.go&lt;/code&gt;, &lt;code&gt;configcontroller.go&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller name&lt;/td&gt;
&lt;td&gt;Domain + hyphenated&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"istio.io/gateway-controller"&lt;/code&gt;, &lt;code&gt;"istio.io/inference-pool-controller"&lt;/code&gt;, &lt;code&gt;"istio.io/mesh-controller"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Field manager&lt;/td&gt;
&lt;td&gt;Hyphenated&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"pilot-discovery"&lt;/code&gt;, &lt;code&gt;"istio-operator"&lt;/code&gt;, &lt;code&gt;"istio.io/inference-pool-controller"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Schema plurals&lt;/td&gt;
&lt;td&gt;Concatenated lowercase&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"virtualservices"&lt;/code&gt;, &lt;code&gt;"destinationrules"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Log collection names&lt;/td&gt;
&lt;td&gt;PascalCase&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"KubernetesGateway"&lt;/code&gt;, &lt;code&gt;"HTTPRoute"&lt;/code&gt;, &lt;code&gt;"GatewayClasses"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reconciler struct&lt;/td&gt;
&lt;td&gt;camelCase&lt;/td&gt;
&lt;td&gt;&lt;code&gt;autoServiceExportController&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Istio uses its own custom controller framework. File names are concatenated lowercase, but controller identifier strings use hyphenation. Log collection names preserve PascalCase as-is, which is also notable.&lt;/p&gt;

&lt;h3&gt;4. Crossplane&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/crossplane/crossplane" rel="noopener noreferrer"&gt;Crossplane&lt;/a&gt; manages cloud infrastructure through Kubernetes. It has particularly long CRD type names like &lt;code&gt;CompositeResourceDefinition&lt;/code&gt; and &lt;code&gt;ManagedResourceActivationPolicy&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Artifact&lt;/th&gt;
&lt;th&gt;Convention&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Go directory name&lt;/td&gt;
&lt;td&gt;Semantic / abbreviated&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;definition/&lt;/code&gt;, &lt;code&gt;activationpolicy/&lt;/code&gt;, &lt;code&gt;watchoperation/&lt;/code&gt;, &lt;code&gt;cronoperation/&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller name&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"prefix/lowercasedKind.group"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"defined/compositeresourcedefinition.apiextensions.crossplane.io"&lt;/code&gt;, &lt;code&gt;"mrap/managedresourceactivationpolicy"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Finalizer&lt;/td&gt;
&lt;td&gt;Concatenated lowercase as subdomain&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"defined.apiextensions.crossplane.io"&lt;/code&gt;, &lt;code&gt;"watchoperation.ops.crossplane.io"&lt;/code&gt;, &lt;code&gt;"composite.apiextensions.crossplane.io"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Field manager&lt;/td&gt;
&lt;td&gt;Domain path&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"apiextensions.crossplane.io/composite"&lt;/code&gt;, &lt;code&gt;"apiextensions.crossplane.io/managed"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logger context&lt;/td&gt;
&lt;td&gt;Controller name value&lt;/td&gt;
&lt;td&gt;&lt;code&gt;WithValues("controller", controllerName)&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reconciler struct&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;Reconciler&lt;/code&gt; per package&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Reconciler&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Controller names use the &lt;strong&gt;lowercased Kind directly&lt;/strong&gt; with no separators at all (&lt;code&gt;compositeresourcedefinition&lt;/code&gt;, &lt;code&gt;managedresourceactivationpolicy&lt;/code&gt;). Finalizers similarly use concatenated lowercase as subdomains.&lt;/p&gt;
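&lt;p&gt;A small Go sketch of Crossplane's &lt;code&gt;"prefix/lowercasedKind.group"&lt;/code&gt; pattern (my own illustration; the function name is an assumption, but the output matches the surveyed string):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// crossplaneControllerName sketches the "prefix/lowercasedKind.group"
// pattern: the Kind is lowercased with no separators and joined to the
// API group, behind a short semantic prefix.
func crossplaneControllerName(prefix, kind, group string) string {
	return fmt.Sprintf("%s/%s.%s", prefix, strings.ToLower(kind), group)
}

func main() {
	fmt.Println(crossplaneControllerName(
		"defined", "CompositeResourceDefinition", "apiextensions.crossplane.io"))
	// defined/compositeresourcedefinition.apiextensions.crossplane.io
}
```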

&lt;h3&gt;5. Knative&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/knative/serving" rel="noopener noreferrer"&gt;Knative&lt;/a&gt; is a platform for running serverless workloads on Kubernetes. Its multi-word CRD types include &lt;code&gt;DomainMapping&lt;/code&gt;, &lt;code&gt;ServerlessService&lt;/code&gt;, &lt;code&gt;PodAutoscaler&lt;/code&gt;, &lt;code&gt;KnativeServing&lt;/code&gt;, and &lt;code&gt;KnativeEventing&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Artifact&lt;/th&gt;
&lt;th&gt;Convention&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Go directory name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;domainmapping/&lt;/code&gt;, &lt;code&gt;serverlessservice/&lt;/code&gt;, &lt;code&gt;knativeserving/&lt;/code&gt;, &lt;code&gt;knativeeventing/&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase + &lt;code&gt;-controller&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"domainmapping-controller"&lt;/code&gt;, &lt;code&gt;"serverlessservice-controller"&lt;/code&gt;, &lt;code&gt;"podautoscaler-controller"&lt;/code&gt;, &lt;code&gt;"knativeserving-controller"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Finalizer&lt;/td&gt;
&lt;td&gt;Concatenated lowercase plural + API group&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"domainmappings.serving.knative.dev"&lt;/code&gt;, &lt;code&gt;"serverlessservices.networking.internal.knative.dev"&lt;/code&gt;, &lt;code&gt;"knativeservings.operator.knative.dev"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Log Kind&lt;/td&gt;
&lt;td&gt;PascalCase&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"serving.knative.dev.DomainMapping"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reconciler struct&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;Reconciler&lt;/code&gt; per package (code-generated)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;reconcilerImpl&lt;/code&gt; (generated), &lt;code&gt;Reconciler&lt;/code&gt; (user-implemented)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;What makes Knative stand out is that its code generator enforces a consistent pattern. Agent names follow the &lt;code&gt;lowercased-kind-controller&lt;/code&gt; format uniformly. This is the most programmatic and consistent approach among all 13 projects.&lt;/p&gt;

&lt;h3&gt;6. Strimzi Kafka Operator&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/strimzi/strimzi-kafka-operator" rel="noopener noreferrer"&gt;Strimzi&lt;/a&gt; is an Operator for running Apache Kafka on Kubernetes. It is implemented in Java and has CRD types such as &lt;code&gt;KafkaConnect&lt;/code&gt;, &lt;code&gt;KafkaMirrorMaker2&lt;/code&gt;, &lt;code&gt;KafkaBridge&lt;/code&gt;, &lt;code&gt;KafkaRebalance&lt;/code&gt;, and &lt;code&gt;KafkaNodePool&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Artifact&lt;/th&gt;
&lt;th&gt;Convention&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Java class name&lt;/td&gt;
&lt;td&gt;PascalCase + &lt;code&gt;AssemblyOperator&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;KafkaConnectAssemblyOperator.java&lt;/code&gt;, &lt;code&gt;KafkaMirrorMaker2AssemblyOperator.java&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource Kind&lt;/td&gt;
&lt;td&gt;PascalCase&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"KafkaConnect"&lt;/code&gt;, &lt;code&gt;"KafkaMirrorMaker2"&lt;/code&gt;, &lt;code&gt;"KafkaBridge"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource plural&lt;/td&gt;
&lt;td&gt;Concatenated lowercase&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"kafkaconnects"&lt;/code&gt;, &lt;code&gt;"kafkamirrormaker2s"&lt;/code&gt;, &lt;code&gt;"kafkanodepools"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Labels&lt;/td&gt;
&lt;td&gt;PascalCase values&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"strimzi.io/kind": "KafkaConnect"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lock names&lt;/td&gt;
&lt;td&gt;PascalCase Kind&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"lock::ns::KafkaConnect::name"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Log markers&lt;/td&gt;
&lt;td&gt;PascalCase&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"KafkaConnect(namespace/name)"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Strimzi uses PascalCase (the Kind itself) in most runtime contexts. Plural forms are concatenated lowercase (&lt;code&gt;kafkaconnects&lt;/code&gt;), following the standard Kubernetes API convention.&lt;/p&gt;
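&lt;p&gt;The plural derivation can be sketched in a few lines of Go (my own illustration: lowercase the Kind and append &lt;code&gt;s&lt;/code&gt;; this naive rule matches the surveyed values, though real Kubernetes API machinery also handles irregular plurals such as &lt;code&gt;Ingress&lt;/code&gt; → &lt;code&gt;ingresses&lt;/code&gt;):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// pluralOf sketches the standard Kubernetes resource plural:
// concatenated-lowercase Kind plus "s". Naive by design; irregular
// plurals (e.g. Ingress -> ingresses) need special-casing.
func pluralOf(kind string) string {
	return strings.ToLower(kind) + "s"
}

func main() {
	fmt.Println(pluralOf("KafkaConnect"))      // kafkaconnects
	fmt.Println(pluralOf("KafkaMirrorMaker2")) // kafkamirrormaker2s
}
```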

&lt;h3&gt;7. Prometheus Operator&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/prometheus-operator/prometheus-operator" rel="noopener noreferrer"&gt;Prometheus Operator&lt;/a&gt; automates Prometheus operations on Kubernetes. Its CRD types include &lt;code&gt;ServiceMonitor&lt;/code&gt;, &lt;code&gt;PodMonitor&lt;/code&gt;, &lt;code&gt;ThanosRuler&lt;/code&gt;, &lt;code&gt;PrometheusAgent&lt;/code&gt;, &lt;code&gt;ScrapeConfig&lt;/code&gt;, and &lt;code&gt;AlertmanagerConfig&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Artifact&lt;/th&gt;
&lt;th&gt;Convention&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Go directory / file&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;operator.go&lt;/code&gt; per package&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;pkg/alertmanager/operator.go&lt;/code&gt;, &lt;code&gt;pkg/prometheus/agent/operator.go&lt;/code&gt;, &lt;code&gt;pkg/thanos/operator.go&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase + &lt;code&gt;-controller&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"prometheusagent-controller"&lt;/code&gt;, &lt;code&gt;"alertmanager-controller"&lt;/code&gt;, &lt;code&gt;"thanos-controller"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Finalizer&lt;/td&gt;
&lt;td&gt;Shared (not type-specific)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"monitoring.coreos.com/status-cleanup"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Field manager&lt;/td&gt;
&lt;td&gt;Shared PascalCase&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"PrometheusOperator"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;managed-by label&lt;/td&gt;
&lt;td&gt;Hyphenated&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"prometheus-operator"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logger context&lt;/td&gt;
&lt;td&gt;Controller name value&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"component": "prometheusagent-controller"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reconciler struct&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;Operator&lt;/code&gt; per package&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;alertmanager.Operator&lt;/code&gt;, &lt;code&gt;thanos.Operator&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;What's notable here is how multi-word CRD types are concatenated. &lt;code&gt;PrometheusAgent&lt;/code&gt; becomes &lt;code&gt;"prometheusagent-controller"&lt;/code&gt; — not &lt;code&gt;"prometheus-agent-controller"&lt;/code&gt;. Additionally, &lt;code&gt;ThanosRuler&lt;/code&gt; is shortened to just &lt;code&gt;"thanos-controller"&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;8. Velero&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/vmware-tanzu/velero" rel="noopener noreferrer"&gt;Velero&lt;/a&gt; handles backup and restore for Kubernetes clusters. Its CRD types include &lt;code&gt;BackupStorageLocation&lt;/code&gt;, &lt;code&gt;VolumeSnapshotLocation&lt;/code&gt;, &lt;code&gt;ServerStatusRequest&lt;/code&gt;, &lt;code&gt;PodVolumeBackup&lt;/code&gt;, &lt;code&gt;PodVolumeRestore&lt;/code&gt;, &lt;code&gt;DataDownload&lt;/code&gt;, &lt;code&gt;DataUpload&lt;/code&gt;, and &lt;code&gt;DownloadRequest&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Artifact&lt;/th&gt;
&lt;th&gt;Convention&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Go file name&lt;/td&gt;
&lt;td&gt;Snake case&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;backup_storage_location_controller.go&lt;/code&gt;, &lt;code&gt;pod_volume_backup_controller.go&lt;/code&gt;, &lt;code&gt;server_status_request_controller.go&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller name&lt;/td&gt;
&lt;td&gt;Hyphenated (kebab-case)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"backup-storage-location"&lt;/code&gt;, &lt;code&gt;"pod-volume-backup"&lt;/code&gt;, &lt;code&gt;"server-status-request"&lt;/code&gt;, &lt;code&gt;"data-download"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Finalizer&lt;/td&gt;
&lt;td&gt;Hyphenated + domain&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"velero.io/pod-volume-finalizer"&lt;/code&gt;, &lt;code&gt;"velero.io/data-upload-download-finalizer"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Field manager&lt;/td&gt;
&lt;td&gt;Per-binary&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"velero-cli"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logger context&lt;/td&gt;
&lt;td&gt;Mixed&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"controller": "PodVolumeBackup"&lt;/code&gt; (PascalCase) in constructors vs. &lt;code&gt;"controller": "podvolumebackup"&lt;/code&gt; (concatenated) in Reconcile&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reconciler struct&lt;/td&gt;
&lt;td&gt;Mixed export&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;backupStorageLocationReconciler&lt;/code&gt; (unexported), &lt;code&gt;PodVolumeBackupReconciler&lt;/code&gt; (exported)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Velero is &lt;strong&gt;the only project among all 13 that consistently uses hyphenated word splitting for controller names&lt;/strong&gt;. Each word from the PascalCase type is separated by a hyphen. It is also alone in using snake case for Go file names.&lt;/p&gt;

&lt;h3&gt;
  
  
  9. Tekton Pipelines
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/tektoncd/pipeline" rel="noopener noreferrer"&gt;Tekton&lt;/a&gt; is a framework for building CI/CD pipelines on Kubernetes. Its CRD types include &lt;code&gt;PipelineRun&lt;/code&gt;, &lt;code&gt;TaskRun&lt;/code&gt;, &lt;code&gt;CustomRun&lt;/code&gt;, and &lt;code&gt;ResolutionRequest&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Artifact&lt;/th&gt;
&lt;th&gt;Convention&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Go directory name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;pipelinerun/&lt;/code&gt;, &lt;code&gt;taskrun/&lt;/code&gt;, &lt;code&gt;customrun/&lt;/code&gt;, &lt;code&gt;resolutionrequest/&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller name (AgentName)&lt;/td&gt;
&lt;td&gt;PascalCase (Kind as-is)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"PipelineRun"&lt;/code&gt;, &lt;code&gt;"TaskRun"&lt;/code&gt;, &lt;code&gt;"CustomRun"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Finalizer&lt;/td&gt;
&lt;td&gt;Domain-based&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"chains.tekton.dev/finalizer"&lt;/code&gt; (from Tekton Chains)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;managed-by&lt;/td&gt;
&lt;td&gt;Domain path&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"tekton.dev/pipeline"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tracer name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase + &lt;code&gt;-reconciler&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"pipelinerun-reconciler"&lt;/code&gt;, &lt;code&gt;"taskrun-reconciler"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reconciler struct&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;Reconciler&lt;/code&gt; per package&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;Reconciler&lt;/code&gt; (distinguished by package)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Tekton takes the unique approach of using the PascalCase Kind directly as the controller agent name: &lt;code&gt;"PipelineRun"&lt;/code&gt;, not &lt;code&gt;"pipelinerun"&lt;/code&gt; or &lt;code&gt;"pipeline-run"&lt;/code&gt;. Directory names and tracer names, on the other hand, use concatenated lowercase.&lt;/p&gt;

&lt;h3&gt;
  
  
  10. Flux CD
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/fluxcd/source-controller" rel="noopener noreferrer"&gt;Flux CD&lt;/a&gt; is a GitOps toolkit. Its CRD types include &lt;code&gt;GitRepository&lt;/code&gt;, &lt;code&gt;HelmRelease&lt;/code&gt;, &lt;code&gt;HelmChart&lt;/code&gt;, &lt;code&gt;OCIRepository&lt;/code&gt;, &lt;code&gt;HelmRepository&lt;/code&gt;, and &lt;code&gt;ImageUpdateAutomation&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Artifact&lt;/th&gt;
&lt;th&gt;Convention&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Go file name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase + &lt;code&gt;_controller.go&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;helmrelease_controller.go&lt;/code&gt;, &lt;code&gt;gitrepository_controller.go&lt;/code&gt;, &lt;code&gt;ocirepository_controller.go&lt;/code&gt;, &lt;code&gt;imageupdateautomation_controller.go&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller name&lt;/td&gt;
&lt;td&gt;Per-binary (not per-CRD)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"source-controller"&lt;/code&gt;, &lt;code&gt;"helm-controller"&lt;/code&gt;, &lt;code&gt;"image-automation-controller"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Finalizer&lt;/td&gt;
&lt;td&gt;Shared across all types&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"finalizers.fluxcd.io"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Field manager&lt;/td&gt;
&lt;td&gt;Binary name&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"source-controller"&lt;/code&gt;, &lt;code&gt;"helm-controller"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reconciler struct&lt;/td&gt;
&lt;td&gt;PascalCase + Reconciler&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;HelmReleaseReconciler&lt;/code&gt;, &lt;code&gt;GitRepositoryReconciler&lt;/code&gt;, &lt;code&gt;OCIRepositoryReconciler&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Flux doesn't assign per-CRD controller names at all. The controller name is simply the binary name (the executable itself). File names use concatenated lowercase.&lt;/p&gt;

&lt;h3&gt;
  
  
  11. Cluster API (CAPI)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/kubernetes-sigs/cluster-api" rel="noopener noreferrer"&gt;Cluster API&lt;/a&gt; is a Kubernetes SIG project for managing cluster lifecycles. It has a large number of multi-word CRD types: &lt;code&gt;MachineDeployment&lt;/code&gt;, &lt;code&gt;MachineSet&lt;/code&gt;, &lt;code&gt;MachineHealthCheck&lt;/code&gt;, &lt;code&gt;ClusterClass&lt;/code&gt;, &lt;code&gt;ClusterResourceSet&lt;/code&gt;, &lt;code&gt;ClusterResourceSetBinding&lt;/code&gt;, &lt;code&gt;MachinePool&lt;/code&gt;, and more.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Artifact&lt;/th&gt;
&lt;th&gt;Convention&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Go file name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase + &lt;code&gt;_controller.go&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;machinedeployment_controller.go&lt;/code&gt;, &lt;code&gt;machinehealthcheck_controller.go&lt;/code&gt;, &lt;code&gt;clusterresourceset_controller.go&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Go directory name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;machinedeployment/&lt;/code&gt;, &lt;code&gt;machinehealthcheck/&lt;/code&gt;, &lt;code&gt;clusterresourceset/&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"machinedeployment"&lt;/code&gt;, &lt;code&gt;"machinehealthcheck"&lt;/code&gt;, &lt;code&gt;"clusterresourceset"&lt;/code&gt;, &lt;code&gt;"clusterresourcesetbinding"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Event recorder&lt;/td&gt;
&lt;td&gt;Concatenated lowercase + &lt;code&gt;-controller&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"machinedeployment-controller"&lt;/code&gt;, &lt;code&gt;"machinehealthcheck-controller"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Finalizer&lt;/td&gt;
&lt;td&gt;Concatenated lowercase in domain&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"cluster.x-k8s.io/machinedeployment"&lt;/code&gt;, &lt;code&gt;"machinedeployment.topology.cluster.x-k8s.io"&lt;/code&gt;, &lt;code&gt;"machinepool.cluster.x-k8s.io"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Field manager (SSA)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;capi-&lt;/code&gt; + concatenated lowercase&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"capi-machinedeployment"&lt;/code&gt;, &lt;code&gt;"capi-machineset"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logger context&lt;/td&gt;
&lt;td&gt;Concatenated lowercase&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;WithValues("controller", "machinedeployment")&lt;/code&gt;, &lt;code&gt;WithValues("controller", "clusterresourcesetbinding")&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SSA cache&lt;/td&gt;
&lt;td&gt;Concatenated lowercase&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ssa.NewCache("machinedeployment")&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reconciler struct&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;Reconciler&lt;/code&gt; per package&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;machinedeployment.Reconciler&lt;/code&gt;, &lt;code&gt;machinehealthcheck.Reconciler&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Cluster API has the most comprehensive and consistent naming convention of all 13 projects.&lt;/strong&gt; Every single artifact uses concatenated lowercase for the resource type. &lt;code&gt;MachineDeployment&lt;/code&gt; becomes &lt;code&gt;machinedeployment&lt;/code&gt; everywhere, without exception.&lt;/p&gt;

&lt;p&gt;As a Kubernetes SIG project, this convention can be considered the closest thing to an "official" standard.&lt;/p&gt;

&lt;h3&gt;
  
  
  12. KubeVirt
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/kubevirt/kubevirt" rel="noopener noreferrer"&gt;KubeVirt&lt;/a&gt; enables running virtual machines on Kubernetes. It has particularly long CRD type names: &lt;code&gt;VirtualMachine&lt;/code&gt;, &lt;code&gt;VirtualMachineInstance&lt;/code&gt;, &lt;code&gt;VirtualMachineInstanceReplicaSet&lt;/code&gt;, &lt;code&gt;VirtualMachinePool&lt;/code&gt;, and &lt;code&gt;VirtualMachineInstanceMigration&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Artifact&lt;/th&gt;
&lt;th&gt;Convention&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Go directory / file&lt;/td&gt;
&lt;td&gt;Abbreviated&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;vm/vm.go&lt;/code&gt;, &lt;code&gt;vmi/vmi.go&lt;/code&gt;, &lt;code&gt;replicaset/replicaset.go&lt;/code&gt;, &lt;code&gt;pool/pool.go&lt;/code&gt;, &lt;code&gt;migration/migration.go&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Work queue name&lt;/td&gt;
&lt;td&gt;Abbreviated + hyphenated prefix&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"virt-controller-vm"&lt;/code&gt;, &lt;code&gt;"virt-controller-vmi"&lt;/code&gt;, &lt;code&gt;"virt-controller-replicaset"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Event recorder&lt;/td&gt;
&lt;td&gt;Concatenated lowercase + &lt;code&gt;-controller&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"virtualmachine-controller"&lt;/code&gt;, &lt;code&gt;"virtualmachinereplicaset-controller"&lt;/code&gt;, &lt;code&gt;"virtualmachinepool-controller"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Finalizer&lt;/td&gt;
&lt;td&gt;camelCase&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"kubevirt.io/foregroundDeleteVirtualMachine"&lt;/code&gt;, &lt;code&gt;"kubevirt.io/virtualMachineControllerFinalize"&lt;/code&gt;, &lt;code&gt;"kubevirt.io/migrationJobFinalize"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller struct&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;Controller&lt;/code&gt; per package&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;vm.Controller&lt;/code&gt;, &lt;code&gt;vmi.Controller&lt;/code&gt;, &lt;code&gt;migration.Controller&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;KubeVirt uses abbreviations (&lt;code&gt;VM&lt;/code&gt;, &lt;code&gt;VMI&lt;/code&gt;) aggressively in its internal code, while event recorders use concatenated lowercase (&lt;code&gt;virtualmachine&lt;/code&gt;). Its use of camelCase strings for finalizers is unique among all 13 projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  13. Kyverno
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/kyverno/kyverno" rel="noopener noreferrer"&gt;Kyverno&lt;/a&gt; is a policy engine for Kubernetes. Its CRD types include &lt;code&gt;ClusterPolicy&lt;/code&gt;, &lt;code&gt;ClusterCleanupPolicy&lt;/code&gt;, &lt;code&gt;PolicyReport&lt;/code&gt;, and &lt;code&gt;GlobalContextEntry&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Artifact&lt;/th&gt;
&lt;th&gt;Convention&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Go directory name&lt;/td&gt;
&lt;td&gt;By functional concern&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;cleanup/&lt;/code&gt;, &lt;code&gt;policycache/&lt;/code&gt;, &lt;code&gt;policystatus/&lt;/code&gt;, &lt;code&gt;globalcontext/&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller name&lt;/td&gt;
&lt;td&gt;Hyphenated (concern-based)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"cleanup-controller"&lt;/code&gt;, &lt;code&gt;"policycache-controller"&lt;/code&gt;, &lt;code&gt;"global-context"&lt;/code&gt;, &lt;code&gt;"admissionpolicy-generator"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Field manager&lt;/td&gt;
&lt;td&gt;Prefixed&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"kyverno-{suffix}"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logger context&lt;/td&gt;
&lt;td&gt;Controller name&lt;/td&gt;
&lt;td&gt;&lt;code&gt;logging.ControllerLogger(ControllerName)&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reconciler struct&lt;/td&gt;
&lt;td&gt;Unexported generic name&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;controller&lt;/code&gt; (per package)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;What distinguishes Kyverno is that controllers are named by &lt;strong&gt;functional concern&lt;/strong&gt; rather than CRD type. When multi-word concepts appear, it mixes concatenated lowercase (&lt;code&gt;policycache&lt;/code&gt;, &lt;code&gt;admissionpolicy&lt;/code&gt;) with hyphenation (&lt;code&gt;global-context&lt;/code&gt;), resulting in some inconsistency.&lt;/p&gt;




&lt;h2&gt;
  
  
  Cross-project analysis
&lt;/h2&gt;

&lt;p&gt;Based on the individual findings above, let's now analyze overall trends by artifact type.&lt;/p&gt;

&lt;h3&gt;
  
  
  Go file and directory names
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pattern&lt;/th&gt;
&lt;th&gt;Projects Using It&lt;/th&gt;
&lt;th&gt;Share&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Concatenated lowercase (&lt;code&gt;foobar_controller.go&lt;/code&gt;, &lt;code&gt;foobar/&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;cert-manager, Argo CD, Knative, Tekton, Flux, Cluster API, Crossplane, Prometheus Op&lt;/td&gt;
&lt;td&gt;~60%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Snake case (&lt;code&gt;foo_bar_controller.go&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;Velero&lt;/td&gt;
&lt;td&gt;~8%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Abbreviated (&lt;code&gt;vm.go&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;KubeVirt&lt;/td&gt;
&lt;td&gt;~8%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;By concern (&lt;code&gt;controller.go&lt;/code&gt; in a semantic directory)&lt;/td&gt;
&lt;td&gt;Kyverno, Crossplane (partially)&lt;/td&gt;
&lt;td&gt;~15%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Concatenated lowercase is the overwhelming favorite. &lt;code&gt;FooBar&lt;/code&gt; becomes &lt;code&gt;foobar_controller.go&lt;/code&gt; or &lt;code&gt;foobar/controller.go&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Controller name strings
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pattern&lt;/th&gt;
&lt;th&gt;Projects Using It&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Concatenated lowercase (bare or + &lt;code&gt;-controller&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;Cluster API, Knative, Prometheus Op, cert-manager, Crossplane&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"machinedeployment"&lt;/code&gt;, &lt;code&gt;"domainmapping-controller"&lt;/code&gt;, &lt;code&gt;"prometheusagent-controller"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hyphenated (word-split)&lt;/td&gt;
&lt;td&gt;Velero&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"backup-storage-location"&lt;/code&gt;, &lt;code&gt;"pod-volume-backup"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PascalCase (Kind as-is)&lt;/td&gt;
&lt;td&gt;Tekton, Strimzi, Flux (logger only)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"PipelineRun"&lt;/code&gt;, &lt;code&gt;"KafkaConnect"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Binary name (not per-CRD)&lt;/td&gt;
&lt;td&gt;Flux, Istio&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"helm-controller"&lt;/code&gt;, &lt;code&gt;"pilot-discovery"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;By concern&lt;/td&gt;
&lt;td&gt;Kyverno&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"cleanup-controller"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Again, concatenated lowercase is the most common. &lt;code&gt;FooBar&lt;/code&gt; becomes &lt;code&gt;"foobar"&lt;/code&gt; or &lt;code&gt;"foobar-controller"&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Finalizer names
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pattern&lt;/th&gt;
&lt;th&gt;Projects Using It&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Concatenated lowercase in domain&lt;/td&gt;
&lt;td&gt;Cluster API, Knative, Crossplane&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"cluster.x-k8s.io/machinedeployment"&lt;/code&gt;, &lt;code&gt;"domainmappings.serving.knative.dev"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Shared / generic&lt;/td&gt;
&lt;td&gt;Flux, Prometheus Op, cert-manager&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"finalizers.fluxcd.io"&lt;/code&gt;, &lt;code&gt;"monitoring.coreos.com/status-cleanup"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hyphenated&lt;/td&gt;
&lt;td&gt;Velero&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"velero.io/pod-volume-finalizer"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;camelCase&lt;/td&gt;
&lt;td&gt;KubeVirt&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"kubevirt.io/virtualMachineControllerFinalize"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Field manager names
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pattern&lt;/th&gt;
&lt;th&gt;Projects Using It&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Prefix + concatenated lowercase&lt;/td&gt;
&lt;td&gt;Cluster API&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"capi-machinedeployment"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Binary name&lt;/td&gt;
&lt;td&gt;Flux, Istio&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"source-controller"&lt;/code&gt;, &lt;code&gt;"pilot-discovery"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Shared operator name&lt;/td&gt;
&lt;td&gt;Prometheus Op&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"PrometheusOperator"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Domain path&lt;/td&gt;
&lt;td&gt;Crossplane&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"apiextensions.crossplane.io/composite"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Reconciler struct names
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pattern&lt;/th&gt;
&lt;th&gt;Projects Using It&lt;/th&gt;
&lt;th&gt;Share&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Generic &lt;code&gt;Reconciler&lt;/code&gt; per package&lt;/td&gt;
&lt;td&gt;Cluster API, Knative, Tekton, Crossplane&lt;/td&gt;
&lt;td&gt;~30%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PascalCase &lt;code&gt;FooBarReconciler&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Flux, Argo CD&lt;/td&gt;
&lt;td&gt;~15%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;Operator&lt;/code&gt; per package&lt;/td&gt;
&lt;td&gt;Prometheus Op&lt;/td&gt;
&lt;td&gt;~8%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;Controller&lt;/code&gt; per package&lt;/td&gt;
&lt;td&gt;KubeVirt, cert-manager&lt;/td&gt;
&lt;td&gt;~15%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Unexported &lt;code&gt;controller&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Kyverno&lt;/td&gt;
&lt;td&gt;~8%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The dominant convention: concatenated lowercase
&lt;/h3&gt;

&lt;p&gt;The clear winner across all naming dimensions is &lt;strong&gt;concatenated lowercase&lt;/strong&gt; — lowercasing the PascalCase Kind with no separator.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;FooBar&lt;/code&gt; → &lt;code&gt;foobar&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here's a breakdown of adoption rates for this pattern.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;~60% of projects use it for file/directory names&lt;/li&gt;
&lt;li&gt;~40% of projects use it for controller name strings&lt;/li&gt;
&lt;li&gt;~25% of projects use it for finalizer domain components&lt;/li&gt;
&lt;li&gt;100% of projects use it for Kubernetes API resource plurals (this is a Kubernetes API standard itself)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Who uses hyphenation?
&lt;/h3&gt;

&lt;p&gt;Only &lt;strong&gt;Velero&lt;/strong&gt; consistently word-splits CRD types with hyphens (&lt;code&gt;"backup-storage-location"&lt;/code&gt;, &lt;code&gt;"pod-volume-backup"&lt;/code&gt;) — that's 1 out of 13 projects.&lt;/p&gt;

&lt;p&gt;Kyverno partially uses hyphens (&lt;code&gt;"global-context"&lt;/code&gt;), but also mixes in concatenated lowercase (&lt;code&gt;policycache&lt;/code&gt;, &lt;code&gt;admissionpolicy&lt;/code&gt;), so it is not consistent.&lt;/p&gt;

&lt;h3&gt;
  
  
  The &lt;code&gt;-controller&lt;/code&gt; suffix
&lt;/h3&gt;

&lt;p&gt;When appending a &lt;code&gt;-controller&lt;/code&gt; suffix to a name, every single project connects it with a hyphen. It's always &lt;code&gt;"foobar-controller"&lt;/code&gt;, never &lt;code&gt;"foobarcontroller"&lt;/code&gt;. This is universal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Recommended naming convention
&lt;/h3&gt;

&lt;p&gt;Based on the survey results, here is a recommended set of conventions. The &lt;a href="https://github.com/kubernetes-sigs/cluster-api" rel="noopener noreferrer"&gt;Cluster API&lt;/a&gt; conventions (a Kubernetes SIG project) are particularly worth following, as they are the most comprehensive and consistent of any project surveyed.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Artifact&lt;/th&gt;
&lt;th&gt;Recommended Convention&lt;/th&gt;
&lt;th&gt;Example for &lt;code&gt;FooBar&lt;/code&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Go file name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase + underscore + &lt;code&gt;controller.go&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;&lt;code&gt;foobar_controller.go&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Go directory&lt;/td&gt;
&lt;td&gt;Concatenated lowercase&lt;/td&gt;
&lt;td&gt;&lt;code&gt;foobar/&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller name&lt;/td&gt;
&lt;td&gt;Concatenated lowercase&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"foobar"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Controller name (with suffix)&lt;/td&gt;
&lt;td&gt;Concatenated lowercase + hyphen + &lt;code&gt;controller&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"foobar-controller"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Finalizer&lt;/td&gt;
&lt;td&gt;Concatenated lowercase in domain&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;"example.io/foobar"&lt;/code&gt; or &lt;code&gt;"foobar.example.io"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Field manager&lt;/td&gt;
&lt;td&gt;Prefix + concatenated lowercase&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"myoperator-foobar"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logger context&lt;/td&gt;
&lt;td&gt;Concatenated lowercase&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"foobar"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reconciler struct&lt;/td&gt;
&lt;td&gt;PascalCase or generic per package&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;FooBarReconciler&lt;/code&gt; or &lt;code&gt;Reconciler&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
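&lt;p&gt;The table above can be expressed as a mechanical derivation from the Kind. Here is a minimal Go sketch — the &lt;code&gt;myoperator&lt;/code&gt; prefix and &lt;code&gt;example.io&lt;/code&gt; domain are placeholders taken from the table, not real projects:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// artifactNames derives the recommended names for a CRD Kind.
// "myoperator" and "example.io" are placeholders: substitute your
// own operator name and API group domain.
func artifactNames(kind string) map[string]string {
	n := strings.ToLower(kind) // concatenated lowercase: "FooBar" -> "foobar"
	return map[string]string{
		"goFile":         n + "_controller.go",
		"goDirectory":    n + "/",
		"controllerName": n,
		"withSuffix":     n + "-controller", // the suffix is always hyphen-joined
		"finalizer":      "example.io/" + n,
		"fieldManager":   "myoperator-" + n,
	}
}

func main() {
	for artifact, name := range artifactNames("FooBar") {
		fmt.Printf("%-15s %s\n", artifact, name)
	}
}
```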

&lt;h3&gt;
  
  
  Why concatenated lowercase wins
&lt;/h3&gt;

&lt;p&gt;There are several reasons why concatenated lowercase is so widely adopted.&lt;/p&gt;

&lt;p&gt;First, it aligns with &lt;strong&gt;the Kubernetes API's own conventions&lt;/strong&gt;. The standard resource plurals are already concatenated lowercase: &lt;code&gt;deployments&lt;/code&gt;, &lt;code&gt;replicasets&lt;/code&gt;, &lt;code&gt;statefulsets&lt;/code&gt;, &lt;code&gt;daemonsets&lt;/code&gt;. CRD resources naturally follow this same pattern.&lt;/p&gt;

&lt;p&gt;Second, it fits well with Go package naming conventions. Go packages are conventionally single lowercase words, and a name like &lt;code&gt;machinedeployment&lt;/code&gt; fits right in.&lt;/p&gt;

&lt;p&gt;Third, it eliminates ambiguity. With hyphenation, splitting &lt;code&gt;FooBar&lt;/code&gt; into &lt;code&gt;foo-bar&lt;/code&gt; is straightforward, but &lt;code&gt;FooBarBaz&lt;/code&gt; could plausibly be &lt;code&gt;foo-bar-baz&lt;/code&gt; or &lt;code&gt;foo-barbaz&lt;/code&gt; — the word boundaries aren't always obvious. With concatenated lowercase, you simply lowercase the Kind as-is. It's a mechanical transformation with no room for interpretation.&lt;/p&gt;

&lt;p&gt;And perhaps the most compelling reason: &lt;strong&gt;Cluster API, a Kubernetes SIG project, uses this convention with complete consistency.&lt;/strong&gt; If you want to follow the crowd in the Kubernetes ecosystem, this is the convention to pick.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hyphenation has its merits too
&lt;/h3&gt;

&lt;p&gt;That said, hyphenation has legitimate advantages.&lt;/p&gt;

&lt;p&gt;There's no denying that &lt;code&gt;backup-storage-location&lt;/code&gt; is easier for humans to read than &lt;code&gt;backupstoragelocation&lt;/code&gt;. In contexts where humans read names directly — deployment names, CLI output, log messages — readability is a significant benefit.&lt;/p&gt;

&lt;p&gt;Additionally, hyphenated names are the standard convention in Kubernetes label values, annotation keys, and resource names in manifests.&lt;/p&gt;
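&lt;p&gt;Note that both conventions are legal where it matters: many Kubernetes resource names (Service names, for example) must be valid RFC 1123 DNS labels — lowercase alphanumerics and hyphens only. The sketch below reproduces that pattern check (it omits the 63-character length limit Kubernetes also enforces):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"regexp"
)

// dns1123Label is the RFC 1123 label pattern Kubernetes uses for
// strict resource names: lowercase alphanumerics and hyphens,
// starting and ending with an alphanumeric. The 63-character
// length limit is not checked here.
var dns1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

func main() {
	for _, name := range []string{
		"backupstoragelocation",   // concatenated lowercase: valid
		"backup-storage-location", // hyphenated: valid
		"BackupStorageLocation",   // PascalCase Kind: invalid as a name
	} {
		fmt.Printf("%-25s valid: %v\n", name, dns1123Label.MatchString(name))
	}
}
```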

&lt;p&gt;In other words, it comes down to a choice: &lt;strong&gt;concatenated lowercase for consistency with APIs and code, or hyphenation for human readability.&lt;/strong&gt; Neither is wrong — but the ecosystem has spoken, and concatenated lowercase is the overwhelming favorite.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;In this post, I surveyed the source code of 13 Kubernetes-related OSS projects and mapped out the real-world naming conventions for multi-word CRD types. The verdict: concatenated lowercase (&lt;code&gt;foobar&lt;/code&gt;) is the de facto standard, with Velero being the only project that consistently uses hyphenation (&lt;code&gt;foo-bar&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;If you're building a Kubernetes Operator and unsure which convention to follow, Cluster API's approach is a safe bet. That said, there's no absolute right answer here. The most pragmatic approach is to prioritize consistency within your own project and go with whatever your team finds readable.&lt;/p&gt;

&lt;p&gt;Thanks for reading to the end! I tweet about tech topics that don't make it into my blog posts, so feel free to follow me if you're interested → &lt;a href="https://twitter.com/suin" rel="noopener noreferrer"&gt;Twitter@suin&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>go</category>
      <category>devops</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>controller-runtime: What Happens When You Do Partial Server-Side Apply?</title>
      <dc:creator>suin</dc:creator>
      <pubDate>Thu, 18 Sep 2025 00:35:38 +0000</pubDate>
      <link>https://forem.com/suin/controller-runtime-what-happens-when-you-do-partial-server-side-apply-1oi0</link>
      <guid>https://forem.com/suin/controller-runtime-what-happens-when-you-do-partial-server-side-apply-1oi0</guid>
      <description>&lt;p&gt;In this post, I'll explore what happens when you repeatedly apply partial manifests using Kubernetes Server-Side Apply (SSA). Specifically, I'll investigate through experiments what occurs when you omit fields that were previously managed by a field manager in subsequent applies.&lt;/p&gt;

&lt;p&gt;While developing controllers, I found myself wondering: "What happens to fields I don't include when applying partial manifests with SSA?" If only the submitted portions are merged as a diff, we could simplify our code by sending only the fields we want to manage. On the other hand, if we need to send all managed fields every time, we need to be conscious of this when coding, or we might accidentally create bugs that unintentionally delete fields.&lt;/p&gt;

&lt;p&gt;Developing controllers on incorrect assumptions could lead to bugs that unintentionally destroy resources. Although this comes down to SSA's basic semantics, I felt it was essential to verify them firsthand, which is why I'm writing this post.&lt;/p&gt;

&lt;p&gt;The experimental code referenced in this article is available in the following repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/suinplayground/controller-runtime/tree/main/02-server-side-apply-partials" rel="noopener noreferrer"&gt;https://github.com/suinplayground/controller-runtime/tree/main/02-server-side-apply-partials&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What You'll Learn
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The behavior when applying partial manifests with Server-Side Apply&lt;/li&gt;
&lt;li&gt;Field ownership and the role of field managers&lt;/li&gt;
&lt;li&gt;The deletion mechanism that occurs when managed fields are omitted&lt;/li&gt;
&lt;li&gt;Collaborative management by multiple field managers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Purpose of the Experiment
&lt;/h2&gt;

&lt;p&gt;The goal of this experiment is to understand what happens when a field manager omits a field in a subsequent apply that was included in a previously applied manifest.&lt;/p&gt;

&lt;p&gt;In real controller development, you might encounter scenarios like: "Initially I was managing the breed field, but later I only want to update the color field." What happens to the breed field in this case?&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiment 1: What Happens When You Omit Fields Managed by the Same Manager?
&lt;/h2&gt;

&lt;p&gt;In this experiment, I'll use a custom resource called &lt;code&gt;Cat&lt;/code&gt;. I'll specify &lt;code&gt;cat-owner&lt;/code&gt; as the "field manager" that manages field ownership.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: First Apply (breed field only)
&lt;/h3&gt;

&lt;p&gt;First, I'll apply a partial manifest containing only &lt;code&gt;spec.breed&lt;/code&gt; to the Cat resource &lt;code&gt;my-cat&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;applycatv1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Cat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"my-cat"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"default"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;
    &lt;span class="n"&gt;WithSpec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;applycatv1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CatSpec&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;
        &lt;span class="n"&gt;WithBreed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Maine Coon"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;cl&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Apply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cat&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;FieldOwner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"cat-owner"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ForceOwnership&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this operation, the resource state looks like this. Note that &lt;code&gt;managedFields&lt;/code&gt; now records &lt;code&gt;cat-owner&lt;/code&gt; as the manager of &lt;code&gt;spec.breed&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"spec"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"breed"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Maine Coon"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"managedFields"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"manager"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cat-owner"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"fieldsV1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"f:spec"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"f:breed"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Second Apply (color field only, breed omitted)
&lt;/h3&gt;

&lt;p&gt;Next, I'll apply a manifest containing only &lt;code&gt;spec.color&lt;/code&gt; to the same &lt;code&gt;my-cat&lt;/code&gt; resource, using the same &lt;code&gt;cat-owner&lt;/code&gt;. The crucial point is that this manifest doesn't include &lt;code&gt;spec.breed&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;applycatv1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Cat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"my-cat"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"default"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;
    &lt;span class="n"&gt;WithSpec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;applycatv1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CatSpec&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;
        &lt;span class="n"&gt;WithColor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Black"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="c"&gt;// no breed&lt;/span&gt;

&lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;cl&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Apply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cat&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;FieldOwner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"cat-owner"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ForceOwnership&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Observation
&lt;/h3&gt;

&lt;p&gt;After the second Apply, &lt;code&gt;spec.color&lt;/code&gt; was added to the resource, but the &lt;code&gt;spec.breed&lt;/code&gt; field has completely disappeared.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"spec"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Black"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"managedFields"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"manager"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cat-owner"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"fieldsV1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"f:spec"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"f:color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looking at &lt;code&gt;managedFields&lt;/code&gt;, we can see that &lt;code&gt;cat-owner&lt;/code&gt;'s managed target has been updated from &lt;code&gt;spec.breed&lt;/code&gt; to &lt;code&gt;spec.color&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Was the Field Deleted?
&lt;/h2&gt;

&lt;p&gt;This behavior relates to Server-Side Apply's "declarative ownership management." The Kubernetes official documentation explains exactly this case:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you remove a field from a manifest and apply that manifest, Server-Side Apply checks if there are any other field managers that also own the field. If the field is not owned by any other field managers, it is either deleted from the live object or reset to its default value, if it has one. The same rule applies to associative list or map items.&lt;/p&gt;

&lt;p&gt;-- &lt;a href="https://kubernetes.io/docs/reference/using-api/server-side-apply/#field-management" rel="noopener noreferrer"&gt;Field management - Kubernetes&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This explanation perfectly describes our experimental results:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the first Apply, &lt;code&gt;cat-owner&lt;/code&gt; became the sole owner of the &lt;code&gt;spec.breed&lt;/code&gt; field&lt;/li&gt;
&lt;li&gt;In the second Apply, &lt;code&gt;cat-owner&lt;/code&gt; didn't include &lt;code&gt;spec.breed&lt;/code&gt; in the manifest. This is interpreted as "I no longer want to manage &lt;code&gt;spec.breed&lt;/code&gt;"&lt;/li&gt;
&lt;li&gt;Since there were no other owners of &lt;code&gt;spec.breed&lt;/code&gt;, Server-Side Apply deleted this field from the live object&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is a cleanup feature that prevents unmanaged fields from persisting indefinitely. To avoid unintentionally deleting fields, you must include every field you want to keep managing in each Apply request.&lt;/p&gt;
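&lt;p&gt;For example, to add &lt;code&gt;spec.color&lt;/code&gt; while keeping &lt;code&gt;spec.breed&lt;/code&gt;, the second Apply simply needs to include both fields. Here is a minimal sketch of such a manifest applied via kubectl's Server-Side Apply (the &lt;code&gt;apiVersion&lt;/code&gt; group shown is a placeholder for this article's Cat CRD):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# cat.yaml: include every field cat-owner wants to keep managing
apiVersion: cat.example.com/v1  # placeholder group/version for the Cat CRD
kind: Cat
metadata:
  name: my-cat
  namespace: default
spec:
  breed: Maine Coon  # still present, so ownership of breed is retained
  color: Black
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply --server-side --field-manager=cat-owner --force-conflicts -f cat.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;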

&lt;h2&gt;
  
  
  Supplementary Experiment: What Happens When Another Field Manager Appears?
&lt;/h2&gt;

&lt;p&gt;So what happens when another field manager enters the picture? Let's look at the ownership aspect of SSA.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Third Apply by a Different Manager
&lt;/h3&gt;

&lt;p&gt;From the previous state (where only &lt;code&gt;spec.color&lt;/code&gt; exists), a new field manager called &lt;code&gt;age-controller&lt;/code&gt; applies a manifest containing only &lt;code&gt;spec.age&lt;/code&gt;.&lt;/p&gt;
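<p>In the same builder style as Steps 1 and 2, this Apply would look roughly like the following (a sketch; <code>WithAge</code> is assumed to exist on the generated <code>CatSpec</code> apply configuration):</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;cat := applycatv1.Cat("my-cat", "default").
    WithSpec(applycatv1.CatSpec().
        WithAge(3)) // only age; color stays owned by cat-owner

err := cl.Apply(ctx, cat, client.FieldOwner("age-controller"), client.ForceOwnership)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;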

&lt;h3&gt;
  
  
  Observation
&lt;/h3&gt;

&lt;p&gt;After applying, the resource has both &lt;code&gt;spec.color&lt;/code&gt; and &lt;code&gt;spec.age&lt;/code&gt;. The &lt;code&gt;spec.color&lt;/code&gt; managed by &lt;code&gt;cat-owner&lt;/code&gt; remains unaffected.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"spec"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Black"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"age"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"managedFields"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"manager"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"age-controller"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"fieldsV1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"f:spec"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"f:age"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"manager"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cat-owner"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"fieldsV1"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"f:spec"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"f:color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;managedFields&lt;/code&gt; records two entries for &lt;code&gt;age-controller&lt;/code&gt; and &lt;code&gt;cat-owner&lt;/code&gt;, clearly tracking which fields each owns.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;age-controller&lt;/code&gt; isn't omitting any fields it previously owned (none existed in this case), so it simply claimed ownership of &lt;code&gt;spec.age&lt;/code&gt;. It didn't interfere with fields owned by &lt;code&gt;cat-owner&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;From this experiment, we can understand two key aspects of using Server-Side Apply:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Manifest Completeness = Include All Fields&lt;/strong&gt;: When a field manager omits a field it was managing from its next Apply manifest, that field will be deleted (if no other owner exists). Therefore, always include every field you manage in your manifest.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Completeness is Per-Owner&lt;/strong&gt;: Multiple field managers can safely manage different fields of the same resource without interfering with each other, so you don't need to include fields managed by others in your manifest.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
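
&lt;p&gt;To reproduce these observations yourself, note that kubectl hides &lt;code&gt;managedFields&lt;/code&gt; by default; they can be displayed with the &lt;code&gt;--show-managed-fields&lt;/code&gt; flag (assuming the Cat CRD's plural resource name is &lt;code&gt;cats&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get cats my-cat -n default -o yaml --show-managed-fields
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;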

&lt;p&gt;Server-Side Apply is a crucial feature in Kubernetes environments where multiple controllers need to collaboratively manage a single resource. The phenomenon of fields "disappearing" isn't a bug; it's a deliberate feature that makes declarative configuration management more robust. Understanding this behavior enables safer resource management.&lt;/p&gt;

&lt;p&gt;Thank you for reading to the end!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>controllerruntime</category>
    </item>
    <item>
      <title>Flagger Does Not Manage Services with Names Different from Their Deployments</title>
      <dc:creator>suin</dc:creator>
      <pubDate>Fri, 18 Apr 2025 02:37:35 +0000</pubDate>
      <link>https://forem.com/suin/flagger-does-not-manage-services-with-names-different-from-their-deployments-28ja</link>
      <guid>https://forem.com/suin/flagger-does-not-manage-services-with-names-different-from-their-deployments-28ja</guid>
      <description>&lt;h1&gt;
  
  
  Flagger Does Not Manage Services with Names Different from Their Deployments
&lt;/h1&gt;

&lt;p&gt;Typically, Flagger creates Services with the same name as their Deployments. If a Service with the same name already exists, Flagger takes it under management (making it a reconciliation target) and modifies properties such as &lt;code&gt;spec.selector.app&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;But what happens when a Service is related to a Deployment but has a different name? Will Flagger recognize and manage it, or will it leave it alone? We conducted an investigation to find out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Test Environment
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes: v1.32.2 (Kind)&lt;/li&gt;
&lt;li&gt;Flagger: v1.41.0&lt;/li&gt;
&lt;li&gt;Gateway API: v1.2.0&lt;/li&gt;
&lt;li&gt;Envoy Gateway: v1.3.2&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Testing Method
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Set up the test environment

&lt;ul&gt;
&lt;li&gt;Create a Kind cluster&lt;/li&gt;
&lt;li&gt;Install Gateway API (v1.2.0)&lt;/li&gt;
&lt;li&gt;Install Cert Manager (v1.17.1)&lt;/li&gt;
&lt;li&gt;Install Envoy Gateway (v1.3.2)&lt;/li&gt;
&lt;li&gt;Create GatewayClass and Gateway&lt;/li&gt;
&lt;li&gt;Install Flagger (v1.41.0)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Detailed setup procedure&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create Kind cluster&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind create cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install Gateway API (v1.2.0)&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install Cert Manager (v1.17.1)&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://github.com/cert-manager/cert-manager/releases/download/v1.17.1/cert-manager.yaml
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;available &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;60s deployment/cert-manager &lt;span class="nt"&gt;-n&lt;/span&gt; cert-manager
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;available &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;60s deployment/cert-manager-webhook &lt;span class="nt"&gt;-n&lt;/span&gt; cert-manager
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;available &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;60s deployment/cert-manager-cainjector &lt;span class="nt"&gt;-n&lt;/span&gt; cert-manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install Envoy Gateway (v1.3.2)&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;--server-side&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; https://github.com/envoyproxy/gateway/releases/download/v1.3.2/install.yaml
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;available &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;600s deployment/envoy-gateway &lt;span class="nt"&gt;-n&lt;/span&gt; envoy-gateway-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create GatewayClass and Gateway&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/gatewayclass.yaml
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;accepted &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;60s gatewayclass/eg
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/gateway.yaml
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;programmed &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;60s gateway/eg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  gatewayclass.yaml
&lt;/h4&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GatewayClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eg&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;controllerName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.envoyproxy.io/gatewayclass-controller&lt;/span&gt;
&lt;span class="na"&gt;parametersRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.envoyproxy.io&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;EnvoyProxy&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clusterip-config&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;envoy-gateway-system&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.envoyproxy.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;EnvoyProxy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clusterip-config&lt;/span&gt;
&lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;envoy-gateway-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Kubernetes&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;envoyService&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  gateway.yaml
&lt;/h4&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Gateway&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eg&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;gatewayClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eg&lt;/span&gt;
&lt;span class="na"&gt;listeners&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install Flagger (v1.41.0)&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/fluxcd/flagger/v1.41.0/artifacts/flagger/crd.yaml
kubectl get ns flagger-system &lt;span class="o"&gt;||&lt;/span&gt; kubectl create ns flagger-system
helm repo add flagger https://flagger.app
helm upgrade &lt;span class="nt"&gt;-i&lt;/span&gt; flagger flagger/flagger &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--version&lt;/span&gt; 1.41.0 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--namespace&lt;/span&gt; flagger-system &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--set&lt;/span&gt; prometheus.install&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;meshProvider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;gatewayapi:v1 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;metricsServer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;none
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Deploy the following resources:

&lt;ul&gt;
&lt;li&gt;Deployment name: &lt;code&gt;podinfo&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;Service name: &lt;code&gt;podinfo-different-name&lt;/code&gt; (different from Deployment name)&lt;/li&gt;
&lt;li&gt;Canary name: &lt;code&gt;podinfo&lt;/code&gt; (same as Deployment name)
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/deployment.yaml
   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/service.yaml
   kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/canary.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Content of the deployed manifest files:&lt;/p&gt;

&lt;h4&gt;
  
  
  deployment.yaml
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
   &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
   &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
     &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
     &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
           &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
       &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
           &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;stefanprodan/podinfo:latest&lt;/span&gt;
           &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
           &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  service.yaml
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
   &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo-different-name&lt;/span&gt;
   &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
     &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
       &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  canary.yaml
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flagger.app/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Canary&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;targetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
  &lt;span class="na"&gt;progressDeadlineSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
    &lt;span class="na"&gt;gatewayRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eg&lt;/span&gt;
        &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;analysis&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1s&lt;/span&gt;
    &lt;span class="na"&gt;threshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
    &lt;span class="na"&gt;maxWeight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;50&lt;/span&gt;
    &lt;span class="na"&gt;stepWeight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h2&gt;
  
  
  Test Results
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Services after deployment
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get services
NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;    AGE
kubernetes               ClusterIP   10.96.0.1       &amp;lt;none&amp;gt;        443/TCP    8m30s
podinfo                  ClusterIP   10.96.126.169   &amp;lt;none&amp;gt;        9898/TCP   5m39s
podinfo-canary           ClusterIP   10.96.20.177    &amp;lt;none&amp;gt;        9898/TCP   5m49s
podinfo-different-name   ClusterIP   10.96.141.217   &amp;lt;none&amp;gt;        9898/TCP   6m12s
podinfo-primary          ClusterIP   10.96.130.137   &amp;lt;none&amp;gt;        9898/TCP   5m49s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: The &lt;code&gt;podinfo&lt;/code&gt; service (same name as Deployment) is created slightly after the Canary resource is created.&lt;/p&gt;
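
&lt;p&gt;To see which Services Flagger owns, you can inspect each Service's &lt;code&gt;ownerReferences&lt;/code&gt; directly. A quick sketch, assuming the resources above live in the &lt;code&gt;default&lt;/code&gt; namespace:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List each Service with the kind and name of its controlling owner, if any&lt;/span&gt;
kubectl get services &lt;span class="nt"&gt;-o&lt;/span&gt; custom-columns&lt;span class="o"&gt;=&lt;/span&gt;NAME:.metadata.name,OWNER-KIND:.metadata.ownerReferences[0].kind,OWNER-NAME:.metadata.ownerReferences[0].name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Flagger-created Services should report &lt;code&gt;Canary&lt;/code&gt;/&lt;code&gt;podinfo&lt;/code&gt; as their owner, while &lt;code&gt;podinfo-different-name&lt;/code&gt; should show &lt;code&gt;&amp;lt;none&amp;gt;&lt;/code&gt;.&lt;/p&gt;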

&lt;h3&gt;
  
  
  Original Service (podinfo-different-name)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# No ownerReferences - not managed by Flagger&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo-different-name&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# Selector remains unchanged&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  podinfo-primary service created by Flagger
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo-primary&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo-primary&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;ownerReferences&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flagger.app/v1beta1&lt;/span&gt;
    &lt;span class="na"&gt;blockOwnerDeletion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;controller&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Canary&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
    &lt;span class="na"&gt;uid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;c388a7b6-6dc9-4aa9-b9f1-e55ad8f31575&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo-primary&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  podinfo-canary service created by Flagger
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo-canary&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo-canary&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;ownerReferences&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flagger.app/v1beta1&lt;/span&gt;
    &lt;span class="na"&gt;blockOwnerDeletion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;controller&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Canary&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
    &lt;span class="na"&gt;uid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;c388a7b6-6dc9-4aa9-b9f1-e55ad8f31575&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Service with the same name as Deployment (podinfo) created by Flagger
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;helm.toolkit.fluxcd.io/driftDetection&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disabled&lt;/span&gt;
    &lt;span class="na"&gt;kustomize.toolkit.fluxcd.io/reconcile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disabled&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;ownerReferences&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flagger.app/v1beta1&lt;/span&gt;
    &lt;span class="na"&gt;blockOwnerDeletion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;controller&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Canary&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
    &lt;span class="na"&gt;uid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;c388a7b6-6dc9-4aa9-b9f1-e55ad8f31575&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo-primary&lt;/span&gt;  &lt;span class="c1"&gt;# Routes traffic to primary&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Based on our investigation, we confirmed the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Flagger does not recognize or manage Services with names different from their Deployments

&lt;ul&gt;
&lt;li&gt;The Service &lt;code&gt;podinfo-different-name&lt;/code&gt;, which has a different name from Deployment &lt;code&gt;podinfo&lt;/code&gt;, was not controlled or modified by Flagger&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Flagger creates its own services

&lt;ul&gt;
&lt;li&gt;When deploying a Canary resource, Flagger created the following three Services:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;podinfo-primary&lt;/code&gt; - Service that routes traffic to primary version pods&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;podinfo-canary&lt;/code&gt; - Service that routes traffic to canary version pods&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;podinfo&lt;/code&gt; - Service with the same name as the Deployment (created slightly later). This receives actual end-user traffic and internally routes to &lt;code&gt;podinfo-primary&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;All of these Services are managed by Flagger and have ownerReferences set&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The original Service remains unchanged

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;podinfo-different-name&lt;/code&gt; Service remains in its original state without being modified by Flagger&lt;/li&gt;
&lt;li&gt;Its selector and other settings remain unchanged&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Flagger's internal mechanism

&lt;ul&gt;
&lt;li&gt;Flagger creates a service with the same name as the Deployment (&lt;code&gt;podinfo&lt;/code&gt;) to receive end-user traffic and internally controls traffic between &lt;code&gt;podinfo-primary&lt;/code&gt; and &lt;code&gt;podinfo-canary&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;This control enables canary deployments that gradually shift traffic to the canary version&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
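
&lt;p&gt;Because each Flagger-created Service carries a controller &lt;code&gt;ownerReference&lt;/code&gt; pointing at the Canary, standard Kubernetes garbage collection should remove those Services when the Canary is deleted, while the independently created Service survives. A sketch of how to check this (not run as part of this investigation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Deleting the Canary should cascade to the Services it owns&lt;/span&gt;
kubectl delete canary podinfo
kubectl get services
&lt;span class="c"&gt;# Expected: podinfo, podinfo-primary and podinfo-canary disappear (after GC),&lt;/span&gt;
&lt;span class="c"&gt;# while podinfo-different-name remains&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;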

&lt;p&gt;From this investigation, it is clear that Flagger does not recognize or manage services with names that differ from their deployments. This confirms that creating a Service with a name different from its Deployment is an effective strategy to avoid automatic management by Flagger.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>flagger</category>
    </item>
    <item>
      <title>The 'hosts' Field in Flagger Canary Resources Can Be Omitted</title>
      <dc:creator>suin</dc:creator>
      <pubDate>Tue, 15 Apr 2025 07:44:17 +0000</pubDate>
      <link>https://forem.com/suin/the-hosts-field-in-flagger-canary-resources-can-be-omitted-3pie</link>
      <guid>https://forem.com/suin/the-hosts-field-in-flagger-canary-resources-can-be-omitted-3pie</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;In Flagger's Canary resources, the &lt;code&gt;hosts&lt;/code&gt; field can be safely omitted without affecting functionality.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Even when integrating with the Gateway API, you can simply configure &lt;code&gt;gatewayRefs&lt;/code&gt; without specifying the &lt;code&gt;hosts&lt;/code&gt; field, and HTTPRoutes will be generated correctly, allowing canary releases to function properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background and Purpose
&lt;/h2&gt;

&lt;p&gt;When using Flagger for canary releases with Gateway API integration, you typically specify the &lt;code&gt;hosts&lt;/code&gt; field as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
    &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;podinfo.example.com&lt;/span&gt;
    &lt;span class="na"&gt;gatewayRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway-name&lt;/span&gt;
        &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway-namespace&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, it wasn't clear whether the &lt;code&gt;hosts&lt;/code&gt; field is mandatory or what happens when it's omitted. This article investigates the behavior when the &lt;code&gt;hosts&lt;/code&gt; field is omitted.&lt;/p&gt;

&lt;h2&gt;
  
  
  Environment
&lt;/h2&gt;

&lt;p&gt;The following environment was used for testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes: Kind cluster&lt;/li&gt;
&lt;li&gt;Gateway API: Envoy Gateway&lt;/li&gt;
&lt;li&gt;Flagger: latest version at the time of testing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Testing Procedure
&lt;/h2&gt;

&lt;p&gt;The following steps can be reproduced using kubectl commands.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Create a Gateway Resource
&lt;/h3&gt;

&lt;p&gt;First, create a basic Gateway resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Gateway&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eg&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;gatewayClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eg&lt;/span&gt;
  &lt;span class="na"&gt;listeners&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; gateway.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
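
&lt;p&gt;Before moving on, it can help to confirm that Envoy Gateway has accepted and programmed the Gateway (the exact columns vary by Gateway API version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# The Gateway should eventually report PROGRAMMED=True&lt;/span&gt;
kubectl get gateway eg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;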



&lt;h3&gt;
  
  
  2. Create the Initial Deployment
&lt;/h3&gt;

&lt;p&gt;Deploy a test application (podinfo):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;stefanprodan/podinfo:6.0.0&lt;/span&gt;
        &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
          &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
          &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;500m&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;256Mi&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100m&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;64Mi&lt;/span&gt;
        &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/healthz&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
          &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
        &lt;span class="na"&gt;readinessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/readyz&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
          &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Create a Canary Resource Without the hosts Field
&lt;/h3&gt;

&lt;p&gt;This is the core of our test. Create a Canary resource without the &lt;code&gt;hosts&lt;/code&gt; field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flagger.app/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Canary&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;targetRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;podinfo&lt;/span&gt;
  &lt;span class="na"&gt;progressDeadlineSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9898&lt;/span&gt;
    &lt;span class="c1"&gt;# hosts field is omitted&lt;/span&gt;
    &lt;span class="na"&gt;gatewayRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eg&lt;/span&gt;
        &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;  &lt;span class="c1"&gt;# Change to match your actual namespace&lt;/span&gt;
  &lt;span class="na"&gt;analysis&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1s&lt;/span&gt;
    &lt;span class="na"&gt;threshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
    &lt;span class="na"&gt;maxWeight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;50&lt;/span&gt;
    &lt;span class="na"&gt;stepWeight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; canary.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
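
&lt;p&gt;While waiting for initialization, Flagger's controller logs are the easiest place to see what it is doing. This assumes Flagger was installed into the &lt;code&gt;flagger-system&lt;/code&gt; namespace with the default deployment name; adjust both to match your installation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Follow the controller logs to watch the Canary being initialized&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;-n&lt;/span&gt; flagger-system deployment/flagger &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;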



&lt;h3&gt;
  
  
  4. Check the Canary Resource Status
&lt;/h3&gt;

&lt;p&gt;Verify that the Canary resource has been initialized correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get canary podinfo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME      STATUS        WEIGHT   LASTTRANSITIONTIME
podinfo   Initialized   0        2025-04-15T07:05:52Z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, check that the HTTPRoute has been generated correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get httproute podinfo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME      HOSTNAMES   AGE
podinfo               2m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the &lt;code&gt;HOSTNAMES&lt;/code&gt; column is empty because we omitted the &lt;code&gt;hosts&lt;/code&gt; field. Per the Gateway API specification, an HTTPRoute with no &lt;code&gt;hostnames&lt;/code&gt; matches any hostname accepted by the parent Gateway's listeners.&lt;/p&gt;
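
&lt;p&gt;The same can be confirmed at the spec level: the generated HTTPRoute simply has no &lt;code&gt;spec.hostnames&lt;/code&gt; entry at all:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Prints nothing when spec.hostnames is unset&lt;/span&gt;
kubectl get httproute podinfo &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.hostnames}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;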

&lt;h3&gt;
  
  
  5. Verify HTTP Requests
&lt;/h3&gt;

&lt;p&gt;Identify the service created by Envoy Gateway and send a request to it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Get the service name&lt;/span&gt;
&lt;span class="nv"&gt;SERVICE_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get service &lt;span class="nt"&gt;-n&lt;/span&gt; envoy-gateway-system &lt;span class="nt"&gt;-l&lt;/span&gt; gateway.envoyproxy.io/owning-gateway-namespace&lt;span class="o"&gt;=&lt;/span&gt;default &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.items[0].metadata.name}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Service name: &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SERVICE_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Send an HTTP request using curl&lt;/span&gt;
kubectl run curl-test &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;curlimages/curl &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Never &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  curl &lt;span class="nt"&gt;-v&lt;/span&gt; http://&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SERVICE_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.envoy-gateway-system.svc.cluster.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example output (excerpt):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Service name: envoy-default-eg-xxxxxxxx
{
  "hostname": "podinfo-primary-xxxxxxxxxx-xxxxx",
  "version": "6.0.0",
  "message": "greetings from podinfo v6.0.0",
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This confirms that we can access the application through the Gateway even without specifying the &lt;code&gt;hosts&lt;/code&gt; field.&lt;/p&gt;
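
&lt;p&gt;Since the HTTPRoute carries no hostnames, it should match any &lt;code&gt;Host&lt;/code&gt; header the listener accepts. A quick sanity check, reusing &lt;code&gt;SERVICE_NAME&lt;/code&gt; from the previous step (the hostname here is arbitrary):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# A request with an arbitrary Host header should still be routed to podinfo&lt;/span&gt;
kubectl run curl-host-test &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;curlimages/curl &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Never &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Host: anything.example.com"&lt;/span&gt; http://&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SERVICE_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.envoy-gateway-system.svc.cluster.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;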

&lt;h3&gt;
  
  
  6. Update the Version
&lt;/h3&gt;

&lt;p&gt;Next, update the Deployment to deploy a new version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl patch deployment podinfo &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'{"spec":{"template":{"spec":{"containers":[{"name":"podinfo","image":"stefanprodan/podinfo:6.1.0"}]}}}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  7. Monitor the Canary Release Progress
&lt;/h3&gt;

&lt;p&gt;Check that Flagger is progressing with the canary release:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get canary podinfo &lt;span class="nt"&gt;-w&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME      STATUS      WEIGHT   LASTTRANSITIONTIME
podinfo   Progressing 0        2025-04-15T07:06:10Z
podinfo   Progressing 10       2025-04-15T07:06:20Z
podinfo   Progressing 20       2025-04-15T07:06:30Z
...
podinfo   Promoting   50       2025-04-15T07:07:00Z
...
podinfo   Succeeded   0        2025-04-15T07:07:20Z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  8. Verify HTTP Requests After Update
&lt;/h3&gt;

&lt;p&gt;Finally, confirm that we can access the updated version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Get the service name&lt;/span&gt;
&lt;span class="nv"&gt;SERVICE_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get service &lt;span class="nt"&gt;-n&lt;/span&gt; envoy-gateway-system &lt;span class="nt"&gt;-l&lt;/span&gt; gateway.envoyproxy.io/owning-gateway-namespace&lt;span class="o"&gt;=&lt;/span&gt;default &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.items[0].metadata.name}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Send an HTTP request using curl&lt;/span&gt;
kubectl run curl-test-after-update &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;curlimages/curl &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Never &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  curl &lt;span class="nt"&gt;-v&lt;/span&gt; http://&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SERVICE_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.envoy-gateway-system.svc.cluster.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example output (excerpt):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "hostname": "podinfo-primary-xxxxxxxxxx-xxxxx",
  "version": "6.1.0",
  "message": "greetings from podinfo v6.1.0",
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Test Results
&lt;/h2&gt;

&lt;p&gt;Our testing confirmed the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creating a Canary resource without the &lt;code&gt;hosts&lt;/code&gt; field was successful&lt;/li&gt;
&lt;li&gt;The HTTPRoute was generated correctly and reached the Accepted status&lt;/li&gt;
&lt;li&gt;We could access the application through the Gateway&lt;/li&gt;
&lt;li&gt;After updating the Deployment, the canary release progressed normally and eventually completed promotion&lt;/li&gt;
&lt;li&gt;We could access the new version (v6.1.0) after the update&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This confirms that the &lt;code&gt;hosts&lt;/code&gt; field in Flagger's Canary resources is not mandatory and can be omitted without affecting functionality.&lt;/p&gt;
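&lt;p&gt;For reference, here is a minimal sketch of a hostless Canary spec along the lines tested above. The gateway name (&lt;code&gt;eg&lt;/code&gt;), namespace, ports, and analysis numbers are illustrative assumptions; the exact manifest used in the test is in the repository linked under References.&lt;/p&gt;

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
spec:
  provider: gatewayapi:v1
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
    targetPort: 9898
    # no "hosts" field here
    gatewayRefs:
      - name: eg
        namespace: default
  analysis:
    interval: 10s
    threshold: 5
    maxWeight: 50
    stepWeight: 10
```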

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;When integrating Flagger with the Gateway API, the &lt;code&gt;hosts&lt;/code&gt; field is optional. If you don't need routing based on specific hostnames, you can simply specify &lt;code&gt;gatewayRefs&lt;/code&gt; to achieve canary releases with a simpler configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/suinplayground/kubernetes-playground/tree/main/flagger/01-hostnameless" rel="noopener noreferrer"&gt;Repository with the test setup described in this article&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>

      <category>flagger</category>
    </item>
    <item>
      <title>Troubleshooting Docker credsStore Auto-Configuration Issues in VS Code Dev Containers</title>
      <dc:creator>suin</dc:creator>
      <pubDate>Thu, 26 Dec 2024 06:31:49 +0000</pubDate>
      <link>https://forem.com/suin/troubleshooting-docker-credsstore-auto-configuration-issues-in-vs-code-dev-containers-2o46</link>
      <guid>https://forem.com/suin/troubleshooting-docker-credsstore-auto-configuration-issues-in-vs-code-dev-containers-2o46</guid>
      <description>&lt;p&gt;When using VS Code's Dev Container extension, you might encounter credentials-related errors while trying to access registries through &lt;code&gt;docker pull&lt;/code&gt;, &lt;code&gt;az acr login&lt;/code&gt;, or other CLI tools. These errors might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;error getting credentials - err: exit status 255, out: ``
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might also experience failures with commands like &lt;code&gt;kcl mod add&lt;/code&gt; or &lt;code&gt;az acr login&lt;/code&gt;, or encounter 403 errors related to authentication. This article explains the cause of these issues and how to resolve them.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note about "Host Machine" in this article&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
In Dev Containers, the "host machine" refers to &lt;strong&gt;the machine running VS Code&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
Even if your Docker daemon runs on a cloud or remote server and only VS Code runs on your local Mac or PC, the &lt;strong&gt;VS Code side (your local PC) is the host machine&lt;/strong&gt; in this context.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Commands like &lt;code&gt;kcl mod add&lt;/code&gt; or &lt;code&gt;docker pull&lt;/code&gt; succeed when executed from &lt;strong&gt;VS Code's&lt;/strong&gt; terminal inside the container, but fail with &lt;code&gt;error getting credentials&lt;/code&gt; when accessing the container through &lt;strong&gt;SSH or &lt;code&gt;docker exec&lt;/code&gt;&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Looking at &lt;code&gt;~/.docker/config.json&lt;/code&gt; &lt;strong&gt;inside the container&lt;/strong&gt;, you'll find &lt;code&gt;credsStore&lt;/code&gt; is set to something like &lt;code&gt;dev-containers-xxxxxx...&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Removing this &lt;code&gt;credsStore&lt;/code&gt; temporarily fixes the error, but it reappears when reconnecting to the Dev Container in VS Code.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Root Cause
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Understanding Docker Credential Helper and VS Code Dev Containers Extension
&lt;/h3&gt;

&lt;p&gt;VS Code's Dev Container extension includes a feature that automatically sets up a &lt;strong&gt;Docker Credential Helper&lt;/strong&gt;. This mechanism attempts to share registry authentication information with the container using the &lt;code&gt;~/.docker/config.json&lt;/code&gt; and credsStore from &lt;strong&gt;the host machine running VS Code&lt;/strong&gt;, not from within the container.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When starting a Dev Container, if there's &lt;strong&gt;no&lt;/strong&gt; &lt;code&gt;credsStore&lt;/code&gt; specified in the container's &lt;code&gt;~/.docker/config.json&lt;/code&gt;, and Dev Container settings are default (or &lt;code&gt;dev.containers.dockerCredentialHelper&lt;/code&gt; is enabled), the VS Code Dev Containers extension automatically inserts a &lt;code&gt;credsStore&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;This results in the following content in the container's &lt;code&gt;~/.docker/config.json&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json-doc"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"credsStore"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"dev-containers-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Consequently, when running &lt;code&gt;docker pull&lt;/code&gt; or &lt;code&gt;az acr login&lt;/code&gt; inside the container, it attempts to call a helper like &lt;code&gt;docker-credential-dev-containers-xxxxxx&lt;/code&gt;.

&lt;ul&gt;
&lt;li&gt;This helper is a generated script inside the container that performs &lt;strong&gt;IPC (Inter-Process Communication) with VS Code (Dev Container extension)&lt;/strong&gt; to query authentication information tied to the host machine's &lt;code&gt;~/.docker/config.json&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
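&lt;p&gt;Under the hood this uses Docker's standard credential-helper protocol: the CLI runs the helper with a subcommand (&lt;code&gt;get&lt;/code&gt;, &lt;code&gt;store&lt;/code&gt;, or &lt;code&gt;erase&lt;/code&gt;), passes the registry URL on stdin, and expects credentials as JSON on stdout. Here is a minimal sketch using a fake helper script; the real generated helper performs the IPC call to VS Code instead of printing a canned response:&lt;/p&gt;

```shell
# Build a fake credential helper to illustrate the protocol.
# (Illustrative only; real helpers talk to a keychain or, here, to VS Code over IPC.)
mkdir -p /tmp/helper-demo
printf '%s\n' '#!/bin/sh' 'if [ "$1" = "get" ]; then read -r server; printf "{\"ServerURL\":\"%s\",\"Username\":\"user\",\"Secret\":\"token\"}\n" "$server"; fi' > /tmp/helper-demo/docker-credential-demo
chmod +x /tmp/helper-demo/docker-credential-demo

# Docker would invoke the helper like this: registry URL on stdin, JSON on stdout.
echo "https://ghcr.io" | /tmp/helper-demo/docker-credential-demo get
```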

&lt;h3&gt;
  
  
  The REMOTE_CONTAINERS_IPC Environment Variable
&lt;/h3&gt;

&lt;p&gt;The success of this authentication depends on the &lt;code&gt;REMOTE_CONTAINERS_IPC&lt;/code&gt; environment variable.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When attaching to a container through &lt;strong&gt;VS Code's terminal&lt;/strong&gt;, the Dev Container extension automatically sets &lt;code&gt;REMOTE_CONTAINERS_IPC&lt;/code&gt;, allowing successful authentication via &lt;code&gt;credsStore&lt;/code&gt; for commands like &lt;code&gt;docker pull&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;However, when accessing the container through &lt;strong&gt;SSH or &lt;code&gt;docker exec&lt;/code&gt;&lt;/strong&gt; without VS Code, this environment variable isn't set, so the &lt;code&gt;docker-credential-xxx&lt;/code&gt; helper's IPC call fails, producing the "error getting credentials - err: exit status 255" error.&lt;/li&gt;
&lt;/ul&gt;
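&lt;p&gt;A quick way to see which situation you are in is to check the variable inside the container (a small sketch; the variable name is the one VS Code actually sets):&lt;/p&gt;

```shell
# Inside the container: check whether the VS Code IPC socket variable is present.
# A VS Code terminal prints a socket path; plain SSH or docker exec prints "(not set)".
echo "${REMOTE_CONTAINERS_IPC:-(not set)}"
```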

&lt;h2&gt;
  
  
  Solutions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Remove &lt;code&gt;credsStore&lt;/code&gt; from the Container's &lt;code&gt;~/.docker/config.json&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Opening &lt;code&gt;~/.docker/config.json&lt;/code&gt; inside the container and removing the &lt;code&gt;credsStore&lt;/code&gt; line will temporarily allow &lt;code&gt;docker pull&lt;/code&gt; and similar commands to succeed.&lt;br&gt;
Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Inside container: ~/.docker/config.json&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  // &lt;span class="s2"&gt;"credsStore"&lt;/span&gt;: &lt;span class="s2"&gt;"dev-containers-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"&lt;/span&gt;
  &lt;span class="s2"&gt;"auths"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt; ... &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, this isn't a permanent solution if VS Code settings remain default, as &lt;code&gt;credsStore&lt;/code&gt; will be re-added on each startup.&lt;/p&gt;
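&lt;p&gt;If &lt;code&gt;jq&lt;/code&gt; is available in the container, the same edit can be scripted rather than done by hand (a sketch, assuming jq is installed; demonstrated on a temp copy so you can try it safely, whereas inside the container you would target &lt;code&gt;~/.docker/config.json&lt;/code&gt;):&lt;/p&gt;

```shell
# Strip the credsStore key with jq, keeping a backup of the original file.
CONFIG=/tmp/docker-config.json
printf '%s\n' '{"credsStore":"dev-containers-xxxx","auths":{}}' > "$CONFIG"
cp "$CONFIG" "$CONFIG.bak"                      # keep a backup first
jq 'del(.credsStore)' "$CONFIG.bak" > "$CONFIG" # drop only the credsStore key
cat "$CONFIG"
```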

&lt;h3&gt;
  
  
  2. Disable &lt;code&gt;dev.containers.dockerCredentialHelper&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;In your Dev Container configuration file (&lt;code&gt;devcontainer.json&lt;/code&gt;), you can prevent the Docker Credential Helper from being installed by setting &lt;code&gt;dev.containers.dockerCredentialHelper: false&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json-doc"&gt;&lt;code&gt;&lt;span class="c1"&gt;// devcontainer.json example&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"customizations"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"vscode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"settings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="c1"&gt;// Set this to false&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"dev.containers.dockerCredentialHelper"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This prevents automatic &lt;code&gt;credsStore&lt;/code&gt; configuration in the container's &lt;code&gt;~/.docker/config.json&lt;/code&gt;, eliminating IPC errors. However, you'll lose the ability to inherit Docker authentication from the host machine, requiring separate &lt;code&gt;docker login&lt;/code&gt; commands in the container and consideration of associated security risks.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Enable &lt;code&gt;REMOTE_CONTAINERS_IPC&lt;/code&gt; for SSH/Alternative Shells
&lt;/h3&gt;

&lt;p&gt;If you want to maintain IPC functionality even when accessing the container without VS Code, you can persist the &lt;code&gt;REMOTE_CONTAINERS_IPC&lt;/code&gt; value set by VS Code across shells.&lt;/p&gt;

&lt;p&gt;For example, using Dev Container's &lt;code&gt;postAttachCommand&lt;/code&gt;, you could write the environment variable to &lt;code&gt;~/.config/fish/conf.d/&lt;/code&gt; (for fish shell):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REMOTE_CONTAINERS_IPC&lt;/span&gt;&lt;span class="p"&gt;+x&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt;
  &lt;span class="c"&gt;# Export environment variable for fish shell&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"set -x REMOTE_CONTAINERS_IPC &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;REMOTE_CONTAINERS_IPC&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; ~/.config/fish/conf.d/remote-containers-ipc.fish
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes &lt;code&gt;REMOTE_CONTAINERS_IPC&lt;/code&gt; available even when accessing the container via SSH. However, the socket path can change with each VS Code connection, so the persisted value can go stale; you'll need a rule for refreshing it, such as rewriting the file on every &lt;code&gt;postAttachCommand&lt;/code&gt; run.&lt;/p&gt;
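&lt;p&gt;For bash users, a rough equivalent might look like the following (the &lt;code&gt;~/.config/profile.d&lt;/code&gt; path is an assumption, and you would source the generated file from your shell's rc file; a demo value is used when the variable is unset so the flow is visible end to end):&lt;/p&gt;

```shell
# Persist the IPC socket path into a file that a login shell can source.
IPC="${REMOTE_CONTAINERS_IPC:-/tmp/vscode-ipc-demo.sock}"
TARGET_DIR="${HOME}/.config/profile.d"
mkdir -p "$TARGET_DIR"
echo "export REMOTE_CONTAINERS_IPC=${IPC}" > "$TARGET_DIR/remote-containers-ipc.sh"
cat "$TARGET_DIR/remote-containers-ipc.sh"
```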

&lt;h3&gt;
  
  
  4. Clean Up Host Machine Credentials
&lt;/h3&gt;

&lt;p&gt;Authentication issues like 403 errors during &lt;code&gt;docker pull&lt;/code&gt; might occur if credential information in the host machine's &lt;code&gt;~/.docker/config.json&lt;/code&gt; or key management (like Keychain Access) is corrupted.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On macOS, try deleting credentials for &lt;code&gt;ghcr.io&lt;/code&gt;, &lt;code&gt;quay.io&lt;/code&gt;, etc., from &lt;strong&gt;Keychain Access&lt;/strong&gt; and then run &lt;code&gt;docker login&lt;/code&gt; again.&lt;/li&gt;
&lt;li&gt;Verify that tokens obtained after &lt;code&gt;docker login&lt;/code&gt; have necessary permissions like &lt;code&gt;packages:read&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Reviewing these aspects can help prevent &lt;strong&gt;403 errors&lt;/strong&gt; (insufficient permissions).&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;VS Code's Dev Container extension includes a &lt;strong&gt;Docker Credential Helper&lt;/strong&gt; that inherits Docker authentication from the host machine.

&lt;ul&gt;
&lt;li&gt;Here, "host machine" means &lt;strong&gt;the machine running VS Code&lt;/strong&gt;, not the Docker daemon host.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;This mechanism automatically configures &lt;code&gt;credsStore&lt;/code&gt; in the container's &lt;code&gt;~/.docker/config.json&lt;/code&gt;, but can &lt;strong&gt;fail authentication in environments without &lt;code&gt;REMOTE_CONTAINERS_IPC&lt;/code&gt; (like SSH)&lt;/strong&gt;.&lt;/li&gt;

&lt;li&gt;Available solutions include:

&lt;ol&gt;
&lt;li&gt;Removing &lt;code&gt;credsStore&lt;/code&gt; from the container's &lt;code&gt;~/.docker/config.json&lt;/code&gt; (though it may be re-added)&lt;/li&gt;
&lt;li&gt;Disabling the Credential Helper by setting &lt;code&gt;dev.containers.dockerCredentialHelper&lt;/code&gt; to &lt;strong&gt;false&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Implementing a custom mechanism to persist &lt;code&gt;REMOTE_CONTAINERS_IPC&lt;/code&gt; across external shells&lt;/li&gt;
&lt;li&gt;Cleaning up Docker credentials on the host machine (checking/re-logging via Keychain Access)&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/microsoft/vscode-remote-release/issues/7982" rel="noopener noreferrer"&gt;Devcontainer version 0.275 - auto adding ~/.docker/config.json with credsStore specified which cause az acr login to fail · Issue #7982 · microsoft/vscode-remote-release&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>vscode</category>
      <category>devcontainer</category>
      <category>docker</category>
      <category>debugging</category>
    </item>
    <item>
      <title>Deploying Helm Charts to Multiple Kubernetes Clusters with Cluster API's HelmChartProxy</title>
      <dc:creator>suin</dc:creator>
      <pubDate>Thu, 21 Nov 2024 09:59:49 +0000</pubDate>
      <link>https://forem.com/suin/deploying-helm-charts-to-multiple-kubernetes-clusters-with-cluster-apis-helmchartproxy-3fa6</link>
      <guid>https://forem.com/suin/deploying-helm-charts-to-multiple-kubernetes-clusters-with-cluster-apis-helmchartproxy-3fa6</guid>
      <description>&lt;p&gt;In this article, I'll introduce you to HelmChartProxy, a powerful feature of Cluster API that enables bulk deployment of Helm charts across multiple Kubernetes clusters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;When managing multiple Kubernetes clusters, you often need to deploy the same Helm charts across different environments. Traditionally, this meant running Helm commands individually for each cluster. However, with Cluster API's HelmChartProxy, you can now streamline this process with bulk deployments.&lt;/p&gt;

&lt;p&gt;We'll explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bulk deployment to all clusters&lt;/li&gt;
&lt;li&gt;Targeted deployment using labels&lt;/li&gt;
&lt;li&gt;Deployment from private registries&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setting Up the Test Environment
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Docker (16GB+ memory recommended)&lt;/li&gt;
&lt;li&gt;kubectl&lt;/li&gt;
&lt;li&gt;kind&lt;/li&gt;
&lt;li&gt;clusterctl&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  1. Creating the Management Cluster
&lt;/h3&gt;

&lt;p&gt;First, let's create a management cluster using kind. Create the following configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# kind-cluster-with-extramounts.yaml&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;
&lt;span class="na"&gt;networking&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ipFamily&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dual&lt;/span&gt;
&lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;control-plane&lt;/span&gt;
  &lt;span class="na"&gt;extraMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/run/docker.sock&lt;/span&gt;
      &lt;span class="na"&gt;containerPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/run/docker.sock&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the cluster using this configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind create cluster &lt;span class="nt"&gt;--config&lt;/span&gt; kind-cluster-with-extramounts.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Setting Up Cluster API
&lt;/h3&gt;

&lt;p&gt;Install Cluster API on the management cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CLUSTER_TOPOLOGY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true
&lt;/span&gt;clusterctl init &lt;span class="nt"&gt;--infrastructure&lt;/span&gt; docker &lt;span class="nt"&gt;--addon&lt;/span&gt; helm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait for all components to be ready:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Available &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;300s &lt;span class="nt"&gt;-n&lt;/span&gt; capi-system deployments &lt;span class="nt"&gt;--all&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Available &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;300s &lt;span class="nt"&gt;-n&lt;/span&gt; capi-kubeadm-bootstrap-system deployments &lt;span class="nt"&gt;--all&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Available &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;300s &lt;span class="nt"&gt;-n&lt;/span&gt; capi-kubeadm-control-plane-system deployments &lt;span class="nt"&gt;--all&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Available &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;300s &lt;span class="nt"&gt;-n&lt;/span&gt; capd-system deployments &lt;span class="nt"&gt;--all&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Available &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;300s &lt;span class="nt"&gt;-n&lt;/span&gt; caaph-system deployments &lt;span class="nt"&gt;--all&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Creating Workload Clusters
&lt;/h3&gt;

&lt;p&gt;We'll create two clusters named "muscat" and "delaware":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Generate manifest for muscat cluster&lt;/span&gt;
clusterctl generate cluster muscat &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--flavor&lt;/span&gt; development &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--kubernetes-version&lt;/span&gt; v1.28.0 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--control-plane-machine-count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--worker-machine-count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; muscat.yaml

&lt;span class="c"&gt;# Generate manifest for delaware cluster&lt;/span&gt;
clusterctl generate cluster delaware &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--flavor&lt;/span&gt; development &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--kubernetes-version&lt;/span&gt; v1.28.0 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--control-plane-machine-count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--worker-machine-count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; delaware.yaml

&lt;span class="c"&gt;# Create the clusters&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; muscat.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; delaware.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get the kubeconfig files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;clusterctl get kubeconfig muscat &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; muscat-kubeconfig.yaml
clusterctl get kubeconfig delaware &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; delaware-kubeconfig.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Setting Up CNI
&lt;/h3&gt;

&lt;p&gt;Install Calico on both clusters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install Calico on muscat&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;muscat-kubeconfig.yaml apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml

&lt;span class="c"&gt;# Install Calico on delaware&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;delaware-kubeconfig.yaml apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying with HelmChartProxy
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Deploying to All Clusters
&lt;/h3&gt;

&lt;p&gt;Let's look at an example of deploying nginx to all clusters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# helm-chart-proxy-nginx.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;addons.cluster.x-k8s.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HelmChartProxy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;clusterSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt; &lt;span class="c1"&gt;# Empty selector means "select all clusters"&lt;/span&gt;
  &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;oci://registry-1.docker.io/bitnamicharts&lt;/span&gt;
  &lt;span class="na"&gt;chartName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;18.2.5"&lt;/span&gt;
  &lt;span class="na"&gt;releaseName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;waitForJobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;atomic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;wait&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5m&lt;/span&gt;
    &lt;span class="na"&gt;install&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;createNamespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;valuesTemplate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
    &lt;span class="s"&gt;service:&lt;/span&gt;
      &lt;span class="s"&gt;type: NodePort&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy and verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Deploy nginx&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; helm-chart-proxy-nginx.yaml

&lt;span class="c"&gt;# Wait for HelmReleaseProxy to be ready&lt;/span&gt;
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Ready &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;300s helmreleaseproxy &lt;span class="nt"&gt;--all&lt;/span&gt;

&lt;span class="c"&gt;# Check pods on muscat&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;muscat-kubeconfig.yaml get pods &lt;span class="nt"&gt;-n&lt;/span&gt; nginx

&lt;span class="c"&gt;# Check pods on delaware&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;delaware-kubeconfig.yaml get pods &lt;span class="nt"&gt;-n&lt;/span&gt; nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Targeted Deployment Using Labels
&lt;/h3&gt;

&lt;p&gt;You can deploy to specific clusters using labels:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# helm-chart-proxy-nginx.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;addons.cluster.x-k8s.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HelmChartProxy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;clusterSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;use-nginx&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;  &lt;span class="c1"&gt;# Only deploy to clusters with this label&lt;/span&gt;
  &lt;span class="c1"&gt;# ... (other settings remain the same)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Usage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Deploy nginx&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; helm-chart-proxy-nginx.yaml

&lt;span class="c"&gt;# Label the muscat cluster&lt;/span&gt;
kubectl label cluster muscat use-nginx&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# Check deployment status&lt;/span&gt;
kubectl get helmreleaseproxy &lt;span class="nt"&gt;-A&lt;/span&gt;

&lt;span class="c"&gt;# Verify nginx is running on muscat&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;muscat-kubeconfig.yaml get pods &lt;span class="nt"&gt;-n&lt;/span&gt; nginx

&lt;span class="c"&gt;# Verify nginx is not running on delaware&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;delaware-kubeconfig.yaml get pods &lt;span class="nt"&gt;-n&lt;/span&gt; nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
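
&lt;p&gt;To undo a targeted deployment, remove the label again (the trailing &lt;code&gt;-&lt;/code&gt; deletes it). Once the cluster no longer matches the &lt;code&gt;clusterSelector&lt;/code&gt;, CAAPH should uninstall the release from it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Remove the label from the muscat cluster&lt;/span&gt;
kubectl label cluster muscat use-nginx-

&lt;span class="c"&gt;# The corresponding HelmReleaseProxy should be cleaned up&lt;/span&gt;
kubectl get helmreleaseproxy &lt;span class="nt"&gt;-A&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;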



&lt;h3&gt;
  
  
  Deploying from Private Registries
&lt;/h3&gt;

&lt;p&gt;Let's use GitHub Container Registry as an example:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a GitHub Personal Access Token (requires the &lt;code&gt;read:packages&lt;/code&gt; scope)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up registry credentials:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create config.json with base64 encoded credentials&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"YOUR_GITHUB_USERNAME:YOUR_GITHUB_TOKEN"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; auth.txt

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; config.json &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
{
  "auths": {
    "ghcr.io": {
      "auth": "&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;auth.txt&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;"
    }
  }
}
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="c"&gt;# Create the secret&lt;/span&gt;
kubectl create secret generic github-creds &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;config.json &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-n&lt;/span&gt; caaph-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Create HelmChartProxy for private charts:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;addons.cluster.x-k8s.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HelmChartProxy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;private-chart&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;clusterSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
  &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;oci://ghcr.io/YOUR_GITHUB_USERNAME&lt;/span&gt;
  &lt;span class="na"&gt;chartName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;YOUR_CHART_NAME&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.1.0"&lt;/span&gt;
  &lt;span class="na"&gt;credentials&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github-creds&lt;/span&gt;
      &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;caaph-system&lt;/span&gt;
    &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config.json&lt;/span&gt;
  &lt;span class="c1"&gt;# ... (other settings)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;HelmChartProxy significantly simplifies the process of deploying Helm charts across multiple clusters. Key benefits include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduced operational overhead through bulk deployments&lt;/li&gt;
&lt;li&gt;Flexible deployment control using labels&lt;/li&gt;
&lt;li&gt;Support for private registries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're managing multiple Kubernetes clusters, consider incorporating HelmChartProxy into your workflow. It can greatly streamline Helm chart deployment and make multi-cluster management more efficient.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>helm</category>
      <category>capi</category>
      <category>caaph</category>
    </item>
    <item>
      <title>How to Host Helm Charts on GitHub Container Registry</title>
      <dc:creator>suin</dc:creator>
      <pubDate>Thu, 21 Nov 2024 07:14:40 +0000</pubDate>
      <link>https://forem.com/suin/how-to-host-helm-charts-on-github-container-registry-43kp</link>
      <guid>https://forem.com/suin/how-to-host-helm-charts-on-github-container-registry-43kp</guid>
      <description>&lt;p&gt;Recently, I've been working more with application management in Kubernetes environments, which led me to think deeply about Helm chart management. Today, I'll show you how to host Helm charts using GitHub Container Registry (GHCR). What makes GHCR particularly attractive is its free private registry feature, making it an ideal choice for personal projects or small team collaborations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;You'll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;A GitHub account with a Classic Personal Access Token

&lt;ul&gt;
&lt;li&gt;Your token needs the &lt;code&gt;write:packages&lt;/code&gt; scope&lt;/li&gt;
&lt;li&gt;Note: Fine-grained permissions aren't currently supported for GHCR&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Our Example Chart
&lt;/h2&gt;

&lt;p&gt;For this tutorial, we'll use a minimalist chart that creates a &lt;code&gt;hello-world&lt;/code&gt; namespace. Here's our chart structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;charts/hello-world/
├── Chart.yaml
└── templates/
    └── namespace.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Chart.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;A simple Helm chart that creates hello-world namespace&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.1.0&lt;/span&gt;
&lt;span class="na"&gt;appVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.0.0"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# templates/namespace.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
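
&lt;p&gt;Before publishing, it's worth validating the chart locally. &lt;code&gt;helm lint&lt;/code&gt; catches structural mistakes, and &lt;code&gt;helm template&lt;/code&gt; renders the manifests without touching a cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check the chart for common issues&lt;/span&gt;
helm lint charts/hello-world

&lt;span class="c"&gt;# Render the templates locally to review the output&lt;/span&gt;
helm template charts/hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;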



&lt;h2&gt;
  
  
  Publishing to GHCR
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Log into GitHub Container Registry
&lt;/h3&gt;

&lt;p&gt;First, log in using the Helm CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm registry login ghcr.io &lt;span class="nt"&gt;-u&lt;/span&gt; YOUR_GITHUB_USERNAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When prompted, enter your Personal Access Token (PAT) as the password.&lt;/p&gt;
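
&lt;p&gt;If you'd rather not type the token interactively (for example, in CI), recent Helm versions can read it from stdin. This example assumes the token is stored in a &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; environment variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$GITHUB_TOKEN&lt;/span&gt; | helm registry login ghcr.io &lt;span class="nt"&gt;-u&lt;/span&gt; YOUR_GITHUB_USERNAME &lt;span class="nt"&gt;--password-stdin&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;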

&lt;h3&gt;
  
  
  2. Package Your Chart
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm package charts/hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a &lt;code&gt;hello-world-0.1.0.tgz&lt;/code&gt; file.&lt;/p&gt;
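
&lt;p&gt;You can peek inside the archive to confirm what will be pushed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List the files in the packaged chart without extracting it&lt;/span&gt;
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-tzf&lt;/span&gt; hello-world-0.1.0.tgz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;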

&lt;h3&gt;
  
  
  3. Push to GHCR
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm push hello-world-0.1.0.tgz oci://ghcr.io/YOUR_GITHUB_USERNAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! Simple and straightforward.&lt;/p&gt;
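
&lt;p&gt;To confirm the push succeeded, you can read the chart metadata back from the registry (&lt;code&gt;helm show chart&lt;/code&gt; accepts OCI references in recent Helm releases):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm show chart oci://ghcr.io/YOUR_GITHUB_USERNAME/hello-world &lt;span class="nt"&gt;--version&lt;/span&gt; 0.1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;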

&lt;h2&gt;
  
  
  Using Your Published Chart
&lt;/h2&gt;

&lt;p&gt;To install the chart from GHCR:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;hello-world oci://ghcr.io/YOUR_GITHUB_USERNAME/hello-world &lt;span class="nt"&gt;--version&lt;/span&gt; 0.1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;p&gt;After testing, you can clean up with these commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Uninstall the Helm release&lt;/span&gt;
helm uninstall hello-world

&lt;span class="c"&gt;# Delete the created namespace&lt;/span&gt;
kubectl delete namespace hello-world

&lt;span class="c"&gt;# Remove the local package file&lt;/span&gt;
&lt;span class="nb"&gt;rm &lt;/span&gt;hello-world-0.1.0.tgz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why GHCR?
&lt;/h2&gt;

&lt;p&gt;GitHub Container Registry offers several advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Free private registries&lt;/li&gt;
&lt;li&gt;Seamless GitHub integration for easier management&lt;/li&gt;
&lt;li&gt;OCI standard compatibility&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, having your Helm charts and source code in the same ecosystem simplifies your DevOps workflow. The ability to manage access through GitHub's familiar interface is a significant bonus for teams already using GitHub for their development work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Hosting Helm charts on GitHub Container Registry is a straightforward process that provides a robust solution for chart management. Whether you're working on personal projects or collaborating with a small team, GHCR offers a reliable, cost-effective platform for your Helm charts.&lt;/p&gt;

&lt;p&gt;Give it a try and let me know how it works for your use case! Feel free to share your experiences or ask questions in the comments below.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>ghcr</category>
      <category>helm</category>
      <category>github</category>
    </item>
    <item>
      <title>How to Use a GitHub Private Repository as a Helm Chart Repository</title>
      <dc:creator>suin</dc:creator>
      <pubDate>Thu, 21 Nov 2024 06:03:46 +0000</pubDate>
      <link>https://forem.com/suin/how-to-use-a-github-private-repository-as-a-helm-chart-repository-390k</link>
      <guid>https://forem.com/suin/how-to-use-a-github-private-repository-as-a-helm-chart-repository-390k</guid>
      <description>&lt;p&gt;In this article, I'll show you how to create your own private Helm Chart repository using a GitHub private repository. This approach is perfect for personal or team use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use GitHub as a Helm Chart Repository?
&lt;/h2&gt;

&lt;p&gt;While this setup might not be ideal for production environments, it's excellent for development and testing purposes because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can securely manage private Kubernetes manifests&lt;/li&gt;
&lt;li&gt;It's easy to share Helm Charts within your team&lt;/li&gt;
&lt;li&gt;No additional infrastructure management is required&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;You'll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A GitHub private repository&lt;/li&gt;
&lt;li&gt;A GitHub Personal Access Token

&lt;ul&gt;
&lt;li&gt;For Classic Token: requires &lt;code&gt;repo&lt;/code&gt; scope&lt;/li&gt;
&lt;li&gt;For Fine-grained Token: needs read access to "Contents"&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Helm CLI installed&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;git&lt;/code&gt; command line tool&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Sample Chart Overview
&lt;/h2&gt;

&lt;p&gt;For this tutorial, we'll use a minimal &lt;code&gt;hello-world&lt;/code&gt; chart. Here's what it looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Chart.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;A simple Helm chart that creates hello-world namespace&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.1.0&lt;/span&gt;
&lt;span class="na"&gt;appVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.0.0"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# templates/namespace.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This chart is intentionally simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It creates a namespace called &lt;code&gt;hello-world&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;It has no configurable values&lt;/li&gt;
&lt;li&gt;When deployed, it creates a new namespace in your Kubernetes cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While real-world charts are typically more complex, we're keeping it minimal to focus on the repository setup process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step-by-Step Guide
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Package Your Chart
&lt;/h3&gt;

&lt;p&gt;First, let's package our &lt;code&gt;hello-world&lt;/code&gt; chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;private-repo
helm package hello-world/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates &lt;code&gt;hello-world-0.1.0.tgz&lt;/code&gt;, containing our Chart.yaml and templates/namespace.yaml.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Generate the Index File
&lt;/h3&gt;

&lt;p&gt;Create an index file to tell Helm about available charts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo index &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This generates &lt;code&gt;index.yaml&lt;/code&gt; with content like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;entries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hello-world&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2&lt;/span&gt;
      &lt;span class="na"&gt;appVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.0.0&lt;/span&gt;
      &lt;span class="na"&gt;created&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2024-11-21T10:00:00.000000000Z"&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;A simple Helm chart that creates hello-world namespace&lt;/span&gt;
      &lt;span class="na"&gt;digest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1234567890abcdef...&lt;/span&gt; &lt;span class="c1"&gt;# actual hash will vary&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application&lt;/span&gt;
      &lt;span class="na"&gt;urls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;hello-world-0.1.0.tgz&lt;/span&gt;
      &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.1.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Push to GitHub
&lt;/h3&gt;

&lt;p&gt;Upload everything to GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git init
git add &lt;span class="nb"&gt;.&lt;/span&gt;
git commit &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Initial commit"&lt;/span&gt;
git branch &lt;span class="nt"&gt;-M&lt;/span&gt; main
git remote add origin git@github.com:your-username/repo-name.git
git push &lt;span class="nt"&gt;-u&lt;/span&gt; origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Configure Helm
&lt;/h3&gt;

&lt;p&gt;Add the repository to your local Helm configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add &lt;span class="nt"&gt;--username&lt;/span&gt; your-github-username &lt;span class="nt"&gt;--password&lt;/span&gt; your-github-token private-repo &lt;span class="s1"&gt;'https://raw.githubusercontent.com/your-username/repo-name/main'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
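
&lt;p&gt;Note that passing &lt;code&gt;--password&lt;/code&gt; on the command line leaves the token in your shell history. Recent Helm versions can read it from stdin instead (this example assumes the token is stored in a &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; environment variable):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$GITHUB_TOKEN&lt;/span&gt; | helm repo add &lt;span class="nt"&gt;--username&lt;/span&gt; your-github-username &lt;span class="nt"&gt;--password-stdin&lt;/span&gt; private-repo &lt;span class="s1"&gt;'https://raw.githubusercontent.com/your-username/repo-name/main'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;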



&lt;p&gt;Update the repository information:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Verify Setup
&lt;/h3&gt;

&lt;p&gt;Check if your chart is discoverable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm search repo private-repo/hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                        CHART VERSION   APP VERSION   DESCRIPTION
private-repo/hello-world    0.1.0           1.0.0         A simple Helm chart that creates hello-world namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Using Your Chart
&lt;/h2&gt;

&lt;p&gt;Before installing, it's good practice to do a dry-run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;test-hello private-repo/hello-world &lt;span class="nt"&gt;--dry-run&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Source: hello-world/templates/namespace.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-world&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything looks good, proceed with the installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;hello-world private-repo/hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Successful installation output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME: hello-world
LAST DEPLOYED: Thu Nov 21 14:45:40 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get namespace hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Repository Structure
&lt;/h2&gt;

&lt;p&gt;Your final repository structure should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── README.md
├── index.yaml              # Helm repository index
├── hello-world-0.1.0.tgz   # Packaged chart
└── hello-world/           # Chart source
    ├── Chart.yaml         # Chart metadata
    └── templates/         # Kubernetes manifest templates
        └── namespace.yaml # Namespace manifest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! You now have your own private Helm Chart repository. This setup is perfect for sharing charts within your team and managing versions effectively.&lt;/p&gt;
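
&lt;p&gt;Releasing a new chart version follows the same flow: bump &lt;code&gt;version&lt;/code&gt; in &lt;code&gt;Chart.yaml&lt;/code&gt;, repackage, and regenerate the index. The &lt;code&gt;--merge&lt;/code&gt; flag keeps the existing entries so older versions remain installable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# After bumping version in hello-world/Chart.yaml (e.g. to 0.2.0)&lt;/span&gt;
helm package hello-world/
helm repo index &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--merge&lt;/span&gt; index.yaml
git add &lt;span class="nb"&gt;.&lt;/span&gt;
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Release hello-world 0.2.0"&lt;/span&gt;
git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;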

&lt;p&gt;Feel free to reach out in the comments if you have any questions or run into issues!&lt;/p&gt;

</description>
      <category>helm</category>
      <category>kubernetes</category>
      <category>github</category>
      <category>devops</category>
    </item>
    <item>
      <title>Getting Started with Cluster API Locally: A Developer's Guide to CAPD</title>
      <dc:creator>suin</dc:creator>
      <pubDate>Wed, 20 Nov 2024 06:54:27 +0000</pubDate>
      <link>https://forem.com/suin/getting-started-with-cluster-api-locally-a-developers-guide-to-capd-1edg</link>
      <guid>https://forem.com/suin/getting-started-with-cluster-api-locally-a-developers-guide-to-capd-1edg</guid>
      <description>&lt;p&gt;Interested in Cluster API for automating Kubernetes cluster management but hesitant because you think you need a cloud environment? Good news! You can actually try Cluster API right on your local machine with just Docker. In this guide, we'll walk through setting up a testing environment using CAPD (Cluster API Provider Docker).&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;You'll need the following tools installed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;kubectl&lt;/li&gt;
&lt;li&gt;kind&lt;/li&gt;
&lt;li&gt;clusterctl&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Here's what we'll cover:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creating a management cluster using kind&lt;/li&gt;
&lt;li&gt;Initializing Cluster API (CAPD)&lt;/li&gt;
&lt;li&gt;Creating a workload cluster&lt;/li&gt;
&lt;li&gt;Setting up CNI&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's dive in!&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Creating the Management Cluster
&lt;/h2&gt;

&lt;p&gt;First, we'll create a management cluster using kind that will serve as the foundation for CAPD.&lt;/p&gt;

&lt;p&gt;Create a configuration file that enables access to the host's Docker socket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# kind-cluster-with-extramounts.yaml&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;
&lt;span class="na"&gt;networking&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ipFamily&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dual&lt;/span&gt;
&lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;control-plane&lt;/span&gt;
  &lt;span class="na"&gt;extraMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/run/docker.sock&lt;/span&gt;
      &lt;span class="na"&gt;containerPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/run/docker.sock&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, create the kind cluster using this configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind create cluster &lt;span class="nt"&gt;--config&lt;/span&gt; kind-cluster-with-extramounts.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Creating cluster &lt;span class="s2"&gt;"kind"&lt;/span&gt; ...
 ✓ Ensuring node image &lt;span class="o"&gt;(&lt;/span&gt;kindest/node:v1.31.0&lt;span class="o"&gt;)&lt;/span&gt; 🖼
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to &lt;span class="s2"&gt;"kind-kind"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Initializing Cluster API
&lt;/h2&gt;

&lt;p&gt;Next, let's install Cluster API on our management cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CLUSTER_TOPOLOGY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; clusterctl init &lt;span class="nt"&gt;--infrastructure&lt;/span&gt; docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command installs all necessary components. If successful, you'll see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Fetching providers
Installing cert-manager &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"v1.16.0"&lt;/span&gt;
Waiting &lt;span class="k"&gt;for &lt;/span&gt;cert-manager to be available...
Installing &lt;span class="nv"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"cluster-api"&lt;/span&gt; &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"v1.8.5"&lt;/span&gt; &lt;span class="nv"&gt;targetNamespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"capi-system"&lt;/span&gt;
Installing &lt;span class="nv"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"bootstrap-kubeadm"&lt;/span&gt; &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"v1.8.5"&lt;/span&gt; &lt;span class="nv"&gt;targetNamespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"capi-kubeadm-bootstrap-system"&lt;/span&gt;
Installing &lt;span class="nv"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"control-plane-kubeadm"&lt;/span&gt; &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"v1.8.5"&lt;/span&gt; &lt;span class="nv"&gt;targetNamespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"capi-kubeadm-control-plane-system"&lt;/span&gt;
Installing &lt;span class="nv"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"infrastructure-docker"&lt;/span&gt; &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"v1.8.5"&lt;/span&gt; &lt;span class="nv"&gt;targetNamespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"capd-system"&lt;/span&gt;
Your management cluster has been initialized successfully!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Creating a Workload Cluster
&lt;/h2&gt;

&lt;p&gt;Now let's create a Kubernetes cluster for testing. We'll name it "muscat" 🍇:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;clusterctl generate cluster muscat &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--flavor&lt;/span&gt; development &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--kubernetes-version&lt;/span&gt; v1.31.0 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--control-plane-machine-count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--worker-machine-count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; muscat.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; muscat.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the cluster status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME     CLUSTERCLASS   PHASE         AGE     VERSION
muscat   quick-start    Provisioned   5m47s   v1.31.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
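&lt;p&gt;If you want to script against this status (for example, waiting in CI until provisioning finishes), the &lt;code&gt;PHASE&lt;/code&gt; column is easy to pick out with &lt;code&gt;awk&lt;/code&gt;. A minimal sketch, with the sample output above inlined via &lt;code&gt;printf&lt;/code&gt; so the commands are self-contained:&lt;/p&gt;

```shell
# Sketch: extract the PHASE column for the muscat cluster.
# The sample output shown above is inlined here with printf; in a real
# script you would pipe the live output of: kubectl get cluster
printf 'NAME     CLUSTERCLASS   PHASE         AGE     VERSION\nmuscat   quick-start    Provisioned   5m47s   v1.31.0\n' | awk '$1 == "muscat" {print $3}'
# prints: Provisioned
```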



&lt;h2&gt;
  
  
  4. Setting Up CNI
&lt;/h2&gt;

&lt;p&gt;Finally, let's install Calico as the CNI plugin so that pod networking works and the nodes can become ready:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# First, get the kubeconfig&lt;/span&gt;
clusterctl get kubeconfig muscat &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; kubeconfig.muscat.yaml
&lt;span class="c"&gt;# Install Calico&lt;/span&gt;
kubectl &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;./kubeconfig.muscat.yaml apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few moments, all nodes should reach the &lt;code&gt;Ready&lt;/code&gt; state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;./kubeconfig.muscat.yaml get nodes
NAME                            STATUS   ROLES           AGE     VERSION
muscat-md-0-9mhvz-4xxcd-42nh8   Ready    &amp;lt;none&amp;gt;          4m20s   v1.31.0
muscat-md-0-9mhvz-4xxcd-8hghp   Ready    &amp;lt;none&amp;gt;          4m25s   v1.31.0
muscat-md-0-9mhvz-4xxcd-mxg7k   Ready    &amp;lt;none&amp;gt;          4m20s   v1.31.0
muscat-r65sn-592c8              Ready    control-plane   3m35s   v1.31.0
muscat-r65sn-xrzfl              Ready    control-plane   4m41s   v1.31.0
muscat-worker-08sx08            Ready    &amp;lt;none&amp;gt;          4m17s   v1.31.0
muscat-worker-u6l39f            Ready    &amp;lt;none&amp;gt;          4m17s   v1.31.0
muscat-worker-ydhg40            Ready    &amp;lt;none&amp;gt;          4m17s   v1.31.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You now have a fully functional Cluster API testing environment on your local machine! Here's what we've accomplished:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A management cluster (kind) running Cluster API&lt;/li&gt;
&lt;li&gt;A workload cluster with three control plane nodes and three worker nodes&lt;/li&gt;
&lt;li&gt;All this running locally with just Docker - no cloud provider needed!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You're now ready to start experimenting with various Cluster API features in this local environment. Happy clustering! 🎉&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>clusterapi</category>
      <category>capd</category>
      <category>docker</category>
    </item>
    <item>
      <title>Sharing Secrets Between Kubernetes Clusters Using external-secrets PushSecret</title>
      <dc:creator>suin</dc:creator>
      <pubDate>Thu, 14 Nov 2024 07:12:41 +0000</pubDate>
      <link>https://forem.com/suin/sharing-secrets-between-kubernetes-clusters-using-external-secrets-pushsecret-l96</link>
      <guid>https://forem.com/suin/sharing-secrets-between-kubernetes-clusters-using-external-secrets-pushsecret-l96</guid>
<description>&lt;p&gt;In this article, I'll explain how to share secrets between Kubernetes clusters using the PushSecret feature of external-secrets. In multi-cluster environments, secret synchronization is a crucial operational challenge. By building a system that securely shares and automatically synchronizes secrets between clusters, we can improve both operational efficiency and security.&lt;/p&gt;

&lt;h3&gt;
  
  
  Desired Architecture
&lt;/h3&gt;

&lt;p&gt;We'll implement the following architecture to automatically synchronize secrets from the source cluster to the target cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmermaid.ink%2Fimg%2Fpako%3AeNpdkMFOwzAMhl8l8rnhASo0aRrcQEiU0wgHL_Gaam0SOYnGtO7dSdsVCXz64_-z5fxX0N4Q1HDs_Vlb5CRe3pUTpWI-tIzBil2fYyKW208Fjc-saW0p-FrYqRrSTGliZvHHe27eikHfZcZhL-NMxMcDb3wgxuT5Py7kg9yMZ0zajvfVi0_OKLfIGSuUgp0Pl5WCUXwgt5Tk_UqoYCAesDPln9dpVEGyNJCCukiDfFKg3K1wmJNvLk5DnThTBexza6E-Yh_LKweDiZ46LLkMv10yXbn_dYlxTrOCgG7v_crcfgBBgHVt%3Ftype%3Dpng" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmermaid.ink%2Fimg%2Fpako%3AeNpdkMFOwzAMhl8l8rnhASo0aRrcQEiU0wgHL_Gaam0SOYnGtO7dSdsVCXz64_-z5fxX0N4Q1HDs_Vlb5CRe3pUTpWI-tIzBil2fYyKW208Fjc-saW0p-FrYqRrSTGliZvHHe27eikHfZcZhL-NMxMcDb3wgxuT5Py7kg9yMZ0zajvfVi0_OKLfIGSuUgp0Pl5WCUXwgt5Tk_UqoYCAesDPln9dpVEGyNJCCukiDfFKg3K1wmJNvLk5DnThTBexza6E-Yh_LKweDiZ46LLkMv10yXbn_dYlxTrOCgG7v_crcfgBBgHVt%3Ftype%3Dpng" width="540" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To follow along with this article, you'll need the following tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;k3d&lt;/li&gt;
&lt;li&gt;kubectl&lt;/li&gt;
&lt;li&gt;Helm&lt;/li&gt;
&lt;li&gt;kubectx (optional: for easier context switching)&lt;/li&gt;
&lt;li&gt;kubectl-view-secret (optional: for viewing secret contents)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Environment Setup
&lt;/h2&gt;

&lt;p&gt;We'll set up two clusters using k3d:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Source cluster: &lt;code&gt;k3d-source-cluster&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Target cluster: &lt;code&gt;k3d-target-cluster&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Creating a Shared Network
&lt;/h3&gt;

&lt;p&gt;First, let's create a shared Docker network so that the two Kubernetes clusters we'll create later can communicate with each other.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker network create shared-net &lt;span class="nt"&gt;--subnet&lt;/span&gt; 172.28.0.0/16 &lt;span class="nt"&gt;--gateway&lt;/span&gt; 172.28.0.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command creates a network with the following characteristics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Subnet: 172.28.0.0/16&lt;/li&gt;
&lt;li&gt;Gateway: 172.28.0.1&lt;/li&gt;
&lt;li&gt;Network name: shared-net&lt;/li&gt;
&lt;/ul&gt;
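&lt;p&gt;As a quick sizing note: a /16 mask leaves 16 host bits, which is far more address space than two small k3d clusters will ever need. A shell sanity check of that arithmetic:&lt;/p&gt;

```shell
# 16 host bits give 65536 addresses in the subnet; subtracting the
# network and broadcast addresses leaves the usable host count.
echo $(( 65536 - 2 ))
# prints: 65534
```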

&lt;h3&gt;
  
  
  Setting Up the Source Cluster
&lt;/h3&gt;

&lt;p&gt;The source cluster will provide the secrets. Create a &lt;code&gt;source-cluster.yaml&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k3d.io/v1alpha5&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Simple&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;source-cluster&lt;/span&gt;
&lt;span class="na"&gt;servers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="na"&gt;agents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.io/rancher/k3s:v1.30.0-k3s1&lt;/span&gt;
&lt;span class="na"&gt;kubeAPI&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.0.0.0&lt;/span&gt;
  &lt;span class="na"&gt;hostIP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;127.0.0.1&lt;/span&gt;
  &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;6443"&lt;/span&gt;
&lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;8080:80&lt;/span&gt;
    &lt;span class="na"&gt;nodeFilters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;loadbalancer&lt;/span&gt;
&lt;span class="na"&gt;registries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;create&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.localhost&lt;/span&gt;
    &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;127.0.0.1&lt;/span&gt;
    &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;15000"&lt;/span&gt;
&lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-net&lt;/span&gt;
&lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;k3d&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;wait&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;kubeconfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;updateDefaultKubeconfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;switchCurrentContext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;k3s&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;extraArgs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# Different CIDR ranges to avoid overlap&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;arg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--cluster-cidr=10.42.0.0/16"&lt;/span&gt;
        &lt;span class="na"&gt;nodeFilters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;server:*&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;arg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--service-cidr=10.43.0.0/16"&lt;/span&gt;
        &lt;span class="na"&gt;nodeFilters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;server:*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key points of this configuration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Network Settings&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;KubeAPI: Bound to port 6443&lt;/li&gt;
&lt;li&gt;Load balancer: Port mapping 8080:80&lt;/li&gt;
&lt;li&gt;Network: Uses the shared network created earlier&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Registry Settings&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Creates a local registry (name: registry.localhost)&lt;/li&gt;
&lt;li&gt;Bound to port 15000&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CIDR Settings&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Cluster CIDR: 10.42.0.0/16&lt;/li&gt;
&lt;li&gt;Service CIDR: 10.43.0.0/16 (keeping these CIDR ranges distinct per cluster is important to avoid network overlaps)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Create the cluster with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;k3d cluster create &lt;span class="nt"&gt;--config&lt;/span&gt; source-cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setting Up the Target Cluster
&lt;/h3&gt;

&lt;p&gt;The target cluster will receive the secrets. Create a &lt;code&gt;target-cluster.yaml&lt;/code&gt; with different settings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k3d.io/v1alpha5&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Simple&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;target-cluster&lt;/span&gt;
&lt;span class="na"&gt;servers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="na"&gt;agents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.io/rancher/k3s:v1.30.0-k3s1&lt;/span&gt;
&lt;span class="na"&gt;kubeAPI&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.0.0.0&lt;/span&gt;
  &lt;span class="na"&gt;hostIP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;127.0.0.1&lt;/span&gt;
  &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;6444"&lt;/span&gt;  &lt;span class="c1"&gt;# Different from source cluster&lt;/span&gt;
&lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;8081:80&lt;/span&gt;   &lt;span class="c1"&gt;# Different from source cluster&lt;/span&gt;
    &lt;span class="na"&gt;nodeFilters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;loadbalancer&lt;/span&gt;
&lt;span class="na"&gt;registries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;use&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;registry.localhost&lt;/span&gt;  &lt;span class="c1"&gt;# Use source cluster's registry&lt;/span&gt;
&lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;shared-net&lt;/span&gt;
&lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;k3d&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;wait&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;kubeconfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;updateDefaultKubeconfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;switchCurrentContext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;k3s&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;extraArgs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# Different CIDR ranges from source cluster&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;arg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--cluster-cidr=10.44.0.0/16"&lt;/span&gt;
        &lt;span class="na"&gt;nodeFilters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;server:*&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;arg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--service-cidr=10.45.0.0/16"&lt;/span&gt;
        &lt;span class="na"&gt;nodeFilters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;server:*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key differences from the source cluster:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Different Ports and CIDR Ranges&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;KubeAPI port: 6444 (source uses 6443)&lt;/li&gt;
&lt;li&gt;Load balancer port: 8081:80 (source uses 8080:80)&lt;/li&gt;
&lt;li&gt;CIDR ranges: 10.44.0.0/16 and 10.45.0.0/16&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Registry Setup&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Uses the registry created by the source cluster&lt;/li&gt;
&lt;li&gt;Specified in the &lt;code&gt;use&lt;/code&gt; section&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Settings&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Uses the same shared network&lt;/li&gt;
&lt;li&gt;Enables inter-cluster communication&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Create the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;k3d cluster create &lt;span class="nt"&gt;--config&lt;/span&gt; target-cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting Up external-secrets
&lt;/h2&gt;

&lt;p&gt;Now that our clusters are ready, let's set up external-secrets.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing on the Source Cluster
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Switch to source cluster context&lt;/span&gt;
kubectl config use-context k3d-source-cluster

helm repo add external-secrets https://charts.external-secrets.io
helm repo update

helm &lt;span class="nb"&gt;install &lt;/span&gt;external-secrets &lt;span class="se"&gt;\&lt;/span&gt;
   external-secrets/external-secrets &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-n&lt;/span&gt; external-secrets &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--create-namespace&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configuring Target Cluster Authentication
&lt;/h3&gt;

&lt;p&gt;Let's set up the authentication credentials needed for the source cluster to access the target cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting Authentication Information&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, get the target cluster's client certificate information from your kubeconfig (k3d added entries for both clusters when it created them):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config view &lt;span class="nt"&gt;--raw&lt;/span&gt;

&lt;span class="c"&gt;# Note down these values:&lt;/span&gt;
&lt;span class="c"&gt;# - client-certificate-data&lt;/span&gt;
&lt;span class="c"&gt;# - client-key-data&lt;/span&gt;
&lt;span class="c"&gt;# - certificate-authority-data&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
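&lt;p&gt;A sketch of pulling those fields out non-interactively, assuming the &lt;code&gt;key: value&lt;/code&gt; lines that &lt;code&gt;kubectl config view --raw&lt;/code&gt; prints. An inline mock document with placeholder values stands in for the real kubeconfig here, so the commands are self-contained:&lt;/p&gt;

```shell
# Sketch: grab one of the base64 credential fields from kubeconfig-style
# YAML. CA_DATA/CERT_DATA/KEY_DATA are placeholders; against a real
# cluster, pipe in the output of: kubectl config view --raw
printf 'certificate-authority-data: CA_DATA\nclient-certificate-data: CERT_DATA\nclient-key-data: KEY_DATA\n' | awk '/client-certificate-data:/ {print $2}'
# prints: CERT_DATA
```

&lt;p&gt;Swap the pattern to &lt;code&gt;client-key-data&lt;/code&gt; or &lt;code&gt;certificate-authority-data&lt;/code&gt; to pull the other two fields.&lt;/p&gt;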



&lt;p&gt;&lt;strong&gt;Setting Up Authentication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;target-cluster-credentials.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;target-cluster-credentials&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;client-certificate-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;...&lt;/span&gt; &lt;span class="c1"&gt;# Base64 encoded data from kubeconfig&lt;/span&gt;
  &lt;span class="na"&gt;client-key-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;...&lt;/span&gt; &lt;span class="c1"&gt;# Base64 encoded data from kubeconfig&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the credentials to the source cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Switch to source cluster context&lt;/span&gt;
kubectl config use-context k3d-source-cluster

&lt;span class="c"&gt;# Apply credentials&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; target-cluster-credentials.yaml

&lt;span class="c"&gt;# Verify the created Secret&lt;/span&gt;
kubectl get secret target-cluster-credentials &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setting Up SecretStore
&lt;/h3&gt;

&lt;p&gt;A SecretStore tells external-secrets which backend to read secrets from and write them to. Here, that backend is the target cluster's Kubernetes API.&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;secret-store.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-secrets.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SecretStore&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;target-cluster&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;remoteNamespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;  &lt;span class="c1"&gt;# Target cluster namespace&lt;/span&gt;
      &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://k3d-target-cluster-server-0:6443&lt;/span&gt;  &lt;span class="c1"&gt;# Using k3d internal hostname&lt;/span&gt;
        &lt;span class="na"&gt;caBundle&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;...&lt;/span&gt; &lt;span class="c1"&gt;# certificate-authority-data from kubeconfig&lt;/span&gt;
      &lt;span class="na"&gt;auth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;clientCert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;target-cluster-credentials&lt;/span&gt;
            &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;client-certificate-data&lt;/span&gt;
          &lt;span class="na"&gt;clientKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;target-cluster-credentials&lt;/span&gt;
            &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;client-key-data&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply and verify the SecretStore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Apply SecretStore&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; secret-store.yaml

&lt;span class="c"&gt;# Check status&lt;/span&gt;
kubectl describe secretstore target-cluster

&lt;span class="c"&gt;# Expected output should include:&lt;/span&gt;
Status:
  Conditions:
    Last Transition Time:  ...
    Message:               SecretStore validated
    Reason:                Valid
    Status:                True
    Type:                  Ready
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting Up and Testing PushSecret
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Creating PushSecret
&lt;/h3&gt;

&lt;p&gt;PushSecret automatically pushes secrets from the source cluster to the target cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a Sample Secret (Source Cluster)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create sample secret&lt;/span&gt;
kubectl create secret generic my-secret &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;admin &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;supersecret

&lt;span class="c"&gt;# Verify created secret&lt;/span&gt;
kubectl get secret my-secret &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
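&lt;p&gt;One thing worth remembering: the values in a Secret's &lt;code&gt;data&lt;/code&gt; field are base64-encoded, not encrypted. The two literals created above would appear encoded like this:&lt;/p&gt;

```shell
# Base64 is an encoding, not encryption: anyone who can read the
# Secret object can decode its values.
echo -n 'admin' | base64
# prints: YWRtaW4=
echo -n 'supersecret' | base64
# prints: c3VwZXJzZWNyZXQ=
echo 'YWRtaW4=' | base64 -d
# prints: admin
```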



&lt;p&gt;&lt;strong&gt;Create PushSecret Definition&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;push-secret.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-secrets.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PushSecret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pushsecret-example&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# Replace existing secrets in provider&lt;/span&gt;
  &lt;span class="na"&gt;updatePolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Replace&lt;/span&gt;
  &lt;span class="c1"&gt;# Delete provider secret when PushSecret is deleted&lt;/span&gt;
  &lt;span class="na"&gt;deletionPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Delete&lt;/span&gt;
  &lt;span class="c1"&gt;# Resync interval&lt;/span&gt;
  &lt;span class="na"&gt;refreshInterval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
  &lt;span class="c1"&gt;# SecretStore to push secrets to&lt;/span&gt;
  &lt;span class="na"&gt;secretStoreRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;target-cluster&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SecretStore&lt;/span&gt;
  &lt;span class="c1"&gt;# Target Secret&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-secret&lt;/span&gt;  &lt;span class="c1"&gt;# Source cluster Secret name&lt;/span&gt;
  &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;secretKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;username&lt;/span&gt;  &lt;span class="c1"&gt;# Source cluster Secret key&lt;/span&gt;
        &lt;span class="na"&gt;remoteRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;remoteKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-secret-copy&lt;/span&gt;  &lt;span class="c1"&gt;# Target cluster Secret name&lt;/span&gt;
          &lt;span class="na"&gt;property&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;username-copy&lt;/span&gt;    &lt;span class="c1"&gt;# Target cluster Secret key&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;secretKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;  &lt;span class="c1"&gt;# Source cluster Secret key&lt;/span&gt;
        &lt;span class="na"&gt;remoteRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;remoteKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-secret-copy&lt;/span&gt;  &lt;span class="c1"&gt;# Target cluster Secret name&lt;/span&gt;
          &lt;span class="na"&gt;property&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password-copy&lt;/span&gt;    &lt;span class="c1"&gt;# Target cluster Secret key&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the PushSecret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; push-secret.yaml

&lt;span class="c"&gt;# Check status&lt;/span&gt;
kubectl describe pushsecret pushsecret-example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verifying Operation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Check Secret in Target Cluster&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Switch to target cluster&lt;/span&gt;
kubectl config use-context k3d-target-cluster

&lt;span class="c"&gt;# Check secret&lt;/span&gt;
kubectl describe secret my-secret-copy

&lt;span class="c"&gt;# Expected output:&lt;/span&gt;
Name:         my-secret-copy
Namespace:    default
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;

Type:  Opaque

Data
&lt;span class="o"&gt;====&lt;/span&gt;
password-copy:  11 bytes
username-copy:  5 bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Verify Secret Contents&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Using kubectl-view-secret plugin&lt;/span&gt;
kubectl-view-secret my-secret-copy &lt;span class="nt"&gt;--all&lt;/span&gt;

&lt;span class="c"&gt;# Or directly using base64 decode&lt;/span&gt;
kubectl get secret my-secret-copy &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.data.username-copy}'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;
kubectl get secret my-secret-copy &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.data.password-copy}'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;password-copy='supersecret'
username-copy='admin'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
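&lt;p&gt;The &lt;code&gt;jsonpath&lt;/code&gt; + &lt;code&gt;base64 -d&lt;/code&gt; pipeline works because Kubernetes stores Secret &lt;code&gt;data&lt;/code&gt; values base64-encoded. Here is a quick local sanity check of the decode step, using the &lt;code&gt;admin&lt;/code&gt; value from earlier (no cluster required):&lt;/p&gt;

```shell
# Kubernetes stores Secret data base64-encoded; round-trip the decode locally.
encoded=$(printf 'admin' | base64)
echo "$encoded"                      # YWRtaW4=
printf '%s' "$encoded" | base64 -d   # prints: admin
```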



&lt;p&gt;&lt;strong&gt;Test Automatic Synchronization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Update the secret in the source cluster and verify it's reflected in the target cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Switch to source cluster&lt;/span&gt;
kubectl config use-context k3d-source-cluster

&lt;span class="c"&gt;# Update secret&lt;/span&gt;
kubectl create secret generic my-secret &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;newadmin &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;newsecret &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dry-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;client &lt;span class="nt"&gt;-o&lt;/span&gt; yaml | kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; -

&lt;span class="c"&gt;# Switch to target cluster and verify&lt;/span&gt;
kubectl config use-context k3d-target-cluster
kubectl-view-secret my-secret-copy &lt;span class="nt"&gt;--all&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Cleanup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Switch to source cluster&lt;/span&gt;
kubectl config use-context k3d-source-cluster

&lt;span class="c"&gt;# Delete PushSecret&lt;/span&gt;
kubectl delete pushsecret pushsecret-example

&lt;span class="c"&gt;# Verify secret deletion in target cluster&lt;/span&gt;
kubectl config use-context k3d-target-cluster
kubectl get secret my-secret-copy
&lt;span class="c"&gt;# Should show: "Error from server (NotFound): secrets "my-secret-copy" not found"&lt;/span&gt;

&lt;span class="c"&gt;# Delete clusters&lt;/span&gt;
k3d cluster delete &lt;span class="nt"&gt;--config&lt;/span&gt; source-cluster.yaml
k3d cluster delete &lt;span class="nt"&gt;--config&lt;/span&gt; target-cluster.yaml

&lt;span class="c"&gt;# Delete shared network&lt;/span&gt;
docker network &lt;span class="nb"&gt;rm &lt;/span&gt;shared-net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we've explored how to use external-secrets' PushSecret feature to share secrets between Kubernetes clusters. In production, this approach can make multi-cluster secret management noticeably simpler: secrets are maintained in one source cluster and propagated automatically to the others.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://external-secrets.io/" rel="noopener noreferrer"&gt;external-secrets Official Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://external-secrets.io/v0.9.0/api/pushsecret/" rel="noopener noreferrer"&gt;PushSecret API Reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noopener noreferrer"&gt;Kubernetes Secrets&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>security</category>
      <category>devops</category>
      <category>externalsecrets</category>
    </item>
    <item>
      <title>Sync Kubernetes Secrets to AWS Secrets Manager Using external-secrets PushSecret</title>
      <dc:creator>suin</dc:creator>
      <pubDate>Thu, 14 Nov 2024 01:25:54 +0000</pubDate>
      <link>https://forem.com/suin/sync-kubernetes-secrets-to-aws-secrets-manager-using-external-secrets-pushsecret-4i3f</link>
      <guid>https://forem.com/suin/sync-kubernetes-secrets-to-aws-secrets-manager-using-external-secrets-pushsecret-4i3f</guid>
      <description>&lt;p&gt;When managing sensitive information in Kubernetes, you can use an operator called &lt;a href="https://external-secrets.io/" rel="noopener noreferrer"&gt;external-secrets&lt;/a&gt; to integrate with external secret providers like AWS Secrets Manager.&lt;/p&gt;

&lt;p&gt;While the common usage pattern is to synchronize sensitive information stored in AWS Secrets Manager into Kubernetes Secrets, this article introduces &lt;code&gt;PushSecret&lt;/code&gt;, which enables the reverse: pushing Kubernetes Secrets to AWS Secrets Manager.&lt;/p&gt;

&lt;p&gt;Let's explore the basic usage of &lt;code&gt;PushSecret&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes cluster&lt;/li&gt;
&lt;li&gt;AWS account&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installing external-secrets
&lt;/h2&gt;

&lt;p&gt;First, install external-secrets using Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add external-secrets https://charts.external-secrets.io

helm &lt;span class="nb"&gt;install &lt;/span&gt;external-secrets &lt;span class="se"&gt;\&lt;/span&gt;
    external-secrets/external-secrets &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-n&lt;/span&gt; external-secrets &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--create-namespace&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting Up AWS Credentials
&lt;/h2&gt;

&lt;p&gt;Set AWS credentials as environment variables and verify they are configured correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;xxxx
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;xxxx
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ap-northeast-1

&lt;span class="c"&gt;# Verify credentials&lt;/span&gt;
aws sts get-caller-identity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, store these credentials as a Kubernetes Secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create secret generic aws-credentials &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;access-key-id&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;secret-access-key&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring SecretStore
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;SecretStore&lt;/code&gt; to connect to AWS Secrets Manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-secrets.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SecretStore&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-secretsmanager&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;aws&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SecretsManager&lt;/span&gt;
      &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ap-northeast-1&lt;/span&gt;
      &lt;span class="na"&gt;auth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;secretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;accessKeyIDSecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-credentials&lt;/span&gt;
            &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;access-key-id&lt;/span&gt;
          &lt;span class="na"&gt;secretAccessKeySecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-credentials&lt;/span&gt;
            &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret-access-key&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; secret-store.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring PushSecret
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;PushSecret&lt;/code&gt; to synchronize Kubernetes Secrets to AWS Secrets Manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-secrets.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PushSecret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pushsecret-example&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# Overwrite existing secrets in provider during sync&lt;/span&gt;
  &lt;span class="na"&gt;updatePolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Replace&lt;/span&gt;
  &lt;span class="c1"&gt;# Delete provider secrets when PushSecret is deleted&lt;/span&gt;
  &lt;span class="na"&gt;deletionPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Delete&lt;/span&gt;
  &lt;span class="c1"&gt;# Resync interval&lt;/span&gt;
  &lt;span class="na"&gt;refreshInterval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
  &lt;span class="c1"&gt;# SecretStore to push secrets to&lt;/span&gt;
  &lt;span class="na"&gt;secretStoreRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-secretsmanager&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SecretStore&lt;/span&gt;
  &lt;span class="c1"&gt;# Target Secret for synchronization&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-secret&lt;/span&gt;
  &lt;span class="c1"&gt;# Key configuration for synchronization&lt;/span&gt;
  &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;secretKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;foo&lt;/span&gt;  &lt;span class="c1"&gt;# Secret key&lt;/span&gt;
        &lt;span class="na"&gt;remoteRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;remoteKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-secret-foo&lt;/span&gt;  &lt;span class="c1"&gt;# AWS Secrets Manager secret name&lt;/span&gt;
      &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;secretPushFormat&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; push-secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verification
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Creating a Secret
&lt;/h3&gt;

&lt;p&gt;First, create the target Secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create secret generic my-secret &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;foo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;bar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that it has been synchronized to AWS Secrets Manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws secretsmanager get-secret-value &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--secret-id&lt;/span&gt; my-secret-foo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If successful, you should see a response like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ARN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:secretsmanager:ap-northeast-1:000000000000:secret:my-secret-foo-rUBCkr"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my-secret-foo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"VersionId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"00000000-0000-0000-0000-000000000001"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"SecretString"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bar"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"VersionStages"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"AWSCURRENT"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"CreatedDate"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-11-14T09:50:34.787000+09:00"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Updating the Secret
&lt;/h3&gt;

&lt;p&gt;Let's update the Secret value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create secret generic my-secret &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;foo&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;baz &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dry-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;client &lt;span class="nt"&gt;-o&lt;/span&gt; yaml | kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the value has been updated in AWS Secrets Manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws secretsmanager get-secret-value &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--secret-id&lt;/span&gt; my-secret-foo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can confirm that &lt;code&gt;SecretString&lt;/code&gt; has changed from &lt;code&gt;bar&lt;/code&gt; to &lt;code&gt;baz&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ARN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:secretsmanager:ap-northeast-1:000000000000:secret:my-secret-foo-rUBCkr"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my-secret-foo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"VersionId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"00000000-0000-0000-0000-000000000002"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"SecretString"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"baz"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"VersionStages"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"AWSCURRENT"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"CreatedDate"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-11-14T10:03:41.913000+09:00"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Verifying Deletion Behavior
&lt;/h3&gt;

&lt;p&gt;Delete the Secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete secret my-secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, the secret still exists in AWS Secrets Manager: deleting the source Secret alone does not remove the pushed copy.&lt;/p&gt;

&lt;p&gt;Next, delete the PushSecret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete pushsecret pushsecret-example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This operation will also delete the secret from AWS Secrets Manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws secretsmanager list-secrets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"SecretList"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;p&gt;Deleting the &lt;code&gt;PushSecret&lt;/code&gt; does not erase the secret outright: AWS Secrets Manager keeps &lt;code&gt;my-secret-foo&lt;/code&gt; in a "scheduled for deletion" state for a recovery window, which is also why it no longer shows up in &lt;code&gt;list-secrets&lt;/code&gt;. To delete it from AWS Secrets Manager immediately:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws secretsmanager delete-secret &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--secret-id&lt;/span&gt; my-secret-foo &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--force-delete-without-recovery&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Delete external-secrets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm uninstall external-secrets &lt;span class="nt"&gt;-n&lt;/span&gt; external-secrets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Delete AWS credentials:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete secret aws-credentials
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;We've seen how external-secrets' &lt;code&gt;PushSecret&lt;/code&gt; can synchronize Kubernetes Secrets to AWS Secrets Manager. This feature can be useful in several scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sharing sensitive information managed in Kubernetes with other AWS services&lt;/li&gt;
&lt;li&gt;Sharing Secrets between Kubernetes clusters via AWS Secrets Manager&lt;/li&gt;
&lt;li&gt;Backing up Kubernetes Secrets to AWS Secrets Manager&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While this example used external-secrets with AWS Secrets Manager, there are various other providers available for SecretStore. One particularly interesting provider is Kubernetes itself. I'm personally interested in trying out direct Secret synchronization between Kubernetes clusters and plan to write about that experience in a future article.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://external-secrets.io/" rel="noopener noreferrer"&gt;external-secrets Official Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://external-secrets.io/latest/api/pushsecret/" rel="noopener noreferrer"&gt;PushSecret Reference&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>externalsecrets</category>
      <category>secretsmanager</category>
      <category>aws</category>
    </item>
    <item>
      <title>Securing external-dns: Encrypting TXT Registry Records</title>
      <dc:creator>suin</dc:creator>
      <pubDate>Wed, 06 Nov 2024 07:55:40 +0000</pubDate>
      <link>https://forem.com/suin/securing-external-dns-encrypting-txt-registry-records-11m4</link>
      <guid>https://forem.com/suin/securing-external-dns-encrypting-txt-registry-records-11m4</guid>
      <description>&lt;p&gt;In my &lt;a href="https://dev.to/suin/automated-dns-record-management-for-kubernetes-resources-using-external-dns-and-aws-route53-cnm"&gt;previous article&lt;/a&gt;, we explored how to automate DNS record management using external-dns with AWS Route53. We briefly mentioned that management information stored in TXT records is publicly visible. Today, let's dive into how to secure this information using external-dns's TXT record encryption feature.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding TXT Registry in external-dns
&lt;/h2&gt;

&lt;p&gt;external-dns stores management information in what's called a Registry. While multiple Registry options exist, including DynamoDB and AWS Service Discovery, TXT Registry is particularly interesting because it avoids cloud vendor lock-in. However, since TXT records are publicly accessible, encrypting them adds an important security layer while maintaining the benefits of vendor independence.&lt;/p&gt;
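&lt;p&gt;For context, the ownership metadata external-dns writes into a TXT registry record looks roughly like this when unencrypted (illustrative values):&lt;/p&gt;

```
"heritage=external-dns,external-dns/owner=example-owner-id-123,external-dns/resource=crd/default/my-endpoint"
```

&lt;p&gt;Anyone who can query the zone can read the owner ID and resource names from such a record; with encryption enabled, the same payload is stored as opaque ciphertext.&lt;/p&gt;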

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A configured Kubernetes cluster&lt;/li&gt;
&lt;li&gt;Required command-line tools installed:

&lt;ul&gt;
&lt;li&gt;kubectl&lt;/li&gt;
&lt;li&gt;helm&lt;/li&gt;
&lt;li&gt;aws&lt;/li&gt;
&lt;li&gt;dig&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;AWS account access and secret keys&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implementation Steps
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Creating an AWS Route53 Hosted Zone
&lt;/h3&gt;

&lt;p&gt;First, set your AWS credentials as environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;AKIAXXXXXXXXXXXXXXXX
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify your AWS credentials:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws sts get-caller-identity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a Hosted Zone:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws route53 create-hosted-zone &lt;span class="nt"&gt;--name&lt;/span&gt; example-tutorial.com &lt;span class="nt"&gt;--caller-reference&lt;/span&gt; external-dns-tutorial-&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%s&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The caller-reference must be unique for each creation attempt. We're using a timestamp to ensure uniqueness.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Installing external-dns with TXT Encryption
&lt;/h3&gt;

&lt;p&gt;We'll use Bitnami's Helm chart as it offers more configuration options and simplifies AWS Route53 setup compared to the kubernetes-sigs chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;external-dns &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;provider&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;aws &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; aws.zoneType&lt;span class="o"&gt;=&lt;/span&gt;public &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; aws.credentials.accessKey&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; aws.credentials.secretKey&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;txtOwnerId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;example-owner-id-123 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="s2"&gt;"domainFilters[0]=example-tutorial.com"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;policy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;sync&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="s2"&gt;"sources[0]=crd"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; crd.create&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; crd.apiversion&lt;span class="o"&gt;=&lt;/span&gt;externaldns.k8s.io/v1alpha1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; crd.kind&lt;span class="o"&gt;=&lt;/span&gt;DNSEndpoint &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; txtEncrypt.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; txtEncrypt.aesKey&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  oci://registry-1.docker.io/bitnamicharts/external-dns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Installation Options Explained
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Option&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;provider=aws&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Specifies AWS Route53 as the provider&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;aws.zoneType=public&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Specifies the use of a public zone&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;aws.credentials.accessKey&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;AWS access key&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;aws.credentials.secretKey&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;AWS secret key&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;txtOwnerId&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;TXT record owner ID (can be any value)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;domainFilters[0]&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Domain to monitor&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;policy=sync&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;DNS record synchronization policy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;sources[0]=crd&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Enables CRD usage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;crd.create=true&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Creates the CRD&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;crd.apiversion&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;CRD API version&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;crd.kind&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;CRD kind&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;txtEncrypt.enabled=true&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Enables TXT record encryption&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;txtEncrypt.aesKey&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;AES key for encryption (auto-generated if empty)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
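&lt;p&gt;If you prefer not to repeat a long list of &lt;code&gt;--set&lt;/code&gt; flags, the same configuration can live in a values file. Below is a minimal sketch assuming the Bitnami chart keys from the table above; the file name is arbitrary, and the AWS credentials are deliberately left out so they can still be passed via &lt;code&gt;--set&lt;/code&gt; at install time:&lt;/p&gt;

```shell
# Sketch: the --set flags above expressed as a Helm values file for the
# Bitnami external-dns chart. Credentials are intentionally omitted here.
cat > external-dns-values.yaml <<'EOF'
provider: aws
aws:
  zoneType: public
txtOwnerId: example-owner-id-123
domainFilters:
  - example-tutorial.com
policy: sync
sources:
  - crd
crd:
  create: true
  apiversion: externaldns.k8s.io/v1alpha1
  kind: DNSEndpoint
txtEncrypt:
  enabled: true
  aesKey: ""
EOF

# Then install with the credentials supplied separately:
#   helm install external-dns -f external-dns-values.yaml \
#     --set aws.credentials.accessKey="$AWS_ACCESS_KEY_ID" \
#     --set aws.credentials.secretKey="$AWS_SECRET_ACCESS_KEY" \
#     oci://registry-1.docker.io/bitnamicharts/external-dns
```

&lt;p&gt;A values file also makes later &lt;code&gt;helm upgrade&lt;/code&gt; runs reproducible, since the full configuration is version-controllable.&lt;/p&gt;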

&lt;p&gt;After installation, verify that external-dns is running with encryption enabled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;-l&lt;/span&gt; app.kubernetes.io/name&lt;span class="o"&gt;=&lt;/span&gt;external-dns &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see logs indicating encrypted TXT records are enabled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;time="2024-11-06T07:03:47Z" level=info msg="config: {...TXTEncryptEnabled:true...}"
time="2024-11-06T07:03:47Z" level=info msg="Instantiating new Kubernetes client"
time="2024-11-06T07:03:47Z" level=info msg="Using inCluster-config based on serviceaccount-token"
time="2024-11-06T07:03:47Z" level=info msg="Created Kubernetes client https://10.43.0.1:443"
time="2024-11-06T07:03:49Z" level=info msg="Applying provider record filter for domains: [example-tutorial.com. .example-tutorial.com.]"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
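&lt;p&gt;The startup config line is long, so a small filter makes the flag easier to spot. A sketch using the sample log line above in place of live output (in practice, pipe &lt;code&gt;kubectl logs&lt;/code&gt; into the same &lt;code&gt;grep&lt;/code&gt;):&lt;/p&gt;

```shell
# Sketch: extract just the TXT-encryption flag from the config log line.
# In practice, pipe `kubectl logs -l app.kubernetes.io/name=external-dns`
# into grep instead of using this captured sample line.
log_line='time="2024-11-06T07:03:47Z" level=info msg="config: {...TXTEncryptEnabled:true...}"'
echo "$log_line" | grep -o 'TXTEncryptEnabled:[a-z]*'
# prints: TXTEncryptEnabled:true
```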



&lt;p&gt;Verify that the AES encryption key was generated and stored in the secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get secret external-dns &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the &lt;code&gt;txt_aes_encryption_key&lt;/code&gt; field in the secret data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;txt_aes_encryption_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Q2NCSUF6c2I1N215SGY4RWZtWmZvWm1keUl2SHBsTDY=&lt;/span&gt;  &lt;span class="c1"&gt;# Base64 encoded AES key&lt;/span&gt;
&lt;span class="c1"&gt;# ... other fields omitted for brevity&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
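&lt;p&gt;Beyond confirming the field exists, you can check that the generated key has a valid AES length. A quick local sketch using the sample value from the secret above (a 32-byte key corresponds to AES-256):&lt;/p&gt;

```shell
# Sketch: base64-decode the key from the secret and count its bytes.
# The sample key from the YAML above is used here; to fetch it live:
#   kubectl get secret external-dns \
#     -o jsonpath='{.data.txt_aes_encryption_key}' | base64 -d | wc -c
key_b64='Q2NCSUF6c2I1N215SGY4RWZtWmZvWm1keUl2SHBsTDY='
echo "$key_b64" | base64 -d | wc -c
# prints 32 (bytes), i.e. an AES-256 key
```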



&lt;h3&gt;
  
  
  3. Creating DNS Records
&lt;/h3&gt;

&lt;p&gt;Let's verify that external-dns creates encrypted DNS records. We'll use the DNSEndpoint custom resource, since it lets us create DNS records directly without deploying any actual workloads such as Services or Ingresses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# test.example-tutorial.com.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;externaldns.k8s.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DNSEndpoint&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test.example-tutorial.com&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;dnsName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test.example-tutorial.com&lt;/span&gt;
    &lt;span class="na"&gt;recordTTL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;180&lt;/span&gt;
    &lt;span class="na"&gt;recordType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;A&lt;/span&gt;
    &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;127.0.0.1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; test.example-tutorial.com.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a moment, external-dns will create the DNS records. Check the logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;time="2024-11-06T07:12:52Z" level=info msg="Desired change: CREATE a-test.example-tutorial.com TXT" profile=default zoneID=/hostedzone/Z00418233KGBJI8AZJFPR zoneName=example-tutorial.com.
time="2024-11-06T07:12:52Z" level=info msg="Desired change: CREATE test.example-tutorial.com A" profile=default zoneID=/hostedzone/Z00418233KGBJI8AZJFPR zoneName=example-tutorial.com.
time="2024-11-06T07:12:52Z" level=info msg="2 record(s) were successfully updated" profile=default zoneID=/hostedzone/Z00418233KGBJI8AZJFPR zoneName=example-tutorial.com.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the DNS records using AWS CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws route53 list-resource-record-sets &lt;span class="nt"&gt;--hosted-zone-id&lt;/span&gt; Z00418233KGBJI8AZJFPR
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see both the A record and the encrypted TXT record:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ResourceRecordSets"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"test.example-tutorial.com."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"TTL"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;180&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"ResourceRecords"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="nl"&gt;"Value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"a-test.example-tutorial.com."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"TXT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"TTL"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"ResourceRecords"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="nl"&gt;"Value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;YwPTDxmRgtKjryuSqYrqA35DoRkFw94ZxoojvZ9goHiyXbd8zYS8wBqS7t3ZtZoqREqDDaLtLcB0wbzTpw9n1+HxgGrJc795b4ISnJXRI03+sJ+DgN71dU7hCCyoPx25w/jYbOX3/zP DP59BmZaAly/OLmCEcDTW7dl697qdj4lsNHBrr+6Z1lAFKHAKfX3pM9w6RFGmpGl4WULtAA==&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With TXT encryption enabled (&lt;code&gt;txtEncrypt.enabled=true&lt;/code&gt;), the TXT record content is encrypted with AES. It still encodes the same management information (heritage, owner, and resource), but that information can no longer be read by anyone who queries the record.&lt;/p&gt;
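&lt;p&gt;To see what the encryption is protecting, it helps to recall the plaintext registry format. A sketch of the value external-dns would store with encryption disabled; the owner matches the &lt;code&gt;txtOwnerId&lt;/code&gt; set at install time, while the resource path here is illustrative, not taken from a live record:&lt;/p&gt;

```shell
# Sketch: the unencrypted TXT registry format, following external-dns's
# heritage/owner/resource convention. The resource path is a hypothetical
# example of what the registry records for a managed endpoint.
plain='heritage=external-dns,external-dns/owner=example-owner-id-123,external-dns/resource=crd/default/test.example-tutorial.com'
echo "$plain"
# With txtEncrypt.enabled=true, this entire string is AES-encrypted and
# base64-encoded before being written to Route53, which is why the TXT
# value in the AWS CLI output above appears as an opaque blob.
```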

&lt;h3&gt;
  
  
  Cleanup
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Removing DNS Records
&lt;/h4&gt;

&lt;p&gt;Delete the CRD resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; test.example-tutorial.com.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;external-dns will remove the DNS records shortly after. Check the logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;time="2024-11-06T05:48:24Z" level=info msg="Desired change:  DELETE  a-test.example-tutorial.com TXT" profile=default zoneID=/hostedzone/Z08033563HFN15GSXJ766 zoneName=example-tutorial.com.
time="2024-11-06T05:48:24Z" level=info msg="Desired change:  DELETE  test.example-tutorial.com A" profile=default zoneID=/hostedzone/Z08033563HFN15GSXJ766 zoneName=example-tutorial.com.
time="2024-11-06T05:48:24Z" level=info msg="Desired change:  DELETE  test.example-tutorial.com TXT" profile=default zoneID=/hostedzone/Z08033563HFN15GSXJ766 zoneName=example-tutorial.com.
time="2024-11-06T05:48:24Z" level=info msg="3 record(s) were successfully updated" profile=default zoneID=/hostedzone/Z08033563HFN15GSXJ766 zoneName=example-tutorial.com.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Removing external-dns
&lt;/h4&gt;

&lt;p&gt;Delete the Helm release:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm uninstall external-dns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  3. Removing the Hosted Zone
&lt;/h4&gt;

&lt;p&gt;First, list the zones:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws route53 list-hosted-zones
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then delete the zone using its ID. Note that Route53 only allows this once the zone contains nothing but its default NS and SOA records, so make sure the records created earlier have already been removed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws route53 delete-hosted-zone &lt;span class="nt"&gt;--id&lt;/span&gt; Z00418233KGBJI8AZJFPR
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we've explored how to secure external-dns management information by implementing TXT record encryption. While TXT Registry offers a vendor-independent way to store management information, encryption adds an essential security layer. By following these steps, you can maintain the benefits of TXT Registry while ensuring your management information remains secure.&lt;/p&gt;

&lt;p&gt;When combined with the setup described in the &lt;a href="https://dev.to/suin/automated-dns-record-management-for-kubernetes-resources-using-external-dns-and-aws-route53-cnm"&gt;previous article&lt;/a&gt;, you'll have a robust and secure DNS automation solution. Give it a try in your environment!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>security</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
