<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Raul Castellanos</title>
    <description>The latest articles on Forem by Raul Castellanos (@rcastellanosm).</description>
    <link>https://forem.com/rcastellanosm</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F973494%2Fe2ff5dbe-6f14-4324-a82f-27bc28f256cb.jpeg</url>
      <title>Forem: Raul Castellanos</title>
      <link>https://forem.com/rcastellanosm</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rcastellanosm"/>
    <language>en</language>
    <item>
      <title>Managing database migrations safely in high replicated k8s deployment.</title>
      <dc:creator>Raul Castellanos</dc:creator>
      <pubDate>Sun, 13 Nov 2022 12:49:04 +0000</pubDate>
      <link>https://forem.com/rcastellanosm/managing-database-migrations-safely-in-high-replicated-k8s-deployment-4cbo</link>
      <guid>https://forem.com/rcastellanosm/managing-database-migrations-safely-in-high-replicated-k8s-deployment-4cbo</guid>
      <description>&lt;p&gt;So, you want to run migrations in a cloud native application running on a &lt;code&gt;Kubernetes&lt;/code&gt; cluster, and don't die trying huh!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Well you're in the right place!!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After breaking a few applications with database migrations in multi-replica, concurrent deployment processes, I want to give you some advice, based on my own mistakes, on how to run your migrations safely, with native &lt;code&gt;k8s&lt;/code&gt; specs and without hacks of any kind (no &lt;code&gt;helm&lt;/code&gt;, no &lt;code&gt;external deployers&lt;/code&gt;, just a pure, plain, well-orchestrated &lt;code&gt;k8s&lt;/code&gt; process).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;It's a common story: modern applications evolve fast, new features arrive from product to satisfy the end user, and almost every new deploy needs to alter the database in some way. You have many tools to manage the execution of migrations against your database, BUT not when they occur.&lt;/p&gt;

&lt;p&gt;If you have an application pod with, let's say, 4 replicas, and you deploy it, all 4 will try to run the migrations at the same time, potentially causing data corruption and data loss, and nobody wants that.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to run migrations &lt;code&gt;(the workflow)&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;In the &lt;code&gt;old-way&lt;/code&gt; of running migrations we used to have something like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Put the application in &lt;code&gt;maintenance mode&lt;/code&gt; (divert traffic to a special page)&lt;/li&gt;
&lt;li&gt;Run database migrations&lt;/li&gt;
&lt;li&gt;Deploy new base code&lt;/li&gt;
&lt;li&gt;Disable maintenance mode on application&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Obviously, this isn't an acceptable approach if you want to achieve &lt;code&gt;zero-downtime&lt;/code&gt; deployments in today's always-on world. We need &lt;em&gt;(at least)&lt;/em&gt; the following step to ensure that migrations and the application run safely:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run migrations while the old version of the application is still running, and do the rolling update "only" once the migrations have run successfully.&lt;/li&gt;
&lt;/ul&gt;
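&lt;p&gt;From the pipeline's point of view, the step above can be sketched as a small deploy script. This is only a sketch: the manifest paths and the deployment name are assumptions, and later in this article the waiting is moved into the cluster itself via an init container.&lt;/p&gt;

```shell
# Sketch: run migrations first, roll out only after they succeed
# (manifest paths and the deployment name are assumptions)
kubectl apply -f k8s/migrations-job.yaml
kubectl wait --for=condition=complete --timeout=600s job/availability-api-migrations
kubectl apply -f k8s/deployment.yaml
kubectl rollout status deployment/availability-api
```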

&lt;p&gt;Something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3v1I928q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1668344068616/A3UYB5av5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3v1I928q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1668344068616/A3UYB5av5.jpg" alt="Pipeline + cluster proposals-2.jpg" width="880" height="962"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;code&gt;Job&lt;/code&gt;, &lt;code&gt;InitContainer&lt;/code&gt; and &lt;code&gt;RollingUpdates&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;So, after choosing our strategy to run &lt;code&gt;migrations&lt;/code&gt; on &lt;code&gt;k8s&lt;/code&gt;, we need to write our manifests to accomplish the defined workflow.&lt;/p&gt;

&lt;p&gt;First, the migrations job itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Job&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;availability-api-migrations&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ttlSecondsAfterFinished&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
  &lt;span class="na"&gt;backoffLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;migrations&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;availability-api-migrations&lt;/span&gt;
          &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/bin/sh'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-c'&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;bin/console&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;doctrine:migrations:migrate&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--no-interaction&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-v'&lt;/span&gt;
          &lt;span class="na"&gt;envFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;secretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;protected-credentials-from-vault&lt;/span&gt;
      &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Never&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, in the deployment manifest of the application, we need to define 2 very important things to make our workflow work as expected.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The rolling update strategy&lt;/li&gt;
&lt;li&gt;The init container and command that prevent the deployment from starting until migrations are done.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;...&lt;/span&gt;
  &lt;span class="s"&gt;strategy&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;rollingUpdate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;maxSurge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;25%&lt;/span&gt;
      &lt;span class="na"&gt;maxUnavailable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;span class="err"&gt;   &lt;/span&gt;&lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;initContainers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;wait-for-migrations-job&lt;/span&gt;
            &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bitnami/kubectl:1.25&lt;/span&gt;
            &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;kubectl'&lt;/span&gt;
            &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;wait'&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--for=condition=complete'&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--timeout=600s'&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;job/availability-api-migrations'&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What the snippet above means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;RollingUpdate&lt;/code&gt; strategy:

&lt;ul&gt;
&lt;li&gt;The rolling update allows us to define how many pod replicas we want to update with the new code at a time (you can also choose other deployment strategies like &lt;code&gt;recreate&lt;/code&gt;, &lt;code&gt;blue/green&lt;/code&gt; or &lt;code&gt;canary&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;InitContainer&lt;/code&gt; migration job watcher

&lt;ul&gt;
&lt;li&gt;Now we want to &lt;code&gt;forbid&lt;/code&gt;, in some way, the rollout of the deployment from beginning until the migrations job has finished with a completed status. Fortunately, the &lt;code&gt;kubectl&lt;/code&gt; CLI allows us to query and wait on the status of a &lt;code&gt;k8s&lt;/code&gt; resource; we take advantage of that to "block", in some way, the deployment.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$kubectl&lt;/span&gt; &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for-condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;complete&lt;/span&gt; &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;600 job/availability-api-migrations

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We wait for the job &lt;code&gt;availability-api-migrations&lt;/code&gt; to reach the complete status, with a &lt;code&gt;timeout&lt;/code&gt; of ten minutes; kubectl will keep polling until the timeout is reached. Obviously, if the migration job finishes in a few milliseconds, the wait loop ends immediately and allows the deploy to begin; otherwise, the deploy will fail. The timeout is the MAX time allowed to wait for the job to reach the complete status.&lt;/p&gt;
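&lt;p&gt;One caveat: &lt;code&gt;kubectl&lt;/code&gt; inside the init container talks to the API server using the pod's service account, so that account needs permission to read Jobs. A minimal RBAC sketch (resource names here are assumptions):&lt;/p&gt;

```yaml
# Allow the application's service account to read Job status (names are assumptions)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-reader
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-job-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: job-reader
subjects:
  - kind: ServiceAccount
    name: default
```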

&lt;p&gt;At least with this we can ensure that data consistency is preserved, but it's important to follow some guidelines on how to write and deploy migrations in your applications; we'll cover that below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Safety recommendations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Always write your migrations with fast execution in mind; if you expect a migration to take more than 5 minutes, consider asking for a maintenance window to restrict traffic to the application.&lt;/li&gt;
&lt;li&gt;Design your migrations to stay backward compatible with the code currently running in production.

&lt;ul&gt;
&lt;li&gt;For example, do not add a NOT NULL column without a DEFAULT value to an existing table.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;If you need to add one column and delete another, the best strategy is to do 2 separate deployments.

&lt;ul&gt;
&lt;li&gt;Run the first deploy adding the column, and validate that everything runs smoothly in production.&lt;/li&gt;
&lt;li&gt;Then run a second deployment that only deletes the old column from the database.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
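&lt;p&gt;The two-deployment recommendation above (sometimes called expand/contract) can be sketched in SQL; the table and column names here are purely hypothetical:&lt;/p&gt;

```sql
-- Deployment 1 (expand): add the new column in a backward-compatible way
ALTER TABLE booking ADD COLUMN guest_email VARCHAR(255) DEFAULT NULL;

-- Deployment 2 (contract): once the new code is verified in production,
-- drop the column only the old code used
ALTER TABLE booking DROP COLUMN legacy_email;
```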

&lt;h2&gt;
  
  
  Support me
&lt;/h2&gt;

&lt;p&gt;If you like what you just read and find it valuable, you can buy me a coffee by clicking the image below; it would be appreciated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/rcastellanosm"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Rb9xKs4G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" width="545" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>php</category>
      <category>migrations</category>
      <category>kubernetes</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How to build a CI/CD workflow with Skaffold for your application (Part III)</title>
      <dc:creator>Raul Castellanos</dc:creator>
      <pubDate>Wed, 09 Nov 2022 09:06:04 +0000</pubDate>
      <link>https://forem.com/rcastellanosm/how-to-build-a-cicd-workflow-with-skaffold-for-your-application-part-iii-3625</link>
      <guid>https://forem.com/rcastellanosm/how-to-build-a-cicd-workflow-with-skaffold-for-your-application-part-iii-3625</guid>
      <description>&lt;h2&gt;
  
  
  Let's recap: &lt;code&gt;The Workflow&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;This is the &lt;code&gt;workflow&lt;/code&gt; so far:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;📣 You can check how to get to this point in the first two deliveries of the series.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ujGWKyom--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667561874111/Qhz2KHP07.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ujGWKyom--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667561874111/Qhz2KHP07.jpeg" alt="2fl6qCIhG.png.jpeg" width="880" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Gitlab &lt;code&gt;K8s agent&lt;/code&gt; and Security
&lt;/h2&gt;

&lt;p&gt;The main part of the integration of &lt;code&gt;k8s&lt;/code&gt; and &lt;code&gt;Gitlab&lt;/code&gt; is the &lt;code&gt;Gitlab K8s Agent&lt;/code&gt;, which is, in my experience, the best and easiest way I have found to integrate K8s with a DevOps platform like Gitlab.&lt;/p&gt;

&lt;p&gt;Let's recap some steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need to add an Agent and then install a helm chart into your cluster to allow secure communication between the two.&lt;/li&gt;
&lt;li&gt;The agent can be configured in 2 ways:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;CI_ACCESS&lt;/code&gt;: Allows access from the project repository pipeline to the cluster; you are then in charge of managing how to deploy to the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GITOPS_ACCESS&lt;/code&gt;: This allows a full gitops flow, like &lt;code&gt;ArgoCD&lt;/code&gt; for example, updating your cluster in a pull-based way in sync with the main branch of the repository.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In my case I use the first one, &lt;code&gt;CI_ACCESS&lt;/code&gt;, since I want to manage the whole process in a more granular way with &lt;code&gt;skaffold&lt;/code&gt;, so my configuration is way simpler.&lt;/p&gt;

&lt;p&gt;I have 2 repositories in an application group, one for the micro service itself and one for the agent (the agent could also live in the micro service repository, but if you want more granular access, or to share the agent/cluster between applications of the same stack, this is the best way).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Hm3iUpBM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667982028996/VdcNVOaUU.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Hm3iUpBM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667982028996/VdcNVOaUU.png" alt="Screenshot 2022-11-09 at 09.17.42.png" width="880" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, in the K8s-agents repository, we only have the declarative config.yaml file for every agent that we want to create (for this example I have 2, one for lower-envs/runner and one for production, since they are 2 different clusters).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QPkarOsp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667982211419/lgoKLKteB.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QPkarOsp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667982211419/lgoKLKteB.png" alt="Screenshot 2022-11-09 at 09.23.17.png" width="432" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And in the config itself, I give access to all the projects that I want to use in the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LTrCrUUQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667982282906/c40e50lXX.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LTrCrUUQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667982282906/c40e50lXX.png" alt="Screenshot 2022-11-09 at 09.24.37.png" width="611" height="145"&gt;&lt;/a&gt;&lt;/p&gt;
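&lt;p&gt;The config shown in the screenshot follows the agent configuration format: one &lt;code&gt;config.yaml&lt;/code&gt; per agent, stored under &lt;code&gt;.gitlab/agents/&amp;lt;agent-name&amp;gt;/&lt;/code&gt; in the agents repository. A sketch (the agent and project names are assumptions):&lt;/p&gt;

```yaml
# .gitlab/agents/lower-envs/config.yaml (agent and project names are assumptions)
ci_access:
  projects:
    - id: my-group/availability-api
```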

&lt;p&gt;And last, but not least, we need to link the agent with the cluster. For that, you should go to the k8s project's Kubernetes Cluster menu, where you will see an interface with instructions on how to link agent and cluster via a helm chart to be installed in the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZF7Fojhb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667982484851/8zvk9zHf8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZF7Fojhb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667982484851/8zvk9zHf8.png" alt="Screenshot 2022-11-09 at 09.26.58.png" width="880" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ORHyPzsZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667982568682/jDSedEjlDe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ORHyPzsZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667982568682/jDSedEjlDe.png" alt="Screenshot 2022-11-09 at 09.27.05.png" width="880" height="620"&gt;&lt;/a&gt;&lt;/p&gt;
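&lt;p&gt;The instructions from that interface boil down to something like the following sketch (the release name and namespace are assumptions; the actual token and KAS address come from the GitLab UI):&lt;/p&gt;

```shell
# Sketch: install the GitLab agent chart (token and KAS address come from the UI)
helm repo add gitlab https://charts.gitlab.io
helm repo update
helm upgrade --install lower-envs gitlab/gitlab-agent \
  --namespace gitlab-agent --create-namespace \
  --set config.token="$AGENT_TOKEN" \
  --set config.kasAddress="wss://kas.gitlab.com"
```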

&lt;p&gt;After that, your cluster and gitlab instance are linked, and your applications can use the cluster for Kubernetes-executor runners and also for dynamic review environments (aka dynamic QA instances, so to speak).&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment and Safety Recommendations for &lt;code&gt;K8s Agents&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;To restrict access to your cluster, you can use impersonation. To specify impersonations, use the &lt;code&gt;access_as&lt;/code&gt; attribute in your Agent's configuration file and use K8s RBAC rules to manage impersonated account permissions.&lt;/p&gt;

&lt;p&gt;You can impersonate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Agent itself (default)&lt;/li&gt;
&lt;li&gt;The CI job that accesses the cluster&lt;/li&gt;
&lt;li&gt;A specific user or system account defined within the cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Impersonation gives some benefits in terms of security:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allows you to leverage your K8s authorisation capabilities to limit the permissions of what can be done with the CI/CD tunnel on your running cluster&lt;/li&gt;
&lt;li&gt;Lowers the risk of providing unlimited access to your K8s cluster with the CI/CD tunnel&lt;/li&gt;
&lt;li&gt;Segments fine-grained permissions with the CI/CD tunnel at the project or group level&lt;/li&gt;
&lt;li&gt;Controls permissions with the CI/CD tunnel at the user or service account level&lt;/li&gt;
&lt;/ul&gt;
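&lt;p&gt;As a sketch of what this looks like (the project path and binding name are assumptions), you could impersonate the CI job in the agent config and then scope its permissions with RBAC on the cluster side:&lt;/p&gt;

```yaml
# Agent config (sketch): impersonate the CI job instead of the agent itself
ci_access:
  projects:
    - id: my-group/availability-api
      access_as:
        ci_job: {}
---
# Cluster side (sketch): grant read-only access to impersonated CI jobs
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ci-job-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: Group
    name: gitlab:ci_job
    apiGroup: rbac.authorization.k8s.io
```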

&lt;h2&gt;
  
  
  Provisioning cluster with &lt;code&gt;terraform&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;As I said, the main goal of this tutorial is to use the same tooling for local development, the pipeline and deployment, but (there's always a but) we have 2 sets of terraform configuration. For local development I want, as a developer, as many of the observability tools that I have in production as possible, in case I need to test metrics, build dashboards on grafana, etc., but without the complexity of the production infrastructure architecture.&lt;/p&gt;

&lt;p&gt;So, in this case, we can bring back the application diagram from the first part as a recap of how our local development stack looks, and see how to achieve it with &lt;code&gt;terraform&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://www.plantuml.com/plantuml/png/RPDFSzem4CNl_XGg9p9juaiFcPxY6gRj55eFX4DFZ90Nq4H_t9L4OJhvxbqXrqGaXqpqPt_xtZxX1-Sv-g1LyKuQeK8BREzzvpwL9V8_Tplfzs4J7A2mneFnTyBgibFSHERM-LR9JLb_l6tYqMe-ApLt7f2ErhNLdJMHwMB_ObRz-hbwNC-g7vDbNJNJyKrHB4zKhTVJen-JW0iQy0CRrKeIDg9xnlgAppQObkDf_7JlgEBxlMDLroafk9VMi5g5A3kwON-9OOFq6E40wA11UpmHzuZ0j_9fHCj5kc7dAzBAC07evzpmNV9psKKoNifjb0QcpySwsMMlxVABoQgJ15S7BXNVI2NzYLNDDxO4F4W1iR6M0YrpwU3rBF49q2e5-4QVo3TV6_QUBInl5y4Om7ugmhYaxMH3SRJInU7Z_uW8BlQGacOBK5TvPOfeWuV8n1y88ObOJ_AmhWBlq1va2sSjUZfMBoONT9MDr7lhBT72YoJpJ7z5FWUrrU3t42BG39j8pS6Z5EwSQumWPySxv5loIeLVqkhCM2EzHMbsPMMuEdbga48Xcvd9JDW9v1qmdHIpQD9uWrY61PVoQBddh0y8YNhElWSuUa0oq_G5aPpsP_712RWsznQ2y3k0y__DqLYiIEQ63ov_im5nBvZY0KmRjFe7"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5WsIukrN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667247028660/_7nejmil9.png" alt="application-diagram.png" width="880" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, for this application structure, I want to get in my local environment the cluster and its 'pre-requisites', understanding as pre-requisites all the other components inside the cluster that don't belong to the application itself (monitoring stack, traefik, cert-manager, etc.).&lt;/p&gt;

&lt;p&gt;For that, I write simple modules to install those dependencies inside the local cluster and make them available when I run my application locally with &lt;code&gt;skaffold&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;My infrastructure folder structure looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j-xLbsJq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667983890051/d2r2cjn0I.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j-xLbsJq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667983890051/d2r2cjn0I.png" alt="Screenshot 2022-11-09 at 09.49.38.png" width="307" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every module has its own &lt;code&gt;main.tf&lt;/code&gt; configuration file setting the desired state for my cluster after it's applied.&lt;/p&gt;

&lt;p&gt;Let's take a look at one of these modules (prometheus) and its main file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "&amp;gt;= 2.13.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "&amp;gt;= 2.7.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "&amp;gt;= 1.14.0"
    }
  }
}

resource "kubernetes_namespace_v1" "monitoring_namespace" {
  metadata {
    name = var.monitoring_stack_namespace
  }
}

resource "helm_release" "prometheus_stack" {
  name             = var.monitoring_stack_prometheus_name
  repository       = "https://prometheus-community.github.io/helm-charts"
  chart            = "prometheus"
  version          = var.monitoring_stack_prometheus_version_number
  namespace        = var.monitoring_stack_namespace
  create_namespace = false

  values = [
    file("${path.module}/manifests/prometheus-override-values.yaml")
  ]

  depends_on = [
    kubernetes_namespace_v1.monitoring_namespace
  ]
}

resource "kubectl_manifest" "prometheus_stack_ingress" {
  yaml_body = file("${path.module}/manifests/prometheus-ingress.yaml")

  depends_on = [
    helm_release.prometheus_stack
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And then, in the root &lt;code&gt;main.tf&lt;/code&gt; configuration file, you can wrap as many modules as you want; in my case, 4 modules were enough (prometheus, traefik, cert-manager, grafana).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "cert_manager_stack" {
  source = "./module/cert-manager"
}

module "traefik_stack" {
  source = "./module/traefik"

  depends_on = [
    module.cert_manager_stack
  ]
}

module "prometheus_stack" {
  source = "./module/prometheus"

  depends_on = [
    module.traefik_stack
  ]
}

module "grafana_stack" {
  source = "./module/grafana"

  depends_on = [
    module.prometheus_stack
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This also has a handy target in our Makefile, allowing developers and operators to easily set up and remove the cluster pre-requisites.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NtIF4Lmj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667984415271/B6SI0SOw1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NtIF4Lmj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667984415271/B6SI0SOw1.png" alt="Screenshot 2022-11-09 at 09.48.58.png" width="880" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6ygcfJai--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667984399818/Yh-hM8j4L.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6ygcfJai--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667984399818/Yh-hM8j4L.png" alt="Screenshot 2022-11-09 at 09.49.06.png" width="880" height="119"&gt;&lt;/a&gt;&lt;/p&gt;
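&lt;p&gt;The targets in the screenshots reduce to thin wrappers around &lt;code&gt;terraform&lt;/code&gt;; a sketch (the target names and folder path are assumptions, not the ones from the screenshots):&lt;/p&gt;

```makefile
# Sketch: Makefile targets to set up / tear down cluster pre-requisites
infra-up:
	cd infrastructure &amp;&amp; terraform init &amp;&amp; terraform apply -auto-approve

infra-down:
	cd infrastructure &amp;&amp; terraform destroy -auto-approve
```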

&lt;p&gt;Setting up the pre-requisites takes about 3 minutes, but it isn't something you need to do all the time: you can set up your cluster today, work on your feature for some days, and then shut it down.&lt;/p&gt;

&lt;p&gt;After all this, you will have a fully functional Local-To-Prod pipeline. (If you need to see what the Gitlab CI file looks like, it's in the second part of this series.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Next
&lt;/h2&gt;

&lt;p&gt;This is the last delivery of the series, but from now on I'll write about the other tools that I use to address different challenges in my day-to-day work.&lt;/p&gt;

&lt;p&gt;If you are interested, the next topics I'll write about are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managing database migrations at scale in &lt;code&gt;Kubernetes&lt;/code&gt; for &lt;code&gt;PHP&lt;/code&gt; applications with the &lt;code&gt;symfony/migrations&lt;/code&gt; component&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Istio&lt;/code&gt;, &lt;code&gt;Cert-Manager&lt;/code&gt; and &lt;code&gt;Let's Encrypt&lt;/code&gt;: Secure your &lt;code&gt;k8s&lt;/code&gt; clusters' communication with automated generation and provisioning of SSL certificates&lt;/li&gt;
&lt;li&gt;Internal Developer Platform: A modern way to run engineering teams.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;Digital War Room&lt;/code&gt; or how to get observability for Engineering Managers across applications and teams.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Support me
&lt;/h2&gt;

&lt;p&gt;If you like what you just read and find it valuable, you can buy me a coffee by clicking the image below; it would be appreciated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/rcastellanosm"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Rb9xKs4G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" width="545" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>php</category>
      <category>skaffold</category>
      <category>gitops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>How to build a CI/CD workflow with Skaffold for your application (Part II)</title>
      <dc:creator>Raul Castellanos</dc:creator>
      <pubDate>Mon, 07 Nov 2022 08:45:42 +0000</pubDate>
      <link>https://forem.com/rcastellanosm/how-to-build-a-cicd-workflow-with-skaffold-for-your-application-part-ii-315j</link>
      <guid>https://forem.com/rcastellanosm/how-to-build-a-cicd-workflow-with-skaffold-for-your-application-part-ii-315j</guid>
      <description>&lt;h2&gt;
  
  
  Let's recap the &lt;code&gt;Workflow&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;As you may remember - and if not, you can read the first release of this tutorial 😅 - my main idea is to implement one tool - &lt;code&gt;skaffold&lt;/code&gt; - as the building block for my &lt;code&gt;CI/CD workflow&lt;/code&gt;, which should be managed from a single &lt;code&gt;makefile&lt;/code&gt; as the entrypoint for local development and pipelines - on &lt;code&gt;gitlab&lt;/code&gt; - and all of this should be deployed to a &lt;code&gt;K8s&lt;/code&gt; cluster in &lt;code&gt;GCP&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Also, for the sake of simplicity, this tutorial uses the &lt;code&gt;image&lt;/code&gt; and &lt;code&gt;artefact&lt;/code&gt; repositories in the same &lt;code&gt;gitlab&lt;/code&gt; SaaS, but you can use whatever you want for this task (&lt;code&gt;Amazon S3&lt;/code&gt;, &lt;code&gt;Docker&lt;/code&gt; Registry, &lt;code&gt;Private Registries&lt;/code&gt;, &lt;code&gt;Azure&lt;/code&gt; Object Storage, etc.).&lt;/p&gt;

&lt;p&gt;This is the &lt;code&gt;workflow&lt;/code&gt; so far:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;📣 The tool setup and local workflow was covered in the first delivery of this tutorial.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ujGWKyom--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667561874111/Qhz2KHP07.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ujGWKyom--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667561874111/Qhz2KHP07.jpeg" alt="2fl6qCIhG.png.jpeg" width="880" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Makefile
&lt;/h2&gt;

&lt;p&gt;As said, the &lt;code&gt;makefile&lt;/code&gt; is the main entrypoint for commands executed by &lt;code&gt;developers&lt;/code&gt; doing development work locally, and by &lt;code&gt;gitlab pipelines&lt;/code&gt; in their different stages (this may vary in your implementation). Most of those &lt;code&gt;makefile&lt;/code&gt; commands are wrappers for &lt;code&gt;skaffold&lt;/code&gt; ones, with the difference that I need to pass dynamic values to those stages for them to work as I expect, so it's better to wrap them in a makefile target that receives those dynamic params and then runs the &lt;code&gt;skaffold&lt;/code&gt; command itself (later you will see why).&lt;/p&gt;

&lt;p&gt;At the time of writing this article, my targets are:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PGbm_YMQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667763214992/_MmsTbgzE.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PGbm_YMQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667763214992/_MmsTbgzE.png" alt="Screenshot 2022-11-06 at 20.33.21.png" width="880" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happily, for local development we have a single command, since &lt;code&gt;skaffold run&lt;/code&gt; performs the full pipeline cycle &lt;code&gt;(build, test, deploy, hot reload)&lt;/code&gt; for development. So why do I need to wrap this single command in a &lt;code&gt;makefile target&lt;/code&gt;? Mostly because I need it to work no matter which local cluster technology the developer uses (docker-desktop, Minikube, etc.) and no matter which OS the developer machine runs (MacOS, *Nix). For that, I need to pass the &lt;code&gt;kube-context&lt;/code&gt; parameter to &lt;code&gt;skaffold&lt;/code&gt;, which in my case is &lt;code&gt;docker-desktop&lt;/code&gt; (docker-desktop already brings a pre-installed &lt;code&gt;k8s cluster for local development&lt;/code&gt;, which frees me from having to install a cluster manually on my machine - a win for docker-desktop here).&lt;/p&gt;

&lt;p&gt;So you will find that most of the &lt;code&gt;makefile&lt;/code&gt; wrappers come in the form of reusing the same command across multiple stages (pipelines) based on the parameters received in the make invocation, and also of generating random seeds to prefix namespaces (because you will have more than one developer working on the same code base at once), since I want to avoid collisions between &lt;code&gt;gitlab pipelines&lt;/code&gt; and deploys in lower environments when N developers are working on the same code base.&lt;/p&gt;
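
&lt;p&gt;As a rough sketch of that idea, a random namespace seed could be generated in the &lt;code&gt;makefile&lt;/code&gt; like this (the variable names here are illustrative, not the exact ones from my file):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative sketch: prefix the namespace with a short random seed so
# concurrent pipelines and developers don't collide in the same cluster.
SEED := $(shell head -c4 /dev/urandom | od -An -tx1 | tr -d ' \n')
DYNAMIC_NAMESPACE := $(PROJECT_NAME)-$(SEED)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;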

&lt;p&gt;We will focus on the &lt;code&gt;pipeline&lt;/code&gt; targets (all of them built on the &lt;code&gt;skaffold&lt;/code&gt; tool); the infrastructure ones, which install prerequisites in the local cluster to comply with the architecture design, aren't covered in this series, but in another series that I'll write in the coming weeks.&lt;/p&gt;

&lt;p&gt;Here is how my &lt;code&gt;makefile&lt;/code&gt; looks at the time of writing this article, for those stages (because in technology everything evolves quickly, as I always remind myself 😅):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#------ Development and Pipeline targets ----------#
run: ## DEVELOPMENT[skaffold]: Up and running stack in development mode with hot reloading in the local machine
    @$(MAKE) _requirements
    @skaffold dev -f $(DEPLOY_DIR)/skaffold.yaml -p development -n $(PROJECT_NAME) --no-prune=false --cache-artifacts=false
unit: ## DEVELOPMENT[skaffold]: build, deploy and run unit tests =&amp;gt; FOR PIPELINE: `make unit profile=pipeline kube_context=cluster-gitlab-context`
    @$(MAKE) _run_test_suite SUITE=unit PROFILE=$(PROFILE) NAMESPACE=$(DYNAMIC_NAMESPACE) KUBE_CONTEXT=$(KUBE_CONTEXT) || $(MAKE) _cleanup KUBE_CONTEXT=$(KUBE_CONTEXT) PROFILE=$(profile) NAMESPACE=$(DYNAMIC_NAMESPACE)
integration: ## DEVELOPMENT[skaffold]: build, deploy and run integration tests =&amp;gt; FOR PIPELINE: `make integration profile=pipeline kube_context=cluster-gitlab-context`
    @$(MAKE) _run_test_suite SUITE=integration PROFILE=$(PROFILE) NAMESPACE=$(DYNAMIC_NAMESPACE) KUBE_CONTEXT=$(KUBE_CONTEXT) || $(MAKE) _cleanup KUBE_CONTEXT=$(KUBE_CONTEXT) PROFILE=$(profile) NAMESPACE=$(DYNAMIC_NAMESPACE)
functional: ## DEVELOPMENT[skaffold]: build, deploy and run functional tests =&amp;gt; FOR PIPELINE: `make functional profile=pipeline kube_context=cluster-gitlab-context`
    @$(MAKE) _run_test_suite SUITE=functional PROFILE=$(PROFILE) NAMESPACE=$(DYNAMIC_NAMESPACE) KUBE_CONTEXT=$(KUBE_CONTEXT) || $(MAKE) _cleanup KUBE_CONTEXT=$(KUBE_CONTEXT) PROFILE=$(profile) NAMESPACE=$(DYNAMIC_NAMESPACE)
build: ## PIPELINE[skaffold]: build and push images to registry =&amp;gt; `make build tag=1.0.0|71dcab00 kube_context=docker-desktop`
    @skaffold build -f $(DEPLOY_DIR)/skaffold.yaml -p production -t $(tag) --kube-context=$(kube_context) --file-output=pipeline-artifacts.json
render: ## PIPELINE[skaffold]: render manifests and push to artifact registry =&amp;gt; `make render namespace=availability tag=1.0.0|71dcab00`
    @skaffold render -f $(DEPLOY_DIR)/skaffold.yaml -p production -n $(PROJECT_NAME) -a pipeline-artifacts.json -o $(PROJECT_NAME)-api-$(tag)-production.yaml
deploy: ## PIPELINE[skaffold]: apply hydrated manifests to desired namespace on cluster `make deploy tag=1.0.0|71dcab00 profile=production namespace=availability kube_context=docker-desktop`
    @kubectl create namespace $(namespace) --context=$(kube_context)
    @skaffold apply -f $(DEPLOY_DIR)/skaffold.yaml -p $(profile) -n $(namespace) --kube-context=$(kube_context) --status-check=true $(PROJECT_NAME)-api-$(tag)-production.yaml || $(MAKE) _cleanup KUBE_CONTEXT=$(kube_context) PROFILE=$(profile) NAMESPACE=$(namespace)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, all targets are wrappers for &lt;code&gt;skaffold&lt;/code&gt; that allow me to pass dynamic data, contexts and profiles (we'll look at this in the &lt;code&gt;skaffold&lt;/code&gt; file explanation below). The complete file is much larger, but with this snippet you can get an idea of how to build something similar.&lt;/p&gt;

&lt;h2&gt;
  
  
  The &lt;code&gt;skaffold&lt;/code&gt; file
&lt;/h2&gt;

&lt;p&gt;The main &lt;code&gt;workflow orchestrator&lt;/code&gt; has a main config file, where we define how the application should be built and tested, how its manifests are rendered, and how it is deployed, so it's the backbone of this strategy. Now I'll explain mine.&lt;/p&gt;

&lt;p&gt;I have two &lt;code&gt;skaffold&lt;/code&gt; profiles, one called &lt;code&gt;development&lt;/code&gt; and the other &lt;code&gt;production&lt;/code&gt;, plus a common part shared between them (tag strategy, deploy strategy). Both profiles use the same &lt;code&gt;Dockerfile&lt;/code&gt; to build their images, pointing to the correct &lt;code&gt;target&lt;/code&gt;. (You can use separate Dockerfiles, but in my case I want to keep things as simple as possible, because the dependency differences between the two images are minimal.)&lt;/p&gt;
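
&lt;p&gt;To give an idea of the shape, a simplified sketch of such a &lt;code&gt;skaffold.yaml&lt;/code&gt; could look like the following (this is not my full file; the image name, paths and profile patch mechanism are placeholders and should be checked against the skaffold schema you use):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Simplified sketch: two profiles sharing one multi-stage Dockerfile
apiVersion: skaffold/v3
kind: Config
build:
  artifacts:
    - image: my-app                  # placeholder image name
      docker:
        dockerfile: src/Dockerfile
        target: production           # default build target
profiles:
  - name: development
    patches:
      - op: replace
        path: /build/artifacts/0/docker/target
        value: development
  - name: production
    deploy:
      kubectl: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;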

&lt;p&gt;The skaffold file lives inside my deploy folder (do you remember my file organisation? You can re-check it in the first part of this series), and all the deployment-related files are stored there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TPcy7gTS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667568630626/BK1BkpCMP.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TPcy7gTS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667568630626/BK1BkpCMP.png" alt="Screenshot 2022-11-04 at 14.28.59.png" width="507" height="535"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the &lt;code&gt;skaffold.yaml&lt;/code&gt; file is in the root of the deploy folder, since it is my main &lt;code&gt;workflow orchestrator&lt;/code&gt;. In the other folder we have the main &lt;code&gt;k8s manifests&lt;/code&gt;, and inside overlays we have the yaml patches for every profile (in most cases the same as an environment) that we want to declare.&lt;/p&gt;

&lt;h2&gt;
  
  
  The &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; Pipeline
&lt;/h2&gt;

&lt;p&gt;Since I use a trunk-based development strategy for micro services, we need to design two pipeline flows, one for feature branches and one for the main branch. Additionally, to reach production we first need to tag a commit, so that's the real trigger for the Go-To-Prod pipeline.&lt;/p&gt;
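
&lt;p&gt;The two triggers can be expressed with &lt;code&gt;gitlab-ci&lt;/code&gt; rules roughly like this (a sketch; the job names and make invocations are illustrative, not the exact ones from my pipeline):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy-review:               # feature-branch flow
  script:
    - make deploy profile=pipeline
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

deploy-production:           # go-to-prod flow, triggered by a tag
  script:
    - make deploy tag=$CI_COMMIT_TAG profile=production
  rules:
    - if: $CI_COMMIT_TAG
      when: manual
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;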

&lt;h3&gt;
  
  
  Feature Branches Workflow (triggered by a merge_request commit):
&lt;/h3&gt;

&lt;p&gt;In this stage, the developer needs to be able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run all test suites&lt;/li&gt;
&lt;li&gt;Build an image with production dependencies&lt;/li&gt;
&lt;li&gt;Deploy to a dynamic namespace in the same cluster where the pipeline runs (a dynamic QA, so to speak, allowing every developer to have "their own" QA server while they are working on a feature)&lt;/li&gt;
&lt;li&gt;Destroy the dynamic deployment (remove the review app and namespace)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a3oyF8P1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667759285752/uJZIk6dpo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a3oyF8P1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667759285752/uJZIk6dpo.png" alt="Screenshot 2022-11-06 at 19.27.58.png" width="880" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6zRP9fEw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667759296006/aVxZ8ellf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6zRP9fEw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667759296006/aVxZ8ellf.png" alt="Screenshot 2022-11-06 at 15.46.45.png" width="880" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We also use gitlab environments to see in the Gitlab UI "what is deployed in which environment", and to visualise the latest deployment status and artefacts for production and staging.&lt;/p&gt;

&lt;p&gt;So, when a new "dynamic QA" environment is deployed, you'll see this on the environments page of your project:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v0lAGe21--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667760771094/pZsqLNroN.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v0lAGe21--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667760771094/pZsqLNroN.png" alt="Screenshot 2022-11-06 at 19.52.14.png" width="838" height="578"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And on the MR view in Gitlab you'll see which "review" environment that MR is deployed to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PPSczTSo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667760778935/DCwFN5n6x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PPSczTSo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667760778935/DCwFN5n6x.png" alt="Screenshot 2022-11-06 at 19.52.36.png" width="810" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Main Branch Workflow (triggered by a TAG):
&lt;/h3&gt;

&lt;p&gt;In this stage, the developer needs to be able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run all test suites&lt;/li&gt;
&lt;li&gt;Build an image with production dependencies&lt;/li&gt;
&lt;li&gt;Create a release package (a gitlab feature, similar to GitHub Releases, that displays the release contents on a dedicated page in gitlab)&lt;/li&gt;
&lt;li&gt;Deploy to production&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xHkxH1cU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667759456750/KMqd5KDZa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xHkxH1cU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667759456750/KMqd5KDZa.png" alt="Screenshot 2022-11-06 at 16.57.35.png" width="880" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--n4XqzEnj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667759490932/OG7wFHc0A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n4XqzEnj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667759490932/OG7wFHc0A.png" alt="Screenshot 2022-11-06 at 19.31.20.png" width="880" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main workflow has three more stages, including deploying to the staging cluster/namespace, but more importantly it includes the creation of the release package and the deployment to production.&lt;/p&gt;

&lt;p&gt;The only manual job is the deploy to production.&lt;/p&gt;

&lt;p&gt;Gitlab Release&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vfryvD0E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667761248338/FXomvjxOy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vfryvD0E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667761248338/FXomvjxOy.png" alt="Screenshot 2022-11-06 at 20.00.37.png" width="880" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Production Environment after Deployment&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sF9cXq4b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667761214382/VQzJafEOg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sF9cXq4b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667761214382/VQzJafEOg.png" alt="Screenshot 2022-11-06 at 20.00.08.png" width="868" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Best of all, all of this is done automatically via the automated pipeline on Gitlab. You can view the skeleton of the pipeline in this snippet:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gitlab.com/playground-arena/api-symfony-roadrunner-cqrs/-/snippets/2448780"&gt;https://gitlab.com/playground-arena/api-symfony-roadrunner-cqrs/-/snippets/2448780&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  BONUS: &lt;code&gt;Kaniko&lt;/code&gt; image Builder
&lt;/h2&gt;

&lt;p&gt;Since version 1.24, &lt;a href="https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/"&gt;&lt;code&gt;Kubernetes&lt;/code&gt; has moved away from &lt;code&gt;dockershim&lt;/code&gt;&lt;/a&gt; as its container runtime, so I wanted a solution that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allows me to build container images inside a K8s cluster&lt;/li&gt;
&lt;li&gt;Lets me keep writing Dockerfiles as I do today&lt;/li&gt;
&lt;li&gt;Doesn't require sharing or mounting a socket into a pod (the main security reason not to use docker)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So last year I discovered &lt;code&gt;Kaniko&lt;/code&gt;, another tool from the &lt;code&gt;Google Container Tools&lt;/code&gt; organisation on &lt;code&gt;Github&lt;/code&gt;, which allows us to build a container image from a Dockerfile without Docker. Marvellous.&lt;/p&gt;

&lt;p&gt;It has some benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Kaniko&lt;/code&gt; doesn't depend on a &lt;code&gt;Docker daemon&lt;/code&gt; and executes each command within a Dockerfile completely in userspace (no need to mount a docker socket anymore)&lt;/li&gt;
&lt;li&gt;Provides a handy docker image to use in pipelines (a very lightweight one, so your jobs don't take long to run)&lt;/li&gt;
&lt;li&gt;Provides a caching system in the same repository where the images are stored, so on every job that builds the image, &lt;code&gt;Kaniko&lt;/code&gt; first checks the repo's cache layers and downloads them, speeding up the build job&lt;/li&gt;
&lt;/ul&gt;
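
&lt;p&gt;For reference, a &lt;code&gt;Kaniko&lt;/code&gt; build job in &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; usually looks roughly like this (a sketch; the Dockerfile path and image tag are illustrative and should be adapted to your setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;build:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "$CI_PROJECT_DIR"
      --dockerfile "$CI_PROJECT_DIR/src/Dockerfile"
      --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
      --cache=true               # reuse cached layers from the registry
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;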

&lt;p&gt;When I build an image, I check my repo and can see the caching layers stored separately from the final images (see images below).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oFFjs3Yn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667762313501/PjAyGdJLh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oFFjs3Yn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667762313501/PjAyGdJLh.png" alt="Screenshot 2022-11-06 at 20.18.09.png" width="717" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cache Layers&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P7L7LtqJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667762327292/bBwJblXs_.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P7L7LtqJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667762327292/bBwJblXs_.png" alt="Screenshot 2022-11-06 at 20.18.13.png" width="760" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Built and Tagged Images&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k5WkVJ1N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667762342390/LI9SwKP31.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k5WkVJ1N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667762342390/LI9SwKP31.png" alt="Screenshot 2022-11-06 at 20.18.21.png" width="880" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment safety considerations
&lt;/h2&gt;

&lt;p&gt;Everything in life has its tradeoffs, and this is no exception; however, it is possible to combine team autonomy with security and governance, not only in this example but in software development in general.&lt;/p&gt;

&lt;p&gt;Some of my personal recommendations on this are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Secrets: always use a secret vault (in my case I use the HashiCorp one)&lt;/li&gt;
&lt;li&gt;Fine-grained permissions control with the CI/CD tunnel via impersonation and k8s agent:

&lt;ul&gt;
&lt;li&gt;Allows you to leverage your K8s authorization capabilities to limit the permissions of what can be done with the CI/CD tunnel on your running cluster&lt;/li&gt;
&lt;li&gt;Lowers the risk of providing unlimited access to your K8s cluster with the CI/CD tunnel&lt;/li&gt;
&lt;li&gt;Segments fine-grained permissions with the CI/CD tunnel at the project or group level&lt;/li&gt;
&lt;li&gt;Controls permissions with the CI/CD tunnel at the username or service account&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Make your higher environment namespaces immutable (at K8s namespace creation time)&lt;/li&gt;
&lt;li&gt;Fine-grained RBAC on Gitlab roles (I think this isn't available in the Free and Self-Managed versions)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Chapter
&lt;/h2&gt;

&lt;p&gt;In the next chapter I'll wrap up everything we covered in the first two releases of the series into a fully functional pipeline from local to production with a small k8s cluster, and I hope you'll be able to see all the work in action, along with some final remarks so you can test it in your own projects.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

&lt;h2&gt;
  
  
  Support me
&lt;/h2&gt;

&lt;p&gt;If you like what you just read and find it valuable, you can buy me a coffee by clicking the link in the image below; it would be appreciated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/rcastellanosm"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Rb9xKs4G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" width="545" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>php</category>
      <category>gitops</category>
      <category>kubernetes</category>
      <category>skaffold</category>
    </item>
    <item>
      <title>How to build a CI/CD workflow with Skaffold for your application (Part I)</title>
      <dc:creator>Raul Castellanos</dc:creator>
      <pubDate>Mon, 31 Oct 2022 20:39:26 +0000</pubDate>
      <link>https://forem.com/rcastellanosm/how-to-build-a-cicd-workflow-with-skaffold-for-your-application-part-i-2df4</link>
      <guid>https://forem.com/rcastellanosm/how-to-build-a-cicd-workflow-with-skaffold-for-your-application-part-i-2df4</guid>
      <description>&lt;p&gt;&lt;strong&gt;Skaffold&lt;/strong&gt; (part of the &lt;code&gt;Google Container Tools&lt;/code&gt; ) was on the market since 2018, but was in 2020 when, (at least for me), they reach a prod-grade maturity level on the tool.&lt;/p&gt;

&lt;p&gt;And I was more than fascinated by how this tool can not only facilitate developer work on local machines, but also, combined with a couple of other tools, serve as a complete pipeline from development to the production environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Easy and Repeatable Kubernetes Development&lt;/strong&gt;: no matter if you are a developer, lead, platform engineer, SRE or head of engineering, we all agree on that 🙋🏽.&lt;/p&gt;

&lt;p&gt;We want an easy, repeatable, reproducible development workflow that gives teams more autonomy, so they can deliver more product value to the end user in a secure way.&lt;/p&gt;

&lt;p&gt;I want to show you how I use &lt;strong&gt;Skaffold&lt;/strong&gt; as the building block for my micro service CI/CD pipeline, from local to production.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Toolset
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;skaffold&lt;/strong&gt; cli (you can use the provided docker image or install it on your machine)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kubectl&lt;/strong&gt; and &lt;code&gt;Kustomize&lt;/code&gt; (kustomize is already part of the kubectl cli)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;K8s&lt;/strong&gt; cluster (local and remote) - if you use docker-desktop you already have a cluster installed by default to use for local development.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;This article doesn't intend to show how to install a k8s cluster for local development; you have various alternatives like Minikube out there. As I said, in my case I use docker-desktop, which comes with a k8s cluster included by default.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Workflow
&lt;/h2&gt;

&lt;p&gt;The main idea is to use &lt;code&gt;skaffold&lt;/code&gt; as a building block from the local environment to production, simplifying the tooling used by the developer and facilitating the integration with the existing &lt;code&gt;gitlab&lt;/code&gt; repository service.&lt;/p&gt;

&lt;p&gt;The most complex part is the &lt;code&gt;integration&lt;/code&gt; and &lt;code&gt;functional&lt;/code&gt; tests: since they need the complete application and its dependencies running, a little more work is required to accomplish that. However, it isn't as complex as it sounds, since I use a &lt;code&gt;Kubernetes gitlab runner&lt;/code&gt; to run the pipelines, so we can use the same runner to deploy the application in a dedicated namespace, run our tests, and then remove the application from the runner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AcmEpgPa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667243854951/2fl6qCIhG.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AcmEpgPa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667243854951/2fl6qCIhG.png" alt="Screenshot 2022-10-31 at 20.17.18.png" width="880" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Be aware that you need to run a cleanup process after each pipeline or stage run, to avoid leftover processes consuming capacity and space in your Kubernetes cluster, and to avoid incurring unexpected operational costs.&lt;/p&gt;
&lt;/blockquote&gt;
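
&lt;p&gt;In practice, that cleanup can be as simple as tearing down the dynamic namespace at the end of the job (a sketch; the skaffold file path and the namespace variable are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: remove everything the pipeline deployed for its tests
skaffold delete -f deploy/skaffold.yaml -p pipeline -n "$DYNAMIC_NAMESPACE" || true
kubectl delete namespace "$DYNAMIC_NAMESPACE" --ignore-not-found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;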

&lt;h2&gt;
  
  
  The Application
&lt;/h2&gt;

&lt;p&gt;It's the simplest micro service you may know, but it's intended for demonstration purposes. It was made in &lt;code&gt;Symfony&lt;/code&gt; with &lt;code&gt;RoadRunner&lt;/code&gt; as the application server; it exposes metrics to a &lt;code&gt;prometheus&lt;/code&gt; metrics server, and these metrics are then fetched by the &lt;code&gt;monitoring stack&lt;/code&gt; to be used in a &lt;code&gt;grafana&lt;/code&gt; dashboard for monitoring and observability purposes.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://www.plantuml.com/plantuml/png/RPDFSzem4CNl_XGg9p9juaiFcPxY6gRj55eFX4DFZ90Nq4H_t9L4OJhvxbqXrqGaXqpqPt_xtZxX1-Sv-g1LyKuQeK8BREzzvpwL9V8_Tplfzs4J7A2mneFnTyBgibFSHERM-LR9JLb_l6tYqMe-ApLt7f2ErhNLdJMHwMB_ObRz-hbwNC-g7vDbNJNJyKrHB4zKhTVJen-JW0iQy0CRrKeIDg9xnlgAppQObkDf_7JlgEBxlMDLroafk9VMi5g5A3kwON-9OOFq6E40wA11UpmHzuZ0j_9fHCj5kc7dAzBAC07evzpmNV9psKKoNifjb0QcpySwsMMlxVABoQgJ15S7BXNVI2NzYLNDDxO4F4W1iR6M0YrpwU3rBF49q2e5-4QVo3TV6_QUBInl5y4Om7ugmhYaxMH3SRJInU7Z_uW8BlQGacOBK5TvPOfeWuV8n1y88ObOJ_AmhWBlq1va2sSjUZfMBoONT9MDr7lhBT72YoJpJ7z5FWUrrU3t42BG39j8pS6Z5EwSQumWPySxv5loIeLVqkhCM2EzHMbsPMMuEdbga48Xcvd9JDW9v1qmdHIpQD9uWrY61PVoQBddh0y8YNhElWSuUa0oq_G5aPpsP_712RWsznQ2y3k0y__DqLYiIEQ63ov_im5nBvZY0KmRjFe7"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5WsIukrN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667247028660/_7nejmil9.png" alt="application-diagram.png" width="880" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have all the context clear, let's begin with the next steps: first of all, I need to set up our &lt;code&gt;repository skeleton&lt;/code&gt; and &lt;code&gt;directory structure&lt;/code&gt; so that they are as functional as possible for my intended workflow and development process.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Take into account that this is how I set up my repositories; you should adapt it to your own expectations and operational workflow.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Additionally, my principal goal when I began to work with this &lt;code&gt;GitOps&lt;/code&gt; approach was also to reduce the cognitive load and operational complexity for new and current colleagues, reducing the onboarding time and the number of tools we need to do our work.&lt;/p&gt;

&lt;p&gt;If you need a more complex scenario for scaling metrics, I normally try to use &lt;code&gt;thanos&lt;/code&gt; for that job, since it allows me to easily scale &lt;code&gt;prometheus&lt;/code&gt; and get &lt;code&gt;long term storage&lt;/code&gt; in commonly known cloud object storage services, like &lt;code&gt;Amazon S3&lt;/code&gt; or &lt;code&gt;GCP Cloud Storage&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Below you can find a diagram of the same application but implementing thanos.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://www.plantuml.com/plantuml/png/ZLHDSzCm4BtxLuYSqe7M1pXqEDKudSAGW4cQ0oUF8cyIKLao-WWDJFyxkvQdYUqomodoQj--js-rkN6UMnzgbRoIMgXG0TjxtxZtQMhvhwkTzFkm2GwiCDg3zbV2r6cZk2RCfVELafiqVtTPK6YzcASrTnuiXihSr8tHX6ceVZBFldzTtvVpxCjibMV5xVGYILP7pAxBsqS_HG8NQh1ls2HN4c6Jq_q74tJ5xN7wSEtm_lErOrdJA2cubqQpN0KYdLomFmbZpxnJ2mUm3Wfh7ey8kxV0j_9XWiTbl67j5HATemHONzPSyrqKWv-B-4L8vnIZ3BabTd0jU2YJdyHbZKHKTk1IyOrKqXzPLdnY2ociSM0FKa3KtPE0PbkZ5DWNiAIY-5YmrsnfUBKCMbFhiO3sNEBdR8EzLz9Hf_HB4C757Z2F4fUW1kRq6Aq97beGlGN4H4Wv6tWpyBUnvY0h0iOHvSlP2RlkDTMfwqJXmOl8yvJqPk7tN1jNEYmhkAKPjW6sYW528diDVW_1iGLuAqKSoQY6m00NtfnLoRlG_zLfdXDwsOIj8u3HGDjX99rXemPwHRRWnPxmksMH8og2vZsctc2SiBm15kdw0ueUZtiT2XYJ9aylxbdPiNJ3N1WjiQ3Kk-6w3Ot-6S1AEBFvMmpyosJMAxApVCirnzoxU2BOYJpDD5T77wVpRDYGUTGqtHn7JgzFR2FjmSM7N77FMNpPr3AQrVlNWaSF5YKLNGRPTTl5scMzECyscnyWVEcyiNm7c3etwESzs9gjOemeLs_Jkxn8iz_1jWiRNzBvGtY9TUrEAzkw6Zli_bR7sse1ctN-7DCnZH_HI3UTm9qCxIEZYsFSU0uteAjGgxy0"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zn1N0o9_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667247112713/y0Hhlyreq.png" alt="application-diagram-extent.png" width="880" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's continue this guide with the simplified version of the application (the one with only &lt;code&gt;prometheus&lt;/code&gt; and &lt;code&gt;Grafana&lt;/code&gt; components for monitoring).&lt;/p&gt;

&lt;h2&gt;
  
  
  Folder Structure and Orchestration process
&lt;/h2&gt;

&lt;p&gt;Let's recap some concepts about the tooling we're implementing here, to fulfil the promise of a full workflow with skaffold from local to production.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I deploy on a &lt;code&gt;K8s&lt;/code&gt; cluster with &lt;code&gt;kubectl&lt;/code&gt; and &lt;code&gt;kustomize&lt;/code&gt; (kustomize is part of the kubectl bundle).&lt;/li&gt;
&lt;li&gt;I use &lt;code&gt;skaffold&lt;/code&gt; as the workflow building block through its cli command steps (build, test, render, deploy, verify).&lt;/li&gt;
&lt;li&gt;I build images locally with the &lt;code&gt;docker&lt;/code&gt; cli (it is a prerequisite for my workflow), and in &lt;code&gt;gitlab&lt;/code&gt; I have a couple of options alongside docker (&lt;code&gt;Kaniko&lt;/code&gt; or &lt;code&gt;Docker in Docker&lt;/code&gt; variations), but I'll cover that in the next steps. &lt;/li&gt;
&lt;li&gt;I use a &lt;code&gt;Makefile&lt;/code&gt; as a command "collector" entrypoint, not only for local development but also for the gitlab pipeline, to group commands into single-word ones (make run, build, unit, etc.).&lt;/li&gt;
&lt;li&gt;I use &lt;code&gt;terraform&lt;/code&gt; declarative configuration files to set the desired state of my working cluster (in this case the local one); this desired state includes some prerequisites needed by my architecture definition, like &lt;code&gt;cert-manager&lt;/code&gt;, &lt;code&gt;traefik&lt;/code&gt;, &lt;code&gt;prometheus&lt;/code&gt; and &lt;code&gt;grafana&lt;/code&gt;, just like my staging and production machines.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;- deploy/
    manifests/ &lt;span class="c"&gt;# the place where k8s yaml resides&lt;/span&gt;
         - &lt;span class="k"&gt;*&lt;/span&gt;k8s.yaml
         - kuztomization.yaml
    overlays/ &lt;span class="c"&gt;# for every environment that you want, you should have an overlay&lt;/span&gt;
        development/
              - &lt;span class="k"&gt;*&lt;/span&gt;.k8s.patch.yaml
              - kuztomization.yaml
        production/
             - &lt;span class="k"&gt;*&lt;/span&gt;.k8s.patch.yaml
             - kuztomization.yaml
   - skaffold.yaml
- infrastructure/ &lt;span class="c"&gt;#terraform scripts to install cluster pre requisites vault, cert-manager, treafik&lt;/span&gt;
- src/ &lt;span class="c"&gt;# all the source code of your application&lt;/span&gt;
    - Dockerfile 
- Makefile
- .gitlab-ci.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another important component of this setup is the &lt;code&gt;Dockerfile&lt;/code&gt;. To be able to use the same Dockerfile to build images for the development and production environments (each with its own dependencies), I build a &lt;code&gt;multi-stage&lt;/code&gt; Dockerfile that gives me a target for development and a target for production, which we can point to in the &lt;code&gt;skaffold&lt;/code&gt; build phase.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q0VXx55U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667244719111/LWX13klyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q0VXx55U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667244719111/LWX13klyl.png" alt="Screenshot 2022-10-31 at 20.31.39.png" width="880" height="555"&gt;&lt;/a&gt;&lt;/p&gt;
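&lt;p&gt;As a rough sketch of the pattern (the base image, stage names and commands below are illustrative assumptions, not the exact Dockerfile from the screenshot), a multi-stage Dockerfile with separate development and production targets could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dockerfile"&gt;&lt;code&gt;# base stage: shared setup for every target (base image is an assumption)
FROM node:18-alpine AS base
WORKDIR /app
COPY package*.json ./

# development target: dev dependencies + hot-reload entrypoint
FROM base AS development
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]

# production target: only production dependencies, lean image
FROM base AS production
RUN npm ci --omit=dev
COPY . .
CMD ["npm", "start"]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;With something like this, skaffold (or a plain &lt;code&gt;docker build --target development&lt;/code&gt;) can select the stage it needs per environment.&lt;/p&gt;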

&lt;p&gt;&lt;code&gt;Kustomize&lt;/code&gt; files (the &lt;code&gt;kustomization.yaml&lt;/code&gt; ones) allow me to declare a &lt;code&gt;patch or merge&lt;/code&gt; of a part of the main &lt;code&gt;k8s&lt;/code&gt; manifest, applying the changes I need in a given environment without having to duplicate the entire YAML. So, for example, if I have a &lt;code&gt;k8s&lt;/code&gt; manifest declaring an API with 1 replica, I can declare a patch that sets that number to 4 replicas when the environment is production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ebPQGciE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667210150883/uVgpZtVjf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ebPQGciE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667210150883/uVgpZtVjf.png" alt="Screenshot 2022-10-31 at 10.54.48.png" width="880" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following image shows a main &lt;code&gt;k8s&lt;/code&gt; manifest and its corresponding patch for production. You can patch anything you want, adding data, metadata and other labels to every manifest in the environment overlays.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F81fHFFi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667148986552/Zj2sXJ9Oc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F81fHFFi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667148986552/Zj2sXJ9Oc.png" alt="Screenshot 2022-10-30 at 17.54.36.png" width="880" height="563"&gt;&lt;/a&gt;&lt;/p&gt;
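&lt;p&gt;To make the replica example concrete, here is a minimal sketch of that pattern (resource and file names are illustrative, not the exact ones from the screenshot). The base manifest declares 1 replica, and the production overlay patches only that field:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# deploy/manifests/api.k8s.yaml (base: 1 replica; selector/template omitted for brevity)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1

# deploy/overlays/production/api.k8s.patch.yaml (patch: 4 replicas)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 4

# deploy/overlays/production/kustomization.yaml
resources:
  - ../../manifests
patchesStrategicMerge:
  - api.k8s.patch.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;kustomize merges the patch onto the base by matching &lt;code&gt;kind&lt;/code&gt; and &lt;code&gt;metadata.name&lt;/code&gt;, so only the fields present in the patch change.&lt;/p&gt;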

&lt;p&gt;Then, we have our skaffold file, in charge of the orchestration process of the workflow itself:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LisuMbVH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667244800783/oBMDbsByO.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LisuMbVH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667244800783/oBMDbsByO.png" alt="Screenshot 2022-10-31 at 20.30.24.png" width="834" height="1180"&gt;&lt;/a&gt;&lt;/p&gt;
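&lt;p&gt;A minimal sketch of such a &lt;code&gt;skaffold.yaml&lt;/code&gt; (the image name, paths and profile details are assumptions, not the exact file from the screenshot), wiring the multi-stage Docker build targets to the kustomize overlays per profile:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: skaffold/v2beta29
kind: Config
profiles:
  - name: development
    build:
      artifacts:
        - image: my-api
          docker:
            dockerfile: src/Dockerfile
            target: development   # multi-stage dev target
    deploy:
      kustomize:
        paths:
          - deploy/overlays/development
  - name: production
    build:
      artifacts:
        - image: my-api
          docker:
            dockerfile: src/Dockerfile
            target: production    # multi-stage prod target
    deploy:
      kustomize:
        paths:
          - deploy/overlays/production
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;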

&lt;p&gt;Since &lt;code&gt;skaffold&lt;/code&gt; allows us to use &lt;code&gt;kustomization&lt;/code&gt; as a deployment strategy, I organise my profiles to do so; with this, the development team has plenty of room for manoeuvre to modify and deploy changes with zero effort.&lt;/p&gt;

&lt;p&gt;Now I can run everything in one shot to see how this works: if everything is OK, I'll be able to access all the tools (via browser) and make requests to the API.&lt;/p&gt;

&lt;p&gt;To run skaffold you need the following command: &lt;code&gt;skaffold dev -p development&lt;/code&gt;, but since we use a &lt;code&gt;Makefile&lt;/code&gt; as the command entrypoint, you can see above that &lt;code&gt;make run&lt;/code&gt; does the same job, running skaffold in development mode.&lt;/p&gt;
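&lt;p&gt;A minimal sketch of the corresponding &lt;code&gt;Makefile&lt;/code&gt; targets (the target names follow the commands mentioned earlier; the exact recipes are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight makefile"&gt;&lt;code&gt;.PHONY: run build

run: ## run the full local workflow with skaffold in development mode
	skaffold dev -p development

build: ## build the images through skaffold using the development profile
	skaffold build -p development
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;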

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KTjAi_lI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667151507712/gsBwGFfET.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KTjAi_lI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667151507712/gsBwGFfET.gif" alt="ezgif-4-f2a066ca18.gif" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I use my own domain and a self-signed certificate to access all applications via a &lt;code&gt;FQDN&lt;/code&gt; over HTTPS (using &lt;code&gt;cert-manager&lt;/code&gt; and &lt;code&gt;traefik&lt;/code&gt; for that). Now I'll be able to access all of them via those URLs (on the local machine these URLs point to the loopback address &lt;code&gt;127.0.0.1&lt;/code&gt; in the &lt;code&gt;/etc/hosts&lt;/code&gt; file).&lt;/p&gt;

&lt;p&gt;We should have at least these applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Grafana &lt;/li&gt;
&lt;li&gt;Traefik Dashboard &lt;/li&gt;
&lt;li&gt;API /Application&lt;/li&gt;
&lt;/ul&gt;
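&lt;p&gt;The &lt;code&gt;/etc/hosts&lt;/code&gt; entries for that could look like this (the hostnames below are illustrative; use your own domain):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# local FQDNs pointing to the loopback address; traefik routes each host to its service
127.0.0.1   api.mydomain.local
127.0.0.1   grafana.mydomain.local
127.0.0.1   traefik.mydomain.local
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;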

&lt;p&gt;Let's see those applications running in this animated GIF: &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MA0KNGZS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667153104081/m5Rk9v6Nx.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MA0KNGZS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn.hashnode.com/res/hashnode/image/upload/v1667153104081/m5Rk9v6Nx.gif" alt="ezgif-4-162073e2b1.gif" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;😅 With this, I already have a full development cycle for my local environment. The next milestone is to make my &lt;code&gt;gitlab&lt;/code&gt; pipelines comply with this workflow and pave the way to the lower and production environments.&lt;/p&gt;

&lt;p&gt;Let's stop here for now. I'll prepare the material for the next blog entry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Chapter
&lt;/h2&gt;

&lt;p&gt;In the next chapter of this tutorial, I'll implement this local workflow in a &lt;code&gt;gitlab&lt;/code&gt; pipeline, allowing me to use the &lt;code&gt;test, build, render, deploy and verify&lt;/code&gt; skaffold stages across my pipelines and deploy the application to a &lt;code&gt;k8s&lt;/code&gt; cluster in &lt;code&gt;GCP&lt;/code&gt; in a full GitOps manner.&lt;/p&gt;

&lt;p&gt;Thanks for reading and see you next week for more! 😃&lt;/p&gt;

&lt;p&gt;A big KUDOS to the #skaffold team for the great job. If you want to know more, you can reach them on &lt;a href="https://kubernetes.slack.com/archives/CABQMSZA6"&gt;slack&lt;/a&gt; or in their &lt;a href="https://github.com/GoogleContainerTools/skaffold"&gt;repo&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Support me
&lt;/h2&gt;

&lt;p&gt;If you like what you just read and find it valuable, you can buy me a coffee by clicking the image below; it would be appreciated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/rcastellanosm"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Rb9xKs4G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" width="545" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
