<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Alok</title>
    <description>The latest articles on Forem by Alok (@alokm).</description>
    <link>https://forem.com/alokm</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F770643%2F3151fe95-47b2-4957-8250-a5af5ef54b97.png</url>
      <title>Forem: Alok</title>
      <link>https://forem.com/alokm</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/alokm"/>
    <language>en</language>
    <item>
      <title>Understanding Multi-arch Containers, Benefits and CI/CD Integration</title>
      <dc:creator>Alok</dc:creator>
      <pubDate>Wed, 14 Jun 2023 05:54:31 +0000</pubDate>
      <link>https://forem.com/infracloud/understanding-multi-arch-containers-benefits-and-cicd-integration-ko5</link>
      <guid>https://forem.com/infracloud/understanding-multi-arch-containers-benefits-and-cicd-integration-ko5</guid>
      <description>&lt;p&gt;Have you ever seen &lt;strong&gt;“exec /docker-entrypoint.sh: exec format error”&lt;/strong&gt; error message on your server while running any docker image or Kubernetes pods? This is most probably because you are running some other CPU architecture container image on your server OR did you ever use  &lt;code&gt;--platform linux/x86_64&lt;/code&gt; option on your Apple silicon M1, M2 MacBook? If yes, then you are &lt;a href="https://docs.docker.com/desktop/troubleshoot/known-issues/"&gt;not getting the native performance&lt;/a&gt; of Apple silicon and it may be draining your MacBook battery.&lt;br&gt;
To avoid this kind of error and performance issue, we need to run the correct multi-arch container image or we may need to build our own image because all container public image does not have multi-arch image available.&lt;/p&gt;
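&lt;p&gt;To see why the mismatch happens, it helps to know how common &lt;code&gt;uname -m&lt;/code&gt; machine names map to Docker platform strings. Below is a small, hypothetical Python sketch of that mapping; the helper names are ours for illustration, not part of any Docker tooling:&lt;/p&gt;

```python
import platform

# Common `uname -m` / platform.machine() values mapped to Docker platform strings.
MACHINE_TO_PLATFORM = {
    "x86_64": "linux/amd64",
    "amd64": "linux/amd64",
    "aarch64": "linux/arm64",
    "arm64": "linux/arm64",
}

def host_platform() -> str:
    """Best-effort Docker platform string for the current host."""
    return MACHINE_TO_PLATFORM.get(platform.machine(), "unknown")

def runs_natively(image_platform: str) -> bool:
    """True when an image's platform matches the host. A mismatch means
    'exec format error' (without emulation) or slow emulated execution."""
    return image_platform == host_platform()
```
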

&lt;p&gt;In this blog post, we will learn what multi-arch container images are, how they work, and how to build and promote them. We will also write sample code for building a multi-arch image in a CI/CD pipeline.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is a Multi-arch Container Image?
&lt;/h2&gt;

&lt;p&gt;A multi-arch Docker image is a manifest list that references images whose binaries and libraries are compiled for multiple CPU architectures. This type of image is useful when we need to run the same application on different CPU architectures (ARM, x86, RISC-V, etc.) without referencing a separate image tag for each architecture.&lt;/p&gt;
&lt;h2&gt;
  
  
  Multi-arch Container Use Cases
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Performance and cost optimization:&lt;/strong&gt; Multi-arch containers are used to optimize performance on different CPU architectures. By building and deploying images that are optimized for a specific architecture, we can achieve better performance and reduce resource usage. Using &lt;a href="https://www.infracloud.io/blogs/kubernetes-workload-management-karpenter/"&gt;Karpenter&lt;/a&gt;, we can easily deploy our workload to arm64 and get the benefit of &lt;a href="https://aws.amazon.com/ec2/graviton/"&gt;AWS Graviton’s&lt;/a&gt; performance and cost savings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-platform development:&lt;/strong&gt; If you are developing an application that needs to run on multiple platforms, such as ARM and x86, you can &lt;a href="https://github.com/docker/buildx"&gt;use buildx to build multi-arch Docker images&lt;/a&gt; and test the application on different architectures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IoT devices:&lt;/strong&gt; Many IoT/edge devices use ARM processors, which require different binaries and libraries than x86 processors. With multi-arch images, you can create an image that runs on ARM, x86, and RISC-V devices, making it easier to deploy your application to a wide range of IoT devices.&lt;/p&gt;
&lt;h2&gt;
  
  
  Benefits of Using Multi-arch Container Image
&lt;/h2&gt;

&lt;p&gt;Several advantages of using multi-arch container images are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ability to run a Docker image on multiple CPU architectures&lt;/li&gt;
&lt;li&gt;Enables us to choose a more energy-efficient CPU architecture&lt;/li&gt;
&lt;li&gt;Seamless migration from one architecture to another&lt;/li&gt;
&lt;li&gt;Better performance and &lt;a href="https://aws.amazon.com/blogs/opensource/how-zomato-boosted-performance-25-and-cut-compute-cost-30-migrating-trino-and-druid-workloads-to-aws-graviton/"&gt;cost saving using arm64&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Ability to support more cores per CPU using arm64&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  How to Build Multi-arch Container Image?
&lt;/h2&gt;

&lt;p&gt;There are multiple ways to build a multi-arch container image, but we will focus on two widely used and easy methods.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Traditional Docker build command&lt;/li&gt;
&lt;li&gt;Using &lt;a href="https://docs.docker.com/engine/reference/commandline/buildx/"&gt;Docker buildx&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Using Traditional Docker Build Command
&lt;/h3&gt;

&lt;p&gt;In this tutorial, we will manually build the image on machines with different CPU architectures, push both builds to a container registry (e.g., Docker Hub), and then create a manifest list that references both images. A manifest is a simple JSON file containing an index of container images and their metadata, such as image size, sha256 digest, OS, etc. We will look at the manifest file in more detail later in this blog.&lt;/p&gt;

&lt;p&gt;For example, this is our basic Dockerfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; nginx&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; "Hello multiarch" &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /usr/share/nginx/html/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;########## on amd64 machine ##########&lt;/span&gt;
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; username/custom-nginx:v1-amd64 &lt;span class="nb"&gt;.&lt;/span&gt;
docker push username/custom-nginx:v1-amd64

&lt;span class="c"&gt;########## on arm64 machine ##########&lt;/span&gt;
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; username/custom-nginx:v1-arm64 &lt;span class="nb"&gt;.&lt;/span&gt;
docker push username/custom-nginx:v1-arm64

&lt;span class="c"&gt;########## Create a manifest index file ##########&lt;/span&gt;
docker manifest create &lt;span class="se"&gt;\&lt;/span&gt;
    username/custom-nginx:v1 &lt;span class="se"&gt;\&lt;/span&gt;
    username/custom-nginx:v1-amd64 &lt;span class="se"&gt;\&lt;/span&gt;
    username/custom-nginx:v1-arm64

&lt;span class="c"&gt;########## Push manifest index file ##########&lt;/span&gt;
docker manifest push username/custom-nginx:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Using Docker Buildx
&lt;/h3&gt;

&lt;p&gt;With buildx, we just need to run a single command, with the target architectures passed as a parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx build &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--push&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--platform&lt;/span&gt; linux/arm64,linux/amd64 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-t&lt;/span&gt; username/custom-nginx:v1 &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the background, the Docker buildx command uses BuildKit: when we run the above command, it creates a container from the &lt;a href="https://github.com/moby/buildkit"&gt;moby/buildkit&lt;/a&gt; image, which ships &lt;a href="https://github.com/multiarch/qemu-user-static#qemu-user-static"&gt;QEMU binaries&lt;/a&gt; for multiple CPU architectures that are responsible for emulating their instruction sets. We can view these QEMU binaries by running &lt;code&gt;ls /usr/bin/buildkit-qemu-*&lt;/code&gt; inside the running &lt;a href="https://docs.docker.com/build/buildkit/"&gt;buildkit&lt;/a&gt; container.&lt;/p&gt;

&lt;p&gt;In the above command, we passed &lt;code&gt;--platform linux/arm64,linux/amd64&lt;/code&gt;, so buildx uses the &lt;code&gt;/usr/bin/buildkit-qemu-aarch64&lt;/code&gt; QEMU binary to build the linux/arm64 image, while the linux/amd64 image is built natively on the host machine. Once both images are built, the &lt;strong&gt;&lt;code&gt;--push&lt;/code&gt;&lt;/strong&gt; option creates the manifest list and pushes both images to the registry server along with it.&lt;/p&gt;
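&lt;p&gt;The choice between a native build and an emulated one can be sketched as a tiny Python helper. The binary names follow the &lt;code&gt;/usr/bin/buildkit-qemu-&amp;lt;arch&amp;gt;&lt;/code&gt; pattern mentioned above; the mapping below is illustrative, and the exact set of binaries shipped may vary by BuildKit version:&lt;/p&gt;

```python
from typing import Optional

# Illustrative mapping from target platform to QEMU binary architecture name.
QEMU_ARCH = {
    "linux/arm64": "aarch64",
    "linux/amd64": "x86_64",
    "linux/riscv64": "riscv64",
}

def qemu_binary_for(target: str, host: str) -> Optional[str]:
    """Return the emulator path used for a cross-arch build, or None when
    the target matches the host and the build runs natively."""
    if target == host:
        return None
    arch = QEMU_ARCH.get(target)
    return f"/usr/bin/buildkit-qemu-{arch}" if arch else None
```
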

&lt;p&gt;By inspecting the manifest we can see that &lt;strong&gt;“Ref”&lt;/strong&gt; contains the actual image reference, which is fetched when &lt;code&gt;platform.architecture&lt;/code&gt; matches the host system architecture.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker manifest inspect &lt;span class="nt"&gt;-v&lt;/span&gt; nginx

&lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"Ref"&lt;/span&gt;: &lt;span class="s2"&gt;"docker.io/library/nginx:latest@sha256:bfb112db4075460ec042ce13e0b9c3ebd982f93ae0be155496d050bb70006750"&lt;/span&gt;,
                &lt;span class="s2"&gt;"Descriptor"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                        &lt;span class="s2"&gt;"mediaType"&lt;/span&gt;: &lt;span class="s2"&gt;"application/vnd.docker.distribution.manifest.v2+json"&lt;/span&gt;,
                        &lt;span class="s2"&gt;"digest"&lt;/span&gt;: &lt;span class="s2"&gt;"sha256:bfb112db4075460ec042ce13e0b9c3ebd982f93ae0be155496d050bb70006750"&lt;/span&gt;,
                        &lt;span class="s2"&gt;"size"&lt;/span&gt;: 1570,
                        &lt;span class="s2"&gt;"platform"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                                &lt;span class="s2"&gt;"architecture"&lt;/span&gt;: &lt;span class="s2"&gt;"amd64"&lt;/span&gt;,
                                &lt;span class="s2"&gt;"os"&lt;/span&gt;: &lt;span class="s2"&gt;"linux"&lt;/span&gt;
                        &lt;span class="o"&gt;}&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="s2"&gt;"SchemaV2Manifest"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                        &lt;span class="s2"&gt;"schemaVersion"&lt;/span&gt;: 2,
                        &lt;span class="s2"&gt;"mediaType"&lt;/span&gt;: &lt;span class="s2"&gt;"application/vnd.docker.distribution.manifest.v2+json"&lt;/span&gt;,
                        &lt;span class="s2"&gt;"config"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                                &lt;span class="s2"&gt;"mediaType"&lt;/span&gt;: &lt;span class="s2"&gt;"application/vnd.docker.container.image.v1+json"&lt;/span&gt;,
                                &lt;span class="s2"&gt;"size"&lt;/span&gt;: 7916,
                                &lt;span class="s2"&gt;"digest"&lt;/span&gt;: &lt;span class="s2"&gt;"sha256:080ed0ed8312deca92e9a769b518cdfa20f5278359bd156f3469dd8fa532db6b"&lt;/span&gt;
                        &lt;span class="o"&gt;}&lt;/span&gt;,
….

        &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="s2"&gt;"Ref"&lt;/span&gt;: &lt;span class="s2"&gt;"docker.io/library/nginx:latest@sha256:3be40d1de9db30fdd9004193c2b3af9d31e4a09f43b88f52f1f67860f7db4cb2"&lt;/span&gt;,
                &lt;span class="s2"&gt;"Descriptor"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                        &lt;span class="s2"&gt;"mediaType"&lt;/span&gt;: &lt;span class="s2"&gt;"application/vnd.docker.distribution.manifest.v2+json"&lt;/span&gt;,
                        &lt;span class="s2"&gt;"digest"&lt;/span&gt;: &lt;span class="s2"&gt;"sha256:3be40d1de9db30fdd9004193c2b3af9d31e4a09f43b88f52f1f67860f7db4cb2"&lt;/span&gt;,
                        &lt;span class="s2"&gt;"size"&lt;/span&gt;: 1570,
                        &lt;span class="s2"&gt;"platform"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                                &lt;span class="s2"&gt;"architecture"&lt;/span&gt;: &lt;span class="s2"&gt;"arm64"&lt;/span&gt;,
                                &lt;span class="s2"&gt;"os"&lt;/span&gt;: &lt;span class="s2"&gt;"linux"&lt;/span&gt;,
                                &lt;span class="s2"&gt;"variant"&lt;/span&gt;: &lt;span class="s2"&gt;"v8"&lt;/span&gt;
                        &lt;span class="o"&gt;}&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;,
                &lt;span class="s2"&gt;"SchemaV2Manifest"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                        &lt;span class="s2"&gt;"schemaVersion"&lt;/span&gt;: 2,
                        &lt;span class="s2"&gt;"mediaType"&lt;/span&gt;: &lt;span class="s2"&gt;"application/vnd.docker.distribution.manifest.v2+json"&lt;/span&gt;,
                        &lt;span class="s2"&gt;"config"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
                                &lt;span class="s2"&gt;"mediaType"&lt;/span&gt;: &lt;span class="s2"&gt;"application/vnd.docker.container.image.v1+json"&lt;/span&gt;,
                                &lt;span class="s2"&gt;"size"&lt;/span&gt;: 7932,
                                &lt;span class="s2"&gt;"digest"&lt;/span&gt;: &lt;span class="s2"&gt;"sha256:f71a4866129b6332cfd0dddb38f2fec26a5a125ebb0adde99fbaa4cb87149ead"&lt;/span&gt;
                        &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can also use the buildx &lt;a href="https://docs.docker.com/engine/reference/commandline/buildx_imagetools/"&gt;imagetools&lt;/a&gt; command to view the same output in a more human-readable format.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker buildx imagetools inspect sonarqube:10.0.0-community

Name:      docker.io/library/sonarqube:10.0.0-community
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest:    sha256:51588fac6153b949af07660decfe20b5754da9fd12c82db5d95a0900b6024196

Manifests:
  Name:      docker.io/library/sonarqube:10.0.0-community@sha256:8b536568cd64faf15e1e5be916cf21506df70e2177061edfedfd22f255a7b1a0
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/amd64

  Name:      docker.io/library/sonarqube:10.0.0-community@sha256:2163e9563bbba2eba30abef8c25e68da4eb20e6e0bb3e6ecc902a150321fae6b
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm64/v8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you run into issues building multi-arch images, you can run the following command to reset the &lt;a href="https://github.com/multiarch/qemu-user-static#multiarchqemu-user-static-images"&gt;/proc/sys/fs/binfmt_misc&lt;/a&gt; entries.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--privileged&lt;/span&gt; multiarch/qemu-user-static &lt;span class="nt"&gt;--reset&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nb"&gt;yes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can also build multi-arch container images &lt;a href="https://danmanners.com/posts/2022-01-buildah-multi-arch/"&gt;using Buildah&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do Multi-arch Container Images Work?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="/assets/img/Blog/multi-arch-container-image/docker-multi-arch-buildx.png" class="article-body-image-wrapper"&gt;&lt;img src="/assets/img/Blog/multi-arch-container-image/docker-multi-arch-buildx.png" alt="Docker multi-arch buildx working"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see in the diagram, the host machine has an &lt;strong&gt;x86/amd64&lt;/strong&gt; CPU, and on top of that we install an operating system, which can be Windows or Linux. Windows requires &lt;a href="https://learn.microsoft.com/en-us/windows/wsl/about"&gt;WSL&lt;/a&gt; or &lt;a href="https://github.com/linuxkit/linuxkit"&gt;LinuxKit&lt;/a&gt; to run Docker. Docker uses QEMU to emulate other CPU architectures, and Dockerfile builds run inside this emulation.&lt;/p&gt;

&lt;p&gt;&lt;a href="/assets/img/Blog/multi-arch-container-image/docker-pull-diagram.png" class="article-body-image-wrapper"&gt;&lt;img src="/assets/img/Blog/multi-arch-container-image/docker-pull-diagram.png" alt="Docker pull diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we run the &lt;strong&gt;docker pull&lt;/strong&gt; or &lt;strong&gt;docker build&lt;/strong&gt; command, Docker fetches the requested manifest from the registry server. A manifest is a JSON file that can reference a single image or list several of them; Docker then fetches the correct image based on the host machine's CPU architecture.&lt;/p&gt;
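&lt;p&gt;The selection step can be sketched in Python: given a simplified manifest list shaped like the inspect output shown earlier, a client picks the entry whose platform matches the host. This is an illustration of the idea, not Docker's actual implementation:&lt;/p&gt;

```python
def select_manifest(manifest_list, host_os="linux", host_arch="amd64"):
    """Pick the image digest whose platform matches the host, mimicking
    how `docker pull` resolves a multi-arch tag (simplified)."""
    for entry in manifest_list["manifests"]:
        p = entry["platform"]
        if p["os"] == host_os and p["architecture"] == host_arch:
            return entry["digest"]
    raise LookupError(f"no image for {host_os}/{host_arch}")

# A pared-down manifest list with placeholder digests.
manifest_list = {
    "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
    "manifests": [
        {"digest": "sha256:aaa", "platform": {"os": "linux", "architecture": "amd64"}},
        {"digest": "sha256:bbb", "platform": {"os": "linux", "architecture": "arm64"}},
    ],
}
```
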

&lt;h2&gt;
  
  
  How to Integrate Multi-arch Container Build with CI/CD?
&lt;/h2&gt;

&lt;p&gt;If your workload runs on machines with different CPU architectures, it is always better to build multi-arch Docker images for your application. Integrating the multi-arch build into CI/CD makes the image build and scan process easier, requires only one Docker tag, and saves time. Below are Jenkins and GitHub Actions sample pipelines for building multi-arch images.&lt;/p&gt;

&lt;h3&gt;
  
  
  Jenkins Multi-arch CI
&lt;/h3&gt;

&lt;p&gt;Currently, the &lt;a href="https://plugins.jenkins.io/docker-plugin/"&gt;Jenkins Docker plugin&lt;/a&gt; does not support multi-arch builds, so we use buildx to build multi-arch images.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight groovy"&gt;&lt;code&gt;&lt;span class="n"&gt;pipeline&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;label&lt;/span&gt; &lt;span class="s1"&gt;'worker1'&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;timestamps&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;time:&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nl"&gt;unit:&lt;/span&gt; &lt;span class="s1"&gt;'MINUTES'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;buildDiscarder&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;logRotator&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;numToKeepStr:&lt;/span&gt; &lt;span class="s1"&gt;'10'&lt;/span&gt;&lt;span class="o"&gt;))&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;environment&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;DOCKER_REGISTRY_PATH&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://registry.example.com"&lt;/span&gt;
        &lt;span class="n"&gt;DOCKER_TAG&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"v1"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;


    &lt;span class="n"&gt;stages&lt;/span&gt;
    &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'build-and-push'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;
          &lt;span class="n"&gt;script&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;withRegistry&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;DOCKER_REGISTRY_PATH&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'ecrcred_dev'&lt;/span&gt;&lt;span class="o"&gt;){&lt;/span&gt;
            &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'''
              ####### check multiarch env ###########
              export DOCKER_BUILDKIT=1
              if [[ $(docker buildx inspect --bootstrap | head -n 2 | grep Name | awk -F" " '{print $NF}') != "multiarch" ]]
              then
                docker buildx rm multiarch || true
                docker buildx create --name multiarch --use
                docker buildx inspect --bootstrap
              fi
              ####### Push multiarch ###########
              docker buildx build --push --platform linux/arm64,linux/amd64 -t "$DOCKER_REGISTRY_PATH"/username/custom-nginx:"$DOCKER_TAG" .
            '''&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
          &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Otherwise, we can use the traditional Docker build commands shown above in separate Jenkins stages, running on different sets of Jenkins worker nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  GitHub CI Pipeline for Building Multi-arch Container Images
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.infracloud.io/blogs/github-actions-demystified/"&gt;GitHub Actions&lt;/a&gt; also supports building multi-arch container images, using QEMU CPU emulation in the background.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker-multi-arch-push&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;main'&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;docker-build-push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-20.04&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout Code&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;


      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up QEMU&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/setup-qemu-action@v2&lt;/span&gt;


      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Docker Buildx&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/setup-buildx-action@v2&lt;/span&gt;


      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Login to docker.io container registry&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/login-action@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKER_USER }}&lt;/span&gt;
          &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKER_TOKEN }}&lt;/span&gt;


      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and push&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker_build&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/build-push-action@v3&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
          &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./Dockerfile&lt;/span&gt;
          &lt;span class="na"&gt;platforms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;linux/amd64,linux/arm64&lt;/span&gt;
          &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;username/custom-nginx:latest&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to Promote your Multi-arch Image to Higher Environments?
&lt;/h2&gt;

&lt;p&gt;Promoting a Docker multi-arch image requires a few extra steps, as the &lt;code&gt;docker pull&lt;/code&gt; command only pulls a single image based on the host machine's CPU architecture. To promote multi-arch Docker images, we need to pull the image for each CPU architecture one by one using &lt;code&gt;--platform=linux/$ARCH&lt;/code&gt;, then create a manifest list and push everything to the new registry server. To avoid these complex steps, we can leverage the following tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/containers/skopeo"&gt;Skopeo&lt;/a&gt; or &lt;a href="https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane.md"&gt;Crane&lt;/a&gt; can promote our multi-arch image from one account to another with a single command. In the background, these tools use the &lt;a href="https://docs.docker.com/engine/api/"&gt;Docker API&lt;/a&gt; to fetch all the architecture-specific images, then recreate the manifest and push the images along with it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;skopeo login &lt;span class="nt"&gt;--username&lt;/span&gt; &lt;span class="nv"&gt;$USER&lt;/span&gt; docker.io

&lt;span class="nv"&gt;$ &lt;/span&gt;skopeo copy &lt;span class="nt"&gt;-a&lt;/span&gt; docker://dev-account/custom-nginx:v1 docker://prod-account/custom-nginx:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What if you want to use only Docker commands to promote this image to a higher environment (e.g., production)?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;####### Pull DEV images ###########&lt;/span&gt;
docker pull &lt;span class="nt"&gt;--platform&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;amd64 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_IMAGE_NAME_DEV&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_TAG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
docker pull &lt;span class="nt"&gt;--platform&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;arm64 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_IMAGE_NAME_DEV&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_TAG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;


&lt;span class="c"&gt;####### Tag DEV image with STAGE ###########&lt;/span&gt;
docker tag &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_IMAGE_NAME_DEV&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_TAG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_IMAGE_NAME_STAGE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_TAG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nt"&gt;-amd64&lt;/span&gt;


docker tag &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_IMAGE_NAME_DEV&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_TAG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_IMAGE_NAME_STAGE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_TAG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nt"&gt;-arm64&lt;/span&gt;


&lt;span class="c"&gt;####### Push amd64 and arm64 image to STAGE ###########&lt;/span&gt;
docker push &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_IMAGE_NAME_STAGE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_TAG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nt"&gt;-amd64&lt;/span&gt;
docker push &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_IMAGE_NAME_STAGE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_TAG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nt"&gt;-arm64&lt;/span&gt;


&lt;span class="c"&gt;####### Create manifest and push to STAGE ###########&lt;/span&gt;
docker manifest create &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_IMAGE_NAME_STAGE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_TAG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--amend&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_IMAGE_NAME_STAGE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_TAG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nt"&gt;-amd64&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--amend&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_IMAGE_NAME_STAGE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_TAG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nt"&gt;-arm64&lt;/span&gt;


docker manifest push &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_IMAGE_NAME_STAGE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_TAG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to Scan Multi-arch Images for Vulnerabilities?
&lt;/h2&gt;

&lt;p&gt;We can use a tool like &lt;a href="https://github.com/aquasecurity/trivy"&gt;Trivy&lt;/a&gt;, &lt;a href="https://github.com/anchore/grype"&gt;Grype&lt;/a&gt;, or &lt;a href="https://docs.docker.com/engine/scan/"&gt;Docker scan&lt;/a&gt; for image scanning, but we have to pull the image for each architecture and scan it separately, because by default the &lt;code&gt;docker pull&lt;/code&gt; command fetches only the image that matches the host CPU. We can pass &lt;code&gt;--platform={amd64, arm64}&lt;/code&gt; to &lt;code&gt;docker pull&lt;/code&gt; to fetch images for other CPU architectures. Here is how we can do that:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;####### Pull amd64 image and scan ###########&lt;/span&gt;
docker pull &lt;span class="nt"&gt;--platform&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;amd64 nginx:latest
trivy image nginx:latest


&lt;span class="c"&gt;####### Pull arm64 image and scan ###########&lt;/span&gt;
docker pull &lt;span class="nt"&gt;--platform&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;arm64 nginx:latest
trivy image nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
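&lt;p&gt;The pull-and-scan steps above can be wrapped in a small loop so that every published architecture gets scanned. A minimal sketch, assuming &lt;code&gt;docker&lt;/code&gt; and &lt;code&gt;trivy&lt;/code&gt; are installed; the image name is illustrative:&lt;/p&gt;

```shell
#!/usr/bin/env sh
# Pull and scan each architecture of a multi-arch image one by one.
IMAGE="nginx:latest"
for PLATFORM in linux/amd64 linux/arm64; do
    echo "### Scanning $IMAGE for $PLATFORM ###"
    docker pull --platform "$PLATFORM" "$IMAGE"
    trivy image "$IMAGE"
done
```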



&lt;h2&gt;
  
  
  Some Caveats of Using Multi-arch Containers
&lt;/h2&gt;

&lt;p&gt;There are prominent benefits of using multi-arch containers, but there are some caveats that you should certainly be aware of before taking the leap.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storing images for multiple architectures takes extra registry storage.&lt;/li&gt;
&lt;li&gt;Building multi-arch images takes longer; building arm64 images under QEMU emulation in particular consumes a lot of time and resources.&lt;/li&gt;
&lt;li&gt;Running binaries under emulation on a different CPU is significantly slower than running them natively.&lt;/li&gt;
&lt;li&gt;There are still some issues with buildx when building arm64 images, such as the base image not being available for arm64, performing &lt;a href="https://github.com/docker/buildx/issues/1335"&gt;sudo level access&lt;/a&gt;, or building a cross-compiled &lt;a href="https://github.com/emk/rust-musl-builder"&gt;statically linked binary, which requires an extra step&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Each architecture's image needs to be scanned for vulnerabilities separately.&lt;/li&gt;
&lt;li&gt;Buildx multi-arch builds are only supported on the amd64 CPU architecture.&lt;/li&gt;
&lt;/ul&gt;
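&lt;p&gt;For the base image caveat above, we can check which architectures an image publishes before depending on it. A quick sketch using the standard Docker CLI; the image name is illustrative:&lt;/p&gt;

```shell
IMAGE="nginx:latest"   # illustrative base image
# List the CPU architectures available in the image's manifest list
docker manifest inspect "$IMAGE" | grep '"architecture"'
# Or, with buildx, print the full platform list
docker buildx imagetools inspect "$IMAGE"
```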

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this blog, we saw what multi-arch containers are and their use cases. We integrated multi-arch builds with Jenkins and GitHub CI with sample code, covered several ways to promote and scan multi-arch container images, and finally learned the caveats of using multi-arch containers.&lt;/p&gt;

&lt;p&gt;Using multi-arch images gives us the ability to build once and run everywhere. We can migrate from one CPU architecture to another with ease. Also, by deploying images that are optimized for specific architectures, we can achieve better performance and reduce resource costs.&lt;/p&gt;

&lt;p&gt;Thanks for reading this post. We hope it was informative and engaging for you. We would love to hear your thoughts on this post, so do start a conversation on &lt;a href="https://www.linkedin.com/in/alok-maurya-091ba682/"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Looking for help with building your DevOps strategy or want to outsource DevOps to the experts? Learn why so many startups &amp;amp; enterprises consider us as one of the &lt;a href="https://www.infracloud.io/devops-consulting-services/"&gt;best DevOps consulting &amp;amp; services companies&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/"&gt;Multi-arch build and images, the simple way&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://danmanners.com/posts/2022-01-buildah-multi-arch/"&gt;Building Multi-Architecture Containers with Buildah&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@artur.klauser/building-multi-architecture-docker-images-with-buildx-27d80f7e2408"&gt;Building Multi-Architecture Docker Images With Buildx&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/multiarch/qemu-user-static"&gt;Multiarch qemu-user-static&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://speedscale.com/blog/how-to-build-multi-arch-docker-images/"&gt;How to Build Multi-Arch Docker Images&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/moby/buildkit/blob/master/docs/multi-platform.md"&gt;Building multi-platform images&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/blog/multi-arch-images/"&gt;Building Multi-Arch Images for Arm and x86 with Docker Desktop&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Implement DevSecOps to Secure your CI/CD pipeline</title>
      <dc:creator>Alok</dc:creator>
      <pubDate>Wed, 28 Sep 2022 06:16:37 +0000</pubDate>
      <link>https://forem.com/alokm/implement-devsecops-to-secure-your-cicd-pipeline-52e7</link>
      <guid>https://forem.com/alokm/implement-devsecops-to-secure-your-cicd-pipeline-52e7</guid>
      <description>&lt;p&gt;Before understanding DevSecOps, let’s understand what is DevOps. DevOps is the combination of cultural philosophies, practices, and tools that increase an organization’s ability to deliver applications and services at high velocity.&lt;/p&gt;

&lt;p&gt;In fast-moving projects, security often lags behind and is given low priority, which may lead to buggy code and hacks. Let’s see how we can reduce the risk of attack by integrating security into our DevOps pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is DevSecOps (DevOps + Security)?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/devsecops-consulting-services/"&gt;DevSecOps&lt;/a&gt; is a cultural approach where every team and person working on an application considers security throughout it's lifecycle. It ensures that security is implemented at every stage of the application software development lifecycle (SDLC) by incorporating required security checks embedded into &lt;a href="https://dev.to/ci-cd-consulting/"&gt;CI/CD automation&lt;/a&gt; using appropriate tools.&lt;/p&gt;

&lt;p&gt;For example, let's see how the DevSecOps process can detect and prevent zero-day vulnerabilities like &lt;a href="https://www.wsj.com/articles/what-is-the-log4j-vulnerability-11639446180" rel="noopener noreferrer"&gt;log4j&lt;/a&gt;. Using the &lt;a href="https://github.com/anchore/syft" rel="noopener noreferrer"&gt;Syft&lt;/a&gt; tool, we can generate an SBOM for our application code and pass this SBOM report to &lt;a href="https://github.com/anchore/grype" rel="noopener noreferrer"&gt;Grype&lt;/a&gt;, which can detect such new vulnerabilities and report whether a fix or patch is available. As these steps are part of our CI/CD, we can alert our developers and security team to remediate the issue as soon as it is identified.&lt;/p&gt;
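&lt;p&gt;The steps above can be sketched as a short pipeline snippet. The paths and the &lt;code&gt;--fail-on&lt;/code&gt; threshold are illustrative, and it assumes Syft and Grype are installed:&lt;/p&gt;

```shell
#!/usr/bin/env sh
APP_DIR="./app"              # path to the application source tree (illustrative)
SBOM="app.sbom.cdx.json"

# Generate an SBOM for the application source in CycloneDX JSON format
syft "dir:$APP_DIR" -o "cyclonedx-json=$SBOM"

# Scan the SBOM and fail the pipeline on high (or worse) severity findings
grype "sbom:$SBOM" --fail-on high
```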

&lt;h3&gt;
  
  
  What are the benefits of using DevSecOps?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Finds vulnerabilities and bugs at an earlier stage of development&lt;/li&gt;
&lt;li&gt;  Streamlined compliance&lt;/li&gt;
&lt;li&gt;  Speedy recovery&lt;/li&gt;
&lt;li&gt;  Secure supply chain&lt;/li&gt;
&lt;li&gt;  Cost saving&lt;/li&gt;
&lt;li&gt;  Can include AI-based monitoring for detecting anomalies&lt;/li&gt;
&lt;li&gt;  Reduces the attack surface and increases confidence&lt;/li&gt;
&lt;li&gt;  Full visibility of potential threats and possible ways to remediate them&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to make security culture your default state?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Unless you’ve included security in every employee’s onboarding, creating a widespread security culture mindset will be challenging. Employees will need to think differently, behave differently, and eventually turn those changes into habits so that security becomes a natural part of their day-to-day work.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Read more about &lt;a href="https://about.gitlab.com/blog/2020/07/15/security-culture-devsecops" rel="noopener noreferrer"&gt;GitLab's DevSecOps security culture&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What does DevSecOps CI/CD pipeline look like?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2F81fbed5f6b2223cbb02b799bbd6216860c2d0c32%2Fe0ecd%2Fassets%2Fimg%2Fblog%2Fdevsecops-pipeline%2Fdevsecops-pipeline-diagram.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2F81fbed5f6b2223cbb02b799bbd6216860c2d0c32%2Fe0ecd%2Fassets%2Fimg%2Fblog%2Fdevsecops-pipeline%2Fdevsecops-pipeline-diagram.png" alt="DevSecOps CI/CD Pipeline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Jenkins DevSecOps pipeline
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2Fb5b451ecc262c62223800bfc611831e9f79dea74%2Fde2dd%2Fassets%2Fimg%2Fblog%2Fdevsecops-pipeline%2Fjenkins-devsecops-pipeline.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd33wubrfki0l68.cloudfront.net%2Fb5b451ecc262c62223800bfc611831e9f79dea74%2Fde2dd%2Fassets%2Fimg%2Fblog%2Fdevsecops-pipeline%2Fjenkins-devsecops-pipeline.png" alt="Jenkins DevSecOps Pipeline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this post, we will cover the following standard CI/CD stages and how to secure them:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Plan/Design&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Develop&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Build and Code analysis&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Test&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Deploy&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Monitor and Alert&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Alright, let's dive into how to implement DevSecOps! &lt;/p&gt;

&lt;h2&gt;
  
  
  1. Plan/Design
&lt;/h2&gt;

&lt;p&gt;In this stage, we outline where, how, and when integration, deployment, and security testing will be done.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.1 Threat modeling:
&lt;/h3&gt;

&lt;p&gt;Threat modeling effectively puts us in the mindset of an attacker, allowing us to see the application through the attacker's eyes and block attack paths before they can be exploited. We can use &lt;a href="https://owasp.org/www-community/Threat_Modeling" rel="noopener noreferrer"&gt;OWASP threat modeling&lt;/a&gt; or the &lt;a href="https://docs.microsoft.com/en-us/security/compass/applications-services#simple-questions-method" rel="noopener noreferrer"&gt;Simple questions method from Microsoft&lt;/a&gt; to design our threat model. We can also use the &lt;a href="https://owasp.org/www-project-threat-dragon/" rel="noopener noreferrer"&gt;OWASP Threat Dragon&lt;/a&gt; and &lt;a href="https://github.com/cairis-platform/cairis" rel="noopener noreferrer"&gt;Cairis&lt;/a&gt; open source threat modeling tools to create threat model diagrams for our secure development lifecycle.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.2 Secure SDLC
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;A Secure SDLC requires adding security testing at each software development stage, from design to development, to deployment, and beyond. Examples include designing applications to ensure that your architecture will be secure, as well as including security risk factors as part of the initial planning phase.&lt;br&gt;&lt;br&gt;
&lt;a href="https://snyk.io/learn/secure-sdlc" rel="noopener noreferrer"&gt;- Snyk Secure SDLC&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In a secure software development cycle, we should separate our development, testing, and production environments and have authorization processes that control deployment promotion from one environment to another. This reduces the risk of a developer making unauthorized changes and ensures that any modification passes through a standard approval process.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Develop
&lt;/h2&gt;

&lt;p&gt;The Development stage starts with writing code and we can use shift-left security &lt;a href="https://spectralops.io/resources/how-to-choose-a-secret-scanning-solution-to-protect-credentials-in-your-code/" rel="noopener noreferrer"&gt;best practice&lt;/a&gt; which incorporates security thinking in the earliest stages of development. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Install linting tools inside the code editor, such as &lt;a href="https://code.visualstudio.com/" rel="noopener noreferrer"&gt;Visual Studio Code&lt;/a&gt;. One of the most popular linting tools is &lt;a href="https://marketplace.visualstudio.com/items?itemName=SonarSource.sonarlint-vscode" rel="noopener noreferrer"&gt;SonarLint&lt;/a&gt;, which highlights bugs and security vulnerabilities as you write code.&lt;/li&gt;
&lt;li&gt;  Use &lt;a href="https://github.com/pre-commit/pre-commit" rel="noopener noreferrer"&gt;Pre-commit hooks&lt;/a&gt; to prevent committing secrets to the code.&lt;/li&gt;
&lt;li&gt;  Setup &lt;a href="https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches" rel="noopener noreferrer"&gt;Protected branch&lt;/a&gt; and code reviews process.&lt;/li&gt;
&lt;li&gt;  Sign &lt;a href="https://docs.gitlab.com/ee/user/project/repository/gpg_signed_commits/" rel="noopener noreferrer"&gt;git commit with GPG key&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  Always verify the downloaded binary/file hash.&lt;/li&gt;
&lt;li&gt;  Enable 2-factor authentication.&lt;/li&gt;
&lt;/ul&gt;
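&lt;p&gt;A few of the practices above can be enabled with one-time commands on a developer machine. A minimal sketch; the GPG key ID and checksum file name are placeholders:&lt;/p&gt;

```shell
# Enable GPG-signed commits (replace the key ID with your own)
git config --global user.signingkey 3AA5C34371567BD2
git config --global commit.gpgsign true

# Install the hooks defined in .pre-commit-config.yaml (e.g. a secret scanner)
pre-commit install

# Verify a downloaded file against its published SHA-256 checksums
sha256sum -c SHA256SUMS
```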

&lt;h2&gt;
  
  
  3. Build and code analysis
&lt;/h2&gt;

&lt;p&gt;Before building, we need to scan our code for vulnerabilities and secrets. Static code analysis may detect bugs such as possible overflows in the code; such overflows can lead to memory leaks, which degrade system performance by reducing the amount of memory available to each program, and can even be exploited by attackers to access data.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.1 Scan for secrets and credentials
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/Yelp/detect-secrets" rel="noopener noreferrer"&gt;detect-secret&lt;/a&gt; is an enterprise-friendly tool for detecting and preventing secrets in the code base. We can also scan the non-git tracked files. There are other tools as well like &lt;a href="https://github.com/zricethezav/gitleaks" rel="noopener noreferrer"&gt;Gitleaks&lt;/a&gt; which also provide similar functionality.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;detect-secrets scan test_data/ &lt;span class="nt"&gt;--all-files&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
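&lt;p&gt;detect-secrets also supports a baseline workflow, so that findings which have already been audited don't fail every subsequent scan. A sketch, assuming the tool is installed:&lt;/p&gt;

```shell
# Record the current findings as a baseline
detect-secrets scan > .secrets.baseline

# Interactively mark each finding as a real secret or a false positive
detect-secrets audit .secrets.baseline

# In CI, report only secrets that are not in the audited baseline
detect-secrets scan --baseline .secrets.baseline
```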



&lt;h3&gt;
  
  
  3.2 Software Bill of Materials (SBOM)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.linuxfoundation.org/tools/the-state-of-software-bill-of-materials-sbom-and-cybersecurity-readiness/" rel="noopener noreferrer"&gt;SBOM&lt;/a&gt; lets us Identify all software components, libraries, and modules that are running in our environment, even their dependencies. It speeds up response time for new vulnerabilities - including zero-day vulnerabilities like Log4j.&lt;/p&gt;

&lt;p&gt;We can use the following tools to generate SBOM reports.&lt;/p&gt;

&lt;h4&gt;
  
  
  3.2.1 Syft with Grype and Trivy
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://github.com/anchore/syft" rel="noopener noreferrer"&gt;Syft&lt;/a&gt; tool gives container image and filesystem SBOM result in &lt;a href="https://github.com/CycloneDX" rel="noopener noreferrer"&gt;CycloneDX&lt;/a&gt; open source format which can be shared easily. Syft also supports cosign attestations for verifying legit images.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;syft nginx:latest &lt;span class="nt"&gt;-o&lt;/span&gt; cyclonedx-json&lt;span class="o"&gt;=&lt;/span&gt;nginx.sbom.cdx.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So we have generated an SBOM report that shows which libraries and modules are running in our software. Now, let's scan the SBOM report for vulnerabilities using &lt;a href="https://github.com/anchore/grype" rel="noopener noreferrer"&gt;Grype&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;root@laptop ~]# grype sbom:./nginx.sbom.cdx.json | &lt;span class="nb"&gt;head&lt;/span&gt;
 ✔ Vulnerability DB       &lt;span class="o"&gt;[&lt;/span&gt;no update available]
 ✔ Scanned image          &lt;span class="o"&gt;[&lt;/span&gt;157 vulnerabilities]
NAME            INSTALLED           FIXED-IN        TYPE  VULNERABILITY     SEVERITY   
apt             2.2.4                                       deb   CVE-2011-3374     Negligible  
bsdutils        1:2.36.1-8+deb11u1                      deb   CVE-2022-0563     Negligible  
coreutils       8.32-4+b1                                   deb   CVE-2017-18018    Negligible  
coreutils       8.32-4+b1           &lt;span class="o"&gt;(&lt;/span&gt;won&lt;span class="s1"&gt;'t fix)     deb   CVE-2016-2781     Low      
curl            7.74.0-1.3+deb11u1                      deb   CVE-2022-32208    Unknown  
curl            7.74.0-1.3+deb11u1                      deb   CVE-2022-27776    Medium   
curl            7.74.0-1.3+deb11u1  (won'&lt;/span&gt;t fix&lt;span class="o"&gt;)&lt;/span&gt;     deb   CVE-2021-22947    Medium   
curl            7.74.0-1.3+deb11u1  &lt;span class="o"&gt;(&lt;/span&gt;won&lt;span class="s1"&gt;'t fix)     deb   CVE-2021-22946    High     
curl            7.74.0-1.3+deb11u1  (won'&lt;/span&gt;t fix&lt;span class="o"&gt;)&lt;/span&gt;     deb   CVE-2021-22945    Critical  

&lt;span class="c"&gt;# Or we can directly use Grype for SBOM scanning&lt;/span&gt;
grype nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Many of the vulnerabilities reported by SCA tools are neither exploitable nor fixable via regular updates; curl and glibc are common examples. These tools mark them as not fixable or "won't fix".&lt;/p&gt;

&lt;p&gt;The latest version of &lt;a href="https://github.com/aquasecurity/trivy#highlights" rel="noopener noreferrer"&gt;Trivy&lt;/a&gt; can also generate SBOM reports, but it's mostly used for finding vulnerabilities in containers and filesystems.&lt;/p&gt;

&lt;h4&gt;
  
  
  3.2.2 OWASP Dependency-Check
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://owasp.org/www-project-dependency-check/" rel="noopener noreferrer"&gt;OWASP Dependency-Check&lt;/a&gt; a Software Composition Analysis (SCA) tool that attempts to detect publicly disclosed vulnerabilities contained within a project's dependencies. It does this by determining if there is a Common Platform Enumeration (CPE) identifier for a given dependency. If found, it will generate a report linking to the associated CVE entries. We can also publish our SBOM report to &lt;a href="https://www.infracloud.io/blogs/manage-vulnerabilities-dependency-track/" rel="noopener noreferrer"&gt;Dependency-Track&lt;/a&gt; and visualize our software components and their vulnerabilities.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dependency-check.sh &lt;span class="nt"&gt;--scan&lt;/span&gt; /project_path
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we know what type of vulnerabilities are present in our software, we can patch them and make our application safe and secure.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.3 Static Application Security Testing (SAST)
&lt;/h3&gt;

&lt;p&gt;It's a method of debugging code without running the program. It analyzes the code based on predefined rule sets. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/SonarSource/sonarqube" rel="noopener noreferrer"&gt;SonarQube&lt;/a&gt; allows all developers to write cleaner and safer code. It supports lots of programming languages for scanning (Java, Kotlin, Go, JavaScript). It also supports running unit testing for code coverage. It can be easily integrated with Jenkins and Azure DevOps. &lt;a href="https://checkmarx.com/product/cxsast-source-code-scanning/" rel="noopener noreferrer"&gt;Checkmarx&lt;/a&gt;, &lt;a href="https://www.veracode.com/products/binary-static-analysis-sast" rel="noopener noreferrer"&gt;Veracode&lt;/a&gt;, and &lt;a href="https://www.perforce.com/products/klocwork" rel="noopener noreferrer"&gt;Klocwork&lt;/a&gt; also provide similar functionality but these are paid tools.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;SONAR_HOST_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SONARQUBE_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;SONAR_LOGIN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"AuthenticationToken"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;YOUR_REPO&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:/usr/src"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    sonarsource/sonar-scanner-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Source: &lt;a href="https://docs.sonarqube.org/latest/analysis/scan/sonarscanner" rel="noopener noreferrer"&gt;Running SonarScanner from the Docker image&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.4 Unit test
&lt;/h3&gt;

&lt;p&gt;Unit tests check whether individual software components work as expected. They isolate a function or module of code and verify its correctness. We can use tools like &lt;a href="https://github.com/jacoco/jacoco" rel="noopener noreferrer"&gt;JaCoCo&lt;/a&gt; for Java, and &lt;a href="https://mochajs.org/" rel="noopener noreferrer"&gt;Mocha&lt;/a&gt; and &lt;a href="https://jasmine.github.io/" rel="noopener noreferrer"&gt;Jasmine&lt;/a&gt; for NodeJS, to generate unit test reports. We can also send these reports to SonarQube, which shows the code coverage, i.e. the percentage of code covered by test cases.&lt;/p&gt;

&lt;p&gt;Once SAST is done, we can scan our Dockerfile as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.5 Dockerfile static scanning
&lt;/h3&gt;

&lt;p&gt;Always scan the Dockerfile for issues: while writing a Dockerfile, we may miss some of the &lt;a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/" rel="noopener noreferrer"&gt;best practices&lt;/a&gt;, which can lead to vulnerable containers. A few common mistakes we can avoid:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Do not use the &lt;code&gt;latest&lt;/code&gt; docker image tag&lt;/li&gt;
&lt;li&gt; Ensure that a user for the container has been created&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://github.com/bridgecrewio/checkov" rel="noopener noreferrer"&gt;Checkov&lt;/a&gt; or &lt;a href="https://docs.docker.com/engine/scan/#get-a-detailed-scan-report" rel="noopener noreferrer"&gt;docker scan&lt;/a&gt; can be used to scan Dockerfile which follows best practice rules to write Dockerfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:/output bridgecrew/checkov &lt;span class="nt"&gt;-f&lt;/span&gt; /output/Dockerfile &lt;span class="nt"&gt;-o&lt;/span&gt; json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After building a container image, we scan it for vulnerabilities and sign our container image.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.6 Container image scan
&lt;/h3&gt;

&lt;p&gt;Scanning images gives the security state of the container images and lets us take actions that result in a more secure container image. We should avoid installing unnecessary packages and use multi-stage builds. This keeps the image clean and safe. Scanning of images should be done in both development and production environments.&lt;/p&gt;

&lt;p&gt;Below are a few well-known open-source and paid tools that we can use for container scanning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Open source:&lt;/strong&gt; &lt;a href="https://github.com/aquasecurity/trivy" rel="noopener noreferrer"&gt;Trivy&lt;/a&gt;, &lt;a href="https://github.com/anchore/grype" rel="noopener noreferrer"&gt;Grype&lt;/a&gt;, and &lt;a href="https://github.com/quay/clair" rel="noopener noreferrer"&gt;Clair&lt;/a&gt; are widely used open source tools for container scanning.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;a href="https://docs.docker.com/engine/scan/#how-to-scan-images" rel="noopener noreferrer"&gt;Docker scan&lt;/a&gt;:&lt;/strong&gt; It uses &lt;a href="https://snyk.io/product/container-vulnerability-management/" rel="noopener noreferrer"&gt;Snyk&lt;/a&gt; as the backend engine for scanning. It can also scan Dockerfile.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;a href="https://www.aquasec.com/products/container-security/" rel="noopener noreferrer"&gt;Aqua scan&lt;/a&gt;:&lt;/strong&gt; Provides container image scanning but it has one unique feature &lt;a href="https://blog.aquasec.com/dynamic-container-analysis" rel="noopener noreferrer"&gt;Aqua DTA (Dynamic Threat Analysis)&lt;/a&gt; for containers which monitors behavioral patterns and Indicators of Compromise (IoCs) such as malicious behavior and network activity, to detect container escapes, malware, cryptocurrency miners, code injection backdoors and additional threats.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;trivy image nginx:latest
&lt;span class="c"&gt;# OR&lt;/span&gt;
docker scan nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3.7 Container image signing and verifying
&lt;/h3&gt;

&lt;p&gt;If the container build process is compromised, users may accidentally use a malicious image instead of the actual container image. Signing and verifying containers ensures we are always running the actual container image.&lt;/p&gt;

&lt;p&gt;Using &lt;a href="https://github.com/GoogleContainerTools/distroless" rel="noopener noreferrer"&gt;distroless&lt;/a&gt; images not only reduces the size of the container image it also reduces the surface attack. The need for container image signing is because even with the distroless images there is a chance of facing some security threats such as receiving a malicious image. We can use &lt;a href="https://github.com/sigstore/cosign" rel="noopener noreferrer"&gt;cosign&lt;/a&gt; or &lt;a href="https://github.com/containers/skopeo" rel="noopener noreferrer"&gt;skopeo&lt;/a&gt; for container signing and verifying. You can read more about &lt;a href="https://dev.to/blogs/secure-containers-cosign-distroless-images/"&gt;securing containers with Cosign and Distroless Images in this blog&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cosign sign &lt;span class="nt"&gt;--key&lt;/span&gt; cosign.key custom-nginx:latest
cosign verify &lt;span class="nt"&gt;--key&lt;/span&gt; cosign.pub custom-nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
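&lt;p&gt;The &lt;code&gt;cosign.key&lt;/code&gt;/&lt;code&gt;cosign.pub&lt;/code&gt; pair used above can be generated once and stored securely. A sketch, assuming cosign is installed:&lt;/p&gt;

```shell
# Generate a password-protected signing key pair (cosign.key, cosign.pub).
# COSIGN_PASSWORD avoids the interactive prompt; use a real password in practice.
COSIGN_PASSWORD="" cosign generate-key-pair
```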



&lt;h3&gt;
  
  
  3.8 Container image validation test
&lt;/h3&gt;

&lt;p&gt;Validation tests add an extra layer of security by verifying that the container image works as expected and has all required files with correct permissions. We can use &lt;a href="https://github.com/aelsabbahy/goss/tree/master/extras/dgoss" rel="noopener noreferrer"&gt;dgoss&lt;/a&gt; to run validation tests against container images.&lt;/p&gt;

&lt;p&gt;For example, let's run a validation test for the nginx image: check that it listens on port 80 and has internet access, and verify the file permissions of &lt;code&gt;/etc/nginx/nginx.conf&lt;/code&gt; and the nginx user's shell in the container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dgoss edit nginx
goss add port 80
goss add http https://google.com
goss add file /etc/nginx/nginx.conf
goss add user nginx

&lt;span class="c"&gt;# Once we exit it will copy the goss.yaml from the container to the current directory and we can modify it as per our validation.&lt;/span&gt;
&lt;span class="c"&gt;# Validate&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;root@home ~]# dgoss run &lt;span class="nt"&gt;-p&lt;/span&gt; 8000:80 nginx
INFO: Starting docker container
INFO: Container ID: 5f8d9e20
INFO: Sleeping &lt;span class="k"&gt;for &lt;/span&gt;0.2
INFO: Container health
INFO: Running Tests
Port: tcp:80: listening: matches expectation: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
Port: tcp:80: ip: matches expectation: &lt;span class="o"&gt;[[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0"&lt;/span&gt;&lt;span class="o"&gt;]]&lt;/span&gt;
HTTP: https://google.com: status: matches expectation: &lt;span class="o"&gt;[&lt;/span&gt;200]
File: /etc/nginx/nginx.conf: exists: matches expectation: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
File: /etc/nginx/nginx.conf: mode: matches expectation: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0644"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
File: /etc/nginx/nginx.conf: owner: matches expectation: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"root"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
File: /etc/nginx/nginx.conf: group: matches expectation: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"root"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
User: nginx: uid: matches expectation: &lt;span class="o"&gt;[&lt;/span&gt;101]
User: nginx: gid: matches expectation: &lt;span class="o"&gt;[&lt;/span&gt;101]
User: nginx: home: matches expectation: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/nonexistent"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
User: nginx: &lt;span class="nb"&gt;groups&lt;/span&gt;: matches expectation: &lt;span class="o"&gt;[[&lt;/span&gt;&lt;span class="s2"&gt;"nginx"&lt;/span&gt;&lt;span class="o"&gt;]]&lt;/span&gt;
User: nginx: shell: matches expectation: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/bin/false"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
Total Duration: 0.409s
Count: 13, Failed: 0, Skipped: 0
INFO: Deleting container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can also use &lt;a href="https://github.com/aelsabbahy/goss/tree/master/extras/kgoss" rel="noopener noreferrer"&gt;kgoss&lt;/a&gt; to do validation tests on pods.&lt;/p&gt;

&lt;p&gt;So far, we have built and scanned the container image. Before deploying, let's also test the application and scan the deployment manifests or Helm chart.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Test
&lt;/h2&gt;

&lt;p&gt;Testing ensures that the application works as expected and has no bugs or vulnerabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  4.1 Smoke test
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.geeksforgeeks.org/smoke-testing-software-testing/" rel="noopener noreferrer"&gt;Smoke tests&lt;/a&gt; are small but check critical components and functionality of the application. When implemented, It runs on every application build to verify critical functionality passes before integration and end-to-end testing can take place which can be time-consuming. Smoke tests help create fast feedback loops that are vital to the software development life cycle.&lt;/p&gt;

&lt;p&gt;For example, in a smoke test we can run a curl command against an API endpoint to check the HTTP response code and latency.&lt;/p&gt;
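&lt;p&gt;A minimal sketch of such a check (the &lt;code&gt;/healthz&lt;/code&gt; endpoint and host below are hypothetical) that fails the pipeline on a non-200 response:&lt;/p&gt;

```shell
# check_status: fail unless the given HTTP status code is 200.
check_status() {
  if [ "$1" = "200" ]; then
    echo "smoke test passed"
  else
    echo "smoke test failed: HTTP $1" >&2
    return 1
  fi
}

# In CI, feed it the real status code, e.g.:
#   check_status "$(curl -s -o /dev/null -w '%{http_code}' https://api.example.com/healthz)"
check_status 200
```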

&lt;h3&gt;
  
  
  4.2 API testing
&lt;/h3&gt;

&lt;p&gt;Today's applications might expose hundreds of highly valuable endpoints that are very appealing to hackers. Ensuring your APIs are secure before, during, and after production is crucial. Hence we need to test our APIs.&lt;/p&gt;

&lt;p&gt;API testing reports what type of authentication is required, whether sensitive data is encrypted in transit, and whether SQL injection can be used to bypass the login phase.&lt;/p&gt;

&lt;p&gt;We can use &lt;a href="https://jmeter.apache.org/usermanual/get-started.html#non_gui" rel="noopener noreferrer"&gt;Jmeter&lt;/a&gt;, &lt;a href="https://github.com/Blazemeter/taurus" rel="noopener noreferrer"&gt;Taurus&lt;/a&gt;, &lt;a href="https://learning.postman.com/docs/integrations/ci-integrations/#configuring-ci-integration" rel="noopener noreferrer"&gt;Postman&lt;/a&gt;, and &lt;a href="https://www.soapui.org/tools/soapui/" rel="noopener noreferrer"&gt;SoapUI&lt;/a&gt; tools for API testing. Below is a small example using Jmeter where &lt;code&gt;test.jmx&lt;/code&gt; contains the API test cases.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;jmeter &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nt"&gt;--t&lt;/span&gt; test.jmx &lt;span class="nt"&gt;-l&lt;/span&gt; result.jtl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4.3 Dynamic application security testing (DAST)
&lt;/h3&gt;

&lt;p&gt;DAST is a web application security test that finds security issues in the running application. DAST tools are also known as web application vulnerability scanners; they can detect common vulnerabilities like SQL injection, cross-site scripting, security misconfigurations, and other common issues detailed in the &lt;a href="https://owasp.org/www-project-top-ten/" rel="noopener noreferrer"&gt;OWASP Top 10&lt;/a&gt;. We can use &lt;a href="https://www.hcltechsw.com/appscan" rel="noopener noreferrer"&gt;HCL AppScan&lt;/a&gt;, &lt;a href="https://owasp.org/www-project-zap/" rel="noopener noreferrer"&gt;ZAP&lt;/a&gt;, &lt;a href="https://portswigger.net/burp" rel="noopener noreferrer"&gt;Burp Suite&lt;/a&gt;, and &lt;a href="https://www.invicti.com/" rel="noopener noreferrer"&gt;Invicti&lt;/a&gt;, which find vulnerabilities in the running web application. Here is a &lt;a href="https://owasp.org/www-community/Vulnerability_Scanning_Tools" rel="noopener noreferrer"&gt;list of DAST scanning tools&lt;/a&gt; provided by OWASP. We can easily integrate these tools with our CI/CD pipeline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;zap.sh &lt;span class="nt"&gt;-cmd&lt;/span&gt; &lt;span class="nt"&gt;-quickurl&lt;/span&gt; http://example.com/ &lt;span class="nt"&gt;-quickprogress&lt;/span&gt; &lt;span class="nt"&gt;-quickout&lt;/span&gt; example.report.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Deploy
&lt;/h2&gt;

&lt;p&gt;Deployment can be of infrastructure or of the application; in either case, we should scan our deployment files first. We can also add a manual trigger, where the pipeline waits for a user's approval before proceeding to the next stage, or keep the trigger automated.&lt;/p&gt;
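&lt;p&gt;For example, in GitLab CI a manual trigger is simply a job with &lt;code&gt;when: manual&lt;/code&gt; (the job name and script below are illustrative, assuming a GitLab-based pipeline):&lt;/p&gt;

```yaml
deploy-prod:
  stage: deploy
  script:
    - kubectl apply -f deployment.yaml
  when: manual   # the pipeline pauses here until a user approves the job
```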

&lt;h3&gt;
  
  
  5.1 Static scan of Kubernetes manifest files or Helm charts
&lt;/h3&gt;

&lt;p&gt;It is always a good practice to scan your Kubernetes deployment or Helm chart before deploying. We can use &lt;a href="https://github.com/bridgecrewio/checkov" rel="noopener noreferrer"&gt;Checkov&lt;/a&gt; to scan Kubernetes manifests and identify security and configuration issues; it also supports Helm chart scanning. We can also use &lt;a href="https://github.com/tenable/terrascan" rel="noopener noreferrer"&gt;Terrascan&lt;/a&gt; and &lt;a href="https://github.com/stackrox/kube-linter" rel="noopener noreferrer"&gt;KubeLinter&lt;/a&gt; to scan the Kubernetes manifests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:/output bridgecrew/checkov &lt;span class="nt"&gt;-f&lt;/span&gt; /output/keycloak-deploy.yml &lt;span class="nt"&gt;-o&lt;/span&gt; json
&lt;span class="c"&gt;# For Helm&lt;/span&gt;
docker run &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:/output bridgecrew/checkov &lt;span class="nt"&gt;-d&lt;/span&gt; /output/ &lt;span class="nt"&gt;--framework&lt;/span&gt; helm &lt;span class="nt"&gt;-o&lt;/span&gt; json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5.2 Pre-deploy policy checks on Kubernetes manifest YAML files
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/kyverno/kyverno/" rel="noopener noreferrer"&gt;Kyverno&lt;/a&gt; adds an extra layer of security where only the allowed type of manifest is deployed onto kubernetes, otherwise, it will reject or we can set &lt;code&gt;validationFailureAction&lt;/code&gt; to audit which only logs the policy violation message for reporting. &lt;a href="https://www.kubewarden.io/" rel="noopener noreferrer"&gt;Kubewarden&lt;/a&gt; and &lt;a href="https://github.com/open-policy-agent/gatekeeper" rel="noopener noreferrer"&gt;Gatekeeper&lt;/a&gt; are alternative tools available to enforce policies on Kubernetes CRD.&lt;/p&gt;

&lt;p&gt;Here is a simple &lt;a href="https://kyverno.io/policies/" rel="noopener noreferrer"&gt;Kyverno policy&lt;/a&gt; to &lt;a href="https://kyverno.io/policies/best-practices/disallow_latest_tag/disallow_latest_tag/" rel="noopener noreferrer"&gt;disallow the image's latest tag&lt;/a&gt;.&lt;/p&gt;
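&lt;p&gt;A condensed sketch of that policy (see the linked page for the full, authoritative version) looks like this:&lt;/p&gt;

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: enforce   # use 'audit' to only log violations
  rules:
    - name: validate-image-tag
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Using a mutable image tag such as ':latest' is not allowed."
        pattern:
          spec:
            containers:
              - image: "!*:latest"
```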

&lt;h3&gt;
  
  
  5.3 kube-bench for CIS scan
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/aquasecurity/kube-bench" rel="noopener noreferrer"&gt;kube-bench&lt;/a&gt; checks whether Kubernetes is deployed securely by running the checks documented in the CIS Kubernetes Benchmark. We can &lt;a href="https://dev.to/blogs/securing-kubernetes-cluster-kubescape-kubebench/"&gt;deploy kube-bench&lt;/a&gt; as a Job that runs daily and consume its report in CI/CD to pass or fail the pipeline based on the level of severity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; eks-job.yaml
kubectl logs kube-bench-pod-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5.4 IaC scanning
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://github.com/bridgecrewio/checkov" rel="noopener noreferrer"&gt;Checkov&lt;/a&gt;, &lt;a href="https://github.com/tenable/terrascan" rel="noopener noreferrer"&gt;Terrascan,&lt;/a&gt; and &lt;a href="https://github.com/Checkmarx/kics" rel="noopener noreferrer"&gt;Kics&lt;/a&gt; can be used to scan our Infrastructure code. It supports Terraform, Cloudformation, and Azure ARM resources.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://github.com/gruntwork-io/terratest" rel="noopener noreferrer"&gt;Terratest&lt;/a&gt; can be used to test infrastructure in real-time.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
terraform plan &lt;span class="nt"&gt;-out&lt;/span&gt; tf.plan
terraform show &lt;span class="nt"&gt;-json&lt;/span&gt; tf.plan | jq &lt;span class="s1"&gt;'.'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; tf.json
checkov &lt;span class="nt"&gt;-f&lt;/span&gt; tf.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After scanning the Kubernetes deployment and running kube-bench, we can deploy our application and start the testing stage.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Monitoring and Alerting
&lt;/h2&gt;

&lt;p&gt;Monitoring and alerting is the process of collecting logs and metrics about everything happening in our infrastructure and sending notifications when a metric crosses its threshold value.&lt;/p&gt;

&lt;h4&gt;
  
  
  6.1 Metrics monitoring
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://github.com/prometheus/prometheus" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt;: It's a widely used open source tool for metrics monitoring. It provides &lt;a href="https://prometheus.io/docs/instrumenting/exporters/" rel="noopener noreferrer"&gt;various exporters&lt;/a&gt; that can be used for monitoring systems or application metrics. We can also use &lt;a href="https://github.com/grafana/grafana" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt; to visualize prometheus metrics.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://www.nagios.org/projects/nagios-core/" rel="noopener noreferrer"&gt;Nagios&lt;/a&gt; and &lt;a href="https://github.com/zabbix/zabbix" rel="noopener noreferrer"&gt;Zabbix&lt;/a&gt;: These are open source software tools to monitor IT infrastructures such as networks, servers, virtual machines, and cloud services.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.sensu.io/sensu-go/latest/" rel="noopener noreferrer"&gt;Sensu Go&lt;/a&gt;: It is a complete solution for monitoring and observability at scale.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  6.2 Log monitoring
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://opensearch.org/" rel="noopener noreferrer"&gt;OpenSearch&lt;/a&gt;/&lt;a href="https://github.com/elastic/elasticsearch" rel="noopener noreferrer"&gt;Elasticsearch&lt;/a&gt;: It is a real-time distributed and analytic engine that helps in performing various kinds of search operations.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://www.graylog.org/products/open-source" rel="noopener noreferrer"&gt;Graylog&lt;/a&gt;: It provides centralized log management functionality for collecting, storing, and analyzing data.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://grafana.com/oss/loki/" rel="noopener noreferrer"&gt;Grafana Loki&lt;/a&gt;: It's a lightweight log aggregation system designed to store and query logs from all your applications and infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  6.3 Alerting
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://prometheus.io/docs/alerting/latest/alertmanager/" rel="noopener noreferrer"&gt;Prometheus Alertmanager&lt;/a&gt;: The Alertmanager handles alerts sent by client applications such as the Prometheus server.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://github.com/grafana/oncall" rel="noopener noreferrer"&gt;Grafana OnCall&lt;/a&gt;: Developer-friendly incident response with phone calls, SMS, slack, and telegram notifications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A security-focused logging and monitoring policy helps prevent sensitive information from being logged in plain text. We can write a test case in our logging system that looks for certain patterns of data, for example, a regex that matches sensitive information, so that leaks are detected in lower environments.&lt;/p&gt;
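&lt;p&gt;As a minimal sketch (the log lines and patterns below are made up), such a check can be a simple regex scan over the logs:&lt;/p&gt;

```shell
# Sample log lines (illustrative only).
logs='user=alice action=login
password=hunter2 action=login
token=abcd1234 action=api-call'

# Count lines that appear to leak credentials; a non-zero count
# should fail the pipeline or raise an alert.
leaks=$(printf '%s\n' "$logs" | grep -Ec '(password|token|secret|api[_-]?key)=')
echo "leaking lines: $leaks"
```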

&lt;p&gt;Application performance monitoring (&lt;a href="https://www.elastic.co/observability/application-performance-monitoring" rel="noopener noreferrer"&gt;APM&lt;/a&gt;) improves visibility into a distributed microservices architecture. APM data can help enhance software security by giving a full view of an application. &lt;a href="https://www.dynatrace.com/news/blog/what-is-distributed-tracing/" rel="noopener noreferrer"&gt;Distributed tracing&lt;/a&gt; tools like &lt;a href="https://github.com/openzipkin/zipkin" rel="noopener noreferrer"&gt;Zipkin&lt;/a&gt; and &lt;a href="https://github.com/jaegertracing/jaeger" rel="noopener noreferrer"&gt;Jaeger&lt;/a&gt; stitch the logs of a request together, bringing full visibility of the request from start to end. This speeds up the response time for new bugs or attacks.&lt;/p&gt;

&lt;p&gt;All cloud providers have their own monitoring toolsets, and more tools are accessible from their marketplaces. There are also paid monitoring tool providers like &lt;a href="https://newrelic.com/" rel="noopener noreferrer"&gt;New Relic&lt;/a&gt;, &lt;a href="https://www.datadoghq.com/" rel="noopener noreferrer"&gt;Datadog&lt;/a&gt;, &lt;a href="https://www.appdynamics.com/" rel="noopener noreferrer"&gt;AppDynamics&lt;/a&gt;, and &lt;a href="https://www.splunk.com/" rel="noopener noreferrer"&gt;Splunk&lt;/a&gt; that provide all types of monitoring.&lt;/p&gt;

&lt;h3&gt;
  
  
  6.4 Security information and event management (SIEM)
&lt;/h3&gt;

&lt;p&gt;Security information and event management (&lt;a href="https://www.splunk.com/en_us/data-insider/what-is-siem.html" rel="noopener noreferrer"&gt;SIEM&lt;/a&gt;) offers real-time monitoring and analysis of events, as well as tracking and logging of security data for compliance or auditing purposes. &lt;a href="https://www.splunk.com/en_us/data-insider/what-is-siem.html" rel="noopener noreferrer"&gt;Splunk&lt;/a&gt;, &lt;a href="https://www.elastic.co/security/siem" rel="noopener noreferrer"&gt;Elastic SIEM&lt;/a&gt;, and &lt;a href="https://github.com/wazuh/wazuh" rel="noopener noreferrer"&gt;Wazuh&lt;/a&gt; give automated detection of suspicious activity with behavior-based rules and can also detect anomalies using prebuilt ML jobs.&lt;/p&gt;

&lt;h3&gt;
  
  
  6.5 Auditing
&lt;/h3&gt;

&lt;p&gt;After deployment, visibility comes from the level of auditing that has been put in place on the application and infrastructure. The goal is to have auditing at a level that lets you feed the needed data into a security tool. We can enable audits on the AWS cloud using &lt;a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html" rel="noopener noreferrer"&gt;CloudTrail&lt;/a&gt; and on Azure with &lt;a href="https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/platform-logs-overview" rel="noopener noreferrer"&gt;platform logs&lt;/a&gt;. For auditing applications, we can enable built-in audit logs and send the audit data to a logging tool like &lt;a href="https://www.elastic.co/enterprise-search" rel="noopener noreferrer"&gt;Elasticsearch&lt;/a&gt; using &lt;a href="https://www.elastic.co/beats/auditbeat" rel="noopener noreferrer"&gt;Auditbeat&lt;/a&gt;, or to &lt;a href="https://www.splunk.com/" rel="noopener noreferrer"&gt;Splunk&lt;/a&gt;, and create an auditing dashboard.&lt;/p&gt;

&lt;h3&gt;
  
  
  6.6 Kubernetes runtime security monitoring
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://falco.org/" rel="noopener noreferrer"&gt;Falco&lt;/a&gt; is a cloud native Kubernetes threat detection tool. It can detect unexpected behavior, intrusions, and data theft in real time. In the backend, it uses Linux eBPF technology to trace your system and applications at runtime. For example, it can detect if someone tries to read a secret file inside a container, access a pod as a root user, etc, and trigger a webhook or send logs to the monitoring system. There are similar tools like &lt;a href="https://github.com/cilium/tetragon" rel="noopener noreferrer"&gt;Tetragon&lt;/a&gt;, &lt;a href="https://github.com/kubearmor/KubeArmor" rel="noopener noreferrer"&gt;KubeArmor&lt;/a&gt;, and &lt;a href="https://github.com/aquasecurity/tracee" rel="noopener noreferrer"&gt;Tracee&lt;/a&gt; which also provide Kubernetes runtime security.&lt;/p&gt;

&lt;p&gt;Till now, we have seen what a DevSecOps CI/CD pipeline looks like. Now, let's dive into adding more security layers on top.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best practices to secure your infrastructure for DevSecOps CI/CD
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Network Security
&lt;/h3&gt;

&lt;p&gt;The network is our first line of defense against any kind of attack, so to prevent attacks on our application we should harden our network.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Create a separate private network for the workload (e.g., app and DB) and only allow internet access through a NAT gateway.&lt;/li&gt;
&lt;li&gt;  Set fine-grained inbound and outbound network rules. We can also use &lt;a href="https://cloudcustodian.io/docs/aws/examples/securitygroupsdetectremediate.html" rel="noopener noreferrer"&gt;Cloud Custodian to set security compliance&lt;/a&gt; rules that automatically remove any unwanted network access.&lt;/li&gt;
&lt;li&gt;  Always configure &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html" rel="noopener noreferrer"&gt;Network ACL&lt;/a&gt; (NACL) for subnets in AWS. The best practice would be to block all outbound traffic and then allow the required rules.&lt;/li&gt;
&lt;li&gt;  Use Web Application Firewall (WAF).&lt;/li&gt;
&lt;li&gt;  Enable DDoS protection.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://nmap.org/" rel="noopener noreferrer"&gt;Nmap&lt;/a&gt; and &lt;a href="https://www.wireshark.org/" rel="noopener noreferrer"&gt;Wireshark&lt;/a&gt;, &lt;a href="https://www.tcpdump.org/" rel="noopener noreferrer"&gt;tcpdump&lt;/a&gt; tools can be used to scan networks and packets.&lt;/li&gt;
&lt;li&gt;  Use &lt;a href="https://www.cisco.com/c/en_in/products/security/vpn-endpoint-security-clients/what-is-vpn.html" rel="noopener noreferrer"&gt;VPN&lt;/a&gt; or &lt;a href="https://goteleport.com/blog/ssh-bastion-host/" rel="noopener noreferrer"&gt;Bastion host&lt;/a&gt; for connecting to infrastructure networks.&lt;/li&gt;
&lt;/ul&gt;
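&lt;p&gt;For example (the target address and interface below are placeholders), a quick network check might look like:&lt;/p&gt;

```shell
# Scan the first 1024 TCP ports and detect service versions (placeholder host).
nmap -sV -p 1-1024 203.0.113.10

# Capture HTTPS traffic on eth0 for later inspection in Wireshark.
tcpdump -i eth0 port 443 -w capture.pcap
```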

&lt;h3&gt;
  
  
  Web Application Firewall (WAF)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.cloudflare.com/en-gb/learning/ddos/glossary/web-application-firewall-waf/" rel="noopener noreferrer"&gt;WAF&lt;/a&gt; is a layer7 firewall that protects our web applications against common web exploits (like XSS and SQL injection) and bots that may affect availability, compromise security, or consume excessive resources. Most cloud service provider provides WAF and with a few clicks, we can easily integrate it with our application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/curiefense/curiefense" rel="noopener noreferrer"&gt;Curiefense&lt;/a&gt; is an open source cloud native self-managed WAF tool that can be used to protect all forms of web traffic, services, DDoS, and APIs. We can also use WAF as a service from &lt;a href="https://www.cloudflare.com/waf/" rel="noopener noreferrer"&gt;Cloudflare&lt;/a&gt; and &lt;a href="https://www.imperva.com/products/web-application-firewall-waf/" rel="noopener noreferrer"&gt;Imperva&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Identity Access Management (IAM)
&lt;/h3&gt;

&lt;p&gt;IAM is a centrally defined policy to control access to data, applications, and other network assets. Below are a few methods that help prevent unauthorized access.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Have centralized user management using Active Directory or LDAP.&lt;/li&gt;
&lt;li&gt;  Use RBAC access management.&lt;/li&gt;
&lt;li&gt;  Define fine-grained access policies for AWS IAM roles.&lt;/li&gt;
&lt;li&gt;  Rotate users' access and secret keys periodically.&lt;/li&gt;
&lt;li&gt;  Use &lt;a href="https://goteleport.com/" rel="noopener noreferrer"&gt;Teleport&lt;/a&gt; for centralized connectivity, authentication, authorization, and audit.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://spectralops.io/blog/top-9-secret-management-tools-for-2022/" rel="noopener noreferrer"&gt;Store secrets&lt;/a&gt; into vaults and ensure that it is only accessible to authorized users.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://www.microsoft.com/en-us/insidetrack/implementing-a-zero-trust-security-model-at-microsoft" rel="noopener noreferrer"&gt;Implement Zero trust&lt;/a&gt; within your services.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cloud, server, and application hardening
&lt;/h3&gt;

&lt;p&gt;We can use &lt;a href="https://www.cisecurity.org/cis-benchmarks/" rel="noopener noreferrer"&gt;CIS benchmark&lt;/a&gt; to harden cloud, operating system, and application. It is always a good practice to use a hardened OS as it reduces the attack surface of the server. Most of the cloud providers provide a hardened image or we can create our own custom hardened image.&lt;/p&gt;

&lt;p&gt;Nowadays, most applications run inside containers. We need to harden our applications as well as containers by doing static analysis and container image scanning.&lt;/p&gt;

&lt;p&gt;To protect against viruses, trojans, malware, and other malicious threats, we can install an antivirus like &lt;a href="https://www.crowdstrike.com/products/endpoint-security/falcon-prevent-antivirus/" rel="noopener noreferrer"&gt;Falcon&lt;/a&gt;, &lt;a href="https://www.sentinelone.com/" rel="noopener noreferrer"&gt;SentinelOne&lt;/a&gt;, or &lt;a href="http://www.clamav.net/" rel="noopener noreferrer"&gt;ClamAV&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Server patching
&lt;/h3&gt;

&lt;p&gt;The most common attack vector exploits vulnerabilities in the OS or the applications running on servers. Running regular vulnerability scans against the environments and patching packages regularly reduces this risk.&lt;/p&gt;

&lt;p&gt;We can create an automation pipeline to patch the server using &lt;a href="https://www.theforeman.org/" rel="noopener noreferrer"&gt;Foreman&lt;/a&gt; or &lt;a href="https://www.redhat.com/en/technologies/management/satellite" rel="noopener noreferrer"&gt;Red Hat Satellite&lt;/a&gt; and for scanning, we can use &lt;a href="https://github.com/greenbone/openvas-scanner" rel="noopener noreferrer"&gt;OpenVAS&lt;/a&gt; or &lt;a href="https://www.tenable.com/products/nessus" rel="noopener noreferrer"&gt;Nessus&lt;/a&gt; to get the list of vulnerabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Securing Kubernetes
&lt;/h3&gt;

&lt;p&gt;Kubernetes has become the backbone of modern infrastructure, and to make sure we are running it securely we can use the methods and tools below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Use correct &lt;a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#securitycontext-v1-core" rel="noopener noreferrer"&gt;security-context&lt;/a&gt; in Kubernetes YAML file.&lt;/li&gt;
&lt;li&gt;  Use &lt;a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="noopener noreferrer"&gt;Network Policy&lt;/a&gt; to block all the traffic by default and allow only required traffic.&lt;/li&gt;
&lt;li&gt;  Use Service Mesh (&lt;a href="https://github.com/linkerd/linkerd2" rel="noopener noreferrer"&gt;Linkerd&lt;/a&gt;, &lt;a href="https://github.com/istio/istio" rel="noopener noreferrer"&gt;Istio&lt;/a&gt;) to have &lt;a href="https://linkerd.io/2.11/features/automatic-mtls/#what-is-mtls" rel="noopener noreferrer"&gt;mTLS&lt;/a&gt; communication between microservices and implement Authorization to have fine-grained access.&lt;/li&gt;
&lt;li&gt;  Implement &lt;a href="https://github.com/aquasecurity/kube-bench" rel="noopener noreferrer"&gt;kube-bench&lt;/a&gt; for a CIS benchmark report for the kubernetes cluster. We can run this scan daily in our Kubernete cluster and fix any reported vulnerabilities.&lt;/li&gt;
&lt;li&gt;  Use tool like &lt;a href="https://github.com/aquasecurity/kube-hunter" rel="noopener noreferrer"&gt;Kube-hunter&lt;/a&gt;, &lt;a href="https://github.com/derailed/popeye" rel="noopener noreferrer"&gt;Popeye&lt;/a&gt; and &lt;a href="https://github.com/armosec/kubescape" rel="noopener noreferrer"&gt;Kubescape&lt;/a&gt; for security weaknesses and misconfigurations in kubernetes clusters and visibility of security issues.&lt;/li&gt;
&lt;li&gt;  Use &lt;a href="https://github.com/bridgecrewio/checkov" rel="noopener noreferrer"&gt;Checkov&lt;/a&gt;, &lt;a href="https://github.com/stackrox/kube-linter" rel="noopener noreferrer"&gt;KubeLinter&lt;/a&gt;, and &lt;a href="https://github.com/tenable/terrascan" rel="noopener noreferrer"&gt;Terrascan &lt;/a&gt; for scanning Kubernete YAML, Helm chart with best practices and vulnerabilities.&lt;/li&gt;
&lt;li&gt;  Implement pre-deployment policy checks with &lt;a href="https://github.com/kyverno/kyverno/" rel="noopener noreferrer"&gt;Kyverno&lt;/a&gt;, &lt;a href="https://www.kubewarden.io/" rel="noopener noreferrer"&gt;Kubewarden&lt;/a&gt;, or &lt;a href="https://github.com/open-policy-agent/gatekeeper" rel="noopener noreferrer"&gt;Gatekeeper&lt;/a&gt; to block non-standard deployments.&lt;/li&gt;
&lt;li&gt;  Use a hardened image for the worker nodes. All cloud providers provide CIS benchmark hardened images. We can also build our own custom hardened image using &lt;a href="https://github.com/awslabs/amazon-eks-ami" rel="noopener noreferrer"&gt;amazon-eks-ami&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  Store Kubernetes secret in an encrypted format or &lt;a href="https://www.infracloud.io/blogs/kubernetes-secrets-hashicorp-vault/" rel="noopener noreferrer"&gt;use an external secret manager like Vault&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;  Use &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" rel="noopener noreferrer"&gt;IAM roles for service accounts&lt;/a&gt; to assign AWS roles directly to Kubernete service accounts.&lt;/li&gt;
&lt;li&gt;  Implement &lt;a href="https://github.com/chaos-mesh/chaos-mesh" rel="noopener noreferrer"&gt;Chaos Mesh&lt;/a&gt; and &lt;a href="https://github.com/litmuschaos/litmus" rel="noopener noreferrer"&gt;Litmus&lt;/a&gt; chaos engineering framework to understand the behavior and stability of application in real-world use cases.&lt;/li&gt;
&lt;li&gt;  Follow &lt;a href="https://www.aquasec.com/cloud-native-academy/kubernetes-in-production/kubernetes-security-best-practices-10-steps-to-securing-Kubernetes/" rel="noopener noreferrer"&gt;best practice to secure kubernetes&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;  Use tool like &lt;a href="https://falco.org/" rel="noopener noreferrer"&gt;Falco&lt;/a&gt;, &lt;a href="https://github.com/aquasecurity/tracee" rel="noopener noreferrer"&gt;Tracee&lt;/a&gt; to monitor runtime privileged and unwanted system calls.&lt;/li&gt;
&lt;/ul&gt;
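&lt;p&gt;To illustrate the first point, a restrictive &lt;code&gt;securityContext&lt;/code&gt; might look like the sketch below (the values are illustrative, not a one-size-fits-all baseline):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true        # refuse to start if the image runs as root
    runAsUser: 101
    seccompProfile:
      type: RuntimeDefault    # default seccomp filtering of syscalls
  containers:
    - name: app
      image: nginx:1.25-alpine
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]       # drop all Linux capabilities
```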

&lt;h3&gt;
  
  
  Containers
&lt;/h3&gt;

&lt;p&gt;Containers are the smallest level of abstraction for running any workload in modern infrastructure. Below are a few methods to secure our containers; we have also seen above how to integrate them into our CI/CD pipeline.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Scan the Container image and Dockerfile.&lt;/li&gt;
&lt;li&gt;  Reduce the Docker image size with a &lt;a href="https://docs.docker.com/develop/develop-images/multistage-build/" rel="noopener noreferrer"&gt;multi-stage&lt;/a&gt; build and use a distroless base image to reduce the attack surface.&lt;/li&gt;
&lt;li&gt;  Don't run containers as the root user or as privileged containers.&lt;/li&gt;
&lt;li&gt;  Have &lt;a href="https://gvisor.dev/" rel="noopener noreferrer"&gt;Gvisor&lt;/a&gt; and &lt;a href="https://katacontainers.io/" rel="noopener noreferrer"&gt;Kata containers&lt;/a&gt; for kernel isolation.&lt;/li&gt;
&lt;li&gt;  Use container image signing and verification.&lt;/li&gt;
&lt;li&gt;  Maintain an allowlist of known-good container registries.&lt;/li&gt;
&lt;li&gt;  Implement the &lt;a href="https://blog.aquasec.com/docker-security-best-practices" rel="noopener noreferrer"&gt;Container security best practices&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
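&lt;p&gt;A multi-stage build combined with a distroless base image can look like the sketch below (the Go application and build command are hypothetical):&lt;/p&gt;

```dockerfile
# Build stage: compile a static binary with the full toolchain.
FROM golang:1.20 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./...

# Final stage: distroless, non-root, containing nothing but the binary.
FROM gcr.io/distroless/static:nonroot
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]
```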

&lt;h3&gt;
  
  
  Software supply chain security
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Supply chain security is crucial to the overall security of a software product. An attacker who can control a step in the supply chain can alter the product for malicious intents that range from introducing backdoors in the source code to including vulnerable libraries in the final product.&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/in-toto/docs/blob/master/in-toto-spec.md#12-motivation" rel="noopener noreferrer"&gt;- in-toto Specification&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As per &lt;a href="https://anchore.com/software-supply-chain-security-report-2022/" rel="noopener noreferrer"&gt;Anchore 2022 Software Supply Chain Security Report&lt;/a&gt; 62% of Organizations Surveyed have been Impacted by Software Supply Chain Attacks. &lt;a href="https://dev.to/devsecops-consulting-services/"&gt;Implementing DevSecOps CI/CD&lt;/a&gt; significantly reduces the Supply chain attack as we scan all our software components like Code, SBOM, Containers, infrastructure, Sign and verify containers, etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  Center for Internet Security (CIS) standards
&lt;/h3&gt;

&lt;p&gt;CIS is a non-profit organization that provides security benchmarks to help secure our infrastructure. One of the benefits of following the &lt;a href="https://www.cisecurity.org/cis-benchmarks/" rel="noopener noreferrer"&gt;CIS benchmark&lt;/a&gt; is that it maps directly to several established standards and guidelines, including the NIST Cybersecurity Framework (CSF), the ISO 27000 series of standards, PCI DSS, HIPAA, and others. Also, the &lt;a href="https://www.cisecurity.org/cis-benchmarks/cis-benchmarks-faq#:~:text=The%20Level%202%20profile" rel="noopener noreferrer"&gt;Level 2 CIS&lt;/a&gt; profile adds more security controls.&lt;/p&gt;

&lt;h3&gt;
  
  
  Vulnerability Assessment &amp;amp; Penetration Testing (VAPT)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.valencynetworks.com/articles/how-to-decide-frequency-of-vapt-vulnerability-assessment-penetration-testing.html" rel="noopener noreferrer"&gt;VAPT&lt;/a&gt; is a security testing method used by organizations to test their applications, network, endpoint, and cloud. It can be performed by internal and external third-party vendors. Depending upon compliance and regulation also how risky the technology is, organizations do schedule VPAT scans quarterly, half-yearly, or annually.&lt;/p&gt;

&lt;h4&gt;
  
  
  Vulnerability Assessment
&lt;/h4&gt;

&lt;p&gt;A Vulnerability Assessment (VA) is a security process to identify weaknesses or vulnerabilities in an application, system, or network. Its goal is to find all vulnerabilities and help the operator fix them. DAST scanning is part of vulnerability assessment, and it is often quick, ranging from 10 minutes to 48 hours depending on the configuration, which makes it easy to integrate with a CI/CD pipeline. Pen testing goes beyond VA, with aggressive scanning and exploitation of any discovered vulnerabilities.&lt;/p&gt;

&lt;h4&gt;
  
  
  Penetration Testing
&lt;/h4&gt;

&lt;p&gt;Pen testing is a proactive cybersecurity practice where security experts target individual components or whole applications to find vulnerabilities that can be exploited to compromise the application and its data. &lt;a href="https://owasp.org/www-project-zap/" rel="noopener noreferrer"&gt;ZAP&lt;/a&gt;, &lt;a href="https://www.metasploit.com/" rel="noopener noreferrer"&gt;Metasploit&lt;/a&gt;, and &lt;a href="https://portswigger.net/burp" rel="noopener noreferrer"&gt;Burp Suite&lt;/a&gt; can be used for pen tests, and they can discover vulnerabilities that SAST and DAST tools might miss. The downside of a pen test is that it takes more time, depending on the coverage and configuration. A thorough pen test can take several weeks, which becomes unsustainable at DevOps development speed. However, it is still worth running internal VAPT on every feature release to move fast, with external VAPT done biannually or annually to keep overall security in check.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;To quickly recap what we did to build the DevSecOps pipeline: we scanned for secrets, ran SAST, and generated an SBOM to find vulnerabilities in our code. After that, we scanned our Dockerfile, container image, and Kubernetes manifests, ran a container validation test, and signed our container image to ensure it's safe and secure. After deployment, we ran smoke tests, API tests, and DAST scanning to ensure there were no bugs in the deployment. Always remember that security requires constant attention and improvement. However, these can be the first few steps on the never-ending journey towards DevSecOps.&lt;/p&gt;

&lt;p&gt;Implementing DevSecOps best practices reduces the risk of vulnerabilities and hacking. Scanning all parts of your infrastructure and application gives full visibility into potential threats and possible ways to remediate them. "The only way to do security right is to have multiple layers of security", hence we covered multiple methods and tools that can be used for finding vulnerabilities.&lt;/p&gt;

&lt;p&gt;Here is a list of a few tools that we have used to set up our DevSecOps pipeline.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Tools&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Threat modeling&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://owasp.org/www-project-threat-dragon/" rel="noopener noreferrer"&gt;Threat dragon&lt;/a&gt;, &lt;a href="https://cairis.org/cairis/tmdocsmore/" rel="noopener noreferrer"&gt;Cairis&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Secret scan&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/Yelp/detect-secrets" rel="noopener noreferrer"&gt;detect-secret&lt;/a&gt;, &lt;a href="https://github.com/zricethezav/gitleaks" rel="noopener noreferrer"&gt;Gitleaks&lt;/a&gt;, &lt;a href="https://github.com/awslabs/git-secrets" rel="noopener noreferrer"&gt;git-secrets&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SBOM scan&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/anchore/syft" rel="noopener noreferrer"&gt;Syft&lt;/a&gt;, &lt;a href="https://github.com/anchore/grype" rel="noopener noreferrer"&gt;Grype&lt;/a&gt;, &lt;a href="https://github.com/aquasecurity/trivy" rel="noopener noreferrer"&gt;Trivy&lt;/a&gt;, &lt;a href="https://owasp.org/www-project-dependency-check/" rel="noopener noreferrer"&gt;Dependency-check&lt;/a&gt;, &lt;a href="https://github.com/DependencyTrack/dependency-track" rel="noopener noreferrer"&gt;Dependency-track&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SAST scan&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/SonarSource/sonarqube" rel="noopener noreferrer"&gt;SonarQube&lt;/a&gt;, &lt;a href="https://checkmarx.com/product/cxsast-source-code-scanning/" rel="noopener noreferrer"&gt;Checkmarx&lt;/a&gt;, &lt;a href="https://www.veracode.com/products/binary-static-analysis-sast" rel="noopener noreferrer"&gt;Veracode&lt;/a&gt;, &lt;a href="https://www.perforce.com/products/klocwork" rel="noopener noreferrer"&gt;Klocwork&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Unit testing&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/jacoco/jacoco" rel="noopener noreferrer"&gt;JaCoCo&lt;/a&gt;, &lt;a href="https://mochajs.org/" rel="noopener noreferrer"&gt;Mocha&lt;/a&gt;, &lt;a href="https://jasmine.github.io/" rel="noopener noreferrer"&gt;Jasmine&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dockerfile scan&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/bridgecrewio/checkov" rel="noopener noreferrer"&gt;Checkov&lt;/a&gt;, &lt;a href="https://docs.docker.com/engine/scan/" rel="noopener noreferrer"&gt;docker scan&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Container scan&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/aquasecurity/trivy" rel="noopener noreferrer"&gt;Trivy&lt;/a&gt;, &lt;a href="https://github.com/anchore/grype" rel="noopener noreferrer"&gt;Grype&lt;/a&gt;, &lt;a href="https://github.com/quay/clair" rel="noopener noreferrer"&gt;Clair&lt;/a&gt;, &lt;a href="https://docs.docker.com/engine/scan/" rel="noopener noreferrer"&gt;docker scan&lt;/a&gt;, &lt;a href="https://www.aquasec.com/products/container-analysis/" rel="noopener noreferrer"&gt;Aqua scan&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Container signing&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/sigstore/cosign" rel="noopener noreferrer"&gt;Cosign&lt;/a&gt;, &lt;a href="https://github.com/containers/skopeo" rel="noopener noreferrer"&gt;Skopeo&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Container validation&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/aelsabbahy/goss" rel="noopener noreferrer"&gt;goss&lt;/a&gt;, &lt;a href="https://github.com/aelsabbahy/goss/tree/master/extras/kgoss" rel="noopener noreferrer"&gt;kgoss&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kubernetes manifest scan&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/bridgecrewio/checkov" rel="noopener noreferrer"&gt;Checkov&lt;/a&gt;, &lt;a href="https://github.com/tenable/terrascan" rel="noopener noreferrer"&gt;Terrascan&lt;/a&gt;, &lt;a href="https://github.com/stackrox/kube-linter" rel="noopener noreferrer"&gt;KubeLinter&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kubernetes manifest pre-check&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/kyverno/kyverno" rel="noopener noreferrer"&gt;Kyverno&lt;/a&gt;, &lt;a href="https://www.kubewarden.io/" rel="noopener noreferrer"&gt;Kubewarden&lt;/a&gt;, &lt;a href="https://github.com/open-policy-agent/gatekeeper" rel="noopener noreferrer"&gt;Gatekeeper&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CIS scan&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/aquasecurity/kube-bench" rel="noopener noreferrer"&gt;kube-bench&lt;/a&gt;, &lt;a href="https://www.cisecurity.org/cybersecurity-tools/cis-cat-pro" rel="noopener noreferrer"&gt;CIS-CAT Pro&lt;/a&gt;, &lt;a href="https://github.com/prowler-cloud/prowler" rel="noopener noreferrer"&gt;Prowler&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IaC scan&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/bridgecrewio/checkov" rel="noopener noreferrer"&gt;Checkov&lt;/a&gt;, &lt;a href="https://github.com/tenable/terrascan" rel="noopener noreferrer"&gt;Terrascan&lt;/a&gt;, &lt;a href="https://github.com/Checkmarx/kics" rel="noopener noreferrer"&gt;KICS&lt;/a&gt;, &lt;a href="https://github.com/gruntwork-io/terratest" rel="noopener noreferrer"&gt;Terratest&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API testing&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/apache/jmeter" rel="noopener noreferrer"&gt;JMeter&lt;/a&gt;, &lt;a href="https://github.com/Blazemeter/taurus" rel="noopener noreferrer"&gt;Taurus&lt;/a&gt;, &lt;a href="https://www.postman.com/" rel="noopener noreferrer"&gt;Postman&lt;/a&gt;, &lt;a href="https://github.com/SmartBear/soapui" rel="noopener noreferrer"&gt;SoapUI&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DAST scan&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://owasp.org/www-project-zap/" rel="noopener noreferrer"&gt;ZAP&lt;/a&gt;, &lt;a href="https://www.hcltechsw.com/appscan" rel="noopener noreferrer"&gt;HCL Appscan&lt;/a&gt;, &lt;a href="https://portswigger.net/burp" rel="noopener noreferrer"&gt;Burp Suite&lt;/a&gt;, &lt;a href="https://www.invicti.com/learn/dynamic-application-security-testing-dast/" rel="noopener noreferrer"&gt;Invicti&lt;/a&gt;, &lt;a href="https://checkmarx.com/product/application-security-platform/" rel="noopener noreferrer"&gt;Checkmarx&lt;/a&gt;, &lt;a href="https://www.rapid7.com/products/insightappsec/" rel="noopener noreferrer"&gt;InsightAppSec&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Distributed tracing&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/openzipkin/zipkin" rel="noopener noreferrer"&gt;Zipkin&lt;/a&gt;, &lt;a href="https://github.com/jaegertracing/jaeger" rel="noopener noreferrer"&gt;Jaeger&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud native runtime security&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/falcosecurity/falco" rel="noopener noreferrer"&gt;Falco&lt;/a&gt;, &lt;a href="https://github.com/cilium/tetragon" rel="noopener noreferrer"&gt;Tetragon&lt;/a&gt;, &lt;a href="https://github.com/kubearmor/KubeArmor" rel="noopener noreferrer"&gt;Kubearmor&lt;/a&gt;, &lt;a href="https://github.com/aquasecurity/tracee" rel="noopener noreferrer"&gt;Tracee&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Service mesh&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/istio/istio" rel="noopener noreferrer"&gt;Istio&lt;/a&gt;, &lt;a href="https://github.com/linkerd/linkerd2" rel="noopener noreferrer"&gt;Linkerd&lt;/a&gt;, &lt;a href="https://github.com/cilium/cilium" rel="noopener noreferrer"&gt;Cilium&lt;/a&gt;, &lt;a href="https://github.com/traefik/traefik" rel="noopener noreferrer"&gt;Traefik&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Network security scan&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/nmap/nmap" rel="noopener noreferrer"&gt;Nmap&lt;/a&gt;, &lt;a href="https://github.com/wireshark/wireshark" rel="noopener noreferrer"&gt;Wireshark&lt;/a&gt;, &lt;a href="https://www.tcpdump.org/" rel="noopener noreferrer"&gt;tcpdump&lt;/a&gt;, &lt;a href="https://github.com/greenbone/openvas-scanner" rel="noopener noreferrer"&gt;OpenVAS&lt;/a&gt;, &lt;a href="https://docs.rapid7.com/metasploit/discovery-scan/" rel="noopener noreferrer"&gt;Metasploit&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Antivirus scan&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://www.crowdstrike.com/products/endpoint-security/falcon-prevent-antivirus/" rel="noopener noreferrer"&gt;Falcon&lt;/a&gt;, &lt;a href="https://www.sentinelone.com/" rel="noopener noreferrer"&gt;SentinelOne&lt;/a&gt;, &lt;a href="http://www.clamav.net/" rel="noopener noreferrer"&gt;Clamav&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OS vulnerability scan&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/greenbone/openvas-scanner" rel="noopener noreferrer"&gt;OpenVAS&lt;/a&gt;, &lt;a href="https://www.tenable.com/products/nessus" rel="noopener noreferrer"&gt;Nessus&lt;/a&gt;, &lt;a href="https://www.rapid7.com/products/nexpose/" rel="noopener noreferrer"&gt;Nexpose&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OS patching&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://www.theforeman.org/" rel="noopener noreferrer"&gt;Foreman&lt;/a&gt;, &lt;a href="https://www.redhat.com/en/technologies/management/satellite" rel="noopener noreferrer"&gt;Red Hat Satellite&lt;/a&gt;, &lt;a href="https://www.uyuni-project.org/" rel="noopener noreferrer"&gt;Uyuni&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pen testing&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://owasp.org/www-project-zap/" rel="noopener noreferrer"&gt;ZAP&lt;/a&gt;, &lt;a href="https://www.metasploit.com/" rel="noopener noreferrer"&gt;Metasploit&lt;/a&gt;, &lt;a href="https://portswigger.net/burp" rel="noopener noreferrer"&gt;Burp Suite&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That's a wrap folks :) Hope the article was informative and you enjoyed reading it. For more posts like this one, do subscribe to our weekly newsletter. I'd love to hear your thoughts on this post, so do start a conversation on &lt;a href="https://www.linkedin.com/in/alok-maurya-091ba682" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; :)&lt;/p&gt;

&lt;p&gt;Looking for help with DevSecOps? Do check out our capabilities and how we’re helping startups &amp;amp; enterprises as a &lt;a href="https://www.infracloud.io/devsecops-consulting-services/" rel="noopener noreferrer"&gt;DevSecOps consulting services provider&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  References &amp;amp; further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/devops/what-is-devops/" rel="noopener noreferrer"&gt;AWS what is devops?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/devsecops/awesome-devsecops" rel="noopener noreferrer"&gt;Awesome DevSecOps&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/sottlmarek/DevSecOps" rel="noopener noreferrer"&gt;Ultimate DevSecOps library&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/hahwul/DevSecOps" rel="noopener noreferrer"&gt;DevSecOps collection&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://about.gitlab.com/blog/2021/06/01/gitlab-is-setting-standard-for-devsecops/" rel="noopener noreferrer"&gt;Gitlab is setting standard for devsecops&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@anshuman2121/devsecops-implement-security-on-cicd-pipeline-19eb7aa22626" rel="noopener noreferrer"&gt;DevSecOps: Implement security on CI/CD Pipeline&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/secure/devsecops-controls" rel="noopener noreferrer"&gt;DevSecOps controls&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.xenonstack.com/insights/a-quick-guide-devsecops-pipeline" rel="noopener noreferrer"&gt;Guide-devsecops-pipeline&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.aquasec.com/cloud-native-academy/devsecops/devsecops/" rel="noopener noreferrer"&gt;Aquasec devsecops&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>devsecops</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Implementing Cloud Governance as a Code using Cloud Custodian</title>
      <dc:creator>Alok</dc:creator>
      <pubDate>Thu, 09 Dec 2021 10:45:38 +0000</pubDate>
      <link>https://forem.com/alokm/implementing-cloud-governance-as-a-code-using-cloud-custodian-2l68</link>
      <guid>https://forem.com/alokm/implementing-cloud-governance-as-a-code-using-cloud-custodian-2l68</guid>
      <description>&lt;p&gt;In today’s scaling cloud infrastructure it's hard to manage all resources compliance. Every organization has a set of policies to follow for detecting violations and taking remediation actions on their cloud resources. This is generally done by writing multiple custom scripts and using some 3rd party tool and integration. Many development teams know how hard it is to manage and write custom scripts and keep a track of those. This is where we can leverage Cloud Custodian DSL policies to manage our Cloud resources with ease.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is cloud governance?
&lt;/h2&gt;

&lt;p&gt;Cloud governance is a framework that defines how developers can create policies to control costs, minimize security risks, improve efficiency, and accelerate deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are other tools that provide governance as code?
&lt;/h3&gt;

&lt;h4&gt;
  
  
  AWS Config
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/config/"&gt;AWS Config&lt;/a&gt; records and monitors all configuration data of AWS resources, and we can build rules to help us enforce compliance. A &lt;a href="https://docs.aws.amazon.com/config/latest/developerguide/aggregate-data.html"&gt;multi-account and multi-region&lt;/a&gt; setup is available. It also provides predefined AWS managed rules that we can use, or we can write our own custom rules, and we can take remediation action based on matches. For a custom rule, we need to write our own Lambda function for taking action.&lt;/p&gt;

&lt;p&gt;However, we can use Cloud Custodian to set up AWS Config rules and custom rules, with multi-account and multi-region support via c7n-org. It can also automatically provision the AWS Lambda function.&lt;/p&gt;
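&lt;p&gt;As an illustrative sketch, c7n-org reads an accounts file that lists each account, the role to assume, and the regions to cover. The account ID, role name, and file names here are hypothetical.&lt;/p&gt;

```yaml
# accounts.yml - hypothetical c7n-org account inventory
accounts:
  - account_id: '123456789012'
    name: dev
    role: arn:aws:iam::123456789012:role/CustodianRole
    regions:
      - us-east-1
      - us-west-2
```

&lt;p&gt;A policy can then be run across every listed account and region with something like &lt;code&gt;c7n-org run -c accounts.yml -s output -u policy.yml&lt;/code&gt;.&lt;/p&gt;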

&lt;h4&gt;
  
  
  Azure Policy
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/governance/policy/overview"&gt;Azure Policy&lt;/a&gt; enforces organization standards across Azure resources. It provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to per-resource, per-policy granularity (e.g. users are only allowed to create A and B series virtual machines). We can turn on built-in policies or create custom policies for all resources, and it can take auto-remediation action on non-compliant resources.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Azure Policy is reliable and efficient for building a custom validation layer on deployments to prevent deviation from customer defined rules. Cloud Custodian and Azure Policy have significant overlap in scenarios they can accomplish with regard to compliance implementations. When reviewing your requirements, we recommend first identifying the requirements that can be implemented via Azure Policy. Custodian can then be used to implement the remaining requirements. Custodian is also frequently used to add a second layer of protection or mitigation actions to requirements covered by Azure Policy. This way we can ensure that policy is configured correctly.&lt;/p&gt;

&lt;p&gt;— &lt;a href="https://cloudcustodian.io/docs/azure/advanced/azurepolicy.html"&gt;Azure Policy Comparison&lt;/a&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So far, we have seen what cloud governance is and what other tools are available in the market. Now let's see what Cloud Custodian offers for cloud governance.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Cloud Custodian?
&lt;/h2&gt;

&lt;p&gt;Cloud Custodian is a CNCF sandbox project for governing public cloud resources in real time. It helps us write governance as code the same way we write infrastructure as code: it detects non-compliant resources and takes action to remediate them. Custodian is a cloud native tool and can be used with multiple cloud providers (AWS, Azure, GCP, etc.).&lt;/p&gt;

&lt;p&gt;We can use Cloud Custodian as below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Compliance and security as code&lt;/strong&gt; - We can write simple YAML DSL policies as code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost savings&lt;/strong&gt; - Removing unwanted resources and implementing on/off-hours policies can save costs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational efficiency&lt;/strong&gt; - By adding governance as code, we reduce the friction of innovating securely in the cloud and increase developer efficiency.&lt;/li&gt;
&lt;/ul&gt;
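&lt;p&gt;For instance, the on/off-hours idea can be sketched with Custodian's offhour filter; the hour, timezone, and policy name below are illustrative only.&lt;/p&gt;

```yaml
# Sketch: stop EC2 instances after working hours (values are illustrative).
policies:
  - name: ec2-offhours-stop
    resource: ec2
    filters:
      - type: offhour
        default_tz: utc
        offhour: 19        # stop at 19:00 UTC
    actions:
      - stop
```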

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;When we run a Cloud Custodian command, it takes the resources, filters, and actions as input and, depending on the cloud provider, translates them into cloud provider API calls (e.g. AWS Boto3 API calls). There is no need to worry about custom scripts or AWS CLI commands: we get clean, readable policies, and numerous common filters and actions are built into Cloud Custodian. If we need custom filters, we can always use JMESPath to write our own.&lt;/p&gt;

&lt;p&gt;There can be situations where we need to run our policy periodically or based on some events. For this, Cloud Custodian automatically provisions a Lambda function and a CloudWatch event rule. CloudWatch event rules can be scheduled (e.g. every 10 minutes) or triggered in response to API calls recorded by CloudTrail, EC2 instance state events, etc.&lt;/p&gt;
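&lt;p&gt;As a sketch, this execution mode is declared in the policy itself; the policy name, tag key, and role ARN below are placeholders.&lt;/p&gt;

```yaml
# Sketch: run as a scheduled Lambda instead of from the CLI (names are placeholders).
policies:
  - name: ec2-missing-owner-tag
    resource: ec2
    mode:
      type: periodic
      schedule: "rate(10 minutes)"
      role: arn:aws:iam::{account_id}:role/CustodianLambdaRole
    filters:
      - "tag:owner": absent
```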

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BOGc0It4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d33wubrfki0l68.cloudfront.net/bc48c7348e1bf6c366360db901b68634c7968ef8/fac12/assets/img/blog/cloud-governance-using-cloud-custodian/cloud-governance-using-cloud-custodian-workflow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BOGc0It4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d33wubrfki0l68.cloudfront.net/bc48c7348e1bf6c366360db901b68634c7968ef8/fac12/assets/img/blog/cloud-governance-using-cloud-custodian/cloud-governance-using-cloud-custodian-workflow.png" alt="Cloud Custodian Workflow" width="880" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to install and set up Cloud Custodian?
&lt;/h2&gt;

&lt;p&gt;We can simply install Cloud Custodian with the Python pip command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv custodian
&lt;span class="nb"&gt;source &lt;/span&gt;custodian/bin/activate
pip &lt;span class="nb"&gt;install &lt;/span&gt;c7n       &lt;span class="c"&gt;# This includes AWS support&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;c7n_azure &lt;span class="c"&gt;# Install Azure package&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;c7n_gcp   &lt;span class="c"&gt;# Install GCP Package&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Using Cloud Custodian docker image
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run  &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/output:/opt/custodian/output &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/policy.yml:/opt/custodian/policy.yml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--env-file&lt;/span&gt; &amp;lt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"^AWS&lt;/span&gt;&lt;span class="se"&gt;\|&lt;/span&gt;&lt;span class="s2"&gt;^AZURE&lt;/span&gt;&lt;span class="se"&gt;\|&lt;/span&gt;&lt;span class="s2"&gt;^GOOGLE|^KUBECONFIG"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
     cloudcustodian/c7n run &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nt"&gt;--cache-period&lt;/span&gt; 0 &lt;span class="nt"&gt;-s&lt;/span&gt; /opt/custodian/output /opt/custodian/policy.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The access key, secret key, default region, and KUBECONFIG are fetched from environment variables, and the user should have access to the required IAM roles and policies for what we define in the policy YAML file. Another option is to mount the credentials file/directory inside the container.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Custodian policy.yaml explained
&lt;/h2&gt;

&lt;p&gt;Cloud Custodian uses a simple YAML file which includes a &lt;strong&gt;Resource&lt;/strong&gt;, &lt;strong&gt;Filters&lt;/strong&gt;, and &lt;strong&gt;Actions&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt; Custodian can target several cloud providers (AWS, GCP, Azure), and each provider has its own resource types (e.g. ec2, s3 bucket).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Filters:&lt;/strong&gt; &lt;a href="https://cloudcustodian.io/docs/filters.html"&gt;Filters&lt;/a&gt; are the way in Custodian to target a specific subset of resources, based on attributes such as a date or a tag. We can write custom filters using &lt;a href="https://jmespath.org/"&gt;JMESPath&lt;/a&gt; expressions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Actions:&lt;/strong&gt; An action is the actual decision you make on the resources that match the filters. It can be as simple as sending a report to the owner stating that the resource does not match the cloud governance rules, or it can delete the resource.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
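&lt;p&gt;As an illustrative fragment, a value filter's key accepts a JMESPath expression into the resource's JSON; the policy name and tag key here are hypothetical.&lt;/p&gt;

```yaml
# Sketch: match EC2 instances whose "env" tag equals "dev" via a JMESPath key.
policies:
  - name: ec2-env-dev
    resource: ec2
    filters:
      - type: value
        key: "Tags[?Key=='env'] | [0].Value"
        value: dev
```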

&lt;p&gt;Both actions and filters can combine as many rules as you want to express your needs perfectly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;first-policy&lt;/span&gt;
  &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;name-of-cloud-resource&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Description of policy&lt;/span&gt;
  &lt;span class="na"&gt;filters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;(some filter that will select a subset of resource)&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;(more filters)&lt;/span&gt;
  &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;(an action to trigger on filtered resource)&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;(more actions)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Cloud Custodian sample policy
&lt;/h2&gt;

&lt;p&gt;Although the official docs cover most of the &lt;a href="https://cloudcustodian.io/docs/aws/examples/index.html"&gt;AWS policy examples&lt;/a&gt;, we have picked some policies which can be used from day 1 for cost savings and compliance.&lt;/p&gt;

&lt;h3&gt;
  
  
  ebs-snapshots-month-old.yml
&lt;/h3&gt;

&lt;p&gt;One of the most common issues organizations face is the complexity of removing old AMIs, snapshots, and volumes which sit in our environment for more than a year and add to the bill. Eventually, we end up writing multiple custom scripts to deal with the situation.&lt;/p&gt;

&lt;p&gt;Below is a simple policy which removes snapshots which are older than 30 days.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;policies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ebs-snapshots-month-old&lt;/span&gt;
    &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ebs-snapshot&lt;/span&gt;
    &lt;span class="na"&gt;filters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;age&lt;/span&gt;
        &lt;span class="na"&gt;days&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;
        &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ge&lt;/span&gt;
    &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;delete&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is an example of how we can run the Cloud Custodian policy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;custodian run &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /tmp/output /tmp/ebs-snapshots-month-old.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kFM3d7JV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d33wubrfki0l68.cloudfront.net/97c590f8d17e6f00a95f50001d434f4b9f2a1492/e2a8b/assets/img/blog/cloud-governance-using-cloud-custodian/cloud-custodian-policy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kFM3d7JV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d33wubrfki0l68.cloudfront.net/97c590f8d17e6f00a95f50001d434f4b9f2a1492/e2a8b/assets/img/blog/cloud-governance-using-cloud-custodian/cloud-custodian-policy.png" alt="cloud-custodian-policy" width="880" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every time we run the Custodian command, it creates/appends files inside a directory named after &lt;code&gt;policies.name&lt;/code&gt; under the output directory passed with the &lt;strong&gt;-s&lt;/strong&gt; option (e.g. /tmp/output/&lt;strong&gt;ebs-snapshots-month-old&lt;/strong&gt;/custodian-run.log)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;custodian-run.log&lt;/strong&gt; : All console logs are stored here&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;resources.json&lt;/strong&gt; : Filtered resources list&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;metadata.json&lt;/strong&gt; : Metadata about filtered resources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;action-&lt;/strong&gt;* : List of resources on which an action was taken&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;$HOME/.cache/cloud-custodian.cache&lt;/strong&gt; : All cloud API call results are cached here; the default cache period is 15 minutes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To get a report of the filtered resources, we can run the command below. By default, it produces the report in CSV format, but we can change it by passing &lt;strong&gt;--format json&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;custodian report &lt;span class="nt"&gt;-s&lt;/span&gt; /tmp/output/ &lt;span class="nt"&gt;--format&lt;/span&gt; csv ebs-snapshots-month-old.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qfXtzjZA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d33wubrfki0l68.cloudfront.net/eae51517185d88a85f933e20d9aa151c5be9825f/b3252/assets/img/blog/cloud-governance-using-cloud-custodian/cloud-custodian-code.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qfXtzjZA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d33wubrfki0l68.cloudfront.net/eae51517185d88a85f933e20d9aa151c5be9825f/b3252/assets/img/blog/cloud-governance-using-cloud-custodian/cloud-custodian-code.png" alt="cloud-custodian-code" width="880" height="55"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  only-approved-ami.yml
&lt;/h3&gt;

&lt;p&gt;This policy stops running EC2 instances whose AMI is not in the trusted AMI list.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;policies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;only-approved-ami&lt;/span&gt;
  &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ec2&lt;/span&gt;
  &lt;span class="na"&gt;comment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;Stop running EC2 instances that are using invalid AMIs&lt;/span&gt;
  &lt;span class="na"&gt;filters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;State.Name"&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;running&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;value&lt;/span&gt;
      &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ImageId&lt;/span&gt;
      &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;not-in&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ami-04db49c0fb2215364&lt;/span&gt;   &lt;span class="c1"&gt;# Amazon Linux 2 AMI (HVM)&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ami-06a0b4e3b7eb7a300&lt;/span&gt;  &lt;span class="c1"&gt;# Red Hat Enterprise Linux 8 (HVM)&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ami-0b3acf3edf2397475&lt;/span&gt;    &lt;span class="c1"&gt;# SUSE Linux Enterprise Server 15 SP2 (HVM)&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ami-0c1a7f89451184c8b&lt;/span&gt;   &lt;span class="c1"&gt;# Ubuntu Server 20.04 LTS (HVM)&lt;/span&gt;
  &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;stop&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Security-group-check.yml
&lt;/h3&gt;

&lt;p&gt;One of the more common issues we see is that developers allow SSH traffic from anywhere while creating a POC VM, or open port 22 to the world during testing and forget to remove the rule. The policy below takes care of this by automatically removing the open SSH rule and allowing only the VPN IP range in the security group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;policies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sg-remove-permission&lt;/span&gt;
    &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;security-group&lt;/span&gt;
    &lt;span class="na"&gt;filters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;or&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
             &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress&lt;/span&gt;
               &lt;span class="na"&gt;IpProtocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-1"&lt;/span&gt;
               &lt;span class="na"&gt;Ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;22&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
               &lt;span class="na"&gt;Cidr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0/0"&lt;/span&gt;
             &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress&lt;/span&gt;
               &lt;span class="na"&gt;IpProtocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-1"&lt;/span&gt;
               &lt;span class="na"&gt;Ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;22&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
               &lt;span class="na"&gt;CidrV6&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;::/0"&lt;/span&gt;
    &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;set-permissions&lt;/span&gt;
        &lt;span class="na"&gt;remove-ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;matched&lt;/span&gt;
        &lt;span class="na"&gt;add-ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;IpPermissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;IpProtocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
              &lt;span class="na"&gt;FromPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;22&lt;/span&gt;
              &lt;span class="na"&gt;ToPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;22&lt;/span&gt;
              &lt;span class="na"&gt;IpRanges&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VPN1 Access&lt;/span&gt;
                  &lt;span class="na"&gt;CidrIp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10.10.0.0/16"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Support Kubernetes resources
&lt;/h2&gt;

&lt;p&gt;We can now manage Kubernetes resources such as Deployments, Pods, DaemonSets, and Volumes. Below are some sample policies that we can write with Cloud Custodian.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delete POC and untagged resources&lt;/li&gt;
&lt;li&gt;Update labels and apply patches on Kubernetes resources&lt;/li&gt;
&lt;li&gt;Call webhooks based on findings&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  kubernetes-delete-poc-resource.yml
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;policies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;delete-poc-namespace&lt;/span&gt;
    &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8s.namespace&lt;/span&gt;
    &lt;span class="na"&gt;filters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;value&lt;/span&gt;
      &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;metadata.name'&lt;/span&gt;
      &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;regex&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;^.*poc.*$'&lt;/span&gt;
    &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;delete&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;delete-poc-deployments&lt;/span&gt;
    &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8s.deployment&lt;/span&gt;
    &lt;span class="na"&gt;filters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;value&lt;/span&gt;
      &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;metadata.name'&lt;/span&gt;
      &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;regex&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;^.*poc.*$'&lt;/span&gt;
    &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;delete&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Cloud Custodian's Kubernetes support is still a work in progress. We can &lt;a href="https://github.com/cloud-custodian/cloud-custodian/tree/master/tools/c7n_kube"&gt;check the status of the plugin here&lt;/a&gt;.&lt;/p&gt;
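&lt;p&gt;To try the policies above, the Kubernetes plugin must be installed alongside Custodian and a kubeconfig must be available. A minimal sketch (the package install step and file names are assumptions, not from this article):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip install c7n c7n-kube    # Cloud Custodian core plus the Kubernetes plugin
export KUBECONFIG=$HOME/.kube/config    # cluster credentials for the policy run
custodian run -s output kubernetes-delete-poc-resource.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;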

&lt;h3&gt;
  
  
  What execution modes does Cloud Custodian support?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;pull&lt;/strong&gt; - The default mode; the policy runs on demand from wherever Custodian is invoked. A common approach is to run it on a cron schedule from a CI/CD tool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;periodic&lt;/strong&gt; - Provisions cloud resources (e.g., an AWS Lambda function with a CloudWatch schedule) as declared in the policy and executes on that schedule.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event modes (cloud-provider specific)&lt;/strong&gt; - Execute when a matching cloud event occurs.&lt;/li&gt;
&lt;/ul&gt;
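&lt;p&gt;The mode is declared inside the policy itself. Below is a minimal sketch of the &lt;strong&gt;periodic&lt;/strong&gt; mode (the policy name, schedule expression, and IAM role are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;policies:
- name: stop-invalid-ami-daily
  resource: ec2
  mode:
    type: periodic                # provisions an AWS Lambda with a CloudWatch schedule
    schedule: "rate(1 day)"       # placeholder schedule expression
    role: arn:aws:iam::{account_id}:role/CustodianLambdaRole  # placeholder execution role
  filters:
  - "State.Name": running
  actions:
  - stop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;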

&lt;h2&gt;
  
  
  Integrate Cloud Custodian with Jenkins CI
&lt;/h2&gt;

&lt;p&gt;For simplicity, we are using the Cloud Custodian Docker image and injecting the credentials as environment variables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: the secret file should contain the keys in &lt;strong&gt;upper case&lt;/strong&gt; along with the default region. In the case of Kubernetes, the KUBECONFIG file should be mounted inside the container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;YOUR_AWS_ACCESS_KEY&amp;gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;YOUR_AWS_SECRET_ACCESS_KEY&amp;gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_DEFAULT_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;YOUR_DEFAULT_REGION&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight groovy"&gt;&lt;code&gt;&lt;span class="n"&gt;pipeline&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="n"&gt;label&lt;/span&gt; &lt;span class="s1"&gt;'worker1'&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;stages&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'cloudcustodian-non-prod'&lt;/span&gt;&lt;span class="o"&gt;){&lt;/span&gt;
            &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;dir&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"non-prod"&lt;/span&gt;&lt;span class="o"&gt;){&lt;/span&gt;
                    &lt;span class="n"&gt;withCredentials&lt;/span&gt;&lt;span class="o"&gt;([&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;credentialsId:&lt;/span&gt; &lt;span class="s1"&gt;'secretfile'&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nl"&gt;variable:&lt;/span&gt; &lt;span class="s1"&gt;'var_secretfile'&lt;/span&gt;&lt;span class="o"&gt;)])&lt;/span&gt;
                    &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'''
                    source $var_secretfile  &amp;gt; /dev/null 2&amp;gt;&amp;amp;1
                    env | grep "^AWS\\|^AZURE\\|^GOOGLE\\|^KUBECONFIG" &amp;gt; envfile

                    for files in $(ls | egrep '.yml|.yaml')
                    do
                        docker run --rm -t \
                        -v $(pwd)/output:/opt/custodian/output \
                        -v $(pwd):/opt/custodian/ \
                        --env-file envfile \
                        cloudcustodian/c7n run -v  -s /opt/custodian/output /opt/custodian/$files
                    done
                    '''&lt;/span&gt;
                    &lt;span class="o"&gt;}&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"cloudcustodian-prod"&lt;/span&gt;&lt;span class="o"&gt;){&lt;/span&gt;
            &lt;span class="n"&gt;steps&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;dir&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"prod"&lt;/span&gt;&lt;span class="o"&gt;){&lt;/span&gt;
                    &lt;span class="n"&gt;withCredentials&lt;/span&gt;&lt;span class="o"&gt;([&lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;credentialsId:&lt;/span&gt; &lt;span class="s1"&gt;'secretfile'&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nl"&gt;variable:&lt;/span&gt; &lt;span class="s1"&gt;'var_secretfile'&lt;/span&gt;&lt;span class="o"&gt;)])&lt;/span&gt;
                    &lt;span class="o"&gt;{&lt;/span&gt;
                    &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'''
                    source $var_secretfile  &amp;gt; /dev/null 2&amp;gt;&amp;amp;1
                    env | grep "^AWS\\|^AZURE\\|^GOOGLE\\|^KUBECONFIG" &amp;gt; envfile

                    for files in $(ls | egrep '.yml|.yaml')
                    do
                        docker run --rm -t \
                        -v $(pwd)/output:/opt/custodian/output \
                        -v $(pwd):/opt/custodian/ \
                        --env-file envfile \
                        cloudcustodian/c7n run -v -s /opt/custodian/output /opt/custodian/$files
                    done
                    '''&lt;/span&gt;
                    &lt;span class="o"&gt;}&lt;/span&gt;
                &lt;span class="o"&gt;}&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Jenkins console output&lt;/strong&gt;:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W-Xk51RH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d33wubrfki0l68.cloudfront.net/079ff0fe06e79c8eaad2db4cf40bfe4f7758292f/0058b/assets/img/blog/cloud-governance-using-cloud-custodian/cloud-custodian-jenkins-console-output.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W-Xk51RH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d33wubrfki0l68.cloudfront.net/079ff0fe06e79c8eaad2db4cf40bfe4f7758292f/0058b/assets/img/blog/cloud-governance-using-cloud-custodian/cloud-custodian-jenkins-console-output.png" alt="cloud-custodian-jenkins-console-output" width="880" height="401"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Tools and Features
&lt;/h2&gt;

&lt;p&gt;Cloud Custodian has a number of add-on &lt;a href="https://github.com/cloud-custodian/cloud-custodian/tree/master/tools"&gt;tools that have been developed by the community&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Multi Region and Multi Account support
&lt;/h3&gt;

&lt;p&gt;We can use the &lt;a href="https://cloudcustodian.io/docs/tools/c7n-org.html"&gt;c7n-org&lt;/a&gt; plugin to configure multiple AWS, Azure, and GCP accounts and run policies against them in parallel. The &lt;strong&gt;--region all&lt;/strong&gt; flag can be used to run the same policy across all regions.&lt;/p&gt;
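&lt;p&gt;A minimal sketch of a c7n-org accounts file (the account IDs, names, and role ARNs below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# accounts.yml
accounts:
- account_id: '123456789012'      # placeholder account ID
  name: dev
  regions:
  - us-east-1
  - us-west-2
  role: arn:aws:iam::123456789012:role/CustodianRole  # role c7n-org assumes
- account_id: '210987654321'
  name: prod
  regions:
  - us-east-1
  role: arn:aws:iam::210987654321:role/CustodianRole
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then &lt;strong&gt;c7n-org run -c accounts.yml -s output -u policy.yml&lt;/strong&gt; runs the policy against every configured account and region in parallel.&lt;/p&gt;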
&lt;h3&gt;
  
  
  Notification
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://cloudcustodian.io/docs/tools/c7n-mailer.html"&gt;c7n-mailer&lt;/a&gt; plugin provides a lot of flexibility for alert notifications. We can send alerts via webhooks, email, queue services, Datadog, Slack, and Splunk.&lt;/p&gt;
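&lt;p&gt;As a sketch, a policy forwards its findings to c7n-mailer through a &lt;strong&gt;notify&lt;/strong&gt; action (the Slack channel and SQS queue URL below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  actions:
  - type: notify
    slack_template: slack_default     # message template shipped with c7n-mailer
    to:
    - slack://#cloud-alerts           # placeholder Slack channel
    transport:
      type: sqs                       # c7n-mailer polls this queue and delivers the alert
      queue: https://sqs.us-east-1.amazonaws.com/123456789012/custodian-mailer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;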
&lt;h3&gt;
  
  
  Auto-resource-tagging
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/cloud-custodian/cloud-custodian/tree/master/tools/c7n_trailcreator"&gt;c7n_trailcreator&lt;/a&gt; tool processes CloudTrail records to build a SQLite database of resources and their creators, and then uses that database to tag each resource with its creator's name.&lt;/p&gt;
&lt;h3&gt;
  
  
  Logging and Reporting
&lt;/h3&gt;

&lt;p&gt;Cloud Custodian provides reporting in JSON and CSV formats. We can also ship these metrics to cloud-native logging services and build dashboards on top of them. Logs can be stored locally, in S3, or in CloudWatch. A consistent logging format makes it easy to troubleshoot policies.&lt;/p&gt;
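&lt;p&gt;For example, the &lt;strong&gt;custodian report&lt;/strong&gt; command summarizes the resources that a previous run matched:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;custodian report -s output --format csv policy.yml     &lt;span class="c"&gt;#CSV for spreadsheets and pipelines&lt;/span&gt;
custodian report -s output --format grid policy.yml    &lt;span class="c"&gt;#Human-readable table&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;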
&lt;h3&gt;
  
  
  Custodian Dry run
&lt;/h3&gt;

&lt;p&gt;In a dry run (&lt;strong&gt;--dryrun&lt;/strong&gt;), the action part of the policy is skipped; Custodian only reports which resources would be affected. It is best practice to do a dry run before letting a policy act for real.&lt;/p&gt;
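&lt;p&gt;For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;custodian run --dryrun -s output policy.yml    &lt;span class="c"&gt;#Evaluates filters, skips actions&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The matched resources are written to the output directory as resources.json, so we can review exactly what would have been acted on.&lt;/p&gt;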
&lt;h3&gt;
  
  
  Custodian Cache
&lt;/h3&gt;

&lt;p&gt;When we execute a policy, Custodian fetches resource data from the cloud and caches it locally for 15 minutes to minimize API calls. The cache duration can be tuned with the &lt;strong&gt;--cache-period&lt;/strong&gt; option; setting &lt;strong&gt;--cache-period 0&lt;/strong&gt; disables caching.&lt;/p&gt;
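&lt;p&gt;For example, to bypass the cache and always query the cloud API:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;custodian run --cache-period 0 -s output policy.yml    &lt;span class="c"&gt;#0 disables the local resource cache&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;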
&lt;h3&gt;
  
  
  Editor integration
&lt;/h3&gt;

&lt;p&gt;It can be integrated with Visual Studio Code for auto-completion and suggestions while writing policies.&lt;/p&gt;
&lt;h3&gt;
  
  
  Custodian schema
&lt;/h3&gt;

&lt;p&gt;We can use the &lt;strong&gt;custodian schema&lt;/strong&gt; command to discover the resources, actions, and filters available in Cloud Custodian.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
custodian schema    &lt;span class="c"&gt;#Shows all resource available in custodian&lt;/span&gt;
custodian schema aws    &lt;span class="c"&gt;#Shows aws resource available in custodian&lt;/span&gt;
custodian schema aws.ec2     &lt;span class="c"&gt;#Shows aws ec2 action and filters&lt;/span&gt;
custodian schema aws.ec2.actions     &lt;span class="c"&gt;#Shows aws ec2 actions only&lt;/span&gt;
custodian schema aws.ec2.actions.stop    &lt;span class="c"&gt;#Shows ec2 stop sample policy and schema&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  How is Cloud Custodian better than other tools?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Simple, consistent way of writing policies across multiple cloud platforms and Kubernetes.&lt;/li&gt;
&lt;li&gt;Multi-account and multi-region support using c7n-org.&lt;/li&gt;
&lt;li&gt;Supports a wide range of notification channels using &lt;a href="https://cloudcustodian.io/docs/tools/c7n-mailer.html"&gt;c7n-mailer&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;Custodian's Terraform provider enables writing and evaluating Custodian policies against Terraform IaC modules.&lt;/li&gt;
&lt;li&gt;Custodian has deep integration with AWS Config. It can deploy any rule that AWS Config supports, and it can automatically provision AWS Lambda functions for custom Config rules.&lt;/li&gt;
&lt;li&gt;We can implement custom policies in Python if needed, since Custodian builds on the cloud providers' SDKs.&lt;/li&gt;
&lt;li&gt;Cloud Custodian is an open source CNCF Sandbox project.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cloud Custodian Limitations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/cloud-custodian/cloud-custodian/issues/4993"&gt;No Default Dashboard&lt;/a&gt; (Supports AWS native dashboard but We can also send metrics output to Elasticsearch/Grafana, etc. and create dashboard).&lt;/li&gt;
&lt;li&gt;Cloud Custodian can not prevent custom layer validation pre deployments. It can only run periodically or based on some events.&lt;/li&gt;
&lt;li&gt;Cloud Custodian does not have any in-built policies. We need to write all policies by ourselves. However it has a lot of good example policies(&lt;a href="https://cloudcustodian.io/docs/aws/examples/index.html"&gt;aws&lt;/a&gt;, &lt;a href="https://cloudcustodian.io/docs/azure/examples/index.html"&gt;azure&lt;/a&gt;, &lt;a href="https://cloudcustodian.io/docs/gcp/examples/index.html"&gt;gcp&lt;/a&gt;) that we can use as reference.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Cloud Custodian enables us to define rules and remediation in a single policy, facilitating a well-managed cloud infrastructure. We can also use it to write policies for managing Kubernetes resources such as Deployments and Pods. Compared to other cloud governance tools, it provides a very simple DSL for writing policies that stays consistent across cloud platforms. Custodian reduces the friction of innovating securely in the cloud and also increases efficiency.&lt;/p&gt;

&lt;p&gt;We can use Cloud Custodian to optimize our cloud cost by implementing off-hours and cleanup policies. It also ships with many plugins, including multi-account/multi-region support and a wide range of notification integrations (Slack, SMTP, SQS, Datadog, webhooks, etc.). We can find a &lt;a href="https://github.com/cloud-custodian/cloud-custodian/tree/master/tools"&gt;list of Cloud Custodian plugins here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That’s a wrap folks :) Hope the article was informative and you enjoyed reading it. I’d love to hear your thoughts and experience - let’s connect and start a conversation on &lt;a href="https://www.linkedin.com/in/alok-maurya-091ba682/"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  References &amp;amp; Further Reading:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://cloudcustodian.io/docs/"&gt;Cloud Custodian Docs&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cloudcustodian.io/docs/azure/advanced/azurepolicy.html"&gt;Azure Policy Comparison&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cloudcustodian.io/docs/aws/topics/config.html"&gt;AWS Config&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://stacklet.io/"&gt;Stacklet&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://medium.com/manomano-tech/cloud-custodian-overview-and-deployment-of-cloud-governance-d8e468fb4ab4"&gt;Cloud Custodian — Overview and deployment of cloud governance&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=N24ana3-yog"&gt;Cloud Governance as Code: A New Paradigm Shift&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.cdw.com/content/cdw/en/articles/cloud/what-is-cloud-governance-for-aws.html"&gt;What Is Cloud Governance for AWS?&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
      <category>cloudgovernace</category>
      <category>cloudcustodian</category>
    </item>
  </channel>
</rss>
