<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Harness</title>
    <description>The latest articles on Forem by Harness (@harness).</description>
    <link>https://forem.com/harness</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F5193%2Fb1c88106-8f1e-43ff-ba0f-d49dc3393261.png</url>
      <title>Forem: Harness</title>
      <link>https://forem.com/harness</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/harness"/>
    <language>en</language>
    <item>
      <title>Speed Up Your CI Pipelines with Docker Layer Caching</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Thu, 23 Jan 2025 16:03:21 +0000</pubDate>
      <link>https://forem.com/harness/speed-up-your-ci-pipelines-with-docker-layer-caching-4ffe</link>
      <guid>https://forem.com/harness/speed-up-your-ci-pipelines-with-docker-layer-caching-4ffe</guid>
      <description>&lt;p&gt;In modern software development, speed and efficiency are paramount. Long build times can slow down releases and hinder productivity. Docker layer caching is a powerful technique that helps optimize builds by reusing previously created image layers, reducing redundant processing. In this blog, we'll explore how Harness CI features &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/caching-ci-data/docker-layer-caching" rel="noopener noreferrer"&gt;Docker Layer Caching (DLC)&lt;/a&gt; to enhance build performance and streamline your CI/CD pipelines.&lt;/p&gt;

&lt;h2&gt;DLC and Multi-stage Builds&lt;/h2&gt;

&lt;p&gt;Every instruction in a Dockerfile creates a layer in the final image. Docker caches these layers to avoid rebuilding them unnecessarily, which can save significant time and reduce infrastructure costs. However, when a layer changes (e.g., modifying a file copied with COPY), Docker invalidates the cache for that layer and all subsequent layers, requiring them to be rebuilt. Understanding and optimizing layer usage helps in writing more efficient Dockerfiles, achieving faster build times, and lowering compute costs.&lt;/p&gt;

&lt;p&gt;A multi-stage Dockerfile allows you to use multiple FROM statements to break the build process into stages. This helps keep the final image lightweight by copying only the necessary files from one stage to another, discarding anything unnecessary. It speeds up builds by leveraging layer caching, reducing the need to re-run expensive steps. Plus, it enhances security by minimizing the final image's attack surface and keeps the Dockerfile organized by separating concerns.&lt;/p&gt;
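&lt;p&gt;As an illustration, a minimal multi-stage Dockerfile for a Go service might look like the following (image names and paths here are hypothetical, not taken from any demo repository):&lt;/p&gt;

```dockerfile
# Build stage: compile the binary with the full Go toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download              # this layer stays cached until go.mod/go.sum change
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# Final stage: copy only the compiled binary into a minimal base image
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

&lt;p&gt;Ordering the dependency download before copying the full source means the expensive dependency layers remain cached across most code changes, and only the final stage's contents ship in the image.&lt;/p&gt;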

&lt;h2&gt;Harness CI Intelligence: Docker Layer Caching&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://developer.harness.io/docs/continuous-integration/get-started/harness-ci-intelligence" rel="noopener noreferrer"&gt;Harness CI Intelligence&lt;/a&gt; optimizes Docker builds by leveraging Docker Layer Caching (DLC) to reuse unchanged image layers, significantly reducing build times and resource costs. When enabled, DLC restores previously built layers, avoiding redundant processing and speeding up the build and push process. Harness CI supports DLC across both Harness Cloud and self-managed infrastructure, providing flexibility in managing cache storage. This intelligent caching mechanism enhances CI/CD efficiency by minimizing infrastructure usage and improving developer productivity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v3swfx21l5jv5lb0t1n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v3swfx21l5jv5lb0t1n.png" alt="Harness CI Intelligence Overview" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Benchmarking Build and Push to Docker&lt;/h2&gt;

&lt;p&gt;Check out the following video for a demo of a Build and Push to Docker step for a Go repository, run in both GitHub Actions and Harness CI, with Harness CI achieving an 8X improvement in build time.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/SOZxl761MCI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The following chart summarizes the performance comparison of this benchmark (Harness CI with DLC vs. GitHub Actions):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9thaiyhqhfuwn5wqtrmp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9thaiyhqhfuwn5wqtrmp.png" alt="Benchmark: Harness CI vs. GitHub Actions" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Implementing Docker Layer Caching in your CI/CD pipelines can lead to significant improvements in build performance, cost savings, and overall development efficiency. By reusing unchanged layers and minimizing redundant processing, Harness CI helps teams accelerate their workflows while optimizing infrastructure usage. Whether you're running builds in Harness Cloud or a self-managed environment, enabling DLC ensures faster feedback loops and a smoother development experience. Start leveraging Docker Layer Caching today to speed up your CI pipelines and focus on delivering value faster.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>caching</category>
      <category>harness</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>End-to-end MLOps CI/CD pipeline with Harness and AWS</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Wed, 01 May 2024 13:29:10 +0000</pubDate>
      <link>https://forem.com/harness/end-to-end-mlops-cicd-pipeline-with-harness-and-aws-4084</link>
      <guid>https://forem.com/harness/end-to-end-mlops-cicd-pipeline-with-harness-and-aws-4084</guid>
      <description>&lt;p&gt;MLOps tackles the complexities of building, testing, deploying, and monitoring machine learning models in real-world environments.&lt;/p&gt;

&lt;p&gt;Integrating machine learning into the traditional software development lifecycle poses unique challenges due to the intricacies of data, model versioning, scalability, and ongoing monitoring.&lt;/p&gt;

&lt;p&gt;In this tutorial, you'll create an end-to-end MLOps CI/CD pipeline that will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build and push an ML model to AWS ECR.&lt;/li&gt;
&lt;li&gt;Run security scans and tests.&lt;/li&gt;
&lt;li&gt;Deploy the model to AWS Lambda.&lt;/li&gt;
&lt;li&gt;Add policy enforcement and monitoring for the model.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The story&lt;/h2&gt;

&lt;p&gt;This tutorial uses a fictional bank called &lt;em&gt;Harness Bank&lt;/em&gt;. Assume that this fictional bank recently launched a website where clients can apply for a credit card. Based on the information provided in the form, the customer's application is approved or denied in seconds. This online credit card application is powered by a machine learning (ML) model trained on data that makes the decision accurate and unbiased.&lt;/p&gt;

&lt;p&gt;Assume that the current process to update this hypothetical ML model is manual. A data scientist builds a new image locally, runs tests, and manually ensures that the model passes the required threshold for accuracy and fairness.&lt;/p&gt;

&lt;p&gt;In this tutorial, you'll automate the model maintenance process and increase the build and delivery velocity.&lt;/p&gt;

&lt;h3&gt;Design and architecture&lt;/h3&gt;

&lt;p&gt;Before diving into the implementation, review the MLOps architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jgr5nwdx9ujfte6rqss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jgr5nwdx9ujfte6rqss.png" alt="Architecture Diagram" width="800" height="45"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this tutorial, assume you are given a Python data science project, and you are requested to do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build and push an image for this project.&lt;/li&gt;
&lt;li&gt;Run security scans on the container image.&lt;/li&gt;
&lt;li&gt;Upload model visualization data to S3.&lt;/li&gt;
&lt;li&gt;Publish model visualization data within the pipeline.&lt;/li&gt;
&lt;li&gt;Run tests on the model to determine its accuracy and fairness scores.&lt;/li&gt;
&lt;li&gt;Based on those scores, use Open Policy Agent (OPA) policies to either approve or deny the model.&lt;/li&gt;
&lt;li&gt;Deploy the model.&lt;/li&gt;
&lt;li&gt;Monitor the model and ensure the model is not outdated.&lt;/li&gt;
&lt;li&gt;Trigger the pipeline based on certain git events.&lt;/li&gt;
&lt;li&gt;(Optional) Add approval gates for production deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this tutorial, assume that the data is already processed.&lt;/p&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;This tutorial requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Harness account with access to the Continuous Integration, Continuous Delivery, and Security Testing Orchestration modules. If you are new to Harness, &lt;a href="https://app.harness.io/auth/#/signup/?&amp;amp;utm_campaign=cicd-devrel"&gt;you can sign up for free&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;An AWS account, credentials, and a Harness AWS connector.&lt;/li&gt;
&lt;li&gt;A GitHub account, credentials, and a Harness GitHub connector.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Prepare AWS&lt;/h3&gt;

&lt;p&gt;You need an AWS account with sufficient permissions to create/modify/view resources used in this tutorial.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Prepare AWS credentials.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This tutorial requires two sets of AWS credentials. One set is for a &lt;a href="https://dev.to/docs/platform/connectors/cloud-providers/add-aws-connector"&gt;Harness AWS connector&lt;/a&gt;, and the other is for the &lt;a href="https://dev.to/docs/security-testing-orchestration/sto-techref-category/aws-ecr-scanner-reference"&gt;AWS ECR scanner for STO&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can use the AWS Vault plugin to generate AWS credentials for the AWS connector, and you can use the AWS console to generate the AWS Access Key ID, AWS Secret Access Key, and AWS Session Token, which are valid for a limited time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Save these credentials securely and make a note of your AWS account ID and AWS region.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ NOTE &lt;/p&gt;

&lt;p&gt;If you are using a personal, non-production AWS account for this tutorial, you can initially grant admin access for these credentials. Once the demo works, reduce access to adhere to the principle of least privilege.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create ECR repos. From your AWS console, navigate to Elastic Container Registry (ECR) and create two private repositories named &lt;code&gt;ccapproval&lt;/code&gt; and &lt;code&gt;ccapproval-deploy&lt;/code&gt;. Under &lt;strong&gt;Image scan settings&lt;/strong&gt;, enable &lt;strong&gt;Scan on Push&lt;/strong&gt; for both repositories.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an S3 bucket. Navigate to S3 and create a bucket named something like &lt;code&gt;mlopswebapp&lt;/code&gt;. You'll use this bucket to host a static website for the credit card approval application demo, along with a few other artifacts.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Make sure all options under &lt;strong&gt;Block public access (bucket settings)&lt;/strong&gt; are unchecked, and then apply the following bucket policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": "*",
               "Action": "s3:GetObject",
               "Resource": "arn:aws:s3:::YOUR_S3_BUCKET_NAME/*"
           }
       ]
   }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
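&lt;p&gt;If you prefer to script this step, the same policy can be generated and applied with boto3. A sketch (the helper name and bucket name are illustrative, and actually applying the policy requires valid AWS credentials):&lt;/p&gt;

```python
import json

def public_read_policy(bucket_name):
    """Return the public-read bucket policy shown above for the given bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }

policy_json = json.dumps(public_read_policy("mlopswebapp"))
# To apply it (requires AWS credentials):
#   import boto3
#   boto3.client("s3").put_bucket_policy(Bucket="mlopswebapp", Policy=policy_json)
print(policy_json)
```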



&lt;p&gt;After making the bucket public, your bucket page should show a &lt;code&gt;Publicly accessible&lt;/code&gt; flag.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjt3rp83u0uou448h219.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjt3rp83u0uou448h219.png" alt="S3 bucket is public" width="688" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;From your AWS console, go to &lt;strong&gt;AWS Lambda&lt;/strong&gt;, select &lt;strong&gt;Functions&lt;/strong&gt;, and create a function from a container image using the following configuration:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Name: &lt;code&gt;creditcardapplicationlambda&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Container image URI: Select &lt;strong&gt;Browse images&lt;/strong&gt; and find the &lt;code&gt;ccapproval-deploy&lt;/code&gt; image. You can choose any image tag.&lt;/li&gt;
&lt;li&gt;Architecture: &lt;code&gt;x86_64&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;From Advanced: Select &lt;strong&gt;Enable function URL&lt;/strong&gt; to make the function URL public. Anyone with the URL can access your function. For more information, go to the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html"&gt;AWS documentation on Lambda function URLs&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Create function&lt;/strong&gt; to create the function. You'll notice an info banner confirming that the function URL is public.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28pegemyg7faw1xaclkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28pegemyg7faw1xaclkl.png" alt="Lambda Function URL" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Prepare GitHub&lt;/h3&gt;

&lt;p&gt;This tutorial uses a GitHub account for source control management.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fork the &lt;a href="https://github.com/harness-community/mlops-creditcard-approval-model"&gt;MLops sample app repository&lt;/a&gt; into your GitHub account.&lt;/li&gt;
&lt;li&gt;Create a &lt;a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens"&gt;GitHub personal access token&lt;/a&gt; with following permissions on your forked repository:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;repo/content: read+write&lt;/li&gt;
&lt;li&gt;repo/pull requests: read&lt;/li&gt;
&lt;li&gt;repo/webhooks: read+write&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Create Harness secrets&lt;/h3&gt;

&lt;p&gt;Store your GitHub and AWS credentials as secrets in Harness.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In your Harness account, create or select a &lt;a href="https://dev.to/docs/platform/organizations-and-projects/projects-and-organizations"&gt;project&lt;/a&gt; to use for this tutorial.&lt;/li&gt;
&lt;li&gt;In your project settings, select &lt;strong&gt;Secrets&lt;/strong&gt;, select &lt;strong&gt;New Secret&lt;/strong&gt;, and then select &lt;strong&gt;Text&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Create the following &lt;a href="https://dev.to/docs/platform/secrets/add-use-text-secrets"&gt;Harness text secrets&lt;/a&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;git_pat&lt;/code&gt; - GitHub personal access token&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aws_access_key_id&lt;/code&gt; - Generated from AWS console&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aws_secret_access_key&lt;/code&gt; - Generated from AWS console&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aws_session_token&lt;/code&gt; - Generated from AWS console&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aws_vault_secret&lt;/code&gt; - Secret access key generated by Vault plugin&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Make sure the &lt;strong&gt;Name&lt;/strong&gt; and &lt;strong&gt;ID&lt;/strong&gt; match for each secret, because you reference secrets by their IDs in Harness pipelines.&lt;/p&gt;

&lt;h3&gt;Create AWS and GitHub connectors&lt;/h3&gt;

&lt;p&gt;Create Harness connectors to connect to your AWS and GitHub accounts.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In your Harness project settings, go to &lt;strong&gt;Connectors&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;New Connector&lt;/strong&gt;, select the &lt;strong&gt;AWS&lt;/strong&gt; connector, and then create an &lt;a href="https://dev.to/docs/platform/connectors/cloud-providers/add-aws-connector"&gt;AWS connector&lt;/a&gt; with the following configuration:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Name: &lt;code&gt;mlopsawsconnector&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Access Key: AWS Vault plugin generated&lt;/li&gt;
&lt;li&gt;Secret Key: Use  your &lt;code&gt;aws_vault_secret&lt;/code&gt; secret&lt;/li&gt;
&lt;li&gt;Connectivity Mode: Connect through Harness Platform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Leave all other settings as is, and make sure the connection test passes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyexcf60gf379vupzg2o3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyexcf60gf379vupzg2o3.png" alt="Connector Connectivity Status" width="421" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create another connector. This time, select the &lt;a href="https://dev.to/docs/platform/connectors/code-repositories/ref-source-repo-provider/git-hub-connector-settings-reference"&gt;GitHub connector&lt;/a&gt; and use the following configuration:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Name: &lt;code&gt;mlopsgithubconnector&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;URL Type: &lt;code&gt;Repository&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Connection Type: &lt;code&gt;HTTP&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;GitHub Repository URL: Enter the URL to your fork of the demo repo, such as &lt;code&gt;https://github.com/:gitHubUsername/mlops-creditcard-approval-model&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Username: Enter your GitHub username&lt;/li&gt;
&lt;li&gt;Personal Access Token: Use your &lt;code&gt;git_pat&lt;/code&gt; secret&lt;/li&gt;
&lt;li&gt;Connectivity Mode: Connect through Harness Platform&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Create the Harness pipeline&lt;/h2&gt;

&lt;p&gt;In Harness, you create pipelines to represent workflows. A pipeline can have multiple stages, and each stage can have multiple steps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In your Harness project, &lt;a href="///docs/continuous-integration/use-ci/prep-ci-pipeline-components.md"&gt;create a pipeline&lt;/a&gt; named &lt;code&gt;Credit Card Approval MLops&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Add a &lt;strong&gt;Build&lt;/strong&gt; stage named &lt;code&gt;Train Model&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Make sure &lt;strong&gt;Clone Codebase&lt;/strong&gt; is enabled.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Third-party Git provider&lt;/strong&gt;, and then select your &lt;code&gt;mlopsgithubconnector&lt;/code&gt; GitHub connector. The repository name should populate automatically.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Set Up Stage&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the following sections of this tutorial, you'll configure this stage to build and push the data science image, and you'll add more stages to the pipeline to meet the tutorial's objectives.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ NOTE&lt;/p&gt;

&lt;p&gt;You can find a &lt;a href="https://github.com/harness-community/mlops-creditcard-approval-model/blob/main/sample-mlops-pipeline.yaml"&gt;sample pipeline for this tutorial in the demo repo&lt;/a&gt;. If you use this pipeline, you must replace the placeholder and sample values accordingly.&lt;/p&gt;
&lt;/blockquote&gt;
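&lt;p&gt;For orientation, a heavily trimmed skeleton of the pipeline you'll build looks roughly like this (stage names from this tutorial; see the sample pipeline in the demo repo for the full, authoritative version):&lt;/p&gt;

```yaml
pipeline:
  name: Credit Card Approval MLops
  stages:
    - stage:
        name: Train Model
        type: CI
        spec:
          cloneCodebase: true
          # Build and Push to ECR, wait, and AWS ECR Scan steps go here
    - stage:
        name: Run test and upload artifacts
        type: CI
        spec:
          cloneCodebase: true
          # pytest, Lambda image build, and S3 upload steps go here
```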

&lt;h3&gt;Build, push, and scan the image&lt;/h3&gt;

&lt;p&gt;Configure your &lt;code&gt;Train Model&lt;/code&gt; stage to build and push the data science image and then retrieve the ECR scan results.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the &lt;strong&gt;Infrastructure&lt;/strong&gt; tab and configure the build infrastructure for the &lt;code&gt;Train Model&lt;/code&gt; stage:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Cloud&lt;/strong&gt; to use &lt;a href="https://dev.to/docs/continuous-integration/use-ci/set-up-build-infrastructure/use-harness-cloud-build-infrastructure"&gt;Harness Cloud build infrastructure&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Platform&lt;/strong&gt;, select &lt;strong&gt;Linux&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Architecture&lt;/strong&gt;, select &lt;strong&gt;AMD64&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select the &lt;strong&gt;Execution&lt;/strong&gt; tab to add steps to the stage.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Add Step&lt;/strong&gt;, select the &lt;a href="https://dev.to/docs/continuous-integration/use-ci/build-and-upload-artifacts/build-and-push/build-and-push-to-ecr-step-settings"&gt;Build and Push to ECR step&lt;/a&gt;, and configure the step as follows:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Harness Training&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;AWS Connector&lt;/strong&gt;, select your &lt;code&gt;mlopsawsconnector&lt;/code&gt; AWS connector.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Region&lt;/strong&gt;, enter your AWS region.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Account ID&lt;/strong&gt;, enter your AWS account ID.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Image Name&lt;/strong&gt;, enter &lt;code&gt;ccapproval&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Tags&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;. You can select the &lt;strong&gt;Input type&lt;/strong&gt; icon to change the input type to expression (&lt;strong&gt;f(x)&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Dockerfile&lt;/strong&gt; (under &lt;strong&gt;Optional Configuration&lt;/strong&gt;), enter &lt;code&gt;Dockerfile_Training_Testing&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Apply Changes&lt;/strong&gt; to save the step, and then select &lt;strong&gt;Save&lt;/strong&gt; to save the pipeline.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Next, you'll add steps to your build stage to retrieve the results of the ECR repo security scan.&lt;/p&gt;

&lt;p&gt;Because scanning is enabled on your ECR repositories, each image pushed to the repo by the &lt;strong&gt;Build and Push to ECR&lt;/strong&gt; step is scanned for vulnerabilities. In order to successfully retrieve the scan results, your pipeline needs to wait for the scan to finish and then request the results.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Before adding the step that retrieves the scan result, use a &lt;a href="https://dev.to/docs/continuous-integration/use-ci/run-step-settings"&gt;Run step&lt;/a&gt; to add a 15-second wait so the scan can complete before the pipeline requests the results. (For a more robust pipeline, you could poll instead, for example with &lt;code&gt;aws ecr wait image-scan-complete&lt;/code&gt;.)&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Add Step&lt;/strong&gt; after the &lt;code&gt;Harness Training&lt;/code&gt; step, and select the &lt;strong&gt;Run&lt;/strong&gt; step.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Wait for ECR Image Scan&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Command&lt;/strong&gt;, enter the following, and then select &lt;strong&gt;Apply Changes&lt;/strong&gt; to save the step.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ECR Image Scan In Progress..."&lt;/span&gt;
   &lt;span class="nb"&gt;sleep &lt;/span&gt;15
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Add an &lt;a href="https://dev.to/docs/security-testing-orchestration/sto-techref-category/aws-ecr-scanner-reference"&gt;AWS ECR Scan step&lt;/a&gt; to get the scan results.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Add Step&lt;/strong&gt; after the &lt;code&gt;Wait for ECR Image Scan&lt;/code&gt; step, and select the &lt;strong&gt;AWS ECR Scan&lt;/strong&gt; step.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Security Scans for ML Model&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Target/Name&lt;/strong&gt;, enter &lt;code&gt;ccapproval-ecr-scan&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Variant&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;. You can select the &lt;strong&gt;Input type&lt;/strong&gt; icon to change the input type to expression (&lt;strong&gt;f(x)&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Container Image/Name&lt;/strong&gt;, enter &lt;code&gt;ccapproval&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Container Image/Tag&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Region&lt;/strong&gt;, enter your AWS region.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Authentication&lt;/strong&gt;, use &lt;a href="https://dev.to/docs/platform/variables-and-expressions/harness-variables"&gt;Harness expressions&lt;/a&gt; referencing your AWS credential secrets:

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Access ID&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+secrets.getValue("aws_access_key_id")&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Access Token&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+secrets.getValue("aws_secret_access_key")&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Access Region&lt;/strong&gt;, enter your AWS region.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Log Level&lt;/strong&gt;, enter &lt;code&gt;Info&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Settings&lt;/strong&gt;, add the following key-value pair: &lt;code&gt;AWS_SESSION_TOKEN: &amp;lt;+secrets.getValue("aws_session_token")&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Apply Changes&lt;/strong&gt; to save the step, and then select &lt;strong&gt;Save&lt;/strong&gt; to save the pipeline.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At this point, you can run the pipeline to test the Build stage.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Run Pipeline&lt;/strong&gt; to test the Build stage. For &lt;strong&gt;Git Branch&lt;/strong&gt;, enter &lt;code&gt;main&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Wait while the pipeline runs, and then check your &lt;code&gt;ccapproval&lt;/code&gt; ECR repository to find an image with a SHA matching the pipeline execution ID. Select &lt;strong&gt;Copy URI&lt;/strong&gt; to copy the image URI; you'll need it in the next section.&lt;/li&gt;
&lt;li&gt;Make sure the image scan also ran. In the Harness &lt;a href="https://dev.to/docs/continuous-integration/use-ci/viewing-builds"&gt;Build details&lt;/a&gt;, you can find the scan results in the &lt;strong&gt;AWS ECR Scan&lt;/strong&gt; step logs. For example:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   Scan Results: &lt;span class="o"&gt;{&lt;/span&gt;
       &lt;span class="s2"&gt;"jobId"&lt;/span&gt;: &lt;span class="s2"&gt;"xlf06YX6a8AupG_5igGA6I"&lt;/span&gt;,
       &lt;span class="s2"&gt;"status"&lt;/span&gt;: &lt;span class="s2"&gt;"Succeeded"&lt;/span&gt;,
       &lt;span class="s2"&gt;"issuesCount"&lt;/span&gt;: 10,
       &lt;span class="s2"&gt;"newIssuesCount"&lt;/span&gt;: 10,
      &lt;span class="s2"&gt;"issuesBySeverityCount"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
           &lt;span class="s2"&gt;"ExternalPolicyFailures"&lt;/span&gt;: 0,
           &lt;span class="s2"&gt;"NewCritical"&lt;/span&gt;: 0,
           &lt;span class="s2"&gt;"NewHigh"&lt;/span&gt;: 1,
           &lt;span class="s2"&gt;"NewMedium"&lt;/span&gt;: 5,
           &lt;span class="s2"&gt;"NewLow"&lt;/span&gt;: 4,
           &lt;span class="s2"&gt;"NewInfo"&lt;/span&gt;: 0,
           &lt;span class="s2"&gt;"Unassigned"&lt;/span&gt;: 0,
           &lt;span class="s2"&gt;"NewUnassigned"&lt;/span&gt;: 0,
           &lt;span class="s2"&gt;"Critical"&lt;/span&gt;: 0,
           &lt;span class="s2"&gt;"High"&lt;/span&gt;: 1,
           &lt;span class="s2"&gt;"Medium"&lt;/span&gt;: 5,
           &lt;span class="s2"&gt;"Low"&lt;/span&gt;: 4,
           &lt;span class="s2"&gt;"Info"&lt;/span&gt;: 0,
           &lt;span class="s2"&gt;"Ignored"&lt;/span&gt;: 0
       &lt;span class="o"&gt;}&lt;/span&gt;
   &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
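&lt;p&gt;If you later want to gate the pipeline on these counts, the check is simple to express. A plain-Python sketch (field names taken from the sample output above; the thresholds are arbitrary):&lt;/p&gt;

```python
import json

# Trimmed version of the sample scan output shown above
SAMPLE_SCAN = json.loads("""
{"status": "Succeeded",
 "issuesCount": 10,
 "issuesBySeverityCount": {"Critical": 0, "High": 1, "Medium": 5, "Low": 4}}
""")

def scan_passes(scan, max_critical=0, max_high=1):
    """Return True when the scan succeeded and severity counts are within limits."""
    if scan["status"] != "Succeeded":
        return False
    counts = scan["issuesBySeverityCount"]
    if counts["Critical"] > max_critical or counts["High"] > max_high:
        return False
    return True

print(scan_passes(SAMPLE_SCAN))
```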



&lt;p&gt;You've successfully completed the first part of this tutorial: Configuring a Build stage that builds, pushes, and scans a trained data science image.&lt;/p&gt;

&lt;p&gt;Continue to the next sections to keep building your MLOps pipeline.&lt;/p&gt;

&lt;h3&gt;Test and upload artifacts&lt;/h3&gt;

&lt;p&gt;Add another &lt;strong&gt;Build&lt;/strong&gt; stage to your pipeline that will run tests, build a Lambda image, and upload artifacts to S3.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Edit your MLOps pipeline and add another &lt;strong&gt;Build&lt;/strong&gt; stage after the &lt;code&gt;Train Model&lt;/code&gt; stage. Name the stage &lt;code&gt;Run test and upload artifacts&lt;/code&gt; and make sure &lt;strong&gt;Clone Codebase&lt;/strong&gt; is enabled.&lt;/li&gt;
&lt;li&gt;On the stage's &lt;strong&gt;Overview&lt;/strong&gt; tab, locate &lt;strong&gt;Shared Paths&lt;/strong&gt;, and add &lt;code&gt;/harness/output&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Infrastructure&lt;/strong&gt; tab, select &lt;strong&gt;Propagate from existing stage&lt;/strong&gt;, and select your &lt;code&gt;Train Model&lt;/code&gt; stage.&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Execution&lt;/strong&gt; tab, add a &lt;strong&gt;Run&lt;/strong&gt; step to run pytest on the demo codebase. Select &lt;strong&gt;Add Step&lt;/strong&gt;, select the &lt;strong&gt;Run&lt;/strong&gt; step, and configure it as follows:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;pytest&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Shell&lt;/strong&gt;, select &lt;strong&gt;Sh&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Command&lt;/strong&gt;, enter:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   pytest --nbval-lax credit_card_approval.ipynb --junitxml=report.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Under &lt;strong&gt;Optional Configuration&lt;/strong&gt;, locate &lt;strong&gt;Container Registry&lt;/strong&gt;, and select your &lt;code&gt;mlopsawsconnector&lt;/code&gt; AWS connector.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Optional Configuration&lt;/strong&gt;, locate &lt;strong&gt;Image&lt;/strong&gt;, and enter the image URI from your &lt;code&gt;Train Model&lt;/code&gt; stage execution with the image tag replaced with &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;. For example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   AWS_ACCOUNT_ID.dkr.ecr.AWS_REGION.amazonaws.com/AWS_ECR_REPO_NAME:&amp;lt;+pipeline.executionId&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The data science project includes two Dockerfiles: one for building the source and one for AWS Lambda deployment. Next, you'll add a step to build and push the image using the Dockerfile designed for AWS Lambda deployment.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Add Step&lt;/strong&gt; and add a &lt;strong&gt;Build and Push to ECR&lt;/strong&gt; step configured as follows:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Build and Push Lambda Deployment Image&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;AWS Connector&lt;/strong&gt;, select your &lt;code&gt;mlopsawsconnector&lt;/code&gt; AWS connector.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Region&lt;/strong&gt;, enter your AWS region.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Account ID&lt;/strong&gt;, enter your AWS account ID.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Image Name&lt;/strong&gt;, enter &lt;code&gt;ccapproval-deploy&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Tags&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Optional Configuration&lt;/strong&gt;, locate &lt;strong&gt;Dockerfile&lt;/strong&gt;, and enter &lt;code&gt;Dockerfile_Inference_Lambda&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;pytest&lt;/code&gt; command from the &lt;strong&gt;Run&lt;/strong&gt; step generates an HTML file with some visualizations for the demo ML model. Next, add steps to upload the visualizations artifact to your AWS S3 bucket and post the artifact URL on the Artifacts tab of the &lt;a href="https://dev.to/docs/continuous-integration/use-ci/viewing-builds"&gt;Build details page&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Add Step&lt;/strong&gt;, and add an &lt;a href="https://dev.to/docs/continuous-integration/use-ci/build-and-upload-artifacts/upload-artifacts/upload-artifacts-to-s3"&gt;Upload Artifacts to S3 step&lt;/a&gt; configured as follows:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Upload artifacts to S3&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;AWS Connector&lt;/strong&gt;, select your &lt;code&gt;mlopsawsconnector&lt;/code&gt; AWS connector.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Region&lt;/strong&gt;, enter your AWS region.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Bucket&lt;/strong&gt;, enter your S3 bucket name.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Source Path&lt;/strong&gt;, enter &lt;code&gt;/harness/output/model_metrics.html&lt;/code&gt;. This is where the model visualization file from the &lt;code&gt;pytest&lt;/code&gt; step is stored.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Use the &lt;a href="https://dev.to/docs/continuous-integration/use-ci/build-and-upload-artifacts/artifacts-tab"&gt;Artifact Metadata Publisher plugin&lt;/a&gt; to post the visualization artifact URL on the build's Artifacts tab.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Add a &lt;strong&gt;Plugin&lt;/strong&gt; step after the &lt;strong&gt;Upload Artifacts to S3&lt;/strong&gt; step.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Publish ML model visualization&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Container Registry&lt;/strong&gt;, select the built-in &lt;strong&gt;Harness Docker Connector&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Image&lt;/strong&gt;, enter &lt;code&gt;plugins/artifact-metadata-publisher&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Optional Configuration&lt;/strong&gt;, locate &lt;strong&gt;Settings&lt;/strong&gt;, and add the following key-value pairs:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   file_urls: https://S3_BUCKET_NAME.s3.AWS_REGION.amazonaws.com/harness/output/model_metrics.html
   artifact_file: artifact.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
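&lt;p&gt;The &lt;code&gt;file_urls&lt;/code&gt; value follows the virtual-hosted-style S3 object URL pattern. As a quick sanity check, you can assemble the URL yourself; this sketch uses placeholder bucket and region values:&lt;/p&gt;

```python
# Sketch: assemble the virtual-hosted-style S3 object URL used in file_urls.
# The bucket name and region below are placeholders.
def s3_object_url(bucket, region, key):
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key.lstrip('/')}"

print(s3_object_url("my-mlops-bucket", "us-east-1", "harness/output/model_metrics.html"))
```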



&lt;p&gt;In addition to the model visualization, the &lt;code&gt;pytest&lt;/code&gt; command also generates a &lt;code&gt;shared_env_variables.txt&lt;/code&gt; file to export the model's accuracy and fairness metrics. However, this data is lost when the build ends because Harness stages run in isolated containers. Therefore, you must add a step to export the &lt;code&gt;ACCURACY&lt;/code&gt; and &lt;code&gt;EQUAL_OPPORTUNITY_FAIRNESS_PERCENT&lt;/code&gt; values as &lt;a href="///docs/continuous-integration/use-ci/run-step-settings.md#output-variables"&gt;output variables&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After the &lt;strong&gt;Plugin&lt;/strong&gt; step, add a &lt;strong&gt;Run&lt;/strong&gt; step configured as follows:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Export accuracy and fairness variables&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Shell&lt;/strong&gt;, select &lt;strong&gt;Sh&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Command&lt;/strong&gt;, enter:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   # File path
   FILE_PATH="/harness/output/shared_env_variables.txt"

   # Read the file and export variables
   while IFS='=' read -r key value; do
       case $key in
           ACCURACY)
               export ACCURACY="$value"
               ;;
           EQUAL_OPPORTUNITY_FAIRNESS_PERCENT)
               export EQUAL_OPPORTUNITY_FAIRNESS_PERCENT="$value"
               ;;
           *)
               echo "Ignoring unknown variable: $key"
               ;;
       esac
   done &amp;lt; "$FILE_PATH"

   echo $ACCURACY
   echo $EQUAL_OPPORTUNITY_FAIRNESS_PERCENT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
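&lt;p&gt;For clarity, the shell loop above is a plain &lt;code&gt;KEY=VALUE&lt;/code&gt; parser. A minimal Python sketch of the same logic, assuming one pair per line in &lt;code&gt;shared_env_variables.txt&lt;/code&gt;:&lt;/p&gt;

```python
# Sketch: same KEY=VALUE parsing as the shell loop above,
# keeping only the two variables the step exports.
WANTED = {"ACCURACY", "EQUAL_OPPORTUNITY_FAIRNESS_PERCENT"}

def parse_env_file(lines):
    values = {}
    for line in lines:
        key, sep, value = line.strip().partition("=")
        if not sep:
            continue  # skip lines without '='
        if key in WANTED:
            values[key] = value
        else:
            print(f"Ignoring unknown variable: {key}")
    return values

# Example lines in the format the notebook writes:
print(parse_env_file(["ACCURACY=0.92662", "EQUAL_OPPORTUNITY_FAIRNESS_PERCENT=20.8"]))
```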



&lt;ul&gt;
&lt;li&gt;Under &lt;strong&gt;Optional Configuration&lt;/strong&gt;, locate &lt;strong&gt;Output Variables&lt;/strong&gt;, and add the following two output variables:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   ACCURACY
   EQUAL_OPPORTUNITY_FAIRNESS_PERCENT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Save the pipeline, and then run it. Again, use &lt;code&gt;main&lt;/code&gt; for the &lt;strong&gt;Git Branch&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Wait while the pipeline runs, and then make sure:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Your &lt;code&gt;ccapproval&lt;/code&gt; and &lt;code&gt;ccapproval-deploy&lt;/code&gt; ECR repositories have images tagged with the pipeline execution ID.&lt;/li&gt;
&lt;li&gt;Your S3 bucket has &lt;code&gt;/harness/output/model_metrics.html&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The URL to the &lt;code&gt;model_metrics&lt;/code&gt; artifact appears on the Artifacts tab in Harness.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0j9hmzt04vjez8vzlml8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0j9hmzt04vjez8vzlml8.png" alt="Artifacts Tab" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The output variable values are in the log for the &lt;code&gt;Export accuracy and fairness variables&lt;/code&gt; step, such as:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   0.92662
   20.799999999999997
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congratulations! So far, you've completed half the requirements for this MLOps project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[x] Build and push an image for this project.&lt;/li&gt;
&lt;li&gt;[x] Run security scans on the container image.&lt;/li&gt;
&lt;li&gt;[x] Upload model visualization data to S3.&lt;/li&gt;
&lt;li&gt;[x] Publish model visualization data within the pipeline.&lt;/li&gt;
&lt;li&gt;[x] Run tests on the model to determine accuracy and fairness scores.&lt;/li&gt;
&lt;li&gt;[ ] Based on those scores, use Open Policy Agent (OPA) policies to either approve or deny the model.&lt;/li&gt;
&lt;li&gt;[ ] Deploy the model.&lt;/li&gt;
&lt;li&gt;[ ] Monitor the model and ensure the model is not outdated.&lt;/li&gt;
&lt;li&gt;[ ] Trigger the pipeline based on certain git events.&lt;/li&gt;
&lt;li&gt;[ ] (Optional) Add approval gates for production deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Continue on with policy enforcement in the next section.&lt;/p&gt;

&lt;h3&gt;
  
  
  Add ML model policy checks
&lt;/h3&gt;

&lt;p&gt;In this section, you'll author OPA policies in Harness and use a &lt;strong&gt;Custom&lt;/strong&gt; stage to add policy enforcement to your pipeline.&lt;/p&gt;

&lt;p&gt;Harness &lt;a href="https://dev.to/docs/platform/governance/policy-as-code/harness-governance-overview"&gt;Policy As Code&lt;/a&gt; uses Open Policy Agent (OPA) as the central service to store and enforce policies for the different entities and processes across the Harness platform. You create individual policies, add them to policy sets, and select the entities (such as pipelines) to evaluate those policies against.&lt;/p&gt;

&lt;p&gt;For this tutorial, the policy requirements are that the model accuracy is over 90% and the fairness margin for equal opportunity is under 21%.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In your Harness project settings, go to &lt;strong&gt;Policies&lt;/strong&gt;, select the &lt;strong&gt;Policies&lt;/strong&gt; tab, and then select &lt;strong&gt;New Policy&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Check fairness and accuracy scores&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;How do you want to setup your Policy&lt;/strong&gt;, select &lt;strong&gt;Inline&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Enter the following policy definition, and then select &lt;strong&gt;Save&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rego"&gt;&lt;code&gt;   &lt;span class="ow"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

   &lt;span class="ow"&gt;default&lt;/span&gt; &lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

   &lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;accuracy&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.9&lt;/span&gt;
       &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fairnessScoreEqualOpportunity&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="m"&gt;21&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="n"&gt;deny&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;allow&lt;/span&gt;
       &lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Deny: Accuracy less than 90% or fairness score difference greater than 21%"&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
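&lt;p&gt;Harness evaluates the Rego policy itself, but it can help to sanity-check the threshold logic before saving it. This plain-Python sketch mirrors the &lt;code&gt;allow&lt;/code&gt;/&lt;code&gt;deny&lt;/code&gt; rules above and is for local reasoning only:&lt;/p&gt;

```python
# Sketch: plain-Python mirror of the Rego policy's allow/deny rules above.
def evaluate_model_policy(accuracy, fairness_equal_opportunity):
    """Return (allowed, deny_messages), matching the policy thresholds."""
    allowed = accuracy >= 0.9 and fairness_equal_opportunity <= 21
    deny_messages = []
    if not allowed:
        deny_messages.append(
            "Deny: Accuracy less than 90% or fairness score difference greater than 21%"
        )
    return allowed, deny_messages

# The tutorial's example model (~92.7% accuracy, ~20.8% fairness margin) passes:
print(evaluate_model_policy(0.92662, 20.8))
# A model below the accuracy threshold is denied:
print(evaluate_model_policy(0.85, 20.8))
```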



&lt;ol&gt;
&lt;li&gt;Select the &lt;strong&gt;Policy Sets&lt;/strong&gt; tab, and then select &lt;strong&gt;New Policy Set&lt;/strong&gt;. Use the following configuration:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Credit Card Approval Policy Set&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Entity Type that this policy set applies to&lt;/strong&gt;, select &lt;strong&gt;Custom&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;On what event should the policy set be evaluated&lt;/strong&gt;, select &lt;strong&gt;On Step&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Add Policy&lt;/strong&gt;, and select your &lt;code&gt;Check fairness and accuracy scores&lt;/code&gt; policy.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;What should happen if a policy fails?&lt;/strong&gt;, select &lt;strong&gt;Warn and Continue&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Finish&lt;/strong&gt;, and make sure the &lt;strong&gt;Enforced&lt;/strong&gt; switch is enabled.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Edit your MLOps pipeline, and add a &lt;strong&gt;Custom&lt;/strong&gt; stage after the second &lt;strong&gt;Build&lt;/strong&gt; stage. Name the stage &lt;code&gt;Model Policy Checks&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Select a Harness Delegate to use for the &lt;strong&gt;Custom&lt;/strong&gt; stage.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The &lt;strong&gt;Build&lt;/strong&gt; stages run on Harness Cloud build infrastructure, which doesn't require a Harness Delegate. However, &lt;strong&gt;Custom&lt;/strong&gt; stages can't use this build infrastructure, so you need a &lt;a href="https://dev.to/docs/platform/delegates/delegate-concepts/delegate-overview"&gt;Harness Delegate&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you don't already have one, &lt;a href="https://developer.harness.io/docs/platform/get-started/tutorials/install-delegate"&gt;install a delegate&lt;/a&gt;. Then, on the &lt;strong&gt;Custom&lt;/strong&gt; stage's &lt;strong&gt;Advanced&lt;/strong&gt; tab, select your delegate in &lt;strong&gt;Define Delegate Selector&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add a &lt;strong&gt;Shell Script&lt;/strong&gt; step to relay the accuracy and fairness output variables from the previous stage to the current stage. Configure the &lt;strong&gt;Shell Script&lt;/strong&gt; step as follows:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Accuracy and Fairness&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Timeout&lt;/strong&gt;, enter &lt;code&gt;10m&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Script Type&lt;/strong&gt;, select &lt;strong&gt;Bash&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Select script location&lt;/strong&gt;, select &lt;strong&gt;Inline&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For &lt;strong&gt;Script&lt;/strong&gt;, enter the following:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  accuracy=&amp;lt;+pipeline.stages.Harness_Training.spec.execution.steps.Export_accuracy_and_fairness_variables.output.outputVariables.ACCURACY&amp;gt;
  fairness_equalopportunity=&amp;lt;+pipeline.stages.Harness_Training.spec.execution.steps.Export_accuracy_and_fairness_variables.output.outputVariables.EQUAL_OPPORTUNITY_FAIRNESS_PERCENT&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Under &lt;strong&gt;Optional Configuration&lt;/strong&gt;, locate &lt;strong&gt;Script Output Variables&lt;/strong&gt;, and add the following two variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;accuracy&lt;/code&gt; - String - &lt;code&gt;accuracy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fairness_equalopportunity&lt;/code&gt; - String - &lt;code&gt;fairness_equalopportunity&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ NOTE&lt;/p&gt;

&lt;p&gt;While you can feed the output variables directly into &lt;strong&gt;Policy&lt;/strong&gt; steps, this &lt;strong&gt;Shell Script&lt;/strong&gt; step is a useful debugging measure that ensures the accuracy and fairness variables are populated correctly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Add a &lt;strong&gt;Policy&lt;/strong&gt; step after the &lt;strong&gt;Shell Script&lt;/strong&gt; step.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Enforce Fairness and Accuracy Policy&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Timeout&lt;/strong&gt;, enter &lt;code&gt;10m&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Entity Type&lt;/strong&gt;, select &lt;strong&gt;Custom&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Policy Set&lt;/strong&gt;, select your &lt;code&gt;Credit Card Approval Policy Set&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For &lt;strong&gt;Payload&lt;/strong&gt;, enter the following:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  {
      "accuracy": &amp;lt;+execution.steps.Accuracy_and_Fairness.output.outputVariables.accuracy&amp;gt;,
      "fairnessScoreEqualOpportunity": &amp;lt;+execution.steps.Accuracy_and_Fairness.output.outputVariables.fairness_equalopportunity&amp;gt;
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Save the pipeline.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The next time the pipeline runs, the policy is enforced to check if the model accuracy and fairness margins are within the acceptable limits. If not, the pipeline produces a warning and then continues (according to the policy set configuration). You could also configure the policy set so that the pipeline fails if there is a policy violation.&lt;/p&gt;

&lt;p&gt;If you want to test the response to a policy violation, you can modify the policy definition's &lt;code&gt;allow&lt;/code&gt; section to be more strict, such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rego"&gt;&lt;code&gt;&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;accuracy&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.95&lt;/span&gt;
    &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fairnessScoreEqualOpportunity&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="m"&gt;19&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since the model accuracy is around 92% and the fairness margin is around 20%, this policy definition should produce a warning. Make sure to revert the change to the policy definition once you're done experimenting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy AWS Lambda function
&lt;/h2&gt;

&lt;p&gt;In Harness, you can specify the location of a function definition, artifact, and AWS account, and then Harness deploys the Lambda function and automatically routes traffic from the old version of the Lambda function to the new version on each deployment. In this part of the tutorial, you'll update an existing Lambda function by adding a &lt;strong&gt;Deploy&lt;/strong&gt; stage with service, environment, and infrastructure definitions.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Edit your MLOps pipeline, and add a &lt;strong&gt;Deploy&lt;/strong&gt; stage named &lt;code&gt;lambdadeployment&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Deployment Type&lt;/strong&gt;, select &lt;strong&gt;AWS Lambda&lt;/strong&gt;, and then select &lt;strong&gt;Continue&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Create a &lt;a href="///docs/continuous-delivery/get-started/key-concepts.md#service"&gt;service definition&lt;/a&gt; for the Lambda deployment.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Add Service&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;creditcardapproval-lambda-service&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Set up service&lt;/strong&gt;, select &lt;strong&gt;Inline&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Deployment Type&lt;/strong&gt;, select &lt;strong&gt;AWS Lambda&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;AWS Lambda Function Definition&lt;/strong&gt;, for &lt;strong&gt;Manifest Identifier&lt;/strong&gt;, enter &lt;code&gt;lambdadefinition&lt;/code&gt;, and for &lt;strong&gt;File/Folder Path&lt;/strong&gt;, enter &lt;code&gt;/lambdamanifest&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;After creating the manifest under Harness File Store, add the following to the service manifest, and select &lt;strong&gt;Save&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   functionName: `creditcardapplicationlambda`
   role: LAMBDA_FUNCTION_ARN
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;LAMBDA_FUNCTION_ARN&lt;/code&gt; with your Lambda function's ARN. You can find the &lt;strong&gt;Function ARN&lt;/strong&gt; when viewing the function in the AWS console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3g92peuko8bbn4vwm75i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3g92peuko8bbn4vwm75i.png" alt="Lambda ARN" width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under the &lt;strong&gt;Artifacts&lt;/strong&gt; section for the service definition, provide the artifact details to use for the lambda deployment:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Artifact Source Identifier: &lt;code&gt;ccapprovaldeploy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Region: YOUR_AWS_REGION&lt;/li&gt;
&lt;li&gt;Image Path: &lt;code&gt;ccapproval-deploy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Value - Tag: &lt;code&gt;&amp;lt;+input&amp;gt;&lt;/code&gt; (&lt;a href="https://dev.to/docs/platform/variables-and-expressions/runtime-inputs"&gt;runtime input&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Create environment and infrastructure definitions for the Lambda deployment. On the &lt;strong&gt;Deploy&lt;/strong&gt; stage's &lt;strong&gt;Environment&lt;/strong&gt; tab, select &lt;strong&gt;New Environment&lt;/strong&gt;, and use the following environment configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   name: lambda-env
   type: PreProduction
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;From the &lt;code&gt;lambda-env&lt;/code&gt; environment, go to the &lt;strong&gt;Infrastructure Definitions&lt;/strong&gt; tab, and add an infrastructure definition with the following configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   name: `aws-lambda-infra`
   deploymentType: `AwsLambda`
   type: AwsLambda
     spec:
       connectorRef: `mlopsawsconnector`
       region: YOUR_AWS_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Save&lt;/strong&gt; to save the infrastructure definition.&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Deploy&lt;/strong&gt; stage's &lt;strong&gt;Execution&lt;/strong&gt; tab, add an &lt;strong&gt;AWS Lambda Deploy&lt;/strong&gt; step named &lt;code&gt;Deploy Aws Lambda&lt;/code&gt;. No other configuration is necessary.&lt;/li&gt;
&lt;li&gt;Save and run the pipeline. For &lt;strong&gt;Git Branch&lt;/strong&gt;, enter &lt;code&gt;main&lt;/code&gt;, and for &lt;strong&gt;Tag&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;, and then select &lt;strong&gt;Run Pipeline&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You need to provide the image tag value because the service definition's &lt;strong&gt;Tag&lt;/strong&gt; setting uses runtime input (&lt;code&gt;&amp;lt;+input&amp;gt;&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;While the pipeline runs, the deployment logs show the Lambda function being updated with the latest artifact that was built and pushed earlier in the same pipeline.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Test the response from the lambda function.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;In your AWS console, go to &lt;strong&gt;AWS Lambda&lt;/strong&gt;, select &lt;strong&gt;Functions&lt;/strong&gt;, and select your &lt;code&gt;creditcardapplicationlambda&lt;/code&gt; function.&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Test&lt;/strong&gt; tab, select &lt;strong&gt;Create new event&lt;/strong&gt;, and create an event named &lt;code&gt;testmodel&lt;/code&gt; with the following JSON:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   {
     "Num_Children": 2,
     "Income": 500000,
     "Own_Car": 1,
     "Own_Housing": 1
   }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Test&lt;/strong&gt; to execute the function with your &lt;code&gt;testmodel&lt;/code&gt; test event. Once the function finishes execution, you'll get the result with a &lt;strong&gt;Function URL&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6czh3ykiknruwn9gynri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6czh3ykiknruwn9gynri.png" alt="Test Lambda Function" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Note the &lt;strong&gt;Function URL&lt;/strong&gt; resulting from the lambda function test. This is the endpoint that your ML web application would call. Depending on the prediction of &lt;code&gt;0&lt;/code&gt; or &lt;code&gt;1&lt;/code&gt;, the web application either approves or denies the demo credit card application.&lt;/li&gt;
&lt;/ol&gt;
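&lt;p&gt;To see how a client might call this endpoint, here is a hedged sketch that builds the HTTPS request for a Lambda Function URL. The URL below is a placeholder, and the actual network call is commented out so nothing is sent:&lt;/p&gt;

```python
# Sketch: how a client could call the Lambda Function URL.
# FUNCTION_URL is a placeholder -- substitute the URL from your test result.
import json
import urllib.request

FUNCTION_URL = "https://EXAMPLE.lambda-url.AWS_REGION.on.aws/"

# The same test event used in the AWS console above:
event = {"Num_Children": 2, "Income": 500000, "Own_Car": 1, "Own_Housing": 1}

request = urllib.request.Request(
    FUNCTION_URL,
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to actually invoke the function:
# with urllib.request.urlopen(request) as response:
#     prediction = json.loads(response.read())  # 0 (deny) or 1 (approve)

print(request.method, request.full_url)
```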

&lt;h2&gt;
  
  
  Monitor the model
&lt;/h2&gt;

&lt;p&gt;There are many ways to monitor ML models. In this tutorial, you'll monitor whether the model was recently updated. If it hasn't been, Harness sends an email alerting you that the model might be stale.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Edit your MLOps pipeline, and add a &lt;strong&gt;Build&lt;/strong&gt; stage after the &lt;strong&gt;Deploy&lt;/strong&gt; stage.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Monitor Model stage&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Disable&lt;/em&gt; &lt;strong&gt;Clone Codebase&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Infrastructure&lt;/strong&gt; tab, select &lt;strong&gt;Propagate from existing stage&lt;/strong&gt; and select the first &lt;strong&gt;Build&lt;/strong&gt; stage.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Add a &lt;strong&gt;Run&lt;/strong&gt; step to find out when the model was last updated.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Monitor Model step&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Shell&lt;/strong&gt;, select &lt;strong&gt;Sh&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Command&lt;/strong&gt;, enter:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   # GitHub repository owner
   OWNER="YOUR_GITHUB_USERNAME"

   # GitHub repository name
   REPO="mlops-creditcard-approval-model"

   # Path to the file you want to check (relative to the repository root)
   FILE_PATH="credit_card_approval.ipynb"

   # GitHub Personal Access Token (PAT)
   TOKEN=&amp;lt;+secrets.getValue("git_pat")&amp;gt;

   # GitHub API URL
   API_URL="https://api.github.com/repos/$OWNER/$REPO/commits?path=$FILE_PATH&amp;amp;per_page=1"

   # Get the current date
   CURRENT_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

   # Calculate the date 7 days ago
   SEVEN_DAYS_AGO=$(date -u -d "7 days ago" +"%Y-%m-%dT%H:%M:%SZ")

   # Get the latest commit date for the file
   LATEST_COMMIT_DATE=$(curl -s -H "Authorization: token $TOKEN" $API_URL | jq -r '.[0].commit.committer.date')

   # Check if the file has been updated in the last 7 days
   if [ "$(date -d "$LATEST_COMMIT_DATE" +%s)" -lt "$(date -d "$SEVEN_DAYS_AGO" +%s)" ]; then
       export model_stale=true
   else
       export model_stale=false
   fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
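&lt;p&gt;The date comparison in the step above boils down to one rule: the model is stale if the file's latest commit is more than seven days old. A small Python sketch of the same rule, using timestamps in the GitHub API's ISO-8601 format:&lt;/p&gt;

```python
# Sketch: same staleness rule as the shell step above --
# stale if the latest commit is more than 7 days before "now".
from datetime import datetime, timedelta, timezone

def is_model_stale(latest_commit_iso, now=None, max_age_days=7):
    now = now or datetime.now(timezone.utc)
    latest = datetime.strptime(
        latest_commit_iso, "%Y-%m-%dT%H:%M:%SZ"
    ).replace(tzinfo=timezone.utc)
    return latest < now - timedelta(days=max_age_days)

# A commit 10 days before "now" is stale; one from 2 days ago is not.
now = datetime(2025, 1, 23, tzinfo=timezone.utc)
print(is_model_stale("2025-01-13T00:00:00Z", now=now))  # True
print(is_model_stale("2025-01-21T00:00:00Z", now=now))  # False
```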



&lt;ul&gt;
&lt;li&gt;Under &lt;strong&gt;Optional Configuration&lt;/strong&gt;, add &lt;code&gt;model_stale&lt;/code&gt; to &lt;strong&gt;Output Variables&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;After the &lt;code&gt;Monitor Model&lt;/code&gt; stage, add a &lt;strong&gt;Custom&lt;/strong&gt; stage named &lt;code&gt;Email notification&lt;/code&gt;. This stage will send the email notification if the model is stale.&lt;/li&gt;
&lt;li&gt;Add an &lt;strong&gt;Email&lt;/strong&gt; step to the last &lt;strong&gt;Custom&lt;/strong&gt; stage.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Email&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Timeout&lt;/strong&gt;, enter &lt;code&gt;10m&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;To&lt;/strong&gt;, enter the email address to receive the notification, such as the email address for your Harness account.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Subject&lt;/strong&gt;, enter &lt;code&gt;Credit card approval ML model has not been updated in a week.&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Body&lt;/strong&gt;, enter &lt;code&gt;It has been 7 days since the credit card approval ML model was updated. Please update the model.&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the step's &lt;strong&gt;Advanced&lt;/strong&gt; tab, add a &lt;a href="https://dev.to/docs/platform/pipelines/step-skip-condition-settings"&gt;conditional execution&lt;/a&gt; so the &lt;strong&gt;Email&lt;/strong&gt; step only runs if the &lt;code&gt;model_stale&lt;/code&gt; variable (from the &lt;code&gt;Monitor Model&lt;/code&gt; step) is &lt;code&gt;true&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Execute this step&lt;/strong&gt;, select &lt;strong&gt;If the stage executes successfully up to this point&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;And execute this step only if the following JEXL Condition evaluates to true&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enter the following JEXL condition:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; &amp;lt;+pipeline.stages.Monitor_Model_Stage.spec.execution.steps.Monitor_Model_Step.output.outputVariables.model_stale&amp;gt; == true
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
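
&lt;p&gt;For reference, the configured &lt;strong&gt;Email&lt;/strong&gt; step looks roughly like the following sketch in the pipeline YAML. The stage and step identifiers (&lt;code&gt;Monitor_Model_Stage&lt;/code&gt;, &lt;code&gt;Monitor_Model_Step&lt;/code&gt;) and the recipient address are assumptions based on this tutorial's setup; Harness generates identifiers from the names you enter, so adjust them to match your pipeline.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
- step:
    type: Email
    name: Email
    identifier: Email
    timeout: 10m
    spec:
      to: you@example.com  # replace with your email address
      subject: Credit card approval ML model has not been updated in a week.
      body: It has been 7 days since the credit card approval ML model was updated. Please update the model.
    when:
      stageStatus: Success
      condition: &amp;lt;+pipeline.stages.Monitor_Model_Stage.spec.execution.steps.Monitor_Model_Step.output.outputVariables.model_stale&amp;gt; == true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;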

&lt;ol&gt;
&lt;li&gt;Save and run the pipeline. For &lt;strong&gt;Git Branch&lt;/strong&gt;, enter &lt;code&gt;main&lt;/code&gt;, and for &lt;strong&gt;Tag&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Trigger pipeline based on Git events
&lt;/h2&gt;

&lt;p&gt;So far, this tutorial used manually triggered builds. However, as the number of builds and pipeline executions grows, manually triggering builds doesn't scale. In this part of the tutorial, you'll add a &lt;a href="https://dev.to/docs/platform/triggers/triggering-pipelines"&gt;Git event trigger&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Assume your team has a specific requirement where they want the MLOps pipeline to run &lt;em&gt;only&lt;/em&gt; if there's an update to the Jupyter notebook in the codebase.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In your MLOps pipeline, select &lt;strong&gt;Triggers&lt;/strong&gt; at the top of the Pipeline Studio, and then select &lt;strong&gt;New Trigger&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select the GitHub webhook trigger.&lt;/li&gt;
&lt;li&gt;On the trigger's &lt;strong&gt;Configuration&lt;/strong&gt; tab:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;trigger_on_notebook_update&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Connector&lt;/strong&gt;, select your &lt;code&gt;mlopsgithubconnector&lt;/code&gt; GitHub connector.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Repository URL&lt;/strong&gt; should automatically populate.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Event&lt;/strong&gt;, select &lt;strong&gt;Push&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Continue&lt;/strong&gt; to go to the &lt;strong&gt;Conditions&lt;/strong&gt; tab.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Branch Name&lt;/strong&gt;, select the &lt;strong&gt;Equals&lt;/strong&gt; operator, and enter &lt;code&gt;main&lt;/code&gt; for the &lt;strong&gt;Matches Value&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Changed Files&lt;/strong&gt;, select the &lt;strong&gt;Equals&lt;/strong&gt; operator, and enter &lt;code&gt;credit_card_approval.ipynb&lt;/code&gt; for the &lt;strong&gt;Matches Value&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Continue&lt;/strong&gt; to go to the &lt;strong&gt;Pipeline Input&lt;/strong&gt; tab.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Git Branch&lt;/strong&gt; should automatically populate.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Primary Artifact&lt;/strong&gt;, enter &lt;code&gt;ccapprovaldeploy&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Tag&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Create Trigger&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
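
&lt;p&gt;For reference, the trigger definition created by the steps above looks approximately like this YAML sketch. It is abbreviated (a full trigger also includes the pipeline identifier and input values), and the connector reference reflects this tutorial's setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
trigger:
  name: trigger_on_notebook_update
  identifier: trigger_on_notebook_update
  source:
    type: Webhook
    spec:
      type: Github
      spec:
        type: Push
        spec:
          connectorRef: mlopsgithubconnector
          payloadConditions:
            # Run only for pushes to main...
            - key: targetBranch
              operator: Equals
              value: main
            # ...that change the Jupyter notebook.
            - key: changedFiles
              operator: Equals
              value: credit_card_approval.ipynb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;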

&lt;p&gt;The trigger webhook should automatically register in your GitHub repository. If it doesn't, you'll need to manually register the webhook:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the list of triggers in Harness, select the &lt;strong&gt;Link&lt;/strong&gt; icon to copy the webhook URL for the trigger.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhamzqctpkzcp3ajyqih7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhamzqctpkzcp3ajyqih7.png" alt="Webhook URL" width="800" height="80"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In your GitHub repository, navigate to &lt;strong&gt;Settings&lt;/strong&gt;, select &lt;strong&gt;Webhooks&lt;/strong&gt;, and then select &lt;strong&gt;Add webhook&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Paste the webhook URL in &lt;strong&gt;Payload URL&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Set the &lt;strong&gt;Content type&lt;/strong&gt; to &lt;code&gt;application/json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Add webhook&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A green checkmark in the GitHub webhooks list indicates that the webhook connected successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53llvfv3439ir2fagcvo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53llvfv3439ir2fagcvo.png" alt="Webhook Success" width="800" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the trigger in place, the MLOps pipeline runs whenever you push a change to the &lt;code&gt;credit_card_approval.ipynb&lt;/code&gt; file on the &lt;code&gt;main&lt;/code&gt; branch. To use a Git event trigger in a live development or production scenario, you can adjust or remove the trigger's &lt;strong&gt;Conditions&lt;/strong&gt; (branch name, changed files, and so on) to suit your requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add an approval gate before prod deployment
&lt;/h2&gt;

&lt;p&gt;Your organization might require an approval gate for your CI/CD pipeline before an artifact is deployed to production. Harness offers built-in approval steps for Jira, ServiceNow, or Harness approvals.&lt;/p&gt;

&lt;p&gt;Assume that you have a different image for production, and a different AWS Lambda function is deployed based on that container image. In your MLOps pipeline, you can create another &lt;code&gt;AWS Lambda deployment&lt;/code&gt; stage with another &lt;code&gt;AWS Lambda deploy&lt;/code&gt; step for the production environment and use the approval gate prior to running that production deployment stage.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To add the approval gate, add an &lt;strong&gt;Approval&lt;/strong&gt; stage immediately prior to the &lt;strong&gt;Deploy&lt;/strong&gt; stage that requires approval.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;approval-to-prod&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Approval Type&lt;/strong&gt;, select &lt;strong&gt;Harness Approval&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Add an &lt;strong&gt;Approval&lt;/strong&gt; step to the &lt;strong&gt;Approval&lt;/strong&gt; stage.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;approval-to-prod&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Timeout&lt;/strong&gt;, enter &lt;code&gt;1d&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Use the default &lt;strong&gt;Message&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;User Groups&lt;/strong&gt;, select &lt;strong&gt;Select User Groups&lt;/strong&gt;, select &lt;strong&gt;Project&lt;/strong&gt;, and select &lt;strong&gt;All Project Users&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Save the pipeline.&lt;/li&gt;
&lt;/ol&gt;
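
&lt;p&gt;In YAML, the approval stage configured above looks approximately like this sketch. The identifiers and the &lt;code&gt;_project_all_users&lt;/code&gt; user group reference are assumptions based on this tutorial's selections; verify them against the YAML view of your own pipeline:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
- stage:
    name: approval-to-prod
    identifier: approvaltoprod
    type: Approval
    spec:
      execution:
        steps:
          - step:
              type: HarnessApproval
              name: approval-to-prod
              identifier: approvaltoprod
              timeout: 1d  # wait up to one day for an approver
              spec:
                approvalMessage: Please review the following information and approve the pipeline progression
                includePipelineExecutionHistory: true
                approvers:
                  userGroups:
                    - _project_all_users  # "All Project Users" group
                  minimumCount: 1
                  disallowPipelineExecutor: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;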

&lt;p&gt;The next time you run the pipeline, someone from the Harness project must approve the promotion of the artifact to the production environment before the final &lt;strong&gt;Deploy&lt;/strong&gt; stage runs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use the model in a web application
&lt;/h2&gt;

&lt;p&gt;In a live MLOps scenario, the ML model would likely power a web application. While this app development is outside the scope of this tutorial, check out &lt;a href="https://developer.harness.io/docs/continuous-integration/development-guides/mlops/e2e-mlops-tutorial/#use-the-model-in-a-web-application"&gt;this animation&lt;/a&gt; that demonstrates a simple web application developed using plain HTML/CSS/JS. The outcome of the credit card application uses the response from the public AWS Lambda function URL invocation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! Here's what you've accomplished in this tutorial:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[x] Build and push an image for this project.&lt;/li&gt;
&lt;li&gt;[x] Run security scans on the container image.&lt;/li&gt;
&lt;li&gt;[x] Upload model visualization data to S3.&lt;/li&gt;
&lt;li&gt;[x] Publish model visualization data within the pipeline.&lt;/li&gt;
&lt;li&gt;[x] Run tests on the model to determine accuracy and fairness scores.&lt;/li&gt;
&lt;li&gt;[x] Based on those scores, use Open Policy Agent (OPA) policies to either approve or deny the model.&lt;/li&gt;
&lt;li&gt;[x] Deploy the model.&lt;/li&gt;
&lt;li&gt;[x] Monitor the model and ensure the model is not outdated.&lt;/li&gt;
&lt;li&gt;[x] Trigger the pipeline based on certain Git events.&lt;/li&gt;
&lt;li&gt;[x] (Optional) Add approval gates for production deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that you've built an MLOps pipeline on Harness and used the Harness platform to train the model, check out the following guides to learn how you can integrate other popular ML tools and platforms into your Harness CI/CD pipelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.harness.io/docs/continuous-integration/development-guides/mlops/mlops-sagemaker"&gt;AWS SageMaker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.harness.io/docs/continuous-integration/development-guides/mlops/mlops-databricks"&gt;Databricks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.harness.io/docs/continuous-integration/development-guides/mlops/mlops-vertexai"&gt;Google Vertex AI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.harness.io/docs/continuous-integration/development-guides/mlops/mlops-azureml"&gt;Azure ML&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.harness.io/docs/continuous-integration/development-guides/mlops/mlops-mlflow"&gt;MLflow&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>mlops</category>
      <category>cicd</category>
      <category>aws</category>
      <category>harness</category>
    </item>
    <item>
      <title>React App and Build Pipeline with Task and Gitness</title>
      <dc:creator>Jim Sheldon</dc:creator>
      <pubDate>Fri, 05 Apr 2024 14:01:13 +0000</pubDate>
      <link>https://forem.com/harness/react-app-and-build-pipeline-with-task-and-gitness-178l</link>
      <guid>https://forem.com/harness/react-app-and-build-pipeline-with-task-and-gitness-178l</guid>
      <description>&lt;p&gt;&lt;a href="https://taskfile.dev/" rel="noopener noreferrer"&gt;Task&lt;/a&gt; is a task runner/automation tool written in &lt;a href="https://go.dev/" rel="noopener noreferrer"&gt;Go&lt;/a&gt; and distributed as a single binary.&lt;/p&gt;

&lt;p&gt;A Taskfile is written in YAML, which may provide a shorter learning curve than alternative tools, like &lt;a href="https://www.gnu.org/software/make/" rel="noopener noreferrer"&gt;GNU Make&lt;/a&gt;. Task can also leverage Go's robust &lt;a href="https://taskfile.dev/usage/#gos-template-engine" rel="noopener noreferrer"&gt;template engine&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Task can be useful in build and test pipelines. Logic for running builds and tests can be written once in &lt;code&gt;Taskfile.yml&lt;/code&gt; (or other &lt;a href="https://taskfile.dev/usage/#supported-file-names" rel="noopener noreferrer"&gt;supported filenames&lt;/a&gt;), then used in both local development workflows and under automation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gitness.com?utm_source=dev.to&amp;amp;utm_content=blog&amp;amp;utm_medium=blog&amp;amp;utm_term=gitness-task-in-ci"&gt;Gitness&lt;/a&gt; is an open source development platform from &lt;a href="https://www.harness.io?utm_source=dev.to&amp;amp;utm_content=blog&amp;amp;utm_medium=blog&amp;amp;utm_term=gitness-task-in-ci"&gt;Harness&lt;/a&gt; that hosts your source code repositories and runs your software development lifecycle pipelines.&lt;/p&gt;

&lt;p&gt;In this guide, we'll craft a &lt;code&gt;Taskfile.yml&lt;/code&gt; for a sample &lt;a href="https://react.dev/" rel="noopener noreferrer"&gt;React&lt;/a&gt; app, suitable for both local development and integration with your pipelines powered by Gitness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://taskfile.dev/installation/" rel="noopener noreferrer"&gt;Install Task&lt;/a&gt; and verify you can run &lt;code&gt;task&lt;/code&gt; in your terminal.&lt;/p&gt;

&lt;p&gt;This guide was tested with Task version 3.35.1.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ task --version
Task version: v3.35.1 (h1:zjQ3tLv+LIStDDTzOQx8F97NE/8FSTanjZuwgy/hwro=)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://www.docker.com/products/docker-desktop/" rel="noopener noreferrer"&gt;Install Docker&lt;/a&gt; and verify you can run &lt;code&gt;docker&lt;/code&gt; in your terminal.&lt;/p&gt;

&lt;p&gt;This guide was tested with Docker version 24.0.7.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ docker --version
Docker version 24.0.7, build afdd53b


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://nodejs.org/en/download" rel="noopener noreferrer"&gt;Install Node&lt;/a&gt; and verify you can run &lt;code&gt;node&lt;/code&gt; in your terminal.&lt;/p&gt;

&lt;p&gt;This guide was tested with Node version 20.12.0.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ node --version
v20.12.0


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;ℹ️ Note&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Installing Node also installs &lt;code&gt;npm&lt;/code&gt; and &lt;code&gt;npx&lt;/code&gt; binaries.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Create React Application
&lt;/h2&gt;

&lt;p&gt;Use the &lt;a href="https://create-react-app.dev/" rel="noopener noreferrer"&gt;Create React App&lt;/a&gt; project to automatically generate the application. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

npx create-react-app my-react-app


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Your &lt;code&gt;my-react-app&lt;/code&gt; directory structure will look like this.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

my-react-app/
├── README.md
├── node_modules/
├── package-lock.json
├── package.json
├── public/
│   ├── favicon.ico
│   ├── index.html
│   ├── logo192.png
│   ├── logo512.png
│   ├── manifest.json
│   └── robots.txt
└── src/
    ├── App.css
    ├── App.js
    ├── App.test.js
    ├── index.css
    ├── index.js
    ├── logo.svg
    ├── reportWebVitals.js
    └── setupTests.js


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Change to the application directory.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cd my-react-app


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Start the application.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

npm start


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Open &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt; in your browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F854iu180bjjtd05fn17u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F854iu180bjjtd05fn17u.png" alt="Sample React application running in a browser"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You now have a working React application! 🎉&lt;/p&gt;

&lt;p&gt;In your terminal, press Ctrl+C to stop the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create &lt;code&gt;Taskfile.yml&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;Taskfile.yml&lt;/code&gt; file in your &lt;code&gt;my-react-app&lt;/code&gt; directory with this configuration.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;

&lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

  &lt;span class="na"&gt;npm-install&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cmds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;npm install&lt;/span&gt;

  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;deps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;npm-install&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;cmds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;npm run build&lt;/span&gt;

  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;deps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;npm-install&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;cmds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;npm test&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;build&lt;/code&gt; task runs &lt;code&gt;npm run build&lt;/code&gt;, which creates a &lt;a href="https://create-react-app.dev/docs/production-build/" rel="noopener noreferrer"&gt;production build&lt;/a&gt; of your application.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;test&lt;/code&gt; task runs &lt;code&gt;npm test&lt;/code&gt;, which runs &lt;a href="https://create-react-app.dev/docs/running-tests" rel="noopener noreferrer"&gt;unit tests&lt;/a&gt;. There is only one unit test in &lt;code&gt;src/App.test.js&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Note that &lt;code&gt;npm-install&lt;/code&gt; task is a &lt;a href="https://taskfile.dev/usage/#task-dependencies" rel="noopener noreferrer"&gt;dependency&lt;/a&gt; of the &lt;code&gt;build&lt;/code&gt; and &lt;code&gt;test&lt;/code&gt; tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build and Test
&lt;/h2&gt;

&lt;p&gt;Run &lt;code&gt;task build&lt;/code&gt; in your terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kc7r49oy6fex39pjrt3.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kc7r49oy6fex39pjrt3.gif" alt="Animated gif of the build task output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By default, &lt;code&gt;npm test&lt;/code&gt; will run tests interactively. To simulate running tests under &lt;a href="https://create-react-app.dev/docs/running-tests/#continuous-integration" rel="noopener noreferrer"&gt;Continuous Integration&lt;/a&gt;, set the CI environment variable.&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;CI=true task test&lt;/code&gt; in your terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj554xflkjr2d3r1g26nh.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj554xflkjr2d3r1g26nh.gif" alt="Animated gif of the test task output"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that &lt;code&gt;task&lt;/code&gt; passed the &lt;code&gt;CI=true&lt;/code&gt; &lt;a href="https://taskfile.dev/usage/#variables" rel="noopener noreferrer"&gt;variable&lt;/a&gt; to the &lt;code&gt;npm test&lt;/code&gt; command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Gitness
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.gitness.com/#install-gitness" rel="noopener noreferrer"&gt;Install Gitness&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.gitness.com/#create-a-project" rel="noopener noreferrer"&gt;Create a project&lt;/a&gt; named &lt;code&gt;demo&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Either &lt;a href="https://docs.gitness.com/#create-a-repository" rel="noopener noreferrer"&gt;create a repository&lt;/a&gt; named &lt;code&gt;my-react-app&lt;/code&gt; and push your sample app code, or &lt;a href="https://docs.gitness.com/repositories/overview#import-a-repository" rel="noopener noreferrer"&gt;import&lt;/a&gt; the repository &lt;a href="https://github.com/jimsheldon/my-react-app" rel="noopener noreferrer"&gt;jimsheldon/my-react-app&lt;/a&gt;, which I created for this guide&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Create Pipeline
&lt;/h2&gt;

&lt;p&gt;Open &lt;a href="http://localhost:3000/demo/my-react-app/pipelines" rel="noopener noreferrer"&gt;http://localhost:3000/demo/my-react-app/pipelines&lt;/a&gt; in your browser and select &lt;strong&gt;New Pipeline&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Enter &lt;code&gt;build-and-test&lt;/code&gt; in the &lt;strong&gt;Name&lt;/strong&gt; field (this will automatically populate the &lt;strong&gt;YAML Path&lt;/strong&gt; field), then select &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkycohke003skh6ghhbi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkycohke003skh6ghhbi.png" alt="Create Gitness pipeline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter this Gitness &lt;a href="https://docs.gitness.com/category/pipelines" rel="noopener noreferrer"&gt;pipeline&lt;/a&gt; in the YAML editor.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pipeline&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;task example&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ci&lt;/span&gt;
      &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;install task&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;run&lt;/span&gt;
            &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;alpine&lt;/span&gt;
              &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
                &lt;span class="s"&gt;apk add curl&lt;/span&gt;
                &lt;span class="s"&gt;sh -c "$(curl --location https://taskfile.dev/install.sh)" -- -d&lt;/span&gt;
                &lt;span class="s"&gt;./bin/task --version&lt;/span&gt;

          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;run&lt;/span&gt;
            &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node:20&lt;/span&gt;
              &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
                &lt;span class="s"&gt;./bin/task build&lt;/span&gt;

          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;run&lt;/span&gt;
            &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node:20&lt;/span&gt;
              &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
                &lt;span class="s"&gt;./bin/task test&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This Gitness pipeline has a single stage with three steps. The &lt;code&gt;install task&lt;/code&gt; step uses the &lt;code&gt;alpine&lt;/code&gt; Docker image to &lt;a href="https://taskfile.dev/installation/#install-script" rel="noopener noreferrer"&gt;install&lt;/a&gt; the &lt;code&gt;task&lt;/code&gt; binary into the workspace, where it can be reused by the following &lt;code&gt;build&lt;/code&gt; and &lt;code&gt;test&lt;/code&gt; steps, which run in the &lt;code&gt;node&lt;/code&gt; Docker image.&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;Save and Run&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzc6vpd8qmyheshdie1mf.png" alt="Gitness pipeline yaml editor"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Observe the pipeline execution.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dkkplxjhaz1sq2ixk5v.png" alt="Gitness pipeline execution"&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This workflow improves the &lt;a href="https://en.wikipedia.org/wiki/Separation_of_concerns" rel="noopener noreferrer"&gt;separation of concerns&lt;/a&gt; between local development and pipeline execution.&lt;/p&gt;

&lt;p&gt;As long as the application can be built and tested with &lt;code&gt;task build&lt;/code&gt; and &lt;code&gt;task test&lt;/code&gt; commands, developers always keep the same workflow, and &lt;strong&gt;the pipeline does not need to be modified&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For example, the developers could &lt;a href="https://classic.yarnpkg.com/lang/en/docs/migrating-from-npm/" rel="noopener noreferrer"&gt;switch to yarn&lt;/a&gt; by changing &lt;code&gt;Taskfile.yml&lt;/code&gt; to use &lt;code&gt;yarn&lt;/code&gt; commands rather than &lt;code&gt;npm&lt;/code&gt; commands. This change would be transparent to developers, who would continue to run &lt;code&gt;task build&lt;/code&gt; and &lt;code&gt;task test&lt;/code&gt;, and the Gitness pipeline yaml would not need to be changed.&lt;/p&gt;
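<p></p>
&lt;p&gt;A yarn-based &lt;code&gt;Taskfile.yml&lt;/code&gt; might look like this sketch (assuming yarn's default commands, which run the same &lt;code&gt;package.json&lt;/code&gt; scripts as their npm counterparts):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
version: '3'

tasks:

  yarn-install:
    cmds:
      - yarn install

  build:
    deps: [yarn-install]  # runs the "build" script from package.json
    cmds:
      - yarn build

  test:
    deps: [yarn-install]  # runs the "test" script from package.json
    cmds:
      - yarn test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;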

&lt;p&gt;Next steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;See more examples of using &lt;a href="https://docs.gitness.com/pipelines/samples/task" rel="noopener noreferrer"&gt;Task with Gitness&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create &lt;a href="https://docs.gitness.com/pipelines/triggers" rel="noopener noreferrer"&gt;triggers&lt;/a&gt; to automatically run the pipeline when commits are pushed to the repository&lt;/li&gt;
&lt;li&gt;Learn how Gitness &lt;a href="https://docs.gitness.com/installation/data" rel="noopener noreferrer"&gt;manages your data&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Learn how to manage Gitness &lt;a href="https://docs.gitness.com/administration/project-management" rel="noopener noreferrer"&gt;projects and roles&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Build and Push to GAR and Deploy to GKE - End-to-End CI/CD Pipeline</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Tue, 02 Jan 2024 20:50:07 +0000</pubDate>
      <link>https://forem.com/harness/build-and-push-to-gar-and-deploy-to-gke-end-to-end-cicd-pipeline-182i</link>
      <guid>https://forem.com/harness/build-and-push-to-gar-and-deploy-to-gke-end-to-end-cicd-pipeline-182i</guid>
      <description>&lt;p&gt;In this tutorial, you'll explore how to build a streamlined CI/CD pipeline using the Harness Platform, integrating the robust services of Google Artifact Registry (GAR) and Google Kubernetes Engine (GKE). GAR excels in managing and storing container images securely, while GKE offers a scalable environment for container deployment. The Harness Platform serves as a powerful orchestrator, simplifying the build and push process to GAR and managing complex deployments in GKE. You'll also cover implementing crucial approval steps for enhanced security and setting up Slack notifications for real-time updates, showcasing how these tools together facilitate a robust, streamlined CI/CD process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architectural and Pipeline Diagrams
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v47UIdnZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/48aumx0emm0qsax47nol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v47UIdnZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/48aumx0emm0qsax47nol.png" alt="Excalidraw Architectural Diagram" width="800" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lN2Jya2G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tui7smssy7fk9n0edxrk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lN2Jya2G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tui7smssy7fk9n0edxrk.png" alt="Complete Pipeline in Harness Pipeline Editor" width="800" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A Harness free plan. If you don't have one, &lt;a href="https://app.harness.io/auth/#/signup/?&amp;amp;utm_campaign=cd-devrel"&gt;sign up for free&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;A GitHub account.&lt;/li&gt;
&lt;li&gt;A Docker Hub account.&lt;/li&gt;
&lt;li&gt;A GCP account with permissions for Google Artifact Registry and Kubernetes Engine.&lt;/li&gt;
&lt;li&gt;Access to a Slack workspace and permissions to create a Slack app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There’s a bonus section in this tutorial where you’ll run security tests during the build process and create a policy for the deployment process. To follow this section, you’ll need a Harness enterprise account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Required Setup and Configurations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Demo application
&lt;/h3&gt;

&lt;p&gt;You can either &lt;a href="https://github.com/harness-community/captain-canary-adventure-app/fork"&gt;fork Captain Canary Adventure (CCA) App&lt;/a&gt; or bring your own application (as long as it has a Dockerfile).&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📝&lt;/th&gt;
&lt;th&gt;This tutorial assumes the use of a fork of the CCA App. If you are using your own app, make the necessary changes.&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  GitHub and Docker authentication
&lt;/h3&gt;

&lt;p&gt;Create a &lt;a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens"&gt;GitHub personal access token (PAT)&lt;/a&gt; that will have read access to the demo application repository. Create a &lt;a href="https://docs.docker.com/security/for-developers/access-tokens/"&gt;Docker access token&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Image registry setup on Google Cloud Platform (GCP):
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://console.cloud.google.com/apis/api/artifactregistry.googleapis.com"&gt;Enable the Artifact Registry API&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;From &lt;a href="https://console.cloud.google.com/artifacts"&gt;Artifact Registry&lt;/a&gt;, click &lt;strong&gt;+ CREATE REPOSITORY&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Give this repository a name (&lt;code&gt;cca-registry&lt;/code&gt;), choose &lt;code&gt;Docker&lt;/code&gt; as the format, &lt;code&gt;Standard&lt;/code&gt; as the mode, and &lt;code&gt;Region&lt;/code&gt; as the location type (choose a region near you). Select the &lt;code&gt;Google-managed encryption&lt;/code&gt; key for encryption, keep &lt;code&gt;Dry Run&lt;/code&gt; selected, and click &lt;strong&gt;CREATE&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
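&lt;p&gt;If you prefer the command line, the same repository can be created with the &lt;code&gt;gcloud&lt;/code&gt; CLI. Here's a sketch; the repository name and region are the example values used in this tutorial, so adjust them to your own choices:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create a Docker-format Artifact Registry repository in a region near you
gcloud artifacts repositories create cca-registry \
  --repository-format=docker \
  --location=northamerica-northeast1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;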

&lt;h3&gt;
  
  
  Kubernetes cluster setup with GKE:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://console.cloud.google.com/apis/api/container.googleapis.com"&gt;Enable Kubernetes Engine API&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://console.cloud.google.com/kubernetes/auto/add"&gt;Create a GKE (autopilot) cluster&lt;/a&gt; by selecting a region near you.&lt;/li&gt;
&lt;/ol&gt;
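&lt;p&gt;Equivalently, assuming you have the &lt;code&gt;gcloud&lt;/code&gt; CLI configured, you can create the Autopilot cluster and fetch its credentials for &lt;code&gt;kubectl&lt;/code&gt; from the terminal (the cluster name and region below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create a GKE Autopilot cluster
gcloud container clusters create-auto cca-cluster \
  --region=northamerica-northeast1

# Point kubectl at the new cluster
gcloud container clusters get-credentials cca-cluster \
  --region=northamerica-northeast1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;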

&lt;h3&gt;
  
  
  GCP IAM and Service Account setup:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://console.cloud.google.com/iam-admin/serviceaccounts"&gt;Create a GCP Service Account&lt;/a&gt;. Copy the email address generated for this service account. It will be in this format: &lt;code&gt;SERVICE_ACCOUNT_NAME@GCP_PROJECT_NAME.iam.gserviceaccount.com&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Navigate to the artifact registry repository you created earlier, select it, and click on the &lt;strong&gt;Permissions&lt;/strong&gt; tab. You need to grant two types of access to this repository: public read access and fine-grained write access. Click &lt;strong&gt;+ ADD PRINCIPAL&lt;/strong&gt; from the &lt;strong&gt;Permissions&lt;/strong&gt; tab and paste in the email address of the service account you copied previously. Assign the &lt;code&gt;Artifact Registry Writer&lt;/code&gt; role to this principal. 
Next, click &lt;strong&gt;+ ADD PRINCIPAL&lt;/strong&gt; again, type in &lt;code&gt;allUsers&lt;/code&gt; for the principal, and choose &lt;code&gt;Artifact Registry Reader&lt;/code&gt; for the role. You might see a warning like this: &lt;em&gt;“This resource is public and can be accessed by anyone on the internet.”&lt;/em&gt; &lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;IAM &amp;amp; Admin&lt;/strong&gt;, locate the service account, select it, and then click on &lt;strong&gt;ADD KEY&lt;/strong&gt; → &lt;strong&gt;Create new key&lt;/strong&gt;. Choose the JSON format, and a key for your service account will be downloaded to your computer. Exercise caution and refrain from sharing this key with anyone; treat it as you would a password. &lt;/li&gt;
&lt;/ol&gt;
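&lt;p&gt;The three console steps above can also be scripted with &lt;code&gt;gcloud&lt;/code&gt;. This is a sketch; &lt;code&gt;cca-sa&lt;/code&gt; and &lt;code&gt;GCP_PROJECT_NAME&lt;/code&gt; are placeholders for your own service account name and project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# 1. Create the service account
gcloud iam service-accounts create cca-sa

# 2a. Grant the service account fine-grained write access to the repository
gcloud artifacts repositories add-iam-policy-binding cca-registry \
  --location=northamerica-northeast1 \
  --member="serviceAccount:cca-sa@GCP_PROJECT_NAME.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"

# 2b. Grant public read access (this makes the repository readable by anyone)
gcloud artifacts repositories add-iam-policy-binding cca-registry \
  --location=northamerica-northeast1 \
  --member="allUsers" \
  --role="roles/artifactregistry.reader"

# 3. Download a JSON key for the service account; treat it like a password
gcloud iam service-accounts keys create key.json \
  --iam-account="cca-sa@GCP_PROJECT_NAME.iam.gserviceaccount.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;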

&lt;h3&gt;
  
  
  Slack workspace and app setup:
&lt;/h3&gt;

&lt;p&gt;To create a Slack app and incoming webhook, you'll need elevated privilege in that Slack workspace. &lt;a href="https://slack.com/help/articles/206845317-Create-a-Slack-workspace"&gt;Create a new Slack workspace&lt;/a&gt; for this tutorial and &lt;a href="https://api.slack.com/messaging/webhooks"&gt;an incoming webhook&lt;/a&gt; for a specific channel.&lt;/p&gt;

&lt;p&gt;Your newly created Slack webhook will look like this: &lt;code&gt;https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Treat this as sensitive information&lt;/em&gt;.&lt;/p&gt;
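&lt;p&gt;Before wiring the webhook into the pipeline, you can verify it works with a quick &lt;code&gt;curl&lt;/code&gt; call (substitute your actual webhook URL for the placeholder); a test message should appear in the channel:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"Webhook test from my CI/CD tutorial"}' \
  https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;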

&lt;h3&gt;
  
  
  Harness entity setup
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create secrets: &lt;/p&gt;

&lt;p&gt;a. GitHub Secret: Navigate to the &lt;a href="https://app.harness.io/"&gt;Harness console&lt;/a&gt;. From &lt;strong&gt;Project Setup&lt;/strong&gt; → &lt;strong&gt;Secrets&lt;/strong&gt;, click &lt;strong&gt;+ New Secret&lt;/strong&gt; → &lt;strong&gt;Text&lt;/strong&gt;, give the secret a name (for example, &lt;code&gt;cca-git-pat&lt;/code&gt;) and paste in the previously created GitHub PAT.&lt;/p&gt;

&lt;p&gt;b. Docker Secret: Similarly, create a Docker secret (you can name it &lt;code&gt;docker-secret&lt;/code&gt;) and use the previously created Docker access token as the secret value.&lt;/p&gt;

&lt;p&gt;c. Slack Webhook: Similarly, create a Slack webhook secret (you can name it &lt;code&gt;slack-webhook&lt;/code&gt;) and paste in the previously created Slack webhook value.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://developer.harness.io/docs/category/connectors/"&gt;Connectors&lt;/a&gt; in Harness help you pull in artifacts, sync with repos, integrate verification and analytics tools, and leverage collaboration channels. From &lt;strong&gt;Project Setup&lt;/strong&gt; → &lt;strong&gt;Connectors&lt;/strong&gt; → &lt;strong&gt;+ New Connector&lt;/strong&gt;, create the following connectors:&lt;/p&gt;

&lt;p&gt;a. GitHub Connector: Harness platform connects to the source code repository using this connector. Give this connector a name (for example, &lt;code&gt;cca-git-connector&lt;/code&gt;), choose URL type as &lt;strong&gt;Repository&lt;/strong&gt;, connection type as &lt;strong&gt;HTTP&lt;/strong&gt;, and paste in the forked Github Repository URL of the demo app. Use your GitHub username and the previously created GitHub secret for authentication. Select the connectivity mode as &lt;strong&gt;Connect through Harness Platform&lt;/strong&gt;. The connection test should be successful.&lt;/p&gt;

&lt;p&gt;b. Docker Connector: Harness platform pulls in the Docker image for the Slack notification using this connector. Give this connector a name (for example, &lt;code&gt;docker-connector&lt;/code&gt;), choose the provider type as &lt;strong&gt;DockerHub&lt;/strong&gt;, the Docker Registry URL as &lt;code&gt;https://index.docker.io/v2/&lt;/code&gt;, enter your Docker username, and select the previously created Docker secret. Select the connectivity mode as &lt;strong&gt;Connect through Harness Platform&lt;/strong&gt;. The connection test should be successful.&lt;/p&gt;

&lt;p&gt;c. Kubernetes Connector: Harness platform creates and manages resources on your GKE cluster using this connector. Give this connector a name (for example, &lt;code&gt;gke-connector&lt;/code&gt;). Choose &lt;strong&gt;Use the credentials of a specific Harness Delegate…&lt;/strong&gt; and click &lt;strong&gt;+ Install new Delegate&lt;/strong&gt;. The Harness Delegate is a service you run in your local network or VPC to connect all of your providers with your Harness account. Follow the instructions to install a delegate on your Kubernetes cluster and once the installation is complete, select the newly created delegate from the dropdown. The connection test should be successful.&lt;/p&gt;

&lt;p&gt;d. GCP Connector: The GCP connector allows you to connect to your Google Cloud Platform resource and perform actions via Harness platform. Give this connector a name and select &lt;strong&gt;Specify credentials here&lt;/strong&gt; under the &lt;strong&gt;Details&lt;/strong&gt; section. Add a new secret name and upload the GCP service account key JSON file you previously downloaded. Select the connectivity mode as &lt;strong&gt;Connect through Harness Platform&lt;/strong&gt;. The connection test should be successful.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bpg-cjOQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7vzrcao4l76db6yuqn6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bpg-cjOQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7vzrcao4l76db6yuqn6z.png" alt="GCP Connector Configuration" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Build and push image to GAR
&lt;/h2&gt;

&lt;p&gt;First, let’s create the build and push image part of the pipeline. Click on &lt;strong&gt;Pipelines&lt;/strong&gt; → &lt;strong&gt;+ Create a Pipeline&lt;/strong&gt; and give it a name (e.g., &lt;code&gt;gar-gke-cicd-pipeline&lt;/code&gt;). Select the &lt;strong&gt;Inline&lt;/strong&gt; option to store the pipeline definition in Harness, and then click &lt;strong&gt;Start&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;Add Stage&lt;/strong&gt; and choose &lt;strong&gt;Build&lt;/strong&gt; as the stage type. A Harness pipeline can consist of one or more stages. Give this stage a name (e.g., &lt;code&gt;Push to GAR&lt;/code&gt;), select the &lt;strong&gt;Clone Codebase&lt;/strong&gt; option (this should be enabled, by default), and choose the GitHub connector you previously created from the dropdown. The repository name should auto-populate. Click &lt;strong&gt;Set Up Stage&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Infrastructure&lt;/strong&gt;, specify where your build will run. Select &lt;strong&gt;Cloud&lt;/strong&gt; for Harness-hosted builds and choose &lt;strong&gt;Linux/AMD64&lt;/strong&gt; for the Platform option.&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Execution&lt;/strong&gt;, click &lt;strong&gt;Add Step&lt;/strong&gt; → &lt;strong&gt;Add Step&lt;/strong&gt; and find the &lt;strong&gt;Build and Push to GAR&lt;/strong&gt; step in the Step Library. Name this step (e.g., &lt;code&gt;BuildAndPushToGAR&lt;/code&gt;) and select the GCP connector you created earlier from the dropdown. When choosing the host, use the region selected when creating the image registry repository (e.g., &lt;code&gt;northamerica-northeast1-docker.pkg.dev&lt;/code&gt;). Refer to the &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/build-and-upload-artifacts/build-and-push-to-gcr/#host"&gt;Harness Developer Hub docs&lt;/a&gt; or the &lt;a href="https://cloud.google.com/artifact-registry/docs/repositories/repo-locations"&gt;GCP Artifact Registry docs&lt;/a&gt; for more details on selecting the region for GAR. Enter your GCP project ID under &lt;strong&gt;Project Id&lt;/strong&gt;. For &lt;strong&gt;Image Name&lt;/strong&gt;, use the image registry repository name followed by the application name in this format: &lt;code&gt;cca-registry/cca-app&lt;/code&gt;. Use &lt;code&gt;latest&lt;/code&gt; for now as the &lt;strong&gt;Tags&lt;/strong&gt;. Click &lt;strong&gt;Apply Changes&lt;/strong&gt;.&lt;/p&gt;
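&lt;p&gt;If you prefer working in the pipeline's YAML view, the step configured above looks roughly like the following sketch. The &lt;code&gt;connectorRef&lt;/code&gt;, host, and project ID are example values; use your own:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;- step:
    type: BuildAndPushGAR
    name: BuildAndPushToGAR
    identifier: BuildAndPushToGAR
    spec:
      connectorRef: gcp_connector
      host: northamerica-northeast1-docker.pkg.dev
      projectID: YOUR_GCP_PROJECT_ID
      imageName: cca-registry/cca-app
      tags:
        - latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;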

&lt;p&gt;Now, click &lt;strong&gt;Run&lt;/strong&gt; to execute the pipeline. Enter &lt;code&gt;master&lt;/code&gt; as the git branch name for the build (or &lt;code&gt;main&lt;/code&gt; if you're not using the forked CCA app). A successful execution of the pipeline should resemble the following: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v9wLjssy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z909zlmpmgnc5ocycy5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v9wLjssy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z909zlmpmgnc5ocycy5d.png" alt="Successful Build and Push Pipeline Execution" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy to GKE
&lt;/h2&gt;

&lt;p&gt;If you're using a fork of the Captain Canary Adventure App, update the &lt;code&gt;deployment.yaml&lt;/code&gt; before deploying the application to Kubernetes. The current YAML uses Harness variables, but since those variables aren't defined yet, you'll need to hardcode some values for now.&lt;/p&gt;

&lt;p&gt;Assuming your Artifact Registry repository name is &lt;code&gt;cca-registry&lt;/code&gt; and the image name is &lt;code&gt;cca-app&lt;/code&gt;, replace the current values in &lt;code&gt;values.yaml&lt;/code&gt; with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cca-registry/cca-app&lt;/span&gt;
  &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Click on &lt;strong&gt;+ Add Stage&lt;/strong&gt; in the pipeline and choose &lt;strong&gt;Deploy&lt;/strong&gt; as the stage type. Give the stage a name (e.g., &lt;code&gt;GKE Deploy&lt;/code&gt;), select &lt;strong&gt;Kubernetes&lt;/strong&gt; as the deployment type, and click &lt;strong&gt;Set Up Stage&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Next, create a service to deploy. Choose &lt;strong&gt;Kubernetes&lt;/strong&gt; as the deployment type, select the GitHub connector, and specify the paths for the manifests. For example, for the Captain Canary Adventure App, here are the manifest details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qecqxJcT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24bqp4qzp3iqw1tznth1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qecqxJcT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24bqp4qzp3iqw1tznth1.png" alt="Captain Canary K8s Manifest Details" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Environments represent your deployment targets (such as QA or Prod). Each environment contains one or more Infrastructure Definitions that list your target clusters, hosts, namespaces, etc. Click on &lt;strong&gt;+ New Environment&lt;/strong&gt;, give this environment a name (e.g., &lt;code&gt;cca-env&lt;/code&gt;), select &lt;strong&gt;Pre-Production&lt;/strong&gt; as the environment type, and click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Next, create an infrastructure definition. Click &lt;strong&gt;+ New Infrastructure&lt;/strong&gt;, under cluster details, select the GKE connector, provide a Kubernetes namespace where your application will be deployed (e.g., &lt;code&gt;cca-ns&lt;/code&gt;), and click &lt;strong&gt;Save&lt;/strong&gt;. For execution strategies, choose &lt;strong&gt;Rolling Deployment&lt;/strong&gt; and click &lt;strong&gt;Use Strategy&lt;/strong&gt;. Under optional configuration, select &lt;strong&gt;Enable Kubernetes Pruning&lt;/strong&gt;. With this setting, Harness will use pruning to remove any resources present in an old manifest but no longer in the manifest used for the current deployment. You can find more information about this configuration on the &lt;a href="https://developer.harness.io/docs/continuous-delivery/deploy-srv-diff-platforms/kubernetes/cd-kubernetes-category/prune-kubernetes-resources/"&gt;Harness Developer Hub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You’re all set! Click &lt;strong&gt;Save&lt;/strong&gt; and then &lt;strong&gt;Run&lt;/strong&gt;. Use &lt;code&gt;master&lt;/code&gt; for the git branch. A successful pipeline execution will override the &lt;code&gt;cca-app:latest&lt;/code&gt; image on your GAR repository and deploy this image to your GKE cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add Approval and Slack Notifications
&lt;/h2&gt;

&lt;p&gt;In practical DevOps pipelines, gates are implemented to control artifact promotion to the production environment. Harness supports &lt;a href="https://developer.harness.io/tutorials/cd-pipelines/approvals/"&gt;various types of approvals&lt;/a&gt; in Continuous Delivery (CD) pipelines. In this tutorial, you'll use the manual approval step.&lt;/p&gt;

&lt;p&gt;Within the &lt;strong&gt;gke-deploy&lt;/strong&gt; stage, click &lt;strong&gt;+ Add Step&lt;/strong&gt; before the &lt;strong&gt;Rollout Deployment&lt;/strong&gt; step and find &lt;strong&gt;Harness Approval&lt;/strong&gt; under Approval in the Step Library. Keep all default options, and you'll need to select the approver from the User Groups. Choose &lt;strong&gt;Project&lt;/strong&gt; → &lt;strong&gt;All Project Users&lt;/strong&gt; under user group selection. If you're the only member of this project, you'll be the sole approver. Click &lt;strong&gt;Apply Selected&lt;/strong&gt;. Click &lt;strong&gt;Apply Changes&lt;/strong&gt; for the manual approval step, and then click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now, let's add a notification stage so that whenever a deployment is approved in the CI/CD pipeline, a notification will be sent to a Slack channel, indicating who approved it.&lt;/p&gt;

&lt;p&gt;From the pipeline, click on &lt;strong&gt;Add Stage&lt;/strong&gt; after the gke-deploy stage, select &lt;strong&gt;Build&lt;/strong&gt; as the stage type, give this stage a name (e.g., &lt;code&gt;Notifications Stage&lt;/code&gt;), disable the Clone Codebase option, and click &lt;strong&gt;Set Up Stage&lt;/strong&gt;. Under Infrastructure, choose &lt;strong&gt;Use a New Infrastructure&lt;/strong&gt; → &lt;strong&gt;Cloud&lt;/strong&gt; and &lt;strong&gt;Linux → AMD64&lt;/strong&gt; for the Operating System. Under Execution, click &lt;strong&gt;Add Step&lt;/strong&gt; → &lt;strong&gt;Add Step&lt;/strong&gt; and find &lt;strong&gt;Plugin&lt;/strong&gt; in the Build section of the Step Library.&lt;/p&gt;

&lt;p&gt;Name this step (e.g., &lt;code&gt;Slack Notification&lt;/code&gt;), choose the Docker connector you previously created under Container Registry, use &lt;code&gt;plugins/slack&lt;/code&gt; as the image, and add the following key-value pairs under &lt;strong&gt;Optional Configuration&lt;/strong&gt; → &lt;strong&gt;Settings&lt;/strong&gt; (assuming the id of your Slack webhook secret is &lt;code&gt;slackwebhook&lt;/code&gt;). &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Key&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;webhook&lt;/td&gt;
&lt;td&gt;&lt;code&gt;&amp;lt;+secrets.getValue("slackwebhook")&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;template&lt;/td&gt;
&lt;td&gt;The deployment is moved to prod by &lt;code&gt;&amp;lt;+approval.approvalActivities[0].user.name&amp;gt;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
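&lt;p&gt;In the pipeline's YAML view, the resulting Plugin step looks roughly like this sketch (the connector reference and secret id are example values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;- step:
    type: Plugin
    name: Slack Notification
    identifier: Slack_Notification
    spec:
      connectorRef: docker-connector
      image: plugins/slack
      settings:
        webhook: &amp;lt;+secrets.getValue("slackwebhook")&amp;gt;
        template: The deployment is moved to prod by &amp;lt;+approval.approvalActivities[0].user.name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;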

&lt;p&gt;Notice the use of a Harness variable expression in the template, which retrieves the name of the approver from the previous stage.&lt;/p&gt;

&lt;p&gt;Before running the pipeline, one more update is needed. Currently, every image built, pushed, and deployed has the same image tag, making it challenging to track based on the build number. Harness provides powerful &lt;a href="https://developer.harness.io/docs/platform/variables-and-expressions/harness-variables/"&gt;built-in and custom variable expressions&lt;/a&gt; for various practical use cases.&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;Variables&lt;/strong&gt; for your pipeline and select &lt;strong&gt;+ Add Variable&lt;/strong&gt; at the pipeline level. Let’s add two variables: &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Variable Name&lt;/th&gt;
&lt;th&gt;Variable Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;imageName&lt;/td&gt;
&lt;td&gt;cca-registry/cca-app&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;imageTag&lt;/td&gt;
&lt;td&gt;&lt;code&gt;&amp;lt;+pipeline.sequenceId&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For the &lt;strong&gt;imageTag&lt;/strong&gt;, click the 📌 icon and select &lt;strong&gt;Expression&lt;/strong&gt;. Every time you run the pipeline, the pipeline sequence ID will change, and subsequently, the image that will be built and deployed will also change.&lt;/p&gt;

&lt;p&gt;Now that you’ve updated the pipeline to pass in the &lt;strong&gt;imageName&lt;/strong&gt; and &lt;strong&gt;imageTag&lt;/strong&gt; as variables, let’s update the codebase to replace the hardcoded values. Revert the changes to &lt;code&gt;deployment.yaml&lt;/code&gt; and &lt;code&gt;values.yaml&lt;/code&gt; you previously made. You'll observe that the &lt;code&gt;values.yaml&lt;/code&gt; file will receive the &lt;strong&gt;imageName&lt;/strong&gt; and &lt;strong&gt;imageTag&lt;/strong&gt; during pipeline runtime, and then the &lt;code&gt;deployment.yaml&lt;/code&gt; file will use those values from the &lt;code&gt;values.yaml&lt;/code&gt; file.&lt;/p&gt;
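&lt;p&gt;After the revert, the idea is that &lt;code&gt;values.yaml&lt;/code&gt; references the pipeline variables instead of hardcoded values, roughly like this (the exact expressions in your fork may differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;image:
  name: &amp;lt;+pipeline.variables.imageName&amp;gt;
  tag: &amp;lt;+pipeline.variables.imageTag&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;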

&lt;p&gt;Now, click &lt;strong&gt;Save&lt;/strong&gt; and then &lt;strong&gt;Run&lt;/strong&gt;. After a successful &lt;strong&gt;gar-build-and-push&lt;/strong&gt; stage, you should see an image in the GAR repository with a numeric tag that matches the pipeline sequence ID. Right after, you should see the following prompt for approval: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A0U8SIAV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bhcgxqmprz5k7iacsemm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A0U8SIAV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bhcgxqmprz5k7iacsemm.png" alt="Harness Approval" width="800" height="402"&gt;&lt;/a&gt;&lt;br&gt;
You can (optionally) add a comment and click &lt;strong&gt;Approve&lt;/strong&gt;. The pipeline should continue as before, and you’ll see a deployment on your Kubernetes cluster. This time, however, you’ll also see a Slack notification resulting from your approval. To change the text that appears in the notification, modify the &lt;strong&gt;template&lt;/strong&gt; value in the Slack plugin step settings. &lt;/p&gt;
&lt;h2&gt;
  
  
  Security Tests and Policy Enforcement (Bonus Section)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📝&lt;/th&gt;
&lt;th&gt;These features are only available on Harness paid plans&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h3&gt;
  
  
  Run OWASP Tests
&lt;/h3&gt;

&lt;p&gt;You can scan your code repositories using &lt;a href="https://owasp.org/www-project-dependency-check/"&gt;OWASP Dependency-Check&lt;/a&gt; within a Harness pipeline. Within the &lt;code&gt;gar-build-and-push&lt;/code&gt; stage, click on &lt;strong&gt;+ Add Step&lt;/strong&gt; → &lt;strong&gt;Add Step&lt;/strong&gt; before the &lt;code&gt;BuildAndPushToGAR&lt;/code&gt; step. From the step library, find &lt;strong&gt;Owasp&lt;/strong&gt; under the Security Tests section.&lt;/p&gt;

&lt;p&gt;Use the following settings to configure the OWASP Dependency Check and click &lt;strong&gt;Apply Changes&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Setting Name&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Name&lt;/td&gt;
&lt;td&gt;Owasp Tests&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scan Mode&lt;/td&gt;
&lt;td&gt;Orchestration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Target.Name&lt;/td&gt;
&lt;td&gt;cca-owasp-tests&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Variant&lt;/td&gt;
&lt;td&gt;master (this is the branch name for the repo)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Log Level&lt;/td&gt;
&lt;td&gt;Info&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fail On Severity&lt;/td&gt;
&lt;td&gt;Critical&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You can use any string values for the step name and &lt;strong&gt;Target.Name&lt;/strong&gt;. For the &lt;strong&gt;Variant&lt;/strong&gt;, use the branch name of your codebase (e.g., &lt;code&gt;master&lt;/code&gt; or &lt;code&gt;main&lt;/code&gt;). Selecting &lt;strong&gt;Critical&lt;/strong&gt; for &lt;strong&gt;Fail On Severity&lt;/strong&gt; means that if the scan finds any critical-severity issue, this step will fail and the pipeline execution will halt. Check out the &lt;a href="https://developer.harness.io/docs/security-testing-orchestration/sto-techref-category/owasp-scanner-reference/"&gt;OWASP scanner reference&lt;/a&gt; to learn more about these configurations. &lt;/p&gt;
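&lt;p&gt;For reference, the settings in the table above translate to a YAML step roughly like the following sketch (field names follow the OWASP scanner reference; verify against your Harness version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;- step:
    type: Owasp
    name: Owasp Tests
    identifier: Owasp_Tests
    spec:
      mode: orchestration
      config: default
      target:
        name: cca-owasp-tests
        type: repository
        variant: master
      advanced:
        log:
          level: info
        fail_on_severity: critical
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;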

&lt;p&gt;Click &lt;strong&gt;Save&lt;/strong&gt; and then &lt;strong&gt;Run&lt;/strong&gt;. If your codebase doesn’t have a critical OWASP finding, the pipeline should execute successfully. To see the scanner fail a pipeline, use a codebase with known vulnerabilities, such as &lt;a href="https://github.com/WebGoat/WebGoat"&gt;WebGoat&lt;/a&gt;, and you’ll see the OWASP scanner in action.&lt;/p&gt;
&lt;h3&gt;
  
  
  Add a policy to mandate approval step on deployment stages
&lt;/h3&gt;

&lt;p&gt;Harness Policy As Code uses &lt;a href="https://www.openpolicyagent.org/"&gt;Open Policy Agent (OPA)&lt;/a&gt; as the central service to store and enforce policies for the different entities and processes across the Harness platform. In this section, you will define a policy that will deny a pipeline execution if there is no approval step defined in a deployment stage.&lt;/p&gt;

&lt;p&gt;From &lt;strong&gt;Project Setup&lt;/strong&gt; → &lt;strong&gt;Policies&lt;/strong&gt;, follow the wizard to create a policy from the policy library. Use the &lt;strong&gt;Pipeline - Approval&lt;/strong&gt; policy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wb2Wg9Kf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l5fywbpys3wxh6pf055c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wb2Wg9Kf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l5fywbpys3wxh6pf055c.png" alt="Pipeline Approval Policy" width="556" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next screen, choose &lt;strong&gt;Project&lt;/strong&gt; scope, trigger event &lt;strong&gt;On Run&lt;/strong&gt;, and for the severity, choose &lt;strong&gt;Error &amp;amp; Exit&lt;/strong&gt;. Next, click &lt;strong&gt;Yes&lt;/strong&gt; to apply the policy. &lt;/p&gt;

&lt;p&gt;Now, let’s remove the approval step from the gke-deploy stage. Click on &lt;strong&gt;Edit&lt;/strong&gt; on the pipeline and click on the cross button on the Harness Approval step. Click &lt;strong&gt;Save&lt;/strong&gt; and then &lt;strong&gt;Run&lt;/strong&gt;. You should see the following error:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dV1OUt3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5tk79zboogjsegggfwhg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dV1OUt3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5tk79zboogjsegggfwhg.png" alt="Policy Enforcement In Action" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the Harness Approval step back, then save and run the pipeline; this time it should execute successfully. A successful end-to-end pipeline execution will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D5tuodJ---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pze4gowt740k3wzrb0cu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D5tuodJ---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pze4gowt740k3wzrb0cu.png" alt="End to end pipeline execution" width="800" height="576"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  View the running application
&lt;/h2&gt;

&lt;p&gt;While connected to your GKE cluster, execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; cca-ns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This assumes that you deployed the application to the &lt;code&gt;cca-ns&lt;/code&gt; namespace. &lt;/p&gt;

&lt;p&gt;The output will be something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP    PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;        AGE
cca-app-service         LoadBalancer   34.118.227.33   34.152.47.53   80:30008/TCP   6d19h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Navigate to the IP address listed under the EXTERNAL-IP column, and you should see a running Captain Canary Adventure application. Since the application is served on port 80, you can omit the port number from the URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iI_avofK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rm8u9ez6lih5s8ukmb3x.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iI_avofK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rm8u9ez6lih5s8ukmb3x.gif" alt="Captain Canary Application Running" width="600" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Homework Task
&lt;/h2&gt;

&lt;p&gt;If you’d like to take this pipeline one step further, you can leverage caching to share data across stages, since each stage in a Harness CI pipeline has its own build infrastructure. Check out how to &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/caching-ci-data/save-cache-in-gcs/"&gt;save and restore cache from Google Cloud Storage (GCS)&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>gcp</category>
      <category>googlecloud</category>
      <category>cicd</category>
      <category>harness</category>
    </item>
    <item>
      <title>Ephemeral CI environments using ttl.sh and Gitness</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Mon, 18 Dec 2023 21:29:10 +0000</pubDate>
      <link>https://forem.com/harness/ephemeral-ci-environments-using-ttlsh-and-gitness-8jl</link>
      <guid>https://forem.com/harness/ephemeral-ci-environments-using-ttlsh-and-gitness-8jl</guid>
      <description>&lt;p&gt;In the realm of software development, balancing continuous integration with maintaining quality is a significant challenge. &lt;a href="https://ttl.sh/"&gt;ttl.sh&lt;/a&gt; and &lt;a href="https://docs.gitness.com/"&gt;Gitness&lt;/a&gt; offer a solution. ttl.sh is an ephemeral Docker image registry that allows for the creation of temporary image tags with built-in expiry. Gitness, an open-source platform by Harness, simplifies the management of source code repositories and development pipelines. This blog post will explore how these tools can be used to create temporary CI environments to expedite development processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Multiple developers working on different features can lead to merge conflicts and a bloated image registry. Traditional CI environments often retain Docker images from feature branches too long, which complicates image management and increases the likelihood of testing against outdated or incorrect images.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Urc5jNfO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t576xsvuko62jk6r7972.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Urc5jNfO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t576xsvuko62jk6r7972.png" alt="A PR workflow for ephemeral build environment" width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The workflow in the diagram leverages ttl.sh and Gitness to manage these challenges:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature Branch Workflow&lt;/strong&gt;: The creation of a pull request (PR) for a feature branch triggers a build in Gitness.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image Creation&lt;/strong&gt;: Gitness builds a Docker image from the feature branch and pushes it to ttl.sh, tagged with a UUID-based name and a time-to-live (TTL) limit. This step requires no registry credentials; the random UUID keeps the image name hard to guess.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated Testing&lt;/strong&gt;: The image is put through automated tests. Should the tests fail, developers are notified; if they succeed, the process proceeds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image Promotion&lt;/strong&gt;: After a PR passes all checks and is merged, the image is given a permanent tag and pushed to a central image registry, which at this point, requires proper credentials.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This approach ensures that only quality-assured images make it to the central registry, reducing the clutter and promoting a cleaner CI process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Let's construct this setup step by step. Starting with pushing an image to ttl.sh through Gitness, I’ll guide you to resources for completing the remaining components of the system.&lt;/p&gt;

&lt;p&gt;Begin with the Gitness documentation to set up your first project. You can start a new repository or link an existing one from GitHub or GitLab. Any source code repository will do; the only essential requirement is that it contains a Dockerfile.&lt;/p&gt;

&lt;p&gt;Navigate to “Pipelines” within your repository and create a new pipeline. Gitness will suggest a sample pipeline. Replace it with the following YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pipeline&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build-and-push&lt;/span&gt;
     &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;platform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;arch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;amd64&lt;/span&gt;
         &lt;span class="na"&gt;os&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;linux&lt;/span&gt;
       &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker_build&lt;/span&gt;
           &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
             &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
               &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ttl.sh/xxxx-yyyy-nnnn-2a2222-4b44&lt;/span&gt;
               &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1h&lt;/span&gt;
             &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;
           &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;plugin&lt;/span&gt;
     &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ci&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note the image repo and tag: &lt;strong&gt;ttl.sh&lt;/strong&gt; is the registry, the &lt;strong&gt;UUID (xxxx-yyyy-nnnn-2a2222-4b44)&lt;/strong&gt; is the image name, and the tag (&lt;code&gt;1h&lt;/code&gt;) sets the image's time-to-live. You can replace the hard-coded image name with a Gitness secret for a dynamic image name. Click on &lt;strong&gt;Secrets&lt;/strong&gt; and &lt;a href="https://docs.gitness.com/pipelines/secrets"&gt;add a new Gitness secret&lt;/a&gt; called &lt;strong&gt;random_image_name&lt;/strong&gt;. Now you can update your pipeline YAML so that the image name is not fixed:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;repo: ttl.sh/${{ secrets.get("random_image_name") }}&lt;/code&gt;&lt;/p&gt;
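&lt;p&gt;You can try the same ephemeral push flow from any machine with Docker installed; no Gitness pipeline is needed. A minimal sketch, with the image name generated locally by &lt;code&gt;uuidgen&lt;/code&gt; instead of a Gitness secret:&lt;/p&gt;

```shell
# Generate a hard-to-guess image name (registry paths must be lowercase)
IMAGE_NAME=$(uuidgen | tr '[:upper:]' '[:lower:]')

# On ttl.sh, the tag doubles as the time-to-live; no credentials are needed
docker build -t "ttl.sh/${IMAGE_NAME}:1h" .
docker push "ttl.sh/${IMAGE_NAME}:1h"
```

&lt;p&gt;Anyone who knows the generated name can pull the image until the hour is up; after that, the tag simply expires.&lt;/p&gt;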

&lt;p&gt;Save the pipeline and click on &lt;strong&gt;Run&lt;/strong&gt;. After executing the pipeline, your output should resemble the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9YbAD-JG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sbgmxuvh460vvdeinxox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9YbAD-JG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sbgmxuvh460vvdeinxox.png" alt="Successsful pipeline execution" width="795" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, add the following &lt;a href="https://docs.gitness.com/category/steps"&gt;steps&lt;/a&gt; yourself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add a &lt;a href="https://docs.gitness.com/pipelines/steps/run"&gt;run step&lt;/a&gt; to execute tests on the container you just built and pushed.&lt;/li&gt;
&lt;li&gt;Add a &lt;a href="https://docs.gitness.com/pipelines/triggers"&gt;trigger&lt;/a&gt; so that when a pull request is opened, Gitness can automatically trigger pipeline execution.&lt;/li&gt;
&lt;li&gt;Add a &lt;a href="https://docs.gitness.com/pipelines/steps/plugin"&gt;slack plugin step&lt;/a&gt; so that failed tests trigger slack webhook and notification.&lt;/li&gt;
&lt;li&gt;Add another &lt;a href="https://docs.gitness.com/pipelines/steps/run"&gt;run step&lt;/a&gt; so that when all tests pass, the image is retagged and pushed to a private image registry. You’ll need authentication at this step.&lt;/li&gt;
&lt;/ul&gt;
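&lt;p&gt;For the last step, the promotion might look like the following sketch. The registry host &lt;code&gt;registry.example.com&lt;/code&gt; and the image names are placeholders for your own private registry and naming scheme:&lt;/p&gt;

```shell
# Pull the ephemeral image that passed all tests (same UUID used by the pipeline)
docker pull "ttl.sh/${IMAGE_NAME}:1h"

# Retag with a permanent name and push to a private registry; this requires auth
docker tag "ttl.sh/${IMAGE_NAME}:1h" registry.example.com/myapp:v1.0.0
docker login registry.example.com
docker push registry.example.com/myapp:v1.0.0
```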

&lt;h2&gt;
  
  
  Security Considerations
&lt;/h2&gt;

&lt;p&gt;While ttl.sh's no-credential, ephemeral approach offers flexibility and simplicity for CI environments, it introduces unique security considerations. The convenience of pushing images without credentials is counterbalanced by potential security risks — namely, the possibility of unauthorized image pulls. You can mitigate these risks through short-lived images and using UUIDs for image names:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Short-lived images:&lt;/strong&gt; By design, images in ttl.sh are ephemeral. Setting a short expiration time for an image means it is available only for a limited time window, reducing the risk exposure period.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use of UUIDs:&lt;/strong&gt; Incorporating UUIDs into image names significantly lowers the risk of unauthorized access. The randomness and complexity of UUIDs make it exceedingly difficult for someone to guess an image name and pull it without authorization.&lt;/p&gt;

&lt;p&gt;However, for production environments, organizations might consider deploying a private version of ttl.sh. This allows for more control over the security aspects, such as network isolation, access control, and auditing capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;Integrating ttl.sh and Gitness creates an ephemeral CI environment that streamlines the build and test process. It helps avoid the accumulation of outdated images and keeps the pipeline lean. This method is not just about speed; it's about maintaining a manageable and efficient development workflow. Adopting these tools can lead to more frequent and dependable software delivery for your engineering teams. Check out and follow the &lt;a href="https://www.youtube.com/@Harnessio"&gt;Harness YouTube channel&lt;/a&gt; for more content like this.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>security</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Unlocking Efficiency: Exploring Churn Rate With Harness Software Engineering Insights (SEI)</title>
      <dc:creator>Francois Akiki</dc:creator>
      <pubDate>Fri, 15 Dec 2023 16:32:44 +0000</pubDate>
      <link>https://forem.com/harness/unlocking-efficiency-exploring-churn-rate-with-harness-software-engineering-insights-sei-37al</link>
      <guid>https://forem.com/harness/unlocking-efficiency-exploring-churn-rate-with-harness-software-engineering-insights-sei-37al</guid>
      <description>&lt;p&gt;Improving the effectiveness and productivity of developers is crucial for the success of an engineering team. To achieve this, the team should adopt a metric-driven approach to identify the root causes that scales down the team's productivity. Out of the many metrics that engineering leaders can use to understand an engineering team's productivity is Churn Rate. In this article, you'll learn how high churn rates can impact the team and how Harness SEI can help lower churn rates in your organization.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Churn Rate?
&lt;/h3&gt;

&lt;p&gt;In the context of software development, churn rate refers to the amount of work that is added or removed from a sprint backlog during a sprint. It measures the scope change and provides insights into the volatility of the sprint backlog.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A high churn rate indicates a lot of changes in the sprint backlog, which may result in delays in completing the sprint, and may also indicate that the requirements are not well-defined. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A low churn rate suggests that the sprint backlog is stable and the requirements are well-defined.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why Churn Rate Matters
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Unveiling Sprint Volatility:
&lt;/h4&gt;

&lt;p&gt;Churn Rate acts as a window into the dynamic nature of your sprint backlog. By understanding the scope changes during a sprint, teams can identify and address volatility, fostering a more stable and predictable development environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Optimizing Workflows:
&lt;/h4&gt;

&lt;p&gt;SEI's Churn Rate empowers teams to optimize workflows by pinpointing areas of change mid-sprint. This insight allows for targeted strategies, ensuring that the development team can adapt swiftly and stay on course despite evolving project requirements.&lt;/p&gt;

&lt;h4&gt;
  
  
  Enhancing Collaboration:
&lt;/h4&gt;

&lt;p&gt;With Churn Rate, collaboration between product managers and engineers reaches new heights. The metric provides a common ground for discussions, enabling both teams to make informed decisions and align efforts seamlessly.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Formula Unveiled:
&lt;/h3&gt;

&lt;p&gt;Churn Rate = (Points added mid-sprint + Points removed mid-sprint + Positive difference of changes in planned issues) / Points committed at the start of the sprint&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Points added mid-sprint:&lt;/strong&gt; The sum of story points for items added during the sprint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Points removed mid-sprint:&lt;/strong&gt; Identifies the reduction in story points due to the removal of items during the sprint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Positive difference in planned issues:&lt;/strong&gt; Reflects the positive changes in story points for planned issues during the sprint.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
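&lt;p&gt;As a quick sanity check of the formula, here is a hypothetical sprint worked through in shell arithmetic (all point values are made up for illustration):&lt;/p&gt;

```shell
# Hypothetical sprint: 20 points committed at the start, 4 points added
# mid-sprint, 2 points removed mid-sprint, planned issues grew by 1 point
committed=20
added=4
removed=2
grown=1

churn=$(( (added + removed + grown) * 100 / committed ))
echo "Churn rate: ${churn}%"   # prints: Churn rate: 35%
```

&lt;p&gt;A churn rate of 35% in this example means over a third of the committed scope changed mid-flight, a strong signal to revisit how requirements are defined.&lt;/p&gt;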


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zhuRwaZG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/6577607b880250e437503018_DnbWQ9sCfn8fN87c0bRYMIqCWm3QX3JRt31XplF3U6ePmyj1xE_UGJm2aAd8R_K2iLwAZmQ0ELN01eqdWPfngtmYOcyqWQQHUigYaXyJ7_kANC3ObfAfoalyXtMsLWLFKwt8tmzJtWQNO3wra6yBoBA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zhuRwaZG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/6577607b880250e437503018_DnbWQ9sCfn8fN87c0bRYMIqCWm3QX3JRt31XplF3U6ePmyj1xE_UGJm2aAd8R_K2iLwAZmQ0ELN01eqdWPfngtmYOcyqWQQHUigYaXyJ7_kANC3ObfAfoalyXtMsLWLFKwt8tmzJtWQNO3wra6yBoBA.png" alt="Measure scope change in your sprint using SEI's Churn Rate metric.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Measure scope change in your sprint using SEI's Churn Rate metric&lt;/p&gt;


&lt;h3&gt;
  
  
  Incorporating Churn Rate into SEI
&lt;/h3&gt;

&lt;p&gt;SEI seamlessly integrates Churn Rate into its comprehensive suite of metrics, leveraging the power of the Trellis Framework. As part of the 40+ third-party integrations, Churn Rate ensures that your software factory's performance is not just measured but optimized, setting the stage for unparalleled efficiency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unlocking Potential with SEI's Churn Rate
&lt;/h3&gt;

&lt;p&gt;In conclusion, Churn Rate emerges as a catalyst for transformative change in software development. SEI's commitment to providing actionable insights and workflow automation reaches new heights with this innovative metric. By understanding and harnessing Churn Rate, software delivery teams can not only increase productivity but also build a foundation for sustained excellence.&lt;/p&gt;

&lt;p&gt;In a world where adaptability is key, SEI's Churn Rate becomes the compass that guides your software engineering ship through the ever-changing seas of development, ensuring you not only navigate challenges but thrive in the face of change. Experience the power of Churn Rate with Software Engineering Insights – where productivity meets precision. &lt;/p&gt;

&lt;p&gt;Engineering teams can leverage Harness Software Engineering Insights to first baseline the people, processes and tooling bottlenecks and then drive a continuous improvement process. To learn more, schedule a &lt;a href="https://www.harness.io/demo/software-engineering-insights?utm_source=harness_io&amp;amp;utm_medium=cta&amp;amp;utm_campaign=sei&amp;amp;utm_content=blog"&gt;demo&lt;/a&gt; with our experts.&lt;/p&gt;

&lt;p&gt;‍Check out the original blog &lt;a href="https://www.harness.io/blog/harness-sei-churn-rate-insights?utm_source=pmm-adeeb-valiulla&amp;amp;utm_medium=zap&amp;amp;utm_content=blog"&gt;here&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>harness</category>
      <category>softwareengineeringinsights</category>
    </item>
    <item>
      <title>Introducing the Harness SRM Backstage Plugin</title>
      <dc:creator>Francois Akiki</dc:creator>
      <pubDate>Fri, 01 Dec 2023 04:53:28 +0000</pubDate>
      <link>https://forem.com/harness/introducing-the-harness-srm-backstage-plugin-3hg7</link>
      <guid>https://forem.com/harness/introducing-the-harness-srm-backstage-plugin-3hg7</guid>
      <description>&lt;p&gt;We are excited to introduce the latest addition to our suite of Open Source Backstage Plugins  - the &lt;a href="https://github.com/harness/backstage-plugins/tree/main/plugins/harness-srm"&gt;Harness SRM Backstage Plugin&lt;/a&gt;. This plugin is designed to seamlessly integrate with your &lt;a href="https://backstage.io/"&gt;Backstage Instance&lt;/a&gt; as well as with &lt;a href="https://www.harness.io/products/internal-developer-portal"&gt;Harness IDP&lt;/a&gt; to help with development team's workflow, ensuring that Service Level Objectives (SLOs) and error budgets are not just a metric but a part of your daily development practice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JCE6QYjH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/656818cc563e2d8583e323e4_H9ewF4T-RobPjO3l8oIgJqBwMpCq97LTwqUuzS0_0HppEUfdkDYYETkyNwC1tDXwOkoxcVn3NyfxkP7eApFlZF6jLAVVHopqlUIjBIIP6YxkiOH1voaPolRGCKKJIBdqnAvrfA8-CfDfr_UOuD7-B_I.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JCE6QYjH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/656818cc563e2d8583e323e4_H9ewF4T-RobPjO3l8oIgJqBwMpCq97LTwqUuzS0_0HppEUfdkDYYETkyNwC1tDXwOkoxcVn3NyfxkP7eApFlZF6jLAVVHopqlUIjBIIP6YxkiOH1voaPolRGCKKJIBdqnAvrfA8-CfDfr_UOuD7-B_I.png" alt="" width="800" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Focus on SLOs?
&lt;/h2&gt;

&lt;p&gt;Good Site Reliability Engineering (SRE) practice hinges on continuously monitoring and adhering to SLOs. These objectives are pivotal in maintaining the reliability and performance of services. However, in the fast-paced world of software development, it's often challenging for developers to continually engage with separate observability tools. This is where our plugin bridges the gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  Streamlining Observability
&lt;/h2&gt;

&lt;p&gt;The Harness SRM Backstage Plugin is more than just a tool; it's a solution to a common oversight in the development process. By aggregating SLO data from &lt;a href="https://www.harness.io/products/service-reliability-management"&gt;Harness Service Reliability Management&lt;/a&gt; Module, this plugin brings critical insights directly to the developers' dashboard. This integration means that your team no longer needs to switch contexts or platforms to monitor their SLOs.&lt;/p&gt;

&lt;h2&gt;
  
  
  High Availability of Adequate Information
&lt;/h2&gt;

&lt;p&gt;With this plugin, information about SLOs is not tucked away in a separate tool but is readily available in the Developer Portal. This accessibility ensures that your team is always aware of the current status of your services, leading to quicker responses and resolution if SLOs are at risk of being breached.&lt;/p&gt;

&lt;h2&gt;
  
  
  Empowering Developers
&lt;/h2&gt;

&lt;p&gt;By making SLO data readily available within the familiar environment of the Developer Portal, the Harness SRM Backstage Plugin empowers developers to take proactive steps in maintaining and improving service reliability. This approach not only enhances individual productivity but also fosters a culture of accountability and ownership within the team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, the Harness SRM Backstage Plugin improves how development teams interact with and respond to SLOs. By integrating critical data into the daily workflow, it makes adhering to good SRE practices not an additional task but a seamless part of the development process. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.harness.io/auth/#/signup?utm_source=harness_io&amp;amp;utm_medium=cta&amp;amp;utm_campaign=platform&amp;amp;utm_content=main_nav"&gt;Sign up&lt;/a&gt; today for the SRM Module and start using the plugin to experience the difference it makes in your team's efficiency and the reliability of your services.&lt;/p&gt;

&lt;p&gt;‍Check out the original blog &lt;a href="https://www.harness.io/blog/announcing-the-harness-srm-backstage-plugin"&gt;here&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>backstage</category>
      <category>platformengineering</category>
      <category>devrel</category>
      <category>idp</category>
    </item>
    <item>
      <title>Secure Container Image Signing with Cosign and OPA</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Tue, 28 Nov 2023 22:15:55 +0000</pubDate>
      <link>https://forem.com/harness/secure-container-image-signing-with-cosign-and-opa-2nbo</link>
      <guid>https://forem.com/harness/secure-container-image-signing-with-cosign-and-opa-2nbo</guid>
      <description>&lt;p&gt;As the adoption of containers in modern development continues to grow, ensuring the integrity of container images has become a pivotal aspect of application deployment strategies. In this video, Harness Developer Advocate &lt;a href="https://www.linkedin.com/in/diahmed/"&gt;Dewan Ahmed&lt;/a&gt; demonstrates how to leverage the combined power of Cosign and OPA for the secure deployment of container images to your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/PLvjcCCStzs"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;You can also &lt;a href="https://developer.harness.io/tutorials/cd-pipelines/kubernetes/cosign-opa?utm_campaign=cd-devrel"&gt;read the text version of this tutorial&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cosign</category>
      <category>opa</category>
      <category>harness</category>
    </item>
    <item>
      <title>Harness Chaos Engineering Faults Landscape</title>
      <dc:creator>Francois Akiki</dc:creator>
      <pubDate>Tue, 28 Nov 2023 03:38:11 +0000</pubDate>
      <link>https://forem.com/harness/harness-chaos-engineering-faults-landscape-3h20</link>
      <guid>https://forem.com/harness/harness-chaos-engineering-faults-landscape-3h20</guid>
      <description>&lt;p&gt;Harness Chaos Engineering provides a library of chaos faults using which chaos experiments are constructed and run. It is simple and intuitive to construct the chaos experiments using the given set of chaos faults. Before we delve into the details of the faults, let's look at the anatomy of a chaos experiment and the role of the chaos faults in the chaos experiments.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Quick Review of a Chaos Fault and a Chaos Experiment&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A chaos fault is an actual fault or distress injected into a system resource such as CPU, memory, network, time, the IO subsystem, or nodes. A chaos experiment measures the resilience of a system while one or more chaos faults run against it. In Harness Chaos Engineering, an experiment not only runs chaos faults, it also measures the resilience of the system in the context of the faults that were run. &lt;/p&gt;

&lt;p&gt;When a chaos experiment completes, it produces a “Resilience Score,” which indicates the resilience of the target system against the injected faults. The Resilience Score is the percentage of steady-state measurements that succeed during the experiment's execution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LPNoIcgq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/65647a3e9d3cc9117ded993b_Diagram.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LPNoIcgq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/65647a3e9d3cc9117ded993b_Diagram.png" alt="" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Resilience Score of a Chaos Experiment&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A developer who is designing and implementing the chaos experiment controls the meaning of a Resilience Score. The higher the number of steady state checks or probes passed during the experiment execution, the more it contributes to the resilience score. The steady state measurements in Harness CE are done through the Resilience Probes. Many resilience probes can be attached to a fault. The more probes you add to the faults inside the experiment, the more realistic the resilience score of the experiment will become. &lt;/p&gt;

&lt;p&gt;The Resilience score of a chaos experiment = The percentage of successful resilience probes in the experiment. &lt;/p&gt;
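&lt;p&gt;For example, if an experiment attaches twelve probes across its faults and nine of them report success, the score works out as follows (the probe counts are hypothetical):&lt;/p&gt;

```shell
# 9 of the 12 resilience probes passed during the experiment run
passed=9
total=12

score=$(( passed * 100 / total ))
echo "Resilience score: ${score}%"   # prints: Resilience score: 75%
```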

&lt;h5&gt;
  
  
  Construction of a Chaos Experiment
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kFVXcLaY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/65647a503b6d0722050208b1_Diagram-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kFVXcLaY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/65647a503b6d0722050208b1_Diagram-1.png" alt="" width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;Chaos Fault Landscape in Harness Chaos Engineering&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pxr19qxS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/656461336f3be10641e5776b_k8s100.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pxr19qxS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/656461336f3be10641e5776b_k8s100.png" alt="" width="103" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Faults for Kubernetes Resources&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Runs via the Kubernetes Chaos Infrastructure or Agent&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supported Faults:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All Pod-related faults, Node faults, HTTP faults, IO/database chaos, network faults, and load chaos. These faults are certified for managed cloud Kubernetes services such as EKS, AKS, and GKE, as well as for on-prem distributions such as Red Hat OpenShift, SUSE Rancher, and VMware Tanzu. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_2WDvzvJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/6564935b7ada5dccdaef53b0_vmware-logo%25201.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_2WDvzvJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/6564935b7ada5dccdaef53b0_vmware-logo%25201.png" alt="" width="100" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Faults for VMware Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Runs via the Kubernetes Chaos Infrastructure or Agent&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supported Faults:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Chaos faults are injected either through the vCenter APIs or through VMware Tools directly on the operating system running inside the VM. Some faults, such as VM power off, VM disk detach, and VM host reboot, are performed at the vCenter level. Most of the common faults, covering CPU/memory/IO/disk stress, HTTP, DNS, and network chaos, are injected into the guest operating system through VMware Tools. All the common faults are supported for VMs running Linux or Windows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BdJoMhKp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/65649397afdf081535406de0_linux%25201.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BdJoMhKp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/65649397afdf081535406de0_linux%25201.png" alt="" width="85" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Faults for Linux Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Runs via the dedicated Linux Chaos Infrastructure or Agent&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supported Faults:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All faults related to resource stress, network, and process. DNS error and spoof, time chaos, and disk faults are also supported. With the SSH fault, network switches can also be targeted.&lt;/p&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dWxOR7Me--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/656493eb92bb635c59505b8c_windows%25201.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dWxOR7Me--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/656493eb92bb635c59505b8c_windows%25201.png" alt="" width="100" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Faults for Windows Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Runs via the Kubernetes Chaos Infrastructure or Agent&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supported Faults:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All faults related to resource stress, network and process. Time Chaos and Disk fill are also supported. These are supported for Windows instances that are running on Azure, VMware and AWS.&lt;/p&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L0UCuNCg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/6564941beef870b4d8643870_aws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L0UCuNCg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/6564941beef870b4d8643870_aws.png" alt="" width="167" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Faults for AWS Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Runs via the Kubernetes Chaos Infrastructure or Agent&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supported Faults:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Deep coverage of chaos faults for EBS, EC2, ECS, and Lambda; AZ-down faults for NLB, ALB, and CLB; and some faults for RDS. All Kubernetes faults are supported for EKS on AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S4JaNFvX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/65649477034c7058303af959_Google-Cloud-Emblem%25201.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S4JaNFvX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/65649477034c7058303af959_Google-Cloud-Emblem%25201.png" alt="" width="178" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Faults for GCP Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Runs via the Kubernetes Chaos Infrastructure or Agent&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supported Faults:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Faults for GCP VM disks and instances, plus all Kubernetes faults for GKE.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dchmwwa8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/656494a7034c7058303b33ee_azure%25201.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dchmwwa8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/656494a7034c7058303b33ee_azure%25201.png" alt="" width="100" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Faults for Azure Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Runs via the Kubernetes Chaos Infrastructure or Agent&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supported Faults:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Faults for Azure VM disks, instances, and web apps, plus all Kubernetes faults for AKS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4Po9F--y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/656495416409d78705c4b45e_pcf%25201.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4Po9F--y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/656495416409d78705c4b45e_pcf%25201.png" alt="" width="100" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Faults for Cloud Foundry Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Runs via the dedicated Linux Chaos Infrastructure or Agent&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supported Faults:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Support covers Pivotal Cloud Foundry as well as other Cloud Foundry distributions. Faults for Cloud Foundry apps, such as deleting the app, removing routes to the app, stopping the app, and unbinding a service from the app, are supported.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w5Jl8J7M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/656495a62063cb77507e7e60_springboot%25201.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w5Jl8J7M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/656495a62063cb77507e7e60_springboot%25201.png" alt="" width="100" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Faults for Spring Boot Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Runs via the Kubernetes Chaos Infrastructure or Agent&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Supported Faults:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Chaos faults for Spring Boot apps: app kill, CPU stress, memory stress, latency, and exceptions, plus any Chaos Monkey fault via a wrapper.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Harness Chaos Engineering supports a wide variety of chaos faults spanning operating systems, cloud platforms, and Kubernetes. These faults enable end users to verify the resilience of the code being deployed to target systems, and of the systems serving business-critical applications. Check out all the Harness Chaos Faults on the &lt;a href="https://developer.harness.io/docs/chaos-engineering/technical-reference/chaos-faults/"&gt;Harness Developer Hub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.harness.io/auth/#/signup?module=chaos&amp;amp;utm_source=harness_io&amp;amp;utm_medium=cta&amp;amp;utm_campaign=ce&amp;amp;utm_content=hero" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Jvy0tcrZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/6564999d6b66127278d45303_HCE%2520200px.png" alt="Harness Chaos Engineering" width="383" height="200"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;a href="https://app.harness.io/auth/#/signup?module=chaos&amp;amp;utm_source=harness_io&amp;amp;utm_medium=cta&amp;amp;utm_campaign=ce&amp;amp;utm_content=hero"&gt;Sign up FREE&lt;/a&gt; to experience the ease of resilience verification using chaos experiments. Harness provides a free plan that lets you run a few chaos experiments free of charge for an unlimited time.&lt;/p&gt;


&lt;p&gt;Check out the original blog &lt;a href="https://www.harness.io/blog/harness-chaos-engineering-landscape"&gt;here&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
      <category>chaosengineering</category>
    </item>
    <item>
      <title>Start or Expand Your DevSecOps Education Journey - Introducing STO Developer Certification</title>
      <dc:creator>Francois Akiki</dc:creator>
      <pubDate>Tue, 28 Nov 2023 01:44:17 +0000</pubDate>
      <link>https://forem.com/harness/start-or-expand-your-devsecops-education-journey-introducing-sto-developer-certification-2b8g</link>
      <guid>https://forem.com/harness/start-or-expand-your-devsecops-education-journey-introducing-sto-developer-certification-2b8g</guid>
      <description>&lt;p&gt;As systems grow more distributed, complex, and critical to our daily lives the fog of development starts to set in; no one person has complete end to end visibility into the entire workings of a system. To combat the fog of development, dissemination of expertise across your pipelines is crucial. Security in an application and infrastructure context is a good model of this with the &lt;a href="https://www.harness.io/blog/devsecops-strategies-secure-applications"&gt;DevSecOps&lt;/a&gt; movement. &lt;/p&gt;

&lt;p&gt;Software development teams are increasingly tasked with shift-left requirements to produce hygienic software. In the real world, software ages like milk, not like wine, so what is hygienic today might not be hygienic tomorrow. Your CI/CD pipelines are conduits of change and are excellent spots to disseminate expertise and enforce compliance and security standards. &lt;/p&gt;

&lt;p&gt;Harness’s &lt;a href="https://www.harness.io/products/security-testing-orchestration"&gt;Security Testing Orchestration (STO) module&lt;/a&gt; is purpose-built for your pipelines, orchestrating and prioritizing results from a multitude of scanning tools. Most organizations have more than one scanning tool, because tools can be granular or vertical in focus around a few pillars such as intent, language, and distribution. Given the complexity of modern systems, everyone involved in developing them should take a stake in helping secure them. &lt;/p&gt;

&lt;h2&gt;
  
  
  Security, Everyone’s Responsibility
&lt;/h2&gt;

&lt;p&gt;At Harness, we view security as an important skill to have, so we are offering our STO Developer Certification for free, enabling everyone to up-level their DevSecOps skills. Taking a &lt;a href="https://developer.harness.io/certifications/sto?lvl=developer"&gt;look at our study guide&lt;/a&gt; will provide a great foundation in application vulnerability management. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4kFpuZSX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/6564babd68bf98666efe40d1_AthYlz6X34_eoD04EfwaM13Gu87vo0LT8mgzkH0IeITSoXykwScJFcVlE9xrv1Ra0DBf0_TRpgCM_e5rZfCiCFEgBa49UKcQaBOoMd5mGcAM6mcVMzPzT9ay7Mo9qfYMlRNSfTofoljzvxowYWPssWI.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4kFpuZSX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://assets-global.website-files.com/622642781cd7e96ac1f66807/6564babd68bf98666efe40d1_AthYlz6X34_eoD04EfwaM13Gu87vo0LT8mgzkH0IeITSoXykwScJFcVlE9xrv1Ra0DBf0_TRpgCM_e5rZfCiCFEgBa49UKcQaBOoMd5mGcAM6mcVMzPzT9ay7Mo9qfYMlRNSfTofoljzvxowYWPssWI.png" alt="STO Dev Cert Study Guide" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;STO Dev Cert Study Guide&lt;/p&gt;

&lt;p&gt;Because modern software contains so many components, keeping up with the bill of materials and how those components age can be tricky. Harness STO can help you identify and prioritize the issues that do need to be addressed. Having Harness STO as part of your pipeline is prudent, and being certified in Harness STO is a great skill to have.&lt;/p&gt;

&lt;h2&gt;
  
  
  Study and Sign Up Today
&lt;/h2&gt;

&lt;p&gt;Getting certified at the developer level on Harness Security Testing Orchestration is a great milestone in your DevSecOps journey. Register for the exam on the &lt;a href="https://developer.harness.io/certifications/sto?lvl=developer"&gt;Harness Developer Hub&lt;/a&gt; once you feel ready to take it. &lt;/p&gt;

&lt;p&gt;-Harness Product Education Engineering Team&lt;/p&gt;


&lt;p&gt;Check out the original blog &lt;a href="https://www.harness.io/blog/introducing-sto-developer-certification"&gt;here&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>devsecops</category>
    </item>
    <item>
      <title>How to use Self-Service Onboarding in Harness Internal Developer Portal</title>
      <dc:creator>Debabrata Panigrahi</dc:creator>
      <pubDate>Fri, 17 Nov 2023 16:57:36 +0000</pubDate>
      <link>https://forem.com/harness/how-to-use-self-service-onboarding-in-harness-internal-developer-portal-142f</link>
      <guid>https://forem.com/harness/how-to-use-self-service-onboarding-in-harness-internal-developer-portal-142f</guid>
      <description>&lt;p&gt;In this tutorial, &lt;a href="https://www.linkedin.com/in/debanitr/"&gt;Debabrata&lt;/a&gt; a Developer Relations Engineer at Harness, dives into the essentials of creating a basic service onboarding pipeline within the &lt;a href="https://www.harness.io/products/internal-developer-portal"&gt;Harness Internal Developer Portal (IDP)&lt;/a&gt;, that runs on Backstage v1.17. This feature is a game-changer for platform engineers and developers looking to streamline their application development processes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What You'll Learn:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;🛠️ Setting Up the Pipeline:&lt;/strong&gt; We start by guiding you through the process of creating a Harness pipeline for service onboarding, including the creation of Build or Custom stages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;📝 Using Software Templates:&lt;/strong&gt; Learn how to interact with software templates to collect user requirements efficiently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🔄 Automating Service Onboarding:&lt;/strong&gt; Discover how the Harness pipeline automates the onboarding of new services, from fetching skeleton code to creating new repositories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🐍 Scripting with Cookiecutter:&lt;/strong&gt; We'll show you how to use a Python CLI, cookiecutter, to generate a basic Next.js app and set up a repository.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🌐 Managing Variables and Authentication:&lt;/strong&gt; Understand how to manage pipeline variables and authenticate requests within the pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🎨 Creating a Software Template Definition:&lt;/strong&gt; We guide you through creating a template.yaml file in IDP, powered by Backstage Software Template.&lt;/li&gt;
&lt;/ol&gt;
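&lt;p&gt;At its core, the cookiecutter step above boils down to substituting user-supplied variables into a skeleton project. Here is a minimal Python sketch of that idea; the file names and variables are hypothetical, and the real pipeline invokes the cookiecutter CLI against a skeleton repository rather than this hand-rolled substitution:&lt;/p&gt;

```python
# Minimal sketch of the templating a service-onboarding step performs.
# The skeleton files and variable names below are made up for illustration.
SKELETON = {
    "README.md": "# {service_name}\n\nOwned by {owner}.\n",
    "catalog-info.yaml.tmpl": "metadata:\n  name: {service_name}\n",
}

def render_skeleton(variables):
    """Substitute user-supplied variables into each skeleton file,
    dropping the .tmpl suffix from rendered file names."""
    return {path.removesuffix(".tmpl"): body.format(**variables)
            for path, body in SKELETON.items()}

files = render_skeleton({"service_name": "demo-nextjs-app", "owner": "platform-team"})
print(files["README.md"].splitlines()[0])  # -> # demo-nextjs-app
```

&lt;p&gt;In the actual pipeline, the rendered files would then be pushed to a newly created repository, which is what the "fetching skeleton code to creating new repositories" step refers to.&lt;/p&gt;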

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/0GoK3SD1rxs"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>idp</category>
      <category>backstage</category>
      <category>platformengineering</category>
      <category>productivity</category>
    </item>
    <item>
      <title>From Zero to Kubernetes Deployment: Harness Continuous Delivery in Action</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Fri, 17 Nov 2023 14:45:48 +0000</pubDate>
      <link>https://forem.com/harness/from-zero-to-kubernetes-deployment-harness-continuous-delivery-in-action-1332</link>
      <guid>https://forem.com/harness/from-zero-to-kubernetes-deployment-harness-continuous-delivery-in-action-1332</guid>
      <description>&lt;p&gt;Harness Continuous Delivery pipelines enable you to orchestrate and automate your deployment workflows, allowing you to push updated application images to your target Kubernetes cluster seamlessly.&lt;/p&gt;

&lt;p&gt;In this video, &lt;a href="https://www.linkedin.com/in/diahmed/"&gt;Dewan Ahmed&lt;/a&gt;, a Developer Advocate at Harness, demonstrates how to install a Harness delegate and create entities such as Harness secrets, connectors, environments, services, and pipelines. Dewan guides you through a successful pipeline execution with a manual trigger and then shows how to create a pipeline variable to configure the Kubernetes namespace to be provided during pipeline execution.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/irDr4JlbmLY"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>harness</category>
      <category>cicd</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
