<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Dewan Ahmed</title>
    <description>The latest articles on Forem by Dewan Ahmed (@dewanahmed).</description>
    <link>https://forem.com/dewanahmed</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F273375%2F27b0d295-e663-4718-b5bf-746cac3cf585.png</url>
      <title>Forem: Dewan Ahmed</title>
      <link>https://forem.com/dewanahmed</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/dewanahmed"/>
    <language>en</language>
    <item>
      <title>The Principal Developer Advocate Paradox: Scaling Impact Without Burnout</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Thu, 13 Mar 2025 14:21:21 +0000</pubDate>
      <link>https://forem.com/dewanahmed/the-principal-developer-advocate-paradox-scaling-impact-without-burnout-3pg7</link>
      <guid>https://forem.com/dewanahmed/the-principal-developer-advocate-paradox-scaling-impact-without-burnout-3pg7</guid>
      <description>&lt;p&gt;As a Principal Developer Advocate, you don’t own the roadmap, control the pipeline, set budgets, or dictate campaigns—yet your work influences all of them. Your impact isn’t measured by code written but by friction removed, momentum created, and voices amplified.  &lt;/p&gt;

&lt;p&gt;The challenge? The more effective you are, the more demand you generate. You risk becoming a bottleneck, pulled into every meeting, every decision, every request. But true success in this role isn’t about being everywhere—it’s about creating &lt;strong&gt;force multipliers&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;Let’s break down the hidden challenges of this role—and how to navigate them without burning out.&lt;/p&gt;

&lt;h3&gt;
  
  
  Paradox of Belonging
&lt;/h3&gt;

&lt;p&gt;You are connected to every team—engineering, product, marketing, community—but you don’t &lt;em&gt;belong&lt;/em&gt; to any single one. The role can be surprisingly isolating; you're expected to bridge gaps, yet you often lack a dedicated home base. Finding circles of trust—peer advocates, engineers who value your input, and mentors—becomes essential to navigating this unique position.  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Freedom-Responsibility Paradox&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You have significant autonomy in what you choose to work on, but there’s an implicit expectation that your efforts drive measurable impact. Are you truly solving the right problems, or just doing what seems urgent? The solution is to create an &lt;strong&gt;impact framework&lt;/strong&gt;—a structured way to assess whether your advocacy, content, and engagement are making a difference. Freedom in this role isn’t about working on what excites you; it’s about owning the mission of amplifying developer success in the highest-leverage way.  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Bandwidth Challenges: From Social Resource to Force Multiplier&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;It’s easy to become the "go-to person"—the one in every meeting, the voice in every strategy discussion, the person responding to every community question. This leads to burnout, endless context switching, and a diluted ability to create &lt;em&gt;scalable&lt;/em&gt; impact. The trick is to shift from being a &lt;strong&gt;reactive resource&lt;/strong&gt; to a &lt;strong&gt;strategic force multiplier&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing docs that answer recurring questions so teams can self-serve instead of always coming to you.
&lt;/li&gt;
&lt;li&gt;Creating frameworks that enable &lt;em&gt;others&lt;/em&gt; to advocate, instead of being the single point of engagement.
&lt;/li&gt;
&lt;li&gt;Focusing on content, programs, and initiatives that scale beyond your personal involvement.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Being Truly Present (Even When Writing Docs)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You’re in a meeting, constantly context-switching, your mind racing to the next three tasks, while also drafting a response to a community Slack thread. Developer Advocates juggle multiple domains—public speaking, content creation, product feedback, and internal alignment—which makes it tempting to be &lt;em&gt;everywhere at once&lt;/em&gt;. The reality? &lt;em&gt;Presence matters&lt;/em&gt;.  &lt;/p&gt;

&lt;p&gt;✔ When writing content, resist the urge to multitask. Well-written content scales your knowledge far beyond what any single conversation can.&lt;/p&gt;

&lt;p&gt;✔ When engaging with teams, be fully present. Not every meeting needs you, but when you’re in the right ones, your focus is your biggest asset.&lt;/p&gt;

&lt;p&gt;✔ Protect your deep work time—whether that’s for writing, content creation, or strategic thinking.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Perfection Trap&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In engineering, we optimize for correctness. In advocacy, speed and iteration often win. It’s tempting to overanalyze, perfect every talk, refine every article—only to realize that the audience has already moved on. Great Developer Advocacy means accepting that &lt;strong&gt;good content now&lt;/strong&gt; is better than &lt;em&gt;perfect&lt;/em&gt; content later. It’s about learning in public, iterating fast, and engaging before the moment passes.  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Authority Paradox&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Contrary to perception, Principal Developer Advocates have &lt;strong&gt;influence, not authority&lt;/strong&gt;. You’re expected to drive cross-functional initiatives—impacting engineering roadmaps, developer experience, and community growth—but you don’t control teams, priorities, or budgets. You can’t mandate change. Instead, your success depends on &lt;strong&gt;persuasion, trust, and credibility&lt;/strong&gt;. The best Developer Advocates don’t push people to do things; they make others &lt;em&gt;want&lt;/em&gt; to take action through clarity, vision, and relentless execution.  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Power of Saying "No"&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;With endless requests for your time, saying “yes” to everything means saying “no” to focused, strategic work. The ability to say “no” isn’t about shutting down collaboration—it’s about protecting your ability to make an impact.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If a request isn’t aligned with your mission, redirect it to scalable solutions (docs, videos, workshops).
&lt;/li&gt;
&lt;li&gt;Just because you &lt;em&gt;can&lt;/em&gt; do something doesn’t mean you &lt;em&gt;should&lt;/em&gt;—guard your time for high-leverage work.
&lt;/li&gt;
&lt;li&gt;The best Developer Advocates create &lt;strong&gt;systems&lt;/strong&gt; that help others succeed &lt;em&gt;without needing them in the loop every time.&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Time to Ship
&lt;/h3&gt;

&lt;p&gt;Being a Principal Developer Advocate is a delicate balance between influence and responsibility. You don't control the tools or the teams, but your impact reverberates through every part of the organization. To thrive in this role, you must understand how to amplify your efforts without being stretched too thin.  &lt;/p&gt;

&lt;p&gt;This isn’t just about showing up—it's about showing up where it matters. By focusing on scalable strategies, managing your bandwidth, and creating systems that empower others, you can truly maximize your impact. And when you embrace the power of saying "no," you protect the precious time that allows you to do your best work, at the highest level.&lt;/p&gt;

&lt;p&gt;The role might be paradoxical, but at its heart lies the opportunity to shape the future of developer experience, content creation, and community engagement—&lt;em&gt;without burning out.&lt;/em&gt; The key is mastering the art of leverage and focus to become the force multiplier that everyone needs.&lt;/p&gt;




&lt;p&gt;This blog was inspired by a &lt;a href="https://www.linkedin.com/posts/bhavik-kothari-5768b42a_some-obvious-and-not-so-obvious-challenges-activity-7303872281674465281-s5Mc" rel="noopener noreferrer"&gt;LinkedIn post from Bhavik Kothari, Principal Engineer at Amazon&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devrel</category>
      <category>community</category>
    </item>
    <item>
      <title>Speed Up Your CI Pipelines with Docker Layer Caching</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Fri, 24 Jan 2025 22:21:13 +0000</pubDate>
      <link>https://forem.com/dewanahmed/speed-up-your-ci-pipelines-with-docker-layer-caching-2lmg</link>
      <guid>https://forem.com/dewanahmed/speed-up-your-ci-pipelines-with-docker-layer-caching-2lmg</guid>
      <description>&lt;p&gt;In this video, Harness Principal Developer Advocate &lt;a class="mentioned-user" href="https://dev.to/dewanahmed"&gt;@dewanahmed&lt;/a&gt;  demonstrates how Docker Layer Caching (DLC) in Harness CI can speed up your pipelines by 8X. You'll learn how DLC optimizes builds by reusing unchanged image layers, reducing redundant processing, and cutting down build times.&lt;/p&gt;

&lt;p&gt;Watch as Dewan compares build and push performance between Harness CI and GitHub Actions, showcasing significant time savings and improved efficiency.&lt;/p&gt;

&lt;p&gt;To learn more, check out &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/caching-ci-data/docker-layer-caching/" rel="noopener noreferrer"&gt;Harness CI docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Read the &lt;a href="https://www.harness.io/blog/speed-up-ci-pipelines-8x" rel="noopener noreferrer"&gt;blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>ci</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>Speed Up Your CI Pipelines with Docker Layer Caching</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Thu, 23 Jan 2025 16:03:21 +0000</pubDate>
      <link>https://forem.com/harness/speed-up-your-ci-pipelines-with-docker-layer-caching-4ffe</link>
      <guid>https://forem.com/harness/speed-up-your-ci-pipelines-with-docker-layer-caching-4ffe</guid>
      <description>&lt;p&gt;In modern software development, speed and efficiency are paramount. Long build times can slow down releases and hinder productivity. Docker layer caching is a powerful technique that helps optimize builds by reusing previously created image layers, reducing redundant processing. In this blog, we'll explore how Harness CI features &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/caching-ci-data/docker-layer-caching" rel="noopener noreferrer"&gt;Docker Layer Caching (DLC)&lt;/a&gt; to enhance build performance and streamline your CI/CD pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  DLC and Multi-stage Builds
&lt;/h2&gt;

&lt;p&gt;Every instruction in a Dockerfile creates a layer in the final image. Docker caches these layers to avoid rebuilding them unnecessarily, which can save significant time and reduce infrastructure costs. However, when a layer changes (e.g., modifying a file copied with COPY), Docker invalidates the cache for that layer and all subsequent layers, requiring them to be rebuilt. Understanding and optimizing layer usage helps in writing more efficient Dockerfiles, achieving faster build times, and lowering compute costs.&lt;/p&gt;

&lt;p&gt;A multi-stage Dockerfile allows you to use multiple FROM statements to break the build process into stages. This helps keep the final image lightweight by copying only the necessary files from one stage to another, discarding anything unnecessary. It speeds up builds by leveraging layer caching, reducing the need to re-run expensive steps. Plus, it enhances security by minimizing the final image's attack surface and keeps the Dockerfile organized by separating concerns.&lt;/p&gt;
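&lt;p&gt;To make these ideas concrete, here is a minimal sketch of a cache-friendly, multi-stage Dockerfile for a Go service (an illustrative example, not taken from any particular repository). Copying &lt;code&gt;go.mod&lt;/code&gt; and &lt;code&gt;go.sum&lt;/code&gt; before the rest of the source keeps the expensive dependency-download layer cached until the dependencies themselves change:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build stage: dependency layers stay cached until go.mod/go.sum change
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Final stage: only the compiled binary is carried forward
FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;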

&lt;h2&gt;
  
  
  Harness CI Intelligence: Docker Layer Caching
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://developer.harness.io/docs/continuous-integration/get-started/harness-ci-intelligence" rel="noopener noreferrer"&gt;Harness CI Intelligence&lt;/a&gt; optimizes Docker builds by leveraging Docker Layer Caching (DLC) to reuse unchanged image layers, significantly reducing build times and resource costs. When enabled, DLC restores previously built layers, avoiding redundant processing and speeding up the build and push process. Harness CI supports DLC across both Harness Cloud and self-managed infrastructure, providing flexibility in managing cache storage. This intelligent caching mechanism enhances CI/CD efficiency by minimizing infrastructure usage and improving developer productivity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v3swfx21l5jv5lb0t1n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v3swfx21l5jv5lb0t1n.png" alt="Harness CI Intelligence Overview" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmarking Build and Push to Docker
&lt;/h2&gt;

&lt;p&gt;Check out the following video for a demo on running a build and push to Docker step for a Go repository using GitHub Actions and Harness CI, with Harness CI achieving an 8X improvement in build times.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/SOZxl761MCI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The following chart summarizes the performance comparison of this benchmark (Harness CI with DLC vs. GitHub Actions):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9thaiyhqhfuwn5wqtrmp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9thaiyhqhfuwn5wqtrmp.png" alt="Benchmark: Harness CI vs. GitHub Actions" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Implementing Docker Layer Caching in your CI/CD pipelines can lead to significant improvements in build performance, cost savings, and overall development efficiency. By reusing unchanged layers and minimizing redundant processing, Harness CI helps teams accelerate their workflows while optimizing infrastructure usage. Whether you're running builds in Harness Cloud or a self-managed environment, enabling DLC ensures faster feedback loops and a smoother development experience. Start leveraging Docker Layer Caching today to speed up your CI pipelines and focus on delivering value faster.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>caching</category>
      <category>harness</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>End-to-end MLOps CI/CD pipeline with Harness and AWS</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Wed, 01 May 2024 13:29:10 +0000</pubDate>
      <link>https://forem.com/harness/end-to-end-mlops-cicd-pipeline-with-harness-and-aws-4084</link>
      <guid>https://forem.com/harness/end-to-end-mlops-cicd-pipeline-with-harness-and-aws-4084</guid>
      <description>&lt;p&gt;MLOps tackles the complexities of building, testing, deploying, and monitoring machine learning models in real-world environments.&lt;/p&gt;

&lt;p&gt;Integrating machine learning into the traditional software development lifecycle poses unique challenges due to the intricacies of data, model versioning, scalability, and ongoing monitoring.&lt;/p&gt;

&lt;p&gt;In this tutorial, you'll create an end-to-end MLOps CI/CD pipeline that will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build and push an ML model to AWS ECR.&lt;/li&gt;
&lt;li&gt;Run security scans and tests.&lt;/li&gt;
&lt;li&gt;Deploy the model to AWS Lambda.&lt;/li&gt;
&lt;li&gt;Add policy enforcement and monitoring for the model.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The story
&lt;/h2&gt;

&lt;p&gt;This tutorial uses a fictional bank called &lt;em&gt;Harness Bank&lt;/em&gt;. Assume that this fictional bank recently launched a website where clients can apply for a credit card. Based on the information provided in the form, the customer's application is approved or denied in seconds. This online credit card application is powered by a machine learning (ML) model trained on data that makes the decision accurate and unbiased.&lt;/p&gt;

&lt;p&gt;Assume that the current process to update this hypothetical ML model is manual. A data scientist builds a new image locally, runs tests, and manually ensures that the model passes the required threshold for accuracy and fairness.&lt;/p&gt;

&lt;p&gt;In this tutorial, you'll automate the model maintenance process and increase the build and delivery velocity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Design and architecture
&lt;/h3&gt;

&lt;p&gt;Before diving into the implementation, review the MLOps architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jgr5nwdx9ujfte6rqss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jgr5nwdx9ujfte6rqss.png" alt="Architecture Diagram" width="800" height="45"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this tutorial, assume you are given a Python data science project, and you are requested to do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build and push an image for this project.&lt;/li&gt;
&lt;li&gt;Run security scans on the container image.&lt;/li&gt;
&lt;li&gt;Upload model visualization data to S3.&lt;/li&gt;
&lt;li&gt;Publish model visualization data within the pipeline.&lt;/li&gt;
&lt;li&gt;Run tests on the model to determine its accuracy and fairness scores.&lt;/li&gt;
&lt;li&gt;Based on those scores, use Open Policy Agent (OPA) policies to either approve or deny the model.&lt;/li&gt;
&lt;li&gt;Deploy the model.&lt;/li&gt;
&lt;li&gt;Monitor the model and ensure the model is not outdated.&lt;/li&gt;
&lt;li&gt;Trigger the pipeline based on certain git events.&lt;/li&gt;
&lt;li&gt;(Optional) Add approval gates for production deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this tutorial, assume that the data is already processed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;This tutorial requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Harness account with access to the Continuous Integration, Continuous Delivery, and Security Testing Orchestration modules. If you are new to Harness, &lt;a href="https://app.harness.io/auth/#/signup/?utm_campaign=cicd-devrel"&gt;you can sign up for free&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;An AWS account, credentials, and a Harness AWS connector.&lt;/li&gt;
&lt;li&gt;A GitHub account, credentials, and a Harness GitHub connector.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prepare AWS
&lt;/h3&gt;

&lt;p&gt;You need an AWS account with sufficient permissions to create/modify/view resources used in this tutorial.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Prepare AWS credentials.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This tutorial requires two sets of AWS credentials. One set is for a &lt;a href="https://developer.harness.io/docs/platform/connectors/cloud-providers/add-aws-connector"&gt;Harness AWS connector&lt;/a&gt;, and the other is for the &lt;a href="https://developer.harness.io/docs/security-testing-orchestration/sto-techref-category/aws-ecr-scanner-reference"&gt;AWS ECR scanner for STO&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can use an AWS Vault plugin to generate AWS credentials for the AWS connector, and you can use the AWS console to generate the AWS Access Key ID, AWS Secret Access Key, and AWS Session Token, which are valid for a shorter time.&lt;/p&gt;
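&lt;p&gt;For example, if you prefer the CLI to the console for the short-lived set, one option is STS (a sketch; the profile name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Issue temporary credentials valid for one hour (profile name is an example)
aws sts get-session-token --duration-seconds 3600 --profile my-admin

# The response includes AccessKeyId, SecretAccessKey, and SessionToken, which
# map to the aws_access_key_id, aws_secret_access_key, and aws_session_token
# secrets you'll create in Harness later in this tutorial.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;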

&lt;p&gt;&lt;strong&gt;Save these credentials securely and make a note of your AWS account ID and AWS region.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ NOTE &lt;/p&gt;

&lt;p&gt;If you are using a personal, non-production AWS account for this tutorial, you can initially grant admin access for these credentials. Once the demo works, reduce access to adhere to the principle of least privilege.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create ECR repos. From your AWS console, navigate to Elastic Container Registry (ECR) and create two private repositories named &lt;code&gt;ccapproval&lt;/code&gt; and &lt;code&gt;ccapproval-deploy&lt;/code&gt;. Under &lt;strong&gt;Image scan settings&lt;/strong&gt;, enable &lt;strong&gt;Scan on Push&lt;/strong&gt; for both repositories.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an S3 bucket. Navigate to S3 and create a bucket named something like &lt;code&gt;mlopswebapp&lt;/code&gt;. You'll use this bucket to host a static website for the credit card approval application demo, along with a few other artifacts.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Make sure all options under &lt;strong&gt;Block public access (bucket settings)&lt;/strong&gt; are unchecked, and then apply the following bucket policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": "*",
               "Action": "s3:GetObject",
               "Resource": "arn:aws:s3:::YOUR_S3_BUCKET_NAME/*"
           }
       ]
   }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
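&lt;p&gt;If you'd rather script the bucket setup, the equivalent AWS CLI calls might look like the following sketch (using the example bucket name; save the policy above as &lt;code&gt;policy.json&lt;/code&gt; with your bucket name substituted):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Lift the bucket-level public access guards
aws s3api put-public-access-block --bucket mlopswebapp \
    --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false

# Apply the read-only bucket policy from above
aws s3api put-bucket-policy --bucket mlopswebapp --policy file://policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;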



&lt;p&gt;After making the bucket public, your bucket page should show a &lt;code&gt;Publicly accessible&lt;/code&gt; flag.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjt3rp83u0uou448h219.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjt3rp83u0uou448h219.png" alt="S3 bucket is public" width="688" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;From your AWS console, go to &lt;strong&gt;AWS Lambda&lt;/strong&gt;, select &lt;strong&gt;Functions&lt;/strong&gt;, and create a function from a container image using the following configuration:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Name: &lt;code&gt;creditcardapplicationlambda&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Container image URI: Select &lt;strong&gt;Browse images&lt;/strong&gt; and find the &lt;code&gt;ccapproval-deploy&lt;/code&gt; image. You can choose any image tag.&lt;/li&gt;
&lt;li&gt;Architecture: &lt;code&gt;x86_64&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;From Advanced: Select &lt;strong&gt;Enable function URL&lt;/strong&gt; to make the function URL public. Anyone with the URL can access your function. For more information, go to the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html"&gt;AWS documentation on Lambda function URLs&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Create function&lt;/strong&gt; to create the function. You'll notice an info banner confirming that the function URL is public.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28pegemyg7faw1xaclkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28pegemyg7faw1xaclkl.png" alt="Lambda Function URL" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Prepare GitHub
&lt;/h3&gt;

&lt;p&gt;This tutorial uses a GitHub account for source control management.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fork the &lt;a href="https://github.com/harness-community/mlops-creditcard-approval-model"&gt;MLops sample app repository&lt;/a&gt; into your GitHub account.&lt;/li&gt;
&lt;li&gt;Create a &lt;a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens"&gt;GitHub personal access token&lt;/a&gt; with the following permissions on your forked repository:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;repo/content: read+write&lt;/li&gt;
&lt;li&gt;repo/pull requests: read&lt;/li&gt;
&lt;li&gt;repo/webhooks: read+write&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Create Harness secrets
&lt;/h3&gt;

&lt;p&gt;Store your GitHub and AWS credentials as secrets in Harness.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In your Harness account, create or select a &lt;a href="https://developer.harness.io/docs/platform/organizations-and-projects/projects-and-organizations"&gt;project&lt;/a&gt; to use for this tutorial.&lt;/li&gt;
&lt;li&gt;In your project settings, select &lt;strong&gt;Secrets&lt;/strong&gt;, select &lt;strong&gt;New Secret&lt;/strong&gt;, and then select &lt;strong&gt;Text&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Create the following &lt;a href="https://developer.harness.io/docs/platform/secrets/add-use-text-secrets"&gt;Harness text secrets&lt;/a&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;git_pat&lt;/code&gt; - GitHub personal access token&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aws_access_key_id&lt;/code&gt; - Generated from AWS console&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aws_secret_access_key&lt;/code&gt; - Generated from AWS console&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aws_session_token&lt;/code&gt; - Generated from AWS console&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aws_vault_secret&lt;/code&gt; - Secret access key generated by Vault plugin&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Make sure the &lt;strong&gt;Name&lt;/strong&gt; and &lt;strong&gt;ID&lt;/strong&gt; match for each secret, because you reference secrets by their IDs in Harness pipelines.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create AWS and GitHub connectors
&lt;/h3&gt;

&lt;p&gt;Create Harness connectors to connect to your AWS and GitHub accounts.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In your Harness project settings, go to &lt;strong&gt;Connectors&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;New Connector&lt;/strong&gt;, select the &lt;strong&gt;AWS&lt;/strong&gt; connector, and then create an &lt;a href="https://developer.harness.io/docs/platform/connectors/cloud-providers/add-aws-connector"&gt;AWS connector&lt;/a&gt; with the following configuration:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Name: &lt;code&gt;mlopsawsconnector&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Access Key: AWS Vault plugin generated&lt;/li&gt;
&lt;li&gt;Secret Key: Use your &lt;code&gt;aws_vault_secret&lt;/code&gt; secret&lt;/li&gt;
&lt;li&gt;Connectivity Mode: Connect through Harness Platform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Leave all other settings as is, and make sure the connection test passes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyexcf60gf379vupzg2o3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyexcf60gf379vupzg2o3.png" alt="Connector Connectivity Status" width="421" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create another connector. This time, select the &lt;a href="https://developer.harness.io/docs/platform/connectors/code-repositories/ref-source-repo-provider/git-hub-connector-settings-reference"&gt;GitHub connector&lt;/a&gt; and use the following configuration:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Name: &lt;code&gt;mlopsgithubconnector&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;URL Type: &lt;code&gt;Repository&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Connection Type: &lt;code&gt;HTTP&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;GitHub Repository URL: Enter the URL to your fork of the demo repo, such as &lt;code&gt;https://github.com/:gitHubUsername/mlops-creditcard-approval-model&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Username: Enter your GitHub username&lt;/li&gt;
&lt;li&gt;Personal Access Token: Use your &lt;code&gt;git_pat&lt;/code&gt; secret&lt;/li&gt;
&lt;li&gt;Connectivity Mode: Connect through Harness Platform&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create the Harness pipeline
&lt;/h2&gt;

&lt;p&gt;In Harness, you create pipelines to represent workflows. A pipeline can have multiple stages, and each stage can have multiple steps.&lt;/p&gt;
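&lt;p&gt;In YAML terms, the pipeline you're about to build has roughly this skeleton (a simplified sketch with illustrative identifiers; the sample pipeline linked below is the authoritative version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline:
  name: Credit Card Approval MLops
  identifier: Credit_Card_Approval_MLops
  stages:
    - stage:
        name: Train Model
        identifier: Train_Model
        type: CI
        spec:
          cloneCodebase: true
          execution:
            steps: []  # steps are added in the sections that follow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;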

&lt;ol&gt;
&lt;li&gt;In your Harness project, &lt;a href="///docs/continuous-integration/use-ci/prep-ci-pipeline-components.md"&gt;create a pipeline&lt;/a&gt; named &lt;code&gt;Credit Card Approval MLops&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Add a &lt;strong&gt;Build&lt;/strong&gt; stage named &lt;code&gt;Train Model&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Make sure &lt;strong&gt;Clone Codebase&lt;/strong&gt; is enabled.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Third-party Git provider&lt;/strong&gt;, and then select your &lt;code&gt;mlopsgithubconnector&lt;/code&gt; GitHub connector. The repository name should populate automatically.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Set Up Stage&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the following sections of this tutorial, you'll configure this stage to build and push the data science image, and you'll add more stages to the pipeline to meet the tutorial's objectives.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ NOTE&lt;/p&gt;

&lt;p&gt;You can find a &lt;a href="https://github.com/harness-community/mlops-creditcard-approval-model/blob/main/sample-mlops-pipeline.yaml"&gt;sample pipeline for this tutorial in the demo repo&lt;/a&gt;. If you use this pipeline, you must replace the placeholder and sample values accordingly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Build, push, and scan the image
&lt;/h3&gt;

&lt;p&gt;Configure your &lt;code&gt;Train Model&lt;/code&gt; stage to build and push the data science image and then retrieve the ECR scan results.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the &lt;strong&gt;Infrastructure&lt;/strong&gt; tab and configure the build infrastructure for the &lt;code&gt;Train Model&lt;/code&gt; stage:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Cloud&lt;/strong&gt; to use &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/set-up-build-infrastructure/use-harness-cloud-build-infrastructure"&gt;Harness Cloud build infrastructure&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Platform&lt;/strong&gt;, select &lt;strong&gt;Linux&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Architecture&lt;/strong&gt;, select &lt;strong&gt;AMD64&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select the &lt;strong&gt;Execution&lt;/strong&gt; tab to add steps to the stage.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Add Step&lt;/strong&gt;, select the &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/build-and-upload-artifacts/build-and-push/build-and-push-to-ecr-step-settings"&gt;Build and Push to ECR step&lt;/a&gt;, and configure the step as follows:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Harness Training&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;AWS Connector&lt;/strong&gt;, select your &lt;code&gt;mlopsawsconnector&lt;/code&gt; AWS connector.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Region&lt;/strong&gt;, enter your AWS region.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Account ID&lt;/strong&gt;, enter your AWS account ID.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Image Name&lt;/strong&gt;, enter &lt;code&gt;ccapproval&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Tags&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;. You can select the &lt;strong&gt;Input type&lt;/strong&gt; icon to change the input type to expression (&lt;strong&gt;f(x)&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Dockerfile&lt;/strong&gt; (under &lt;strong&gt;Optional Configuration&lt;/strong&gt;), enter &lt;code&gt;Dockerfile_Training_Testing&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Apply Changes&lt;/strong&gt; to save the step, and then select &lt;strong&gt;Save&lt;/strong&gt; to save the pipeline.&lt;/li&gt;
&lt;/ol&gt;
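&lt;p&gt;For reference, the step you just configured comes out roughly like this in pipeline YAML (a sketch with placeholder region, account ID, and identifiers; compare it against the sample pipeline in the demo repo):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- step:
    type: BuildAndPushECR
    name: Harness Training
    identifier: Harness_Training
    spec:
      connectorRef: mlopsawsconnector
      region: AWS_REGION
      account: "AWS_ACCOUNT_ID"
      imageName: ccapproval
      tags:
        - &amp;lt;+pipeline.executionId&amp;gt;
      dockerfile: Dockerfile_Training_Testing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;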

&lt;p&gt;Next, you'll add steps to your build stage to retrieve the results of the ECR repo security scan.&lt;/p&gt;

&lt;p&gt;Because scanning is enabled on your ECR repositories, each image pushed to the repo by the &lt;strong&gt;Build and Push to ECR&lt;/strong&gt; step is scanned for vulnerabilities. In order to successfully retrieve the scan results, your pipeline needs to wait for the scan to finish and then request the results.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Before adding the step to retrieve the scan result, use a &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/run-step-settings"&gt;Run step&lt;/a&gt; to add a 15-second wait to ensure that the scan is complete before the pipeline requests the scan results.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Add Step&lt;/strong&gt; after the &lt;code&gt;Harness Training&lt;/code&gt; step, and select the &lt;strong&gt;Run&lt;/strong&gt; step.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Wait for ECR Image Scan&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Command&lt;/strong&gt;, enter the following, and then select &lt;strong&gt;Apply Changes&lt;/strong&gt; to save the step.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ECR Image Scan In Progress..."&lt;/span&gt;
   &lt;span class="nb"&gt;sleep &lt;/span&gt;15
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
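&lt;p&gt;As an aside, if a fixed 15-second sleep ever proves too short, ECR ships a built-in waiter you could call from the same &lt;strong&gt;Run&lt;/strong&gt; step instead. This is a sketch that assumes the AWS CLI and credentials are available in the step, which the tutorial does not otherwise set up:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Block until the push-triggered scan finishes for this image tag
aws ecr wait image-scan-complete \
    --repository-name ccapproval \
    --image-id imageTag=&amp;lt;+pipeline.executionId&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;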



&lt;ol&gt;
&lt;li&gt;Add an &lt;a href="https://developer.harness.io/docs/security-testing-orchestration/sto-techref-category/aws-ecr-scanner-reference"&gt;AWS ECR Scan step&lt;/a&gt; to get the scan results.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Add Step&lt;/strong&gt; after the &lt;code&gt;Wait&lt;/code&gt; step, and select the &lt;strong&gt;AWS ECR Scan&lt;/strong&gt; step.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Security Scans for ML Model&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Target/Name&lt;/strong&gt;, enter &lt;code&gt;ccapproval-ecr-scan&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Variant&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;. You can select the &lt;strong&gt;Input type&lt;/strong&gt; icon to change the input type to expression (&lt;strong&gt;f(x)&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Container Image/Name&lt;/strong&gt;, enter &lt;code&gt;ccapproval&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Container Image/Tag&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Region&lt;/strong&gt;, enter your AWS region.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Authentication&lt;/strong&gt;, use &lt;a href="https://developer.harness.io/docs/platform/variables-and-expressions/harness-variables"&gt;Harness expressions&lt;/a&gt; referencing your AWS credential secrets:

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Access ID&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+secrets.getValue("aws_access_key_id")&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Access Token&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+secrets.getValue("aws_secret_access_key")&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Access Region&lt;/strong&gt;, enter your AWS region.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Log Level&lt;/strong&gt;, enter &lt;code&gt;Info&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Settings&lt;/strong&gt;, add the following key-value pair: &lt;code&gt;AWS_SESSION_TOKEN: &amp;lt;+secrets.getValue("aws_session_token")&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Apply Changes&lt;/strong&gt; to save the step, and then select &lt;strong&gt;Save&lt;/strong&gt; to save the pipeline.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At this point, you can run the pipeline to test the Build stage.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Run Pipeline&lt;/strong&gt; to test the Build stage. For &lt;strong&gt;Git Branch&lt;/strong&gt;, enter &lt;code&gt;main&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Wait while the pipeline runs, and then check your &lt;code&gt;ccapproval&lt;/code&gt; ECR repository for an image tagged with the pipeline execution ID. Select &lt;strong&gt;Copy URI&lt;/strong&gt; to copy the image URI; you'll need it in the next section.&lt;/li&gt;
&lt;li&gt;Make sure the image scan also ran. In the Harness &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/viewing-builds"&gt;Build details&lt;/a&gt;, you can find the scan results in the &lt;strong&gt;AWS ECR Scan&lt;/strong&gt; step logs. For example:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   Scan Results: &lt;span class="o"&gt;{&lt;/span&gt;
       &lt;span class="s2"&gt;"jobId"&lt;/span&gt;: &lt;span class="s2"&gt;"xlf06YX6a8AupG_5igGA6I"&lt;/span&gt;,
       &lt;span class="s2"&gt;"status"&lt;/span&gt;: &lt;span class="s2"&gt;"Succeeded"&lt;/span&gt;,
       &lt;span class="s2"&gt;"issuesCount"&lt;/span&gt;: 10,
       &lt;span class="s2"&gt;"newIssuesCount"&lt;/span&gt;: 10,
      &lt;span class="s2"&gt;"issuesBySeverityCount"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
           &lt;span class="s2"&gt;"ExternalPolicyFailures"&lt;/span&gt;: 0,
           &lt;span class="s2"&gt;"NewCritical"&lt;/span&gt;: 0,
           &lt;span class="s2"&gt;"NewHigh"&lt;/span&gt;: 1,
           &lt;span class="s2"&gt;"NewMedium"&lt;/span&gt;: 5,
           &lt;span class="s2"&gt;"NewLow"&lt;/span&gt;: 4,
           &lt;span class="s2"&gt;"NewInfo"&lt;/span&gt;: 0,
           &lt;span class="s2"&gt;"Unassigned"&lt;/span&gt;: 0,
           &lt;span class="s2"&gt;"NewUnassigned"&lt;/span&gt;: 0,
           &lt;span class="s2"&gt;"Critical"&lt;/span&gt;: 0,
           &lt;span class="s2"&gt;"High"&lt;/span&gt;: 1,
           &lt;span class="s2"&gt;"Medium"&lt;/span&gt;: 5,
           &lt;span class="s2"&gt;"Low"&lt;/span&gt;: 4,
           &lt;span class="s2"&gt;"Info"&lt;/span&gt;: 0,
           &lt;span class="s2"&gt;"Ignored"&lt;/span&gt;: 0
       &lt;span class="o"&gt;}&lt;/span&gt;
   &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You've successfully completed the first part of this tutorial: Configuring a Build stage that builds, pushes, and scans a trained data science image.&lt;/p&gt;

&lt;p&gt;Continue to the next sections to keep building your MLOps pipeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test and upload artifacts
&lt;/h3&gt;

&lt;p&gt;Add another &lt;strong&gt;Build&lt;/strong&gt; stage to your pipeline that will run tests, build a Lambda image, and upload artifacts to S3.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Edit your MLOps pipeline and add another &lt;strong&gt;Build&lt;/strong&gt; stage after the &lt;code&gt;Train Model&lt;/code&gt; stage. Name the stage &lt;code&gt;Run test and upload artifacts&lt;/code&gt; and make sure &lt;strong&gt;Clone Codebase&lt;/strong&gt; is enabled.&lt;/li&gt;
&lt;li&gt;On the stage's &lt;strong&gt;Overview&lt;/strong&gt; tab, locate &lt;strong&gt;Shared Paths&lt;/strong&gt;, and add &lt;code&gt;/harness/output&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Infrastructure&lt;/strong&gt; tab, select &lt;strong&gt;Propagate from existing stage&lt;/strong&gt;, and select your &lt;code&gt;Train Model&lt;/code&gt; stage.&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Execution&lt;/strong&gt; tab, add a &lt;strong&gt;Run&lt;/strong&gt; step to run pytest on the demo codebase. Select &lt;strong&gt;Add Step&lt;/strong&gt;, select the &lt;strong&gt;Run&lt;/strong&gt; step, and configure it as follows:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;pytest&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Shell&lt;/strong&gt;, select &lt;strong&gt;Sh&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Command&lt;/strong&gt;, enter:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   pytest --nbval-lax credit_card_approval.ipynb --junitxml=report.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Under &lt;strong&gt;Optional Configuration&lt;/strong&gt;, locate &lt;strong&gt;Container Registry&lt;/strong&gt;, and select your &lt;code&gt;mlopsawsconnector&lt;/code&gt; AWS connector.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Optional Configuration&lt;/strong&gt;, locate &lt;strong&gt;Image&lt;/strong&gt;, and enter the image URI from your &lt;code&gt;Train Model&lt;/code&gt; stage execution with the image tag replaced with &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;. For example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   AWS_ACCOUNT_ID.dkr.ecr.AWS_REGION.amazonaws.com/AWS_ECR_REPO_NAME:&amp;lt;+pipeline.executionId&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The data science project includes two Dockerfiles: one for building the source and one for AWS Lambda deployment. Next, you'll add a step to build and push the image using the Dockerfile designed for AWS Lambda deployment.&lt;/p&gt;
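&lt;p&gt;For a rough idea of what a Lambda-oriented Dockerfile for a Python model can look like, here is a hypothetical sketch (see &lt;code&gt;Dockerfile_Inference_Lambda&lt;/code&gt; in the demo repo for the real one):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# AWS-provided Python base image for container-based Lambda functions
FROM public.ecr.aws/lambda/python:3.10
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the inference handler into the Lambda task root
COPY app.py ${LAMBDA_TASK_ROOT}
# Lambda invokes the handler function in app.py for each request
CMD ["app.handler"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;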

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Add Step&lt;/strong&gt; and add a &lt;strong&gt;Build and Push to ECR&lt;/strong&gt; step configured as follows:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Build and Push Lambda Deployment Image&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;AWS Connector&lt;/strong&gt;, select your &lt;code&gt;mlopsawsconnector&lt;/code&gt; AWS connector.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Region&lt;/strong&gt;, enter your AWS region.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Account ID&lt;/strong&gt;, enter your AWS account ID.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Image Name&lt;/strong&gt;, enter &lt;code&gt;ccapproval-deploy&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Tags&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Optional Configuration&lt;/strong&gt;, locate &lt;strong&gt;Dockerfile&lt;/strong&gt;, and enter &lt;code&gt;Dockerfile_Inference_Lambda&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;pytest&lt;/code&gt; command from the &lt;strong&gt;Run&lt;/strong&gt; step generates an HTML file with some visualizations for the demo ML model. Next, add steps to upload the visualizations artifact to your AWS S3 bucket and post the artifact URL on the Artifacts tab of the &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/viewing-builds"&gt;Build details page&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Add Step&lt;/strong&gt;, and add an &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/build-and-upload-artifacts/upload-artifacts/upload-artifacts-to-s3"&gt;Upload Artifacts to S3 step&lt;/a&gt; configured as follows:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Upload artifacts to S3&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;AWS Connector&lt;/strong&gt;, select your &lt;code&gt;mlopsawsconnector&lt;/code&gt; AWS connector.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Region&lt;/strong&gt;, enter your AWS region.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Bucket&lt;/strong&gt;, enter your S3 bucket name.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Source Path&lt;/strong&gt;, enter &lt;code&gt;/harness/output/model_metrics.html&lt;/code&gt;. This is where the model visualization file from the &lt;code&gt;pytest&lt;/code&gt; step is stored.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Use the &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/build-and-upload-artifacts/artifacts-tab"&gt;Artifact Metadata Publisher plugin&lt;/a&gt; to post the visualization artifact URL on the build's Artifacts tab.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Add a &lt;strong&gt;Plugin&lt;/strong&gt; step after the &lt;strong&gt;Upload Artifacts to S3&lt;/strong&gt; step.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Publish ML model visualization&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Container Registry&lt;/strong&gt;, select the built-in &lt;strong&gt;Harness Docker Connector&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Image&lt;/strong&gt;, enter &lt;code&gt;plugins/artifact-metadata-publisher&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Optional Configuration&lt;/strong&gt;, locate &lt;strong&gt;Settings&lt;/strong&gt;, and add the following key-value pairs:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   file_urls: https://S3_BUCKET_NAME.s3.AWS_REGION.amazonaws.com/harness/output/model_metrics.html
   artifact_file: artifact.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In addition to the model visualization, the &lt;code&gt;pytest&lt;/code&gt; command also generates a &lt;code&gt;shared_env_variables.txt&lt;/code&gt; file to export the model's accuracy and fairness metrics. However, this data is lost when the build ends because Harness stages run in isolated containers. Therefore, you must add a step to export the &lt;code&gt;ACCURACY&lt;/code&gt; and &lt;code&gt;EQUAL_OPPORTUNITY_FAIRNESS_PERCENT&lt;/code&gt; values as &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/run-step-settings#output-variables"&gt;output variables&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After the &lt;strong&gt;Plugin&lt;/strong&gt; step, add a &lt;strong&gt;Run&lt;/strong&gt; step configured as follows:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Export accuracy and fairness variables&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Shell&lt;/strong&gt;, select &lt;strong&gt;Sh&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Command&lt;/strong&gt;, enter:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   # File path
   FILE_PATH="/harness/output/shared_env_variables.txt"

   # Read the file and export variables
   while IFS='=' read -r key value; do
       case $key in
           ACCURACY)
               export ACCURACY="$value"
               ;;
           EQUAL_OPPORTUNITY_FAIRNESS_PERCENT)
               export EQUAL_OPPORTUNITY_FAIRNESS_PERCENT="$value"
               ;;
           *)
               echo "Ignoring unknown variable: $key"
               ;;
       esac
   done &amp;lt; "$FILE_PATH"

   echo $ACCURACY
   echo $EQUAL_OPPORTUNITY_FAIRNESS_PERCENT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Under &lt;strong&gt;Optional Configuration&lt;/strong&gt;, locate &lt;strong&gt;Output Variables&lt;/strong&gt;, and add the following two output variables:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   ACCURACY
   EQUAL_OPPORTUNITY_FAIRNESS_PERCENT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Save the pipeline, and then run it. Again, use &lt;code&gt;main&lt;/code&gt; for the &lt;strong&gt;Git Branch&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Wait while the pipeline runs, and then make sure:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Your &lt;code&gt;ccapproval&lt;/code&gt; and &lt;code&gt;ccapproval-deploy&lt;/code&gt; ECR repositories have images tagged with the pipeline execution ID.&lt;/li&gt;
&lt;li&gt;Your S3 bucket has &lt;code&gt;/harness/output/model_metrics.html&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The URL to the &lt;code&gt;model_metrics&lt;/code&gt; artifact appears on the Artifacts tab in Harness.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0j9hmzt04vjez8vzlml8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0j9hmzt04vjez8vzlml8.png" alt="Artifacts Tab" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The output variable values are in the log for the &lt;code&gt;Export accuracy and fairness variables&lt;/code&gt; step, such as:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   0.92662
   20.799999999999997
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congratulations! So far, you've completed half the requirements for this MLOps project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[x] Build and push an image for this project.&lt;/li&gt;
&lt;li&gt;[x] Run security scans on the container image.&lt;/li&gt;
&lt;li&gt;[x] Upload model visualization data to S3.&lt;/li&gt;
&lt;li&gt;[x] Publish model visualization data within the pipeline.&lt;/li&gt;
&lt;li&gt;[x] Run tests on the model to determine its accuracy and fairness scores.&lt;/li&gt;
&lt;li&gt;[ ] Based on those scores, use Open Policy Agent (OPA) policies to either approve or deny the model.&lt;/li&gt;
&lt;li&gt;[ ] Deploy the model.&lt;/li&gt;
&lt;li&gt;[ ] Monitor the model and ensure the model is not outdated.&lt;/li&gt;
&lt;li&gt;[ ] Trigger the pipeline based on certain git events.&lt;/li&gt;
&lt;li&gt;[ ] (Optional) Add approval gates for production deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Continue on with policy enforcement in the next section.&lt;/p&gt;

&lt;h3&gt;
  
  
  Add ML model policy checks
&lt;/h3&gt;

&lt;p&gt;In this section, you'll author OPA policies in Harness and use a &lt;strong&gt;Custom&lt;/strong&gt; stage to add policy enforcement to your pipeline.&lt;/p&gt;

&lt;p&gt;Harness &lt;a href="https://developer.harness.io/docs/platform/governance/policy-as-code/harness-governance-overview"&gt;Policy As Code&lt;/a&gt; uses Open Policy Agent (OPA) as the central service to store and enforce policies for the different entities and processes across the Harness platform. You create individual policies, add them to policy sets, and select the entities (such as pipelines) to evaluate those policies against.&lt;/p&gt;

&lt;p&gt;For this tutorial, the policy requirements are that the model accuracy is over 90% and the fairness margin for equal opportunity is under 21%.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In your Harness project settings, go to &lt;strong&gt;Policies&lt;/strong&gt;, select the &lt;strong&gt;Policies&lt;/strong&gt; tab, and then select &lt;strong&gt;New Policy&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Check fairness and accuracy scores&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;How do you want to setup your Policy&lt;/strong&gt;, select &lt;strong&gt;Inline&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Enter the following policy definition, and then select &lt;strong&gt;Save&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rego"&gt;&lt;code&gt;   &lt;span class="ow"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

   &lt;span class="ow"&gt;default&lt;/span&gt; &lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

   &lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;accuracy&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.9&lt;/span&gt;
       &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fairnessScoreEqualOpportunity&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="m"&gt;21&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="n"&gt;deny&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;allow&lt;/span&gt;
       &lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Deny: Accuracy less than 90% or fairness score difference greater than 21%"&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
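&lt;p&gt;Before wiring the policy into the pipeline, you can sanity-check it locally with the OPA CLI (file names are examples; the input mirrors the payload the &lt;strong&gt;Policy&lt;/strong&gt; step sends later, using the scores from the earlier pipeline run):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build a sample input document
echo '{"accuracy": 0.92662, "fairnessScoreEqualOpportunity": 20.8}' &amp;gt; input.json

# Evaluate the deny rule; an empty result set means the model passes
opa eval --data policy.rego --input input.json "data.main.deny"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;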



&lt;ol&gt;
&lt;li&gt;Select the &lt;strong&gt;Policy Sets&lt;/strong&gt; tab, and then select &lt;strong&gt;New Policy Set&lt;/strong&gt;. Use the following configuration:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Credit Card Approval Policy Set&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Entity Type that this policy set applies to&lt;/strong&gt;, select &lt;strong&gt;Custom&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;On what event should the policy set be evaluated&lt;/strong&gt;, select &lt;strong&gt;On Step&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Add Policy&lt;/strong&gt;, and select your &lt;code&gt;Check fairness and accuracy scores&lt;/code&gt; policy.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;What should happen if a policy fails?&lt;/strong&gt;, select &lt;strong&gt;Warn and Continue&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Finish&lt;/strong&gt;, and make sure the &lt;strong&gt;Enforced&lt;/strong&gt; switch is enabled.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Edit your MLOps pipeline, and add a &lt;strong&gt;Custom&lt;/strong&gt; stage after the second &lt;strong&gt;Build&lt;/strong&gt; stage. Name the stage &lt;code&gt;Model Policy Checks&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Select a Harness Delegate to use for the &lt;strong&gt;Custom&lt;/strong&gt; stage.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The &lt;strong&gt;Build&lt;/strong&gt; stages run on Harness Cloud build infrastructure, which doesn't require a Harness Delegate. However, &lt;strong&gt;Custom&lt;/strong&gt; stages can't use this build infrastructure, so you need a &lt;a href="https://developer.harness.io/docs/platform/delegates/delegate-concepts/delegate-overview"&gt;Harness Delegate&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you don't already have one, &lt;a href="https://developer.harness.io/docs/platform/get-started/tutorials/install-delegate"&gt;install a delegate&lt;/a&gt;. Then, on the &lt;strong&gt;Custom&lt;/strong&gt; stage's &lt;strong&gt;Advanced&lt;/strong&gt; tab, select your delegate in &lt;strong&gt;Define Delegate Selector&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add a &lt;strong&gt;Shell Script&lt;/strong&gt; step to relay the accuracy and fairness output variables from the previous stage to the current stage. Configure the &lt;strong&gt;Shell Script&lt;/strong&gt; step as follows:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Accuracy and Fairness&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Timeout&lt;/strong&gt;, enter &lt;code&gt;10m&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Script Type&lt;/strong&gt;, select &lt;strong&gt;Bash&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Select script location&lt;/strong&gt;, select &lt;strong&gt;Inline&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For &lt;strong&gt;Script&lt;/strong&gt;, enter the following:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  accuracy=&amp;lt;+pipeline.stages.Harness_Training.spec.execution.steps.Export_accuracy_and_fairness_variables.output.outputVariables.ACCURACY&amp;gt;
  fairness_equalopportunity=&amp;lt;+pipeline.stages.Harness_Training.spec.execution.steps.Export_accuracy_and_fairness_variables.output.outputVariables.EQUAL_OPPORTUNITY_FAIRNESS_PERCENT&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Under &lt;strong&gt;Optional Configuration&lt;/strong&gt;, locate &lt;strong&gt;Script Output Variables&lt;/strong&gt;, and add the following two variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;accuracy&lt;/code&gt; - String - &lt;code&gt;accuracy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fairness_equalopportunity&lt;/code&gt; - String - &lt;code&gt;fairness_equalopportunity&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ NOTE&lt;/p&gt;

&lt;p&gt;While you can feed the output variables directly into &lt;strong&gt;Policy&lt;/strong&gt; steps, this &lt;strong&gt;Shell Script&lt;/strong&gt; step is a useful debugging measure that ensures the accuracy and fairness variables are populated correctly.&lt;/p&gt;
&lt;/blockquote&gt;
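
&lt;p&gt;For example, you could make the debugging value of this step explicit by echoing the resolved expressions so they appear in the step logs. A minimal sketch, using the same stage and step identifiers as above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  accuracy=&amp;lt;+pipeline.stages.Harness_Training.spec.execution.steps.Export_accuracy_and_fairness_variables.output.outputVariables.ACCURACY&amp;gt;
  fairness_equalopportunity=&amp;lt;+pipeline.stages.Harness_Training.spec.execution.steps.Export_accuracy_and_fairness_variables.output.outputVariables.EQUAL_OPPORTUNITY_FAIRNESS_PERCENT&amp;gt;

  # Print the resolved values so they are visible in the step logs
  echo "Model accuracy: ${accuracy}"
  echo "Equal opportunity fairness margin: ${fairness_equalopportunity}%"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;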

&lt;ol&gt;
&lt;li&gt;Add a &lt;strong&gt;Policy&lt;/strong&gt; step after the &lt;strong&gt;Shell Script&lt;/strong&gt; step.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Enforce Fairness and Accuracy Policy&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Timeout&lt;/strong&gt;, enter &lt;code&gt;10m&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Entity Type&lt;/strong&gt;, select &lt;strong&gt;Custom&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Policy Set&lt;/strong&gt;, select your &lt;code&gt;Credit Card Approval Policy Set&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For &lt;strong&gt;Payload&lt;/strong&gt;, enter the following:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  {
      "accuracy": &amp;lt;+execution.steps.Accuracy_and_Fairness.output.outputVariables.accuracy&amp;gt;,
      "fairnessScoreEqualOpportunity": &amp;lt;+execution.steps.Accuracy_and_Fairness.output.outputVariables.fairness_equalopportunity&amp;gt;
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Save the pipeline.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The next time the pipeline runs, the policy is enforced, checking whether the model's accuracy and fairness margins are within the acceptable limits. If not, the pipeline produces a warning and then continues (according to the policy set configuration). You could also configure the policy set so that the pipeline fails on a policy violation.&lt;/p&gt;

&lt;p&gt;If you want to test the response to a policy violation, you can modify the policy definition's &lt;code&gt;allow&lt;/code&gt; section to be more strict, such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rego"&gt;&lt;code&gt;&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;accuracy&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.95&lt;/span&gt;
    &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fairnessScoreEqualOpportunity&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="m"&gt;19&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since the model accuracy is around 92% and the fairness margin is around 20%, this policy definition should produce a warning. Make sure to revert the change to the policy definition once you're done experimenting.&lt;/p&gt;
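
&lt;p&gt;If you have the OPA CLI installed locally, you can also sanity-check the policy before editing it in Harness. A sketch, assuming the policy is saved as &lt;code&gt;policy.rego&lt;/code&gt; (substitute your policy's actual package name in the query):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sample payload mirroring what the Policy step sends
cat &amp;gt; input.json &amp;lt;&amp;lt;'EOF'
{
    "accuracy": 0.92,
    "fairnessScoreEqualOpportunity": 20
}
EOF

# Evaluate the deny rule against the sample payload
# (replace "pipeline" with your policy's package name)
opa eval --data policy.rego --input input.json "data.pipeline.deny"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;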

&lt;h2&gt;
  
  
  Deploy AWS Lambda function
&lt;/h2&gt;

&lt;p&gt;In Harness, you can specify the location of a function definition, artifact, and AWS account, and then Harness deploys the Lambda function and automatically routes traffic from the old version of the Lambda function to the new version on each deployment. In this part of the tutorial, you'll update an existing Lambda function by adding a &lt;strong&gt;Deploy&lt;/strong&gt; stage with service, environment, and infrastructure definitions.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Edit your MLOps pipeline, and add a &lt;strong&gt;Deploy&lt;/strong&gt; stage named &lt;code&gt;lambdadeployment&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
For &lt;strong&gt;Deployment Type&lt;/strong&gt;, select &lt;strong&gt;AWS Lambda&lt;/strong&gt;, and then select &lt;strong&gt;Continue&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Create a &lt;a href="https://developer.harness.io/docs/continuous-delivery/get-started/key-concepts#service"&gt;service definition&lt;/a&gt; for the Lambda deployment.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Add Service&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;creditcardapproval-lambda-service&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Set up service&lt;/strong&gt;, select &lt;strong&gt;Inline&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Deployment Type&lt;/strong&gt;, select &lt;strong&gt;AWS Lambda&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;AWS Lambda Function Definition&lt;/strong&gt;, for &lt;strong&gt;Manifest Identifier&lt;/strong&gt;, enter &lt;code&gt;lambdadefinition&lt;/code&gt;, and for &lt;strong&gt;File/Folder Path&lt;/strong&gt;, enter &lt;code&gt;/lambdamanifest&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;After creating the manifest under Harness File Store, add the following to the service manifest, and select &lt;strong&gt;Save&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   functionName: `creditcardapplicationlambda`
   role: LAMBDA_FUNCTION_ARN
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;LAMBDA_FUNCTION_ARN&lt;/code&gt; with your Lambda function's ARN. You can find the &lt;strong&gt;Function ARN&lt;/strong&gt; when viewing the function in the AWS console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3g92peuko8bbn4vwm75i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3g92peuko8bbn4vwm75i.png" alt="Lambda ARN" width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under the &lt;strong&gt;Artifacts&lt;/strong&gt; section for the service definition, provide the artifact details to use for the lambda deployment:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Artifact Source Identifier: &lt;code&gt;ccapprovaldeploy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Region: YOUR_AWS_REGION&lt;/li&gt;
&lt;li&gt;Image Path: &lt;code&gt;ccapproval-deploy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Tag: &lt;code&gt;&amp;lt;+input&amp;gt;&lt;/code&gt; (&lt;a href="https://developer.harness.io/docs/platform/variables-and-expressions/runtime-inputs"&gt;runtime input&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Create environment and infrastructure definitions for the Lambda deployment. On the &lt;strong&gt;Deploy&lt;/strong&gt; stage's &lt;strong&gt;Environment&lt;/strong&gt; tab, select &lt;strong&gt;New Environment&lt;/strong&gt;, and use the following environment configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   name: lambda-env
   type: PreProduction
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;From the &lt;code&gt;lambda-env&lt;/code&gt; environment, go to the &lt;strong&gt;Infrastructure Definitions&lt;/strong&gt; tab, and add an infrastructure definition with the following configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   name: `aws-lambda-infra`
   deploymentType: `AwsLambda`
   type: AwsLambda
     spec:
       connectorRef: `mlopsawsconnector`
       region: YOUR_AWS_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Save&lt;/strong&gt; to save the infrastructure definition.&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Deploy&lt;/strong&gt; stage's &lt;strong&gt;Execution&lt;/strong&gt; tab, add an &lt;strong&gt;AWS Lambda Deploy&lt;/strong&gt; step named &lt;code&gt;Deploy Aws Lambda&lt;/code&gt;. No other configuration is necessary.&lt;/li&gt;
&lt;li&gt;Save and run the pipeline. For &lt;strong&gt;Git Branch&lt;/strong&gt;, enter &lt;code&gt;main&lt;/code&gt;, and for &lt;strong&gt;Tag&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;, and then select &lt;strong&gt;Run Pipeline&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You need to provide the image tag value because the service definition's &lt;strong&gt;Tag&lt;/strong&gt; setting uses runtime input (&lt;code&gt;&amp;lt;+input&amp;gt;&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;While the pipeline runs, you can observe the build logs showing the lambda function being deployed with the latest artifact that was built and pushed from the same pipeline.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Test the response from the lambda function.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;In your AWS console, go to &lt;strong&gt;AWS Lambda&lt;/strong&gt;, select &lt;strong&gt;Functions&lt;/strong&gt;, and select your &lt;code&gt;creditcardapplicationlambda&lt;/code&gt; function.&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Test&lt;/strong&gt; tab, select &lt;strong&gt;Create new event&lt;/strong&gt;, and create an event named &lt;code&gt;testmodel&lt;/code&gt; with the following JSON:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   {
     "Num_Children": 2,
     "Income": 500000,
     "Own_Car": 1,
     "Own_Housing": 1
   }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Test&lt;/strong&gt; to execute the function with your &lt;code&gt;testmodel&lt;/code&gt; test event. Once the function finishes execution, you'll get the result with a &lt;strong&gt;Function URL&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6czh3ykiknruwn9gynri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6czh3ykiknruwn9gynri.png" alt="Test Lambda Function" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Note the &lt;strong&gt;Function URL&lt;/strong&gt; resulting from the lambda function test. This is the endpoint that your ML web application would call. Depending on the prediction of &lt;code&gt;0&lt;/code&gt; or &lt;code&gt;1&lt;/code&gt;, the web application either approves or denies the demo credit card application.&lt;/li&gt;
&lt;/ol&gt;
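
&lt;p&gt;You can also exercise the endpoint directly with &lt;code&gt;curl&lt;/code&gt;. A sketch, where &lt;code&gt;FUNCTION_URL&lt;/code&gt; is a placeholder for the Function URL from the previous step (this assumes the Function URL is configured for unauthenticated access):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Send the same test payload used in the console's Test tab
curl -s -X POST "FUNCTION_URL" \
    -H "Content-Type: application/json" \
    -d '{"Num_Children": 2, "Income": 500000, "Own_Car": 1, "Own_Housing": 1}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;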

&lt;h2&gt;
  
  
  Monitor the model
&lt;/h2&gt;

&lt;p&gt;There are many ways to monitor ML models. In this tutorial, you'll monitor whether the model was recently updated. If it hasn't been, Harness sends an email alerting you that the model might be stale.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Edit your MLOps pipeline, and add a &lt;strong&gt;Build&lt;/strong&gt; stage after the &lt;strong&gt;Deploy&lt;/strong&gt; stage.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Monitor Model stage&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Disable&lt;/em&gt; &lt;strong&gt;Clone Codebase&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;On the &lt;strong&gt;Infrastructure&lt;/strong&gt; tab, select &lt;strong&gt;Propagate from existing stage&lt;/strong&gt; and select the first &lt;strong&gt;Build&lt;/strong&gt; stage.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Add a &lt;strong&gt;Run&lt;/strong&gt; step to find out when the model was last updated.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Monitor Model step&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Shell&lt;/strong&gt;, select &lt;strong&gt;Sh&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Command&lt;/strong&gt;, enter:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   # GitHub repository owner
   OWNER="YOUR_GITHUB_USERNAME"

   # GitHub repository name
   REPO="mlops-creditcard-approval-model"

   # Path to the file you want to check (relative to the repository root)
   FILE_PATH="credit_card_approval.ipynb"

   # GitHub Personal Access Token (PAT)
   TOKEN=&amp;lt;+secrets.getValue("git_pat")&amp;gt;

   # GitHub API URL
   API_URL="https://api.github.com/repos/$OWNER/$REPO/commits?path=$FILE_PATH&amp;amp;per_page=1"

   # Get the current date
   CURRENT_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

   # Calculate the date 7 days ago
   SEVEN_DAYS_AGO=$(date -u -d "7 days ago" +"%Y-%m-%dT%H:%M:%SZ")

   # Get the latest commit date for the file
   LATEST_COMMIT_DATE=$(curl -s -H "Authorization: token $TOKEN" $API_URL | jq -r '.[0].commit.committer.date')

   # Check if the file has been updated in the last 7 days
   if [ "$(date -d "$LATEST_COMMIT_DATE" +%s)" -lt "$(date -d "$SEVEN_DAYS_AGO" +%s)" ]; then
       export model_stale=true
   else
       export model_stale=false
   fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Under &lt;strong&gt;Optional Configuration&lt;/strong&gt;, add &lt;code&gt;model_stale&lt;/code&gt; to &lt;strong&gt;Output Variables&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
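
&lt;p&gt;One caveat if you test this script outside the pipeline: the &lt;code&gt;date -d&lt;/code&gt; syntax is GNU-specific, which is fine in the Linux-based build infrastructure but fails on macOS, where BSD &lt;code&gt;date&lt;/code&gt; uses a different flag for relative dates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# GNU date (Linux build infrastructure)
SEVEN_DAYS_AGO=$(date -u -d "7 days ago" +"%Y-%m-%dT%H:%M:%SZ")

# BSD date (macOS) equivalent
SEVEN_DAYS_AGO=$(date -u -v-7d +"%Y-%m-%dT%H:%M:%SZ")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;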

&lt;ol&gt;
&lt;li&gt;After the &lt;code&gt;Monitor Model&lt;/code&gt; stage, add a &lt;strong&gt;Custom&lt;/strong&gt; stage named &lt;code&gt;Email notification&lt;/code&gt;. This stage will send the email notification if the model is stale.&lt;/li&gt;
&lt;li&gt;Add an &lt;strong&gt;Email&lt;/strong&gt; step to the last &lt;strong&gt;Custom&lt;/strong&gt; stage.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;Email&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Timeout&lt;/strong&gt;, enter &lt;code&gt;10m&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;To&lt;/strong&gt;, enter the email address to receive the notification, such as the email address for your Harness account.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Subject&lt;/strong&gt;, enter &lt;code&gt;Credit card approval ML model has not been updated in a week.&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Body&lt;/strong&gt;, enter &lt;code&gt;It has been 7 days since the credit card approval ML model was updated. Please update the model.&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the step's &lt;strong&gt;Advanced&lt;/strong&gt; tab, add a &lt;a href="https://developer.harness.io/docs/platform/pipelines/step-skip-condition-settings"&gt;conditional execution&lt;/a&gt; so the &lt;strong&gt;Email&lt;/strong&gt; step only runs if the &lt;code&gt;model_stale&lt;/code&gt; variable (from the &lt;code&gt;Monitor Model&lt;/code&gt; step) is &lt;code&gt;true&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Execute this step&lt;/strong&gt;, select &lt;strong&gt;If the stage executes successfully up to this point&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;And execute this step only if the following JEXL Condition evaluates to true&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enter the following JEXL condition:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; &amp;lt;+pipeline.stages.Monitor_Model_Stage.spec.execution.steps.Monitor_Model_Step.output.outputVariables.model_stale&amp;gt; == true
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Save and run the pipeline. For &lt;strong&gt;Git Branch&lt;/strong&gt;, enter &lt;code&gt;main&lt;/code&gt;, and for &lt;strong&gt;Tag&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Trigger pipeline based on Git events
&lt;/h2&gt;

&lt;p&gt;So far, this tutorial used manually triggered builds. However, as the number of builds and pipeline executions grows, triggering builds manually doesn't scale. In this part of the tutorial, you'll add a &lt;a href="https://developer.harness.io/docs/platform/triggers/triggering-pipelines"&gt;Git event trigger&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Assume your team has a specific requirement where they want the MLOps pipeline to run &lt;em&gt;only&lt;/em&gt; if there's an update to the Jupyter notebook in the codebase.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In your MLOps pipeline, select &lt;strong&gt;Triggers&lt;/strong&gt; at the top of the Pipeline Studio, and then select &lt;strong&gt;New Trigger&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select the GitHub webhook trigger.&lt;/li&gt;
&lt;li&gt;On the trigger's &lt;strong&gt;Configuration&lt;/strong&gt; tab:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;trigger_on_notebook_update&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Connector&lt;/strong&gt;, select your &lt;code&gt;mlopsgithubconnector&lt;/code&gt; GitHub connector.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Repository URL&lt;/strong&gt; should automatically populate.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Event&lt;/strong&gt;, select &lt;strong&gt;Push&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Continue&lt;/strong&gt; to go to the &lt;strong&gt;Conditions&lt;/strong&gt; tab.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Branch Name&lt;/strong&gt;, select the &lt;strong&gt;Equals&lt;/strong&gt; operator, and enter &lt;code&gt;main&lt;/code&gt; for the &lt;strong&gt;Matches Value&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Changed Files&lt;/strong&gt;, select the &lt;strong&gt;Equals&lt;/strong&gt; operator, and enter &lt;code&gt;credit_card_approval.ipynb&lt;/code&gt; for the &lt;strong&gt;Matches Value&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Continue&lt;/strong&gt; to go to the &lt;strong&gt;Pipeline Input&lt;/strong&gt; tab.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Git Branch&lt;/strong&gt; should automatically populate.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Primary Artifact&lt;/strong&gt;, enter &lt;code&gt;ccapprovaldeploy&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Tag&lt;/strong&gt;, enter &lt;code&gt;&amp;lt;+pipeline.executionId&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Select &lt;strong&gt;Create Trigger&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The trigger webhook should automatically register in your GitHub repository. If it doesn't, you'll need to manually register the webhook:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the list of triggers in Harness, select the &lt;strong&gt;Link&lt;/strong&gt; icon to copy the webhook URL for the trigger.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhamzqctpkzcp3ajyqih7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhamzqctpkzcp3ajyqih7.png" alt="Webhook URL" width="800" height="80"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In your GitHub repository, navigate to &lt;strong&gt;Settings&lt;/strong&gt;, select &lt;strong&gt;Webhooks&lt;/strong&gt;, and then select &lt;strong&gt;Add webhook&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Paste the webhook URL in &lt;strong&gt;Payload URL&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Set the &lt;strong&gt;Content type&lt;/strong&gt; to &lt;code&gt;application/json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Add webhook&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A green checkmark in the GitHub webhooks list indicates that the webhook connected successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53llvfv3439ir2fagcvo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53llvfv3439ir2fagcvo.png" alt="Webhook Success" width="800" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the trigger in place, the MLOps pipeline runs whenever you push a change to the &lt;code&gt;credit_card_approval.ipynb&lt;/code&gt; file on the &lt;code&gt;main&lt;/code&gt; branch. If you use a Git event trigger in a live development or production scenario, you can adjust or remove the trigger's &lt;strong&gt;Conditions&lt;/strong&gt; (branch name, changed files, and so on) to match your requirements.&lt;/p&gt;
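
&lt;p&gt;You can verify the trigger end to end with a trivial change to the notebook. For example, assuming your fork is the &lt;code&gt;origin&lt;/code&gt; remote:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout main

# Make any small change to the notebook, then commit and push it
git add credit_card_approval.ipynb
git commit -m "Test: trigger MLOps pipeline on notebook update"
git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Within a few moments, a new execution should appear in your pipeline's execution history, attributed to the trigger rather than to a manual run.&lt;/p&gt;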

&lt;h2&gt;
  
  
  Add an approval gate before prod deployment
&lt;/h2&gt;

&lt;p&gt;Your organization might require an approval gate for your CI/CD pipeline before an artifact is deployed to production. Harness offers built-in approval steps for Jira, ServiceNow, or Harness approvals.&lt;/p&gt;

&lt;p&gt;Assume that you have a separate image for production, with a separate AWS Lambda function deployed from that container image. In your MLOps pipeline, you can create another &lt;code&gt;AWS Lambda deployment&lt;/code&gt; stage with another &lt;code&gt;AWS Lambda deploy&lt;/code&gt; step for the production environment, and place the approval gate before that production deployment stage runs.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To add the approval gate, add an &lt;strong&gt;Approval&lt;/strong&gt; stage immediately prior to the &lt;strong&gt;Deploy&lt;/strong&gt; stage that requires approval.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;approval-to-prod&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Approval Type&lt;/strong&gt;, select &lt;strong&gt;Harness Approval&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Add an &lt;strong&gt;Approval&lt;/strong&gt; step to the &lt;strong&gt;Approval&lt;/strong&gt; stage.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;Name&lt;/strong&gt;, enter &lt;code&gt;approval-to-prod&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Timeout&lt;/strong&gt;, enter &lt;code&gt;1d&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Use the default &lt;strong&gt;Message&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;User Groups&lt;/strong&gt;, select &lt;strong&gt;Select User Groups&lt;/strong&gt;, select &lt;strong&gt;Project&lt;/strong&gt;, and select &lt;strong&gt;All Project Users&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Save the pipeline.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The next time you run the pipeline, someone from the Harness project must approve the promotion of the artifact to the production environment before the final &lt;strong&gt;Deploy&lt;/strong&gt; stage runs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use the model in a web application
&lt;/h2&gt;

&lt;p&gt;In a live MLOps scenario, the ML model would likely power a web application. While app development is outside the scope of this tutorial, check out &lt;a href="https://developer.harness.io/docs/continuous-integration/development-guides/mlops/e2e-mlops-tutorial/#use-the-model-in-a-web-application"&gt;this animation&lt;/a&gt; demonstrating a simple web application built with plain HTML/CSS/JS. The credit card application's outcome is driven by the response from invoking the public AWS Lambda Function URL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! Here's what you've accomplished in this tutorial:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[x] Build and push an image for this project.&lt;/li&gt;
&lt;li&gt;[x] Run security scans on the container image.&lt;/li&gt;
&lt;li&gt;[x] Upload model visualization data to S3.&lt;/li&gt;
&lt;li&gt;[x] Publish model visualization data within the pipeline.&lt;/li&gt;
&lt;li&gt;[x] Run tests on the model to determine accuracy and fairness scores.&lt;/li&gt;
&lt;li&gt;[x] Based on those scores, use Open Policy Agent (OPA) policies to either approve or deny the model.&lt;/li&gt;
&lt;li&gt;[x] Deploy the model.&lt;/li&gt;
&lt;li&gt;[x] Monitor the model and ensure the model is not outdated.&lt;/li&gt;
&lt;li&gt;[x] Trigger the pipeline based on certain git events.&lt;/li&gt;
&lt;li&gt;[x] (Optional) Add approval gates for production deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that you've built an MLOps pipeline on Harness and used the Harness platform to train the model, check out the following guides to learn how you can integrate other popular ML tools and platforms into your Harness CI/CD pipelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.harness.io/docs/continuous-integration/development-guides/mlops/mlops-sagemaker"&gt;AWS SageMaker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.harness.io/docs/continuous-integration/development-guides/mlops/mlops-databricks"&gt;Databricks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.harness.io/docs/continuous-integration/development-guides/mlops/mlops-vertexai"&gt;Google Vertex AI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.harness.io/docs/continuous-integration/development-guides/mlops/mlops-azureml"&gt;Azure ML&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.harness.io/docs/continuous-integration/development-guides/mlops/mlops-mlflow"&gt;MLflow&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>mlops</category>
      <category>cicd</category>
      <category>aws</category>
      <category>harness</category>
    </item>
    <item>
      <title>Build and Push to GAR and Deploy to GKE - End-to-End CI/CD Pipeline</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Tue, 02 Jan 2024 20:50:07 +0000</pubDate>
      <link>https://forem.com/harness/build-and-push-to-gar-and-deploy-to-gke-end-to-end-cicd-pipeline-182i</link>
      <guid>https://forem.com/harness/build-and-push-to-gar-and-deploy-to-gke-end-to-end-cicd-pipeline-182i</guid>
      <description>&lt;p&gt;In this tutorial, you'll explore how to build a streamlined CI/CD pipeline using the Harness Platform, integrating the robust services of Google Artifact Registry (GAR) and Google Kubernetes Engine (GKE). GAR excels in managing and storing container images securely, while GKE offers a scalable environment for container deployment. The Harness Platform serves as a powerful orchestrator, simplifying the build and push process to GAR and managing complex deployments in GKE. You'll also cover implementing crucial approval steps for enhanced security and setting up Slack notifications for real-time updates, showcasing how these tools together facilitate a robust, streamlined CI/CD process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architectural and Pipeline Diagrams
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v47UIdnZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/48aumx0emm0qsax47nol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v47UIdnZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/48aumx0emm0qsax47nol.png" alt="Excalidraw Architectural Diagram" width="800" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lN2Jya2G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tui7smssy7fk9n0edxrk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lN2Jya2G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tui7smssy7fk9n0edxrk.png" alt="Complete Pipeline in Harness Pipeline Editor" width="800" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A Harness free plan. If you don't have one, &lt;a href="https://app.harness.io/auth/#/signup/?&amp;amp;utm_campaign=cd-devrel"&gt;sign up for free&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;A GitHub account.&lt;/li&gt;
&lt;li&gt;A Docker Hub account.&lt;/li&gt;
&lt;li&gt;A GCP account with permissions for Google Artifact Registry and Kubernetes Engine.&lt;/li&gt;
&lt;li&gt;Access to a Slack workspace and permissions to create a Slack app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There’s a bonus section in this tutorial where you’ll run security tests during the build process and create a policy for the deployment process. To follow this section, you’ll need a Harness enterprise account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Required Setup and Configurations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Demo application
&lt;/h3&gt;

&lt;p&gt;You can either &lt;a href="https://github.com/harness-community/captain-canary-adventure-app/fork"&gt;fork Captain Canary Adventure (CCA) App&lt;/a&gt; or bring your own application (as long as it has a Dockerfile).&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📝&lt;/th&gt;
&lt;th&gt;This tutorial assumes the use of a fork of the CCA App. If you are using your own app, make the necessary changes.&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  GitHub and Docker authentication
&lt;/h3&gt;

&lt;p&gt;Create a &lt;a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens"&gt;GitHub personal access token (PAT)&lt;/a&gt; that will have read access to the demo application repository. Create a &lt;a href="https://docs.docker.com/security/for-developers/access-tokens/"&gt;Docker access token&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Image registry setup on Google Cloud Platform (GCP):
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://console.cloud.google.com/apis/api/artifactregistry.googleapis.com"&gt;Enable the Artifact Registry API&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;From &lt;a href="https://console.cloud.google.com/artifacts"&gt;Artifact Registry&lt;/a&gt;, click &lt;strong&gt;+ CREATE REPOSITORY&lt;/strong&gt;. &lt;/li&gt;
&lt;li&gt;Give this repository the name &lt;code&gt;cca-registry&lt;/code&gt;. Choose &lt;code&gt;Docker&lt;/code&gt; as the format, &lt;code&gt;Standard&lt;/code&gt; as the mode, and &lt;code&gt;Region&lt;/code&gt; as the location type (choose a region near you). Use a &lt;code&gt;Google-managed encryption&lt;/code&gt; key for encryption, keep &lt;code&gt;Dry Run&lt;/code&gt; selected, and click &lt;strong&gt;CREATE&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Kubernetes cluster setup with GKE:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://console.cloud.google.com/apis/api/container.googleapis.com"&gt;Enable Kubernetes Engine API&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://console.cloud.google.com/kubernetes/auto/add"&gt;Create a GKE (autopilot) cluster&lt;/a&gt; by selecting a region near you.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  GCP IAM and Service Account setup:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://console.cloud.google.com/iam-admin/serviceaccounts"&gt;Create a GCP Service Account&lt;/a&gt;. Copy the email address generated for this service account. It will be in this format: &lt;code&gt;SERVICE_ACCOUNT_NAME@GCP_PROJECT_NAME.iam.gserviceaccount.com&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Navigate to the artifact registry repository you created earlier, select it and click on &lt;strong&gt;Permissions&lt;/strong&gt; tab. You need to create two types of access to the artifact registry repository - a public read access and a fine-grained write access. Click &lt;strong&gt;+ ADD PRINCIPAL&lt;/strong&gt; from the &lt;strong&gt;Permissions&lt;/strong&gt; tab and paste the email address of the service account previously copied. Assign &lt;code&gt;Artifact Registry Writer&lt;/code&gt; role to this principal. 
Next, click &lt;strong&gt;+ ADD PRINCIPAL&lt;/strong&gt; from the &lt;strong&gt;Permissions&lt;/strong&gt; tab and type in &lt;code&gt;allUsers&lt;/code&gt; for the principal and &lt;code&gt;Artifact Registry Reader&lt;/code&gt; for the role. You might see a warning like this: 
“This resource is public and can be accessed by anyone on the internet.”&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;IAM &amp;amp; Admin&lt;/strong&gt;, locate the service account, select it, and then click on &lt;strong&gt;ADD KEY&lt;/strong&gt; → &lt;strong&gt;Create new key&lt;/strong&gt;. Choose the JSON format, and a key for your service account will be downloaded to your computer. Exercise caution and refrain from sharing this key with anyone; treat it as you would a password. &lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Slack workspace and app setup:
&lt;/h3&gt;

&lt;p&gt;To create a Slack app and incoming webhook, you'll need elevated privilege in that Slack workspace. &lt;a href="https://slack.com/help/articles/206845317-Create-a-Slack-workspace"&gt;Create a new Slack workspace&lt;/a&gt; for this tutorial and &lt;a href="https://api.slack.com/messaging/webhooks"&gt;an incoming webhook&lt;/a&gt; for a specific channel.&lt;/p&gt;

&lt;p&gt;Your newly created Slack webhook will look like this: &lt;code&gt;https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Treat this as sensitive information&lt;/em&gt;.&lt;/p&gt;
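
&lt;p&gt;You can confirm the webhook works before wiring it into Harness by posting a test message to it from your terminal (replace the URL with your own webhook):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# A test message should appear in the channel tied to the webhook
curl -X POST \
    -H 'Content-type: application/json' \
    --data '{"text": "Webhook test for the GAR/GKE CI/CD tutorial"}' \
    https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;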

&lt;h3&gt;
  
  
  Harness entity setup
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create secrets: &lt;/p&gt;

&lt;p&gt;a. GitHub Secret: Navigate to the &lt;a href="https://app.harness.io/"&gt;Harness console&lt;/a&gt;. From &lt;strong&gt;Project Setup&lt;/strong&gt; → &lt;strong&gt;Secrets&lt;/strong&gt;, click &lt;strong&gt;+ New Secret&lt;/strong&gt; → &lt;strong&gt;Text&lt;/strong&gt;, give the secret a name (for example, &lt;code&gt;cca-git-pat&lt;/code&gt;) and paste in the previously created GitHub PAT.&lt;/p&gt;

&lt;p&gt;b. Docker Secret: Similarly, create a Docker secret (you can name it &lt;code&gt;docker-secret&lt;/code&gt;) and use the previously created Docker access token as the secret value.&lt;/p&gt;

&lt;p&gt;c. Slack Webhook: Similarly, create a Slack webhook secret (you can name it &lt;code&gt;slack-webhook&lt;/code&gt;) and paste in the previously created Slack webhook value.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://developer.harness.io/docs/category/connectors/"&gt;Connectors&lt;/a&gt; in Harness help you pull in artifacts, sync with repos, integrate verification and analytics tools, and leverage collaboration channels. From &lt;strong&gt;Project Setup&lt;/strong&gt; → &lt;strong&gt;Connectors&lt;/strong&gt; → &lt;strong&gt;+ New Connector&lt;/strong&gt;, create the following connectors:&lt;/p&gt;

&lt;p&gt;a. GitHub Connector: Harness platform connects to the source code repository using this connector. Give this connector a name (for example, &lt;code&gt;cca-git-connector&lt;/code&gt;), choose URL type as &lt;strong&gt;Repository&lt;/strong&gt;, connection type as &lt;strong&gt;HTTP&lt;/strong&gt;, and paste in the forked GitHub repository URL of the demo app. Use your GitHub username and the previously created GitHub secret for authentication. Select the connectivity mode as &lt;strong&gt;Connect through Harness Platform&lt;/strong&gt;. The connection test should be successful.&lt;/p&gt;

&lt;p&gt;b. Docker Connector: Harness platform pulls in the docker image for the Slack notification using this connector. Give this connector a name (for example, &lt;code&gt;docker-connector&lt;/code&gt;), choose provider type as &lt;strong&gt;DockerHub&lt;/strong&gt;, Docker Registry URL as &lt;code&gt;https://index.docker.io/v2/&lt;/code&gt;, enter in your docker username and select the previously created docker secret. Select the connectivity mode as &lt;strong&gt;Connect through Harness Platform&lt;/strong&gt;. The connection test should be successful.&lt;/p&gt;

&lt;p&gt;c. Kubernetes Connector: Harness platform creates and manages resources on your GKE cluster using this connector. Give this connector a name (for example, &lt;code&gt;gke-connector&lt;/code&gt;). Choose &lt;strong&gt;Use the credentials of a specific Harness Delegate…&lt;/strong&gt; and click &lt;strong&gt;+ Install new Delegate&lt;/strong&gt;. The Harness Delegate is a service you run in your local network or VPC to connect all of your providers with your Harness account. Follow the instructions to install a delegate on your Kubernetes cluster and once the installation is complete, select the newly created delegate from the dropdown. The connection test should be successful.&lt;/p&gt;

&lt;p&gt;d. GCP Connector: The GCP connector allows you to connect to your Google Cloud Platform resource and perform actions via Harness platform. Give this connector a name and select &lt;strong&gt;Specify credentials here&lt;/strong&gt; under the &lt;strong&gt;Details&lt;/strong&gt; section. Add a new secret name and upload the GCP service account key JSON file you previously downloaded. Select the connectivity mode as &lt;strong&gt;Connect through Harness Platform&lt;/strong&gt;. The connection test should be successful.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bpg-cjOQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7vzrcao4l76db6yuqn6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bpg-cjOQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7vzrcao4l76db6yuqn6z.png" alt="GCP Connector Configuration" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Build and push image to GAR
&lt;/h2&gt;

&lt;p&gt;First, let’s create the build and push image part of the pipeline. Click on &lt;strong&gt;Pipelines&lt;/strong&gt; → &lt;strong&gt;+ Create a Pipeline&lt;/strong&gt; and give it a name (e.g., &lt;code&gt;gar-gke-cicd-pipeline&lt;/code&gt;). Select the &lt;strong&gt;Inline&lt;/strong&gt; option to store the pipeline definition in Harness, and then click &lt;strong&gt;Start&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;Add Stage&lt;/strong&gt; and choose &lt;strong&gt;Build&lt;/strong&gt; as the stage type. A Harness pipeline can consist of one or more stages. Give this stage a name (e.g., &lt;code&gt;Push to GAR&lt;/code&gt;), select the &lt;strong&gt;Clone Codebase&lt;/strong&gt; option (this should be enabled, by default), and choose the GitHub connector you previously created from the dropdown. The repository name should auto-populate. Click &lt;strong&gt;Set Up Stage&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Infrastructure&lt;/strong&gt;, specify where you'll deploy your application. Select &lt;strong&gt;Cloud&lt;/strong&gt; for Harness hosted builds and choose &lt;strong&gt;Linux/AMD64&lt;/strong&gt; for the Platform option.&lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Execution&lt;/strong&gt;, click &lt;strong&gt;Add Step&lt;/strong&gt; → &lt;strong&gt;Add Step&lt;/strong&gt; and find &lt;strong&gt;Build and Push to GAR&lt;/strong&gt; step from the Step Library. Name this step (e.g., &lt;code&gt;BuildAndPushToGAR&lt;/code&gt;) and select the GCP connector you created earlier from the dropdown. When choosing the host, use the region selected when creating the image registry repository (e.g., &lt;code&gt;northamerica-northeast1-docker.pkg.dev&lt;/code&gt;). Refer to &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/build-and-upload-artifacts/build-and-push-to-gcr/#host"&gt;Harness Developer Hub docs&lt;/a&gt; or &lt;a href="https://cloud.google.com/artifact-registry/docs/repositories/repo-locations"&gt;GCP Artifact Registry docs&lt;/a&gt; for more details on selecting the region for GAR. Enter your GCP project ID under &lt;strong&gt;Project Id&lt;/strong&gt;. For &lt;strong&gt;Image Name&lt;/strong&gt;, use the image registry repository name followed by the application name in this format: &lt;code&gt;cca-registry/cca-app&lt;/code&gt;. Use &lt;code&gt;latest&lt;/code&gt; for now as the &lt;strong&gt;Tags&lt;/strong&gt;. Click &lt;strong&gt;Apply Changes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now, click &lt;strong&gt;Run&lt;/strong&gt; to execute the pipeline. Enter &lt;code&gt;master&lt;/code&gt; as the git branch name for the build (or &lt;code&gt;main&lt;/code&gt; if you're not using the forked CCA app). A successful execution of the pipeline should resemble the following: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v9wLjssy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z909zlmpmgnc5ocycy5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v9wLjssy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z909zlmpmgnc5ocycy5d.png" alt="Successful Build and Push Pipeline Execution" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy to GKE
&lt;/h2&gt;

&lt;p&gt;If you're using a fork of the Captain Canary Adventure App, update the manifest values before deploying the application to Kubernetes. The current YAML uses Harness variables, but since you're not there yet, you'll need to hardcode some values for now.&lt;/p&gt;

&lt;p&gt;Assuming your Google Artifact Registry repository name is &lt;code&gt;cca-registry&lt;/code&gt; and the image name is &lt;code&gt;cca-app&lt;/code&gt;, replace the current values in &lt;code&gt;values.yaml&lt;/code&gt; with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cca-registry/cca-app&lt;/span&gt;
  &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Click on &lt;strong&gt;+ Add Stage&lt;/strong&gt; in the pipeline and choose &lt;strong&gt;Deploy&lt;/strong&gt; as the stage type. Give the stage a name (e.g., &lt;code&gt;GKE Deploy&lt;/code&gt;), select &lt;strong&gt;Kubernetes&lt;/strong&gt; as the deployment type, and click &lt;strong&gt;Set Up Stage&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Next, create a service to deploy. Choose &lt;strong&gt;Kubernetes&lt;/strong&gt; as the deployment type, select the GitHub connector, and specify the paths for the manifests. For example, for the Captain Canary Adventure App, here are the manifest details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qecqxJcT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24bqp4qzp3iqw1tznth1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qecqxJcT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24bqp4qzp3iqw1tznth1.png" alt="Captain Canary K8s Manifest Details" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Environments represent your deployment targets (such as QA or Prod). Each environment contains one or more Infrastructure Definitions that list your target clusters, hosts, namespaces, etc. Click on &lt;strong&gt;+ New Environment&lt;/strong&gt;, give this environment a name (e.g., &lt;code&gt;cca-env&lt;/code&gt;), select &lt;strong&gt;Pre-Production&lt;/strong&gt; as the environment type, and click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Next, create an infrastructure definition. Click &lt;strong&gt;+ New Infrastructure&lt;/strong&gt;, under cluster details, select the GKE connector, provide a Kubernetes namespace where your application will be deployed (e.g., &lt;code&gt;cca-ns&lt;/code&gt;), and click &lt;strong&gt;Save&lt;/strong&gt;. For execution strategies, choose &lt;strong&gt;Rolling Deployment&lt;/strong&gt; and click &lt;strong&gt;Use Strategy&lt;/strong&gt;. Under optional configuration, select &lt;strong&gt;Enable Kubernetes Pruning&lt;/strong&gt;. With this setting, Harness will use pruning to remove any resources present in an old manifest but no longer in the manifest used for the current deployment. You can find more information about this configuration on the &lt;a href="https://developer.harness.io/docs/continuous-delivery/deploy-srv-diff-platforms/kubernetes/cd-kubernetes-category/prune-kubernetes-resources/"&gt;Harness Developer Hub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You’re all set! Click &lt;strong&gt;Save&lt;/strong&gt; and then &lt;strong&gt;Run&lt;/strong&gt;. Use &lt;code&gt;master&lt;/code&gt; for the git branch. A successful pipeline execution will override the &lt;code&gt;cca-app:latest&lt;/code&gt; image on your GAR repository and deploy this image to your GKE cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add Approval and Slack Notifications
&lt;/h2&gt;

&lt;p&gt;In practical DevOps pipelines, gates are implemented to control artifact promotion to the production environment. Harness supports &lt;a href="https://developer.harness.io/tutorials/cd-pipelines/approvals/"&gt;various types of approvals&lt;/a&gt; in Continuous Delivery (CD) pipelines. In this tutorial, you'll use the manual approval step.&lt;/p&gt;

&lt;p&gt;Within the &lt;strong&gt;gke-deploy&lt;/strong&gt; stage, click &lt;strong&gt;+ Add Step&lt;/strong&gt; before the &lt;strong&gt;Rollout Deployment&lt;/strong&gt; step and find &lt;strong&gt;Harness Approval&lt;/strong&gt; under Approval in the Step Library. Keep all default options, and you'll need to select the approver from the User Groups. Choose &lt;strong&gt;Project&lt;/strong&gt; → &lt;strong&gt;All Project Users&lt;/strong&gt; under user group selection. If you're the only member of this project, you'll be the sole approver. Click &lt;strong&gt;Apply Selected&lt;/strong&gt;. Click &lt;strong&gt;Apply Changes&lt;/strong&gt; for the manual approval step, and then click &lt;strong&gt;Save&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now, let's add a notification stage so that whenever a deployment is approved in the CI/CD pipeline, a notification will be sent to a Slack channel, indicating who approved it.&lt;/p&gt;

&lt;p&gt;From the pipeline, click on &lt;strong&gt;Add Stage&lt;/strong&gt; after the gke-deploy stage, select &lt;strong&gt;Build&lt;/strong&gt; as the stage type, give this stage a name (e.g., &lt;code&gt;Notifications Stage&lt;/code&gt;), disable the Clone Codebase option, and click &lt;strong&gt;Set Up Stage&lt;/strong&gt;. Under Infrastructure, choose &lt;strong&gt;Use a New Infrastructure&lt;/strong&gt; → &lt;strong&gt;Cloud&lt;/strong&gt; and &lt;strong&gt;Linux → AMD64&lt;/strong&gt; for the Operating System. Under Execution, click &lt;strong&gt;Add Step&lt;/strong&gt; → &lt;strong&gt;Add Step&lt;/strong&gt; and find &lt;strong&gt;Plugin&lt;/strong&gt; in the Build section of the Step Library.&lt;/p&gt;

&lt;p&gt;Name this step (e.g., &lt;code&gt;Slack Notification&lt;/code&gt;), choose the Docker connector you previously created under Container Registry, for the image, use &lt;code&gt;plugins/slack&lt;/code&gt;, and add the following key-values under &lt;strong&gt;Optional Configuration&lt;/strong&gt; → &lt;strong&gt;Settings&lt;/strong&gt; (assuming the id for your Slack webhook secret is &lt;code&gt;slackwebhook&lt;/code&gt;). &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Key&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;webhook&lt;/td&gt;
&lt;td&gt;&lt;code&gt;&amp;lt;+secrets.getValue("slackwebhook")&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;template&lt;/td&gt;
&lt;td&gt;The deployment is moved to prod by &lt;code&gt;&amp;lt;+approval.approvalActivities[0].user.name&amp;gt;&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Notice the use of a Harness variable expression in the template, which retrieves the name of the approver from the previous stage.&lt;/p&gt;

&lt;p&gt;Before running the pipeline, one more update is needed. Currently, every image built, pushed, and deployed has the same image tag, making it challenging to track based on the build number. Harness provides powerful &lt;a href="https://developer.harness.io/docs/platform/variables-and-expressions/harness-variables/"&gt;built-in and custom variable expressions&lt;/a&gt; for various practical use cases.&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;Variables&lt;/strong&gt; for your pipeline and select &lt;strong&gt;+ Add Variable&lt;/strong&gt; at the pipeline level. Let’s add two variables: &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Variable Name&lt;/th&gt;
&lt;th&gt;Variable Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;imageName&lt;/td&gt;
&lt;td&gt;cca-registry/cca-app&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;imageTag&lt;/td&gt;
&lt;td&gt;&lt;code&gt;&amp;lt;+pipeline.sequenceId&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For the &lt;strong&gt;imageTag&lt;/strong&gt;, click the 📌 icon and select &lt;strong&gt;Expression&lt;/strong&gt;. Every time you run the pipeline, the pipeline sequence ID will change, and subsequently, the image that will be built and deployed will also change.&lt;/p&gt;

&lt;p&gt;Now that you’ve updated the pipeline to pass in the &lt;strong&gt;imageName&lt;/strong&gt; and &lt;strong&gt;imageTag&lt;/strong&gt; as variables, let’s update the codebase to replace the hardcoded values. Revert the changes to &lt;code&gt;deployment.yaml&lt;/code&gt; and &lt;code&gt;values.yaml&lt;/code&gt; you previously made. You'll observe that the &lt;code&gt;values.yaml&lt;/code&gt; file will receive the &lt;strong&gt;imageName&lt;/strong&gt; and &lt;strong&gt;imageTag&lt;/strong&gt; during pipeline runtime, and then the &lt;code&gt;deployment.yaml&lt;/code&gt; file will use those values from the &lt;code&gt;values.yaml&lt;/code&gt; file.&lt;/p&gt;
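
&lt;p&gt;For reference, the pattern looks roughly like the following: &lt;code&gt;values.yaml&lt;/code&gt; resolves the pipeline variables at runtime, and &lt;code&gt;deployment.yaml&lt;/code&gt; consumes them through Go templating. This is a sketch of the pattern rather than the exact file contents; check the CCA repository for the authoritative versions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# values.yaml -- resolved by Harness at runtime
image:
  name: &amp;lt;+pipeline.variables.imageName&amp;gt;
  tag: &amp;lt;+pipeline.variables.imageTag&amp;gt;

# deployment.yaml -- reads the values via Go templating, for example:
#   image: REGISTRY_HOST/PROJECT_ID/{{.Values.image.name}}:{{.Values.image.tag}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;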

&lt;p&gt;Now, click &lt;strong&gt;Save&lt;/strong&gt; and then &lt;strong&gt;Run&lt;/strong&gt;. After a successful &lt;strong&gt;gar-build-and-push&lt;/strong&gt; stage, you should see an image in the GAR repository with a numeric tag that matches the pipeline sequence ID. Right after, you should see the following prompt for approval: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A0U8SIAV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bhcgxqmprz5k7iacsemm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A0U8SIAV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bhcgxqmprz5k7iacsemm.png" alt="Harness Approval" width="800" height="402"&gt;&lt;/a&gt;&lt;br&gt;
You can (optionally) add a comment and click &lt;strong&gt;Approve&lt;/strong&gt;. The pipeline should continue as before, and you’ll see a deployment on your Kubernetes cluster. However, this time, you’ll also see a Slack notification resulting from your approval. To change the text that appears in the notification, modify the &lt;strong&gt;template&lt;/strong&gt; value in the Slack plugin step settings. &lt;/p&gt;
&lt;h2&gt;
  
  
  Security Tests and Policy Enforcement (Bonus Section)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;📝&lt;/th&gt;
&lt;th&gt;These features are only available on Harness paid plans&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h3&gt;
  
  
  Run OWASP Tests
&lt;/h3&gt;

&lt;p&gt;You can scan your code repositories using &lt;a href="https://owasp.org/www-project-dependency-check/"&gt;OWASP Dependency-Check&lt;/a&gt; within a Harness pipeline. Within the &lt;code&gt;gar-build-and-push&lt;/code&gt; stage, click on &lt;strong&gt;+ Add Step&lt;/strong&gt; → &lt;strong&gt;Add Step&lt;/strong&gt; before the &lt;code&gt;BuildAndPushGAR&lt;/code&gt; step. From the step library, find &lt;strong&gt;Owasp&lt;/strong&gt; under the Security Tests section.&lt;/p&gt;

&lt;p&gt;Use the following settings to configure the OWASP Dependency Check and click &lt;strong&gt;Apply Changes&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Setting Name&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Name&lt;/td&gt;
&lt;td&gt;Owasp Tests&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scan Mode&lt;/td&gt;
&lt;td&gt;Orchestration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Target.Name&lt;/td&gt;
&lt;td&gt;cca-owasp-tests&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Variant&lt;/td&gt;
&lt;td&gt;master (this is the branch name for the repo)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Log Level&lt;/td&gt;
&lt;td&gt;Info&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fail On Severity&lt;/td&gt;
&lt;td&gt;Critical&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You can use any string values for &lt;strong&gt;Name&lt;/strong&gt; and &lt;strong&gt;Target.Name&lt;/strong&gt;. For the &lt;strong&gt;Variant&lt;/strong&gt;, use the branch name for your codebase (e.g., &lt;code&gt;master&lt;/code&gt; or &lt;code&gt;main&lt;/code&gt;). Selecting &lt;strong&gt;Critical&lt;/strong&gt; for &lt;strong&gt;Fail On Severity&lt;/strong&gt; means that if there is any critical vulnerability, this test will fail and the pipeline execution will halt. You can check out the &lt;a href="https://developer.harness.io/docs/security-testing-orchestration/sto-techref-category/owasp-scanner-reference/"&gt;OWASP scanner reference&lt;/a&gt; to learn more about these configurations. &lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Save&lt;/strong&gt; and then &lt;strong&gt;Run&lt;/strong&gt;. If your codebase doesn’t have an OWASP critical bug, the pipeline should execute successfully. To enforce a fail on this OWASP scan, use a codebase with known vulnerabilities like &lt;a href="https://github.com/WebGoat/WebGoat"&gt;WebGoat&lt;/a&gt; and you’ll see the OWASP scanner in action.&lt;/p&gt;
&lt;h3&gt;
  
  
  Add a policy to mandate approval step on deployment stages
&lt;/h3&gt;

&lt;p&gt;Harness Policy As Code uses &lt;a href="https://www.openpolicyagent.org/"&gt;Open Policy Agent (OPA)&lt;/a&gt; as the central service to store and enforce policies for the different entities and processes across the Harness platform. In this section, you will define a policy that will deny a pipeline execution if there is no approval step defined in a deployment stage.&lt;/p&gt;
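
&lt;p&gt;Conceptually, the policy inspects the pipeline YAML that Harness passes to OPA as input and denies any deployment stage that lacks an approval step. A simplified Rego sketch of the idea (the actual library policy you'll use below is more thorough, handling cases such as parallel stages):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rego"&gt;&lt;code&gt;package pipeline

# Deny any Deployment stage that has no Harness Approval step
deny[msg] {
    stage := input.pipeline.stages[_].stage
    stage.type == "Deployment"
    not has_approval(stage)
    msg := sprintf("Deployment stage '%s' is missing an approval step", [stage.name])
}

has_approval(stage) {
    step := stage.spec.execution.steps[_].step
    step.type == "HarnessApproval"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;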

&lt;p&gt;From &lt;strong&gt;Project Setup&lt;/strong&gt; → &lt;strong&gt;Policies&lt;/strong&gt;, follow the wizard to create a policy from the policy library. Use the &lt;strong&gt;Pipeline - Approval&lt;/strong&gt; policy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wb2Wg9Kf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l5fywbpys3wxh6pf055c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wb2Wg9Kf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l5fywbpys3wxh6pf055c.png" alt="Pipeline Approval Policy" width="556" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next screen, choose &lt;strong&gt;Project&lt;/strong&gt; scope, trigger event &lt;strong&gt;On Run&lt;/strong&gt;, and for the severity, choose &lt;strong&gt;Error &amp;amp; Exit&lt;/strong&gt;. Next, click &lt;strong&gt;Yes&lt;/strong&gt; to apply the policy. &lt;/p&gt;

&lt;p&gt;Now, let’s remove the approval step from the &lt;code&gt;gke-deploy&lt;/code&gt; stage. Click &lt;strong&gt;Edit&lt;/strong&gt; on the pipeline and click the cross button on the Harness Approval step. Click &lt;strong&gt;Save&lt;/strong&gt; and then &lt;strong&gt;Run&lt;/strong&gt;. You should see the following error:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dV1OUt3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5tk79zboogjsegggfwhg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dV1OUt3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5tk79zboogjsegggfwhg.png" alt="Policy Enforcement In Action" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the Harness Approval step back, save, and run the pipeline; this time, the pipeline should execute successfully. An end-to-end successful pipeline execution will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D5tuodJ---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pze4gowt740k3wzrb0cu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D5tuodJ---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pze4gowt740k3wzrb0cu.png" alt="End to end pipeline execution" width="800" height="576"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  View the running application
&lt;/h2&gt;

&lt;p&gt;While connected to your GKE cluster, execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; cca-ns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This assumes that you deployed the application to the &lt;code&gt;cca-ns&lt;/code&gt; namespace.&lt;/p&gt;

&lt;p&gt;The output will be something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP    PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;        AGE
cca-app-service         LoadBalancer   34.118.227.33   34.152.47.53   80:30008/TCP   6d19h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Navigate to the IP address listed under the EXTERNAL-IP column in your output, and you should see the Captain Canary Adventure application running. Since the application is served on port 80, you can omit the port number from the URL.&lt;/p&gt;
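
&lt;p&gt;If you prefer the command line, you can grab the external IP with &lt;code&gt;kubectl&lt;/code&gt; and hit the service directly. A small convenience sketch, assuming the service name and namespace used in this tutorial:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Extract the LoadBalancer IP and request the app over HTTP
EXTERNAL_IP=$(kubectl get svc cca-app-service -n cca-ns \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://${EXTERNAL_IP}/"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
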

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iI_avofK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rm8u9ez6lih5s8ukmb3x.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iI_avofK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rm8u9ez6lih5s8ukmb3x.gif" alt="Captain Canary Application Running" width="600" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Homework Task
&lt;/h2&gt;

&lt;p&gt;If you’d like to take this pipeline one step further, you can leverage caching to share data across stages because each stage in a Harness CI pipeline has its own build infrastructure. Check out how to &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/caching-ci-data/save-cache-in-gcs/"&gt;save and restore cache from Google Cloud Storage (GCS)&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>gcp</category>
      <category>googlecloud</category>
      <category>cicd</category>
      <category>harness</category>
    </item>
    <item>
      <title>Ephemeral CI environments using ttl.sh and Gitness</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Mon, 18 Dec 2023 21:29:10 +0000</pubDate>
      <link>https://forem.com/harness/ephemeral-ci-environments-using-ttlsh-and-gitness-8jl</link>
      <guid>https://forem.com/harness/ephemeral-ci-environments-using-ttlsh-and-gitness-8jl</guid>
      <description>&lt;p&gt;In the realm of software development, balancing continuous integration with maintaining quality is a significant challenge. &lt;a href="https://ttl.sh/"&gt;ttl.sh&lt;/a&gt; and &lt;a href="https://docs.gitness.com/"&gt;Gitness&lt;/a&gt; offer a solution. ttl.sh is an ephemeral Docker image registry that allows for the creation of temporary image tags with built-in expiry. Gitness, an open-source platform by Harness, simplifies the management of source code repositories and development pipelines. This blog post will explore how these tools can be used to create temporary CI environments to expedite development processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Multiple developers working on different features can lead to merge conflicts and a bloated image registry. Traditional CI environments often retain Docker images from feature branches too long, which complicates image management and increases the likelihood of testing against outdated or incorrect images.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Urc5jNfO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t576xsvuko62jk6r7972.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Urc5jNfO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t576xsvuko62jk6r7972.png" alt="A PR workflow for ephemeral build environment" width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The workflow in the diagram leverages ttl.sh and Gitness to manage these challenges:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature Branch Workflow&lt;/strong&gt;: The creation of a pull request (PR) for a feature branch triggers a build in Gitness.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image Creation&lt;/strong&gt;: Gitness constructs a Docker image from the feature branch, using ttl.sh for tagging with a UUID and a time-to-live (TTL) limit. This process does not require credentials and maintains privacy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated Testing&lt;/strong&gt;: The image is put through automated tests. Should the tests fail, developers are notified; if they succeed, the process proceeds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image Promotion&lt;/strong&gt;: After a PR passes all checks and is merged, the image is given a permanent tag and pushed to a central image registry, which at this point requires proper credentials (see the promotion sketch right after this list).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
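
&lt;p&gt;To make step 4 concrete, here is what image promotion can look like in plain Docker commands. This is a hedged sketch: the UUID variable, registry host, image name, and version tag are all placeholders.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Pull the ephemeral image that passed all PR checks
docker pull ttl.sh/${EPHEMERAL_UUID}:1h

# Retag it with a permanent name and version
docker tag ttl.sh/${EPHEMERAL_UUID}:1h registry.example.com/team/app:v1.2.3

# Credentials are required only at this promotion stage
docker login registry.example.com
docker push registry.example.com/team/app:v1.2.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
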

&lt;p&gt;This approach ensures that only quality-assured images make it to the central registry, reducing the clutter and promoting a cleaner CI process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Let's build this setup step by step. We'll start by pushing an image to ttl.sh through Gitness, and then I’ll point you to resources for completing the remaining components of the system.&lt;/p&gt;

&lt;p&gt;Begin with the Gitness documentation to set up your first project. You can start a new repository or link an existing one from GitHub or GitLab. Any source code repository will do; the essential requirement is the presence of a Dockerfile.&lt;/p&gt;

&lt;p&gt;Navigate to "Pipelines" within your repository and create a new pipeline. Gitness will suggest a sample pipeline. Replace it with the following YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pipeline&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build-and-push&lt;/span&gt;
     &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;platform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;arch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;amd64&lt;/span&gt;
         &lt;span class="na"&gt;os&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;linux&lt;/span&gt;
       &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker_build&lt;/span&gt;
           &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
             &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
               &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ttl.sh/xxxx-yyyy-nnnn-2a2222-4b44&lt;/span&gt;
               &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1h&lt;/span&gt;
             &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;
           &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;plugin&lt;/span&gt;
     &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ci&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Observe the image repo and tag. ttl.sh is the image registry name and the &lt;strong&gt;UUID (xxxx-yyyy-nnnn-2a2222-4b44)&lt;/strong&gt; is the image name. You can replace the hard-coded image name with a Gitness secret for a dynamic image name. Click on &lt;strong&gt;Secrets&lt;/strong&gt; and &lt;a href="https://docs.gitness.com/pipelines/secrets"&gt;add a new Gitness secret&lt;/a&gt; called &lt;strong&gt;random_image_name&lt;/strong&gt;. Now you can update your pipeline YAML so that the image name is not fixed:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;repo: ttl.sh/${{ secrets.get("random_image_name") }}&lt;/code&gt;&lt;/p&gt;
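
&lt;p&gt;If you want to sanity-check the ephemeral registry behavior outside of Gitness, ttl.sh works with plain Docker as well. A minimal sketch, assuming you have Docker and &lt;code&gt;uuidgen&lt;/code&gt; available locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Generate a random, hard-to-guess image name; ttl.sh requires no credentials
IMAGE_NAME=$(uuidgen | tr '[:upper:]' '[:lower:]')

# The tag doubles as the time-to-live: this image expires after one hour
docker build -t ttl.sh/${IMAGE_NAME}:1h .
docker push ttl.sh/${IMAGE_NAME}:1h

# Anyone who knows the UUID can pull the image until the TTL expires
docker pull ttl.sh/${IMAGE_NAME}:1h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
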

&lt;p&gt;Save the pipeline and click on &lt;strong&gt;Run&lt;/strong&gt;. After executing the pipeline, your output should resemble the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9YbAD-JG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sbgmxuvh460vvdeinxox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9YbAD-JG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sbgmxuvh460vvdeinxox.png" alt="Successsful pipeline execution" width="795" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, add the following &lt;a href="https://docs.gitness.com/category/steps"&gt;steps&lt;/a&gt; yourself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add a &lt;a href="https://docs.gitness.com/pipelines/steps/run"&gt;run step&lt;/a&gt; to execute tests on the container you just built and pushed (a smoke-test sketch follows this list).&lt;/li&gt;
&lt;li&gt;Add a &lt;a href="https://docs.gitness.com/pipelines/triggers"&gt;trigger&lt;/a&gt; so that when a pull request is opened, Gitness can automatically trigger pipeline execution.&lt;/li&gt;
&lt;li&gt;Add a &lt;a href="https://docs.gitness.com/pipelines/steps/plugin"&gt;Slack plugin step&lt;/a&gt; so that failed tests trigger a Slack webhook notification.&lt;/li&gt;
&lt;li&gt;Add another &lt;a href="https://docs.gitness.com/pipelines/steps/run"&gt;run step&lt;/a&gt; so that when all tests pass, the image tag is retagged and pushed to a private image registry. You’ll need authentication at this step.&lt;/li&gt;
&lt;/ul&gt;
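
&lt;p&gt;For the first run step, the test contents depend entirely on your application, but a smoke test can be as simple as the following sketch. The port and the &lt;code&gt;/healthz&lt;/code&gt; endpoint are hypothetical placeholders for whatever your app exposes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Start the ephemeral image in the background
docker run -d --rm --name smoke -p 8080:8080 ttl.sh/${EPHEMERAL_UUID}:1h

# Give the app a moment to boot, then probe a health endpoint
sleep 5
curl --fail http://localhost:8080/healthz

# Clean up; --rm removes the container on stop
docker stop smoke
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
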

&lt;h2&gt;
  
  
  Security Considerations
&lt;/h2&gt;

&lt;p&gt;While ttl.sh's no-credential, ephemeral approach offers flexibility and simplicity for CI environments, it introduces unique security considerations. The convenience of pushing images without credentials is counterbalanced by potential security risks — namely, the possibility of unauthorized image pulls. You can mitigate these risks with short-lived images and UUIDs for image names:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Short-Lived Images&lt;/strong&gt;: By design, images in ttl.sh are ephemeral. Setting a short expiration time for an image means it's available for a limited time window, reducing the risk exposure period.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use of UUIDs&lt;/strong&gt;: Incorporating UUIDs into image names significantly lowers the risk of unauthorized access. The randomness and complexity of UUIDs make it exceedingly difficult for someone to guess the image name and pull it without authorization.&lt;/p&gt;

&lt;p&gt;However, for production environments, organizations might consider deploying a private version of ttl.sh. This allows for more control over the security aspects, such as network isolation, access control, and auditing capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaway
&lt;/h2&gt;

&lt;p&gt;Integrating ttl.sh and Gitness creates an ephemeral CI environment that streamlines the build and test process. It helps to avoid the accumulation of outdated images and keeps the pipeline lean. This method is not just about speed; it's about maintaining a manageable and efficient development workflow. Adopting these tools can lead to more frequent and dependable software delivery for your engineering teams. Check out and follow the &lt;a href="https://www.youtube.com/@Harnessio"&gt;Harness YouTube channel&lt;/a&gt; for more content like this.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>security</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Secure Container Image Signing with Cosign and OPA</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Tue, 28 Nov 2023 22:15:55 +0000</pubDate>
      <link>https://forem.com/harness/secure-container-image-signing-with-cosign-and-opa-2nbo</link>
      <guid>https://forem.com/harness/secure-container-image-signing-with-cosign-and-opa-2nbo</guid>
      <description>&lt;p&gt;As the adoption of containers in modern development continues to grow, ensuring the integrity of container images has become a pivotal aspect of application deployment strategies. In this video, Harness Developer Advocate &lt;a href="https://www.linkedin.com/in/diahmed/"&gt;Dewan Ahmed&lt;/a&gt; demonstrates how to leverage the combined power of Cosign and OPA for the secure deployment of container images to your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/PLvjcCCStzs"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;You can also &lt;a href="https://developer.harness.io/tutorials/cd-pipelines/kubernetes/cosign-opa?utm_campaign=cd-devrel"&gt;read the text version of this tutorial&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cosign</category>
      <category>opa</category>
      <category>harness</category>
    </item>
    <item>
      <title>From Zero to Kubernetes Deployment: Harness Continuous Delivery in Action</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Fri, 17 Nov 2023 14:45:48 +0000</pubDate>
      <link>https://forem.com/harness/from-zero-to-kubernetes-deployment-harness-continuous-delivery-in-action-1332</link>
      <guid>https://forem.com/harness/from-zero-to-kubernetes-deployment-harness-continuous-delivery-in-action-1332</guid>
      <description>&lt;p&gt;Harness Continuous Delivery pipelines enable you to orchestrate and automate your deployment workflows, allowing you to push updated application images to your target Kubernetes cluster seamlessly.&lt;/p&gt;

&lt;p&gt;In this video, &lt;a href="https://www.linkedin.com/in/diahmed/"&gt;Dewan Ahmed&lt;/a&gt;, a Developer Advocate at Harness, demonstrates how to install a Harness delegate and create entities such as Harness secrets, connectors, environments, services, and pipelines. Dewan guides you through a successful pipeline execution with a manual trigger and then shows how to create a pipeline variable to configure the Kubernetes namespace to be provided during pipeline execution.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/irDr4JlbmLY"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>harness</category>
      <category>cicd</category>
      <category>devops</category>
    </item>
    <item>
      <title>Securing CI/CD Images with Cosign and OPA</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Wed, 15 Nov 2023 14:01:27 +0000</pubDate>
      <link>https://forem.com/harness/securing-cicd-images-with-cosign-and-opa-3plo</link>
      <guid>https://forem.com/harness/securing-cicd-images-with-cosign-and-opa-3plo</guid>
      <description>&lt;p&gt;With the growing adoption of containers in modern development, ensuring the integrity of container images has become central to application deployment strategies. Rapid deployment offers agility, but it also presents security challenges. Container image signing addresses these challenges, allowing engineering teams to verify that the deployed images are authentic and unchanged.&lt;/p&gt;

&lt;p&gt;However, an authentic image is just one piece of the puzzle. Making sure these images meet organizational standards is important, and policy engine tools are crucial for that. By establishing and enforcing clear guidelines, these tools pave the way for a secure and streamlined deployment workflow.&lt;/p&gt;

&lt;p&gt;In essence, container image signing involves adding a digital stamp to an image, affirming its authenticity. This digital assurance guarantees that the image is unchanged from creation to deployment. In this blog, I'll explain how to sign container images for Kubernetes using &lt;a href="https://github.com/sigstore/cosign"&gt;Cosign&lt;/a&gt; and the &lt;a href="https://www.openpolicyagent.org/"&gt;Open Policy Agent&lt;/a&gt;. I will also share a tutorial that demonstrates these concepts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing the Right Tools for Image Signing and Verification
&lt;/h2&gt;

&lt;p&gt;Choosing the right tools for container image signing and verification is important in CI/CD pipeline security. Let's walk through the available options to find the one that best meets your needs for securing container images.&lt;/p&gt;

&lt;h3&gt;
  
  
  Image Signing and Verification Tools:
&lt;/h3&gt;

&lt;p&gt;Notary v1: Previously known as Docker Content Trust, Notary v1 uses &lt;a href="https://theupdateframework.io/"&gt;The Update Framework (TUF)&lt;/a&gt; but has some issues with signature portability and storage. Though it established a solid foundation in image signing, it lacks some of the enhanced security features found in more recent tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://grafeas.io/"&gt;Grafeas&lt;/a&gt;: While Grafeas offers a comprehensive solution for the software development lifecycle, it is not designed for public or open-source software (OSS) image verification. It's better suited for first-party integration, particularly with Google Kubernetes Engine (GKE).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/notaryproject/notation"&gt;Notary v2&lt;/a&gt;: The evolution to Notary v2 brought improvements in signature portability and integration with third-party key management solutions. However, it does not provide a certificate authority, leaving public key discovery for open-source image verification as an unresolved issue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://theupdateframework.io/"&gt;The Update Framework (TUF)&lt;/a&gt;: TUF is a framework, not a tool, designed to enhance the security of software update systems. It focuses on resilience against key compromises and attacks, employing verifiable records to verify the authenticity of update files. TUF's flexibility and integration ease make it a foundational element in securing software updates, though it's not a direct image signing tool like the others.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/sigstore/cosign"&gt;Cosign&lt;/a&gt;: In this context, Cosign from the Sigstore project offers a compelling solution. Its simplicity, registry compatibility, and effective link between images and their signatures provide a user-friendly and versatile approach. The integration of Fulcio for certificate management and Rekor for secure logging enhances Cosign's appeal, making it particularly suitable for modern development environments that prioritize security and agility.&lt;/p&gt;

&lt;p&gt;Cosign's strength lies in its verification process, which is vital in the CI/CD pipeline. When integrated with policy engines like the Open Policy Agent (OPA), Cosign ensures not only the authenticity of container images but also their compliance with organizational standards. Additionally, Cosign provides enhanced support for SPIFFE, GitHub Actions, or service account identities, and it includes warnings for signing OCI images by tag, highlighting the risks associated with tag mutability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Policy Enforcement Options:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Open Policy Agent (OPA)&lt;/strong&gt;: OPA's strength lies in its ability to define fine-grained policies as code, offering granular control over policy enforcement. Integrated seamlessly with Kubernetes, it enforces policies in real-time, preventing unauthorized image deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes Admission Controllers&lt;/strong&gt;: These controllers are a native part of the Kubernetes ecosystem, making them a straightforward choice for Kubernetes users. They excel at validating and enforcing policies for various Kubernetes resources, including pods, making them scalable for large deployments.&lt;/p&gt;

&lt;p&gt;When choosing tools for container image signing and policy enforcement, it's important to balance functionality, scalability, and community support. Among the options discussed, Cosign and OPA (Open Policy Agent) stand out for their effectiveness in securing containerized applications.&lt;/p&gt;

&lt;p&gt;Cosign simplifies the process of image signing and verification. It offers a user-friendly approach that caters to developers and security teams alike. With a growing user community and easy integration, it's a practical choice for securing container images.&lt;/p&gt;

&lt;p&gt;OPA, on the other hand, excels in policy enforcement within Kubernetes environments. It enables you to define and enforce policies as code, granting you fine-grained control over image deployments.&lt;/p&gt;

&lt;p&gt;In the upcoming section, let's delve into architectural diagrams and explore how the combination of Cosign and OPA offers a practical approach to sign and verify container images while enforcing them using a general policy engine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architectural Diagrams: Securing Container Images and Deployment
&lt;/h2&gt;

&lt;p&gt;Traditional cryptography uses public-private key pairs for signing and verifying images. However, modern practices, like Cosign, prefer keyless signing. In this approach, an OIDC (OpenID Connect) provider is used, making the process more streamlined and secure, as it removes the complexities of key management.&lt;/p&gt;

&lt;p&gt;Let's dive into two distinct but interconnected flows that demonstrate how container image signing with Cosign and policy enforcement using Open Policy Agent (OPA) play a crucial role in ensuring a secure deployment pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LSoUEhNM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fihn5p2rjnt1l2fwq9xz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LSoUEhNM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fihn5p2rjnt1l2fwq9xz.png" alt="An architect is building a Secure Base Image" width="800" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The process starts with an architect who selects a trusted Public Base Image as the foundation for their application. To ensure the image's integrity, the architect runs a vulnerability scanning process. This step identifies and addresses any security vulnerabilities within the base image.&lt;/p&gt;

&lt;p&gt;Next, any unnecessary or unused components are removed from the image, minimizing potential attack vectors. The architect adds the necessary libraries, dependencies, and software packages required for the application to run efficiently. Then, the image undergoes rigorous security testing to ensure that it meets security standards and aligns with organizational policies.&lt;/p&gt;

&lt;p&gt;The final step involves image signing with Cosign, a tool specifically designed for image signing. Cosign adds a digital stamp to the image, affirming its authenticity. This signature plays a crucial role in verifying the image's integrity during deployment. The signed image is then pushed to an artifact registry as a new, secure base image.&lt;/p&gt;
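
&lt;p&gt;With Cosign 2.x, that final signing step can be a single command. A minimal sketch in keyless mode, where the image reference is a placeholder and Cosign prompts for an OIDC login before recording the signature in the Rekor transparency log:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Sign by digest (not by tag) to avoid tag-mutability pitfalls
cosign sign registry.example.com/base/secure-base@sha256:&lt;digest&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
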

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kYkYrGc2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ck6yhlgqlj5m8umc273p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kYkYrGc2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ck6yhlgqlj5m8umc273p.png" alt="A developer gets a YES/NO deployment decision based on  raw `cosign verify` endraw  and OPA rules" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the deployment phase, a developer selects a base image for their application, either an Unsigned Public Base Image or a Signed Public Base Image. If the developer chooses an Unsigned Public Base Image, the deployment process checks the image's signature and evaluates it against predefined policies using Open Policy Agent (OPA). Due to the lack of a valid signature, the image is Denied For Deployment.&lt;/p&gt;

&lt;p&gt;On the other hand, if the developer opts for a Signed Public Base Image, the same verification and policy enforcement process is applied using OPA. This time, since the image has a valid signature, it is Allowed For Deployment.&lt;/p&gt;
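
&lt;p&gt;In a real cluster this YES/NO decision is encoded in OPA policy, but the underlying check is &lt;code&gt;cosign verify&lt;/code&gt;. Reduced to its simplest shell form, and assuming keyless signing with a placeholder identity, issuer, and image reference, the gate looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;IMAGE=registry.example.com/base/secure-base@sha256:&lt;digest&gt;

# Verify the signature, pinning the expected signer identity and OIDC issuer
if cosign verify \
     --certificate-identity platform-team@example.com \
     --certificate-oidc-issuer https://accounts.google.com \
     "${IMAGE}"; then
  echo "Valid signature: allowed for deployment"
else
  echo "No valid signature: denied for deployment" &gt;&amp;2
  exit 1
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
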

&lt;h2&gt;
  
  
  Hands-On: Deploying Secure Containers with Cosign and OPA
&lt;/h2&gt;

&lt;p&gt;Having explored the strengths of Cosign and OPA for container image signing, you're now ready to apply these tools in a practical scenario. To get a real-world feel for how these technologies can enhance the security of your Kubernetes deployments, &lt;a href="https://developer.harness.io/tutorials/cd-pipelines/kubernetes/cosign-opa?utm_campaign=cd-devrel"&gt;follow this hands-on tutorial&lt;/a&gt;. It will guide you through the process of securely deploying container images using the combined capabilities of Cosign and OPA in your Kubernetes cluster. The Harness Software Supply Chain Assurance (SSCA) module addresses the challenges of securing your software supply chain, including image signing and verification. To explore how SSCA can enhance your pipeline security, &lt;a href="https://www.harness.io/products/software-supply-chain-assurance?utm_campaign=cd-devrel"&gt;check it out&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>containers</category>
      <category>security</category>
      <category>devops</category>
    </item>
    <item>
      <title>KubeCon 2023 NA Recap - Developer Experience is Monumental</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Mon, 13 Nov 2023 15:26:35 +0000</pubDate>
      <link>https://forem.com/harness/kubecon-2023-na-recap-developer-experience-is-monumental-41f4</link>
      <guid>https://forem.com/harness/kubecon-2023-na-recap-developer-experience-is-monumental-41f4</guid>
      <description>&lt;p&gt;KubeCon + CloudNativeCon North America has wrapped up, and the buzz it generated is still in the air. With the halls of Chicago's convention center now quiet, those of us who attended are left with a wealth of new insights and ideas. The conference proved to be a fertile ground for learning and networking, with cloud native professionals / technologists from leading open projects from around the world sharing their knowledge.&lt;/p&gt;

&lt;p&gt;At Harness, we were right there in the thick of it, participating in the Co-Located events and engaging with the community. Our team dove into conversations, contributed to discussions, and absorbed a lot of feedback on everything from technical workflows to project management.&lt;/p&gt;

&lt;p&gt;For those who couldn't make it, or for attendees who want to revisit the highlights, our recap will bring you the key points from KubeCon 2023. We'll cover the sessions that made us think, the keynotes that motivated us, and the informal chats that often lead to the best ideas.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pulse of Cloud Native: KubeCon Session Breakdown
&lt;/h2&gt;

&lt;p&gt;As cloud native technology matures, the focus of KubeCon sessions has honed in on a few key areas: the construction of layered abstractions over Kubernetes, addressing the unique challenges within specific industry verticals, and the role of AI in cloud native spaces. While it's impossible to cover all the sessions within this recap, we've curated a subset of the extensive and enlightening discussions for you. Here are some highlights:&lt;/p&gt;

&lt;h3&gt;
  
  
  Argo and Flux
&lt;/h3&gt;

&lt;p&gt;Since deploying Kubernetes to cloud or on-prem environments has largely been standardized, the community is now working on adding orchestration on top of orchestration. The word “Scale” featured in the titles of no fewer than 30 talks. Argo and Flux have taken center stage as key orchestration tools for declarative GitOps management of Kubernetes workloads. &lt;/p&gt;

&lt;p&gt;These projects have been in the Cloud Native Computing Foundation for a few years now. Organizations are beginning to report their success in using them to manage not just hundreds or thousands of nodes, but &lt;a href="https://sched.co/1R2po"&gt;hundreds&lt;/a&gt; - &lt;a href="https://sched.co/1TZ2V"&gt;to&lt;/a&gt; - &lt;a href="https://sched.co/1R2mf"&gt;thousands&lt;/a&gt; of clusters and deployed applications. The piling up of abstraction layers is leading to an everything-as-code approach for which a multitude of vendors and community projects have stepped up to offer their resources and solutions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenging Verticals
&lt;/h3&gt;

&lt;p&gt;With the maturing of deployment architectures, a marked shift has also taken place to solve problems in industry verticals where cloud native has been a challenge. One example is &lt;a href="https://sched.co/1R2m3"&gt;deploying to secure or air-gapped environments&lt;/a&gt;. Crossing the air gap remains a wrench in the automation process. Organizations that operate in that space developed a couple of practical solutions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Develop as much as possible in unclassified environments while ensuring the application is as self-contained and portable as possible.&lt;/li&gt;
&lt;li&gt;Closely engage with the open source community so that CVEs are identified and acted on quickly prior to crossing the air gap.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AI
&lt;/h3&gt;

&lt;p&gt;As might be expected, artificial intelligence was prominent in several sessions. The industry is still determining whether AI (namely LLMs) has reached the peak of the hype cycle, and what role it plays in cloud native. Compared to the general buzz, there were relatively few sessions specifically on AI/ML. Even then, those sessions were less about consuming generative AI, and more about deploying custom AI/ML applications.&lt;/p&gt;

&lt;p&gt;The focus then was largely on the massive compute resources required for AI training and deployment. The community is still catching up when it comes to developing open source alternatives to the dominant proprietary stacks, but some projects have emerged to propose community-driven approaches to model training (&lt;a href="https://github.com/FederatedAI/FATE-LLM"&gt;federated LLMs&lt;/a&gt; for example).&lt;/p&gt;

&lt;h2&gt;
  
  
  On the Decline: Yesterday's Hot Topics Take a Backseat
&lt;/h2&gt;

&lt;p&gt;Since last year’s KubeCon, a few parts of the ecosystem have started to either mature or decrease in popularity. While by no means on the way out, we noticed a few of the following were less emphasized in this year’s talks and vendor showcases.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pure Kubernetes Implementations
&lt;/h3&gt;

&lt;p&gt;While the number of &lt;a href="https://www.cncf.io/training/certification/software-conformance/"&gt;certified Kubernetes distributions&lt;/a&gt; has ticked back up to historic highs, vendors are now focusing on the application layer rather than leading with K8s. Deploying Kubernetes to the public and private cloud is now largely standardized. Organizations are therefore turning their infrastructure attention to edge cases such as maintaining legacy &lt;a href="https://kubevirt.io/"&gt;VM workloads&lt;/a&gt;, or else are moving up the stack into &lt;a href="https://knative.dev/docs/"&gt;serverless&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unchecked Developer Autonomy
&lt;/h3&gt;

&lt;p&gt;Security is no longer optional. The rise of &lt;a href="https://cybersecurityventures.com/software-supply-chain-attacks-to-cost-the-world-60-billion-by-2025/"&gt;software supply chain attacks&lt;/a&gt; and the growth of open source and cloud resources have spurred companies to balance developer productivity with the need to secure their data and environments. While developer experience is still paramount, the ecosystem is realizing that “you build it, you run it” should not mean “no guardrails”. Organizations are moving toward a more centralized experience, implementing managed platforms like &lt;a href="https://www.harness.io/blog/backstagecon-2023-recap-developer-experience-top-of-mind"&gt;Backstage&lt;/a&gt;. Tools like Keycloak and OPA have risen to “must-haves” in cluster administration and permission management.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dialogues at the Booth: Harnessing Customer Voices
&lt;/h2&gt;

&lt;p&gt;Harness has been a long-time advocate of the Cloud Native community and ecosystem, and is proud to help sponsor KubeCon. We had a multitude of great conversations across our &lt;a href="https://www.harness.io/blog/argocon-2023-gitops-achieves-escape-velocity"&gt;ArgoCon&lt;/a&gt;, &lt;a href="https://www.harness.io/blog/backstagecon-2023-recap-developer-experience-top-of-mind"&gt;BackstageCon&lt;/a&gt;, Litmus Chaos, and main-floor KubeCon booths. Helping support the next generation of workloads by reducing friction in software delivery is top of mind for many of the individuals we got to interact with over the course of the week. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iTcKj76x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z9w5wt20neyha2ok49u5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iTcKj76x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z9w5wt20neyha2ok49u5.png" alt="Harness Team at KubeCon at the Helm" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Engineering pillars such as security, scalability, and robustness, which in a non-cloud-native world would have been afterthoughts, are now front and center during the development cycle thanks to the ever-increasing push to shift left. The Harness Platform is well poised to help reduce significant toil as more expertise is disseminated across your evolving delivery pipelines. We are excited to see what is in store over the course of the year before KubeCon 2024. &lt;/p&gt;

&lt;h2&gt;
  
  
  Next Stop, Salt Lake City For KubeCon NA 2024
&lt;/h2&gt;

&lt;p&gt;As we wrap up our journey at KubeCon 2023, we're taking a moment to reflect on the rich tapestry of ideas and innovations that were shared. This year's event has not only reinforced the pivotal role of cloud native technologies in shaping the future of software but also highlighted the evolving landscape where some trends rise and others give way to more pressing innovations.&lt;/p&gt;

&lt;p&gt;At Harness, we've had the privilege of engaging directly with the community that's pushing these boundaries. Our conversations at the booth brought us face to face with the pulse of the industry—where the enthusiasm for Kubernetes (K8s) is as robust as ever, and the love for our four-legged friends (K9s) continues to bring smiles and a sense of camaraderie.&lt;/p&gt;

&lt;p&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u90yNqTP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/txjzkplvxfgx7jdkspih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u90yNqTP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/txjzkplvxfgx7jdkspih.png" alt="A Harness, harness: K9s for K8s" width="800" height="1067"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking beyond the current cloud native vistas, we're excited about what's on the horizon. As we set our sights on KubeCon in Salt Lake City, the next flagship conference to gather adopters of cloud native technology, we carry forward the insights and feedback we've garnered here in Chicago. The dialogue doesn't end with the closing of the conference doors; it's just the beginning. We invite you to continue these conversations with us, explore how Harness can streamline your DevOps journey, and share in the excitement for what's to come.&lt;/p&gt;

&lt;p&gt;Until we meet again, let's keep pushing the envelope in our respective fields, inspired by the collective wisdom of KubeCon 2023. And remember, whether it's for your infrastructure needs or a harness for your K9, &lt;a href="https://app.harness.io/auth/#/signup/?module=cd&amp;amp;utm_source=website&amp;amp;utm_medium=harness-blog&amp;amp;utm_campaign=cd-devrel&amp;amp;utm_content=kubecon"&gt;Harness is here to support you&lt;/a&gt;. See you in &lt;a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america-2024/"&gt;Salt Lake City&lt;/a&gt;!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nick, Dewan, Ravi&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubecon</category>
      <category>kubernetes</category>
      <category>conference</category>
      <category>recap</category>
    </item>
    <item>
      <title>Harness Developer Hub - Ease of Authoring with Git Triggers</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Fri, 27 Oct 2023 13:21:39 +0000</pubDate>
      <link>https://forem.com/harness/harness-developer-hub-ease-of-authoring-with-git-triggers-3h26</link>
      <guid>https://forem.com/harness/harness-developer-hub-ease-of-authoring-with-git-triggers-3h26</guid>
      <description>&lt;p&gt;It’s been about a year since &lt;a href="https://www.harness.io/blog/introducing-the-harness-developer-hub-beta-release" rel="noopener noreferrer"&gt;we launched Harness Developer Hub&lt;/a&gt; [HDH] in Beta. Today, HDH is GA and is serving tens of thousands of unique visitors every month and hundreds of thousands of pageviews every month all across the globe. All of this while supporting hundreds of contributors with varying levels of skills. The traffic and number of contributors in the &lt;a href="https://github.com/harness/developer-hub" rel="noopener noreferrer"&gt;public repository&lt;/a&gt; continues to grow as we expand the capabilities of HDH. &lt;/p&gt;

&lt;p&gt;Architecturally, HDH is a &lt;a href="https://docusaurus.io/" rel="noopener noreferrer"&gt;Docusaurus&lt;/a&gt; implementation. Our site embraces documentation-as-code as a paradigm and is no different from any other modern TypeScript [JavaScript] based application. It is an application that multiple contributors work on, and it needs to be built and deployed all throughout the day. &lt;/p&gt;

&lt;p&gt;Over the previous year, we have made two shifts in how we build and deploy. We now treat every commit as a potential release: we build multiple times throughout the day with every git commit, and deploy multiple times throughout the day with every merge to our main branch. Let’s look at our current solution and then jog down memory lane to see how we evolved. &lt;/p&gt;

&lt;h2&gt;
  
  
  Current HDH Pipeline Strategy
&lt;/h2&gt;

&lt;p&gt;We leverage several Harness capabilities to deliver HDH to the world.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lgiqshx21qk5dm3kg9l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lgiqshx21qk5dm3kg9l.png" alt="HDH Pipeline Triggered by Git Repository"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Starting at the Repository
&lt;/h2&gt;

&lt;p&gt;Our source code management solution is the source of truth for HDH and the genesis of changes being published. Webhook events fire on several SCM events for Harness to process: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Branch or tag creation/deletion. &lt;/li&gt;
&lt;li&gt;Pull Request Events - Created/merged/synchronized/updated/closed&lt;/li&gt;
&lt;li&gt;Git Pushes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These events are then processed by Harness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Harness Build and Deploy - Conditionally from Git Hooks in the Cloud
&lt;/h2&gt;

&lt;p&gt;Our goal is to provide preview/ephemeral builds for changes that are represented in a Pull Request. To do this, we need to remotely build the Docusaurus instance, which leverages Yarn and NPM to facilitate the build. We build on every net new commit to the PR. &lt;/p&gt;

&lt;p&gt;We build via a Harness Cloud [hosted] build node so we do not have to manage build infrastructure and dependencies on the build node. For performance, we also leverage &lt;a href="https://developer.harness.io/docs/continuous-integration/use-ci/caching-ci-data/cache-intelligence/" rel="noopener noreferrer"&gt;Cache Intelligence&lt;/a&gt;, which by a conservative estimate sped up our builds by more than 30%. Since implementing the current setup, we have had over 9,000 builds. &lt;/p&gt;

&lt;p&gt;From a deployment standpoint, we deploy to our static host, which is Netlify. The flexibility and extensibility of Harness allow us to bring a plugin that &lt;a href="https://github.com/harness-community/drone-netlify" rel="noopener noreferrer"&gt;interacts with Netlify’s APIs&lt;/a&gt;. We make a decision in &lt;a href="https://commons.apache.org/proper/commons-jexl/reference/syntax.html" rel="noopener noreferrer"&gt;JEXL&lt;/a&gt; on whether a build heads to a preview environment or gets published to production. &lt;/p&gt;

&lt;p&gt;Preview Logic [if branch is not main]:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;+trigger.event&amp;gt;.equals("PR") &amp;amp;&amp;amp; &amp;lt;+trigger.branch&amp;gt;!~"/^main$/"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Production Logic [if branch is main]:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;+trigger.event&amp;gt;.equals("PUSH") &amp;amp;&amp;amp; &amp;lt;+trigger.targetBranch&amp;gt;.equals("main")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To configure this &lt;a href="https://developer.harness.io/tutorials/cd-pipelines/trigger/" rel="noopener noreferrer"&gt;Harness Trigger&lt;/a&gt;, here is our YAML configuration, which looks out for a few events.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source:
type: Webhook
spec:
type: Github
spec:
type: PullRequest
spec:
connectorRef: hdh_gh_connector
autoAbortPreviousExecutions: false
payloadConditions: []
headerConditions: []
actions:
- Open
- Synchronize
- Reopen
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Based on the condition, we fire a slightly different request to the Netlify API. Once we get the results of the Netlify API call, we comment back to the GitHub PR. This allows the contributor to preview their work in a live site if a preview flow is executed. In totality, the Pipeline looks as follows in the Harness Editor:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1069hd17p9b9ipz2yu5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1069hd17p9b9ipz2yu5.png" alt="Harness HDH Editable Pipeline&amp;lt;br&amp;gt;
"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, the Cache Intelligence step is easy to weave in during the Build Stage. An execution will look as follows in the Harness UI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6thbo2quam267l279blu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6thbo2quam267l279blu.png" alt="Executed Git Trigger Pipeline for HDH"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pipelines are designed to evolve. We had two other renditions of the Pipeline which we optimized over the year to produce what we are currently leveraging today.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pipelines Should Evolve
&lt;/h2&gt;

&lt;p&gt;We have embraced two principles as we evolved our pipelines: the &lt;a href="https://en.wikipedia.org/wiki/KISS_principle" rel="noopener noreferrer"&gt;KISS principle&lt;/a&gt; to take a more simplistic approach, and the &lt;a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself" rel="noopener noreferrer"&gt;DRY principle&lt;/a&gt; to cut out duplicate steps/tests. Our second rendition was Kubernetes-heavy for the static site: we used to maintain our own preview environment even though Netlify provided this out of the box. Once we learned of that feature, we easily modified our HDH Pipeline to call the Netlify APIs directly for preview builds. &lt;/p&gt;

&lt;p&gt;If you’d like to continuously improve your software delivery capabilities, I would implore you to consider &lt;a href="https://app.harness.io/auth/#/signup/?module=cd&amp;amp;utm_source=website&amp;amp;utm_medium=harness-blog&amp;amp;utm_campaign=cd-devrel&amp;amp;utm_content=hdh" rel="noopener noreferrer"&gt;signing up and using the Harness Platform&lt;/a&gt; to help you with your goals.&lt;/p&gt;

&lt;p&gt;Cheers,&lt;/p&gt;

&lt;p&gt;-Ravi&lt;/p&gt;

</description>
      <category>git</category>
      <category>harness</category>
      <category>automation</category>
      <category>docsascode</category>
    </item>
    <item>
      <title>Need for Automation - GitOps at Scale</title>
      <dc:creator>Dewan Ahmed</dc:creator>
      <pubDate>Wed, 18 Oct 2023 14:08:20 +0000</pubDate>
      <link>https://forem.com/harness/need-for-automation-gitops-at-scale-1dcp</link>
      <guid>https://forem.com/harness/need-for-automation-gitops-at-scale-1dcp</guid>
      <description>&lt;p&gt;Building a skyscraper is a lot like building software. Initially, you might experiment with materials on a different site or check how beams handle stress. Once confident, you lay the foundation and then start building upwards. Similarly, in software development, you kick things off with a proof of concept (POC) on a small scale. Once you're convinced of the tooling's capacity, you scale up to production. GitOps is no different. &lt;/p&gt;

&lt;p&gt;You typically begin with trials, using Git as the backbone for declarative infrastructure and applications; the challenge arises as you scale: how do you efficiently manage everything? And just as you'd need machinery to help construct a skyscraper, in the world of GitOps, tools like Terraform become invaluable, especially when setting up or "bootstrapping" these automation tools. In this blog, we'll navigate through the early, middle, and advanced stages (day 0, day 1, and day 2) of GitOps and explore how Terraform can simplify the scaling process for day 2.&lt;/p&gt;

&lt;h3&gt;
  
  
  Day 0 GitOps Challenges: Laying the Groundwork
&lt;/h3&gt;

&lt;p&gt;Day 0 in GitOps can be likened to the preparatory phase of constructing a skyscraper where materials and designs are rigorously tested. At this initial stage, you're not delving into the complexities of scaling. Instead, you're laying the groundwork, familiarizing yourself with the fundamental principles of GitOps.&lt;/p&gt;

&lt;p&gt;A practical starting point is to &lt;a href="https://developer.harness.io/tutorials/cd-pipelines/kubernetes/manifest"&gt;set up a basic GitOps workflow to deploy a Kubernetes manifest&lt;/a&gt; from a single git repository to a single cluster. At this juncture, the majority of tasks can be managed directly through the CLI or UI. The environment is simple enough that manually invoking commands or using native interfaces is sufficient. While tools like Terraform offer automation benefits, they're not essential at this stage. The primary objective of Day 0 is exploration: understanding the tools, assessing their fit, and choosing the best ones for your specific needs. The experience and knowledge garnered here form the bedrock for the challenges and complexities of the stages ahead.&lt;/p&gt;

&lt;h3&gt;
  
  
  Day 1 GitOps Challenges: Setting the Foundation
&lt;/h3&gt;

&lt;p&gt;Day 1 in GitOps is like getting the ground ready for building a skyscraper. After your first dive into GitOps, the real work begins.&lt;/p&gt;

&lt;p&gt;A big task is figuring out how to keep application code and configuration separate. When your app has services across multiple Git repositories, how do you make sure everything deploys smoothly and works together?&lt;/p&gt;

&lt;p&gt;Next, as you start automating your CI pipeline, there's a tricky part: if you push manifest changes to a Git repository, you might accidentally start an endless cycle of build jobs and commits. You need to set up your pipelines right to avoid this mess.&lt;/p&gt;
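
&lt;p&gt;One common guard, sketched below under the assumption that your CI bot commits manifest bumps under a fixed author name (the &lt;code&gt;ci-bot&lt;/code&gt; name here is hypothetical), is to skip the build when the triggering commit came from the bot itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Early in the build job: inspect the author of the triggering commit
LAST_AUTHOR=$(git log -1 --pretty=%an)

if [ "${LAST_AUTHOR}" = "ci-bot" ]; then
  echo "Manifest bump from CI bot detected; skipping build to break the trigger loop"
  exit 0
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
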

&lt;p&gt;Also, as you add more to your app, developers will need to deploy different services, like deployments and stateful sets. Each of these might have its own settings. While tools like Kustomize can help, if you're not careful, you can end up with "overlay folders hell". It becomes hard to know what's where and manage everything.&lt;/p&gt;

&lt;p&gt;Day 1 is all about sorting out these early challenges, setting things up so you're ready for bigger tasks and scaling in the next stages.&lt;/p&gt;

&lt;h3&gt;
  
  
  Day 2 GitOps Challenges: Automation Complexities
&lt;/h3&gt;

&lt;p&gt;As we progress to Day 2 of our GitOps journey, we encounter several challenges:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bootstrapping&lt;/strong&gt;: Tools like ArgoCD are great for automating deployments, but how do we set up ArgoCD itself? This is the classic bootstrapping problem in the GitOps context.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tool Reliability&lt;/strong&gt;: Automation is essential for system consistency. But this is true ONLY if the automation tool is reliable and consistent.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Understanding the End Goal&lt;/strong&gt;: With declarative tools like Kubernetes and Terraform, we define the desired outcome. However, we need to clearly understand and define that end state first.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Terraform offers solutions to these Day 2 challenges:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tackling the Bootstrapping Dilemma&lt;/strong&gt;: One of Terraform's core strengths is its ability to set up and provision tools. So, when it comes to initializing GitOps tools like ArgoCD, Terraform can declaratively define the infrastructure, software installations, and configurations. This helps sidestep the "chicken or the egg" problem because Terraform is the tool you use to deploy your deployment tools.&lt;/p&gt;
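
&lt;p&gt;As a sketch of what that bootstrapping can look like in practice, Terraform first provisions the cluster and then installs ArgoCD, which takes over application delivery from there. The module names below are placeholders for whatever your own configuration defines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init

# Hypothetical module that creates the Kubernetes cluster
terraform apply -target=module.gke_cluster

# Hypothetical module that installs ArgoCD (e.g., via the Helm provider)
terraform apply -target=module.argocd

# From here, ArgoCD reconciles application manifests from Git,
# while Terraform keeps managing only the platform layer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
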

&lt;p&gt;&lt;strong&gt;Ensuring Reliability and Consistency&lt;/strong&gt;: Terraform is designed with idempotency in mind, meaning you can run the same configuration multiple times and get the same result. This ensures that your infrastructure is reliable and consistent. If something drifts from the desired configuration, Terraform will recognize it and can revert it back, ensuring that the tool itself becomes a reliable part of the automation process.&lt;/p&gt;
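
&lt;p&gt;That drift check is also scriptable. A minimal sketch using &lt;code&gt;terraform plan -detailed-exitcode&lt;/code&gt;, where exit code 2 means the real infrastructure has drifted from the declared state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform plan -detailed-exitcode -out=tfplan
case $? in
  0) echo "No drift: infrastructure matches the desired state" ;;
  2) echo "Drift detected: run 'terraform apply tfplan' to reconcile" ;;
  *) echo "Plan failed; check the output above" ;;
esac
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
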

&lt;p&gt;&lt;strong&gt;Setting Clear End Goals with a Declarative Approach&lt;/strong&gt;: With Terraform, you define your infrastructure as code in a declarative manner. You specify what you want the end state to look like, and Terraform figures out how to achieve it. This directly answers the challenge of needing to know your end state: you define it in your Terraform configuration, and the tool takes care of making it happen.&lt;/p&gt;

&lt;p&gt;For a closer look at how Terraform can boost GitOps automation, the following webinar provides more insights:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/xqgOV23VcoM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Build software like skyscrapers
&lt;/h3&gt;

&lt;p&gt;Just as constructing a skyscraper involves meticulous planning, testing, and execution in phases, the journey of GitOps follows a similar trajectory. We start with small-scale experiments, lay a robust foundation, and then scale up, tackling complex automation challenges along the way. Each stage has its hurdles, but as we've seen, with the right tools and strategies, these challenges can be addressed effectively. Terraform, in particular, stands out as a battle-tested tool in this journey, providing solutions to many of the complexities of Day 2. &lt;/p&gt;

&lt;p&gt;If you're keen on elevating your GitOps game, especially in terms of automation, I strongly recommend giving the &lt;a href="https://developer.harness.io/docs/platform/automation/terraform/harness-terraform-provider-overview/?utm_source=dewan&amp;amp;utm_medium=harness-blog&amp;amp;utm_campaign=cd-devrel&amp;amp;utm_content=dewan-devrel"&gt;Harness Terraform provider&lt;/a&gt; a spin. Let's build software like skyscrapers - strong, scalable, and awe-inspiring!&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>gitops</category>
      <category>cicd</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
