<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Lionel♾️☁️</title>
    <description>The latest articles on Forem by Lionel♾️☁️ (@softwaresennin).</description>
    <link>https://forem.com/softwaresennin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F829289%2Fd28d81b9-d713-47df-a7de-79556af67bc3.jpg</url>
      <title>Forem: Lionel♾️☁️</title>
      <link>https://forem.com/softwaresennin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/softwaresennin"/>
    <language>en</language>
    <item>
      <title>End to End CI/CD pipeline using GitHub Actions for Android Application</title>
      <dc:creator>Lionel♾️☁️</dc:creator>
      <pubDate>Fri, 13 Dec 2024 18:39:19 +0000</pubDate>
      <link>https://forem.com/devcloudninjas/end-to-end-cicd-pipeline-using-github-actions-for-android-application-36i5</link>
      <guid>https://forem.com/devcloudninjas/end-to-end-cicd-pipeline-using-github-actions-for-android-application-36i5</guid>
      <description>&lt;p&gt;&lt;em&gt;In this article, you will get a brief idea about how to create an End to End CI/CD Pipeline using GitHub Actions for an Android Application&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Project Source Code&lt;/strong&gt; : &lt;a href="https://github.com/devcloudninjas/DevOps-Projects/tree/master/DevOps%20Project-14" rel="noopener noreferrer"&gt;LINK&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Here, we will be covering several use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;How to trigger one workflow from another workflow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to run two jobs that depend on each other.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to add the public IP of the GitHub Actions runner to the security group of a JFrog instance running on EC2, on port 8082, so that GitHub Actions can access JFrog to upload the .apk file into a repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to integrate SonarQube and Teams with GitHub Actions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to create a cron job in a workflow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to delete artifacts created during the workflow run.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to clean the caches that GitHub Actions creates every time you run a workflow.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step-by-Step Process&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq4l9eqh941rpucbj5y7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq4l9eqh941rpucbj5y7.png" alt="IMG" width="736" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00g1mr4a2szqyjkl18zm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00g1mr4a2szqyjkl18zm.png" alt="IMG" width="736" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To create a workflow, go to Actions in your GitHub repository and choose a template YAML file, or click “&lt;strong&gt;set up a workflow yourself&lt;/strong&gt;”. I chose Android CI, since my application is an Android application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjotveksxh4sf1knilxn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjotveksxh4sf1knilxn.png" alt="IMG" width="736" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the basic template you will get for the CI part of an Android application.&lt;/p&gt;

&lt;p&gt;Let me explain some of the terms used in the workflow file:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;name&lt;/strong&gt;: The name of the workflow as it will appear in the “Actions” tab of the GitHub repository. Here it is “&lt;strong&gt;Android CI&lt;/strong&gt;”.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;on:&lt;/strong&gt; Specifies the trigger for this workflow. Here the workflow will be triggered by a &lt;strong&gt;push&lt;/strong&gt; event on the “&lt;strong&gt;main&lt;/strong&gt;” branch and a &lt;strong&gt;pull_request&lt;/strong&gt; event targeting the “&lt;strong&gt;main&lt;/strong&gt;” branch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;jobs&lt;/strong&gt;: Groups together all the jobs that run in the &lt;code&gt;Android CI&lt;/code&gt; workflow. A job is a set of steps that execute on the same runner, and a single workflow file can contain multiple jobs. In this example there is a single job named &lt;strong&gt;build&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;runs-on&lt;/strong&gt;: Configures the job to run on the latest version of an &lt;strong&gt;Ubuntu Linux runner&lt;/strong&gt;. This means the job will execute on a fresh virtual machine hosted by GitHub. Windows and macOS runners are also available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;steps:&lt;/strong&gt; Groups together all the steps that run in the &lt;code&gt;build&lt;/code&gt; job. Each item nested under this section is a separate action or shell script.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;uses: actions/checkout@v3 :&lt;/strong&gt; The &lt;code&gt;uses&lt;/code&gt; keyword specifies that this step will run &lt;code&gt;v3&lt;/code&gt; of the &lt;code&gt;actions/checkout&lt;/code&gt; action. This is an action that checks out your repository onto the runner, allowing you to run scripts or other actions against your code (such as build and test tools). You should use the checkout action any time your workflow will run against the repository's code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;uses: actions/setup-java@v3&lt;/strong&gt;: This step uses the &lt;code&gt;actions/setup-java@v3&lt;/code&gt; action to install the specified JDK version (this example uses JDK 11) of the 'temurin' distribution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;run: chmod +x gradlew&lt;/strong&gt;: The &lt;code&gt;run&lt;/code&gt; keyword tells the job to execute a command on the runner. In this case, you are granting execute permission to the gradlew script.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;run: ./gradlew build&lt;/strong&gt;: In this case, you are building the code with Gradle.&lt;/p&gt;
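&lt;p&gt;Putting these terms together, the starter workflow looks roughly like this (a sketch of the standard Android CI template; the file GitHub generates for you may differ slightly):&lt;/p&gt;

```yaml
# Sketch of the basic "Android CI" starter workflow
name: Android CI

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up JDK 11
        uses: actions/setup-java@v3
        with:
          java-version: '11'
          distribution: 'temurin'
      - name: Grant execute permission for gradlew
        run: chmod +x gradlew
      - name: Build with Gradle
        run: ./gradlew build
```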

&lt;p&gt;After this, click &lt;strong&gt;Start commit&lt;/strong&gt;, add a commit message, and click Commit. This will create a basic Android CI workflow in GitHub Actions.&lt;/p&gt;

&lt;p&gt;To create a secret for GitHub Actions, go to Settings, then Secrets, then Actions, and create the required secrets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftp9z68ndeuphxuqpqed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftp9z68ndeuphxuqpqed.png" alt="IMG" width="736" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7meczf6lfidpq31fhm4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7meczf6lfidpq31fhm4.png" alt="IMG" width="736" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s modify the android.yml file into the full workflow.&lt;/p&gt;

&lt;p&gt;Let’s discuss it part by part.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffh9fdkk0vj2ej49a5cre.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffh9fdkk0vj2ej49a5cre.png" alt="IMG" width="614" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, the workflow is triggered whenever we push to the “main”, “qa”, or “develop” branches, and whenever a pull request targets the “main” or “qa” branches.&lt;/p&gt;
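&lt;p&gt;The trigger section described above can be sketched as:&lt;/p&gt;

```yaml
# Triggers: pushes to main/qa/develop, pull requests targeting main/qa
on:
  push:
    branches: [ "main", "qa", "develop" ]
  pull_request:
    branches: [ "main", "qa" ]
```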

&lt;p&gt;You can use environment variables to store information that you want to reference in your workflow. You reference environment variables within a workflow step or an action, and the variables are interpolated on the runner machine that runs your workflow. Commands that run in actions or workflow steps can create, read, and modify environment variables.&lt;/p&gt;

&lt;p&gt;You can define environment variables that are scoped for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The entire workflow, by using &lt;a href="https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#env" rel="noopener noreferrer"&gt;env&lt;/a&gt; at the top level of the workflow file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The contents of a job within a workflow, by using &lt;a href="https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idenv" rel="noopener noreferrer"&gt;jobs.&amp;lt;job_id&amp;gt;.env&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A specific step within a job, by using &lt;a href="https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsenv" rel="noopener noreferrer"&gt;jobs.&amp;lt;job_id&amp;gt;.steps[*].env&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here in the workflow we have created an env at the top level, with the variable name “AWS_DEFAULT_REGION” and the value “ap-south-1”.&lt;/p&gt;
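&lt;p&gt;A workflow-level env block of this kind looks like the following (the echo step is just an illustration):&lt;/p&gt;

```yaml
# Workflow-level environment variable, visible to every job and step
env:
  AWS_DEFAULT_REGION: ap-south-1

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Show region
        run: echo "Using region $AWS_DEFAULT_REGION"
```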

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5o9yuyvhr9k4ilcmm987.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5o9yuyvhr9k4ilcmm987.png" alt="IMG" width="736" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;actions/checkout@v3&lt;/strong&gt; and &lt;strong&gt;actions/setup-java@v3&lt;/strong&gt; steps work exactly as described earlier: checkout checks out the repository onto the runner, and setup-java installs JDK 11 of the 'temurin' distribution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu24ikjqo7klfid6mwvrt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu24ikjqo7klfid6mwvrt.png" alt="IMG" width="736" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;run: chmod +x gradlew&lt;/strong&gt;: As before, this grants execute permission to the gradlew script.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq44z3put6uljehdsquj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq44z3put6uljehdsquj.png" alt="IMG" width="736" height="562"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;run: ./gradlew clean&lt;/strong&gt;: Gradle clean deletes the build directory if one is already present.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12jlcxhfxdncrky88ovm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12jlcxhfxdncrky88ovm.png" alt="IMG" width="736" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;run: ./gradlew lint&lt;/strong&gt;: Detects poorly structured code that can impact the reliability and efficiency of your Android apps and make your code harder to maintain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljwcnpu5fbzbdt1tpw95.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljwcnpu5fbzbdt1tpw95.png" alt="IMG" width="736" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;run: ./gradlew build&lt;/strong&gt;: Builds the Gradle project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimg9pk26njs6gz7vys5n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimg9pk26njs6gz7vys5n.png" alt="IMG" width="736" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;run: ./gradlew jacocoTest&lt;/strong&gt;: The JacocoReport task generates code coverage reports in different formats.&lt;/p&gt;
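&lt;p&gt;The Gradle steps above can be sketched as workflow steps like this (step names are illustrative):&lt;/p&gt;

```yaml
      - name: Clean build directory
        run: ./gradlew clean
      - name: Run lint checks
        run: ./gradlew lint
      - name: Build the project
        run: ./gradlew build
      - name: Generate coverage report
        run: ./gradlew jacocoTest
```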

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0of4j1kjx1el5jrpkt9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0of4j1kjx1el5jrpkt9f.png" alt="IMG" width="736" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ee86pry4i6zbdznmyt7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ee86pry4i6zbdznmyt7.png" alt="IMG" width="736" height="585"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frc8nmhyuo40cs2u1cqlc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frc8nmhyuo40cs2u1cqlc.png" alt="IMG" width="736" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this step we are integrating GitHub Actions with SonarQube. To make your workflows faster and more efficient, you can create and use caches for dependencies and other commonly reused files.&lt;/p&gt;

&lt;p&gt;In the “Cache SonarQube Packages” and “Cache Gradle Packages” steps, we cache the SonarQube and Gradle packages, with &lt;code&gt;path&lt;/code&gt; specifying where the runner stores the cache. A new cache uses the &lt;code&gt;key&lt;/code&gt; you provide and contains the files you specify in &lt;code&gt;path&lt;/code&gt;; &lt;code&gt;restore-keys&lt;/code&gt; lists alternative keys that are tried sequentially, in the order provided, if no cache hit occurs for &lt;code&gt;key&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the next step we perform code analysis using SonarQube. We have added environment variables whose values come from secrets, which you can use in your workflows as environment variables.&lt;/p&gt;

&lt;p&gt;Secrets in GitHub Actions are referenced as ${{ secrets.SECRET_NAME }}. Here we have added the SonarQube token and the SonarQube URL as secrets. Then we run “./gradlew sonarqube” on the runner, passing the project key.&lt;/p&gt;
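&lt;p&gt;A minimal sketch of the caching and SonarQube analysis steps; the cache paths, cache keys, secret names, and project key shown here are illustrative assumptions:&lt;/p&gt;

```yaml
      - name: Cache SonarQube packages
        uses: actions/cache@v3
        with:
          path: ~/.sonar/cache            # where the runner stores the cache
          key: ${{ runner.os }}-sonar
          restore-keys: ${{ runner.os }}-sonar
      - name: Cache Gradle packages
        uses: actions/cache@v3
        with:
          path: ~/.gradle/caches
          key: ${{ runner.os }}-gradle-${{ hashFiles('**/*.gradle*') }}
          restore-keys: ${{ runner.os }}-gradle
      - name: SonarQube analysis
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}        # secret names are assumptions
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
        run: ./gradlew sonarqube -Dsonar.projectKey=my-android-project
```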

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm75r9314aqhksu2371e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm75r9314aqhksu2371e.png" alt="IMG" width="736" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In build.gradle we have added the SonarQube plugin.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8152qjlyso7s0gp25krs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8152qjlyso7s0gp25krs.png" alt="IMG" width="736" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that the SonarQube analysis passed and the coverage is greater than 80%.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0s0v3c9yfukw8rya7jm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0s0v3c9yfukw8rya7jm.png" alt="IMG" width="736" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next step, “Date and Time”, we evaluate the date and time using a Linux command and create outputs in the step by writing to stdout in the format ::set-output name=&amp;lt;name&amp;gt;::&amp;lt;value&amp;gt;. A step can have multiple outputs, and steps that create outputs must have unique ids.&lt;/p&gt;

&lt;p&gt;In current_date_time::$(date +"%d-%m-%Y-%H-%M-%S"), the value is formatted with %d (day of the month), %m (month), %Y (year), %H (hour), %M (minutes), and %S (seconds).&lt;/p&gt;

&lt;p&gt;Here, the output name is “current_date_time” and the id of the “Date and Time” step is “date”, which is unique within the workflow.&lt;/p&gt;

&lt;p&gt;To use this output elsewhere in the job, reference it as ${{ steps.&amp;lt;step-id&amp;gt;.outputs.&amp;lt;output-name&amp;gt; }}. Here in the example it is ${{ steps.date.outputs.current_date_time }}.&lt;/p&gt;
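&lt;p&gt;The step output and its later use can be sketched as follows (note that newer runner versions replace ::set-output with writing to $GITHUB_OUTPUT):&lt;/p&gt;

```yaml
      - name: Date and Time
        id: date              # unique id, required to reference the output
        run: echo "::set-output name=current_date_time::$(date +"%d-%m-%Y-%H-%M-%S")"
      - name: Use the output
        run: echo "Build timestamp is ${{ steps.date.outputs.current_date_time }}"
```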

&lt;p&gt;In the next step “Copy APK files to a directory” we are creating a directory structure to store the Debug and Release APK files in the format&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apk-files &amp;gt; debug &amp;gt; app-debug-11-11-2022-09-09-12-36.apk&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apk-files &amp;gt; release &amp;gt; app-release-unsigned-11-11-2022-09-09-12-36.apk&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In the last step, “Upload apk-files”, we upload the APK files as an artifact so that the “deploy” job can download them, since that job will use a fresh Ubuntu runner. Here ${{ github.workspace }} is the default path for the checkout action. The path we want to upload is the apk-files directory, and if-no-files-found is set so that the step is ignored when no files are present at the path.&lt;/p&gt;
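&lt;p&gt;The upload step can be sketched like this (the artifact name is an assumption):&lt;/p&gt;

```yaml
      - name: Upload apk-files Directory
        uses: actions/upload-artifact@v3
        with:
          name: apk-files
          path: ${{ github.workspace }}/apk-files
          if-no-files-found: ignore   # skip silently if nothing was built
```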

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrndl1z1dyrh92sc829c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrndl1z1dyrh92sc829c.png" alt="IMG" width="736" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptjub8hm9h3ck237ahks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptjub8hm9h3ck237ahks.png" alt="IMG" width="736" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfvskbre2weqv3ybegt1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfvskbre2weqv3ybegt1.png" alt="IMG" width="736" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this step we integrate Teams with GitHub Actions. This action takes your GitHub token and the webhook URL generated during the configuration part.&lt;/p&gt;

&lt;p&gt;Create a Teams channel and add the people who should be notified of workflow success and failure. Click Connectors in the channel, choose “Incoming Webhook”, add and configure it, give it a name, copy the URL, paste it as a secret in your GitHub Secrets, and use it in your workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5vw8c8aqznibnh5cbmq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5vw8c8aqznibnh5cbmq.png" alt="IMG" width="736" height="700"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqvz5h52611a7tf9291c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqvz5h52611a7tf9291c.png" alt="IMG" width="736" height="720"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the last part, we create an output variable named “CURRENT_DATE_TIME” and pass it the date-time value, since we want to use this variable in the other job, “deploy”. To pass variables between two different jobs we need to create output values like this.&lt;/p&gt;
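&lt;p&gt;Declaring a job-level output from a step output can be sketched as:&lt;/p&gt;

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      # expose the step output so other jobs can read it via needs.build.outputs
      CURRENT_DATE_TIME: ${{ steps.date.outputs.current_date_time }}
    steps:
      - name: Date and Time
        id: date
        run: echo "::set-output name=current_date_time::$(date +"%d-%m-%Y-%H-%M-%S")"
```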

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65j2ocdxitwp5qu9x8dn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65j2ocdxitwp5qu9x8dn.png" alt="IMG" width="736" height="638"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then we are creating another job named as “deploy” where we are adding our CD part of the Workflow.&lt;/p&gt;

&lt;p&gt;Here &lt;strong&gt;needs: build&lt;/strong&gt; means the “deploy” job will run only after the “build” job has executed successfully.&lt;/p&gt;

&lt;p&gt;The if condition states that these steps will run on the runner only when the branch is “qa” or “master”.&lt;/p&gt;

&lt;p&gt;Then the repository is checked out again, and in the next step, “Download apk-files Artifactory”, we download the artifact we just uploaded in the “build” job. We have specified the path where the artifact should be downloaded.&lt;/p&gt;
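&lt;p&gt;The start of the deploy job, with needs, the branch condition, and the artifact download, might look roughly like this (a sketch; step names and paths follow the description above):&lt;/p&gt;

```yaml
  deploy:
    needs: build                      # run only after the build job succeeds
    if: github.ref == 'refs/heads/qa' || github.ref == 'refs/heads/master'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Download apk-files Artifactory
        uses: actions/download-artifact@v3
        with:
          name: apk-files
          path: apk-files             # where to place the downloaded artifact
      - name: Display structure of downloaded files
        run: ls -R
```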

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqi7395iisojoy632pst.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqi7395iisojoy632pst.png" alt="IMG" width="736" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, in the “Display structure of downloaded files” step, we check the directory structure of the downloaded artifacts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2darr9rmomjdplmriuqr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2darr9rmomjdplmriuqr.png" alt="IMG" width="736" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2x4w4dquhwuz2p8h8mzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2x4w4dquhwuz2p8h8mzu.png" alt="IMG" width="736" height="164"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next step, “Public IP of GitHub Hosted Runner”, we obtain the public IP of the GitHub-hosted runner using the &lt;code&gt;haythem/public-ip@1.3&lt;/code&gt; action.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fod57aeudqxs0hlcc5mgh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fod57aeudqxs0hlcc5mgh.png" alt="IMG" width="660" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then in the next step we add the public IP to the security group of the EC2 instance running JFrog, so that GitHub Actions can access the JFrog page on port 8082, using the AWS CLI command “authorize-security-group-ingress”. For this we need to create a user with EC2 full-access permission and programmatic access, to obtain an access_key_id and secret_access_key.&lt;/p&gt;
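&lt;p&gt;A sketch of these two steps, assuming the security-group id is stored in a secret and that the public-ip action exposes the address as an ipv4 output:&lt;/p&gt;

```yaml
      - name: Public IP of GitHub Hosted Runner
        id: ip
        uses: haythem/public-ip@1.3
      - name: Allow runner to reach JFrog on port 8082
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          aws ec2 authorize-security-group-ingress \
            --group-id ${{ secrets.JFROG_SG_ID }} \
            --protocol tcp --port 8082 \
            --cidr ${{ steps.ip.outputs.ipv4 }}/32
```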

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6t41ebwc4a0muvx6aneq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6t41ebwc4a0muvx6aneq.png" alt="IMG" width="736" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8iz03fu3o681vgauolc6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8iz03fu3o681vgauolc6.png" alt="IMG" width="736" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the first step we download the latest version of the JFrog CLI. Add JF_URL, the URL of the Artifactory instance where we store the .apk files, and an access token, which we can create under Admin → User Management → Access Token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yyvkp3vz3mc3s96pbd8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yyvkp3vz3mc3s96pbd8.png" alt="IMG" width="736" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also set the “Password Encryption Policy” to Unsupported, for demo purposes.&lt;/p&gt;

&lt;p&gt;Then in the next step we create folders for the QA and master branches. In the script we use an if/else condition: if the GitHub branch is qa, we create a QA directory inside the apk-files directory; otherwise, master. The directory structure will now look like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apk-files &amp;gt; qa &amp;gt; debug &amp;gt; app-debug-11-11-2022-09-09-12-36.apk&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apk-files &amp;gt; qa &amp;gt; release &amp;gt; app-release-unsigned-11-11-2022-09-09-12-36.apk&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;OR&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apk-files &amp;gt; master &amp;gt; debug &amp;gt; app-debug-11-11-2022-09-09-12-36.apk&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apk-files &amp;gt; master &amp;gt; release &amp;gt; app-release-unsigned-11-11-2022-09-09-12-36.apk&lt;/code&gt;&lt;/p&gt;
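&lt;p&gt;One way to sketch the if/else script (using github.ref_name; the directory names follow the structure described above):&lt;/p&gt;

```yaml
      - name: Create folder for QA or Master branch
        run: |
          if [ "${{ github.ref_name }}" = "qa" ]; then
            mkdir -p apk-files/qa
            mv apk-files/debug apk-files/release apk-files/qa/
          else
            mkdir -p apk-files/master
            mv apk-files/debug apk-files/release apk-files/master/
          fi
```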

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy59twf63hmoblfaqsrn9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy59twf63hmoblfaqsrn9.png" alt="IMG" width="736" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkj0ar13ojaekro3t3t34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkj0ar13ojaekro3t3t34.png" alt="IMG" width="736" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the “Upload APK files to Jfrog” step, we use JFrog CLI commands to upload the .apk files from the Ubuntu runner to the JFrog Artifactory repository “android-artifact”:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;jf rt u --url ${{ secrets.JF_URL }} --user ${{ secrets.JF_USER }} --password ${{ secrets.JF_PASSWORD }} apk-files/qa/debug/app-debug-${{ needs.build.outputs.CURRENT_DATE_TIME }}.apk android-artifact/
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Here, u means upload, --url is the Artifactory repository URL (the android-artifact one), --user is the JFrog UI username, and --password is the JFrog UI password, followed by &amp;lt;path of the file to upload&amp;gt; &amp;lt;artifact-repo-name&amp;gt;/&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7m4zkmhgflta95r0f5l6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7m4zkmhgflta95r0f5l6.png" alt="IMG" width="736" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foeawsgg5xj24qdrqaqc3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foeawsgg5xj24qdrqaqc3.png" alt="IMG" width="736" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This URL to the file is the JF_URL that we need to add as a secret in GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvaihjr0jtcirc3wmgrf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvaihjr0jtcirc3wmgrf7.png" alt="IMG" width="736" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next step, we remove the GitHub Actions public IP from the security group of the JFrog EC2 instance. if: always() ensures that this step runs even if any previous step fails.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfjv3j9jckjhkolhbqhk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfjv3j9jckjhkolhbqhk.png" alt="IMG" width="736" height="80"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the last step we send a notification to Teams. You can see that all of this information is posted to the Teams channel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx1gd7zy3qv0ni87vplfm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx1gd7zy3qv0ni87vplfm.png" alt="IMG" width="736" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Case&lt;/strong&gt; : Deleting the caches that are created after each workflow run&lt;/p&gt;

&lt;p&gt;Here we use the workflow_run trigger. It allows you to execute a workflow based on the execution or completion of another workflow. So we specify that the “Clear Cache” workflow runs only when the “Android CI and CD” workflow completes (the activity type) successfully. Then we add the &lt;code&gt;permissions&lt;/code&gt; key with write access as a top-level key, so that it applies to all jobs in the workflow. When you add the &lt;code&gt;permissions&lt;/code&gt; key within a specific job, all actions and run commands within that job that use the &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; gain the access rights you specify.&lt;/p&gt;
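&lt;p&gt;The list-then-delete idea can also be sketched from a shell against the Actions cache REST endpoints. This is a hedged sketch: OWNER/REPO are placeholders, and the JSON below is a mocked response in the shape of GET /repos/OWNER/REPO/actions/caches rather than a live API call:&lt;/p&gt;

```shell
#!/bin/sh
# Mocked response shaped like GET /repos/OWNER/REPO/actions/caches.
printf '%s' '{"total_count":2,"actions_caches":[{"id":11,"key":"gradle-a"},{"id":12,"key":"gradle-b"}]}' > caches.json

# Extract each cache id and print the DELETE call that would remove it.
for id in $(grep -o '"id":[0-9]*' caches.json | cut -d: -f2); do
  echo "DELETE /repos/OWNER/REPO/actions/caches/$id"
done
```

&lt;p&gt;In the workflow itself, the equivalent calls are authenticated with the GITHUB_TOKEN granted by the &lt;code&gt;permissions&lt;/code&gt; key.&lt;/p&gt;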

&lt;p&gt;After that we run a script that first lists all caches using JavaScript and then deletes each cache by its ID.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn767476l89ar2m4toq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn767476l89ar2m4toq8.png" alt="IMG" width="736" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that all the caches created by the previous workflow were deleted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Case&lt;/strong&gt; : Deleting the artifacts (apk-files) that were uploaded so the directory could be passed from the build job to the deploy job, using a cron job that runs EVERY HOUR&lt;/p&gt;

&lt;p&gt;Here we run a cron job every hour that deletes all the artifacts created so far, passing the GitHub token to the purge-artifacts action.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Case&lt;/strong&gt; : How to create a self-hosted runner and configure the self-hosted runner application as a service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flq8xrxv8zvxbp4xc75ty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flq8xrxv8zvxbp4xc75ty.png" alt="IMG" width="736" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7wgdpitzqhy884gj3f7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7wgdpitzqhy884gj3f7.png" alt="IMG" width="736" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To create a self-hosted runner, go to Settings → Actions → Runners, click create, and select the type of OS you have. For me, I am choosing Linux.&lt;/p&gt;

&lt;p&gt;Run the commands shown on that page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tipyumxfczb9fjbe7yo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tipyumxfczb9fjbe7yo.png" alt="IMG" width="736" height="114"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8sbxt5vakfvs0zglisl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj8sbxt5vakfvs0zglisl.png" alt="IMG" width="736" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Provide the requested values when prompted during runner registration.&lt;/p&gt;

&lt;p&gt;To connect the runner, we need to start the run.sh script.&lt;/p&gt;

&lt;p&gt;We can see that the runner is up and running now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyoqngd9eyjaiou81twlz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyoqngd9eyjaiou81twlz.png" alt="IMG" width="736" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkrs7r3fjio3s436h3e8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkrs7r3fjio3s436h3e8.png" alt="IMG" width="736" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can configure the self-hosted runner application as a service, so that the runner starts automatically whenever your Linux machine is up and running.&lt;/p&gt;

&lt;p&gt;Run these commands :&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Installing the service&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;1. Stop the self-hosted runner application if it is currently running.&lt;/p&gt;

&lt;p&gt;2. Install the service with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; ./svc.sh &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Alternatively, the command takes an optional &lt;code&gt;user&lt;/code&gt; argument to install the service as a different user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./svc.sh &lt;span class="nb"&gt;install &lt;/span&gt;USERNAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  &lt;strong&gt;Starting the service&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Start the service with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; ./svc.sh start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  &lt;strong&gt;Checking the status of the service&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Check the status of the service with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; ./svc.sh status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5azjpk088hat5lxs9rf0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5azjpk088hat5lxs9rf0.png" alt="IMG" width="736" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to test this with the Android application and want all the code: &lt;a href="https://github.com/devcloudninjas/DevOps-Projects/tree/master/DevOps%20Project-14" rel="noopener noreferrer"&gt;Check this out&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.buymeacoffee.com/devcloudninjas" rel="noopener noreferrer"&gt;Buy me a coffee :)&lt;/a&gt; ← — — If you like my articles&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu60tsvw4tqzk7ebg7mtm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu60tsvw4tqzk7ebg7mtm.png" alt="IMG" width="736" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>android</category>
      <category>devops</category>
      <category>cicd</category>
    </item>
    <item>
      <title>DevOps from 0 to Hero - for Freshers</title>
      <dc:creator>Lionel♾️☁️</dc:creator>
      <pubDate>Sun, 20 Oct 2024 08:32:44 +0000</pubDate>
      <link>https://forem.com/devcloudninjas/devops-from-0-to-hero-for-freshers-3mj4</link>
      <guid>https://forem.com/devcloudninjas/devops-from-0-to-hero-for-freshers-3mj4</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;DevOps&lt;/strong&gt; is a transformative culture and set of practices that bring together &lt;strong&gt;software development (Dev) and IT operations (Ops)&lt;/strong&gt;. It aims to shorten the &lt;strong&gt;development lifecycle, deliver continuous integration and continuous delivery (CI/CD),&lt;/strong&gt; and ensure high software quality. If you're a fresher with zero knowledge in DevOps, &lt;strong&gt;this guide will help you get started on your journey to becoming a proficient DevOps engineer&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  📚 Step-by-Step Guide
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Understand the Basics
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1.1 What is DevOps?
&lt;/h4&gt;

&lt;p&gt;DevOps is a set of practices that combines software development and IT operations. It emphasizes collaboration, communication, and integration between developers and IT operations teams. DevOps aims to automate and streamline the processes of building, testing, and deploying software.&lt;/p&gt;

&lt;h4&gt;
  
  
  1.2 Core DevOps Principles
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Continuous Integration (CI)&lt;/strong&gt;: Regularly merging code changes into a central repository to detect and fix integration issues early.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continuous Delivery (CD)&lt;/strong&gt;: Automating the process of deploying code changes to production after passing rigorous automated tests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;: Managing and provisioning infrastructure through code, enabling version control and automated deployment of infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Microservices Architecture&lt;/strong&gt;: Breaking down applications into smaller, independently deployable services for improved scalability and maintainability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring and Logging&lt;/strong&gt;: Implementing robust systems to track application performance and quickly identify issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. 🔧 Learn the Foundation Skills
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgifdb.com%2Fimages%2Fhigh%2Fcoding-penguin-i-like-pressing-buttons-puv3coc5z4pkth51.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgifdb.com%2Fimages%2Fhigh%2Fcoding-penguin-i-like-pressing-buttons-puv3coc5z4pkth51.webp" alt="coding" width="480" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  2.1 Basic Programming
&lt;/h4&gt;

&lt;p&gt;Learning a programming language is essential for automating tasks and writing scripts. Some widely used languages in DevOps are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Python&lt;/strong&gt;: Known for its simplicity and readability, Python is great for scripting and automation.

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.learnpython.org/" rel="noopener noreferrer"&gt;Learn Python&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Go&lt;/strong&gt;: Gaining popularity in DevOps for its performance and concurrency features.

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.golang.org/" rel="noopener noreferrer"&gt;Learn Go&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;JavaScript&lt;/strong&gt;: Often used in web development and automation tasks.

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.javascript.com/" rel="noopener noreferrer"&gt;Learn JavaScript&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  2.2 Operating Systems
&lt;/h4&gt;

&lt;p&gt;Understanding operating systems, especially Linux, is crucial as most DevOps tools and environments run on Linux. Learn basic commands, file systems, process management, and networking in Linux.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.tutorialspoint.com/unix/index.htm" rel="noopener noreferrer"&gt;Linux Tutorial&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
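&lt;p&gt;A quick first exercise with the everyday commands mentioned above (files, text, and cleanup), runnable in any Linux shell:&lt;/p&gt;

```shell
#!/bin/sh
# Everyday Linux basics: directories, files, searching, counting.
mkdir -p devops-demo             # create a directory
cd devops-demo
echo "hello devops" > note.txt   # write a file
cat note.txt                     # read it back
grep -c devops note.txt          # count lines containing "devops" (prints 1)
wc -w note.txt                   # count the words in the file
cd ..
rm -r devops-demo                # clean up
```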

&lt;h4&gt;
  
  
  2.3 Networking Basics
&lt;/h4&gt;

&lt;p&gt;Understanding networking fundamentals is important for configuring and managing servers, containers, and applications. Learn about IP addresses, DNS, HTTP/HTTPS, firewalls, and load balancers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.cisco.com/c/en/us/solutions/small-business/resource-center/networking/networking-basics.html" rel="noopener noreferrer"&gt;Networking Basics&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. 🌿 Dive into Version Control
&lt;/h3&gt;

&lt;h4&gt;
  
  
  3.1 Git
&lt;/h4&gt;

&lt;p&gt;Git is a version control system that tracks changes in source code, allowing multiple developers to work on a project simultaneously without conflicts. Learn the basics of Git commands and workflows.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.atlassian.com/git/tutorials/learn-git-with-bitbucket-cloud" rel="noopener noreferrer"&gt;Git Tutorial&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/" rel="noopener noreferrer"&gt;Official Git Website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
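&lt;p&gt;The core local workflow (initialize, stage, commit, inspect) fits in a few commands. This is a minimal example; the repository name and identity values are arbitrary:&lt;/p&gt;

```shell
#!/bin/sh
# The basic Git loop: initialize a repo, stage a change, commit it, view history.
rm -rf git-demo
git init -q git-demo
cd git-demo
git config user.email "you@example.com"   # commit identity (local to this repo)
git config user.name "Your Name"
echo "# My Project" > README.md
git add README.md                         # stage the file
git commit -q -m "Initial commit"         # record the snapshot
git log --oneline                         # show the history
cd ..
```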

&lt;h4&gt;
  
  
  3.2 GitHub
&lt;/h4&gt;

&lt;p&gt;GitHub is a platform for hosting Git repositories, providing tools for collaborative development, code review, and project management.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://guides.github.com/activities/hello-world/" rel="noopener noreferrer"&gt;GitHub Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/" rel="noopener noreferrer"&gt;GitHub Website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  4. 🔄 Master Continuous Integration and Continuous Delivery (CI/CD)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  4.1 Jenkins
&lt;/h4&gt;

&lt;p&gt;Jenkins is an open-source automation server that helps automate parts of the software development process, including building, testing, and deploying code.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.jenkins.io/doc/" rel="noopener noreferrer"&gt;Jenkins Tutorial&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.jenkins.io/" rel="noopener noreferrer"&gt;Jenkins Website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  4.2 GitLab CI/CD
&lt;/h4&gt;

&lt;p&gt;GitLab CI/CD is a powerful tool integrated with GitLab for automating the entire DevOps lifecycle. Learn how to create and manage CI/CD pipelines.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.gitlab.com/ee/ci/" rel="noopener noreferrer"&gt;GitLab CI/CD Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://about.gitlab.com/" rel="noopener noreferrer"&gt;GitLab Website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Explore Configuration Management
&lt;/h3&gt;

&lt;h4&gt;
  
  
  5.1 Ansible
&lt;/h4&gt;

&lt;p&gt;Ansible is an open-source automation tool used for configuration management, application deployment, and task automation. It uses simple, human-readable YAML templates to define automation jobs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.ansible.com/ansible/latest/user_guide/index.html" rel="noopener noreferrer"&gt;Ansible Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.ansible.com/" rel="noopener noreferrer"&gt;Ansible Website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  5.2 Puppet
&lt;/h4&gt;

&lt;p&gt;Puppet is a configuration management tool that helps automate the provisioning and management of infrastructure.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://puppet.com/docs/puppet/latest/puppet_index.html" rel="noopener noreferrer"&gt;Puppet Tutorial&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://puppet.com/" rel="noopener noreferrer"&gt;Puppet Website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  6. 🐳 Understand Containerization and Orchestration
&lt;/h3&gt;

&lt;h4&gt;
  
  
  6.1 Docker
&lt;/h4&gt;

&lt;p&gt;Docker is a platform for developing, shipping, and running applications inside containers. Containers are lightweight, portable, and consistent environments that ensure applications run the same way regardless of where they are deployed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/get-started/" rel="noopener noreferrer"&gt;Docker Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker Website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  6.2 Kubernetes
&lt;/h4&gt;

&lt;p&gt;Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It helps manage containerized applications in a clustered environment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/home/" rel="noopener noreferrer"&gt;Kubernetes Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes Website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  7. ☁️ Explore Cloud Platforms
&lt;/h3&gt;

&lt;h4&gt;
  
  
  7.1 AWS (Amazon Web Services)
&lt;/h4&gt;

&lt;p&gt;AWS is a comprehensive cloud computing platform offering a wide range of services, including compute, storage, and databases. Learn the basics of AWS services such as EC2 (virtual servers), S3 (object storage), RDS (relational databases), and Lambda (serverless computing).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/training/" rel="noopener noreferrer"&gt;AWS Training&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS Website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  7.2 Azure
&lt;/h4&gt;

&lt;p&gt;Azure is Microsoft's cloud computing platform that provides a variety of cloud services, including those for compute, analytics, storage, and networking. Familiarize yourself with Azure's offerings and capabilities.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/learn/paths/azure-fundamentals/" rel="noopener noreferrer"&gt;Azure Fundamentals&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://azure.microsoft.com/" rel="noopener noreferrer"&gt;Azure Website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  7.3 Google Cloud Platform (GCP)
&lt;/h4&gt;

&lt;p&gt;GCP is Google's cloud computing service, offering a range of services such as compute, storage, and machine learning. Learn about GCP's infrastructure and services.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/training" rel="noopener noreferrer"&gt;GCP Training&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/" rel="noopener noreferrer"&gt;GCP Website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  8. Learn Infrastructure as Code (IaC)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  8.1 Terraform
&lt;/h4&gt;

&lt;p&gt;Terraform is an open-source tool for building, changing, and versioning infrastructure safely and efficiently. It allows you to define and provision infrastructure using a high-level configuration language.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs/index.html" rel="noopener noreferrer"&gt;Terraform Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform Website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  9. 📊 Implement Monitoring and Logging
&lt;/h3&gt;

&lt;h4&gt;
  
  
  9.1 Prometheus
&lt;/h4&gt;

&lt;p&gt;Prometheus is an open-source monitoring system and time-series database that is well-suited for monitoring containerized applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://prometheus.io/docs/introduction/overview/" rel="noopener noreferrer"&gt;Prometheus Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus Website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  9.2 ELK Stack (Elasticsearch, Logstash, Kibana)
&lt;/h4&gt;

&lt;p&gt;The ELK Stack is a powerful set of tools for searching, analyzing, and visualizing log data. Elasticsearch stores and indexes log data, Logstash processes it, and Kibana visualizes it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.elastic.co/what-is/elk-stack" rel="noopener noreferrer"&gt;ELK Stack Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.elastic.co/" rel="noopener noreferrer"&gt;Elastic Website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  10. Get Hands-On Experience
&lt;/h3&gt;

&lt;h4&gt;
  
  
  10.1 Build Projects
&lt;/h4&gt;

&lt;p&gt;Apply what you've learned by working on real projects. Here are some ideas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build a complete CI/CD pipeline for a sample application&lt;/li&gt;
&lt;li&gt;Deploy a microservices architecture on Kubernetes&lt;/li&gt;
&lt;li&gt;Implement a multi-cloud disaster recovery solution&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Practical experience is crucial for mastering DevOps.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  11. 🤝 Join the Community
&lt;/h3&gt;

&lt;p&gt;Participate in DevOps communities, forums, and meetups to learn from others, share your experiences, and stay updated on industry trends.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://devops.stackexchange.com/" rel="noopener noreferrer"&gt;DevOps Stack Exchange&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reddit.com/r/devops/" rel="noopener noreferrer"&gt;DevOps Subreddit&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  12. 📚 Continuous Learning
&lt;/h3&gt;

&lt;h4&gt;
  
  
  12.1 Online Courses
&lt;/h4&gt;

&lt;p&gt;Enroll in online courses to deepen your understanding and keep your skills up-to-date. Many platforms offer comprehensive DevOps courses taught by industry experts.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.coursera.org/courses?query=devops" rel="noopener noreferrer"&gt;Coursera DevOps Courses&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.udemy.com/topic/devops/" rel="noopener noreferrer"&gt;Udemy DevOps Courses&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  12.2 Books
&lt;/h4&gt;

&lt;p&gt;Read books on DevOps practices, tools, and methodologies. Some highly recommended books are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;The Phoenix Project&lt;/em&gt; by Gene Kim, Kevin Behr, and George Spafford: A novel about IT, DevOps, and helping your business win.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;The DevOps Handbook&lt;/em&gt; by Gene Kim, Jez Humble, Patrick Debois, and John Willis: How to create world-class agility, reliability, and security in technology organizations.&lt;/li&gt;
&lt;/ul&gt;




&lt;blockquote&gt;
&lt;p&gt;DevOps is not just about tools, but also about fostering a culture of collaboration, continuous improvement, and shared responsibility. Embrace the DevOps mindset in your work and interactions with team members.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  📚 Security in DevOps (DevSecOps)
&lt;/h2&gt;

&lt;p&gt;As you progress in your DevOps journey, don't forget to integrate security practices into your workflows. DevSecOps emphasizes the importance of building security into every stage of the development process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Career Paths in DevOps
&lt;/h2&gt;

&lt;p&gt;DevOps offers various career paths and specializations. Some roles you might consider as you progress in your career include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DevOps Engineer&lt;/li&gt;
&lt;li&gt;Site Reliability Engineer (SRE)&lt;/li&gt;
&lt;li&gt;Cloud Architect&lt;/li&gt;
&lt;li&gt;Automation Specialist&lt;/li&gt;
&lt;li&gt;DevSecOps Engineer&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;p&gt;To deepen your understanding of specific DevOps topics, here are some in-depth articles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.atlassian.com/continuous-delivery/principles/continuous-integration-vs-delivery-vs-deployment" rel="noopener noreferrer"&gt;The Comprehensive Guide to CI/CD Pipelines&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://martinfowler.com/articles/microservices.html" rel="noopener noreferrer"&gt;Microservices Architecture: A Comprehensive Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.hashicorp.com/resources/what-is-infrastructure-as-code" rel="noopener noreferrer"&gt;Infrastructure as Code: What It Is and Why It Matters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes" rel="noopener noreferrer"&gt;The Ultimate Guide to Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  DevOps Podcasts and YouTube Channels
&lt;/h2&gt;

&lt;p&gt;Stay up-to-date with the latest in DevOps through these popular podcasts and YouTube channels:&lt;/p&gt;

&lt;h3&gt;
  
  
  Podcasts:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://devopscafe.org/" rel="noopener noreferrer"&gt;DevOps Cafe&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.shipitshow.com/" rel="noopener noreferrer"&gt;The Ship It Show&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.arresteddevops.com/" rel="noopener noreferrer"&gt;Arrested DevOps&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  YouTube Channels:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/c/DevOpsToolkit" rel="noopener noreferrer"&gt;DevOps Toolkit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/c/TechWorldwithNana" rel="noopener noreferrer"&gt;TechWorld with Nana&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/channel/UCT-nPlVzJI-ccQXlxjSvJmw" rel="noopener noreferrer"&gt;AWS Online Tech Talks&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  A Day in the Life of a DevOps Engineer
&lt;/h2&gt;

&lt;p&gt;To give you a practical perspective of what it's like to work as a DevOps engineer, here's a typical day:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;th&gt;Activity&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;8:00 AM&lt;/td&gt;
&lt;td&gt;Start the day by checking monitoring dashboards for any overnight issues&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9:00 AM&lt;/td&gt;
&lt;td&gt;Attend the daily stand-up meeting with the development team to discuss ongoing projects and potential blockers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10:00 AM&lt;/td&gt;
&lt;td&gt;Work on automating a deployment process using Jenkins and Ansible&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12:00 PM&lt;/td&gt;
&lt;td&gt;Lunch break and catch up on the latest DevOps news and trends&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1:00 PM&lt;/td&gt;
&lt;td&gt;Troubleshoot a production issue reported by the operations team&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3:00 PM&lt;/td&gt;
&lt;td&gt;Collaborate with developers to optimize a Docker container for a new microservice&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4:00 PM&lt;/td&gt;
&lt;td&gt;Review and merge pull requests for infrastructure-as-code changes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5:00 PM&lt;/td&gt;
&lt;td&gt;Document the day's work and plan for tomorrow's tasks&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;This schedule can vary greatly depending on the organization and current projects, but it gives you an idea of the diverse tasks a DevOps engineer might handle in a day.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Learning DevOps from scratch may seem daunting, but with the right approach and resources, you can master the essential skills and become a proficient DevOps engineer. Follow this step-by-step guide, practice consistently, and engage with the DevOps community to accelerate your learning journey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stay curious, and embrace the DevOps mindset to drive innovation and efficiency in software development and operations!&lt;/strong&gt; 🎉&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thank you for reading our blog …:)&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;© &lt;strong&gt;Copyrights:&lt;/strong&gt; &lt;a href="https://t.me/devcloudninjas" rel="noopener noreferrer"&gt;&lt;strong&gt;DevCloudNinjas&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Join Our &lt;a href="https://t.me/devcloudninjas" rel="noopener noreferrer"&gt;&lt;strong&gt;Telegram Community&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;||&lt;/strong&gt; &lt;a href="https://github.com/devcloudninjas" rel="noopener noreferrer"&gt;&lt;strong&gt;Follow us for more&lt;/strong&gt;&lt;/a&gt; DevOps Content.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgz7oxsordnr5imq3cxua.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgz7oxsordnr5imq3cxua.gif" alt="meh" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>newbie</category>
      <category>learning</category>
    </item>
    <item>
      <title>Deploying Django Application on AWS with Terraform - Part 1</title>
      <dc:creator>Lionel♾️☁️</dc:creator>
      <pubDate>Mon, 01 Apr 2024 05:25:32 +0000</pubDate>
      <link>https://forem.com/softwaresennin/deploying-django-application-on-aws-with-terraform-1j7e</link>
      <guid>https://forem.com/softwaresennin/deploying-django-application-on-aws-with-terraform-1j7e</guid>
      <description>&lt;p&gt;Hi everyone, I am back with another project. This time we will be working on a CICD pipeline, deploying our application to our AWS account. In this project we will use Terraform to deploy our Django application to our AWS account.&lt;/p&gt;

&lt;p&gt;Below are the tools and services that we will use and their categories.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/apotitech/eefebc53ce4c154f180defdc758d385f" rel="noopener noreferrer"&gt;https://gist.github.com/apotitech/eefebc53ce4c154f180defdc758d385f&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we get into today's hands-on write-up, let us go over the different parts of our project from start to finish.&lt;/p&gt;

&lt;h3&gt;
  
  
  Parts of Project
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Minimal Working Setup&lt;/li&gt;
&lt;li&gt;Connecting PostgreSQL RDS&lt;/li&gt;
&lt;li&gt;GitLab CI/CD&lt;/li&gt;
&lt;li&gt;Namecheap Domain + SSL&lt;/li&gt;
&lt;li&gt;Celery and SQS&lt;/li&gt;
&lt;li&gt;Connecting to Amazon S3&lt;/li&gt;
&lt;li&gt;ECS Autoscaling&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We are starting with the first part today.&lt;/p&gt;

&lt;h2&gt;
  
  
  Minimal Working Setup
&lt;/h2&gt;

&lt;p&gt;In this part, we will first set up our AWS account, then create our Terraform project, and lastly define resources for our web application. At the end, we will deploy our Django application on AWS ECS and access it through our Load Balancer URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A720%2Fformat%3Awebp%2F1%2AgSy-S5RkI_G7JZ0TR-c4Ow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A720%2Fformat%3Awebp%2F1%2AgSy-S5RkI_G7JZ0TR-c4Ow.png" alt="scrnli3252024121301 AMgif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Django project
&lt;/h2&gt;

&lt;p&gt;Let’s start with our Django application. Create a new folder and initialize a default Django project.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;django-aws &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;django-aws
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;django-aws-backend &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;django-aws-backend
&lt;span class="nv"&gt;$ &lt;/span&gt;git init &lt;span class="nt"&gt;--initial-branch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main
&lt;span class="nv"&gt;$ &lt;/span&gt;python3.10 &lt;span class="nt"&gt;-m&lt;/span&gt; venv venv
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; ./venv/bin/activate
&lt;span class="o"&gt;(&lt;/span&gt;venv&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$ &lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;&lt;span class="nv"&gt;Django&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;3.2.13
&lt;span class="o"&gt;(&lt;/span&gt;venv&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$ &lt;/span&gt;django-admin startproject django_aws &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;venv&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$ &lt;/span&gt;./manage.py migrate
&lt;span class="o"&gt;(&lt;/span&gt;venv&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$ &lt;/span&gt;./manage.py runserver


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now that our Django server is set up, let's check the Django greeting page at &lt;a href="http://127.0.0.1:8000/" rel="noopener noreferrer"&gt;http://127.0.0.1:8000&lt;/a&gt;, ensure that Django is running, and kill the development server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A720%2Fformat%3Awebp%2F1%2ATEx96-8c2q5fiY5bv-5WPQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A720%2Fformat%3Awebp%2F1%2ATEx96-8c2q5fiY5bv-5WPQ.png" alt="1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we are going to dockerize our application. First, we will add a &lt;code&gt;requirements.txt&lt;/code&gt; file to the Django project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;Django&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="mf"&gt;3.2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;13&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For testing purposes, enable debug mode and allow all hosts in our &lt;code&gt;settings.py&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;DEBUG&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;

&lt;span class="n"&gt;ALLOWED_HOSTS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To containerize our app, we need a Dockerfile. So next, we add a &lt;code&gt;Dockerfile&lt;/code&gt; in our working directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.10-slim-buster&lt;/span&gt;

&lt;span class="c"&gt;# Open http port&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8000&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PYTHONUNBUFFERED 1&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PYTHONDONTWRITEBYTECODE 1&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; DEBIAN_FRONTEND noninteractive&lt;/span&gt;

&lt;span class="c"&gt;# Install pip and gunicorn web server&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;--upgrade&lt;/span&gt; pip
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;&lt;span class="nv"&gt;gunicorn&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;20.1.0

&lt;span class="c"&gt;# Install requirements.txt&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt /&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /requirements.txt

&lt;span class="c"&gt;# Moving application files&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . /app&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now let us go ahead and build and run our Docker container locally. Since the &lt;code&gt;Dockerfile&lt;/code&gt; defines no &lt;code&gt;CMD&lt;/code&gt;, we pass the gunicorn command to &lt;code&gt;docker run&lt;/code&gt; explicitly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;docker build &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; django-aws-backend
&lt;span class="nv"&gt;$ &lt;/span&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 8000:8000 django-aws-backend gunicorn &lt;span class="nt"&gt;-b&lt;/span&gt; 0.0.0.0:8000 django_aws.wsgi:application


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let's go to &lt;a href="http://127.0.0.1:8000/" rel="noopener noreferrer"&gt;http://127.0.0.1:8000&lt;/a&gt; and verify that we have successfully built and run the Docker image with our Django application. You should see exactly the same greeting page as with the &lt;code&gt;runserver&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;There may be some files that we do not want to push to our git repo. For that, let’s add a &lt;code&gt;.gitignore&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

*.sqlite3
.idea
.env
venv
.DS_Store
__pycache__
static
media


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, let's continue with our git steps and commit our changes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;git add &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"initial commit"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Good job making it this far.&lt;/p&gt;

&lt;p&gt;For now, we are done with the Django part. In the next steps, we deploy our application on AWS. But first, we need to configure our AWS account.&lt;/p&gt;

&lt;p&gt;We need to create credentials for the AWS CLI and Terraform, so we’ll create a new user with administrator access to the AWS account. This user will be able to create and change resources in your AWS account.&lt;/p&gt;

&lt;p&gt;First, we will go to the &lt;a href="https://console.aws.amazon.com/iam/home" rel="noopener noreferrer"&gt;IAM&lt;/a&gt; service, select the “Users” tab, and click “Add Users”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhy0e82glpz1sb2clskzi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhy0e82glpz1sb2clskzi.png" alt="eewr"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter a username and choose the ‘Access key — Programmatic access’ option. This option means that your user will have an ‘Access key’ to use the AWS API. Also, this user won’t be able to sign in to the AWS web console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7qg6zrw8yjqkgpe047z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7qg6zrw8yjqkgpe047z.png" alt="rrft"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the “&lt;strong&gt;&lt;em&gt;Attach existing policies directly&lt;/em&gt;&lt;/strong&gt;” tab and select “&lt;strong&gt;AdministratorAccess&lt;/strong&gt;.” Then click next and skip the “Add tags” step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpp8ykpnuq8gfib1tr9ea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpp8ykpnuq8gfib1tr9ea.png" alt="ccxsd"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Review user details and click “&lt;strong&gt;&lt;em&gt;Create user&lt;/em&gt;&lt;/strong&gt;.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0afwpr296wuklhcmnwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0afwpr296wuklhcmnwg.png" alt="ggter"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yay, we have successfully created our user!&lt;/p&gt;

&lt;p&gt;Now we need to save the &lt;strong&gt;Access key ID&lt;/strong&gt; and &lt;strong&gt;Secret access key&lt;/strong&gt; in a safe place. Never commit these keys to public repositories or other public places: anybody who owns them can manage your AWS account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7pvpasskww95xlhgv6a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7pvpasskww95xlhgv6a.png" alt="ffrrer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can configure AWS CLI and check our credentials. We will use the &lt;code&gt;us-east-2&lt;/code&gt; region in this guide. Feel free to change it.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;aws configure
AWS Access Key ID &lt;span class="o"&gt;[&lt;/span&gt;None]: AKU832EUBFEFWICT
AWS Secret Access Key &lt;span class="o"&gt;[&lt;/span&gt;None]: 5HZMEFi4ff4F4DEi24HYEsOPDNE8DYWTzCx
Default region name &lt;span class="o"&gt;[&lt;/span&gt;us-east-2]: us-east-2
Default output format &lt;span class="o"&gt;[&lt;/span&gt;table]: table
&lt;span class="nv"&gt;$ &lt;/span&gt;aws sts get-caller-identity
&lt;span class="nt"&gt;-----------------------------------------------------&lt;/span&gt;
|                 GetCallerIdentity                 |
+---------+-----------------------------------------+
|  Account|  947134793474                           |  &amp;lt;- AWS_ACCOUNT_ID
|  Arn    |  arn:aws:iam::947134793474:user/admin   |
|  UserId |  AIDJEFFEIUFBFUR245EPV                  |
+---------+-----------------------------------------+


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Remember your &lt;code&gt;AWS_ACCOUNT_ID&lt;/code&gt;. We'll use it in the next steps.&lt;/p&gt;

&lt;p&gt;Now we are all set up to create our Terraform project!&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Terraform Project
&lt;/h2&gt;

&lt;p&gt;Let’s create a new folder &lt;code&gt;django-aws/django-aws-infrastructure&lt;/code&gt; for our Terraform project.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cd&lt;/span&gt; .. 
&lt;span class="nb"&gt;mkdir &lt;/span&gt;django-aws-infrastructure &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;django-aws-infrastructure  
git init &lt;span class="nt"&gt;--initial-branch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let us add our &lt;code&gt;provider.tf&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here, we defined our &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs" rel="noopener noreferrer"&gt;AWS&lt;/a&gt; provider. We use a &lt;a href="https://www.terraform.io/language/values/variables" rel="noopener noreferrer"&gt;Terraform variable&lt;/a&gt; to specify the AWS region. Let’s define the &lt;code&gt;region&lt;/code&gt; and &lt;code&gt;project_name&lt;/code&gt; variables in the &lt;code&gt;variables.tf&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"region"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The AWS region to create resources in."&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-2"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"project_name"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Project name to use in resource names"&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"django-aws"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, we will run &lt;code&gt;terraform init&lt;/code&gt; to create a new Terraform working directory and download the AWS provider, along with everything else that is needed.&lt;/p&gt;

&lt;p&gt;Now we are ready to create resources for our infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resources we will use
&lt;/h3&gt;

&lt;p&gt;Here is the plan of the services we will use to configure our project:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/apotitech/6c1ac26ee528eacb7dbeae7635253058" rel="noopener noreferrer"&gt;https://gist.github.com/apotitech/6c1ac26ee528eacb7dbeae7635253058&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Elastic Container Repository, ECR
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ecr_repository"&lt;/span&gt; &lt;span class="s2"&gt;"backend"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${var.project_name}-backend"&lt;/span&gt;
  &lt;span class="nx"&gt;image_tag_mutability&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"MUTABLE"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, we will run &lt;code&gt;terraform plan&lt;/code&gt;. We'll see that Terraform is going to create an ECR repository.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

Terraform will perform the following actions:
  &lt;span class="c"&gt;# aws_ecr_repository.backend will be created&lt;/span&gt;
  + resource &lt;span class="s2"&gt;"aws_ecr_repository"&lt;/span&gt; &lt;span class="s2"&gt;"backend"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      ...
    &lt;span class="o"&gt;}&lt;/span&gt;
Plan: 1 to add, 0 to change, 0 to destroy.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If the plan looks good, we will go ahead and run &lt;code&gt;terraform apply&lt;/code&gt;. We will be prompted to accept or refuse the plan. Type &lt;code&gt;yes&lt;/code&gt; to confirm changes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws_ecr_repository.backend: Creating...
aws_ecr_repository.backend: Creation &lt;span class="nb"&gt;complete &lt;/span&gt;after 1s &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;django-aws-backend]
Apply &lt;span class="nb"&gt;complete&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; Resources: 1 added, 0 changed, 0 destroyed.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Great, our repository is created. Now, let’s push our Django image to this new registry. Before we do that, we need to build an image tagged as &lt;code&gt;${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/django-aws-backend:latest&lt;/code&gt;, authenticate with ECR, and push the image:&lt;/p&gt;

&lt;p&gt;In my case, the account ID is &lt;code&gt;947134793474&lt;/code&gt; and I will use the &lt;code&gt;us-east-2&lt;/code&gt; region.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ../django-aws-backend
&lt;span class="nv"&gt;$ &lt;/span&gt;docker build &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; 947134793474.dkr.ecr.us-east-2.amazonaws.com/django-aws-backend:latest
&lt;span class="nv"&gt;$ &lt;/span&gt;aws ecr get-login-password &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-2 | docker login &lt;span class="nt"&gt;--username&lt;/span&gt; AWS &lt;span class="nt"&gt;--password-stdin&lt;/span&gt; 947134793474.dkr.ecr.us-east-2.amazonaws.com
&lt;span class="nv"&gt;$ &lt;/span&gt;docker push 947134793474.dkr.ecr.us-east-2.amazonaws.com/django-aws-backend:latest


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
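The registry path above follows a fixed pattern, which a tiny Python helper (hypothetical, not part of the project) makes explicit:

```python
def ecr_image_uri(account_id: str, region: str, repo: str, tag: str = "latest") -> str:
    """Build a fully qualified Amazon ECR image URI from its parts."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"

# The tag used throughout this part:
print(ecr_image_uri("947134793474", "us-east-2", "django-aws-backend"))
# -> 947134793474.dkr.ecr.us-east-2.amazonaws.com/django-aws-backend:latest
```

Keeping the URI in one place like this avoids typos when the same tag appears in build, login, and push commands.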
&lt;h4&gt;
  
  
  Network
&lt;/h4&gt;

&lt;p&gt;Now, let’s create a network for our application. First, add this block to the &lt;code&gt;variables.tf&lt;/code&gt; file:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"availability_zones"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Availability zones"&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"us-east-2a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"us-east-2c"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, we will create a &lt;code&gt;network.tf&lt;/code&gt; file with the following content:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/apotitech/ba6d140d902cef8e7038118829335d7e" rel="noopener noreferrer"&gt;https://gist.github.com/apotitech/ba6d140d902cef8e7038118829335d7e&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We’ve defined the following resources with our code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/vpc/" rel="noopener noreferrer"&gt;Virtual Private Cloud&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html" rel="noopener noreferrer"&gt;Public and Private subnets&lt;/a&gt; in different &lt;a href="https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/RegionsAndAZs.html" rel="noopener noreferrer"&gt;Availability zones&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html" rel="noopener noreferrer"&gt;Internet Gateway&lt;/a&gt; for internet access for our public subnets.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html" rel="noopener noreferrer"&gt;NAT Gateway&lt;/a&gt; for internet access for our private subnets.&lt;/li&gt;
&lt;/ul&gt;
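The full `network.tf` is in the gist linked above; as a rough, abridged sketch of what those resources look like in Terraform (resource names and CIDR ranges here are illustrative assumptions, not the gist's exact values):

```hcl
# Minimal sketch of the network pieces: VPC, one public subnet,
# an internet gateway, and a NAT gateway for the private subnets.
resource "aws_vpc" "prod" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "prod_public_1" {
  vpc_id            = aws_vpc.prod.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = var.availability_zones[0]
}

resource "aws_internet_gateway" "prod" {
  vpc_id = aws_vpc.prod.id
}

resource "aws_eip" "prod_nat" {
  vpc = true # Elastic IP for the NAT gateway
}

resource "aws_nat_gateway" "prod" {
  subnet_id     = aws_subnet.prod_public_1.id
  allocation_id = aws_eip.prod_nat.id
}
```

The gist also wires up route tables and a second availability zone, which this sketch omits for brevity.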

&lt;p&gt;Next, we will apply what we just coded. Run the &lt;code&gt;terraform apply&lt;/code&gt; command to apply the changes on AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Load Balancer
&lt;/h3&gt;

&lt;p&gt;We continue building our infrastructure. Next, we will create a &lt;code&gt;load_balancer.tf&lt;/code&gt; file for our load balancer, with the following content:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/apotitech/da3f7ba10640b6c1809c2894c7525d62" rel="noopener noreferrer"&gt;https://gist.github.com/apotitech/da3f7ba10640b6c1809c2894c7525d62&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us look at the resources that this code will create:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html" rel="noopener noreferrer"&gt;Application Load Balancer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html" rel="noopener noreferrer"&gt;LB Listener&lt;/a&gt; to receive incoming HTTP requests.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html" rel="noopener noreferrer"&gt;LB Target group&lt;/a&gt; to route requests to the Django application.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html" rel="noopener noreferrer"&gt;Security Group&lt;/a&gt; to control incoming traffic to load balancer.&lt;/li&gt;
&lt;/ul&gt;
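Again, the linked gist holds the full `load_balancer.tf`; a hedged, abridged sketch of those four resources follows (names, ports, and subnet references are illustrative assumptions, not the gist's exact code):

```hcl
# Security group: allow HTTP in from anywhere, anything out.
resource "aws_security_group" "prod_lb" {
  name   = "prod-lb"
  vpc_id = aws_vpc.prod.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Application Load Balancer in the public subnets.
resource "aws_lb" "prod" {
  name               = "prod"
  load_balancer_type = "application"
  internal           = false
  security_groups    = [aws_security_group.prod_lb.id]
  subnets            = [aws_subnet.prod_public_1.id, aws_subnet.prod_public_2.id]
}

# Target group for the Django containers on port 8000.
resource "aws_lb_target_group" "prod_backend" {
  port        = 8000
  protocol    = "HTTP"
  vpc_id      = aws_vpc.prod.id
  target_type = "ip" # ECS Fargate tasks register by IP
}

# Listener: forward incoming HTTP requests to the target group.
resource "aws_lb_listener" "prod_http" {
  load_balancer_arn = aws_lb.prod.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.prod_backend.arn
  }
}
```

The `target_type = "ip"` choice assumes Fargate tasks; EC2-backed ECS would register instances instead.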

&lt;p&gt;Before we proceed, we want to know the load balancer URL, so we need Terraform to output it in our terminal. Add an &lt;code&gt;outputs.tf&lt;/code&gt; file with the following code and run &lt;code&gt;terraform apply&lt;/code&gt; to create the load balancer and see its hostname.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"prod_lb_domain"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_lb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;prod&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dns_name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We will see our ALB domain in the output.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

Outputs:
prod_lb_domain &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"prod-57218461274.us-east-2.elb.amazonaws.com"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can now check this domain in our browser. It should respond with a &lt;code&gt;503 Service Temporarily Unavailable&lt;/code&gt; error because there are no targets associated with the target group we created yet.&lt;/p&gt;

&lt;p&gt;In the next step, we'll deploy the Django application that will be accessible by this URL.&lt;/p&gt;

&lt;h3&gt;
  
  
  Application
&lt;/h3&gt;

&lt;p&gt;Last but not least, we’ll create the application using the &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html" rel="noopener noreferrer"&gt;ECS Service&lt;/a&gt;. For this, we will add an &lt;code&gt;ecs.tf&lt;/code&gt; file with the following content:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/apotitech/bfb372e92d2b88fee935ee39a8f68ed7" rel="noopener noreferrer"&gt;https://gist.github.com/apotitech/bfb372e92d2b88fee935ee39a8f68ed7&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, we will add the &lt;code&gt;ecs_prod_backend_retention_days&lt;/code&gt; variable to the &lt;code&gt;variables.tf&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"ecs_prod_backend_retention_days"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Retention period for backend logs"&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
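&lt;p&gt;As a hedged sketch, this variable would typically feed the CloudWatch log group that collects the backend logs. The resource and log group names below are illustrative, not necessarily the ones used in &lt;code&gt;ecs.tf&lt;/code&gt;:&lt;/p&gt;

```hcl
# Illustrative only; names may differ from the actual ecs.tf
resource "aws_cloudwatch_log_group" "prod_backend" {
  name              = "/ecs/prod/backend"
  retention_in_days = var.ecs_prod_backend_retention_days
}
```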

&lt;p&gt;Then we add a container definition in a new file, &lt;code&gt;templates/backend_container.json.tpl&lt;/code&gt;, and run &lt;code&gt;terraform apply&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${name}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"image"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${image}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"essential"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"links"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"portMappings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"containerPort"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"hostPort"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"protocol"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"tcp"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="err"&gt;jsonencode(command)&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"logConfiguration"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"logDriver"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"awslogs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"options"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"awslogs-group"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${log_group}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"awslogs-region"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${region}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"awslogs-stream-prefix"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${log_stream}"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
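&lt;p&gt;When Terraform renders this template, its &lt;code&gt;templatefile&lt;/code&gt; function substitutes the &lt;code&gt;${name}&lt;/code&gt;, &lt;code&gt;${image}&lt;/code&gt;, and other placeholders before the JSON reaches the ECS task definition. Here is a rough Python illustration of that substitution; the container name, image, and command below are made up for the example:&lt;/p&gt;

```python
import json
from string import Template

# A trimmed, illustrative version of templates/backend_container.json.tpl.
# Terraform's templatefile() performs this substitution natively; this
# Python sketch only illustrates the idea.
container_tpl = Template("""
[
  {
    "name": "${name}",
    "image": "${image}",
    "essential": true,
    "command": ${command}
  }
]
""")

# Hypothetical values; in the real setup they come from Terraform variables.
rendered = container_tpl.substitute(
    name="prod-backend-web",
    image="123456789.dkr.ecr.us-east-2.amazonaws.com/backend:latest",
    command=json.dumps(["gunicorn", "-b", "0.0.0.0:8000"]),
)

definition = json.loads(rendered)
print(definition[0]["name"])  # -> prod-backend-web
```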

&lt;p&gt;Our code will create:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html" rel="noopener noreferrer"&gt;ECS Cluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html" rel="noopener noreferrer"&gt;ECS Task Definition&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html" rel="noopener noreferrer"&gt;ECS Service&lt;/a&gt; to run tasks with the specified definition in the ECS cluster&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html" rel="noopener noreferrer"&gt;IAM Policies&lt;/a&gt; to allow tasks access to resources.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html" rel="noopener noreferrer"&gt;Cloudwatch Log&lt;/a&gt; group and stream for log collection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, go to the &lt;a href="https://us-east-2.console.aws.amazon.com/ecs/home" rel="noopener noreferrer"&gt;ECS AWS Console&lt;/a&gt; and look at our running service and tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A720%2Fformat%3Awebp%2F1%2A3NvXTuELvOvVS1SgAg9CTQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A720%2Fformat%3Awebp%2F1%2A3NvXTuELvOvVS1SgAg9CTQ.png" alt="img"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A720%2Fformat%3Awebp%2F1%2A83jt5PsBeD4CQpilznc3eA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A720%2Fformat%3Awebp%2F1%2A83jt5PsBeD4CQpilznc3eA.png" alt="ffr"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A720%2Fformat%3Awebp%2F1%2A7QbdvV62zvSs6YQmDyblnQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A720%2Fformat%3Awebp%2F1%2A7QbdvV62zvSs6YQmDyblnQ.png" alt="ccd"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we will check the Load Balancer domain in a browser again to ensure that our setup works. This time we should see Django’s starting page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A720%2Fformat%3Awebp%2F1%2ACDJDFFPIpQp_tvJflmiuNA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A720%2Fformat%3Awebp%2F1%2ACDJDFFPIpQp_tvJflmiuNA.png" alt="ccf"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Awesome, great job making it this far; our setup is working. It’s time to commit our changes in the &lt;code&gt;django-aws-infrastructure&lt;/code&gt; repo. We will also add another file, a &lt;code&gt;.gitignore&lt;/code&gt; file:&lt;/p&gt;


&lt;p&gt;&lt;a href="https://gist.github.com/apotitech/5f4afbe7826322f8cf6de112d150cd8f" rel="noopener noreferrer"&gt;https://gist.github.com/apotitech/5f4afbe7826322f8cf6de112d150cd8f&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our code is all ready to go. Now we will save and commit our changes, then push them:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;p&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git add &lt;span class="nb"&gt;.&lt;/span&gt;&lt;br&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"initialize infrastructure"&lt;/span&gt;&lt;/p&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Congratulations
&lt;/h2&gt;

&lt;p&gt;Yay! We have now deployed our Django web application with ECS Service + Fargate on AWS. But for now it works with a &lt;a href="https://www.sqlite.org/index.html" rel="noopener noreferrer"&gt;SQLite&lt;/a&gt; file database. This file is recreated on every service restart, so our app cannot persist any data yet. &lt;a href="https://medium.com/@softwaresennin/deploying-django-application-on-aws-with-terraform-connecting-postgresql-rds-bcd9c29d6276" rel="noopener noreferrer"&gt;In the next article we’ll connect&lt;/a&gt; Django to &lt;a href="https://aws.amazon.com/rds/postgresql/" rel="noopener noreferrer"&gt;AWS RDS PostgreSQL&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you need technical consulting on your project or have any questions or suggestions, please comment below or connect with me directly on &lt;a href="https://www.linkedin.com/in/lionel-tchami/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Do not forget the 👏&lt;strong&gt;❤️&lt;/strong&gt; and share if you like this content!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Thank you for joining me, and best of luck with your AWS endeavors!&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>python</category>
    </item>
    <item>
      <title>Thank you so much everyone!</title>
      <dc:creator>Lionel♾️☁️</dc:creator>
      <pubDate>Sat, 21 Oct 2023 19:23:14 +0000</pubDate>
      <link>https://forem.com/softwaresennin/thank-you-so-much-everyone-p7j</link>
      <guid>https://forem.com/softwaresennin/thank-you-so-much-everyone-p7j</guid>
      <description>&lt;p&gt;Hi fam, I'm so excited to announce that I've reached 30,000 views and 6K+ followers! This is such a huge milestone for me, and I couldn't have done it without your support.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6v6sduimwnu4w0qymm0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6v6sduimwnu4w0qymm0.gif" alt="thanks" width="640" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm so grateful for all of your comments, shares, and feedback. It means the world to me that you're interested in what I have to say about DevOps, Cloud, SRE and AI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxze4mixasar6a643si2n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxze4mixasar6a643si2n.png" alt="CO" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Your support is what keeps me going and helps me write even more articles.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I love hearing from you and learning from your experiences as well.&lt;/p&gt;

&lt;p&gt;One of my favourite things about being a part of the &lt;strong&gt;dev.to&lt;/strong&gt; community is the people I've met along the way. I've learned so much from all of you, and I'm so &lt;strong&gt;grateful&lt;/strong&gt; for your support.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjpuly1uavw4acmxxv1r.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjpuly1uavw4acmxxv1r.gif" alt="thanks you" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I remember one time when I was writing an article on a new DevOps tool. I was struggling to understand a particular concept, so I asked for help in the &lt;strong&gt;&lt;em&gt;dev.to community&lt;/em&gt;&lt;/strong&gt;. Within minutes, I had several helpful responses from other developers. I was so impressed by their willingness to help me out.&lt;/p&gt;

&lt;p&gt;This is just one example of the many ways that the dev.to community has helped me. I'm so grateful to be a part of such a supportive and knowledgeable group of people.&lt;/p&gt;

&lt;p&gt;I'm so glad to be a part of the &lt;strong&gt;dev.to&lt;/strong&gt; community. Thank you for being a part of my journey! I couldn't have reached this milestone without you. I'm excited to continue sharing my knowledge and experience with the &lt;strong&gt;dev.to&lt;/strong&gt; community in the years to come.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtydx1smt3w1ym1ul04u.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtydx1smt3w1ym1ul04u.gif" alt="Thank you" width="498" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm also open to &lt;strong&gt;feedback on my existing articles&lt;/strong&gt;, please don't hesitate to let me know what you think.&lt;/p&gt;

&lt;p&gt;What topics would you like to see me write about in the future? Please share your suggestions in the comments below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiw51z035q4dq41ibbnsm.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiw51z035q4dq41ibbnsm.gif" alt="COmment" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>meta</category>
    </item>
    <item>
      <title>How Netflix Uses the Cloud - AWS</title>
      <dc:creator>Lionel♾️☁️</dc:creator>
      <pubDate>Sun, 24 Sep 2023 05:33:01 +0000</pubDate>
      <link>https://forem.com/aws-builders/how-netflix-uses-the-cloud-aws-191c</link>
      <guid>https://forem.com/aws-builders/how-netflix-uses-the-cloud-aws-191c</guid>
      <description>&lt;h2&gt;
  
  
  How they Use AWS Services
&lt;/h2&gt;

&lt;p&gt;Binge-watching 🍿 has become more and more of a phenomenon. Netflix 🎬 has transformed the way we watch shows. Beneath its interface 🔍 is a really sophisticated system that effortlessly introduces new shows and movies 🎥 and beams them to countless devices 📱💻 worldwide. &lt;/p&gt;

&lt;p&gt;Today, we'll take a deep dive into the magic ✨ behind Netflix's content orchestration and distribution, and unravel the AWS cloud ☁️ mechanics that support its beautiful framework. 🌐🛠️&lt;/p&gt;

&lt;h2&gt;
  
  
  1. The Netflix Magic 🎬🍿
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Metadata and Shows ✨
&lt;/h3&gt;

&lt;p&gt;We start our Netflix adventure with its administrators 🧙‍♂️ uploading fresh episodes and movies, with metadata such as tags 🏷️, titles 📜, and descriptions 🖋️. The metadata is the backbone of organizing and categorizing shows, making your search for the next binge-watch sleek and smooth. The metadata is stored in an Elasticsearch/OpenSearch database 🗃️, renowned for its lightning-fast ⚡ search! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F893vbdmwu3xitfnmgvaa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F893vbdmwu3xitfnmgvaa.png" alt="Metada"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Uploading and Process Videos! 🎥➡️☁️
&lt;/h3&gt;

&lt;p&gt;Once our Admin hits the &lt;code&gt;upload&lt;/code&gt; button, the video 🎞️ is uploaded to an Amazon S3 bucket 🪣. To make sure everyone, no matter their device 📱💻🖥️ or internet speed 🌐, gets a flawless viewing 🍿 experience, the uploaded video gets a complete makeover 💄. Thanks to &lt;code&gt;AWS Elemental MediaConvert&lt;/code&gt; 🌀, the video is converted into various resolutions (4K, 1080p, 720p), perfectly fitting all screen sizes. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccxla4di1n9zkiydzlo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccxla4di1n9zkiydzlo9.png" alt="2nd"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: AI Content Analysis in Action! 🔍🤖
&lt;/h3&gt;

&lt;p&gt;Next, let's get into quality checks. For that, Netflix uses &lt;code&gt;AWS Rekognition&lt;/code&gt; 🧠⚙️. It meticulously scans 🕵️‍♂️ all uploaded videos to spot any sensitive 🚫 content 📼. Thanks to this tool 🤖🔍, Netflix ensures that the platform remains a safe and wholesome space for all viewers. After all, it is &lt;code&gt;safety first, binge later!&lt;/code&gt; 🛡️🍿&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdx42n8sm8bwqu46fvr7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdx42n8sm8bwqu46fvr7.png" alt="ai content analysis"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 4: AWS Step Functions for Parallel Processing 🤹⚙️&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We all have shows that we are impatiently waiting to watch on our favorite streaming platform, so the Netflix team cannot afford to process one movie after another sequentially. To fix this, Netflix amped up its efficiency by using &lt;code&gt;AWS Step Functions&lt;/code&gt; for speedy parallel processing 🌌⏩ of videos. Think of &lt;code&gt;AWS Step Functions&lt;/code&gt; as an orchestra 🎼 of tasks, from content scanning 🔄 to AI content analysis 🔍🤖 (&lt;strong&gt;Step 1 to 3&lt;/strong&gt;), all playing to the same beat. With all these processes happening side by side, not only does the waiting time ⏳ drop drastically, but the whole Netflix ecosystem runs even smoother 🚀✨.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllferc0ytukggeihgfef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllferc0ytukggeihgfef.png" alt="step functions"&gt;&lt;/a&gt;&lt;/p&gt;
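&lt;p&gt;Conceptually, this fan-out can be expressed in Step Functions' state language with a &lt;code&gt;Parallel&lt;/code&gt; state. The sketch below is a bare-bones, hypothetical state machine, not Netflix's actual workflow; the function ARNs are placeholders:&lt;/p&gt;

```json
{
  "Comment": "Hypothetical sketch: process a video's branches in parallel",
  "StartAt": "ProcessVideo",
  "States": {
    "ProcessVideo": {
      "Type": "Parallel",
      "Branches": [
        {
          "StartAt": "Transcode",
          "States": {
            "Transcode": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:transcode",
              "End": true
            }
          }
        },
        {
          "StartAt": "ContentAnalysis",
          "States": {
            "ContentAnalysis": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:analyze",
              "End": true
            }
          }
        }
      ],
      "End": true
    }
  }
}
```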

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 5: Smart Storage Frequent vs Infrequent Access 📦🔄&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once the processing is completed, our videos are stored in a frequently accessed &lt;code&gt;Amazon S3 bucket&lt;/code&gt; 🥂💼, where they are all set for prime time. Here's the smart bit: any movie or episode that is not frequently watched 🤷‍♂️📉 gets transferred to a cozy spot in the 'chill-zone', &lt;code&gt;S3 Glacier&lt;/code&gt; 🛋️🪣. These storage switch-ups 🕺💃 not only keep costs low 💲⚖️ but also ensure that resources are used in the best way possible 🧠⚡&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Efficiency meets elegance! 🎩✨&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fne0jrxra4p62jrav4zai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fne0jrxra4p62jrav4zai.png" alt="storage"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 6: Deep Freeze Dive for Rare Gems 🏔️❄️&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;What about those shows and episodes 🎬 that only a select few cherish 💎? Netflix keeps them in &lt;code&gt;Amazon S3 Glacier&lt;/code&gt;. Think of S3 Glacier as a vault 🗝️ where cinematic rarities are safely kept until needed. This multi-layered storage strategy 🍰 ensures that all of Netflix's content is stored cost-effectively. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A balance of budget 💸 and value 💡 with flair! 🎉🌌&lt;/p&gt;

&lt;p&gt;Next -&amp;gt; The Netflix Express Lane&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe59nn3qrxqtldd822vpm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe59nn3qrxqtldd822vpm.png" alt="glacier s3"&gt;&lt;/a&gt;&lt;/p&gt;
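&lt;p&gt;In practice, this kind of automatic move to colder storage is what S3 lifecycle rules do. The snippet below is only a hedged sketch of the rule shape that boto3's &lt;code&gt;put_bucket_lifecycle_configuration&lt;/code&gt; accepts; the prefix, bucket name, and day count are invented, not Netflix's real settings:&lt;/p&gt;

```python
# Shape of an S3 lifecycle configuration that moves objects to Glacier
# after a period of low use. Prefix and day count are invented, not
# Netflix's real settings.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-cold-titles",
            "Filter": {"Prefix": "videos/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"}
            ],
        }
    ]
}

# With boto3 this would be applied roughly as (not executed here):
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-video-bucket", LifecycleConfiguration=lifecycle_config)
print(lifecycle_config["Rules"][0]["Transitions"][0]["StorageClass"])  # -> GLACIER
```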

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 7: Netflix and AWS CDN (Open Connect) 🚀🎥&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In 2020, Netflix became an international phenomenon. They needed to provide the same seamless quality to their international customers. As a solution, Netflix built its own Content Delivery Network, &lt;code&gt;Open Connect&lt;/code&gt; 🌐⚡ Open Connect places caching servers at edge locations around the world to slash wait times 🕐, making sure your next binge-worthy show is just a blink away 📺✨. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Lightning-quick movie magic! 🍿🎬🎉&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzw9taxclyk7bjik9dkax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzw9taxclyk7bjik9dkax.png" alt="CDN"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 8: Diving Deep with Data Magic 🔍✨&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The one thing that too many people treat as not really that important, but which makes all the difference to every company, is data. To unlock the mysteries of its user habits 🕵️‍♂️💡 and system prowess 🖥️🚀, Netflix uses the ELK stack trio: &lt;code&gt;Elasticsearch&lt;/code&gt;, &lt;code&gt;Logstash&lt;/code&gt;, and &lt;code&gt;Kibana&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;How do they all work together to make Netflix better?&lt;/p&gt;

&lt;p&gt;First, &lt;code&gt;Elasticsearch&lt;/code&gt; the &lt;strong&gt;detective&lt;/strong&gt; sorts and stashes away the logs (clues) 📁🔍; next, &lt;code&gt;Logstash&lt;/code&gt; the &lt;strong&gt;craftsman&lt;/strong&gt; processes and channels the data 🌊⚙️; and &lt;code&gt;Kibana&lt;/code&gt;? The artist 🎨 paints beautiful data &lt;strong&gt;visualizations&lt;/strong&gt; for Netflix engineers to use. With this dynamic trio, Netflix ensures every move is backed by data 📊 and strategy 🪄!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50yedd4ywpyk8tkaf2f3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50yedd4ywpyk8tkaf2f3.png" alt="Analytics"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;2. Netflix and us, viewers 🌍✨&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: One Service, Many Screens 📺📱💻🎮&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now let us step into our own shoes as viewers. Whether you're on a smartphone 📱, TV 📺, laptop 💻, or even a gaming console 🎮, you can access your favorite streaming platform. This chameleon-like adaptability ensures that no matter your device, you always get 🍿🎬 the same responsive, seamless, and beautiful user experience. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3csewuaej15rgm0uo1iv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3csewuaej15rgm0uo1iv.png" alt="screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2: A Dazzling Dance of Design and Delivery 🌐💃&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Whenever you open Netflix, you're actually viewing their website built using the magical &lt;code&gt;React.js&lt;/code&gt; 🪄🖥️. React.js is a &lt;code&gt;JavaScript&lt;/code&gt; library and is what gives you that dazzling experience. When you hover over a movie, a video snippet plays that captivates 🌀🎨 your attention. And who ties it all together? The CDN (&lt;code&gt;Open Connect&lt;/code&gt;), acting as the bridge 🌉, ensuring that every video and frame is delivered in perfect harmony 🎶🎬. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F800l6fe7advvxtplvsn1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F800l6fe7advvxtplvsn1.png" alt="frontend"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3: Searching for a show 🔍💎&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Looking for your next show? Go to Netflix's search page and &lt;strong&gt;type away&lt;/strong&gt;! 🎹✨ As you search, the website communicates with the CDN 🌐, which connects with Netflix's backstage, the &lt;code&gt;AWS API Gateway&lt;/code&gt; and microservices 🎩⚙️. This triggers a &lt;code&gt;Lambda&lt;/code&gt; function (a magic spell, if you will 🪄⚡) that searches / queries the database using the metadata described earlier.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Your personal concierge for your binge-watching journey. 🍿🌟&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsols220owd8r3p4vcvar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsols220owd8r3p4vcvar.png" alt="searching / indexing"&gt;&lt;/a&gt;&lt;/p&gt;
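&lt;p&gt;To make the idea concrete, here is a toy sketch of what such a search function might look like. Everything here is invented for illustration; the real Lambda would query the Elasticsearch/OpenSearch metadata index rather than an in-memory list:&lt;/p&gt;

```python
# Invented in-memory "index" standing in for Elasticsearch/OpenSearch;
# the titles and tags here are illustrative only.
CATALOG = [
    {"title": "Stranger Things", "tags": ["sci-fi", "thriller"]},
    {"title": "The Crown", "tags": ["drama", "history"]},
]

def handler(event, context):
    """Minimal Lambda-style handler: match the query against titles and tags."""
    query = event.get("query", "").lower()
    hits = [
        item for item in CATALOG
        if query in item["title"].lower()
        or query in [t.lower() for t in item["tags"]]
    ]
    return {"statusCode": 200, "results": hits}

print(handler({"query": "drama"}, None)["results"][0]["title"])  # -> The Crown
```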

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 4: Using CDN for Video Delivery🚂✨&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once the search is complete, the CDN &lt;code&gt;(Open Connect)&lt;/code&gt;, like a cinematic librarian 🤓📚, quickly grabs the video's metadata from the digital archives -&amp;gt; the &lt;code&gt;Database&lt;/code&gt; and, at the same time, connects to the &lt;code&gt;S3 bucket&lt;/code&gt; where the shows 🎬🪣 are kept. Both the movie's metadata (details) and the movie itself are cached / stashed within the CDN 🌐🎥, prepping for future movie nights.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 5: Seamless Playback 🍿🌌&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Thanks to the CDN's &lt;code&gt;(Open Connect)&lt;/code&gt; 🌐 smart caching / stashing skills 🧠🔐, frequently accessed content streams very smoothly (minimal latency)! Regardless of a user's location 🌍, this caching mechanism ensures a lag-free experience 🎥💨. So sit back, relax, and enjoy a seamless cinema spree! 🛋️🎉&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qv7ay5taa7ayui59qla.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qv7ay5taa7ayui59qla.png" alt="playback"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion 🌟&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I have yet to study the backends of other streaming platforms. However, from my research, Netflix's behind-the-scenes magic ✨ is nothing short of tech wizardry 🎩🔮. With AWS's power trio - &lt;strong&gt;Elemental MediaConvert&lt;/strong&gt;, &lt;strong&gt;Rekognition&lt;/strong&gt;, and &lt;strong&gt;Step Functions&lt;/strong&gt; - Netflix consistently concocts a potion of optimized content 🏰. In addition, AWS's storage solutions 🗄️, the Open Connect CDN 🌐⚡, and ELK 🔍 ensure that viewers like you and me get a cost-friendly, lightning-quick 🚀, and always-on 🌍 streaming experience. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxx4we1n9sfsk7eqxgoyi.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxx4we1n9sfsk7eqxgoyi.gif" alt="NETFLIX ARCHITECTURE"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As all our screens connect to Netflix 📺📱💻, one thing is very clear - their promise of delivering seamless content remains unwavering 🤘🎬!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>tutorial</category>
      <category>ai</category>
    </item>
    <item>
      <title>Every Project Deserves its CI/CD pipeline, no matter how small</title>
      <dc:creator>Lionel♾️☁️</dc:creator>
      <pubDate>Wed, 30 Aug 2023 20:30:00 +0000</pubDate>
      <link>https://forem.com/aws-builders/every-project-deserves-its-cicd-pipeline-no-matter-how-small-19j9</link>
      <guid>https://forem.com/aws-builders/every-project-deserves-its-cicd-pipeline-no-matter-how-small-19j9</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;TL;DR&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In today's tech industry, setting up a CI/CD pipeline is quite easy. Creating a CI/CD pipeline even for a simple side project is a great way to learn many things. Today we will work on one of my side projects, using Portainer, GitLab and Docker for the setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2du449rfv50jst0ddrg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2du449rfv50jst0ddrg.jpg" alt="Gitlab" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;My sample project&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As the founder of Apoti Development Association (A.D.A.), an NGO, I like organizing technical events in the Buea area (SW Region of Cameroon, Africa). I was frequently asked whether there was a way to know about all the upcoming events (the meetups, the JUGs, the ones organized by the local associations, etc.). After taking some time to look into it, I realized there was no single place which listed them all. So I came up with &lt;a href="https://apotidev.org/events/" rel="noopener noreferrer"&gt;https://apotidev.org/events&lt;/a&gt;, a simple web page which tries to keep an up-to-date list of all the events. The project is available on GitLab.&lt;/p&gt;

&lt;p&gt;Disclaimer: even though this is a simple project, its complexity is not the point here. The components of the CI/CD pipeline we will detail can be used in much the same way for more complicated projects, and they are a particularly nice fit for micro-services.&lt;/p&gt;

&lt;h2&gt;
  
  
  A look at the code
&lt;/h2&gt;

&lt;p&gt;To make things as simple as possible, we have an &lt;code&gt;events.json&lt;/code&gt; file in which all new events are added. Let's look at a snippet of it&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"events"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Let's Serve Day 2018"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"desc"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Hi everyone! We're back with 50, 60-minute practitioner-led sessions and live Q&amp;amp;A on Slack. Our tracks include CI/CD, Cloud-Native Infrastructure, Cultural Transformations, DevSecOps, and Site Reliability Engineering. 24 hours. 112 speakers. Free online."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"date"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"October 17, 2018, online event"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"ts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"20181017T000000"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"link"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://www.alldaydevops.com/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"sponsors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"all-day-devops"&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Creation of a Business Blockchain (lab) &amp;amp; introduction to smart contracts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"desc"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Come with your laptop! We invite you to join us to create the first prototype of a Business Blockchain (Lab) and get an introduction to smart contracts."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"ts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"20181004T181500"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"date"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"October 4 at 6:15 pm at CEEI"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"link"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://www.meetup.com/en-EN/IBM-Cloud-Cote-d-Azur-Meetup/events/254472667/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"sponsors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ibm"&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;…&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our mustache &lt;a href="https://gitlab.com/lucj/ada.events/blob/master/index.mustache" rel="noopener noreferrer"&gt;template&lt;/a&gt; is applied to this file to generate the final web assets. &lt;/p&gt;

&lt;h2&gt;
  
  
  Docker multi-stage build
&lt;/h2&gt;

&lt;p&gt;Once our web assets have been generated, they are copied into an nginx image, which is the image deployed on our target machine.&lt;/p&gt;

&lt;p&gt;Thanks to Docker's multi-stage build, our build is in two parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;creation of the assets&lt;/li&gt;
&lt;li&gt;generation of the final image containing the assets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's look at the Dockerfile used for the build&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Generate the assets  &lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:8.12.0-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . /build  &lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /build  &lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm i  
&lt;span class="k"&gt;RUN &lt;/span&gt;node clean.js  
&lt;span class="k"&gt;RUN &lt;/span&gt;./node_modules/mustache/bin/mustache events.json index.mustache &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; index.html


&lt;span class="c"&gt;# Build the final Docker image used to serve them&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; nginx:1.14.0  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /build/*.html /usr/share/nginx/html/  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; events.json /usr/share/nginx/html/  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; css /usr/share/nginx/html/css  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; js /usr/share/nginx/html/js  &lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; img /usr/share/nginx/html/img&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Local testing
&lt;/h2&gt;

&lt;p&gt;Before we proceed, we need to test the generation of our site. Just clone the repository and run the test script. This script will create an image and run a container out of it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# First Clone the repo&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;git clone git@gitlab.com:lucj/ada.events.git

&lt;span class="c"&gt;# Next, cd into the repo&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;sophia.events
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let us run our test script&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;./test.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what our output looks like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Sending build context to Docker daemon  2.588MB  
Step 1/12 : FROM node:8.12.0-alpine AS build  
 ---&amp;gt; df48b68da02a  
Step 2/12 : COPY . /build  
 ---&amp;gt; f4005274aadf  
Step 3/12 : WORKDIR /build  
 ---&amp;gt; Running in 5222c3b6cf12  
Removing intermediate container 5222c3b6cf12  
 ---&amp;gt; 81947306e4af  
Step 4/12 : RUN npm i  
 ---&amp;gt; Running in de4e6182036b  
npm notice created a lockfile as package-lock.json. You should commit this file.  
npm WARN www@1.0.0 No repository field.  
added 2 packages from 3 contributors and audited 2 packages in 1.675s  
found 0 vulnerabilities  
Removing intermediate container de4e6182036b  
 ---&amp;gt; d0eb4627e01f  
Step 5/12 : RUN node clean.js  
 ---&amp;gt; Running in f4d3c4745901  
Removing intermediate container f4d3c4745901  
 ---&amp;gt; 602987ce7162  
Step 6/12 : RUN ./node_modules/mustache/bin/mustache events.json index.mustache &amp;gt; index.html  
 ---&amp;gt; Running in 05b5ebd73b89  
Removing intermediate container 05b5ebd73b89  
 ---&amp;gt; d982ff9cc61c  
Step 7/12 : FROM nginx:1.14.0  
 ---&amp;gt; 86898218889a  
Step 8/12 : COPY --from=build /build/*.html /usr/share/nginx/html/  
 ---&amp;gt; Using cache  
 ---&amp;gt; e0c25127223f  
Step 9/12 : COPY events.json /usr/share/nginx/html/  
 ---&amp;gt; Using cache  
 ---&amp;gt; 64e8a1c5e79d  
Step 10/12 : COPY css /usr/share/nginx/html/css  
 ---&amp;gt; Using cache  
 ---&amp;gt; e524c31b64c2  
Step 11/12 : COPY js /usr/share/nginx/html/js  
 ---&amp;gt; Using cache  
 ---&amp;gt; 1ef9dece9bb4  
Step 12/12 : COPY img /usr/share/nginx/html/img  
 ---&amp;gt; e50bf7836d2f  
Successfully built e50bf7836d2f  
Successfully tagged registry.gitlab.com/ada/ada.events:latest  
=&amp;gt; web site available on http://localhost:32768
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now access our webpage using the URL provided at the end.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2AY21lGvz9lCwXw6GAq_-1Bw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2AY21lGvz9lCwXw6GAq_-1Bw.png" alt="one" width="800" height="703"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Our target environment&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Provisioning a virtual machine on a cloud provider
&lt;/h3&gt;

&lt;p&gt;As you have probably noticed, this web site is not critical (only a few dozen visits a day), so it only runs on a single virtual machine. This VM was created with Docker Machine on &lt;a href="http://amazon.aws.com/" rel="noopener noreferrer"&gt;AWS&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Docker swarm
&lt;/h3&gt;

&lt;p&gt;We configured our VM (virtual machine) above to run the Docker daemon in Swarm mode, which lets us use the stack, service, secret and config primitives, as well as the great (very easy to use) orchestration abilities of Docker Swarm.&lt;/p&gt;

&lt;h3&gt;
  
  
  The application running as a Docker stack
&lt;/h3&gt;

&lt;p&gt;The file below -&amp;gt; &lt;code&gt;ada.yml&lt;/code&gt; defines the service which runs our Nginx web server containing the web assets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.7"&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;www&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.gitlab.com/lucj/sophia.events&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;proxy&lt;/span&gt;
    &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replicated&lt;/span&gt;
      &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
      &lt;span class="na"&gt;update_config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;parallelism&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
        &lt;span class="na"&gt;delay&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
      &lt;span class="na"&gt;restart_policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;on-failure&lt;/span&gt;

&lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;proxy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;external&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break this down &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The docker image is in our private registry on gitlab.com.&lt;/li&gt;
&lt;li&gt;The service is in replicated mode with 2 replicas, meaning 2 tasks / 2 containers of the service are always running at the same time. A virtual IP address (VIP) is associated with this service by Docker Swarm, so each request targeted at the service is load balanced between our two replicas.&lt;/li&gt;
&lt;li&gt;Every time the service is updated (like deploying a new version of the website), one replica is updated first and the second one 10 seconds later. This makes sure our website stays available even during the update process. &lt;/li&gt;
&lt;li&gt;Our service is also attached to the external &lt;em&gt;proxy&lt;/em&gt; network, so that our TLS termination service (which runs as another service deployed on the swarm, outside the scope of this project) can send requests to our &lt;em&gt;www&lt;/em&gt; service.&lt;/li&gt;
&lt;/ul&gt;
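&lt;p&gt;The rolling update configured above can be sketched as a toy loop (replica names are illustrative, and the 10s delay is shortened to 1s here):&lt;/p&gt;

```shell
# Toy walk-through of the rolling update the stack file configures:
# tasks are replaced one at a time (parallelism: 1), with a pause in
# between (update_config.delay, shortened here from 10s to 1s).
DELAY=1
for task in www.1 www.2; do
  echo "updating $task ..."
  sleep "$DELAY"              # the other replica keeps serving traffic
  echo "$task now runs the new image"
done
```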

&lt;p&gt;Our stack is executed with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker stack deploy &lt;span class="nt"&gt;-c&lt;/span&gt; ada.yml ada_events
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Portainer: One tool to manage them all
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://portainer.io/" rel="noopener noreferrer"&gt;Portainer&lt;/a&gt; is a really great web UI which will help us to manage all our Docker hosts and Docker Swarm clusters very easily. Let's take a look at its interface where it lists all our stacks available in the swarm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2AQ28WiD0-_8zqDv_CtWdqRA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2AQ28WiD0-_8zqDv_CtWdqRA.png" alt="two" width="800" height="531"&gt;&lt;/a&gt;&lt;br&gt;
As you can see above, our current setup has 3 stacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First we have &lt;code&gt;Portainer&lt;/code&gt; itself&lt;/li&gt;
&lt;li&gt;Then we have &lt;code&gt;ada_events&lt;/code&gt; (named &lt;code&gt;sophia_events&lt;/code&gt; in the screenshot), which contains the service that runs our web site&lt;/li&gt;
&lt;li&gt;Last we have &lt;code&gt;tls&lt;/code&gt;, our TLS termination service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If we list the details of the &lt;em&gt;www&lt;/em&gt; service in the &lt;em&gt;ada_events&lt;/em&gt; stack, we can see that the &lt;strong&gt;Service webhook&lt;/strong&gt; is activated. This feature, available since Portainer version 1.19.2, lets us define an &lt;code&gt;HTTP POST endpoint&lt;/code&gt; which we can call to trigger an update of our service. As we will see later on, our GitLab runner is in charge of calling this webhook.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2ACxk54DFLdcfZjvGTDvm56w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2ACxk54DFLdcfZjvGTDvm56w.png" alt="three" width="800" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: As you see in the screenshot above, I use localhost:8888 to access &lt;strong&gt;Portainer&lt;/strong&gt;. Since I don't want to expose our &lt;strong&gt;Portainer&lt;/strong&gt; instance to the external world, access is done through an &lt;code&gt;ssh&lt;/code&gt; tunnel, which we open with the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i ~/.docker/machine/machines/labs/id_rsa -NL 8888:localhost:9000 $USER@$HOST
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we have done this, all requests targeted at our local machine on port 8888 -&amp;gt; &lt;code&gt;localhost:8888&lt;/code&gt; are sent to port 9000 on our VM through ssh. Port 9000 is the port where Portainer is running on our VM but this port is not opened to the outside world. We used a security group in our &lt;code&gt;AWS&lt;/code&gt; config to block it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB&lt;/strong&gt;: In the command above, the &lt;code&gt;ssh&lt;/code&gt; key used to connect to the VM is the one generated by Docker Machine during the creation of the VM.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitLab runner
&lt;/h2&gt;

&lt;p&gt;A GitLab runner is a continuous integration tool that helps automate testing and deploying applications. It works with GitLab CI to run the jobs defined in the project's &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;So in our project, our GitLab runner is in charge of executing the actions defined in the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file. On GitLab, you can use your own runners or the shared runners available. In this project, we used a VM on &lt;code&gt;AWS&lt;/code&gt; as our runner. &lt;/p&gt;

&lt;p&gt;First we register our runner with the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;CONFIG_FOLDER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/tmp/gitlab-runner-config

&lt;span class="nv"&gt;$ &lt;/span&gt;docker run — &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="se"&gt;\ &lt;/span&gt; 
 &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;$CONFIG_FOLDER&lt;/span&gt;:/etc/gitlab-runner &lt;span class="se"&gt;\ &lt;/span&gt; 
 gitlab/gitlab-runner register &lt;span class="se"&gt;\ &lt;/span&gt; 
   &lt;span class="nt"&gt;--non-interactive&lt;/span&gt; &lt;span class="se"&gt;\ &lt;/span&gt; 
   &lt;span class="nt"&gt;--executor&lt;/span&gt; &lt;span class="s2"&gt;"docker"&lt;/span&gt; &lt;span class="se"&gt;\ &lt;/span&gt; 
   —-docker-image docker:stable &lt;span class="se"&gt;\ &lt;/span&gt; 
   &lt;span class="nt"&gt;--url&lt;/span&gt; &lt;span class="s2"&gt;"[https://gitlab.com/](https://gitlab.com/)"&lt;/span&gt; &lt;span class="se"&gt;\ &lt;/span&gt; 
   —-registration-token &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PROJECT_TOKEN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\ &lt;/span&gt; 
   —-description &lt;span class="s2"&gt;"AWS Docker Runner"&lt;/span&gt; &lt;span class="se"&gt;\ &lt;/span&gt; 
   &lt;span class="nt"&gt;--tag-list&lt;/span&gt; &lt;span class="s2"&gt;"docker"&lt;/span&gt; &lt;span class="se"&gt;\ &lt;/span&gt; 
   &lt;span class="nt"&gt;--run-untagged&lt;/span&gt; &lt;span class="se"&gt;\ &lt;/span&gt; 
   —-locked&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"false"&lt;/span&gt; &lt;span class="se"&gt;\ &lt;/span&gt; 
   &lt;span class="nt"&gt;--docker-privileged&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you see above, we need the &lt;code&gt;$PROJECT_TOKEN&lt;/code&gt; option. We get its value from the project's settings page on GitLab, where new runners are registered.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2AfBHXnpbMKwM0yraHaj1-hw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2AfBHXnpbMKwM0yraHaj1-hw.png" alt="three" width="800" height="603"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we have registered our GitLab runner, we can start it&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;CONFIG_FOLDER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/tmp/gitlab-runner-config

&lt;span class="nv"&gt;$ &lt;/span&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\ &lt;/span&gt; 
 &lt;span class="nt"&gt;--name&lt;/span&gt; gitlab-runner &lt;span class="se"&gt;\ &lt;/span&gt; 
 —-restart always &lt;span class="se"&gt;\ &lt;/span&gt; 
 &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;$CONFIG_FOLDER&lt;/span&gt;:/etc/gitlab-runner &lt;span class="se"&gt;\ &lt;/span&gt; 
 &lt;span class="nt"&gt;-v&lt;/span&gt; /var/run/docker.sock:/var/run/docker.sock &lt;span class="se"&gt;\ &lt;/span&gt; 
 gitlab/gitlab-runner:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once our VM has been set up as a GitLab runner, it is listed on the CI/CD page under our project's settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2A7evU79ZJ4VWH6QFdT_KCgA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2A7evU79ZJ4VWH6QFdT_KCgA.png" alt="four" width="800" height="579"&gt;&lt;/a&gt;&lt;br&gt;
Now that we have a runner, it can receive work every time we commit and push to our git repo. It sequentially runs the different stages defined in our &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file. Let's look at our &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file -&amp;gt; the file that configures our GitLab CI/CD pipeline&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  CONTAINER_IMAGE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.gitlab.com/$CI_PROJECT_PATH&lt;/span&gt;
&lt;span class="na"&gt;  DOCKER_HOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tcp://docker:2375&lt;/span&gt;
&lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;  - test&lt;/span&gt;
&lt;span class="s"&gt;  - build&lt;/span&gt;
&lt;span class="s"&gt;  - deploy&lt;/span&gt;

&lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
&lt;span class="na"&gt;  image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node:8.12.0-alpine&lt;/span&gt;
&lt;span class="na"&gt;  script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;    - npm i&lt;/span&gt;
&lt;span class="s"&gt;    - npm test&lt;/span&gt;

&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build&lt;/span&gt;
&lt;span class="na"&gt;  image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker:stable&lt;/span&gt;
&lt;span class="na"&gt;  services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;    - docker:dind&lt;/span&gt;
&lt;span class="na"&gt;  script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;    - docker image build -t $CONTAINER_IMAGE:$CI_BUILD_REF -t $CONTAINER_IMAGE:latest .&lt;/span&gt;
&lt;span class="s"&gt;    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com&lt;/span&gt;
&lt;span class="s"&gt;    - docker image push $CONTAINER_IMAGE:latest&lt;/span&gt;
&lt;span class="s"&gt;    - docker image push $CONTAINER_IMAGE:$CI_BUILD_REF&lt;/span&gt;
&lt;span class="na"&gt;  only&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;    - master&lt;/span&gt;

&lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;  stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;
&lt;span class="na"&gt;  image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;alpine&lt;/span&gt;
&lt;span class="na"&gt;  script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;    - apk add --update curl&lt;/span&gt;
&lt;span class="s"&gt;    - curl -XPOST $WWW_WEBHOOK&lt;/span&gt;
&lt;span class="na"&gt;  only&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;    - master&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down the stages&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, the test stage runs some pre-checks, ensuring that the &lt;code&gt;events.json&lt;/code&gt; file is well formed and that no images are missing.&lt;/li&gt;
&lt;li&gt;Next, the build stage uses docker to build the image and then pushes it to our GitLab registry.&lt;/li&gt;
&lt;li&gt;Lastly, the deploy stage triggers the update of our service via a &lt;code&gt;webhook&lt;/code&gt; sent to our &lt;strong&gt;Portainer&lt;/strong&gt; app. Note that the &lt;code&gt;WWW_WEBHOOK&lt;/code&gt; variable is defined in the CI/CD settings of our project page on GitLab.com.&lt;/li&gt;
&lt;/ul&gt;
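&lt;p&gt;The pre-checks in the test stage can be as simple as a script that parses the file and verifies that every referenced image exists. Here is a minimal sketch, assuming each event in &lt;code&gt;events.json&lt;/code&gt; is an object with an optional &lt;code&gt;image&lt;/code&gt; path (the schema is an assumption, not taken from the pipeline above):&lt;/p&gt;

```python
import json
from pathlib import Path

def check_events(events_path="events.json"):
    """Parse events.json (raises on malformed JSON) and return the
    list of referenced image paths that do not exist on disk."""
    events = json.loads(Path(events_path).read_text())
    return [e["image"] for e in events
            if "image" in e and not Path(e["image"]).exists()]
```

&lt;p&gt;A CI job would simply fail the build whenever the returned list is non-empty.&lt;/p&gt;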

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2AfpPmRJeAqtR6WWD_nc3xJQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2AfpPmRJeAqtR6WWD_nc3xJQ.png" alt="five" width="800" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Some &lt;strong&gt;Notes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Our GitLab runner is running inside a container in our Docker swarm. As mentioned before, we could have used a shared runner instead (publicly available runners that share their time between the jobs of different projects hosted on GitLab). In our case, however, the runner must have access to our Portainer endpoint (to send the webhook), and I don't want our &lt;strong&gt;Portainer&lt;/strong&gt; app to be publicly accessible, so I preferred having the runner inside the cluster. It is also more secure this way.&lt;/p&gt;

&lt;p&gt;In addition, because our runner is in a Docker container, it can send the webhook to the &lt;strong&gt;IP address&lt;/strong&gt; of the &lt;strong&gt;Docker0 bridge network&lt;/strong&gt; and reach Portainer through port 9000, which it exposes on the host. Thus, our webhook has the following format: &lt;em&gt;&lt;a href="http://172.17.0.1:9000/api%5B%E2%80%A6%5Da7-4af2-a95b-b748d92f1b3b" rel="noopener noreferrer"&gt;http://172.17.0.1:9000/api[…]a7-4af2-a95b-b748d92f1b3b&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
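&lt;p&gt;The webhook call itself is just an HTTP POST, exactly like the &lt;code&gt;curl -XPOST $WWW_WEBHOOK&lt;/code&gt; step in the pipeline. A minimal Python sketch, where the host, port, and webhook ID are illustrative placeholders:&lt;/p&gt;

```python
import urllib.request

def build_webhook_url(host="172.17.0.1", port=9000, webhook_id="example-id"):
    # Portainer exposes service webhooks under /api/webhooks/<id>
    return f"http://{host}:{port}/api/webhooks/{webhook_id}"

def trigger_redeploy(url):
    """POST an empty body to the webhook; Portainer then redeploys the service."""
    req = urllib.request.Request(url, data=b"", method="POST")
    return urllib.request.urlopen(req)
```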

&lt;h2&gt;
  
  
  &lt;strong&gt;The Deployment&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Deploying a new version of our app follows the workflow below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ngm4doub9ythjnm7hhj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ngm4doub9ythjnm7hhj.png" alt="six" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;It starts with a developer pushing some changes to our GitLab repo. The changes mainly involve adding or updating one or more events in our &lt;code&gt;events.json&lt;/code&gt; file, while also adding some sponsors' logos.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After this, the GitLab runner performs all the actions that we defined in the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then the GitLab runner calls our webhook that is defined in &lt;strong&gt;Portainer&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, upon receiving the webhook, &lt;strong&gt;Portainer&lt;/strong&gt; deploys the newest version of the &lt;em&gt;www&lt;/em&gt; service by calling the &lt;em&gt;Docker Swarm API&lt;/em&gt;. Portainer can access the API because the &lt;code&gt;/var/run/docker.sock&lt;/code&gt; socket is bind-mounted when it starts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now our users can access the newest version of our events website.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Let's Test
&lt;/h3&gt;

&lt;p&gt;Let's test our pipeline by making a couple of changes to the code and then committing and pushing them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s1"&gt;'Fix image'&lt;/span&gt;  

&lt;span class="nv"&gt;$ &lt;/span&gt;git push origin master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see in the screenshot below, our changes triggered our pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2AJeVsifm36zx_ZfjXTqHzQQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2AJeVsifm36zx_ZfjXTqHzQQ.png" alt="seven" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Breaking down the steps
&lt;/h3&gt;

&lt;p&gt;On the &lt;strong&gt;Portainer&lt;/strong&gt; side, the webhook was received and the service update was performed. Although we cannot see it clearly here, one replica was updated first. As we mentioned before, this left the website accessible through the other replica, which was itself updated a couple of seconds later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2AjoNxer_6SRvyCLj8ACnmLA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1100%2Fformat%3Awebp%2F1%2AjoNxer_6SRvyCLj8ACnmLA.png" alt="eight" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Although this was a tiny project, setting up a CI/CD pipeline for it was a good exercise. First it helped me get more familiar with GitLab (which has been on my To-Learn list for quite some time). Having done this project, I can say that it is an excellent, professional product. Also, this project was a great opportunity for me to play with the long awaited &lt;strong&gt;webhook&lt;/strong&gt; feature available in updated versions of &lt;strong&gt;Portainer&lt;/strong&gt;. Lastly, choosing to use Docker Swarm for this project was a real no-brainer - so cool and easy to use!&lt;/p&gt;

&lt;p&gt;Hope you found this project as interesting as I did. No matter how small your project is, it would be a great idea to build it using CI/CD. &lt;/p&gt;

&lt;p&gt;What projects are you working on and how has this article inspired you? Please comment below.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Create your first Web-app using ChatGPT</title>
      <dc:creator>Lionel♾️☁️</dc:creator>
      <pubDate>Fri, 25 Aug 2023 19:24:00 +0000</pubDate>
      <link>https://forem.com/softwaresennin/create-your-first-web-app-using-chatgpt-2174</link>
      <guid>https://forem.com/softwaresennin/create-your-first-web-app-using-chatgpt-2174</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Language translation is essential in our globalized world, bridging language gaps. Web apps are becoming more important for seamless communication due to the rising need for language translation services.&lt;/p&gt;

&lt;p&gt;This article discusses creating a web-based language translation application utilizing ChatGPT and Streamlit. ChatGPT, an advanced OpenAI language model, can generate natural-sounding text responses from inputs provided. On the other hand, Streamlit is a powerful open-source framework for fast, easy data science web application development.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use ChatGPT to create a translator web app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5s135h2hxxn7eqgpcg9b.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5s135h2hxxn7eqgpcg9b.gif" alt="here"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Here we go....&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  I. Starting ChatGPT and Streamlit
&lt;/h2&gt;

&lt;p&gt;Before building our language translation web application, we must set up our development environment and install the necessary packages. This section explains how to configure and install OpenAI with Streamlit.&lt;/p&gt;

&lt;p&gt;Our web-based translation tool is powered by ChatGPT, a language model created by OpenAI. To use OpenAI's resources, we will need to get an API key from their website. With the API key in hand, we can install OpenAI's package using pip in our command prompt or terminal:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

pip &lt;span class="nb"&gt;install &lt;/span&gt;openai


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, we need to setup &lt;a href="https://docs.streamlit.io/library/get-started" rel="noopener noreferrer"&gt;Streamlit&lt;/a&gt;, which is the framework we will be using to build our web app. We can also use &lt;em&gt;pip&lt;/em&gt; to install Streamlit:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

pip &lt;span class="nb"&gt;install &lt;/span&gt;streamlit


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now that OpenAI and Streamlit have been installed, we can start building our translation web app.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For more information on ChatGPT, Streamlit, and the OpenAI API key acquisition procedure, check out my in-depth tutorial, &lt;strong&gt;Building a ChatGPT Web Application using Streamlit and OpenAI: A Step-by-Step Tutorial&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  II. Developing the Translation Web-App
&lt;/h2&gt;

&lt;p&gt;After installing the necessary packages, let us now see how to create the user interface for our language translation web-based application. Please follow this full guide:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new file &lt;code&gt;app.py&lt;/code&gt; in your favourite Python editor. Copy the code snippets into this file and save your changes.&lt;/li&gt;
&lt;li&gt;Import the necessary packages to start coding:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="c1"&gt;# Importing required packages  
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;streamlit&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;  
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;We will use ChatGPT model engine &lt;code&gt;text-davinci-003&lt;/code&gt;. This model can mimic human responses and even translate across languages. This model requires registration and an OpenAI &lt;code&gt;api_key&lt;/code&gt;, as explained in step (V.5) of this article. Now, add the following two lines to your script:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Set the model engine and your OpenAI API key  &lt;/span&gt;
model_engine &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"text-davinci-003"&lt;/span&gt;  
openai.api_key &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"your_secret_key"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
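&lt;p&gt;Hardcoding the secret key is fine for a quick demo, but it is safer to read it from an environment variable. A small sketch (the variable name &lt;code&gt;OPENAI_API_KEY&lt;/code&gt; is a common convention, not something the snippet above requires):&lt;/p&gt;

```python
import os

def load_api_key(var_name="OPENAI_API_KEY"):
    """Return the API key from the environment, or an empty string if unset."""
    return os.environ.get(var_name, "")

# openai.api_key = load_api_key()  # use instead of a hardcoded string
```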

&lt;ol&gt;
&lt;li&gt;Next, we will create the &lt;code&gt;translate_text&lt;/code&gt; function, which takes care of the &lt;strong&gt;translation&lt;/strong&gt; process. This function takes the user's selected &lt;strong&gt;target language&lt;/strong&gt; and &lt;strong&gt;text input&lt;/strong&gt; and returns the &lt;strong&gt;translated text&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
# Define a function to handle the translation process
def translate_text(text, target_language):
    # Define the prompt for the ChatGPT model
    prompt = f"Translate '{text}' to {target_language}"

    # Generate the translated text using ChatGPT
    response = openai.Completion.create(
        engine=model_engine,
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
        temperature=0.7,
    )

    # Extract the translated text from the response
    translated_text = response.choices[0].text.strip()

    return translated_text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
5. Now to bring it all together, we will create the `main()` function that will call the needed functions to create our language web-based translation app. Let us setup this app, so the user will be able to translate text into `Arabic, English, Spanish, French, German, Japanese, Russian, Korean, Chinese, Yoruba` :

```python


# Define the main function that sets up the Streamlit UI and handles the translation process  
def main():  
# Set up the Streamlit UI  
st.sidebar.header('Language Translation App')  
st.sidebar.write('Enter text to translate and select the target language:')  

# Create a text input for the user to enter the text to be translated  
text_input = st.text_input('Enter text to translate')  

# Create a selectbox for the user to select the target language  
target_language = st.selectbox('Select language', ['Arabic', 'English', 'Spanish', 'French', 'German', 'Japanese', 'Russian', 'Korean', 'Chinese', 'Yoruba'])  

# Create a button that the user can click to initiate the translation process  
translate_button = st.button('Translate')  

# Create a placeholder where the translated text will be displayed  
translated_text = st.empty()  

# Handle the translation process when the user clicks the translate button  
if translate_button:  
translated_text.text('Translating...')  
translated_text.text(translate_text(text_input, target_language))


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Now, for our last step, we call the &lt;code&gt;main&lt;/code&gt; function that we created:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="c1"&gt;# Call the main function  
&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;  
&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;All done, now go ahead and save the file. We can give our Python file any name. I will call it &lt;code&gt;Translator_GPT.py&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We can now run our translator app in our terminal.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;span class="n"&gt;streamlit&lt;/span&gt; &lt;span class="n"&gt;run&lt;/span&gt; &lt;span class="n"&gt;Translator_GPT&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;py&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once you run this command, a page will open in your default web browser showing our web app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fburlmtlw6ff1o3mgdkgg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fburlmtlw6ff1o3mgdkgg.png" alt="here"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please go ahead and test the app for yourself. This is what it will look like. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A4800%2Fformat%3Awebp%2F1%2Aqw4tQfsDGS6GZ-wJIcMGPA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A4800%2Fformat%3Awebp%2F1%2Aqw4tQfsDGS6GZ-wJIcMGPA.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  III. Making our Web-app Available to the World
&lt;/h2&gt;

&lt;p&gt;There are three easy ways to make our app public. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://streamlit.io/" rel="noopener noreferrer"&gt;Streamlit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/" rel="noopener noreferrer"&gt;Hugging Face&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.heroku.com/" rel="noopener noreferrer"&gt;Heroku&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will look at how to make our app available to everyone using Streamlit.&lt;/p&gt;

&lt;h4&gt;
  
  
  III.1 Before Deploying to the Cloud…
&lt;/h4&gt;

&lt;p&gt;We are now ready to deploy our translation app to the rest of the world. Before doing that, we need to do some house cleaning.&lt;/p&gt;

&lt;h4&gt;
  
  
  III.2 Install git
&lt;/h4&gt;

&lt;p&gt;We need to install &lt;a href="https://github.com/git-guides/install-git" rel="noopener noreferrer"&gt;Git&lt;/a&gt;, our Version Control Tool which will allow us to run git commands on the terminal to upload our app.&lt;/p&gt;

&lt;h4&gt;
  
  
  III.3 Add a ‘requirements.txt’ file
&lt;/h4&gt;

&lt;p&gt;The cloud platform will need to know which Python packages to install before it can start your app. We will specify them in our &lt;code&gt;requirements.txt&lt;/code&gt; file.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&lt;p&gt;streamlit&lt;br&gt;&lt;br&gt;
pandas&lt;/p&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  III.4 Deploy your App to Streamlit Cloud
&lt;/h3&gt;

&lt;p&gt;We will be using Streamlit to make our website public.&lt;/p&gt;

&lt;h4&gt;
  
  
  III.4.1 Set up a GitHub Account
&lt;/h4&gt;

&lt;p&gt;First, create a GitHub account here: &lt;a href="https://github.com/" rel="noopener noreferrer"&gt;Click here&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  III.4.2 Create a New GitHub Repository
&lt;/h4&gt;

&lt;p&gt;In the upper-right corner of any page, use the drop-down &lt;code&gt;+&lt;/code&gt; menu, and select &lt;code&gt;New repository&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A624%2Fformat%3Awebp%2F1%2A2TDZMiGkMocZJV7UDrXiRA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A624%2Fformat%3Awebp%2F1%2A2TDZMiGkMocZJV7UDrXiRA.png" alt="github"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the name you want under &lt;code&gt;Repository name&lt;/code&gt;, then click &lt;code&gt;Create repository&lt;/code&gt;. For now, there is no need to change any other parameters; we will go with the defaults.&lt;/p&gt;

&lt;h4&gt;
  
  
  III.4.3 Upload Files to your GitHub Repository
&lt;/h4&gt;

&lt;p&gt;Now click on ‘uploading an existing file’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc97y28dzh27ru5s94q3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc97y28dzh27ru5s94q3k.png" alt="tre"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select your &lt;code&gt;app.py&lt;/code&gt; and &lt;code&gt;requirements.txt&lt;/code&gt; files inside the following page.&lt;/p&gt;

&lt;h4&gt;
  
  
  III.4.4 Set up a Streamlit Cloud Account
&lt;/h4&gt;

&lt;p&gt;First, we need to create a Streamlit Cloud account here: &lt;a href="https://streamlit.io/cloud" rel="noopener noreferrer"&gt;Click&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  III.4.5 Create a New App and Link your GitHub Account
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;After logging in, you will see a prominently visible "&lt;strong&gt;New app&lt;/strong&gt;" button. Click it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Also, you will get a prompt asking you to "&lt;strong&gt;Connect to GitHub&lt;/strong&gt;". Click on it and log into your previously created GitHub account to continue.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ysklhh6pdxmjkv1mw5s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ysklhh6pdxmjkv1mw5s.png" alt="here"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  III.4.6 Deploy your App
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;On the next screen, find the &lt;strong&gt;GitHub repository&lt;/strong&gt; that you just created by typing its name into the &lt;code&gt;Repository&lt;/code&gt; field.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the &lt;code&gt;Main file path&lt;/code&gt; field, change the value to &lt;code&gt;app.py&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the 'Deploy!' button.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0r2hjwrgba1vymikedcp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0r2hjwrgba1vymikedcp.png" alt="go"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  III.4.7 Your Public App is Now Live on Streamlit Cloud!
&lt;/h4&gt;

&lt;p&gt;After waiting for some time, your app will appear. Congrats! Here’s mine: &lt;a href="https://share.streamlit.io/apotitech/translatorGPT/main/TranslatorGPT.py" rel="noopener noreferrer"&gt;https://share.streamlit.io/apotitech/translatorGPT/main/TranslatorGPT.py&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwd6q2fjhhic07tkjfoj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwd6q2fjhhic07tkjfoj.png" alt="site"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Congratulations!!!!!! You have created your very first app.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0jvrgi0y3asbqbebqnx.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0jvrgi0y3asbqbebqnx.gif" alt="yay"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Great job creating your very first app. Please go ahead and share your new app to others and let them test your app. &lt;/p&gt;

&lt;p&gt;With that said ----------------&amp;gt; &lt;strong&gt;Happy Building!!&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Thank you for taking the time to read this! If you like the article, please clap (up to 50 times!) and connect with me on &lt;a href="https://www.linkedin.com/in/apotitech-b79097210/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; , &lt;a href="https://dev.to/softwaresennin"&gt;dev.to&lt;/a&gt; and &lt;a href="https://medium.com/@softwaresennin" rel="noopener noreferrer"&gt;Medium&lt;/a&gt; to remain up to speed on my future articles. 😅&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>tutorial</category>
      <category>openai</category>
      <category>beginners</category>
      <category>python</category>
    </item>
    <item>
      <title>LocalStack: Emulate AWS Services for Local Development &amp; Testing</title>
      <dc:creator>Lionel♾️☁️</dc:creator>
      <pubDate>Thu, 24 Aug 2023 07:06:19 +0000</pubDate>
      <link>https://forem.com/aws-builders/localstack-emulate-aws-services-for-local-development-testing-eoj</link>
      <guid>https://forem.com/aws-builders/localstack-emulate-aws-services-for-local-development-testing-eoj</guid>
      <description>&lt;p&gt;It can be time-consuming, difficult, and even dangerous to create and test cloud-based apps in a production setting. This is when the significance of regional growth becomes apparent. Because it takes place on the developer's own system, local development helps keep costs down, makes debugging easier, and shortens development times.&lt;/p&gt;




&lt;h4&gt;
  
  
  -&amp;gt; Introducing LocalStack
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A800%2F0%2A8xFlBRolFUuHGx7F.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A800%2F0%2A8xFlBRolFUuHGx7F.jpg" title="text" alt="text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the best, if not the best, local development tools is &lt;a href="https://localstack.cloud/" rel="noopener noreferrer"&gt;LocalStack&lt;/a&gt;, which emulates &lt;strong&gt;AWS&lt;/strong&gt; (Amazon Web Services). &lt;strong&gt;LocalStack&lt;/strong&gt; creates a fully working local &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS&lt;/a&gt; cloud stack, enabling offline development and testing of our cloud and serverless apps. &lt;/p&gt;

&lt;p&gt;LocalStack is user-friendly and provides a testing environment on your local system that mimics the APIs and behaviors of &lt;strong&gt;AWS&lt;/strong&gt; cloud services. You can now create and test your &lt;strong&gt;AWS&lt;/strong&gt; applications without spending money or requiring access to the internet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A640%2F1%2AIcrOPkGB-71Q5EcbGD9jqQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A640%2F1%2AIcrOPkGB-71Q5EcbGD9jqQ.png" title="text" alt="text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker's Importance
&lt;/h2&gt;

&lt;p&gt;Docker is an essential part of this infrastructure. &lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; is an open-source technology that employs containerized software distribution through virtualization at the operating system level. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2F1%2AZP9cyJ4GkKjs6uhpknP5NQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2F1%2AZP9cyJ4GkKjs6uhpknP5NQ.png" title="text" alt="text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because each container is self-contained and includes all necessary software, libraries, and system utilities, this guarantees not only that the application functions correctly in any setting, but also that everything works locally. &lt;strong&gt;Docker&lt;/strong&gt; provides the containerization that &lt;strong&gt;LocalStack&lt;/strong&gt; uses to simulate the &lt;strong&gt;AWS&lt;/strong&gt; cloud on your local workstation.&lt;/p&gt;




&lt;h4&gt;
  
  
  Getting started
&lt;/h4&gt;

&lt;h2&gt;
  
  
  Initializing Docker
&lt;/h2&gt;

&lt;p&gt;We need to make sure Docker is functional before we can launch &lt;strong&gt;LocalStack&lt;/strong&gt;. Starting up Docker varies from one operating system to another.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;MacOS&lt;/em&gt;&lt;/strong&gt;: Docker can be launched in the background on &lt;strong&gt;MacOS&lt;/strong&gt; by typing &lt;code&gt;open --background -a Docker&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Linux&lt;/em&gt;&lt;/strong&gt;: Docker is typically deployed on &lt;strong&gt;Linux&lt;/strong&gt; as a service. The command &lt;code&gt;sudo service docker start&lt;/code&gt; can be used to start it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Windows&lt;/em&gt;&lt;/strong&gt;: On &lt;strong&gt;Windows&lt;/strong&gt;, Docker can be initialized from the Start menu or with a command like &lt;code&gt;Start-Process -NoNewWindow "C:\Program Files\Docker\Docker\Docker Desktop.exe"&lt;/code&gt; in PowerShell.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's crucial to verify Docker's continued operation after starting it. The command &lt;code&gt;docker system info&lt;/code&gt; can be used for this purpose. If Docker is functioning properly, this command returns information about the Docker system; if Docker is not running, it prints an error instead.&lt;/p&gt;
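&lt;p&gt;This check is easy to script. A small sketch that shells out to &lt;code&gt;docker system info&lt;/code&gt; and reports whether the daemon responded (the command is parameterized only so the function can be exercised without Docker installed):&lt;/p&gt;

```python
import subprocess

def is_docker_running(cmd=("docker", "system", "info")):
    """Return True if the command exits successfully, False otherwise."""
    try:
        return subprocess.run(cmd, capture_output=True).returncode == 0
    except FileNotFoundError:
        return False
```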

&lt;h2&gt;
  
  
  How to Get LocalStack Up and Running
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1200%2F1%2A9lFbRWsRBTyvk1V5RPhW0Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1200%2F1%2A9lFbRWsRBTyvk1V5RPhW0Q.png" title="text" alt="text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;LocalStack&lt;/code&gt; can be installed and launched once Docker is up and running. But the installation procedure is different for each &lt;code&gt;OS&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;On &lt;strong&gt;MacOS&lt;/strong&gt;, LocalStack can be installed with the &lt;code&gt;Homebrew&lt;/code&gt; package manager using &lt;code&gt;brew install localstack&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On &lt;strong&gt;Linux&lt;/strong&gt;, LocalStack can be installed with the &lt;strong&gt;APT&lt;/strong&gt; package manager using &lt;code&gt;sudo apt-get install localstack&lt;/code&gt;. Note that many distributions do not package LocalStack; in that case &lt;code&gt;pip install localstack&lt;/code&gt; is the route documented by the LocalStack project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On &lt;strong&gt;Windows&lt;/strong&gt;, LocalStack can be installed with the &lt;code&gt;Chocolatey&lt;/code&gt; package manager using &lt;code&gt;choco install localstack&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once LocalStack has been installed, you can launch it with the command &lt;code&gt;localstack start&lt;/code&gt;.&lt;/p&gt;
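
&lt;p&gt;Alternatively, if you prefer to manage LocalStack purely through Docker, a minimal &lt;code&gt;docker-compose.yml&lt;/code&gt; such as the following can be used. The image name and edge port follow the LocalStack docs; the &lt;code&gt;SERVICES&lt;/code&gt; list is illustrative, so adjust it to your needs:&lt;/p&gt;

```yaml
# docker-compose.yml - minimal LocalStack sketch
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"          # LocalStack edge port
    environment:
      - SERVICES=s3,sqs      # emulate only the services you need
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```

&lt;p&gt;With this file in place, &lt;code&gt;docker compose up -d&lt;/code&gt; brings LocalStack up without a host-side install.&lt;/p&gt;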

&lt;h2&gt;
  
  
  Using a Shell Script to Fully Automate Everything
&lt;/h2&gt;

&lt;p&gt;To simplify matters, we can write a shell script to &lt;strong&gt;determine the OS&lt;/strong&gt;, &lt;strong&gt;launch Docker&lt;/strong&gt;, &lt;strong&gt;install LocalStack&lt;/strong&gt;, and &lt;strong&gt;start&lt;/strong&gt; it up. Make the script executable and run it with &lt;code&gt;chmod +x localstack.sh &amp;amp;&amp;amp; ./localstack.sh&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="c"&gt;# Function to start Docker and ensure it's running on macOS&lt;/span&gt;
start_docker_mac&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Starting Docker on macOS..."&lt;/span&gt;
    open &lt;span class="nt"&gt;--background&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; Docker
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; docker system info &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null 2&amp;gt;&amp;amp;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;&lt;span class="nb"&gt;sleep &lt;/span&gt;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Docker is running."&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Function to start Docker and ensure it's running on Linux&lt;/span&gt;
start_docker_linux&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Starting Docker on Linux..."&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;service docker start
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; docker system info &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null 2&amp;gt;&amp;amp;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;&lt;span class="nb"&gt;sleep &lt;/span&gt;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Docker is running."&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Function to start Docker and ensure it's running on Windows&lt;/span&gt;
start_docker_windows() {
    echo "Starting Docker on Windows (via Git Bash or similar)..."
    powershell.exe -Command 'Start-Process "C:\Program Files\Docker\Docker\Docker Desktop.exe"'
    while ! docker system info &amp;gt; /dev/null 2&amp;gt;&amp;amp;1; do sleep 1; done
    echo "Docker is running."
}

&lt;span class="c"&gt;# Function to install LocalStack on macOS&lt;/span&gt;
install_mac&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Installing LocalStack on macOS..."&lt;/span&gt;
    brew &lt;span class="nb"&gt;install &lt;/span&gt;localstack
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Function to install LocalStack on Linux&lt;/span&gt;
install_linux&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Installing LocalStack on Linux..."&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;localstack
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Function to install LocalStack on Windows&lt;/span&gt;
install_windows&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Installing LocalStack on Windows..."&lt;/span&gt;
    choco &lt;span class="nb"&gt;install &lt;/span&gt;localstack
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Detect the operating system&lt;/span&gt;
&lt;span class="nv"&gt;OS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nv"&gt;$OS&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt;
  &lt;span class="s1"&gt;'Linux'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    start_docker_linux
    install_linux
    &lt;span class="p"&gt;;;&lt;/span&gt;
  &lt;span class="s1"&gt;'Darwin'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
    start_docker_mac
    install_mac
    &lt;span class="p"&gt;;;&lt;/span&gt;
  &lt;span class="s1"&gt;'WindowsNT'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nv"&gt;OS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'Windows'&lt;/span&gt;
    start_docker_windows
    install_windows
    &lt;span class="p"&gt;;;&lt;/span&gt;
  &lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;;;&lt;/span&gt;
&lt;span class="k"&gt;esac&lt;/span&gt;

&lt;span class="c"&gt;# Start LocalStack&lt;/span&gt;
localstack start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;defines per-OS functions that &lt;strong&gt;start Docker&lt;/strong&gt; and &lt;strong&gt;install LocalStack&lt;/strong&gt;;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;detects the operating system&lt;/strong&gt; and calls the functions relevant to that system;&lt;/li&gt;
&lt;li&gt;finally, &lt;strong&gt;starts LocalStack&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Please be aware that this script assumes Docker and the relevant package manager (&lt;code&gt;Homebrew&lt;/code&gt; for &lt;strong&gt;MacOS&lt;/strong&gt;, &lt;code&gt;APT&lt;/code&gt; for &lt;strong&gt;Linux&lt;/strong&gt;, and &lt;code&gt;Chocolatey&lt;/code&gt; for &lt;strong&gt;Windows&lt;/strong&gt;) are already installed. If they are not, install them before running the script.&lt;/p&gt;
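
&lt;p&gt;If you are unsure which package manager is present, a minimal check can be sketched as follows. The &lt;code&gt;detect_pkg_manager&lt;/code&gt; helper is hypothetical and not part of the script above:&lt;/p&gt;

```shell
#!/bin/bash
# Hypothetical helper: report the first supported package manager found,
# or "none" if no supported manager is installed.
detect_pkg_manager() {
    for pm in brew apt-get choco; do
        if command -v "$pm" >/dev/null; then
            echo "$pm"
            return
        fi
    done
    echo "none"
}

pm=$(detect_pkg_manager)
echo "Detected package manager: $pm"
```

&lt;p&gt;If it prints &lt;code&gt;none&lt;/code&gt;, install the manager for your OS before running the automation script.&lt;/p&gt;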

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The combination of LocalStack and Docker makes a great toolkit for local AWS development. Emulating the Amazon Web Services (AWS) environment locally lets developers improve productivity, cut expenses, and avoid the risks of building software directly against a live cloud environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fapotitech%2Fphotos_document_md%2Fraw%2Fmaster%2Fsrc%2Fcommon%2Fphotos%2Fgiphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fapotitech%2Fphotos_document_md%2Fraw%2Fmaster%2Fsrc%2Fcommon%2Fphotos%2Fgiphy.gif" title="text" alt="text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By automating the process of starting Docker, installing LocalStack, and running it with a straightforward shell script, you can make getting your local AWS cloud environment up and running even simpler, and save time along the way.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>K8S Quickstart &amp; Helm</title>
      <dc:creator>Lionel♾️☁️</dc:creator>
      <pubDate>Mon, 21 Aug 2023 21:16:00 +0000</pubDate>
      <link>https://forem.com/aws-builders/k8s-quickstart-helm-566o</link>
      <guid>https://forem.com/aws-builders/k8s-quickstart-helm-566o</guid>
      <description>&lt;p&gt;Today, Kubernetes becomes a must for DevOps Engineers, SRE and others for orchestrating containers. Once you have a Docker image of your application, you have to code some YAML manifests to define Kubernetes workloads after which, you deploy them with the &lt;a href="https://kubernetes.io/docs/reference/kubectl/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihbcuvqr87l6cz8dbw6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihbcuvqr87l6cz8dbw6g.png" alt="here"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This way of deploying works when you have only one application. Once you have many applications across multiple environments, it becomes overwhelming: you often end up writing YAML files that are 90% identical.&lt;/p&gt;

&lt;p&gt;Here, we are going to focus on how to manage applications smartly with Helm.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is Helm?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; is a package manager for Kubernetes. Helm is an open-source project originally created by &lt;a href="https://deislabs.io/" rel="noopener noreferrer"&gt;DeisLabs&lt;/a&gt; and donated to the &lt;a href="https://www.cncf.io/" rel="noopener noreferrer"&gt;Cloud Native Foundation&lt;/a&gt; (&lt;em&gt;CNCF&lt;/em&gt;). The CNCF now maintains and has graduated the project. This means that it is mature and not just a fad.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgh8fo8tm9v47rk7d5q3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgh8fo8tm9v47rk7d5q3p.png" alt="helm"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Package management is not a new concept in the software industry. On Linux distros, you manage software installation and removal with package managers such as &lt;a href="https://www.redhat.com/sysadmin/how-manage-packages" rel="noopener noreferrer"&gt;YUM/RPM&lt;/a&gt; or &lt;a href="https://ubuntu.com/server/docs/package-management" rel="noopener noreferrer"&gt;APT&lt;/a&gt;. On Windows you can use &lt;a href="https://chocolatey.org/" rel="noopener noreferrer"&gt;Chocolatey&lt;/a&gt;, and on Mac, &lt;a href="https://brew.sh/" rel="noopener noreferrer"&gt;Homebrew&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Helm lets you package and deploy complete applications in Kubernetes. A package is called a “&lt;em&gt;Chart”&lt;/em&gt;. Helm uses a templating system based on &lt;a href="https://pkg.go.dev/html/template" rel="noopener noreferrer"&gt;Go template&lt;/a&gt; to render Kubernetes manifests from charts. A chart is a consistent structure separating templates and values.&lt;/p&gt;
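
&lt;p&gt;As a small sketch of that templating system, a chart template can reference values like this (the field names assume the default &lt;code&gt;helm create&lt;/code&gt; scaffold):&lt;/p&gt;

```yaml
# templates/deployment.yaml (excerpt)
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

&lt;p&gt;At install time, Helm renders the template against &lt;code&gt;values.yaml&lt;/code&gt; to produce a plain Kubernetes manifest.&lt;/p&gt;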

&lt;p&gt;As a package, a chart can also manage dependencies on other charts. For example, if your application needs a MySQL database to work, you can include the MySQL chart as a dependency. When Helm runs at the top level of the chart directory, it installs all the dependencies. A single command renders and releases your application to Kubernetes.&lt;/p&gt;
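
&lt;p&gt;For instance, a MySQL dependency can be declared in &lt;code&gt;Chart.yaml&lt;/code&gt; (Helm 3 syntax; the version and repository shown are illustrative):&lt;/p&gt;

```yaml
# Chart.yaml (excerpt)
dependencies:
  - name: mysql
    version: "9.4.1"
    repository: https://charts.bitnami.com/bitnami
```

&lt;p&gt;Running &lt;code&gt;helm dependency update&lt;/code&gt; then downloads the dependency into the chart's &lt;code&gt;charts/&lt;/code&gt; directory.&lt;/p&gt;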

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6dh6yr5140fdramg64y.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6dh6yr5140fdramg64y.JPG" alt="here"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Helm charts use versions to track changes in your manifests, so you can install a specific chart version for specific infrastructure configurations. Helm keeps a release history of all deployed charts in a dedicated workspace, which makes application updates and rollbacks much easier when something goes wrong.&lt;/p&gt;

&lt;p&gt;Helm also allows you to compress charts. The result is an artifact comparable to a Docker image, which you can push to a remote repository for reusability and sharing.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Are the Benefits of Using Helm?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Helm provides you the ability to install applications with a single command. A chart can contain other charts as dependencies. You can consequently deploy an entire stack with Helm. You can use Helm like &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;docker-compose&lt;/a&gt; but for Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A chart includes templates for various Kubernetes resources to form a complete application. This reduces the microservices complexity and simplifies their management in Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Charts can be compressed and pushed to a remote repository. This creates an application artifact for Kubernetes. You can also fetch and deploy existing Helm charts from repositories. This is a strong point for reusability and sharing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helm maintains a history of deployed release versions in the Helm workspace. When something goes wrong, rolling back to a previous version is simple; Helm also facilitates canary releases for zero-downtime deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helm makes the deployment highly configurable. Applications can be customized on the fly during the deployment. By changing parameters, you can use the same chart for multiple environments such as dev, staging, and production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It streamlines CI/CD pipelines and forwards GitOps best practices.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Quick Look On The Problem Helm Solves
&lt;/h3&gt;

&lt;p&gt;Basic Kubernetes practice is to write YAML manifests by hand. We'll create the minimal YAML files needed to deploy NGINX in Kubernetes.&lt;/p&gt;

&lt;p&gt;Here is the Deployment that will create Pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1  
kind: Deployment  
metadata:  
  name: nginx  
spec:  
  selector:  
    matchLabels:  
      app: nginx  
  replicas: 1  
  template:  
    metadata:  
      labels:  
        app: nginx  
    spec:  
      containers:  
      - name: nginx  
        image: nginx:1.21.6  
        ports:  
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Service exposes NGINX within the cluster. The link with the pod is made via the selector:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1  
kind: Service  
metadata:  
  name: nginx  
spec:  
  selector:  
    app: nginx  
  ports:  
    - protocol: TCP  
      port: 80  
      targetPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kubernetes service for NGINX: service.yaml&lt;/p&gt;

&lt;p&gt;Now we have to create the previous resources with the kubectl command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create -f deployment.yaml  
$ kubectl create -f service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We check all resources are up and running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get deployment -l app=nginx  
NAME    READY   UP-TO-DATE   AVAILABLE   AGE  
nginx   1/1     1            1           8m29s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -l app=nginx                                                                                        
NAME                     READY   STATUS    RESTARTS   AGE  
nginx-65b89996ff-dcfs9   1/1     Running   0          2m26s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get svc -l app=nginx   
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE  
nginx   ClusterIP   10.106.79.171   &amp;lt;none&amp;gt;        80/TCP    4m58s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are several issues with this method:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Specific values in YAML manifests are hardcoded and not reusable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Redundant information to specify such as labels and selectors leads to potential errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kubectl does not handle errors across files after execution; you have to deploy each file one after the other.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There’s no change traceability.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Create a Helm Chart From Scratch
&lt;/h3&gt;

&lt;p&gt;Helm can create the chart structure in a single command line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm create nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Understand the Helm chart structure
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1681327359207%2F13d45dac-8cf2-4c01-9074-9e02740d1ea1.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1681327359207%2F13d45dac-8cf2-4c01-9074-9e02740d1ea1.png%2520align%3D"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Chart.yaml&lt;/code&gt;: A YAML file containing information about the chart.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;charts&lt;/code&gt;: A directory containing any charts on which this chart depends.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;templates&lt;/code&gt;: this is where Helm finds the YAML definitions for your Services, Deployments, and other Kubernetes objects. You can add or replace the generated YAML files for your own.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;templates/NOTES.txt&lt;/code&gt;: This is a templated, plaintext file that gets printed out after the chart is successfully deployed. This is a useful place to briefly describe the next steps for using the chart.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;templates/_helpers.tpl&lt;/code&gt;: That file is the default location for template partials. Files whose name begins with an underscore are assumed to &lt;em&gt;not&lt;/em&gt; have a manifest inside. These files are not rendered to Kubernetes object definitions but are available everywhere within other chart templates for use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;templates/tests&lt;/code&gt;: tests that validate that your chart works as expected when it is installed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;values.yaml&lt;/code&gt;: The default configuration values for this chart&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Customize the templates
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;values.yaml&lt;/code&gt; is loaded automatically by default when deploying the chart. Here we set the image tag to &lt;code&gt;1.21.5&lt;/code&gt;:&lt;/p&gt;
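
&lt;p&gt;The relevant &lt;code&gt;values.yaml&lt;/code&gt; excerpt would look roughly like this (field names follow the default &lt;code&gt;helm create&lt;/code&gt; scaffold):&lt;/p&gt;

```yaml
# values.yaml (excerpt)
image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: "1.21.5"
```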

&lt;p&gt;Please note that you can specify a dedicated &lt;code&gt;values.yaml&lt;/code&gt; file to customize the deployment for environment-specific settings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install The Helm Chart
&lt;/h3&gt;

&lt;p&gt;It is good practice to run the linter before deploying a Helm chart, especially if you made an update:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm lint nginx  
==&amp;gt; Linting nginx  
[INFO] Chart.yaml: icon is recommended
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1 chart(s) linted, 0 chart(s) failed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run Helm to install the chart in dry-run and debug mode to ensure all is ok:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm install --debug --dry-run nginx nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the Helm linter and a dry-run install in debug mode will save you precious time during development.&lt;/p&gt;

&lt;p&gt;To install the chart, remove the &lt;code&gt;--dry-run&lt;/code&gt; flag: &lt;code&gt;helm install nginx nginx&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can see the templated content of the &lt;code&gt;NOTES.txt&lt;/code&gt; explaining how to connect to the application.&lt;/p&gt;

&lt;p&gt;Now, you can retrieve the release in the Helm workspace with &lt;code&gt;helm list&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Upgrade The Helm Release
&lt;/h3&gt;

&lt;p&gt;Imagine you want to upgrade the container image to &lt;code&gt;1.21.6&lt;/code&gt; for testing purposes.&lt;/p&gt;

&lt;p&gt;Instead of creating a new &lt;code&gt;values.yaml&lt;/code&gt;, we'll override the setting from the command line with &lt;code&gt;helm upgrade nginx nginx --set image.tag=1.21.6&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The pod is now using the new container image as well, which you can verify with &lt;code&gt;kubectl describe pod&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The upgrade is visible in the release history shown by &lt;code&gt;helm history nginx&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The change is inspectable with the &lt;code&gt;helm diff&lt;/code&gt; plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm diff revision nginx 1 2  
default, nginx, Deployment (apps) has changed:  
  # Source: nginx/templates/deployment.yaml  
  apiVersion: apps/v1  
  kind: Deployment  
  metadata:  
    name: nginx  
    labels:  
      helm.sh/chart: nginx-0.1.0  
      app.kubernetes.io/name: nginx  
      app.kubernetes.io/instance: nginx  
      app.kubernetes.io/version: "1.0.0"  
      app.kubernetes.io/managed-by: Helm  
  spec:  
    replicas: 1  
    selector:  
      matchLabels:  
        app.kubernetes.io/name: nginx  
        app.kubernetes.io/instance: nginx  
    template:  
      metadata:  
        labels:  
          app.kubernetes.io/name: nginx  
          app.kubernetes.io/instance: nginx  
      spec:  
        serviceAccountName: nginx  
        securityContext:  
          {}  
        containers:  
          - name: nginx  
            securityContext:  
              {}  
-           image: "nginx:1.21.5"  
+           image: "nginx:1.21.6"  
            imagePullPolicy: IfNotPresent  
            ports:  
              - name: http  
                containerPort: 80  
                protocol: TCP  
            livenessProbe:  
              httpGet:  
                path: /  
                port: http  
            readinessProbe:  
              httpGet:  
                path: /  
                port: http  
            resources:  
              {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Rollback The Helm Release
&lt;/h3&gt;

&lt;p&gt;Suppose the upgrade was not conclusive and you want to go back. As Helm keeps all the changes, rolling back is very straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm rollback nginx 1  
Rollback was a success! Happy Helming!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pod is now back to the &lt;code&gt;1.21.5&lt;/code&gt; container image.&lt;/p&gt;

&lt;h3&gt;
  
  
  Uninstall The Helm Chart
&lt;/h3&gt;

&lt;p&gt;Uninstalling a Helm chart is as trivial as the installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm uninstall nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Reuse Existing Helm Charts
&lt;/h3&gt;

&lt;p&gt;Many well-known projects provide Helm charts to make integration more user-friendly. They publish the charts through a repository; you just have to add it on your side:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm repo add bitnami [https://charts.bitnami.com/bitnami](https://charts.bitnami.com/bitnami)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once added, update your local cache to synchronize info with remote repositories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now install the chart on your Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm install nginx bitnami/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Charts are deployed with default values. You can take inspiration from them and specify a custom &lt;code&gt;values.yaml&lt;/code&gt; to match your needs!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm install my-release bitnami/nginx -f values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s all folks. Today we have looked at how to use Helm.&lt;/p&gt;

&lt;p&gt;Please stay tuned and subscribe for more articles and study materials on DevOps, Agile, DevSecOps and App Development.&lt;/p&gt;

&lt;p&gt;If you’d like to learn more about Infrastructure as Code, or other modern technology approaches, please read our other articles.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
    <item>
      <title>🚀 Your Guide to Prometheus Monitoring on Kubernetes with Grafana</title>
      <dc:creator>Lionel♾️☁️</dc:creator>
      <pubDate>Sun, 20 Aug 2023 01:54:06 +0000</pubDate>
      <link>https://forem.com/aws-builders/your-guide-to-prometheus-monitoring-on-kubernetes-with-grafana-gi8</link>
      <guid>https://forem.com/aws-builders/your-guide-to-prometheus-monitoring-on-kubernetes-with-grafana-gi8</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;Hey fam🌟 In the fast-changing tech world of today, keeping an eye on the health of your apps has become the key to a smooth user experience. What do you know? Kubernetes is here to help you handle containers at scale as your trusted helper. But how do you keep track of all these bits of code that are flying around? Here come Prometheus and Grafana, a powerful pair that turns data into superhero insights.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F1%2Al7D8_-9DfVLdrOQKKEpmhg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F1%2Al7D8_-9DfVLdrOQKKEpmhg.jpeg" alt="K8S" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Prometheus 📈
&lt;/h3&gt;

&lt;p&gt;It is your &lt;strong&gt;metrics guru&lt;/strong&gt;: an open-source wizard that not only collects data from your apps and services but also adds some alerting magic. It's like having your own treasure chest full of info.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open source monitoring tool&lt;/li&gt;
&lt;li&gt;Out-of-the-box monitoring capabilities for Kubernetes&lt;/li&gt;
&lt;li&gt;It collects and stores metrics as time-series data, recording information with a timestamp&lt;/li&gt;
&lt;li&gt;Works on a pull model: it collects metrics from targets by scraping their HTTP metrics endpoints.&lt;/li&gt;
&lt;/ul&gt;
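
&lt;p&gt;To make "scraping metrics HTTP endpoints" concrete: a target's &lt;code&gt;/metrics&lt;/code&gt; endpoint returns plain-text samples in the Prometheus exposition format, along the lines of this illustrative counter:&lt;/p&gt;

```text
# HELP http_requests_total Total number of HTTP requests served.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
```

&lt;p&gt;Prometheus records each scraped sample with a timestamp, building the time series described above.&lt;/p&gt;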

&lt;p&gt;Next, meet&lt;/p&gt;

&lt;h3&gt;
  
  
  Grafana 📊
&lt;/h3&gt;

&lt;p&gt;It is your &lt;strong&gt;visual storyteller&lt;/strong&gt;. It uses Prometheus data to create eye-catching visualizations. Think of your data as vibrant graphs that tell the story of your apps' performance and alert you to potential problems before they arise.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open source visualization and analytics software.&lt;/li&gt;
&lt;li&gt;It helps you to query, visualize, alert on, and even explore your metrics.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this guide, we will set up Prometheus to watch your Kubernetes cluster and invite Grafana to the party. Whether you're an experienced Kubernetes user or just starting out, we've got you covered with everything you need to know to set up a rock-solid tracking system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A640%2Fformat%3Awebp%2F1%2Az_1MIlAUXvAhXRPlnmr9PA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A640%2Fformat%3Awebp%2F1%2Az_1MIlAUXvAhXRPlnmr9PA.png" alt="Grafana" width="400" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So our Key Components are&lt;/p&gt;

&lt;h2&gt;
  
  
  Key components:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Prometheus server — Processes and stores all your metrics data&lt;/li&gt;
&lt;li&gt;Alertmanager — Routes and sends alerts to your systems/channels&lt;/li&gt;
&lt;li&gt;Grafana — Visualize all scraped data in your UI&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrm8puna2ohhrydp7h2d.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrm8puna2ohhrydp7h2d.gif" alt="go" width="498" height="280"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Let's Go!!!&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Installing
&lt;/h2&gt;

&lt;p&gt;There are many ways to set up Prometheus and Grafana. Let us look at some of them:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Manual setup: write your own Prometheus and Grafana configuration files and run the components in the correct order.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prometheus Operator: streamlines and automates the administration of your Prometheus monitoring stack in your Kubernetes environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helm chart (recommended): use a Helm chart to set up the Prometheus Operator, which includes Grafana.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Why Helm ?
&lt;/h2&gt;

&lt;p&gt;Helm is a package manager for Kubernetes. In other words, it simplifies the setup and installation of all the components of a deployment with a single command. Helm is recommended because it takes care of all the configuration steps for you, so you will not miss any.&lt;/p&gt;

&lt;p&gt;Helm has three significant benefits that it adds to the process of app deployments to the Kubernetes cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Speed of deployment&lt;/strong&gt; – It reduces an app deployment down to a single command.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Using prebuilt application configurations&lt;/strong&gt; – Whatever configuration your infrastructure needs, someone else has most likely already published a prebuilt chart that you can reuse.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy rollbacks&lt;/strong&gt; – Last but not least, Helm makes it easier for us to upgrade and roll back the versions of our apps.&lt;/li&gt;
&lt;/ul&gt;
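&lt;p&gt;These benefits map onto a handful of everyday Helm commands. As a rough sketch (the release name &lt;code&gt;my-release&lt;/code&gt; and the value override here are just placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# speed: one command deploys the whole stack
helm install my-release prometheus-community/kube-prometheus-stack -n prometheus

# prebuilt configurations: override only the values you care about
helm upgrade my-release prometheus-community/kube-prometheus-stack -n prometheus --set grafana.enabled=true

# easy rollbacks: return to a previous revision with one command
helm rollback my-release 1 -n prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;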

&lt;p&gt;So let us look at our prerequisites&lt;/p&gt;
&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Set up Kubernetes (using kubeadm)&lt;/li&gt;
&lt;li&gt;Install the Helm package manager&lt;/li&gt;
&lt;li&gt;Download the Helm charts for setting up Prometheus and Grafana&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let us Get Started!&lt;/p&gt;
&lt;h2&gt;
  
  
  Creating our Infrastructure
&lt;/h2&gt;

&lt;p&gt;We will be setting up our Kubernetes cluster on AWS, using &lt;strong&gt;Kubeadm&lt;/strong&gt;. I will be using two &lt;strong&gt;t2.medium&lt;/strong&gt; EC2 instances for this installation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2AgZx__pSkPgkstj7arN4-8g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2AgZx__pSkPgkstj7arN4-8g.png" alt="ec2" width="786" height="193"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Installing Kubernetes using Kubeadm
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Step 1
&lt;/h4&gt;

&lt;p&gt;We will &lt;code&gt;ssh&lt;/code&gt; into the 2 VMs (the ones we will use as the &lt;strong&gt;&lt;em&gt;MASTER &amp;amp; WORKER&lt;/em&gt;&lt;/strong&gt; nodes) and run some commands there to configure the environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-get update &lt;span class="nt"&gt;-y&lt;/span&gt;  

apt-get &lt;span class="nb"&gt;install &lt;/span&gt;docker.io &lt;span class="nt"&gt;-y&lt;/span&gt;  

service docker restart  

curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -  

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb http://apt.kubernetes.io/ kubernetes-xenial main"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/etc/apt/sources.list.d/kubernetes.list  

apt-get update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 2
&lt;/h4&gt;

&lt;p&gt;Now we need to configure the &lt;strong&gt;&lt;em&gt;MASTER NODE&lt;/em&gt;&lt;/strong&gt;. So now we will &lt;code&gt;ssh&lt;/code&gt; into it and run some commands&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubeadm init &lt;span class="nt"&gt;--pod-network-cidr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;192.168.0.0/16  

&lt;span class="c"&gt;# If above one fails then run below command  &lt;/span&gt;

kubeadm token create &lt;span class="nt"&gt;--print-join-command&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 3
&lt;/h4&gt;

&lt;p&gt;Let us proceed with our installation in the &lt;strong&gt;MASTER NODE&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube  

&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config  

&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 4
&lt;/h4&gt;

&lt;p&gt;Next, we need to run some &lt;code&gt;kubectl&lt;/code&gt; commands on our Master Node to complete the installation&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/calico.yaml  


kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.49.0/deploy/static/provider/baremetal/deploy.yaml  

&lt;span class="o"&gt;(&lt;/span&gt;after that check &lt;span class="s2"&gt;"kubectl get nodes"&lt;/span&gt; &lt;span class="nb"&gt;command &lt;/span&gt;on master node you can see the worker node configured with master&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is what the cluster looks like now&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2AXJX8v-SfL1Q7TziUEkBpaA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2AXJX8v-SfL1Q7TziUEkBpaA.png" alt="Here" width="786" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Helm
&lt;/h2&gt;

&lt;p&gt;Now we can install Helm on our &lt;strong&gt;master node&lt;/strong&gt;. For that, you can run these commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; get_helm.sh &lt;span class="se"&gt;\ &lt;/span&gt;https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 

&lt;span class="nb"&gt;chmod &lt;/span&gt;700 get_helm.sh 

./get_helm.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2ANeXEDcztPCHmWE5yo_KuUQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2ANeXEDcztPCHmWE5yo_KuUQ.png" alt="pic" width="786" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up our Monitoring Environment
&lt;/h2&gt;

&lt;p&gt;Now that we have installed the &lt;code&gt;Helm Package Manager&lt;/code&gt;, we can add the &lt;strong&gt;Helm stable charts&lt;/strong&gt; repository to our local machine. (This legacy repository is deprecated and optional here; the Prometheus chart we actually use is added from its own repository below.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add stable https://charts.helm.sh/stable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2Ak4h1LAp-3r7qQB3Xq93Kwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2Ak4h1LAp-3r7qQB3Xq93Kwg.png" alt="stable" width="786" height="38"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we will add the &lt;code&gt;prometheus-community&lt;/code&gt; Helm repository to our local machine&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we need to create the &lt;strong&gt;namespace&lt;/strong&gt; where we will install prometheus&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now go ahead and install our kube-prometheus stack. The &lt;code&gt;kube-prometheus-stack&lt;/code&gt; chart will be installed with Helm, with a Grafana deployment embedded.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;stable prometheus-community/kube-prometheus-stack &lt;span class="nt"&gt;--version&lt;/span&gt; 48.3.1 &lt;span class="nt"&gt;-n&lt;/span&gt; prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;-n&lt;/code&gt; is added to specify the namespace where you want the installation to be done.&lt;/p&gt;
&lt;/blockquote&gt;
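&lt;p&gt;As a quick sanity check that the release went through, you can list and inspect it with Helm (the release name &lt;code&gt;stable&lt;/code&gt; matches the install command above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# confirm the release exists in the namespace
helm list -n prometheus

# show the status and notes for the release
helm status stable -n prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;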

&lt;p&gt;Once it is installed, this is the screen we will see next&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2ABrbpVznEG7OUj3VuNVhwuQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2ABrbpVznEG7OUj3VuNVhwuQ.png" alt="prom" width="786" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great Job!!!!! It is successfully installed. Now let us go ahead and check our pod resources&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2A8qNFpqBbDFu7MpTYV9yGLA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2A8qNFpqBbDFu7MpTYV9yGLA.png" alt="kube" width="786" height="161"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2APWpxEWBwvP77cs0xKEn05g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2APWpxEWBwvP77cs0xKEn05g.png" alt="kubee" width="786" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These all show that both Prometheus and Grafana have been successfully installed. &lt;/p&gt;

&lt;h2&gt;
  
  
  Enabling external access to our Infrastructure
&lt;/h2&gt;

&lt;p&gt;When a Kubernetes cluster is created, its services are exposed internally as &lt;strong&gt;ClusterIP&lt;/strong&gt; by default, which is not reachable from outside the cluster. For us to enable external access, we need a &lt;strong&gt;LoadBalancer&lt;/strong&gt; or &lt;strong&gt;NodePort&lt;/strong&gt; service instead.&lt;/p&gt;

&lt;p&gt;For that, we will need to edit the &lt;strong&gt;Prometheus&lt;/strong&gt; and &lt;strong&gt;Grafana&lt;/strong&gt; services. Run the following commands&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl edit svc stable-kube-prometheus-sta-prometheus &lt;span class="nt"&gt;-n&lt;/span&gt; prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2AhN0E6JmMQfg5HdA8YamgVg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2AhN0E6JmMQfg5HdA8YamgVg.png" alt="pro" width="786" height="227"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl edit svc stable-grafana &lt;span class="nt"&gt;-n&lt;/span&gt; prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2A9g2cXPWOFNWoNz0xNAlsDA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2A9g2cXPWOFNWoNz0xNAlsDA.png" alt="gra" width="786" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In each service, change the service type from &lt;code&gt;ClusterIP&lt;/code&gt; to &lt;code&gt;LoadBalancer&lt;/code&gt;. Once you finish editing both services, make sure that you save and close the file -&amp;gt; &lt;code&gt;Escape :wq&lt;/code&gt;&lt;/p&gt;
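&lt;p&gt;If you prefer a non-interactive alternative to &lt;code&gt;kubectl edit&lt;/code&gt;, the same change can be sketched with &lt;code&gt;kubectl patch&lt;/code&gt; (the service names below assume the &lt;code&gt;stable&lt;/code&gt; release name used earlier):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# switch both services to the LoadBalancer type without opening an editor
kubectl patch svc stable-kube-prometheus-sta-prometheus -n prometheus -p '{"spec": {"type": "LoadBalancer"}}'
kubectl patch svc stable-grafana -n prometheus -p '{"spec": {"type": "LoadBalancer"}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;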

&lt;p&gt;Now we should have 2 LoadBalancer services in our cluster. Let us check that&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2AAAMn1MI9iX_eGygTueHqLA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2AAAMn1MI9iX_eGygTueHqLA.png" alt="load" width="786" height="103"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can access Grafana. So copy the Loadbalancer link.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2ADwMN-GZs2FQhgBsnZQ2t3w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2ADwMN-GZs2FQhgBsnZQ2t3w.png" alt="graff" width="709" height="28"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2ASPSxyJPzsHaJch1zYapUSw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2ASPSxyJPzsHaJch1zYapUSw.png" alt="grafana" width="786" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring our Grafana server
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Grafana&lt;/strong&gt; dashboards give you instant visual insights, making complex data easier to understand. They also let you watch in real time and help you make decisions based on data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Discovering the password 🔍
&lt;/h3&gt;

&lt;p&gt;To log in to your Grafana account, we will use the default username and password. The default user is &lt;code&gt;admin&lt;/code&gt;, and the password can be found with the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secrets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see the different secrets, as below&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40pqp1wpjntzqck4n4nz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40pqp1wpjntzqck4n4nz.png" alt="here" width="800" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Mine is &lt;code&gt;stable-grafana&lt;/code&gt;. Replace that with yours.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secret stable-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Please note that &lt;code&gt;stable-grafana&lt;/code&gt; is the name of the grafana secret on my deployment. Yours could be different. &lt;/p&gt;
&lt;/blockquote&gt;
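&lt;p&gt;The pipe in that command simply base64-decodes the secret's &lt;code&gt;admin-password&lt;/code&gt; field. As a tiny standalone illustration of that decoding step (the password shown is just an example, not necessarily yours):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Kubernetes stores Secret values base64-encoded; decoding recovers the plaintext
encoded=$(printf 'prom-operator' | base64)
printf '%s\n' "$encoded"                          # cHJvbS1vcGVyYXRvcg==
printf '%s' "$encoded" | base64 --decode ; echo   # prom-operator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;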

&lt;p&gt;We can create different kinds of dashboards, depending on our needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grafana&lt;/strong&gt; dashboards are very easy to create. Let us go through the steps to build them in your &lt;strong&gt;Grafana&lt;/strong&gt; server.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Creating Kubernetes Monitoring Dashboard
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;+&lt;/strong&gt; button on left panel and select ‘&lt;strong&gt;Import&lt;/strong&gt;’&lt;/li&gt;
&lt;li&gt;Enter &lt;strong&gt;12740&lt;/strong&gt; dashboard id under Grafana.com Dashboard&lt;/li&gt;
&lt;li&gt;Click ‘&lt;strong&gt;Load&lt;/strong&gt;’&lt;/li&gt;
&lt;li&gt;Select ‘&lt;strong&gt;Prometheus&lt;/strong&gt;’ as the &lt;em&gt;endpoint&lt;/em&gt; under &lt;code&gt;Prometheus Data Sources&lt;/code&gt; drop down.&lt;/li&gt;
&lt;li&gt;Click ‘&lt;strong&gt;Import&lt;/strong&gt;’&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This &lt;strong&gt;DASHBOARD&lt;/strong&gt; shows a monitoring view for all the nodes in our cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2AQxuJaS4LbsHZ43rMtg-c3A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2AQxuJaS4LbsHZ43rMtg-c3A.png" alt="graf" width="786" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Creating Kubernetes Cluster Monitoring Dashboard
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;+&lt;/strong&gt; button on left panel and select ‘&lt;strong&gt;Import&lt;/strong&gt;’&lt;/li&gt;
&lt;li&gt;Enter &lt;strong&gt;3119&lt;/strong&gt; dashboard id under Grafana.com Dashboard&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2ABfmsREEqQUpy9a3UY2CwZg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2ABfmsREEqQUpy9a3UY2CwZg.png" alt="ret" width="774" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Click ‘&lt;strong&gt;Load&lt;/strong&gt;’&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select ‘&lt;strong&gt;Prometheus&lt;/strong&gt;’ as the &lt;em&gt;endpoint&lt;/em&gt; under &lt;code&gt;Prometheus Data Sources&lt;/code&gt; drop down.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A750%2Fformat%3Awebp%2F1%2AvVTOvyFESRHjFhfypUsXEA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A750%2Fformat%3Awebp%2F1%2AvVTOvyFESRHjFhfypUsXEA.png" alt="grafff" width="688" height="453"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click ‘&lt;strong&gt;Import&lt;/strong&gt;’&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This &lt;strong&gt;DASHBOARD&lt;/strong&gt; shows a cluster-wide monitoring view for our Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2AOQOsxIWiUx_ys1RYC5Kh9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2AOQOsxIWiUx_ys1RYC5Kh9g.png" alt="frt" width="786" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Creating POD Monitoring Dashboard
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;+&lt;/strong&gt; button on left panel and select ‘&lt;strong&gt;Import&lt;/strong&gt;’&lt;/li&gt;
&lt;li&gt;Enter &lt;strong&gt;6417&lt;/strong&gt; dashboard id under Grafana.com Dashboard&lt;/li&gt;
&lt;li&gt;Click ‘&lt;strong&gt;Load&lt;/strong&gt;’&lt;/li&gt;
&lt;li&gt;Select ‘&lt;strong&gt;Prometheus&lt;/strong&gt;’ as the &lt;em&gt;endpoint&lt;/em&gt; under &lt;code&gt;Prometheus Data Sources&lt;/code&gt; drop down.&lt;/li&gt;
&lt;li&gt;Click ‘&lt;strong&gt;Import&lt;/strong&gt;’&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2A04pzse02PT1njF0r1H3leg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A786%2Fformat%3Awebp%2F1%2A04pzse02PT1njF0r1H3leg.png" alt="pod" width="786" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations!!!!!!!!!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0jvrgi0y3asbqbebqnx.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0jvrgi0y3asbqbebqnx.gif" alt="yay" width="498" height="258"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Your journey to master Prometheus Monitoring on Kubernetes using Grafana is complete! 🚀 You can effortlessly gather, visualize, and analyze metrics from your dynamic Kubernetes system thanks to this hands-on tutorial.&lt;/p&gt;

&lt;p&gt;By combining Prometheus' data collection strength with Grafana's stunning dashboards, you can confidently sail your containerized applications through any seas. From resource utilization to anomaly detection and alert setup, you've learned key skills for monitoring system health and performance.&lt;/p&gt;

&lt;p&gt;Prometheus and Grafana help you optimize, debug, and innovate with Kubernetes and monitoring. Hope your monitoring journey is informative and your applications sparkle! 🌟🌐📊&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker Security: Clair</title>
      <dc:creator>Lionel♾️☁️</dc:creator>
      <pubDate>Tue, 08 Aug 2023 19:30:00 +0000</pubDate>
      <link>https://forem.com/softwaresennin/docker-security-clair-hno</link>
      <guid>https://forem.com/softwaresennin/docker-security-clair-hno</guid>
      <description>&lt;p&gt;Docker offers an amazing solution for packaging applications along with their dependencies. You can find the application itself, its supporting elements like Maven or NPM packages, the base Operating System, and other necessary tools like Java and NodeJS in Docker images. By creating a build pipeline, you can very easily create a Docker image whenever there are code changes. The best part is that docker images contain all the dependencies, allowing you to deploy them on any platform that supports Docker, be it an on-prem Kubernetes setup or a cloud-based platform like &lt;a href="https://aws.amazon.com/ecs/" rel="noopener noreferrer"&gt;AWS ECS&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Image Scanning
&lt;/h2&gt;

&lt;p&gt;Since docker images contain all the required OS files for the application to run, it is very essential to carefully examine all the packages installed in the docker image and be sure that there are no vulnerabilities. Docker's philosophy is creating lightweight and minimal containers that serve a specific purpose. In other words, it is recommended to always create separate images for different applications. This approach perfectly aligns with the microservice ecosystem, where each microservice has its own dedicated docker image.&lt;/p&gt;

&lt;h3&gt;
  
  
  Image Scanning in Registries
&lt;/h3&gt;

&lt;p&gt;The easiest way of scanning docker images is scanning them inside of registries. Both &lt;a href="https://dockerhub.com/" rel="noopener noreferrer"&gt;Dockerhub&lt;/a&gt; and &lt;a href="https://quay.io/" rel="noopener noreferrer"&gt;Quay&lt;/a&gt; offer built-in image scanning capabilities, but there are a few limitations to keep in mind. Currently, DockerHub's scanning feature is only available for private repositories by default. On the other hand, Quay's image security scanning is exclusive to Quay Enterprise.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clair - Open Source Image Scanner
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/coreos/clair/" rel="noopener noreferrer"&gt;Clair&lt;/a&gt;, developed by CoreOS, is a fantastic open source vulnerability scanner specifically designed for docker images. It is capable of gathering vulnerabilities from various vulnerability databases for different operating systems like Debian, Ubuntu, Red Hat, Alpine, and Oracle Linux. The best part is that Clair can be easily obtained as a docker image, enabling one-off scans to be seamlessly integrated into the build pipeline. However, when running Clair for the first time, it needs to download vulnerability information, which can be quite time-consuming, taking around 20-30 minutes. This delay becomes problematic when implementing a continuous integration and continuous deployment (CICD) pipeline.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://cloud.docker.com/repository/docker/arminc/clair-db" rel="noopener noreferrer"&gt;arminc/clair-db&lt;/a&gt; image on Docker Hub solves this. It runs a daily build of the vulnerability database and creates pre-populated database images that are ready to use right away. By utilizing this pre-built database, the need for time-consuming downloads during the initial setup of Clair is eliminated. This makes it incredibly convenient for seamless integration into CICD pipelines.&lt;/p&gt;

&lt;p&gt;To run the pre-built database, simply follow these steps:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; db arminc/clair-db


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Clair requires a &lt;code&gt;config.yaml&lt;/code&gt; file containing configuration such as the DB password to get started. Another Docker image, also from &lt;code&gt;arminc&lt;/code&gt;, solves this by embedding a default config into Clair. The command below can be run to connect it with our database container.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 6060:6060 &lt;span class="nt"&gt;--link&lt;/span&gt; db:postgres &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; clair arminc/clair-local-scan


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  How Clair works
&lt;/h3&gt;

&lt;p&gt;Clair exposes an API to scan the individual docker layers. To run a scan, a Clair client is needed which can do the job. A useful tool for this is &lt;a href="https://github.com/optiopay/klar" rel="noopener noreferrer"&gt;Klar&lt;/a&gt;, a popular CLI client written in Go that can run point-and-shoot scans. However, it needs the image being scanned to already be inside a registry and cannot scan locally stored images. To use image scanning in a CICD pipeline, it is better to run scans locally on an image before pushing it to the registry. Having results available before release lets us choose whether to proceed and push the image, or break the build, based on the number and severity of the vulnerabilities in the docker image. An alternative that can run local scans is &lt;a href="https://github.com/arminc/clair-scanner" rel="noopener noreferrer"&gt;Clair-Scanner&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We can run the &lt;code&gt;clair-scanner&lt;/code&gt; tool by providing it the IP of the docker bridge gateway to run a local scan, using these commands:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# get docker gateway IP&lt;/span&gt;
&lt;span class="nv"&gt;DOCKER_GATEWAY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;docker network inspect bridge &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s2"&gt;"{{range .IPAM.Config}}{{.Gateway}}{{end}}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;# download scanner cli&lt;/span&gt;
wget &lt;span class="nt"&gt;-qO&lt;/span&gt; clair-scanner https://github.com/arminc/clair-scanner/releases/download/v8/clair-scanner_linux_amd64 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chmod&lt;/span&gt; +x clair-scanner
&lt;span class="c"&gt;# run scan on local image - dubu in this case&lt;/span&gt;
./clair-scanner &lt;span class="nt"&gt;--ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_GATEWAY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; dubu:latest


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Running a local scan
&lt;/h3&gt;

&lt;p&gt;To run a local scan on a Docker image before pushing it, you can use this &lt;code&gt;docker-compose.yml&lt;/code&gt; file:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arminc/clair-db&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clairdb&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;

  &lt;span class="na"&gt;clair&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arminc/clair-local-scan&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clairlocal&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;6060:6060"&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Using docker-compose streamlines the process and ensures that the containers (if we create more than one) can communicate seamlessly. Now let's look at a bash script that automates this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;

&lt;span class="c"&gt;# Function to check if a command executed successfully&lt;/span&gt;
check_command&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$?&lt;/span&gt; &lt;span class="nt"&gt;-ne&lt;/span&gt; 0 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Error executing the previous command!"&lt;/span&gt;
        &lt;span class="nb"&gt;exit &lt;/span&gt;1
    &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Start the Clair and database containers&lt;/span&gt;
docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
check_command

&lt;span class="c"&gt;# Give some time for the database to initialize. &lt;/span&gt;
&lt;span class="c"&gt;# Note: It's better to have a health check to confirm when the db is ready, but for simplicity, we use sleep here.&lt;/span&gt;
&lt;span class="nb"&gt;sleep &lt;/span&gt;20

&lt;span class="c"&gt;# Download and prepare the clair-scanner&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; clair-scanner &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;wget &lt;span class="nt"&gt;-qO&lt;/span&gt; clair-scanner https://github.com/arminc/clair-scanner/releases/download/v8/clair-scanner_linux_amd64
    check_command
    &lt;span class="nb"&gt;chmod&lt;/span&gt; +x clair-scanner
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# Get Docker's bridge network gateway&lt;/span&gt;
&lt;span class="nv"&gt;DOCKER_GATEWAY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;docker network inspect bridge &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s2"&gt;"{{range .IPAM.Config}}{{.Gateway}}{{end}}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
check_command

&lt;span class="c"&gt;# Scan the specified Docker image&lt;/span&gt;
./clair-scanner &lt;span class="nt"&gt;--ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_GATEWAY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; dubu:latest
check_command


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Let us break down the bash script
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Function &lt;code&gt;check_command&lt;/code&gt;&lt;/strong&gt;: This function checks the exit status of the last executed command. If the command failed (i.e., didn't return a status of 0), it outputs an error message and exits the script. This provides better error handling throughout your script.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Download Check&lt;/strong&gt;: The script checks if &lt;code&gt;clair-scanner&lt;/code&gt; already exists before attempting to download it. This can save time if you're running the script multiple times.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Health Check Note&lt;/strong&gt;: I've added a note in the script to emphasize that using a health check for the database would be a more robust solution than just waiting a set amount of time with &lt;code&gt;sleep&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Please note that after copying this script, to execute it, you need to make sure it has execute permissions (&lt;code&gt;chmod +x your_script_name.sh&lt;/code&gt;) and then run it (&lt;code&gt;./your_script_name.sh&lt;/code&gt;).&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Conclusion: CICD Integration
&lt;/h2&gt;

&lt;p&gt;The whole plan is to integrate image scanning into our CI/CD pipeline. To do this, you can use the script below as part of your Jenkins pipeline.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight groovy"&gt;&lt;code&gt;

&lt;span class="n"&gt;stage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"docker_scan"&lt;/span&gt;&lt;span class="o"&gt;){&lt;/span&gt;
    &lt;span class="n"&gt;steps&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;script&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// Ensure the containers are not already running (cleanup from a previous run).&lt;/span&gt;
            &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'docker rm -f db || true'&lt;/span&gt;
            &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'docker rm -f clair || true'&lt;/span&gt;

            &lt;span class="c1"&gt;// Start the Clair database and wait for it to initialize.&lt;/span&gt;
            &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'docker run -d --name db arminc/clair-db'&lt;/span&gt;
            &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'sleep 15'&lt;/span&gt; &lt;span class="c1"&gt;// Consider a more robust health check here.&lt;/span&gt;

            &lt;span class="c1"&gt;// Start the Clair container.&lt;/span&gt;
            &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'docker run -p 6060:6060 --link db:postgres -d --name clair arminc/clair-local-scan'&lt;/span&gt;
            &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'sleep 1'&lt;/span&gt;

            &lt;span class="c1"&gt;// Download the clair-scanner if not already present.&lt;/span&gt;
            &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'''
                if [ ! -f clair-scanner ]; then
                    wget -qO clair-scanner https://github.com/arminc/clair-scanner/releases/download/v8/clair-scanner_linux_amd64
                    chmod +x clair-scanner
                fi
            '''&lt;/span&gt;

            &lt;span class="c1"&gt;// Get Docker's bridge network gateway and run the scanner.&lt;/span&gt;
            &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'''
                DOCKER_GATEWAY=$(docker network inspect bridge --format "{{range .IPAM.Config}}{{.Gateway}}{{end}}")
                ./clair-scanner --ip="$DOCKER_GATEWAY" myapp:latest || exit 0
            '''&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;post&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;always&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// Clean up, remove the Clair and database containers.&lt;/span&gt;
            &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'docker rm -f db'&lt;/span&gt;
            &lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="s1"&gt;'docker rm -f clair'&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;clair-scanner&lt;/code&gt; by default will break the build if any issues are found. In this code we ignore the exit code by appending &lt;code&gt;|| exit 0&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
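&lt;p&gt;The effect of that guard can be illustrated with plain shell, using &lt;code&gt;false&lt;/code&gt; as a stand-in for a scanner run that found vulnerabilities:&lt;/p&gt;

```shell
# 'false' stands in for a ./clair-scanner run that exits non-zero on findings.
run_scan() { false; }

# Without the guard, the failure propagates and would fail a pipeline 'sh' step:
status_unguarded=0
run_scan || status_unguarded=$?

# With the guard, '|| exit 0' swallows the failure inside the (sub)shell:
status_guarded=0
( run_scan || exit 0 ) || status_guarded=$?

echo "unguarded=$status_unguarded guarded=$status_guarded"
# prints: unguarded=1 guarded=0
```

Whether swallowing scanner failures is acceptable is a policy choice; a stricter pipeline would let the non-zero exit break the build instead.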

&lt;p&gt;Let us take a look at our Jenkins build&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmarb5un7ebnqjd1orus.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmarb5un7ebnqjd1orus.png" alt="Clair"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>security</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>Jenkins Security: Storing Secrets</title>
      <dc:creator>Lionel♾️☁️</dc:creator>
      <pubDate>Fri, 04 Aug 2023 19:20:00 +0000</pubDate>
      <link>https://forem.com/softwaresennin/jenkins-security-storing-secrets-3pke</link>
      <guid>https://forem.com/softwaresennin/jenkins-security-storing-secrets-3pke</guid>
      <description>&lt;p&gt;Jenkins is by far one of my favorite open source tools. Jenkins is an open source automation that that helps and provides support for building and deploying apps. Jenkins Plugins are what enable it to be customized and extended. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;New to Jenkins? Quickly try it out by running its Docker image with the command&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -p 8080:8080 jenkins/jenkins&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  CI/CD
&lt;/h2&gt;

&lt;p&gt;We cannot talk about &lt;em&gt;Continuous Integration&lt;/em&gt; and &lt;em&gt;Continuous Delivery&lt;/em&gt; (&lt;code&gt;CI/CD&lt;/code&gt;) without talking about automation. Jenkins builds can be configured to run Continuous Integration tasks such as integration testing, as well as Continuous Delivery activities such as building a final jar (a Java app artifact) and pushing it to a repository, or building a Docker image and pushing it to a registry. All of these integrations, however, require credentials to work. Applications also tend to have many other secrets, such as certificates for &lt;a href="https://en.wikipedia.org/wiki/Mutual_authentication" rel="noopener noreferrer"&gt;MASSL&lt;/a&gt; authentication, credentials for upstream systems, databases, and API tokens.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jenkins Plugins for Secret Management
&lt;/h2&gt;

&lt;p&gt;Below are some plugins that allow basic credential management: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;em&gt;&lt;a href="https://wiki.jenkins.io/display/JENKINS/Credentials+Binding+Plugin" rel="noopener noreferrer"&gt;Credentials Binding Plugin&lt;/a&gt;&lt;/em&gt;: These allow credentials to be bound to environment variables for use in build steps.&lt;/li&gt;
&lt;li&gt;  &lt;em&gt;&lt;a href="https://wiki.jenkins.io/display/JENKINS/Credentials+Plugin" rel="noopener noreferrer"&gt;Credentials Plugin&lt;/a&gt;&lt;/em&gt;: On the other hand, this plugin allows you to store your credentials in Jenkins. &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;N.B. These plugins are available in the default installation of Jenkins.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Below are the secret types supported by default:&lt;br&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmalikashish8.github.io%2Fassets%2Fimages%2Fstoring-secrets-in-jenkins%2Fdefault_plugin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmalikashish8.github.io%2Fassets%2Fimages%2Fstoring-secrets-in-jenkins%2Fdefault_plugin.png" title="Default plugin secret options" alt="test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Using secrets in Freestyle Job
&lt;/h3&gt;

&lt;p&gt;For our test, let’s create a Freestyle Jenkins project and use the &lt;code&gt;Credentials&lt;/code&gt; plugins to store our &lt;code&gt;Dockerhub&lt;/code&gt; credentials. As seen below, select &lt;em&gt;Username and password (separated)&lt;/em&gt; and enter bindings for the credentials. These are the environment variables that the username and password will be available as:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmalikashish8.github.io%2Fassets%2Fimages%2Fstoring-secrets-in-jenkins%2Fbindings.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmalikashish8.github.io%2Fassets%2Fimages%2Fstoring-secrets-in-jenkins%2Fbindings.png" title="Enter bindings" alt="test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once that is done, click &lt;em&gt;Add&lt;/em&gt;, select the right &lt;code&gt;scope&lt;/code&gt;, and enter the Dockerhub credentials:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmalikashish8.github.io%2Fassets%2Fimages%2Fstoring-secrets-in-jenkins%2Fdockerhub_credentials.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmalikashish8.github.io%2Fassets%2Fimages%2Fstoring-secrets-in-jenkins%2Fdockerhub_credentials.png" title="Enter Dockerhub credentials" alt="test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now here is an example of how we use our credentials&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmalikashish8.github.io%2Fassets%2Fimages%2Fstoring-secrets-in-jenkins%2Fuse_secrets.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmalikashish8.github.io%2Fassets%2Fimages%2Fstoring-secrets-in-jenkins%2Fuse_secrets.png" title="use credentials" alt="secrets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Credentials in Jenkins Pipeline
&lt;/h3&gt;

&lt;p&gt;Now let us look at how we can use credentials in a Jenkins Pipeline. In the following stage of our &lt;code&gt;Jenkinsfile&lt;/code&gt; we will use &lt;code&gt;artifactory-credentials&lt;/code&gt; to log in to Artifactory and push our Docker images to it. Also note that &lt;code&gt;usernameVariable&lt;/code&gt; and &lt;code&gt;passwordVariable&lt;/code&gt; are only available inside the &lt;code&gt;withCredentials&lt;/code&gt; block:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

    stage("docker_push") {
      withCredentials([usernamePassword(credentialsId: 'artifactory-credentials', 
        passwordVariable: 'ARTIFACTORY_KEY', 
        usernameVariable: 'ARTIFACTORY_USER')]) 
      {
        sh "echo $ARTIFACTORY_KEY | docker login -u $ARTIFACTORY_USER --password-stdin ${REGISTRY_URL}"
        sh "docker tag myapp:latest ${REGISTRY_URL}/myapp:${shortCommit}"
        sh "docker push ${REGISTRY_URL}/myapp:${shortCommit}"
      }
    }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
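&lt;p&gt;In a declarative pipeline, the same binding can alternatively be declared in an &lt;code&gt;environment&lt;/code&gt; block with the &lt;code&gt;credentials()&lt;/code&gt; helper, which automatically exposes &lt;code&gt;_USR&lt;/code&gt; and &lt;code&gt;_PSW&lt;/code&gt; suffixed variables. A minimal sketch; the credential ID &lt;code&gt;dockerhub-credentials&lt;/code&gt; is a placeholder:&lt;/p&gt;

```groovy
pipeline {
    agent any
    environment {
        // For username/password credentials, this also defines
        // DOCKERHUB_CREDS_USR and DOCKERHUB_CREDS_PSW automatically.
        DOCKERHUB_CREDS = credentials('dockerhub-credentials')
    }
    stages {
        stage('push') {
            steps {
                // Single-quoted so the secret is expanded by the shell, not Groovy.
                sh 'echo "$DOCKERHUB_CREDS_PSW" | docker login -u "$DOCKERHUB_CREDS_USR" --password-stdin'
            }
        }
    }
}
```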
&lt;h3&gt;
  
  
  Retrieving Secrets
&lt;/h3&gt;

&lt;p&gt;Jenkins tries to provide some sense of security by masking credentials in logs. It does that by looking for an exact match and replacing it with asterisks (*****). Note, however, that this control can easily be bypassed by a user with edit permission, simply by encoding the credentials in the pipeline and printing them in the logs.&lt;/p&gt;
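&lt;p&gt;Because the mask is an exact string match, any transformation of the secret defeats it. A minimal sketch with a dummy password (&lt;code&gt;admin1234&lt;/code&gt; is just a stand-in for a bound credential):&lt;/p&gt;

```shell
# 'admin1234' stands in for a credential bound to the build.
PASSWORD=admin1234

# Jenkins masks only exact occurrences of the secret, so this line would be starred out:
echo "$PASSWORD"

# ...but an encoded copy no longer matches the secret string and lands in the log as-is:
ENCODED=$(echo "$PASSWORD" | base64)
echo "$ENCODED"   # YWRtaW4xMjM0Cg==
```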

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrwfo5mne0aat0i6hwir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrwfo5mne0aat0i6hwir.png" alt="test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqpz89x6ivyy0ys5awnu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqpz89x6ivyy0ys5awnu.png" alt="test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please note that the first echo results in our password being &lt;code&gt;masked&lt;/code&gt; since it matches our password string.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now let us decode the encoded password and see what it gives us:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

dev@ubu:~ $ echo YWRtaW4xMjM0Cg== | base64 -d
admin1234


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let us explain this. Jenkins credentials are stored on the Jenkins master, &lt;a href="https://jenkins.io/doc/book/using/using-credentials/#credential-security" rel="noopener noreferrer"&gt;encrypted with the Jenkins instance ID&lt;/a&gt;. On a Linux server this file usually lives at &lt;code&gt;/var/lib/jenkins/credentials.xml&lt;/code&gt;. It is therefore very easy for Jenkins administrators to decrypt credentials saved in Jenkins. Moreover, note that any user who can SSH to the Jenkins server can read the encrypted credentials file, since it has read permissions for everyone. Let us look at that:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

dev@ubu:/var/lib/jenkins$ ls -la credentials.xml 
-rw-r--r-- 1 jenkins jenkins 4650 Mar 26 09:28 credentials.xml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
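&lt;p&gt;Those world-readable permissions are worth tightening. The sketch below reproduces the before/after on a scratch file (GNU &lt;code&gt;stat&lt;/code&gt; syntax, as on a Linux Jenkins host); on a real install you would apply the same &lt;code&gt;chmod&lt;/code&gt; to &lt;code&gt;/var/lib/jenkins/credentials.xml&lt;/code&gt; as the jenkins user:&lt;/p&gt;

```shell
# Reproduce the issue on a scratch file: a world-readable credentials store.
f=$(mktemp)
chmod 644 "$f"                 # -rw-r--r--  : any local user can read it
before=$(stat -c '%a' "$f")

# One mitigation is to restrict the file to its owner only:
chmod 600 "$f"                 # -rw-------
after=$(stat -c '%a' "$f")

echo "before=$before after=$after"
# prints: before=644 after=600
```

This does not stop Jenkins admins from decrypting secrets, but it does stop casual reads by other local users.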

&lt;p&gt;This file contains passwords encrypted with the Jenkins instance ID:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&amp;lt;com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl&amp;gt;
    &amp;lt;scope&amp;gt;GLOBAL&amp;lt;/scope&amp;gt;
    &amp;lt;id&amp;gt;artifactory-credentials&amp;lt;/id&amp;gt;
    &amp;lt;description&amp;gt;&amp;lt;/description&amp;gt;
    &amp;lt;username&amp;gt;admin&amp;lt;/username&amp;gt;
    &amp;lt;password&amp;gt;{AQAAABAAAAAQom3LN7ei0wdm9cdOlGOa4GxDHzpndn0BUPeI4biARto=}&amp;lt;/password&amp;gt;
&amp;lt;/com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Jenkins provides a handy script console at the &lt;code&gt;/script&lt;/code&gt; path of its URL that can be used to decrypt passwords with the following script:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

encryptedPassword = '{AQAAABAAAAAQom3LN7ei0wdm9cdOlGOa4GxDHzpndn0BUPeI4biARto=}'
passwd = hudson.util.Secret.decrypt(encryptedPassword)
println(passwd)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4sm55ahy6cd64lf3horo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4sm55ahy6cd64lf3horo.png" alt="test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In conclusion&lt;/strong&gt;, there is clearly no guarantee that secrets entered in Jenkins will not be disclosed. Any user who can use a secret in a build can decode and see it, and any user who can log in to the Jenkins web UI with access to the &lt;code&gt;/script&lt;/code&gt; utility can decrypt all stored secrets.&lt;/p&gt;

&lt;h2&gt;
  
  
  Management Issues with Hardcoding Credentials
&lt;/h2&gt;

&lt;p&gt;As we just saw, the &lt;code&gt;principle of least privilege&lt;/code&gt; cannot be effectively enforced when using only the Jenkins &lt;code&gt;Credentials&lt;/code&gt; plugin. There has to be a better solution for auditable credential sharing within a team, credential rotation, and automatic build provisioning. For that, I would suggest credential management products such as &lt;a href="https://www.vaultproject.io/" rel="noopener noreferrer"&gt;Hashicorp Vault&lt;/a&gt;, &lt;a href="https://www.cyberark.com/products/privileged-account-security-solution/enterprise-password-vault/" rel="noopener noreferrer"&gt;Cyberark Vault&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/secretsmanager/" rel="noopener noreferrer"&gt;AWS Secrets Manager&lt;/a&gt;. Some open source products, like &lt;a href="https://github.com/fugue/credstash" rel="noopener noreferrer"&gt;CredStash&lt;/a&gt; for AWS, also help with credential management.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>tutorial</category>
      <category>productivity</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
