<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Senthil Raja Chermapandian</title>
    <description>The latest articles on Forem by Senthil Raja Chermapandian (@senthilrch).</description>
    <link>https://forem.com/senthilrch</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F684682%2Fc17ab3c4-db6a-457c-afd6-5a4206a5307b.jpg</url>
      <title>Forem: Senthil Raja Chermapandian</title>
      <link>https://forem.com/senthilrch</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/senthilrch"/>
    <language>en</language>
    <item>
      <title>A Gentle Introduction to WebAssembly</title>
      <dc:creator>Senthil Raja Chermapandian</dc:creator>
      <pubDate>Wed, 03 May 2023 07:22:00 +0000</pubDate>
      <link>https://forem.com/kcdchennai/a-gentle-introduction-to-webassembly-1h57</link>
      <guid>https://forem.com/kcdchennai/a-gentle-introduction-to-webassembly-1h57</guid>
      <description>&lt;h2&gt;
  
  
  What is WebAssembly?
&lt;/h2&gt;

&lt;p&gt;WebAssembly provides a secure, fast, and efficient compilation target for a wide range of modern programming languages. WebAssembly’s simplicity as a runtime and sandbox environment lends to various use cases such as containers, blockchain and IoT.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Web needed a new Programming Language
&lt;/h2&gt;

&lt;p&gt;During JavaScript’s earlier years as a scripting language, interactions were limited to dynamic menus, button-clicked responses, and pop-up dialogs. JavaScript as a language experienced great growth in the pursuit of interactivity. &lt;/p&gt;

&lt;p&gt;However, JavaScript suffers in execution speed and performance predictability compared to compiled languages. JavaScript lacked the ability to bring features such as low-level networking, multithreaded code, graphics, and streaming video codecs to browsers. This limitation affected applications where performance is critical, such as video editing, CAD, and AI.&lt;/p&gt;

&lt;p&gt;WebAssembly (wasm) serves to address some of JavaScript’s limitations. Wasm was designed as a compact compilation target to enhance performance for the web. In 2017 when WebAssembly was released, the WebAssembly working group stated,&lt;/p&gt;

&lt;p&gt;“WebAssembly or wasm is a new portable, size- and load-time-efficient format suitable for compilation to the web.”&lt;/p&gt;

&lt;p&gt;WebAssembly is an instruction-set architecture with low-level bytecode that runs on web browsers (and servers). WebAssembly serves as a compilation target for higher-level languages including C++, Rust, C, and Go.&lt;/p&gt;
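As a concrete illustration of WebAssembly as a compilation target (a sketch added here, not part of the original article), an ordinary Go program can be built for the browser with Go's built-in js/wasm target:

```go
// main.go — an ordinary Go program; nothing wasm-specific in the source.
// Compile it to WebAssembly (assumes Go 1.11+):
//
//	GOOS=js GOARCH=wasm go build -o main.wasm main.go
package main

import "fmt"

// message returns the greeting; kept as a function so it is easy to test.
func message() string {
	return "hello from wasm"
}

func main() {
	fmt.Println(message())
}
```

The resulting main.wasm is loaded in a web page through the wasm_exec.js glue script that ships with the Go distribution.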

&lt;h2&gt;
  
  
  Design Goals
&lt;/h2&gt;

&lt;p&gt;WebAssembly was developed by a team called the W3C WebAssembly Community Group. This group included engineers from Mozilla, Google, Apple, Microsoft, and various other organizations. The W3C WebAssembly Community Group solidified key design goals as a foundation for WebAssembly in the project’s infancy. The team prioritized efficiency, portability, legibility, security, and compatibility.&lt;/p&gt;

&lt;p&gt;The most significant of these goals include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimum Viable Product (MVP)&lt;/li&gt;
&lt;li&gt;Performance&lt;/li&gt;
&lt;li&gt;Security&lt;/li&gt;
&lt;li&gt;Independence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;WebAssembly’s unique features inspired adoption for a range of applications. We will dig into notable features of WebAssembly beyond the browser.&lt;/p&gt;

&lt;h2&gt;
  
  
  Minimum Viable Product
&lt;/h2&gt;

&lt;p&gt;Web specifications are generally developed through collaboration amongst multiple parties. Often this results in initial ideas slowly going through specification before implementation. To expedite the process, the initial WebAssembly specification was conceived as a Minimum Viable Product (MVP) with a number of specific use cases targeted for the MVP scope.&lt;/p&gt;

&lt;p&gt;The MVP features met the needs of image processing, audio processing, computer aided design, games, and various other applications that are computationally intensive. The tooling developed around the MVP focused on the migration of existing applications, predominantly those written in C++. The MVP was not designed as a general-purpose web application development platform.&lt;/p&gt;

&lt;p&gt;This focused approach ensured that WebAssembly was quickly released in just a few years. However, the initial release was missing certain features that could make it more broadly useful. More features have been and will continue to be released over time.&lt;/p&gt;

&lt;p&gt;These design decisions have resulted in a technology that is rapidly loaded, decoded, and executed at near-native speed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance
&lt;/h2&gt;

&lt;p&gt;Initial download speed, compilation time, and runtime performance hold parallel importance for web technologies. With a size and load-time efficient binary, WebAssembly optimally utilizes common hardware capabilities to execute at near-native speed.&lt;/p&gt;

&lt;p&gt;Historically, JavaScript design decisions hindered the language’s performance. While JavaScript evolved to mitigate limitations, WebAssembly addresses some issues directly.&lt;/p&gt;

&lt;p&gt;Issues Include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;JavaScript is distributed in text format, while WebAssembly is distributed as a compact binary.&lt;/li&gt;
&lt;li&gt;JavaScript is not strongly typed, so optimization requires complex runtime monitoring. WebAssembly is strongly typed and can be compiled very quickly to the native machine code.&lt;/li&gt;
&lt;li&gt;The WebAssembly binary format is designed to support parallel decoding across multiple threads and streamed instantiation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;WebAssembly’s efficiency applies to host targets beyond the web. The W3C group carefully specified “host” rather than “browser” regarding a host which loads and interoperates with WebAssembly. Modules can be hosted and executed across several environments, including cloud, Internet of Things (IoT), or blockchain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;WebAssembly runs in the browser and from a security perspective, browsers are rife with vulnerabilities. WebAssembly incorporates a few features which help to mitigate opportunities for exploits.&lt;/p&gt;

&lt;p&gt;The WebAssembly runtime adheres to the same security policies as its host environment. Within the web, WebAssembly conforms to the same-origin policy; a page cannot load and execute a malicious WebAssembly module from a different origin.&lt;/p&gt;

&lt;p&gt;WebAssembly modules run within a sandbox and are therefore isolated from the host environment, so all functionality must be imported. This sandbox separates the execution environments of different modules. In addition, the sandbox validates code to minimize data corruption.&lt;/p&gt;

&lt;p&gt;WebAssembly can access linear memory and functions but requires a host for import and export. The lack of I/O significantly reduces the attack surface as the WebAssembly instance is only able to access what is available through the interfaces it is linked with.&lt;/p&gt;

&lt;h2&gt;
  
  
  Independence
&lt;/h2&gt;

&lt;p&gt;Aspects of WebAssembly, such as its low-level binary format, make it independent of hardware, language, and platform, which contributes to its portability and performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hardware Independent:&lt;/strong&gt;&lt;br&gt;
WebAssembly compiles on all modern systems, including desktop and mobile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Language Independent:&lt;/strong&gt;&lt;br&gt;
WebAssembly acts as a compilation target for various languages and continues to innovate by implementing requests. WebAssembly does not promote or favor any language. Coding language communities generate additional tools for utilizing WebAssembly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Platform Independent:&lt;/strong&gt;&lt;br&gt;
WebAssembly’s host can be browser or non-browser. WebAssembly can be embedded in browsers, run as a virtual machine, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  WebAssembly as a non-browser runtime
&lt;/h2&gt;

&lt;p&gt;Initially, WebAssembly targeted specific browser-based use cases, such as migrating performance-intensive applications into the browser. However, WebAssembly retains a platform-agnostic host environment. WebAssembly has been adopted as a non-browser runtime due to a few features:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sandbox Environment:&lt;/strong&gt;&lt;br&gt;
WebAssembly modules provide a layer of security because modules execute within a sandbox environment. The environment increases safety for cloud computing where executing processes share the same underlying physical resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vendor-neutral:&lt;/strong&gt;&lt;br&gt;
WebAssembly is flexible and untied to a particular vendor. This versatility allows for use cases to expand and meet various needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Runtime:&lt;/strong&gt;&lt;br&gt;
Runtime features, such as streaming compilation and simple validation rules, contribute to fast start times and near-native performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory:&lt;/strong&gt;&lt;br&gt;
WebAssembly does not have a specific approach to memory management (e.g., garbage collection) or its own system-level APIs. As a result, the WebAssembly runtime is simple and lightweight.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bytecode Alliance
&lt;/h2&gt;

&lt;p&gt;WebAssembly’s rapid adoption spurred innovation and investment from various organizations. Many projects sought to extend similar capabilities and solutions. In response, the founding organizations Mozilla, Red Hat, Intel, and Fastly formed the Bytecode Alliance to foster collaboration and promote innovation.&lt;/p&gt;

&lt;p&gt;The Bytecode Alliance’s mission is to:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“provide state-of-the-art foundations to develop runtime environments and language toolchains where security, efficiency, and modularity can all coexist across a wide range of devices and architectures.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The Bytecode Alliance initiated a sub-project called the WebAssembly System Interface (&lt;a href="https://github.com/WebAssembly/WASI"&gt;WASI&lt;/a&gt;). WASI is an API that allows WebAssembly access to system features such as files, filesystems, Berkeley sockets, clocks, and random numbers. WASI acts as a system-level interface for WebAssembly, so incorporating a runtime into a host environment and building a platform is easier.&lt;/p&gt;

&lt;p&gt;While WASI can run in the browser, it is not browser-dependent and therefore does not rely on Web APIs or JavaScript compatibility. With WASI, WebAssembly modules can run outside of the browser. Wasmtime, another Bytecode Alliance project, is a WebAssembly runtime that can be used as a host for non-browser use cases.&lt;/p&gt;

&lt;p&gt;WASI maintains WebAssembly’s security advantages by extending WebAssembly’s sandboxed environment to include I/O. The API isolates modules and provides each module with permissions to particular parts of the system.&lt;/p&gt;

&lt;p&gt;WASI use cases include cross-platform applications, code reuse across platforms, and a single runtime for applications. The Bytecode Alliance continues to contribute and accept projects with several roadmap goals for WASI addressed in online documentation on the Bytecode Alliance &lt;a href="https://bytecodealliance.org/"&gt;website&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>GitHub Actions workflow for Go Continuous Integration</title>
      <dc:creator>Senthil Raja Chermapandian</dc:creator>
      <pubDate>Mon, 07 Mar 2022 21:53:29 +0000</pubDate>
      <link>https://forem.com/kcdchennai/github-actions-workflow-for-go-continuous-integration-7b</link>
      <guid>https://forem.com/kcdchennai/github-actions-workflow-for-go-continuous-integration-7b</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Author:&lt;/strong&gt;&lt;/em&gt; &lt;em&gt;Saravanan is a polyglot programmer, technology blogger, DevOps evangelist, and cloud DevOps architect. He is interested in talking about Go, NodeJS, C++, Ruby, Bash, Python, and PowerShell&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;My Workflow&lt;/li&gt;
&lt;li&gt;Submission Category&lt;/li&gt;
&lt;li&gt;Yaml File and Link to Code&lt;/li&gt;
&lt;li&gt;GitHub Action Workflow Run Output&lt;/li&gt;
&lt;li&gt;Additional Info

&lt;ul&gt;
&lt;li&gt;How to Add GitHub Actions workflow&lt;/li&gt;
&lt;li&gt;How to Integrate Slack with GitHub Actions Workflow&lt;/li&gt;
&lt;li&gt;Go Source Code Details&lt;/li&gt;
&lt;li&gt;Go REST API Unit testing&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I've created a GitHub Action that performs the Go continuous integration process on a Go source code repository&lt;/li&gt;
&lt;li&gt;Interested Go developers and DevOps engineers can use this workflow to create &lt;strong&gt;Continuous Integration&lt;/strong&gt; for their GitHub repo&lt;/li&gt;
&lt;li&gt;This workflow was created as a submission for &lt;strong&gt;actionshackathon21&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Workflow Details
&lt;/h2&gt;

&lt;h1&gt;
  
  
  Go Continuous Integration workflow
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;The workflow performs the Continuous Integration process on this repository.&lt;/li&gt;
&lt;li&gt;Whenever a code check-in happens on the &lt;strong&gt;main branch&lt;/strong&gt;, the CI workflow is triggered&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Below are the details of this workflow&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Checks out the code into the workspace&lt;/li&gt;
&lt;li&gt;Installs Go and runs the Go linting process for code review&lt;/li&gt;
&lt;li&gt;Gets the build dependencies&lt;/li&gt;
&lt;li&gt;Builds the code&lt;/li&gt;
&lt;li&gt;Runs the unit tests; on success it moves to the next stage&lt;/li&gt;
&lt;li&gt;Creates a Docker image for the code and pushes it to the &lt;strong&gt;Docker Hub&lt;/strong&gt; registry&lt;/li&gt;
&lt;li&gt;Finally, the workflow posts the status of each step to a &lt;strong&gt;Slack channel&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Here is the link to the GitHub actions workflow &lt;strong&gt;Yaml&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Marketplace GitHub Actions used in this Workflow
&lt;/h2&gt;

&lt;p&gt;I've leveraged existing actions available from &lt;a href="https://github.com/marketplace?type=actions"&gt;GitHub Actions Marketplace&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;actions/checkout@v2&lt;/strong&gt;  - &lt;a href="https://github.com/marketplace/actions/checkout"&gt;Action &lt;/a&gt;that is used to checkout code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;reviewdog/action-golangci-lint@v2&lt;/strong&gt; - Go installation and linting &lt;a href="https://github.com/marketplace/actions/run-golangci-lint-with-reviewdog"&gt;action&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mr-smithers-excellent/docker-build-push@v5&lt;/strong&gt; - Docker build and push &lt;a href="https://github.com/marketplace/actions/docker-build-push-action"&gt;action&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;act10ns/slack@v1&lt;/strong&gt; - Slack GitHub Actions integration &lt;a href="https://github.com/marketplace/actions/slack-github-actions-slack-integration"&gt;action&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Submission Category
&lt;/h2&gt;

&lt;h1&gt;
  
  
  DIY Deployments
&lt;/h1&gt;

&lt;h1&gt;
  
  
  Yaml File and Link to Code
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Actions Workflow &lt;strong&gt;Yaml&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Go_CI_Workflow

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:

  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
    - name: Check out code to Build
      uses: actions/checkout@v2

    - name: Install Go and Run Code Linting
      id: Install-Go
      uses: reviewdog/action-golangci-lint@v2

    - name: Get dependencies to Build
      id: Get-dependencies-to-Build
      run: |
        go get -v -t -d ./...
        if [ -f Gopkg.toml ]; then
            curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
            dep ensure
        fi

    - name: Build Code
      id: Build-Code
      run: |
        go build -v .

    - name: Unit Test
      id: Unit-Test-Run
      run: |
        go test

    - name: Build &amp;amp; push Docker image
      id: Build-and-Push-Docker-Image
      uses: mr-smithers-excellent/docker-build-push@v5
      with:
        image: gsdockit/goapiauth
        tags: latest
        registry: docker.io
        dockerfile: Dockerfile
        username: ${{ secrets.DOCKER_USERNAME }}
        password: ${{ secrets.DOCKER_PASSWORD }}

    - name: Posting Action Workflow updates to Slack
      uses: act10ns/slack@v1
      with: 
        status: ${{ job.status }}
        steps: ${{ toJson(steps) }}
        channel: '#github-updates'
      if: always()
    env:
      SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The full source code is accessible in this &lt;a href="https://github.com/chefgs/dev_go_actions"&gt;GitHub repo&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  GitHub Action Workflow Run Output
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Here is how the workflow runs; it can be viewed from the Actions tab&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JBpbwH2Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/51pbw1hhc7gf9c4j58lk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JBpbwH2Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/51pbw1hhc7gf9c4j58lk.png" alt="Image description" width="663" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Info
&lt;/h2&gt;

&lt;p&gt;Developer - Saravanan G (chefgs)&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Other details that might be helpful to users starting to write GitHub Actions&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Add GitHub Actions workflow
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Our workflow is created in &lt;strong&gt;.github/workflows&lt;/strong&gt; as a &lt;strong&gt;.yaml&lt;/strong&gt; file&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Method 1:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Step 1: Find the &lt;strong&gt;Actions&lt;/strong&gt; tab on the repo&lt;/li&gt;
&lt;li&gt;Step 2: Click on the tab and choose &lt;strong&gt;New Workflow&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Step 3: Choose a workflow from the pre-defined templates&lt;/li&gt;
&lt;li&gt;Step 4: Edit the workflow YAML according to the required stages&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;h2&gt;
  
  
  Method 2:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a workflow yourself from scratch using the &lt;a href="https://docs.github.com/en/actions/quickstart#introduction"&gt;documentation guide&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Enjoy creating the GitHub Workflow&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Integrate Slack with GitHub Actions Workflow
&lt;/h2&gt;

&lt;p&gt;Here are the steps to integrate Slack with GitHub Actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add the &lt;strong&gt;GitHub&lt;/strong&gt; app to the Slack channel&lt;/li&gt;
&lt;li&gt;Add the &lt;strong&gt;incoming webhook&lt;/strong&gt; app, copy the &lt;strong&gt;webhook URL&lt;/strong&gt;, and keep it handy&lt;/li&gt;
&lt;li&gt;Run the below command in the Slack channel to &lt;strong&gt;subscribe&lt;/strong&gt; to repo updates (while adding, it asks to authenticate with GitHub, and we choose the repo)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/github subscribe user/repo

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Add the &lt;strong&gt;incoming webhook URL&lt;/strong&gt; as a &lt;strong&gt;secret&lt;/strong&gt; in the specific repo we want updates from&lt;/li&gt;
&lt;li&gt;Add the below code at the end of the Actions workflow to get job updates in Slack
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    - name: Posting Action Workflow updates to Slack
      uses: act10ns/slack@v1
      with: 
        status: ${{ job.status }}
        steps: ${{ toJson(steps) }}
        channel: '#github-actions-updates'
      if: always()
    env:
      SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Each step in the workflow should have an &lt;strong&gt;id: step-name&lt;/strong&gt; so its status appears in the Slack update&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Now let's look at the code and its functionality.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Source Code Details
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Code Objective
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;We will be creating a REST API that listens on &lt;strong&gt;localhost&lt;/strong&gt; port &lt;strong&gt;1357&lt;/strong&gt; and has API versioning with a URL path parameter.&lt;/li&gt;
&lt;li&gt;Sample URL format we are planning to create, &lt;strong&gt;&lt;a href="http://localhost:1357/api/v1/PersonId/Id456"&gt;http://localhost:1357/api/v1/PersonId/Id456&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Function main
&lt;/h1&gt;

&lt;p&gt;Add the below code to the &lt;strong&gt;main()&lt;/strong&gt; function&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding API Versioning and Basic authentication
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  // Define gin router
  router := gin.Default()

  // Create Sub Router for customized API version and basic auth
  subRouterAuthenticated := router.Group("/api/v1/PersonId", gin.BasicAuth(gin.Accounts{
    "basic_auth_user": "userpass",
  }))

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Passing URL Path Parameters
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  subRouterAuthenticated.GET("/:IdValue", GetMethod)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  REST API Listening on Port
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  listenPort := "1357"
  // Listen and serve on localhost:port
  router.Run(":"+listenPort)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Get Method
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Define a &lt;strong&gt;GetMethod&lt;/strong&gt; function and add the following code&lt;/li&gt;
&lt;li&gt;It fetches and prints the &lt;strong&gt;Person IdValue&lt;/strong&gt; from the URL path parameter passed in the API URL
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func GetMethod(c *gin.Context) {
  fmt.Println("\n'GetMethod' called")
  IdValue := c.Params.ByName("IdValue")
  message := "GetMethod Called With Param: " + IdValue
  c.JSON(http.StatusOK, message)

  // Print the Request Payload in console
  ReqPayload, err := c.GetRawData()
  if err != nil {
        fmt.Println(err)
        return
  }
  fmt.Println("Request Payload Data: ", string(ReqPayload))
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Go REST API Unit testing
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Go testing&lt;/strong&gt; package can be used to create unit tests for Go source code&lt;/li&gt;
&lt;li&gt;The testing code can be found in &lt;strong&gt;api_authtest.go&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Run the command &lt;strong&gt;go test&lt;/strong&gt; to run the tests&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Join us
&lt;/h2&gt;

&lt;p&gt;Register for &lt;em&gt;Kubernetes Community Days Chennai 2022&lt;/em&gt; at &lt;a href="http://www.kcdchennai.in"&gt;kcdchennai.in&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kcdchennai</category>
      <category>github</category>
      <category>devops</category>
      <category>go</category>
    </item>
    <item>
      <title>Create a Multi-Cloud Setup of Kubernetes cluster</title>
      <dc:creator>Senthil Raja Chermapandian</dc:creator>
      <pubDate>Mon, 07 Mar 2022 21:51:42 +0000</pubDate>
      <link>https://forem.com/kcdchennai/create-a-multi-cloud-setup-of-kubernetes-cluster-936</link>
      <guid>https://forem.com/kcdchennai/create-a-multi-cloud-setup-of-kubernetes-cluster-936</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Author:&lt;/strong&gt;&lt;/em&gt; &lt;em&gt;Vrukshali Torawane is pursuing a bachelor's degree in Computer Science and Engineering. She is an intern at the Data on Kubernetes community, and holds certifications in Automation with Ansible and as a Specialist in Containers and Kubernetes. She is a cloud and DevOps enthusiast&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Multi-Cloud Setup of a K8s Cluster:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Launch node in AWS&lt;/li&gt;
&lt;li&gt;Launch node in Azure&lt;/li&gt;
&lt;li&gt;Launch node in GCP&lt;/li&gt;
&lt;li&gt;One node on the cloud should be Master Node&lt;/li&gt;
&lt;li&gt;Then set up multi-node Kubernetes cluster.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To do this task, I have created the following nodes:&lt;br&gt;
a master node on AWS, and worker (slave) nodes on AWS, Azure, and GCP.&lt;/p&gt;
&lt;h2&gt;
  
  
  Let’s start:
&lt;/h2&gt;
&lt;h1&gt;
  
  
  First: Setting up Kubernetes master on AWS:
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0c1m8awsgjc9w7hs15oo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0c1m8awsgjc9w7hs15oo.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-1:&lt;/strong&gt; For installing kubelet, kubeadm, kubectl first, we need to set up a repo for this :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim /etc/yum.repos.d/k8s.repo
# content inside repo k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-2:&lt;/strong&gt; Installing required software :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yum install docker kubelet kubeadm kubectl iproute-tc -y

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-3:&lt;/strong&gt; Starting and enabling services :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl enable --now docker
systemctl enable --now kubelet

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-4:&lt;/strong&gt; We also need to pull the Docker images using kubeadm. This pulls the images for the control-plane components.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubeadm config  images pull

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-5:&lt;/strong&gt; Now, we need to change the docker cgroup driver into systemd&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim /etc/docker/daemon.json
{  
"exec-opts": ["native.cgroupdriver=systemd"]
} 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-6:&lt;/strong&gt; Since we have made changes in docker, we need to restart the docker service :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl restart docker

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-7:&lt;/strong&gt; Setting the bridge netfilter kernel parameter to 1, so bridged traffic passes through iptables :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "1" &amp;gt; /proc/sys/net/bridge/bridge-nf-call-iptables

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-8:&lt;/strong&gt; The important step: while initializing the master (during preflight), we need to associate the control plane with the public IP of the instance, so that nodes on the other clouds can easily connect. For this, use :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint=&amp;lt;PUBLIC_IP&amp;gt;:6443 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-9:&lt;/strong&gt; Now, make a directory for Kube config files and give permission to them :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-10:&lt;/strong&gt; Apply flannel :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-11:&lt;/strong&gt; Final step: Generate the join token so that worker nodes can connect to the master node :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubeadm token create --print-join-command

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Second: Setting up Kubernetes nodes on AWS, Azure, GCP :&lt;/strong&gt;&lt;br&gt;
(Note: Follow same steps in all the three platforms)&lt;/p&gt;

&lt;p&gt;👉🏻 AWS&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7bqnrb7sffwg5jc9atn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7bqnrb7sffwg5jc9atn.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉🏻 Azure&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fylt6p7tylzj3hbhhvp28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fylt6p7tylzj3hbhhvp28.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉🏻 GCP&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvrvmve5f0sw8n1x4id8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvrvmve5f0sw8n1x4id8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-1:&lt;/strong&gt; To install kubelet, kubeadm, and kubectl, we first need to set up a repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim /etc/yum.repos.d/k8s.repo
# content inside repo k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-2:&lt;/strong&gt; Installing required software :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yum install docker kubelet kubeadm kubectl iproute-tc -y

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-3:&lt;/strong&gt; Starting and enabling services :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl enable --now docker
systemctl enable --now kubelet

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-4:&lt;/strong&gt; We also need to pull the container images that kubeadm uses for the control-plane components:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubeadm config  images pull

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-5:&lt;/strong&gt; Now, we need to change the Docker cgroup driver to systemd:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim /etc/docker/daemon.json
{  
"exec-opts": ["native.cgroupdriver=systemd"]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-6:&lt;/strong&gt; Since we have changed the Docker configuration, we need to restart the Docker service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl restart docker

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step-7:&lt;/strong&gt; Enable bridged traffic to pass through iptables by setting the kernel flag to 1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "1" &amp;gt; /proc/sys/net/bridge/bridge-nf-call-iptable

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
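The echo above only sets the flag until the next reboot. A common way to make it persistent (a sketch; the drop-in file name is a convention, not from the original post) is a sysctl config file, applied with sysctl --system:

```
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
```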



&lt;p&gt;&lt;strong&gt;Step-8:&lt;/strong&gt; Copy and run the join command generated on the master node in Step-11.&lt;/p&gt;
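The join command printed by kubeadm on the master has this general shape (MASTER_IP, TOKEN and HASH are placeholders for the values generated for your cluster):

```
kubeadm join MASTER_IP:6443 --token TOKEN --discovery-token-ca-cert-hash sha256:HASH
```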

&lt;h2&gt;
  
  
Finally, on the master node:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see that all the nodes are connected and in the Ready state:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiy5h9tcsx17ryrqzkt2t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiy5h9tcsx17ryrqzkt2t.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Join us
&lt;/h2&gt;

&lt;p&gt;Register for &lt;em&gt;Kubernetes Community Days Chennai 2022&lt;/em&gt; at &lt;a href="http://www.kcdchennai.in" rel="noopener noreferrer"&gt;kcdchennai.in&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kcdchennai</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>multicloud</category>
    </item>
    <item>
      <title>Chatbots for Cloud Native Incident and Change Management</title>
      <dc:creator>Senthil Raja Chermapandian</dc:creator>
      <pubDate>Mon, 07 Mar 2022 21:49:45 +0000</pubDate>
      <link>https://forem.com/kcdchennai/chatbots-for-cloud-native-incident-and-change-management-1j69</link>
      <guid>https://forem.com/kcdchennai/chatbots-for-cloud-native-incident-and-change-management-1j69</guid>
<description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Author:&lt;/strong&gt;&lt;/em&gt; &lt;em&gt;Shiva. Working with one of the next-generation Payroll and IHCM giants. K8s enthusiast | DevOps storyteller | automation and toil-avoidance evangelist | people leader | cricket lover &amp;amp; photographer&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Kubernetes is the de-facto standard for container microservice orchestration. GitOps is a version-control-based approach to build, release engineering &amp;amp; change management. It has given developers full ownership and enables truly agile delivery. &lt;/p&gt;

&lt;p&gt;Many organizations are still searching for efficient and productive ways for developers to access Kubernetes workloads in production environments. While many are finding innovative ways, this blog is about one possible approach that is tried and tested, of course with a custom development effort.&lt;/p&gt;

&lt;p&gt;Self-servicing Kubernetes workloads, especially during critical incident and change management phases, with faster turnaround is an interesting problem to solve. Tooling around Kubernetes has evolved and has brought two ways of self-service: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;UI Based Self Service
&lt;/li&gt;
&lt;li&gt;Chatbot based Self service
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both of course with a solid RBAC system backing them. &lt;/p&gt;

&lt;p&gt;In this blog, I will be discussing the latter (using chatbots for Kubernetes), since the former has standard tooling in the market (Rancher GUI with ID and Auth management being one of my favorites). &lt;br&gt;
Before I put through the problem statement, refer to the diagram below showing the sequence from incident escalation through fix and de-escalation. The diagram shows the possible areas of efficiency improvement targeted in this blog. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ir00MBkg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e4xgej94slah55wukao4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ir00MBkg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e4xgej94slah55wukao4.png" alt="Image description" width="480" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem statement:
&lt;/h2&gt;

&lt;p&gt;When we had a bunch of Kubernetes specialists doing incident and change management, the questions before us were:&lt;br&gt;&lt;br&gt;
● Can we provide tooling for Ops to be quick on incident resolution steps – hence reducing the Mean Time to Recover (MTTR)?&lt;br&gt;&lt;br&gt;
● Can we provide a single window for all incident management actions – can it be a chat room that promotes visibility and empowers quicker service for Ops? &lt;br&gt;
We chose a chatbot built on the errbot framework! Yes – you got it right, a Python-powered bot, with Python being the darling of many DevOps &amp;amp; SRE (Site Reliability Engineering) professionals. At least one third of the issues that landed as critical incidents had a simple recipe for a fix:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Micro service rollback (and /or) &lt;/li&gt;
&lt;li&gt;Micro service restart (and/or) &lt;/li&gt;
&lt;li&gt;Micro service roll forwards &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While we let the best Kubernetes experts architect a microservice well for the Kubernetes cluster, and the best release engineers form a CI (Continuous Integration) / CD (Continuous Deployment) strategy for Kubernetes workloads, we decided to build a simple bot service that can perform all the above incident-fix functions in native Kubernetes ways, using a Python Kubernetes operator (e.g. &lt;a href="https://kopf.readthedocs.io/en/stable/"&gt;https://kopf.readthedocs.io/en/stable/&lt;/a&gt;).  &lt;/p&gt;

&lt;h2&gt;
  
  
  One Window for All Operations Approach – Chat Rooms for Swift actions
&lt;/h2&gt;

&lt;p&gt;With this approach we had SREs working from chat rooms, using chatbots to restart, roll back or promote services in minutes through a single-window interface! &lt;br&gt;
Imagine the SRE working through K8s command lines and CI/CD tool interfaces vs. using a chatbot – &lt;strong&gt;Voila! Valuable minutes of MTTR saved!&lt;/strong&gt;&lt;br&gt;
Some added reinforcements were made at the process level to keep the source of truth at the release engineering and version control level, according to the organization's needs, to ensure the sanctity of actions in an incident. &lt;/p&gt;

&lt;h2&gt;
  
  
  Taking it beyond SREs and DevOps - Are we empowering engineering community?
&lt;/h2&gt;

&lt;p&gt;Empowering the engineering community was the next big question we had to answer – the more the engineering community depends on SRE, the more demanding and toil-heavy the SRE role becomes!&lt;br&gt;&lt;br&gt;
Think of this sequence: an incident lands on a microservice team's alerting service (e.g., PagerDuty), then gets transferred to SRE – just for a K8s restart, rollback or promotion across environments – losing valuable time to recover while following the incident call-transfer procedures!&lt;br&gt;&lt;br&gt;
What if we empowered the microservice teams to use self-serviceable chatbots? That was our way forward. &lt;/p&gt;

&lt;p&gt;For our way forward, we wanted to round out the bot's capability beyond Kubernetes operations: a bot that is reliable, performs the exact same process each time, and caters to all business use cases in incident management – in other words, a responsible bot! &lt;/p&gt;

&lt;h2&gt;
  
  
  A responsible bot – a process guide, a toil breaker, and a swiftness enhancer
&lt;/h2&gt;

&lt;p&gt;When we had to design a responsible bot, we had the following features to build.&lt;br&gt;&lt;br&gt;
The bot should: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validate the need for a self-service action
&lt;/li&gt;
&lt;li&gt;Enable the right user to have the right access (least access for effective incident mitigation) &lt;/li&gt;
&lt;li&gt;Track and trace all actions of self service (leading us to questions to be answered on a well-designed environment architecture) &lt;/li&gt;
&lt;/ul&gt;
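As a rough illustration of the second point, the bot-side RBAC check reduces to a policy lookup before any chat command executes. This is a minimal stdlib sketch under my own assumptions – the policy shape and names are hypothetical, not the authors' implementation (which used a Casbin-style RBAC API):

```python
# Hypothetical policy: (user, cluster, namespace) -> set of allowed actions.
POLICY = {
    ("alice", "prod", "payments"): {"restart", "rollback"},
    ("bob", "staging", "payments"): {"restart", "rollback", "promote"},
}

def is_allowed(user, cluster, namespace, action):
    """Least-access check: deny unless the exact scope grants the action."""
    return action in POLICY.get((user, cluster, namespace), set())

# The bot would call this before executing any chat command, e.g.
# is_allowed("alice", "prod", "payments", "restart") is True, while
# "promote" in prod would be denied for alice.
```

In the real bot, the policy source and the audit trail (Jira, ELK) would back this check rather than an in-memory dict.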

&lt;h2&gt;
  
  
  How does the bot work?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The responsible bot
&lt;/h3&gt;

&lt;p&gt;● Takes a mandatory justification &amp;amp; confirms it – for prod-like environments, an observability monitor or alert link (e.g., a PagerDuty alert link), or a ticket in the proper state (e.g., a business-approved Jira ticket) for cases that are well detected but not well alerted. &lt;br&gt;
● Has granular access control per cluster, namespace, chat room and even per action, on who can perform what, via a bot RBAC feature (e.g. &lt;a href="https://casbin.org/docs/en/rbac-api"&gt;https://casbin.org/docs/en/rbac-api&lt;/a&gt;) &lt;br&gt;
● Can track and trace every action through logs (e.g., ELK logs) and incident action traces (e.g., logging the action trace on Jira tickets with the action owner)&lt;br&gt;&lt;br&gt;
Do you agree that this is a responsible bot indeed? If not, look at the value statement below. &lt;br&gt;
The bot aims at:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;shorter incident mean time to recovery &lt;/li&gt;
&lt;li&gt;cutting coordination time between Dev &amp;amp; Ops for production-like environments &lt;/li&gt;
&lt;li&gt;empowering the engineering community to take informed Kubernetes actions, even in production, without Ops dependencies &lt;/li&gt;
&lt;li&gt;tracking actions and recording them to improve the state of microservice deployments if necessary. &lt;/li&gt;
&lt;li&gt;Fully self-serviceable!
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MTTRs can shrink from a few tens of minutes to a few minutes after receiving an incident alert! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NXi4Ymr0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yf7pqey0mpjdjtke4wcx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NXi4Ymr0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yf7pqey0mpjdjtke4wcx.png" alt="Image description" width="880" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Seeding into Self Service Changes for the needy:
&lt;/h3&gt;

&lt;p&gt;Organizations face the challenge of continuous releases to production-like environments – they sometimes depend on a release management team because a last-mile human validation is needed. A responsible bot now has the capability to replace release management personnel if tuned the right way!&lt;br&gt;&lt;br&gt;
Empowering development and test teams to do responsible, self-serviceable releases – that is a case and a space for a chatbot to enable faster change management execution! What do you think? Do you have this problem, or are you fully on continuous releases to production? &lt;/p&gt;

&lt;h2&gt;
  
  
  Join us
&lt;/h2&gt;

&lt;p&gt;Register for &lt;em&gt;Kubernetes Community Days Chennai 2022&lt;/em&gt; at &lt;a href="http://www.kcdchennai.in"&gt;kcdchennai.in&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kcdchennai</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Container Runtime Interface (CRI), Docker deprecation &amp; Dockershim</title>
      <dc:creator>Senthil Raja Chermapandian</dc:creator>
      <pubDate>Mon, 07 Mar 2022 21:48:20 +0000</pubDate>
      <link>https://forem.com/kcdchennai/container-runtime-interface-cri-docker-deprecation-dockershim-5178</link>
      <guid>https://forem.com/kcdchennai/container-runtime-interface-cri-docker-deprecation-dockershim-5178</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Author:&lt;/strong&gt;&lt;/em&gt; &lt;em&gt;Jothimani Radhakrishnan (Lifion by ADP). A Software Product Engineer, Cloud enthusiast | Blogger | DevOps | SRE | Python Developer. I usually automate my day-to-day stuff and Blog my experience on challenging items.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;Hey Docker lovers &amp;lt;3, this is not going to be a happy story for you. It wasn't for me either. Like everyone, I loved using Docker very much, and when I got to know about this change (the dockershim removal), it was a heartbreaking moment for me. &lt;/p&gt;

&lt;p&gt;Before knowing about Dockershim let us discuss the following.&lt;/p&gt;

&lt;p&gt;Kubelet takes care of managing worker nodes in relation to the master node. It ensures that the specified containers for the pod are up and running. &lt;/p&gt;

&lt;p&gt;To know more about kubelet: &lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/"&gt;https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Container runtime:
&lt;/h2&gt;

&lt;p&gt;Container runtime is a software that is responsible for running containers. To create a pod, kubelet needs a container runtime environment. For a long time, Kubernetes used Docker as its default container runtime. &lt;/p&gt;

&lt;p&gt;This creates a problem/dependency: whenever Docker releases updates/upgrades, it can break Kubernetes. &lt;/p&gt;

&lt;p&gt;There are several container runtimes available:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;containerd&lt;/li&gt;
&lt;li&gt;CRI-O&lt;/li&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;Rocket&lt;/li&gt;
&lt;li&gt;LXD&lt;/li&gt;
&lt;li&gt;OpenVZ
&lt;/li&gt;
&lt;li&gt;Windows Server Containers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Okay! Coming back to the context of this blog: Docker is going to be deprecated as the Kubernetes default, and containerd is going to take its place.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Dockershim?
&lt;/h2&gt;

&lt;p&gt;Docker existed as the default engine in k8s; after introducing CRI (Container Runtime Interface) support in k8s, Kubernetes created an adaptor component called dockershim.&lt;/p&gt;

&lt;p&gt;The dockershim adapter allows the kubelet to interact with Docker as if Docker were a CRI compatible runtime.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---D7BKH_4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uagynqvmgl0o6ajmk942.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---D7BKH_4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uagynqvmgl0o6ajmk942.png" alt="Image description" width="601" height="163"&gt;&lt;/a&gt;&lt;br&gt;
Img src: Kubernetes documentation&lt;br&gt;
Switching to Containerd as a container runtime eliminates the middleman&lt;/p&gt;

&lt;h2&gt;
  
  
  Points to Check to make sure your environment is not affected by this change.
&lt;/h2&gt;

&lt;p&gt;You can continue to use Docker to build images; building images with Docker is not considered a dependency on dockershim.&lt;/p&gt;

&lt;p&gt;All these below pointers should be considered and updated as per your native CRI.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure to update any Docker command operations you run on worker nodes or inside pods. For example, listing running containers on a worker node using docker ps might not work after the deprecation (crictl ps is the CRI-native equivalent).&lt;/li&gt;
&lt;li&gt;Check for any private registries or image mirror settings in the Docker configuration file (like /etc/docker/daemon.json)&lt;/li&gt;
&lt;li&gt;Any scripts that SSH into worker nodes and perform Docker CRUD operations.&lt;/li&gt;
&lt;li&gt;Any third-party tools using Docker need to be updated (e.g., telemetry agents that read from the Docker daemon).&lt;/li&gt;
&lt;li&gt;Any alerts that are configured based on Docker specific errors should be updated.&lt;/li&gt;
&lt;li&gt;Any automation or bootstrap scripts based on Docker commands should be updated.&lt;/li&gt;
&lt;/ul&gt;
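For the registry/mirror check above, the relevant Docker settings typically live in /etc/docker/daemon.json; a fragment along these lines (hostnames are illustrative) is what you would need to re-create in your new runtime's configuration:

```json
{
  "registry-mirrors": ["https://mirror.example.com"],
  "insecure-registries": ["registry.internal:5000"]
}
```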

&lt;p&gt;This list is not exhaustive and might vary based on your usage.&lt;/p&gt;

&lt;p&gt;To know more about the deprecation FAQ: &lt;a href="https://kubernetes.io/blog/2020/12/02/dockershim-faq/"&gt;https://kubernetes.io/blog/2020/12/02/dockershim-faq/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Thank you, &lt;/p&gt;

&lt;p&gt;Happy containerd! :p &lt;/p&gt;

&lt;h2&gt;
  
  
  Reference:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://developer.ibm.com/blogs/kube-cri-overview/"&gt;https://developer.ibm.com/blogs/kube-cri-overview/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/"&gt;https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/"&gt;https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Join us
&lt;/h2&gt;

&lt;p&gt;Register for &lt;em&gt;Kubernetes Community Days Chennai 2022&lt;/em&gt; at &lt;a href="http://www.kcdchennai.in"&gt;kcdchennai.in&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kcdchennai</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Deploying squid 🦑 in k3s on an RPI4B</title>
      <dc:creator>Senthil Raja Chermapandian</dc:creator>
      <pubDate>Mon, 07 Mar 2022 21:46:42 +0000</pubDate>
      <link>https://forem.com/kcdchennai/deploying-squid-in-k3s-on-an-rpi4b-jb0</link>
      <guid>https://forem.com/kcdchennai/deploying-squid-in-k3s-on-an-rpi4b-jb0</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Author:&lt;/strong&gt;&lt;/em&gt; &lt;em&gt;Leon. A Devops Engineer. Loves Contributing to Opensource. Python and Golang Developer.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is k3s?
&lt;/h2&gt;

&lt;p&gt;K3s is a lightweight Kubernetes distribution by Rancher that is easy to install and ships as a single binary under 100 MB. One can read more about it &lt;a href="https://rancher.com/docs/k3s/latest/en/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
Installing K3s
&lt;/h2&gt;

&lt;p&gt;K3s provides an installation script that is a convenient way to install it as a service on systemd or openrc based systems. This script is available at &lt;a href="https://get.k3s.io"&gt;https://get.k3s.io&lt;/a&gt;. To install K3s using this method, just run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sfL https://get.k3s.io | sh -

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After running this installation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The K3s service will be configured to automatically restart after node reboots or if the process crashes or is killed&lt;/li&gt;
&lt;li&gt;Additional utilities will be installed, including kubectl, crictl, ctr, k3s-killall.sh, and k3s-uninstall.sh&lt;/li&gt;
&lt;li&gt;A kubeconfig file will be written to /etc/rancher/k3s/k3s.yaml and the kubectl installed by K3s will automatically use it&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Why Squid?
&lt;/h2&gt;

&lt;p&gt;Well I've been using Squid Proxy on my Raspberry Pi 2B for quite some time and I really wanted to get my hands dirty with Kubernetes.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is Squid?
&lt;/h2&gt;

&lt;p&gt;Taken from the &lt;a href="https://wiki.archlinux.org/title/Squid"&gt;ArchWiki&lt;/a&gt; Squid is a caching proxy for HTTP, HTTPS and FTP, providing extensive access controls.&lt;/p&gt;
&lt;h2&gt;
  
  
  Let's begin
&lt;/h2&gt;

&lt;p&gt;I couldn't find a Squid proxy container image for my Raspberry Pi, so I had to roll my own Containerfile.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
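The gist embed does not survive in this feed; a minimal Containerfile along these lines would work (the base image and Squid flags are my assumptions, not the author's exact file):

```dockerfile
FROM docker.io/library/alpine:latest
RUN apk add --no-cache squid
EXPOSE 3128
# -N keeps squid in the foreground so the container doesn't exit
CMD ["squid", "-N"]
```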



&lt;p&gt;Now we can build this image with a version tag. This is important if you wish to roll out new changes to your application: having the latest tag is good, but having a version tag in your deployment is better – this is what I've observed, and it's also a deployment strategy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;podman build squid-proxy:v1 .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The problem now is: how do I get Kubernetes to recognize this image, since it's not on Docker Hub? I could deploy it to Docker Hub too, but I didn't; instead, I spun up a Docker registry container.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;podman run -d -p 5000:5000 --restart on-failure --name registry registry:2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then I ran the following&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;podman build -t localhost:5000/squid-proxy:v1 .
podman push --tls-verify=false localhost:5000/squid-proxy:v1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Note: &lt;strong&gt;--tls-verify=false&lt;/strong&gt; is required because we don't have TLS, and the image push would fail otherwise; since this is for learning purposes, I've done it without TLS.&lt;/p&gt;
&lt;h2&gt;
  
  
  Deploying Squid🦑 to Kubernetes.
&lt;/h2&gt;

&lt;p&gt;I'm still pretty new to Kubernetes. The basic unit of Kubernetes is a pod, which can hold multiple containers within it. While I could deploy this as a pod directly, I didn't; Kubernetes will do that for me via the deployment file.&lt;/p&gt;

&lt;p&gt;If you want to get an overview of how Kubernetes works you can watch this video by &lt;a class="mentioned-user" href="https://dev.to/techworld_with_nana"&gt;@techworld_with_nana&lt;/a&gt;, &lt;a href="https://www.youtube.com/watch?v=X48VuDVv0do"&gt;Kubernetes Course&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I decided to go this way: create a LoadBalancer Service and a Deployment that would deploy Squid to Kubernetes.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;There is a &lt;strong&gt;volume&lt;/strong&gt; that is shared by both containers, since Squid doesn't send logs to &lt;strong&gt;STDOUT&lt;/strong&gt;&lt;/p&gt;
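The deployment manifest was embedded as a gist, which does not render in this feed. A minimal sketch of what such a Deployment and LoadBalancer Service could look like, under my assumptions (image name from the podman push earlier; a squid container plus a log-tailing sidecar sharing an emptyDir volume):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: squid-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: squid
  template:
    metadata:
      labels:
        app: squid
    spec:
      containers:
      - name: squid
        image: localhost:5000/squid-proxy:v1
        ports:
        - containerPort: 3128
        volumeMounts:
        - name: logs
          mountPath: /var/log/squid
      - name: tailer
        image: docker.io/library/busybox
        command: ["sh", "-c", "tail -F /var/log/squid/access.log"]
        volumeMounts:
        - name: logs
          mountPath: /var/log/squid
      volumes:
      - name: logs
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: squid-service
spec:
  type: LoadBalancer
  selector:
    app: squid
  ports:
  - port: 3128
    targetPort: 3128
```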

&lt;p&gt;To run this you can simply do&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f squid-deployment.yml
deployment.apps/squid-dployment unchanged
service/squid-service unchanged

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a Service, and the Deployment will create a pod&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get service
$ kubectl get service

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get svc
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
kubernetes      ClusterIP      10.43.0.1       &amp;lt;none&amp;gt;           443/TCP          48d
squid-service   LoadBalancer   10.43.114.127   192.168.31.151   3128:32729/TCP   3d14h

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To get the deployments&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get deploy
$ kubectl get deployment

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which will show the following&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME              READY   UP-TO-DATE   AVAILABLE   AGE
squid-deployment   1/1     1            1           7d19h

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last thing is to get the pods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get po OR kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
squid-dployment-86bfc8d664-swwzj   2/2     Running   0          4d14h
svclb-squid-service-54nrc          1/1     Running   0          4d1h

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once everything is up and running you can see the following in the curl headers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a7Bb8Cjs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g5urbc87v1v7sz61txmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a7Bb8Cjs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g5urbc87v1v7sz61txmr.png" alt="Image description" width="880" height="187"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To see the logs and follow them you can do the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs service/squid-service -c tailer -f

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's all.&lt;/p&gt;

&lt;p&gt;This is a simple and easy deployment using Kubernetes. If I've made any mistakes or you would like to suggest any changes, please drop a comment below &amp;lt;3&lt;/p&gt;

&lt;h2&gt;
  
  
  Join us
&lt;/h2&gt;

&lt;p&gt;Register for &lt;em&gt;Kubernetes Community Days Chennai 2022&lt;/em&gt; at &lt;a href="http://www.kcdchennai.in"&gt;kcdchennai.in&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kcdchennai</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>docker</category>
    </item>
    <item>
      <title>Optimising Python workloads for Kubernetes</title>
      <dc:creator>Senthil Raja Chermapandian</dc:creator>
      <pubDate>Mon, 07 Mar 2022 21:45:15 +0000</pubDate>
      <link>https://forem.com/kcdchennai/optimising-python-workloads-for-kubernetes-1d6c</link>
      <guid>https://forem.com/kcdchennai/optimising-python-workloads-for-kubernetes-1d6c</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Author:&lt;/strong&gt;&lt;/em&gt; &lt;em&gt;Jothimani Radhakrishnan (Lifion by ADP). A Software Product Engineer, Cloud enthusiast | Blogger | DevOps | SRE | Python Developer. I usually automate my day-to-day stuff and Blog my experience on challenging items.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;This blog is about how I shrank a loop's processing time in Python from 30s to 5s (a 6x speedup).&lt;/p&gt;

&lt;p&gt;I had a use case where a Python script calls the Jira API, scrapes data, does some calculations, and creates a tuple. On average, it took approx. 1s per loop; say there are 30 tickets – that's 30 loops and 30s.  &lt;/p&gt;

&lt;p&gt;To make my script more powerful, I had to understand concurrency, and that's when I explored this crucial concept in Python – and the idea for this blog popped up :) &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A quick tip:&lt;/strong&gt; You should use threading if your program is network-bound or multiprocessing if it is CPU-bound.&lt;/p&gt;

&lt;p&gt;Let’s briefly catch up about Multi-processing vs Multi-threading vs Asyncio&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-processing
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1jr867qp0r8wjf8s3zfp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1jr867qp0r8wjf8s3zfp.png" alt="Multiprocessing"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Best for computation-heavy workloads: a multi-processing Python program can fully utilize all available CPU cores and native threads.&lt;/li&gt;
&lt;li&gt;Each Python process is independent of the others, and they don’t share memory. &lt;/li&gt;
&lt;li&gt;Performing collaborative tasks in Python using multiprocessing requires inter-process communication APIs provided by the operating system.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Multi-threading
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpm66s3nstsv3gva9mi2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpm66s3nstsv3gva9mi2o.png" alt="Multithreading"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All threads share the same memory, i.e., the program runs as a single process with multiple threads, which is good for I/O-bound workloads.&lt;/li&gt;
&lt;li&gt;Caveat: pre-emption – the scheduler has full power to revoke or reschedule any running thread.&lt;/li&gt;
&lt;/ul&gt;
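
&lt;p&gt;A minimal threading sketch, with &lt;code&gt;time.sleep&lt;/code&gt; standing in for a network call (the &lt;code&gt;fetch&lt;/code&gt; function and URLs are hypothetical):&lt;/p&gt;

```python
# Sketch: I/O-bound work on a thread pool; time.sleep stands in
# for a network call.
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # While a thread blocks here (as it would on real I/O),
    # the GIL is released and other threads make progress.
    time.sleep(0.1)
    return "fetched " + url

urls = ["u1", "u2", "u3", "u4"]
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, urls))
elapsed = time.perf_counter() - start
# The four 0.1 s waits overlap, so the total is roughly 0.1 s, not 0.4 s.
print(results, round(elapsed, 2))
```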

&lt;h2&gt;
  
  
  Python Asyncio
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A single process with a single thread, making better use of a CPU that would otherwise sit idle waiting for I/O.&lt;/li&gt;
&lt;li&gt;asyncio’s event loop routinely checks the progress of tasks; whenever a task yields control (typically while waiting for I/O), the loop schedules another task for execution. This minimizes the time spent waiting for I/O. &lt;/li&gt;
&lt;/ul&gt;
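
&lt;p&gt;The same idea with asyncio, on a single thread (a sketch; &lt;code&gt;asyncio.sleep&lt;/code&gt; stands in for real I/O):&lt;/p&gt;

```python
# Sketch: cooperative concurrency on one thread with asyncio;
# asyncio.sleep stands in for real I/O.
import asyncio

async def fetch(name):
    # "await" yields control to the event loop until the wait completes,
    # letting the loop schedule the other coroutines in the meantime.
    await asyncio.sleep(0.1)
    return "fetched " + name

async def main():
    # gather() runs all three coroutines concurrently on one thread.
    return await asyncio.gather(fetch("a"), fetch("b"), fetch("c"))

results = asyncio.run(main())
print(results)
```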

&lt;p&gt;Based on my use case, I chose multi-threading, since the function is very quick and ephemeral (killing and restarting a thread will not cause problems for the function).&lt;/p&gt;

&lt;h2&gt;
  
  
  Running it in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Okay! What are the points to be considered while running this in Kubernetes?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requests – the guaranteed (soft) allocation of a resource, used by the scheduler to place the Pod.&lt;/li&gt;
&lt;li&gt;Limits – the hard stop for a resource (CPU / RAM). A container exceeding its memory limit is OOM-killed; CPU usage beyond the limit is throttled.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, if we don’t specify any resource limits, Kubernetes does not constrain the container, and it may consume whatever free resources are available on the node. &lt;/p&gt;

&lt;h2&gt;
  
  
  Coming back to my use case
&lt;/h2&gt;

&lt;p&gt;I provisioned a pod with 1 CPU and 5 threads; this needs to be calculated and initialized based on the nature of your function. (We can discuss this thread-allocation process in a separate post.)&lt;/p&gt;
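
&lt;p&gt;One way to derive the thread count at start-up (a sketch; the &lt;code&gt;POD_CPU_LIMIT&lt;/code&gt; environment variable is an assumed name you would inject yourself, e.g. via the Downward API’s resource fields):&lt;/p&gt;

```python
# Sketch: size the worker pool from the pod's CPU allotment.
# POD_CPU_LIMIT is a hypothetical env var injected via the pod spec.
import os

cpu_limit = float(os.environ.get("POD_CPU_LIMIT", "1"))
threads_per_cpu = 5  # tuned for a quick, ephemeral, I/O-bound function
max_workers = max(1, int(cpu_limit * threads_per_cpu))
print(max_workers)
```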

&lt;p&gt;We all know that resources can be controlled using requests and limits, and Kubernetes manages them automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From k8s doc:&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;If the node where a Pod is running has enough of a resource available, it's possible (and allowed) for a container to use more resources than its request for that resource specifies. However, a container is not allowed to use more than its resource limit.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Happy processing! &lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor" rel="noopener noreferrer"&gt;https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://leimao.github.io/blog/Python-Concurrency-High-Level/" rel="noopener noreferrer"&gt;https://leimao.github.io/blog/Python-Concurrency-High-Level/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ubuntu.com/blog/job-concurrency-in-kubernetes-lxd-cpu-pinning-to-the-rescue" rel="noopener noreferrer"&gt;https://ubuntu.com/blog/job-concurrency-in-kubernetes-lxd-cpu-pinning-to-the-rescue&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Join us
&lt;/h2&gt;

&lt;p&gt;Register for &lt;em&gt;Kubernetes Community Days Chennai 2022&lt;/em&gt; at &lt;a href="http://www.kcdchennai.in" rel="noopener noreferrer"&gt;kcdchennai.in&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kcdchennai</category>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>python</category>
    </item>
    <item>
      <title>Kubernetes Operators: Cruise Control for Managing Cloud-Native Apps</title>
      <dc:creator>Senthil Raja Chermapandian</dc:creator>
      <pubDate>Mon, 07 Mar 2022 21:43:02 +0000</pubDate>
      <link>https://forem.com/kcdchennai/kubernetes-operators-cruise-control-for-managing-cloud-native-apps-107d</link>
      <guid>https://forem.com/kcdchennai/kubernetes-operators-cruise-control-for-managing-cloud-native-apps-107d</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Author:&lt;/strong&gt;&lt;/em&gt; &lt;em&gt;Senthil Raja Chermapandian is Principal Software Engineer in Ericsson. He is a Certified Kubernetes Administrator (CKA), Maintainer of Open Source project “kube-fledged”, Tech Blogger, Speaker &amp;amp; Organizer of KCD Chennai. He specialises in Machine Learning, Distributed Systems, Edge Computing, Cloud-Native Software Development, WebAssembly, Kubernetes and Google Cloud Platform&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;Managing Applications in Production has traditionally been a Human-centric affair. Large Ops Teams were tasked with managing the day-to-day operations of the running Apps. These teams had experts with deep domain and functional knowledge of the App and its Infrastructure, and often relied on heroes to save the day when an outage occurred.&lt;/p&gt;

&lt;p&gt;When the number of Apps and their complexities started growing, inefficiencies and cost overruns crept in. IT Organizations resorted to Automation Tools to address these challenges. As a result, the Ops team evolved to become more Tool-centric, relying on tools, automation and scripts for tasks like monitoring, alerting, patching, backup/restore etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code-centric App Management:
&lt;/h2&gt;

&lt;p&gt;In due course, Organizations started embracing Cloud-Native Applications and Infrastructure. The fundamental approach to designing and building Apps changed: Cloud, Containers and Micro-services took center-stage, and users began to take elasticity, scalability and resiliency for granted. This trend created new challenges and paved the way for the widespread adoption of DevOps and SRE approaches to elevate efficiencies in Operations. Relying on Humans and Tools alone for managing Cloud-Native Applications won’t suffice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uMEVeQ6O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p3suecc3eaclvh6v1uxs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uMEVeQ6O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p3suecc3eaclvh6v1uxs.png" alt="Image description" width="875" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The answer is a Code-centric approach to managing a Cloud-Native App in Production. In the Code-centric approach, the Ops team transforms itself into a group of Software engineers with a goal to codify all of the domain knowledge and operational tasks required for managing a Cloud-Native App. Code becomes the fundamental asset for managing the App, with Humans and Tools augmenting wherever needed. This codified asset is capable of performing all the operational tasks for managing the App. Let’s call this codified asset an Ops-App. An Ops-App is often built by SREs using the same language, framework and constructs used by the App it manages. The Ops-App handles all the operational activities of the App viz. Deployment, Upgrade, Patching, Backup/Restore, Monitoring, Alerting, Scaling etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Kubernetes Operator?
&lt;/h2&gt;

&lt;p&gt;Kubernetes allows us to pool Compute Resources from a group of physical and virtual servers, and makes these resources available to Containerized applications on-demand. It manages the lifecycle of containers, and provides various critical capabilities: automation, self-healing, persistent volumes, RBAC, auto-scaling etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JjWDVcG6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mz00oqaful9y7euppb94.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JjWDVcG6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mz00oqaful9y7euppb94.png" alt="Image description" width="875" height="717"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Kubernetes Operator, in simple terms, is an Ops-App for a Containerized Cloud-Native App running in Kubernetes. Kubernetes has inherent support for several basic App management tasks like self-healing, scaling, monitoring etc. A Kubernetes Operator “amplifies” these basic capabilities into a full-blown Ops-App that has the entire domain knowledge of the App embedded in it. It does this by simply extending or enhancing the capabilities of the Kubernetes cluster that hosts the App (Kubernetes allows users to add new API resources via a feature called Custom Resource Definition (CRD)). A user can then use the native Kubernetes command line tool (kubectl) or APIs to interact with the Ops-App, in the same way the user interacts with the Kubernetes Control plane. And you get to run both the App and the Ops-App in the same Kubernetes cluster, allowing you to use existing CI/CD pipelines to manage the release of the App and its corresponding Ops-App. In Kubernetes parlance, the Ops-App is called an Operator.&lt;/p&gt;

&lt;p&gt;Let’s assume the Dev Team has written a Java Spring Boot Application and has Containerized the App into a Container Image. The Ops Team has to deploy this App into a Production Kubernetes Cluster and manage the lifecycle of the App. Rather than deploying this App directly, the Ops team would write an Operator for this App. The Operator “knows” how to programmatically deploy the App into the Cluster with the right configuration, resource requirements etc. The Operator can also be written in such a fashion that it knows “much more” than deploying the App: it knows how to apply patches, roll out upgrades, back up the App’s data, monitor the metrics of the App, initiate scaling, restore the backup etc. A human being would be required to intervene only for tasks which the Operator cannot perform. As a result, managing an App becomes highly efficient and cost-optimized. This is a powerful pattern for managing Cloud-Native Apps in Kubernetes.&lt;/p&gt;
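
&lt;p&gt;At the heart of every Operator is a reconciliation loop: observe the desired state (declared in the custom resource) and the actual state (observed in the cluster), then act to converge them. A framework-free sketch of that idea (the dictionaries below are hypothetical stand-ins, not a real Kubernetes client):&lt;/p&gt;

```python
# Sketch of an Operator's reconciliation loop: compare desired state
# (from the custom resource) with actual state and emit corrective actions.
# The dictionaries are hypothetical stand-ins for real cluster objects.

def reconcile(desired, actual):
    actions = []
    if actual.get("replicas") != desired["replicas"]:
        actions.append(("scale", desired["replicas"]))
    if actual.get("version") != desired["version"]:
        actions.append(("upgrade", desired["version"]))
    return actions

desired = {"replicas": 3, "version": "1.2.0"}
actual = {"replicas": 1, "version": "1.1.0"}
print(reconcile(desired, actual))
```

&lt;p&gt;A real Operator runs this loop continuously, triggered by watch events from the API server, and the “actions” are calls against the Kubernetes API.&lt;/p&gt;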

&lt;h2&gt;
  
  
  Operator Capability Levels
&lt;/h2&gt;

&lt;p&gt;Operators come in different maturity levels with regard to their lifecycle management capabilities for the application or workload they manage. The capability model aims to provide guidance and terminology to express what features users can expect from an operator.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--newpOm0f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/brziagbziu6wb40hynk2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--newpOm0f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/brziagbziu6wb40hynk2.png" alt="Image description" width="875" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each capability level is associated with a certain set of management features the Operator offers around the managed workload. Operators that do not manage a workload and/or delegate to off-cluster orchestration services would remain at Level 1. Capability levels are cumulative, i.e. Level 3 capabilities require all capabilities from Levels 1 and 2.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Operator Framework
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---VDvzZzG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xnao4h3013x6c9esa7to.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---VDvzZzG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xnao4h3013x6c9esa7to.png" alt="Image description" width="342" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Kubernetes Operator brings much needed efficiencies in managing a Containerized App. Excellent! However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate code, and a lack of modularity which leads to duplication. The Operator Framework takes the pain out of writing a Kubernetes Operator. It is an open source toolkit to manage Operators in an effective, automated, and scalable way, and it provides an SDK which can be downloaded. Refer to this &lt;a href="https://faun.pub/writing-your-first-kubernetes-operator-8f3df4453234"&gt;blog&lt;/a&gt; to get an in-depth understanding of using the Operator SDK for writing an Operator.&lt;/p&gt;

&lt;p&gt;If you are looking for pre-built, readily usable Operators, they’re available at &lt;a href="https://operatorhub.io/"&gt;operatorhub.io&lt;/a&gt;. This is a collection of Open-sourced Operators for popular Applications. You can either use them as-is or download the source code and modify it as per your use case. As of this writing, the repository has 209 Operators for Apps ranging from Akka to Wildfly. You could &lt;a href="https://operatorhub.io/contribute"&gt;submit&lt;/a&gt; your Operator to operatorhub.io for others to discover and use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;Kubernetes Operators can be very valuable in managing Cloud-Native Applications in Kubernetes. If you can build a group of SREs with the right skillset for writing and maintaining Operators, you’d benefit a lot. I hope this article has helped you get the big picture of Kubernetes Operators and their benefits. Share your feedback in the comments section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Join us
&lt;/h2&gt;

&lt;p&gt;Register for &lt;em&gt;Kubernetes Community Days Chennai 2022&lt;/em&gt; at &lt;a href="http://www.kcdchennai.in"&gt;kcdchennai.in&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kcdchennai</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Kube-fledged: Cache Container Images in Kubernetes</title>
      <dc:creator>Senthil Raja Chermapandian</dc:creator>
      <pubDate>Mon, 07 Mar 2022 21:40:58 +0000</pubDate>
      <link>https://forem.com/kcdchennai/kube-fledged-cache-container-images-in-kubernetes-3g58</link>
      <guid>https://forem.com/kcdchennai/kube-fledged-cache-container-images-in-kubernetes-3g58</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Author:&lt;/strong&gt;&lt;/em&gt; &lt;em&gt;Senthil Raja Chermapandian is Principal Software Engineer in Ericsson. He is a Certified Kubernetes Administrator (CKA), Maintainer of Open Source project “kube-fledged”, Tech Blogger, Speaker &amp;amp; Organizer of KCD Chennai. He specialises in Machine Learning, Distributed Systems, Edge Computing, Cloud-Native Software Development, WebAssembly, Kubernetes and Google Cloud Platform&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;“The Peregrine Falcon is renowned for its speed, reaching over 200 mph during its characteristic high-speed dive, making it the fastest bird in the world, as well as the fastest member of the animal kingdom.” (Source: Wikipedia)&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;When a Containerized Application is deployed to a Kubernetes cluster, the K8s control plane schedules the Pod to a worker node in the cluster. The Node agent (Kubelet) running in the worker node coordinates with the container runtime (e.g. containerd) installed in the node, and pulls the necessary container images from the Image Registry. Depending upon the size of the images and the network bandwidth available, it takes time to pull all the images to the node. So, in any containerized application, we should be cognizant of the delay introduced by fetching the images from the registry. Traditional applications that run as processes (e.g. managed by systemd), however, do not suffer from this delay because all necessary files are already installed on the machine.&lt;/p&gt;

&lt;p&gt;Imagine your Containerized Application experiences a sudden surge in traffic, and it needs to immediately scale out horizontally (i.e. additional instances need to be created). If you had configured the Horizontal Pod Autoscaler (HPA), the K8s control plane creates additional replicas of Pods. However, these Pods won’t be available for handling the increased traffic until the required images are pulled and the containers are up and running. Or assume your Application needs to process high-speed real-time data. Such applications have stringent requirements on how rapidly they can be started up and scaled, because of the very nature of the purpose they fulfil. In short, there are several use cases where the delay introduced by pulling the images from the registry is not acceptable. Moreover, the network connectivity between the cluster and the image registry could suffer from poor bandwidth, or the connectivity could be totally lost. There are scenarios, especially in Edge computing, where Applications have to gracefully tolerate intermittent network connectivity.&lt;/p&gt;

&lt;p&gt;These challenges could be solved by different means. One solution that would immensely help in these scenarios is to have the container images cached directly on the cluster worker nodes, so that Kubelet doesn’t need to pull these images and can immediately use the images already cached on the nodes. In this blog, I’ll explain how one can use &lt;a href="https://github.com/senthilrch/kube-fledged" rel="noopener noreferrer"&gt;kube-fledged&lt;/a&gt;, an open source project, to build and manage a cache of container images in a Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Existing Solutions
&lt;/h2&gt;

&lt;p&gt;Before I introduce you to kube-fledged, let me briefly describe existing solutions to tackle this problem. The widely-used approach is to have a Registry mirror running inside the Cluster. Two widely used solutions are i) in-cluster self hosted registry ii) pull-through cache. In the former solution, a local registry is run within the k8s cluster and it is configured as a mirror registry in the container runtime. Any image pull request is directed to the in-cluster registry. If this fails, the request is directed to the primary registry. In the latter solution, the local registry has caching capabilities. When the image is pulled the first time, it is cached in the local registry. Subsequent requests for the image are served by the local registry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Drawbacks of existing solutions
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Setting up and maintaining the Local registry mirror consumes considerable computational and human resources.&lt;/li&gt;
&lt;li&gt;For huge clusters spanning multiple regions, we need to have multiple local registry mirrors. This introduces unnecessary complexities when application instances span multiple regions. You might need to have multiple Deployment manifests each pointing to the local registry mirror of that region.&lt;/li&gt;
&lt;li&gt;These approaches don’t fully solve the requirement for achieving rapid starting of a Pod since there is still a notable delay in pulling the image from the local mirror. There are several use cases which cannot tolerate this delay.&lt;/li&gt;
&lt;li&gt;Nodes might lose network connectivity to the local registry mirror so the Pod will be stuck until the connectivity is restored.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Overview of kube-fledged
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fewadfmsdp3fmt6r6m5v0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fewadfmsdp3fmt6r6m5v0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;kube-fledged&lt;/strong&gt; is a kubernetes add-on or operator for creating and managing a cache of container images directly on the worker nodes of a kubernetes cluster. It allows a user to define a list of images and onto which worker nodes those images should be cached (i.e. pulled). As a result, application pods start almost instantly, since the images need not be pulled from the registry. kube-fledged provides CRUD APIs to manage the lifecycle of the image cache, and supports several configurable parameters in order to customize the functioning as per one’s needs. (URL: &lt;a href="https://github.com/senthilrch/kube-fledged" rel="noopener noreferrer"&gt;https://github.com/senthilrch/kube-fledged&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;kube-fledged is designed and built as a general-purpose solution for managing an image cache in Kubernetes. Though the primary use case is to enable rapid Pod start-up and scaling, the solution supports a wide variety of use cases, as mentioned below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Applications that require rapid start-up. For e.g. an application performing real-time data processing needs to scale rapidly due to a burst in data volume.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Serverless Functions since they need to react immediately to incoming events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IoT applications that run on Edge devices, because the network connectivity between the edge device and image registry would be intermittent.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If images need to be pulled from a private registry and everyone cannot be granted access to pull images from this registry, then the images can be made available on the nodes of the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If a cluster administrator or operator needs to roll-out upgrades to an application and wants to verify before-hand if the new images can be pulled successfully.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How kube-fledged works
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkomooynvuy0mzg02m1bh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkomooynvuy0mzg02m1bh.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes allows developers to extend the kubernetes api via Custom Resources. kube-fledged defines a custom resource of kind “&lt;strong&gt;ImageCache&lt;/strong&gt;” and implements a custom controller (named kubefledged-controller). kubefledged-controller does the heavy-lifting for managing image cache. Users can use kubectl commands for creation and deletion of ImageCache resources.&lt;/p&gt;

&lt;p&gt;kubefledged-controller has a built-in Image Manager routine that is responsible for pulling and deleting images. Images are pulled or deleted using kubernetes jobs. If enabled, image cache is refreshed periodically by the refresh worker. kubefledged-controller updates the status of image pulls, refreshes and image deletions in the status field of ImageCache resource. kubefledged-webhook-server is responsible for validating the fields of the ImageCache resource.&lt;/p&gt;

&lt;p&gt;If you need to create an image cache in your cluster, you only need to create an ImageCache manifest by specifying the list of images to be pulled, along with a &lt;em&gt;nodeSelector&lt;/em&gt;. The &lt;em&gt;nodeSelector&lt;/em&gt; is used to specify the nodes onto which the images should be cached. If you want the images to be cached in all the nodes of the cluster, then omit the &lt;em&gt;nodeSelector&lt;/em&gt;. When you submit the manifest to your cluster, the API server will POST a validating webhook event to kubefledged-webhook-server. The webhook server validates the &lt;em&gt;cacheSpec&lt;/em&gt; of the manifest. Upon receiving a successful response from the webhook server, API server persists the ImageCache resource in etcd. This triggers an Informer notification to kubefledged-controller, which queues the request. The request is picked up by the Image cache worker, which creates multiple image pull requests (one request per image per node) and places them in the image pull/delete queue. These requests are handled by the image manager routine. For every request, the image manager creates a k8s job that is responsible for pulling the image into the cache. The image manager keeps track of the jobs it creates and once a job completes, it places a response in a separate queue. The image cache worker then aggregates all the results from the image manager and finally updates the status section of the ImageCache resource.&lt;/p&gt;

&lt;p&gt;kube-fledged has a refresh worker routine which runs periodically to keep the image cache refreshed. If it discovers that any image is missing in the cache (perhaps removed by kubelet’s image garbage collection), it re-pulls the image into the cache. Images with &lt;em&gt;:latest&lt;/em&gt; tag are always re-pulled during the refresh cycle. By default, the refresh cycle is triggered every &lt;em&gt;5m&lt;/em&gt;. Users can modify it to a different value or completely disable the auto-refresh mechanism while deploying kube-fledged. An on-demand refresh mechanism is also supported, using which users can request kube-fledged to refresh the image cache immediately.&lt;/p&gt;
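
&lt;p&gt;The refresh rule described above can be sketched in a few lines (this is an illustration of the idea, not kube-fledged’s actual implementation):&lt;/p&gt;

```python
# Sketch of the refresh rule: re-pull any image missing from the node's
# cache, and always re-pull ":latest" tags.
# Illustrative only; not kube-fledged's actual code.

def images_to_repull(spec_images, cached_images):
    repull = []
    for image in spec_images:
        if image.endswith(":latest") or image not in cached_images:
            repull.append(image)
    return repull

print(images_to_repull(
    ["app:1.0", "web:latest", "db:2.3"],
    {"app:1.0", "web:latest"},
))
```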

&lt;h2&gt;
  
  
  Image Cache actions supported by kube-fledged
&lt;/h2&gt;

&lt;p&gt;kube-fledged supports the following image cache actions. All these actions can be performed using kubectl or by directly submitting a REST API request to the Kubernetes API server:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create Image Cache&lt;/li&gt;
&lt;li&gt;Modify Image Cache&lt;/li&gt;
&lt;li&gt;Refresh Image Cache&lt;/li&gt;
&lt;li&gt;Purge Image Cache&lt;/li&gt;
&lt;li&gt;Delete Image Cache&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Supported Container Runtimes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;docker&lt;/li&gt;
&lt;li&gt;containerd&lt;/li&gt;
&lt;li&gt;cri-o&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Supported Platforms
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;linux/amd64&lt;/li&gt;
&lt;li&gt;linux/arm&lt;/li&gt;
&lt;li&gt;linux/arm64&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try kube-fledged
&lt;/h2&gt;

&lt;p&gt;The quickest way to try out kube-fledged is to deploy it using the YAML manifests in the project’s GitHub Repo (&lt;a href="https://github.com/senthilrch/kube-fledged" rel="noopener noreferrer"&gt;https://github.com/senthilrch/kube-fledged&lt;/a&gt;). You could also deploy it using helm chart and helm operator. Find below the steps for deploying kube-fledged using manifests:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clone the source code repository&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ mkdir -p $HOME/src/github.com/senthilrch
$ git clone https://github.com/senthilrch/kube-fledged.git $HOME/src/github.com/senthilrch/kube-fledged
$ cd $HOME/src/github.com/senthilrch/kube-fledged


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Deploy kube-fledged to the cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ make deploy-using-yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Verify if kube-fledged deployed successfully&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&lt;p&gt;$ kubectl get pods -n kube-fledged -l app=kubefledged&lt;br&gt;
$ kubectl get imagecaches -n kube-fledged (Output should be: 'No resources found')&lt;/p&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Similar solutions
&lt;/h2&gt;

&lt;p&gt;Find below a list of similar open source solutions that I have noticed. These solutions try to address the problem using alternate approaches (If you happen to know of other similar solutions, please add a comment to this blog).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stargz Snapshotter&lt;/strong&gt;: Fast container image distribution plugin with lazy pulling (URL: &lt;a href="https://github.com/containerd/stargz-snapshotter" rel="noopener noreferrer"&gt;https://github.com/containerd/stargz-snapshotter&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Uber Kraken&lt;/strong&gt;: Kraken is a P2P Docker registry capable of distributing TBs of data in seconds (URL: &lt;a href="https://github.com/uber/kraken" rel="noopener noreferrer"&gt;https://github.com/uber/kraken&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Imagewolf&lt;/strong&gt;: ImageWolf is a PoC that provides a blazingly fast way to get Docker images loaded onto your cluster, allowing updates to be pushed out quicker (URL: &lt;a href="https://github.com/ContainerSolutions/ImageWolf" rel="noopener noreferrer"&gt;https://github.com/ContainerSolutions/ImageWolf&lt;/a&gt;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There are Applications and Use cases which require rapid start-up and scaling. The delay introduced by pulling images from the registry might not be acceptable in such cases. Moreover, the network connectivity to the registry might be unstable or intermittent. And there could be security reasons for not granting all users access to secure registries. kube-fledged can be a simple and useful solution to build and manage a cache of container images directly on the cluster worker nodes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Join us
&lt;/h2&gt;

&lt;p&gt;Register for &lt;em&gt;Kubernetes Community Days Chennai 2022&lt;/em&gt; at &lt;a href="http://www.kcdchennai.in" rel="noopener noreferrer"&gt;kcdchennai.in&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kcdchennai</category>
      <category>kubernetes</category>
      <category>containers</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Kubernetes Community Days Chennai Chapter - Logo Highlights</title>
      <dc:creator>Senthil Raja Chermapandian</dc:creator>
      <pubDate>Mon, 07 Mar 2022 21:25:49 +0000</pubDate>
      <link>https://forem.com/kcdchennai/kubernetes-community-days-chennai-chapter-logo-highlights-1kf7</link>
      <guid>https://forem.com/kcdchennai/kubernetes-community-days-chennai-chapter-logo-highlights-1kf7</guid>
      <description>&lt;p&gt;&lt;em&gt;by Gayathri R, Co-Organizer, KCD Chennai 2022&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes Community Days(KCD) Chennai&lt;/strong&gt; chapter logo hosts multiple identities such as the tri-color of the Indian National flag, monuments in Chennai and green avenue trees.&lt;/p&gt;

&lt;p&gt;In the middle of the logo is the steering wheel (helm) of Kubernetes and the 3 stripes of the Indian national flag (Saffron, white and green), lined by 7 iconic monuments in Chennai:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Chennai Central station&lt;/strong&gt; - One of the most prominent landmarks of Chennai that serves as the main gateway by Rail to the rest of the country.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vailankanni Shrine&lt;/strong&gt; - Known as the 'Lourdes of the East', the Velankanni Church is one of the most revered pilgrimages for Catholics in India.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;San Thome Cathedral Basilica&lt;/strong&gt; - It's a stunning piece of architecture steeped in religious history.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ripon Building&lt;/strong&gt; - It is the seat and headquarters of the Greater Chennai Corporation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kapaleeshwarar Temple&lt;/strong&gt; - It's one of the prominent Shiva temples in India situated in Mylapore, Chennai.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;St. George’s Fort&lt;/strong&gt; - Historically called White Town, it is the first English (later British) fortress in India.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Valluvar Kottam&lt;/strong&gt; - It is the memorial monument of the Tamil poet Thiruvalluvar who wrote the famous Thirukkural.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Logo symbolizes - &lt;strong&gt;Team spirit, talent, learning and action&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Join the KCD Chennai Chapter: &lt;a href="//www.kcdchennai.in"&gt;www.kcdchennai.in&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Follow us on Twitter: &lt;a href="//twitter.com/kcdchennai"&gt;twitter.com/kcdchennai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Follow us on LinkedIn: &lt;a href="//linkedin.com/in/kcdchennai"&gt;linkedin.com/in/kcdchennai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Join our Slack channel: &lt;a href="//slack.cncf.io"&gt;slack.cncf.io&lt;/a&gt; #kcd-chennai&lt;/li&gt;
&lt;li&gt;Email: &lt;a href="mailto:organizers-chennai@kubernetescommunitydays.org"&gt;organizers-chennai@kubernetescommunitydays.org&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kcdchennai</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Kubernetes Community Days Chennai 2022</title>
      <dc:creator>Senthil Raja Chermapandian</dc:creator>
      <pubDate>Mon, 07 Mar 2022 21:19:08 +0000</pubDate>
      <link>https://forem.com/kcdchennai/kubernetes-community-days-chennai-2022-11mh</link>
      <guid>https://forem.com/kcdchennai/kubernetes-community-days-chennai-2022-11mh</guid>
      <description>&lt;p&gt;Kubernetes Community Days (KCDs) are community-organized events that gather adopters and technologists from open source and cloud native communities for education, collaboration, and networking. KCDs are supported by the Cloud Native Computing Foundation (CNCF). These fun, locally-defined events help grow and sustain the Kubernetes community.&lt;/p&gt;

&lt;p&gt;We are hosting the very first Kubernetes Community Days in Chennai on 3-4 June 2022. It's a two-day virtual event with keynotes, breakout sessions and much more. We invite the open source community to benefit from this event. Come join us!!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Join the KCD Chennai Chapter: &lt;a href="//www.kcdchennai.in"&gt;www.kcdchennai.in&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Follow us on Twitter: &lt;a href="//twitter.com/kcdchennai"&gt;twitter.com/kcdchennai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Follow us on LinkedIn: &lt;a href="//linkedin.com/in/kcdchennai"&gt;linkedin.com/in/kcdchennai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Join our Slack channel: &lt;a href="//slack.cncf.io"&gt;slack.cncf.io&lt;/a&gt; #kcd-chennai&lt;/li&gt;
&lt;li&gt;Email: &lt;a href="mailto:organizers-chennai@kubernetescommunitydays.org"&gt;organizers-chennai@kubernetescommunitydays.org&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kcdchennai</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Woot…Kubernetes Adds Support for Swap Memory</title>
      <dc:creator>Senthil Raja Chermapandian</dc:creator>
      <pubDate>Thu, 02 Sep 2021 07:39:25 +0000</pubDate>
      <link>https://forem.com/kcdchennai/woot-kubernetes-adds-support-for-swap-memory-1912</link>
      <guid>https://forem.com/kcdchennai/woot-kubernetes-adds-support-for-swap-memory-1912</guid>
      <description>&lt;h2&gt;
  
  
  Woot moment
&lt;/h2&gt;

&lt;p&gt;When I was going through the Kubernetes 1.22 release announcement blog, there was a woot moment in it for me. Within the section titled Major Themes was a brief mention of a new alpha feature titled Node system swap support. My mind rewound to the year 2016, when I took up a new assignment with a Telecom MNC as a PaaS platform developer. My job required me to spin up 2-node K8s clusters very often (back then we did not have tools like kubeadm, kubespray etc.) using cloud VMs. I would sometimes forget to disable swap on the nodes, and the cluster wouldn’t come up. After a bit of frustration, coffee and help from colleagues, I would realize I had once again forgotten to disable swap!! Many of my colleagues had the same experience, so much so that if someone faced issues in bootstrapping a K8s cluster, the first question would be “Did you disable swap?”. Nostalgia apart, back then I did not dig deeper into the significance of disabling swap when standing up a K8s cluster.&lt;/p&gt;

&lt;p&gt;So when I read the release announcement blog, I decided to dig deeper into this topic. Soon after the 1.22 release, Elana Hashman (Red Hat) published a blog that shed more light on this new alpha feature. This post is inspired by Elana’s blog, and I have used some of its contents here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why did prior releases of K8s not support swap?
&lt;/h2&gt;

&lt;p&gt;My digging led me to issue #53533, raised by Denis Gladkikh in 2017. This was a bug report that said: &lt;strong&gt;Kubelet/Kubernetes 1.8 does not work with Swap enabled on Linux Machines.&lt;/strong&gt; Before I get into the details, let’s digress a bit to understand the purpose of swap memory.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Swap is space on the host disk that is used when physical RAM is full. When a Linux system runs out of RAM, inactive memory pages are moved from RAM to the swap space, so RAM can be allocated to the processes that need it most. Swap space can take the form of either a dedicated swap partition or a swap file. In most cases, when running Linux on a virtual machine, a swap partition is not present, so the only option is to create a swap file. Swappiness is a Linux kernel property that defines how often the system will use the swap space. Swappiness can have a value between 0 and 100; a low value makes the kernel try to avoid swapping whenever possible, while a higher value makes the kernel use the swap space more aggressively.&lt;br&gt;
In prior releases, Kubernetes did not support the use of swap memory: having swap available has very strange and bad interactions with memory limits. For example, a container that hits its memory limit would then start spilling over into swap. This also means that if memory gets exhausted on the node, the node can potentially become completely locked up, requiring a restart, rather than just slowing down and recovering a while later.&lt;/p&gt;
&lt;/blockquote&gt;
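
&lt;p&gt;As a concrete illustration, provisioning a swap file on a Linux host takes only a few standard commands (the path, size and swappiness value below are just examples):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo fallocate -l 2G /swapfile   # reserve a 2 GiB file for swap
sudo chmod 600 /swapfile         # restrict access to root
sudo mkswap /swapfile            # format the file as swap space
sudo swapon /swapfile            # activate it
sudo sysctl vm.swappiness=10     # bias the kernel away from swapping
swapon --show                    # verify the swap space is active
&lt;/code&gt;&lt;/pre&gt;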

&lt;p&gt;A workaround was available for those who really wanted swap: the kubelet had to be run with &lt;code&gt;--fail-swap-on=false&lt;/code&gt;. Containers which do not specify a memory requirement will then by default be able to use all of the machine memory, including swap. This might only really be a viable strategy if none of the containers ever specify an explicit memory requirement…&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenarios considered and supported
&lt;/h2&gt;

&lt;p&gt;There are a number of possible ways that one could envision swap use on a node.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Swap is enabled on a node’s host system, but the kubelet does not permit Kubernetes workloads to use swap.&lt;/li&gt;
&lt;li&gt;Swap is enabled at the node level. The kubelet can permit Kubernetes workloads scheduled on the node to use some quantity of swap, depending on the configuration.&lt;/li&gt;
&lt;li&gt;Swap is set on a per-workload basis. The kubelet sets swap limits for each individual workload.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The alpha feature is limited in scope to the first two scenarios. The third scenario is not implemented yet; perhaps it will be considered in a future release.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use cases that could benefit from swap support
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Improved Node Stability
&lt;/h3&gt;

&lt;p&gt;Improved memory-management algorithms in cgroups v2, such as oomd, strongly recommend the use of swap. Hence, having a small amount of swap available on nodes could enable better resource-pressure handling and recovery.&lt;/p&gt;

&lt;h3&gt;
  
  
  Long-running applications that swap out startup memory
&lt;/h3&gt;

&lt;p&gt;Applications such as the Java and Node runtimes rely on swap for optimal performance. The initialization logic of such applications can be safely swapped out without affecting long-running resource usage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Memory Flexibility
&lt;/h3&gt;

&lt;p&gt;There are numerous cases in which the cost of additional memory is prohibitive, or elastic scaling is impossible (e.g. on-premise/bare-metal deployments). An occasional cron job with high memory usage, combined with the lack of swap support, means nodes must always be provisioned for the maximum possible memory utilization, leading to over-provisioning and high costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Local development and systems with fast storage
&lt;/h3&gt;

&lt;p&gt;Local development or single-node clusters and systems with fast storage may benefit from using available swap (e.g. NVMe swap partitions, one-node clusters). Linux has optimizations for swap on SSD, allowing for performance boosts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Low footprint systems
&lt;/h3&gt;

&lt;p&gt;Examples include edge compute systems/devices with small memory footprints (&amp;lt;2Gi) and clusters with nodes having &amp;lt;4Gi of memory.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enabling Swap Support
&lt;/h2&gt;

&lt;p&gt;Swap can be enabled as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provision swap on the target worker nodes,&lt;/li&gt;
&lt;li&gt;Enable the &lt;code&gt;NodeSwap&lt;/code&gt; feature flag on the kubelet,&lt;/li&gt;
&lt;li&gt;Set the &lt;code&gt;--fail-swap-on&lt;/code&gt; flag to false, and&lt;/li&gt;
&lt;li&gt;(Optional) Allow Kubernetes workloads to use swap by setting &lt;code&gt;MemorySwap.SwapBehavior=UnlimitedSwap&lt;/code&gt; in the kubelet config.&lt;/li&gt;
&lt;/ul&gt;
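
&lt;p&gt;Putting the steps above together, the kubelet configuration would look roughly like this. This is a sketch based on the 1.22 alpha (where the feature gate is named &lt;code&gt;NodeSwap&lt;/code&gt; upstream); do verify the field names against the KubeletConfiguration reference for your version:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false               # equivalent of --fail-swap-on=false
featureGates:
  NodeSwap: true                # enable the alpha feature
memorySwap:
  swapBehavior: UnlimitedSwap   # optional: let workloads use swap
&lt;/code&gt;&lt;/pre&gt;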

&lt;p&gt;&lt;code&gt;MemorySwap.SwapBehavior&lt;/code&gt; configures the swap memory available to container workloads. It may be one of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;LimitedSwap&lt;/code&gt;: the workload’s combined memory and swap usage cannot exceed the pod’s memory limit;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;UnlimitedSwap&lt;/code&gt;: workloads can use unlimited swap, up to the allocatable limit.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the feature flag is enabled, the user must still set &lt;code&gt;--fail-swap-on=false&lt;/code&gt; to adjust the default behaviour. A node must have swap provisioned and available for this feature to work. If there is no swap available, but the feature flag is set to true, there will still be no change in existing behaviour.&lt;/p&gt;

&lt;p&gt;The feature flag can be disabled while the &lt;code&gt;--fail-swap-on=false&lt;/code&gt; flag is set, but this would result in undefined behaviour. To turn this off, the kubelet would need to be restarted. If a cluster admin wants to disable swap on the node without repartitioning the node, they could stop the kubelet, set swapoff on the node, and restart the kubelet with &lt;code&gt;--fail-swap-on=true.&lt;/code&gt; The setting of the feature flag will be ignored in this case.&lt;/p&gt;
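
&lt;p&gt;Assuming the kubelet runs as a systemd service, the disable procedure described above amounts to something like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo systemctl stop kubelet   # stop the kubelet first
sudo swapoff -a               # deactivate all swap spaces on the node
# re-enable the swap check (--fail-swap-on=true) in the kubelet config,
# then bring the kubelet back up:
sudo systemctl start kubelet
&lt;/code&gt;&lt;/pre&gt;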

&lt;h2&gt;
  
  
  Caveats
&lt;/h2&gt;

&lt;p&gt;Having swap available on a system reduces predictability. Swap’s performance is worse than regular memory, sometimes by many orders of magnitude, which can cause unexpected performance regressions. Furthermore, swap changes a system’s behaviour under memory pressure, and applications cannot directly control what portions of their memory usage are swapped out. Since enabling swap permits greater memory usage for workloads in Kubernetes that cannot be predictably accounted for, it also increases the risk of noisy neighbours and unexpected packing configurations, as the scheduler cannot account for swap memory usage.&lt;/p&gt;

&lt;p&gt;The performance of a node with swap memory enabled depends on the underlying physical storage. When swap memory is in use, performance will be significantly worse in an I/O operations per second (IOPS) constrained environment, such as a cloud VM with I/O throttling, when compared to faster storage mediums like solid-state drives or NVMe.&lt;/p&gt;

&lt;p&gt;Use of swap is not recommended for certain performance-constrained workloads or environments. Cluster administrators and developers should benchmark their nodes and applications before using swap in production scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  Points to Remember
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The CRI has changed, so container runtimes (e.g. containerd, cri-o) won’t be able to actually accept swap configs until they have been updated against the new CRI.&lt;/li&gt;
&lt;li&gt;The work isn’t over with this release; there will be a multi-release graduation process. This feature won’t be graduating until at least 1.25.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Swap support in Kubernetes should appeal to a wide variety of users: cluster administrators as well as developers. This feature could be one more lever in reducing infrastructure costs for businesses. However, since the feature is still in the alpha stage, it is not ready for production usage. Readers are encouraged to perform benchmarking tests to evaluate its suitability for their workloads/use cases and provide feedback to the K8s SIG Node WG via the following means:&lt;/p&gt;

&lt;p&gt;SIG Node meets regularly and can be reached via Slack (channel #sig-node) or the SIG’s mailing list. Feel free to reach out to Elana Hashman (@ehashman on Slack and GitHub) if you’d like to help.&lt;/p&gt;

&lt;h2&gt;
  
  
  Join us
&lt;/h2&gt;

&lt;p&gt;Register for &lt;em&gt;Kubernetes Community Days Chennai 2022&lt;/em&gt; at &lt;a href="http://www.kcdchennai.in"&gt;kcdchennai.in&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>linux</category>
      <category>kcdchennai</category>
      <category>posted</category>
    </item>
  </channel>
</rss>
