<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Poonam Pawar</title>
    <description>The latest articles on Forem by Poonam Pawar (@poonam1607).</description>
    <link>https://forem.com/poonam1607</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1067231%2F9260d6f9-bd88-4c76-bef6-34aaa04edfc9.png</url>
      <title>Forem: Poonam Pawar</title>
      <link>https://forem.com/poonam1607</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/poonam1607"/>
    <language>en</language>
    <item>
      <title>AWS Project using SHELL SCRIPTING for DevOps</title>
      <dc:creator>Poonam Pawar</dc:creator>
      <pubDate>Wed, 31 May 2023 10:00:31 +0000</pubDate>
      <link>https://forem.com/kcdchennai/ws-project-using-shell-scripting-for-devops-115m</link>
      <guid>https://forem.com/kcdchennai/ws-project-using-shell-scripting-for-devops-115m</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In real-world DevOps scenarios, the &lt;strong&gt;AWS Resource Tracker&lt;/strong&gt; script is widely used to provide an overview of the AWS resources being utilised within an environment.&lt;/p&gt;

&lt;p&gt;It aims to help organisations monitor and manage their AWS resources effectively. The script utilises the AWS Command Line Interface (&lt;strong&gt;CLI&lt;/strong&gt;) to fetch information about different AWS services, such as &lt;strong&gt;S3 buckets, EC2 instances, Lambda functions, and IAM users&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does this project do?
&lt;/h2&gt;

&lt;p&gt;By running the AWS Resource Tracker script, users can quickly obtain a list of S3 buckets, EC2 instances, Lambda functions, and IAM users associated with their AWS account.&lt;/p&gt;

&lt;p&gt;This information can be valuable for various purposes, including auditing, inventory management, resource optimisation, and security assessment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build the project
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Create EC2 Instance
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;. Log in to your &lt;strong&gt;AWS&lt;/strong&gt; account and search for &lt;strong&gt;EC2&lt;/strong&gt; in the search bar, or click the Services button in the top-left corner of your dashboard and search for it there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45bp44kldk9i1yhnl8dk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45bp44kldk9i1yhnl8dk.png" alt="Imageaws"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then click on the &lt;strong&gt;Launch Instance&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;. Give it a &lt;strong&gt;name&lt;/strong&gt; of your choice, choose &lt;strong&gt;Ubuntu&lt;/strong&gt; as the machine image, and select your &lt;strong&gt;key pair&lt;/strong&gt; (or create one if you don't have any). Leave the instance type as &lt;strong&gt;t2.micro&lt;/strong&gt;, i.e. the Free Tier option, and click the &lt;strong&gt;Launch Instance&lt;/strong&gt; button again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fresa6qttug4lqtl7l48r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fresa6qttug4lqtl7l48r.png" alt="Image aws"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can see your &lt;strong&gt;instance&lt;/strong&gt; up and &lt;strong&gt;running&lt;/strong&gt; like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12bnduj2ggqlebdjn8lg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12bnduj2ggqlebdjn8lg.png" alt="Image aws"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;. Now click on the instance &lt;strong&gt;ID&lt;/strong&gt; to see detailed information about the running instance, and copy the public &lt;strong&gt;IP address&lt;/strong&gt; from there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F750rec8gy31u2876xc5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F750rec8gy31u2876xc5a.png" alt="Image aws"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connect to the Instance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;. Now open up your terminal and run the below command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ssh -i /Users/poonampawar/Downloads/my-key-pair.pem ubuntu@ip_add&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You have to move into the directory where you downloaded your key pair; in my case, it is the &lt;code&gt;/Downloads&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;Check yours and adjust the path accordingly, and don't forget to replace &lt;code&gt;ip_add&lt;/code&gt; with the IP address you copied. This will log you in to the virtual machine we created in AWS.&lt;/p&gt;
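One snag worth anticipating: SSH refuses to use a private key whose permissions are too open. A minimal sketch of this step, assuming the key path from above (the path is an example; adjust it, and `ip_add`, to your own values):

```shell
# Key path is an example -- point it at wherever your key pair was downloaded.
KEY="$HOME/Downloads/my-key-pair.pem"

# ssh rejects world-readable keys ("UNPROTECTED PRIVATE KEY FILE"),
# so make the key readable by its owner only.
if [ -f "$KEY" ]; then chmod 400 "$KEY"; fi

# Replace ip_add with the public IP copied in Step 3, then uncomment:
# ssh -i "$KEY" ubuntu@ip_add
```

The `ssh` line is left commented here because it needs your instance's real IP; the `chmod 400` only has to be done once per key.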

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t67paiubn1xyvyxafm5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3t67paiubn1xyvyxafm5.png" alt="Image aws"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the Script
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt;. Now create a shell script file named &lt;code&gt;aws_resource_tracker.sh&lt;/code&gt; and copy and paste the below script.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

#!/bin/bash

#########################
# Author: Your Name
# Date: 28/05/23
# Version: v1
#
# This script will report the AWS resource usage
########################

set -x

# AWS S3
# AWS EC2
# AWS Lambda
# AWS IAM Users

# list s3 buckets
echo "Print list of s3 buckets"
aws s3 ls

# list EC2 Instances
echo "Print list of ec2 instances"
aws ec2 describe-instances | jq '.Reservations[].Instances[].InstanceId'

# list lambda
echo "Print list of lambda functions"
aws lambda list-functions

# list IAM Users
echo "Print list of iam users"
aws iam list-users


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1q4n733hq6o8ybgsxc4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1q4n733hq6o8ybgsxc4.png" alt="Image aws"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It is a good convention to always provide your details, i.e. metadata (author, date, version), at the top of the script, so that other developers can contribute more easily if they want to.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;set -x&lt;/code&gt; command is there to debug the script: it prints each command before running it, followed by that command's output.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The commands &lt;code&gt;aws s3 ls&lt;/code&gt;, &lt;code&gt;aws lambda list-functions&lt;/code&gt; and &lt;code&gt;aws iam list-users&lt;/code&gt; list your S3 buckets, Lambda functions and IAM users respectively.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The command &lt;code&gt;aws ec2 describe-instances | jq '.Reservations[].Instances[].InstanceId'&lt;/code&gt; fetches the full &lt;code&gt;JSON&lt;/code&gt; description of your instances and uses &lt;code&gt;jq&lt;/code&gt; to extract just the instance IDs present in your &lt;code&gt;aws&lt;/code&gt; account.&lt;/p&gt;
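If you want to see what that `jq` filter does without touching your AWS account, you can feed it a canned response of the same shape (the instance ID below is made up):

```shell
# A stripped-down sample of what `aws ec2 describe-instances` returns.
sample='{"Reservations":[{"Instances":[{"InstanceId":"i-0abc123"}]}]}'

# The same filter the script uses: walk every reservation, every instance,
# and pull out its InstanceId.
echo "$sample" | jq '.Reservations[].Instances[].InstanceId'
```

jq prints each ID as a quoted JSON string; add the `-r` flag if you want the raw, unquoted value instead.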

&lt;h3&gt;
  
  
  Test the script
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 6&lt;/strong&gt;. Run the below command to see the output.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./aws_resource_tracker.sh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The output will look like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsk0ke7hl253msylpx9m1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsk0ke7hl253msylpx9m1.png" alt="Image o/p"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Apply CronJob
&lt;/h3&gt;

&lt;p&gt;To use cron jobs with your script, you can schedule it to run at specific intervals using the cron syntax. Here's an example of how you can modify your script to use cron jobs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open a terminal and run the following command to edit the &lt;br&gt;
crontab file:&lt;br&gt;
&lt;code&gt;crontab -e&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If prompted, choose your preferred text editor.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6328ifon40392btyjxb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6328ifon40392btyjxb.png" alt="Image o/p"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add a new line to the crontab file to schedule your 
script. For example, to run the script every day at 9 AM, 
you can add the following line:
&lt;code&gt;0 9 * * * /path/to/your/script.sh&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Modify the path &lt;code&gt;/path/to/your/script.sh&lt;/code&gt; to the actual &lt;br&gt;
  path where your script is located.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4vc06fc6kjsfxxo6tv52.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4vc06fc6kjsfxxo6tv52.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Save the crontab file and exit the text editor.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The above cron expression (0 9 * * *) represents the schedule: minute (0), hour (9), any day of the month, any month, and any day of the week. You can customize the schedule based on your requirements using the cron syntax.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By adding this line to the crontab file, your script will be executed automatically on the defined schedule. By default, cron tries to email the script's output to the local user; alternatively, you can redirect the output to a file if needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure that the script is executable (&lt;code&gt;chmod +x /path/to/your/script.sh&lt;/code&gt;) and that the necessary environment variables and AWS CLI configurations are set up correctly for the script to run successfully within the cron environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
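Putting those points together, a complete crontab entry with its output captured to a log file might look like this (the script path is the same placeholder used above, and the log location is illustrative):

```
# m h dom mon dow  command
0 9 * * * /path/to/your/script.sh >> /tmp/aws_resource_tracker.log 2>&1
```

The `>> … 2>&1` redirect appends both normal output and errors to the log file, which is handy because cron jobs have no terminal attached.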

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitoring and auditing&lt;/strong&gt;: By running this script periodically using cron jobs, you can monitor and audit your AWS resources. It provides insights into the status and details of different resources, such as S3 buckets, EC2 instances, Lambda functions, and IAM users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource inventory&lt;/strong&gt;: The script helps in maintaining an up-to-date inventory of your AWS resources. It lists the S3 buckets, EC2 instances, Lambda functions, and IAM users, allowing you to have a clear understanding of what resources exist in your environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Troubleshooting&lt;/strong&gt;: In case of any issues or incidents, this script can be used to quickly gather information about the relevant AWS resources. For example, if there is an issue with an EC2 instance, you can run the script to get the instance ID and other details for further investigation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automation and reporting&lt;/strong&gt;: The script can be integrated into an automated pipeline or workflow to generate regular reports about AWS resource usage. This information can be valuable for tracking costs, resource utilization, and compliance requirements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability and efficiency&lt;/strong&gt;: In larger environments with numerous AWS resources, manually retrieving information about each resource can be time-consuming and error-prone. By using this script, you can automate the process and retrieve resource details in a consistent and efficient manner.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, this script simplifies the process of gathering information about AWS resources, enhances visibility into your infrastructure, and supports effective management and monitoring of your DevOps environment.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Github link: &lt;a href="https://github.com/Poonam1607/shell-scripting-projects" rel="noopener noreferrer"&gt;https://github.com/Poonam1607/shell-scripting-projects&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Resource: For better understanding and visual learning you can check out this tutorial - &lt;a href="https://youtu.be/gx5E47R9fGk" rel="noopener noreferrer"&gt;https://youtu.be/gx5E47R9fGk&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;This project is purely based on my learnings. Errors may occur while performing it in your setup. If you find any issue with it, feel free to reach out to me.&lt;/p&gt;

&lt;p&gt;Thank you🖤!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>shell</category>
      <category>devops</category>
      <category>kcdchennai</category>
    </item>
    <item>
      <title>Kubernetes Cluster Maintenance</title>
      <dc:creator>Poonam Pawar</dc:creator>
      <pubDate>Tue, 30 May 2023 10:16:13 +0000</pubDate>
      <link>https://forem.com/kcdchennai/kubernetes-cluster-maintenance-58k8</link>
      <guid>https://forem.com/kcdchennai/kubernetes-cluster-maintenance-58k8</guid>
      <description>&lt;h2&gt;
  
  
  Introduction✍️
&lt;/h2&gt;

&lt;p&gt;Till now we have done a lot of things!!🥹&lt;/p&gt;

&lt;p&gt;Recap👇&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes Installation &amp;amp; Configurations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes Networking, Workloads, Services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes Storage &amp;amp; Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;kudos to you!👏🫡 This is literally a lot...!😮‍💨&lt;/p&gt;

&lt;p&gt;After all these workloads, your cluster is tired now🤕. You have to give the energy back and make it faster. It is very important to keep your &lt;strong&gt;cluster healthy&lt;/strong&gt;😄 and fine.&lt;/p&gt;

&lt;p&gt;So it's high time to understand the Kubernetes cluster maintenance stuff now.&lt;/p&gt;

&lt;p&gt;In today's learning, we will cover cluster upgrades, backing up and &lt;strong&gt;restoring&lt;/strong&gt; data, and &lt;strong&gt;scaling&lt;/strong&gt; our Kubernetes cluster. So, let's get started!🚀&lt;/p&gt;

&lt;h2&gt;
  
  
  Cluster Upgradation♿
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Let's say you are running your application in a Kubernetes cluster with a &lt;strong&gt;master&lt;/strong&gt; node and &lt;strong&gt;worker&lt;/strong&gt; nodes. Pods and replicas are up and running.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now you want to &lt;strong&gt;upgrade&lt;/strong&gt; your nodes. Just as everyone wants to keep themselves updated, so do the nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;While a node is being upgraded, it generally goes down and is no longer in use. So you cannot put the master and worker nodes into the upgrade state at the same time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;So first, you upgrade the master node. While it is upgrading, the worker nodes cannot deploy new pods or make any modifications; only the pods that are already running remain available for users to access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The users are not impacted, though, as they still have the application up and running.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the master node is done upgrading and is up and running again, we can upgrade the worker nodes. But here too, we cannot take all the worker nodes down together, as that may impact the users who are using the applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If we have three worker nodes in our cluster, they go into the upgrade state one after the other. When &lt;code&gt;node01&lt;/code&gt; goes down, the pods and replicas running on that node shift to the other worker nodes for a while, i.e. to &lt;code&gt;node02&lt;/code&gt; and &lt;code&gt;node03&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then &lt;code&gt;node02&lt;/code&gt; goes down once &lt;code&gt;node01&lt;/code&gt; is upgraded and available again for the users. The pods of &lt;code&gt;node02&lt;/code&gt; are distributed to &lt;code&gt;node01&lt;/code&gt; and &lt;code&gt;node03&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The same procedure is followed to upgrade &lt;code&gt;node03&lt;/code&gt;. This is how we upgrade our cluster in Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There is another way to upgrade the cluster: deploy new worker nodes with the updated version into your cluster, shift the workloads from the older nodes to the new ones, and then delete the older nodes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let's do this practically. First, the master node:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubeadm&lt;/code&gt;, a tool for managing clusters, has an upgrade command that helps in upgrading them. Run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubeadm upgrade plan&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The above command prints a detailed &lt;strong&gt;upgrade plan&lt;/strong&gt;: which component versions you are currently running and which versions are available to upgrade to, if your system needs an upgrade.&lt;/p&gt;

&lt;p&gt;Then run the &lt;code&gt;drain&lt;/code&gt; command to make the node &lt;code&gt;un-schedulable&lt;/code&gt; and move its workloads elsewhere:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F348wg85q9mp4x4wj39on.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F348wg85q9mp4x4wj39on.png" alt="Image drain"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now install all the packages needed for &lt;code&gt;kubelet&lt;/code&gt;, as it is a must for running the &lt;code&gt;controlplane&lt;/code&gt; node:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyeghfdhh2g0m0t94boe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyeghfdhh2g0m0t94boe.png" alt="Image cp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, run the below command to upgrade the version (note that a specific version can only be pinned with &lt;code&gt;apt-get install&lt;/code&gt;, not &lt;code&gt;apt-get upgrade&lt;/code&gt;):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apt-get install -y kubeadm=1.12.0-00&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now to upgrade the cluster, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubeadm upgrade apply v1.12.0&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It will pull the necessary images and upgrade the cluster components.&lt;/p&gt;

&lt;p&gt;Now restart &lt;code&gt;kubelet&lt;/code&gt; so the changes take effect:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;systemctl restart kubelet&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now it's time to upgrade the worker node one at a time. Follow these commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# First move all the workloads of node-1 to the others

$ kubectl drain node-1
# this terminate all the pods from a node &amp;amp; reschedule them on others

$ apt-get upgrade -y kubeadm=1.12.0-00
$ apt-get upgrade -y kubelet=1.12.0-00

$ kubeadm upgrade node config --kubelet-version v1.12.0
# upgrade the node config for the new kubelet version

$ systemctl restart kubelet

# as we marked the node un-schedulable above, wee need to make schedule again
$ kubectl uncordon node-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqn3m3ysu8s1m3vec7ugh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqn3m3ysu8s1m3vec7ugh.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3066gimwfgwbiiw1s3gs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3066gimwfgwbiiw1s3gs.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvudtuw6hsufcc2diojsk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvudtuw6hsufcc2diojsk.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femzf1fjaighld5a2d4fp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femzf1fjaighld5a2d4fp.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxzshycmrtwdbmne8r5j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxzshycmrtwdbmne8r5j.png" alt="Image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;kubectl&lt;/code&gt; command does not work on the worker nodes, only on the master node. That's why, after applying all the commands, you come back to the &lt;code&gt;controlplane&lt;/code&gt; and make the node available for scheduling again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backup &amp;amp; Restore Methods🛗
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Till now we have deployed many applications in the Kubernetes cluster using &lt;code&gt;pods&lt;/code&gt;, &lt;code&gt;deployments&lt;/code&gt; and &lt;code&gt;services&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;So there are many files that it is important to back up, like the &lt;code&gt;ETCD&lt;/code&gt; cluster, where all the information about the cluster is stored.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Persistent volumes&lt;/strong&gt; storage is where we store the pod's data as we learned above.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can store all these files in a source code repository like &lt;strong&gt;GitHub&lt;/strong&gt;, which is a good practice: even if you lose your whole cluster, you can still deploy it again from the repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A better approach to back up your file is to query the &lt;code&gt;kube-api&lt;/code&gt; server using the &lt;code&gt;kubectl&lt;/code&gt; or by accessing the API server directly and saving all resource configurations for all objects created on the cluster as a copy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can also choose to back up the &lt;code&gt;ETCD&lt;/code&gt; server itself instead of the files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Like taking screenshots on your phone, here you take snapshots of the database using the &lt;code&gt;etcdctl&lt;/code&gt; utility's snapshot save command.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;ETCDCTL_API=3 etcdctl snapshot save &amp;lt;name&amp;gt;&lt;/code&gt;&lt;/p&gt;
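On a real kubeadm-managed cluster, etcdctl also needs to be told where etcd listens and which certificates to present, or the save will fail. A sketch using the default kubeadm certificate paths (the endpoint, paths and snapshot file name are illustrative; adjust them to your cluster):

```shell
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```

This can only run on a node with access to etcd and its certificates, so treat it as a template rather than something to copy verbatim.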

&lt;p&gt;Now, if you want to restore the snapshot: first, you have to stop the kube-api server, as the restore process requires restarting the &lt;code&gt;ETCD&lt;/code&gt; cluster and the &lt;code&gt;kube-api&lt;/code&gt; server.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;service kube-apiserver stop&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then run the &lt;code&gt;restore&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ETCDCTL_API=3 etcdctl snapshot restore &amp;lt;name&amp;gt; --data-dir &amp;lt;path&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now restart the services which we stopped earlier&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$systemctl daemon-reload
$service etcd restart
$service kube-apiserver start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Scaling Clusters📶
&lt;/h2&gt;

&lt;p&gt;We have done the scaling of pods in the Kubernetes cluster very well. Now what if you want to scale your cluster? Let's see how it can be done.&lt;/p&gt;

&lt;p&gt;Scaling a cluster means adjusting the number of worker nodes: nodes are added to or removed from the cluster according to the capacity required.&lt;/p&gt;

&lt;p&gt;Kubernetes provides several tools and methods for scaling a cluster, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manual Scaling🫥&lt;/li&gt;
&lt;li&gt;Horizontal Pod Autoscaler (HPA)▶️&lt;/li&gt;
&lt;li&gt;Cluster Autoscaler↗️&lt;/li&gt;
&lt;li&gt;Vertical Pod Autoscaler (VPA)⏫&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's have a look at them one by one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Manual Scaling&lt;/strong&gt; - As the name suggests, we have to scale it manually using the &lt;code&gt;kubectl&lt;/code&gt; command. Or if you're using any cloud provider, increase or decrease the number of worker nodes manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Horizontal Pod Autoscaler (HPA)&lt;/strong&gt; - It automatically scales the number of replicas of a deployment or a replica set based on the observed CPU utilisation or other custom metrics.&lt;/p&gt;

&lt;p&gt;When writing the definition file, you must specify which resource metric the autoscaler should track, such as memory or CPU usage. To use utilisation-based resource scaling, the metric block looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; type: Resource
 resource:
   name: cpu
   target:
     type: Utilization
     averageUtilization: 60
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is also known as "&lt;strong&gt;scaling out&lt;/strong&gt;". It involves adding more replicas of a pod to a deployment or replica set to handle the increased load.&lt;/p&gt;
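For context, that `resource` block lives inside a full HorizontalPodAutoscaler object. A minimal sketch using the `autoscaling/v2` API (the names `my-app-hpa` and `my-app` are illustrative placeholders for your own deployment):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:            # which workload this HPA scales
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60   # the fragment shown above
```

Applied with `kubectl apply -f`, this keeps the deployment between 2 and 10 replicas, targeting 60% average CPU utilisation.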

&lt;p&gt;&lt;strong&gt;Cluster Autoscaler&lt;/strong&gt; - Based on the pending pods and the available resources in the cluster, it automatically scales the number of worker nodes in a cluster.&lt;/p&gt;

&lt;p&gt;Read more about this scaler in detail &lt;a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="noopener noreferrer"&gt;here&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vertical Pod Autoscaler (VPA)&lt;/strong&gt; - It automatically adjusts the resource requests and limits of the containers in a pod based on the observed resource usage.&lt;/p&gt;

&lt;p&gt;It is also known as "&lt;strong&gt;scaling up&lt;/strong&gt;," which involves increasing the CPU, memory, or other resources allocated to a single pod.&lt;/p&gt;




&lt;p&gt;Thank you!🖤&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>maintenance</category>
      <category>devops</category>
      <category>kcdchennai</category>
    </item>
    <item>
      <title>Simply Deploying Kubernetes Workloads</title>
      <dc:creator>Poonam Pawar</dc:creator>
      <pubDate>Mon, 29 May 2023 06:00:29 +0000</pubDate>
      <link>https://forem.com/kcdchennai/simply-deploying-kubernetes-workloads-loi</link>
      <guid>https://forem.com/kcdchennai/simply-deploying-kubernetes-workloads-loi</guid>
      <description>&lt;h2&gt;
  
  
  Introduction✍️
&lt;/h2&gt;

&lt;p&gt;Before moving forward to our main topics, let's recall the sub-topics that are crucial for the next ones, so that you can grasp the context much more clearly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Every other Relationship👀
&lt;/h2&gt;

&lt;p&gt;First, let's talk about a relationship that we have to keep in mind forever from now onward.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;CONTAINER -&amp;gt; POD -&amp;gt; NODE -&amp;gt; CLUSTER&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Feh-ZmBB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f5gu9bwa8of93b2d4rqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Feh-ZmBB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f5gu9bwa8of93b2d4rqq.png" alt="Imagepods" width="794" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Are you thinking of the same as what I am thinking?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Dabbe pe dabba, uske upper phir se ek dabba...🤪&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;No?? Ok ok! Forgive me:)&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(translation: don't mind it, please. Thank you!)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So basically you got the idea: containers are encapsulated into pods, pods are placed on nodes, and a group of nodes forms a cluster.&lt;/p&gt;

&lt;p&gt;And we have already covered their workings in the previous articles.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-requisites👉
&lt;/h2&gt;

&lt;p&gt;So a must-have pre-requisite topic is &lt;strong&gt;ReplicaSets&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now let's get serious and start the conversation on &lt;code&gt;replicaset&lt;/code&gt;, aka &lt;code&gt;rs&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  ReplicaSets🗂️
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--85NzyhVQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5j65fjmtomsae9yds6hy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--85NzyhVQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5j65fjmtomsae9yds6hy.png" alt="Image RS" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Continuing the imagination from the previous blog of your application deployment...&lt;/p&gt;

&lt;p&gt;You have your application running in a pod. Suddenly your application's traffic grows, and since you didn't prepare your app for this load, it crashes.&lt;/p&gt;

&lt;p&gt;Or imagine another scenario where you have to update your app version from &lt;code&gt;1.0&lt;/code&gt; to &lt;code&gt;2.0&lt;/code&gt;, i.e. &lt;code&gt;v1&lt;/code&gt; to &lt;code&gt;v2&lt;/code&gt;, and while doing so your app stops running and users fail to access the application.&lt;/p&gt;

&lt;p&gt;In case of application failure, you need another &lt;strong&gt;instance&lt;/strong&gt; of your application running at the same time to save you from crashes, so that users do not lose access to the application.&lt;/p&gt;

&lt;p&gt;This is where the &lt;strong&gt;replication controller&lt;/strong&gt; (now superseded by its upgraded version, the &lt;strong&gt;replicaset&lt;/strong&gt;) comes in as a savior. A ReplicaSet takes care of running multiple instances of a single pod in the k8s cluster.&lt;/p&gt;

&lt;p&gt;It helps us to automatically bring up the new pod when the existing ones fail to run.&lt;/p&gt;

&lt;p&gt;You can set the replica count to one or to hundreds; it's totally your choice.&lt;/p&gt;

&lt;p&gt;It also helps us to balance the load in our k8s cluster. When demand increases, it maintains the &lt;strong&gt;load balance&lt;/strong&gt; by creating instances of the pod on other nodes too.&lt;/p&gt;

&lt;p&gt;So, it helps us to scale our application when the demand increases.&lt;/p&gt;

&lt;p&gt;A simple ReplicaSet YAML file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: nginx
  template:
    metadata:
      labels:
        tier: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f /root/replicaset-demo.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To scale your replicas &lt;code&gt;up&lt;/code&gt; or &lt;code&gt;down&lt;/code&gt;, use the &lt;code&gt;kubectl scale&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;Run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl scale rs replicaset-demo --replicas=5&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To check the replicasets use &lt;code&gt;get&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get rs&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will show you all the ReplicaSets in the system, both previously and newly created.&lt;/p&gt;

&lt;p&gt;This is how you can create replicas of your pods.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployments🏗️
&lt;/h2&gt;

&lt;p&gt;Deployments sit at the top of the hierarchy when it comes to deploying our applications to production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TWYvzcMp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4cqi4qg2n2bkgqq4dxge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TWYvzcMp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4cqi4qg2n2bkgqq4dxge.png" alt="Image Deploy" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you have many instances running in the k8s cluster and want to update your application's version, rolling updates do the job one instance after another, instead of taking them all down at once and bringing them all up together, for obvious reasons.&lt;/p&gt;

&lt;p&gt;These rolling updates and rollbacks are performed by the k8s Deployment.&lt;/p&gt;
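&lt;p&gt;As a sketch of how that looks in practice (using the &lt;code&gt;httpd-frontend&lt;/code&gt; Deployment from this article; the image tag is illustrative), a rolling update and a rollback can be done with the &lt;code&gt;kubectl rollout&lt;/code&gt; commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Trigger a rolling update by changing the container image
kubectl set image deployment/httpd-frontend httpd-frontend=httpd:2.4-alpine

# Watch the rollout progress
kubectl rollout status deployment/httpd-frontend

# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/httpd-frontend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;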

&lt;p&gt;As we know, each pod runs a single instance of our application, each container is encapsulated in a pod, and such pods are deployed using ReplicaSets.&lt;/p&gt;

&lt;p&gt;Then comes the Deployment, with all the capabilities needed to upgrade the whole production environment.&lt;/p&gt;

&lt;p&gt;A simple Deployment YAML file, &lt;code&gt;deployment-definition-httpd.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      name: httpd-frontend
  template:
    metadata:
      labels:
        name: httpd-frontend
    spec:
      containers:
      - name: httpd-frontend
        image: httpd:2.4-alpine
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f deployment-definition-httpd.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To check the deployments you have created, use the &lt;code&gt;get&lt;/code&gt; command (deployment is abbreviated as &lt;code&gt;deploy&lt;/code&gt;):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get deploy&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can also check for the specific one by giving its name:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get deploy httpd-frontend&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To get more detailed information regarding the deployment, run the &lt;code&gt;describe&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl describe deploy httpd-frontend&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To check all the workloads you have created till now, use the &lt;code&gt;get all&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get all&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3-nYZMuQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cpleobjonnik5c4mg5wx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3-nYZMuQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cpleobjonnik5c4mg5wx.png" alt="Image get" width="538" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is how you can deploy your pods.&lt;/p&gt;

&lt;h2&gt;
  
  
  StatefulSets🏗️📑
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Some topics will repeat themselves, but it is necessary in order to connect them to the next one. So bear with me.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Deployment deploys all the pods together and makes sure every pod is up and running; we have seen all of this above.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;But what if you have many servers in the form of pods that you want to run in a specific order? A Deployment will not help you here, because it specifies no order for running pods in the k8s cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For that, you need a &lt;strong&gt;StatefulSet&lt;/strong&gt;. It ensures the pods run in the sequential order you want: the first pod is deployed, and only once it is in a running state is the second one deployed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;StatefulSets are similar to Deployments; they create pods based on templates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They can scale &lt;code&gt;up&lt;/code&gt; and scale &lt;code&gt;down&lt;/code&gt; as per the requirements. And can perform &lt;strong&gt;rolling updates&lt;/strong&gt; and &lt;strong&gt;rollbacks&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Say you want to deploy the &lt;code&gt;master-node&lt;/code&gt; first, then bring up &lt;code&gt;worker-node-1&lt;/code&gt; completely and let it start running, and only after that have &lt;code&gt;worker-node-2&lt;/code&gt; come up and run in the k8s cluster. A StatefulSet can help you achieve this.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In a Deployment, when a pod goes down and a new pod comes up, it comes up with a different pod name and a different &lt;strong&gt;IP&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;But in a StatefulSet, when a pod goes down and a new pod comes up, it keeps the same name that was specifically defined for that pod.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;So StatefulSets maintain an identity for each of their pods, which helps maintain the order of deployment of your pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you want to use a StatefulSet just for the stable pod identities and not for the sequential deployment order, you can turn the ordering behavior off; you just have to make some changes in the YAML file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is not compulsory to use a StatefulSet, though; it totally depends on your application's needs. If you have servers that require an order to run, or you need a stable naming convention for your pods, then it is the right choice.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can create a StatefulSet YAML file just like the Deployment file, with a few changes; the main one is setting &lt;code&gt;kind&lt;/code&gt; to StatefulSet. Take a look:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql-h
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f statefulset-definition.yml&lt;/code&gt;&lt;/p&gt;
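&lt;p&gt;Note that &lt;code&gt;serviceName: mysql-h&lt;/code&gt; in the file above refers to a &lt;strong&gt;headless Service&lt;/strong&gt;, which gives each pod of the StatefulSet a stable DNS name. A minimal sketch of such a Service (matching the labels used above; the port is the usual MySQL port) could be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: Service
metadata:
  name: mysql-h
spec:
  clusterIP: None   # this is what makes the Service headless
  selector:
    app: mysql
  ports:
  - port: 3306
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;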

&lt;p&gt;To scale up or down use the &lt;code&gt;scale&lt;/code&gt; command with the numbers you wanted to scale:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl scale statefulset mysql --replicas=5&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is how you can work with StatefulSets.&lt;/p&gt;

&lt;h2&gt;
  
  
  DaemonSets🤖
&lt;/h2&gt;

&lt;p&gt;Till now, we have deployed the replicasets as per the demand so the required number of pods is always up and running.&lt;/p&gt;

&lt;p&gt;DaemonSets are like ReplicaSets: they help you deploy multiple instances of a pod.&lt;/p&gt;

&lt;p&gt;So what's the difference?&lt;/p&gt;

&lt;p&gt;It runs one &lt;strong&gt;copy&lt;/strong&gt; of your pod on each node of your cluster.&lt;/p&gt;

&lt;p&gt;Whenever you add a new node, it makes sure a replica of the pod is automatically added to that node, and removed automatically when the node is removed.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The main difference is that a DaemonSet makes sure one copy of the pod is always present on every node in the k8s cluster, whereas a ReplicaSet runs the specified number of replicas of the pod defined in the YAML file.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;It is also commonly used to deploy a monitoring agent, in the form of a pod, on every node.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Creating a DaemonSet file is very similar to the ReplicaSet file we created above. You just have to make some changes and you are done.&lt;/p&gt;

&lt;p&gt;The main difference is, of course, the &lt;code&gt;kind&lt;/code&gt;: ReplicaSet becomes DaemonSet.&lt;/p&gt;

&lt;p&gt;Take a look; &lt;code&gt;fluentd.yaml&lt;/code&gt; is the name of the file, with the content below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - image: registry.k8s.io/fluentd-elasticsearch:1.20
        name: fluentd-elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f fluentd.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To get the DaemonSets across all the namespaces you have created, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get daemonsets --all-namespaces&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zLQaWVO_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v78do0ps4it79sd66o5o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zLQaWVO_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v78do0ps4it79sd66o5o.png" alt="Image daemon" width="761" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is how you can create DaemonSets.&lt;/p&gt;
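&lt;p&gt;One more thing to keep in mind: if you want the DaemonSet pod to run on control-plane nodes as well, the pod template usually needs a toleration for their taint. A hedged sketch of the fragment to add (the taint key below is the common default; verify it on your cluster):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      # goes under spec.template.spec of the DaemonSet
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;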

&lt;h2&gt;
  
  
  JOBS🧑‍💻
&lt;/h2&gt;

&lt;p&gt;As we all know, Kubernetes ensures that our application is always &lt;strong&gt;up&lt;/strong&gt; and &lt;strong&gt;running&lt;/strong&gt; no matter what.&lt;/p&gt;

&lt;p&gt;Let's say our application performs some work; when the work completes successfully, it shows a success message, and then the app comes back into a running state again, because that's the nature of k8s.&lt;/p&gt;

&lt;p&gt;A Job in k8s does the work in your application, and when it succeeds it shows a completed status and then stops, as it is no longer needed.&lt;/p&gt;

&lt;p&gt;But why do we need JOBS?&lt;/p&gt;

&lt;p&gt;Let's assume you open your camera and click some pictures. This camera work runs in your application, k8s runs your app in a pod, and after use you close the camera. But k8s starts it running again, as it always tries to keep the application up and running, and you are unaware of this pod running.&lt;/p&gt;

&lt;p&gt;Then isn't it a dangerous task?&lt;/p&gt;

&lt;p&gt;That's why we need Jobs: for a specific piece of work, once the task is completed, it does not go into a running state again.&lt;/p&gt;

&lt;p&gt;So a task you may not want to run continuously, then using a Job would be appropriate. Once the task is complete, the Job can be terminated, and the pod will not start again unless a new Job is created.&lt;/p&gt;

&lt;p&gt;This happens because a spec field, &lt;code&gt;restartPolicy&lt;/code&gt;, is set to &lt;code&gt;Always&lt;/code&gt; by default.&lt;/p&gt;

&lt;p&gt;So when creating the Job YAML file, we set it to &lt;code&gt;Never&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Run the command to create a job definition file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create job throw-dice-job --image=kodekloud/throw-dice --dry-run=client -o yaml &amp;gt; throw-dice-job.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the following YAML file to create the job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: throw-dice-job
spec:
  backoffLimit: 15 # This is so the job does not quit before it succeeds.
  template:
    spec:
      containers:
      - name: throw-dice
        image: kodekloud/throw-dice
      restartPolicy: Never
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above file, we have created a Job to throw the dice until it gets a &lt;code&gt;six&lt;/code&gt;, with up to 15 chances to play.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f throw-dice-job.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is how you can create your own job file.&lt;/p&gt;
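&lt;p&gt;To watch the Job's progress and read its output, the usual commands are:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check completion status
kubectl get jobs

# See events, retries and the pods it created
kubectl describe job throw-dice-job

# Read the output of the Job's pod
kubectl logs job/throw-dice-job
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;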

&lt;h3&gt;
  
  
  CronJob🥸
&lt;/h3&gt;

&lt;p&gt;A CronJob is a JOB that will perform the task on a given &lt;strong&gt;time period&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You can &lt;strong&gt;schedule&lt;/strong&gt; your job to do a &lt;strong&gt;task&lt;/strong&gt; at a specific &lt;strong&gt;time&lt;/strong&gt; you want.&lt;/p&gt;

&lt;p&gt;It supports complex scheduling, with the ability to specify the minute, hour, day of the month, month, and day of the week.&lt;/p&gt;

&lt;p&gt;It can be used to create one-off Jobs or Jobs that run multiple times.&lt;/p&gt;

&lt;p&gt;It can be used to run parallel processing tasks by specifying the number of pods to be created.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Updating the phone at midnight to avoid interruptions while using the phone.🤳&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Email scheduling: you schedule an email using a CronJob to perform this task periodically.🧑‍💻&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let us now schedule a job to run at 21:30 hours every day.&lt;/p&gt;

&lt;p&gt;Create a CronJob for this.&lt;/p&gt;

&lt;p&gt;Use the following YAML file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: CronJob
metadata:
  name: throw-dice-cron-job
spec:
  schedule: "30 21 * * *"
  jobTemplate:
    spec:
      completions: 3
      parallelism: 3
      backoffLimit: 25 # This is so the job does not quit before it succeeds.
      template:
        spec:
          containers:
          - name: throw-dice
            image: kodekloud/throw-dice
          restartPolicy: Never
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is how you can use the CronJob file.&lt;/p&gt;
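&lt;p&gt;Once it is applied, you can check the schedule and watch the Jobs it spawns with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get cronjob throw-dice-cron-job
kubectl get jobs --watch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;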

&lt;p&gt;Every example I have given above is just for the sake of explanation. Don't take that seriously.&lt;/p&gt;

&lt;p&gt;Now we have deployed every workload of the Kubernetes cluster. It takes practice to get fluent at creating replicas, deployments, jobs, etc. So keep practicing.&lt;/p&gt;




&lt;p&gt;Thank you!🖤&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>deployments</category>
      <category>devops</category>
      <category>kcdchennai</category>
    </item>
    <item>
      <title>Kubernetes Networking</title>
      <dc:creator>Poonam Pawar</dc:creator>
      <pubDate>Sat, 27 May 2023 09:18:32 +0000</pubDate>
      <link>https://forem.com/kcdchennai/kubernetes-networking-18mk</link>
      <guid>https://forem.com/kcdchennai/kubernetes-networking-18mk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction✍️
&lt;/h2&gt;

&lt;p&gt;Networking in k8s is a crucial topic to understand and a must for working with k8s. Here we will discuss networking as it is used directly in k8s, so before you come to this topic you must have knowledge of basic computer networking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Switching&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Routing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Gateways&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bridges&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  PODS📦
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Before jumping right into the Kubernetes networking stuff. First, let's recall the Pods concept because we will be using Pods in every sentence further.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Whatever we are doing, the main goal is to deploy our application in the form of containers on a worker node in the cluster which must be up and running.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubernetes does not &lt;strong&gt;deploy&lt;/strong&gt; the containers directly on the nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Instead, the containers are encapsulated into an object known as a Pod.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Pod is a single instance of an application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Pod is the smallest object that you can create in Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A simple Pod YAML file:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo-container
    image: nginx
    ports:
    - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Then, once you have created the YAML file, run the &lt;code&gt;kubectl apply&lt;/code&gt; command to deploy it to the Kubernetes cluster, with the file name you have given.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f demo-pod.yaml&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is how you can start creating PODs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Network Policies📋
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Kubernetes, the rules for routing network traffic between pods are set by network policies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A network policy is a set of rules that defines the communication between the pods in a cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is a powerful tool used for the security of network traffic in k8s clusters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can allow traffic from one specific pod to another.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can restrict traffic to a specific set of ports and protocols.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is implemented through the NetworkPolicy API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can be applied to namespaces or individual pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A simple NetworkPolicy YAML file:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: demo-network-policy
  spec:
    podSelector:
      matchLabels:
        app: my-app
    policyTypes:
    - Ingress
    ingress:
    - from:
      - podSelector:
          matchLabels:
            app: allowed-app
      ports:
      - protocol: TCP
        port: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;For the deployment of the Network Policy YAML file, use the &lt;code&gt;kubectl&lt;/code&gt; apply command:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f demo-network-policy.yaml&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is how you can define your own network policies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Services🪖
&lt;/h2&gt;

&lt;p&gt;Let's say, you deployed a pod having a web application running on it and you want to access your application outside the pod. Then how to access the application as an external user?&lt;/p&gt;

&lt;p&gt;The k8s cluster set up on your local machine has an IP address similar to your system's IP, but the pod has a different &lt;strong&gt;IP address&lt;/strong&gt;, which is internal to the node.&lt;/p&gt;

&lt;p&gt;As the pod and your system are on different addresses, there is no way to access the application directly from your system.&lt;/p&gt;

&lt;p&gt;So we need something in between to fill the gap and give us access to the web application directly from our systems.&lt;/p&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kGeyCPj---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/imjqjst89wdsppr0d1y8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kGeyCPj---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/imjqjst89wdsppr0d1y8.png" alt="Image svc" width="650" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where &lt;strong&gt;Kubernetes Services&lt;/strong&gt; comes into the picture.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;k8s Services enable communication between various components inside and outside the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It helps us to connect the application together with other applications and users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is an object just like PODs, Replicas and Deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It listens to a port on the node and forwards the requests to the port where the application is running inside the pod.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A simple service yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
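&lt;p&gt;Then apply it and verify (assuming you saved the file as &lt;code&gt;demo-service.yaml&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f demo-service.yaml
kubectl get svc demo-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;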



&lt;h2&gt;
  
  
  Service Types📑:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;NodePort Service&lt;/li&gt;
&lt;li&gt;ClusterIP Service&lt;/li&gt;
&lt;li&gt;LoadBalancer Service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;1) NodePort Service - It makes an internal POD accessible through a port on the Node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q0Li1KBg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgo6sd54q06d2tvv04y9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q0Li1KBg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgo6sd54q06d2tvv04y9.png" alt="Figure 1: Kubernetes NodePort service" width="738" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The port on the pod, where the application runs, is the &lt;code&gt;targetPort&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The port on the service itself is simply called the &lt;code&gt;port&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;And the port on the node is the &lt;code&gt;nodePort&lt;/code&gt;, through which we access the web server externally.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The valid range of NodePort values (by default) is 30000 to 32767.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Labels &amp;amp; selectors must be described in the spec section when there are multiple pods on a node or multiple nodes in a cluster.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2) ClusterIP Service - It creates a virtual IP inside the cluster to enable communication between different services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DlykfcvK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dfi0hikbd6surfmbdh9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DlykfcvK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dfi0hikbd6surfmbdh9.png" alt="Figure2: Kubernetes ClusterIP service" width="613" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A service can be created for a tier of your application, like the backend or frontend; it groups all the pods of that tier and provides a single interface for other pods to access them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3) LoadBalancer Service - It exposes the service on a publicly accessible IP address in a supported cloud provider.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZL5QP7w6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dqt6yy9o3sfijmblms2v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZL5QP7w6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dqt6yy9o3sfijmblms2v.png" alt="Figure3: Kubernetes LoadBalancer service" width="597" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A demo NodePort yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: my-demo-service
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30007
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f myservice.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is how you can create a service of the NodePort type. To use &lt;code&gt;ClusterIP&lt;/code&gt; or &lt;code&gt;LoadBalancer&lt;/code&gt; instead, just change the &lt;code&gt;type&lt;/code&gt; in the &lt;code&gt;spec&lt;/code&gt;, along with the name and ports as needed.&lt;/p&gt;
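&lt;p&gt;Once the NodePort service is up, the app is reachable on any node's IP at port &lt;code&gt;30007&lt;/code&gt;. For example, on a minikube setup (the URL it prints will vary):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print the reachable URL for the service
minikube service my-demo-service --url

# Or hit the node IP and NodePort directly
curl http://$(minikube ip):30007
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;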

&lt;h2&gt;
  
  
  CNI (Container Network Interface)🌐
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;CNI is a set of standards that defines how networking should be configured in a container runtime environment like Docker or Kubernetes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is a simple plugin-based architecture.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It defines how the plugin should be developed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Plugins are responsible for configuring the network interface.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CNI comes with a set of supported plugins already. Like &lt;strong&gt;bridge, VLAN, IPVLAN, MACVLAN&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker does not implement CNI. It has its own set of standards known as CNM ie, Container Network Model.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You cannot tell Docker to use a CNI plugin when running a container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;But you can do it manually, by creating a docker container without any network configuration:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;docker run --network=none nginx&lt;/code&gt;&lt;br&gt;
and then invoke the bridge plugin yourself.&lt;/p&gt;

&lt;h3&gt;
  
  
  CNI in k8s🕸️
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kubernetes is responsible for creating container network namespaces&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attaching those namespaces to the right network&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can see the network plugins set to CNI by running the below command&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;ps -aux | grep kubelet&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The CNI plugin is configured in the kubelet service on each node in the cluster&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The CNI bin directory has all the supported CNI plugins as executables.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;ls /opt/cni/bin&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;k8s supports various CNI plugins like Calico, Weave Net, Flannel, DHCP and many more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can identify which plugin is currently in use with the help of this command:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;ls /etc/cni/net.d&lt;/code&gt;&lt;/p&gt;
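&lt;p&gt;For illustration, a &lt;strong&gt;bridge&lt;/strong&gt; configuration in that directory might look like this (the network name and subnet are made up):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;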

&lt;ul&gt;
&lt;li&gt;The kubelet reads this configuration to find out which plugin will be used.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  DNS in k8s📡
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Kubernetes deploys a built-in DNS server by default when you set up a cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Let's say we have a three-node k8s cluster with pods and services deployed in it, each with a name and an IP address assigned.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;All pods and services can connect to each other using their IPs, and to make the web app available to external users we have services defined on them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each pod is pointed at the cluster DNS server through its resolver configuration, which you can inspect from inside the pod with&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;cat /etc/resolv.conf&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The k8s DNS keeps a record of the created services and maps each service name to its IP address, so anyone can reach a service by its name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The DNS implemented by Kubernetes was originally known as kube-dns; in later versions CoreDNS became the default.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The CoreDNS server is deployed as a POD in the kube-system namespace in the k8s cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To look for DNS Pods, run the command&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods -n kube-system&lt;/code&gt;&lt;/p&gt;
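&lt;p&gt;You can verify name resolution from inside any running pod; the pod and service names here are placeholders:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl exec -it mypod -- nslookup myservice&lt;/code&gt;&lt;/p&gt;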

&lt;h2&gt;
  
  
  Ingress🔵
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Let's say you have an application deployed in the k8s cluster, with a database pod and the required services exposed externally with NodePort. The replicas scale up and down as per the demand on the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You configured the DNS server so that anyone can access the web app by typing its name instead of the IP address every time,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;like &lt;code&gt;http://my-app.com:port&lt;/code&gt; instead of &lt;code&gt;http://&amp;lt;node-ip&amp;gt;:port&lt;/code&gt;. And if you do not want users to type the port number either, you add a proxy-server layer between your cluster and the DNS server, so that anyone can access it by just&lt;br&gt;
&lt;code&gt;http://my-app.com&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now, you want some new features added to your application. You develop the code and deploy the &lt;code&gt;webapp&lt;/code&gt; again, with new pods and services, into the cluster. Then you have to set up the proxy server in between again so the &lt;code&gt;webapp&lt;/code&gt; stays accessible under a single domain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Maintaining all of this outside the cluster becomes tedious as your application scales. Every time a new feature is added, you have to redo all the layering again and again.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This is where Ingress comes in. Ingress helps users to access your application with a single URL.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ingress is just another k8s definition file inside the cluster&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You have to expose it one time to make it accessible outside the cluster either with NodePort or the cloud provider like GCP which uses the LoadBalancer service.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Ingress Controller🔹
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To use ingress in a k8s cluster, an ingress controller must be deployed and configured.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Commonly used ingress controllers are &lt;strong&gt;Nginx, Traefik, and Istio&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy one of them on the k8s cluster and configure it to route traffic to your services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The configuration involves defining &lt;strong&gt;URL&lt;/strong&gt; routes, configuring &lt;strong&gt;SSL&lt;/strong&gt; certificates, etc. This set of rules to configure ingress is called an &lt;strong&gt;Ingress Resource&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the controller is running, Ingress resources can be created and configured to route traffic to different services in the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They are created using definition files like the previous ones we created for Pods, Services and Deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An Ingress Controller is not built into a cluster by default. You have to deploy it &lt;strong&gt;manually&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A simple &lt;strong&gt;ingress&lt;/strong&gt; yaml file for the sake of explanation:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: minimal-ingress
        annotations:
          nginx.ingress.kubernetes.io/rewrite-target: /
      spec:
        ingressClassName: nginx-example
        rules:
        - http:
            paths:
            - path: /testpath
              pathType: Prefix
              backend:
                service:
                  name: test
                  port:
                    number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a single ingress called 'simple' that directs requests for foo.com/bar to svc1:8080, with a TLS secret "my-cert"&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      kubectl create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is how you can start working with Ingress in Kubernetes.&lt;/p&gt;




&lt;p&gt;Thank you!🖤&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>networking</category>
      <category>devops</category>
      <category>kcdchennai</category>
    </item>
    <item>
      <title>Kubernetes PODS &amp; SERVICES Discovery</title>
      <dc:creator>Poonam Pawar</dc:creator>
      <pubDate>Thu, 25 May 2023 13:38:26 +0000</pubDate>
      <link>https://forem.com/kcdchennai/kubernetes-pods-services-discovery-27m2</link>
      <guid>https://forem.com/kcdchennai/kubernetes-pods-services-discovery-27m2</guid>
      <description>&lt;h2&gt;
  
  
  Introduction✍️
&lt;/h2&gt;

&lt;p&gt;In this blog, we are going to talk about how all these workloads can be exposed to the outside world with the help of Services and DNS, so that external users can also access your application.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to expose Kubernetes workloads to the outside world using Services?🌏
&lt;/h2&gt;

&lt;p&gt;Before heading to these topics you must have the knowledge of all the workloads.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-requisites👇
&lt;/h3&gt;

&lt;p&gt;PODS📦&lt;br&gt;
DNS🧑‍💻&lt;br&gt;
Services⚙️&lt;/p&gt;

&lt;p&gt;Services are available in three main types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;ClusterIP&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;NodePort&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;LoadBalancer&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ClusterIP is the default service type created by k8s. With this type, you can access your application only from within your cluster. It provides the benefits of load balancing and discovery, which we will learn about in detail in the section below.&lt;/p&gt;

&lt;p&gt;If you want to check all your created services, use get command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GoTqZOxd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g5plplu46bneau3lghrh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GoTqZOxd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g5plplu46bneau3lghrh.png" alt="svc" width="747" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, you can write with the alias:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl&lt;/code&gt; aka &lt;code&gt;k&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;services&lt;/code&gt; aka &lt;code&gt;svc&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0EIEbxtB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/102c8x0ki8ad9s0b9i3n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0EIEbxtB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/102c8x0ki8ad9s0b9i3n.png" alt="k s" width="760" height="135"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Trust me, this saves a lot of time.🥺&lt;/p&gt;

&lt;p&gt;And also, I have covered the file creation already in detail.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;NodePort&lt;/code&gt; lets you access your application from within the cluster, as well as from outside by anyone who can reach the worker nodes you set up during creation.&lt;/p&gt;

&lt;p&gt;Creating a &lt;code&gt;service&lt;/code&gt; of &lt;code&gt;NodePort&lt;/code&gt; type, the &lt;code&gt;/root/service-definition-1.yaml&lt;/code&gt; file as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
  namespace: default
spec:
  ports:
  - nodePort: 30080
    port: 8080
    targetPort: 8080
  selector:
    name: simple-webapp
  type: NodePort
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following command to create the &lt;code&gt;webapp-service&lt;/code&gt; service:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f /root/service-definition-1.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WxomB9UQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b8lqqfth0pc87g4o55np.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WxomB9UQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b8lqqfth0pc87g4o55np.png" alt="svc" width="761" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Create the service-definition-1.yaml -&amp;gt; Add the service template using vi/vim -&amp;gt; Edit the details in the file -&amp;gt; Use command apply to create it&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I am not going into details about the types of services because I have covered everything in my previous blog learnings.🫣&lt;/p&gt;

&lt;p&gt;We will only be focusing on:&lt;/p&gt;

&lt;h3&gt;
  
  
  Exposing Kubernetes workloads to the outside world👶
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;LoadBalancer&lt;/code&gt; is the type of service that will expose your application to the outside world.&lt;/p&gt;

&lt;p&gt;This will only work on &lt;strong&gt;cloud providers&lt;/strong&gt;. The cloud provider creates an elastic load balancer with a public IP address through which you can access your application.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Cloud Controller Manager&lt;/strong&gt;, which is inside your master node, requests a public IP address from the cloud provider, e.g. AWS, and returns it to the service so that anyone can access your application using that IP address.&lt;/p&gt;

&lt;p&gt;To create a service obviously, we need all the pods and &lt;code&gt;deployment-definition&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;So here it is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: my/webapp:latest
        ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, time to create a &lt;code&gt;service-definition-2&lt;/code&gt; yaml file with the &lt;code&gt;LoadBalancer&lt;/code&gt; type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First run,&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f /root/service-definition-2.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now the service is created. You can access this through the internet with the correct IP address and port number provided by this service.&lt;/p&gt;

&lt;p&gt;Check the IP address and Ports by &lt;code&gt;get&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get svc &amp;lt;name&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to discover Services and Pods within a Kubernetes cluster using DNS and other mechanisms?🧐
&lt;/h2&gt;

&lt;p&gt;We know how a Service works and what its purpose is. But did you ever think about how it tackles these problems? How does it route users' requests? How does it provide everything the users request?&lt;/p&gt;

&lt;p&gt;Let's see the solution:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service&lt;/strong&gt; act as a &lt;strong&gt;load balancer&lt;/strong&gt; by using a component known as &lt;code&gt;kube-proxy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So instead of making users remember IP addresses, it gives them a specific service name to access the application.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kube-proxy&lt;/code&gt; will forward the requests coming from several users.&lt;/p&gt;

&lt;p&gt;So without Services, your application will not work even if you have everything else, like pods and deployments, ready. When a pod goes down, you are unable to serve your application to users if no Service is available.&lt;/p&gt;

&lt;p&gt;But the question remains: how does the Service handle the IP addresses, given that every time a pod goes down it comes back up with a different address?&lt;/p&gt;

&lt;p&gt;If three pods are deployed in the cluster, each with its own IP address, and every pod that goes down comes back with a new IP, how does the Service keep up with the new IPs while managing all the users' requests?&lt;/p&gt;

&lt;p&gt;This is handled by Service Discovery.&lt;/p&gt;

&lt;h3&gt;
  
  
  Service Discovery🕵️
&lt;/h3&gt;

&lt;p&gt;This comes up with a new process called &lt;strong&gt;Labels&lt;/strong&gt; &amp;amp; &lt;strong&gt;Selectors&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of working with IP addresses and keeping track of it. Service is using the concept of labels &amp;amp; selectors.&lt;/p&gt;

&lt;h3&gt;
  
  
  Labels📋
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It is just a key-value pair.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can name it by your conventions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is used to organize and select the objects in the cluster&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can label Pods, Deployments, and Services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Example, &lt;code&gt;key=app&lt;/code&gt; and &lt;code&gt;value=myapp&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;metadata:
  labels:
    app: myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Selectors✅
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A selector must use the same key-value pairs as the labels it targets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is used to select objects based on their labels.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can select a single object or a group of objects that match specific labels.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Example, create a Service that selects all Pods with the label &lt;code&gt;app=myapp&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yamlCopy codespec:
  selector:
    app: myapp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will give labels in our pod definition yaml file so that whenever a pod goes down and comes up with new IPs, it will always have the same label in it as we defined in the template. A new pod is always created with the given template.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The replica sets will deploy the pods using the &lt;strong&gt;labels&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Services&lt;/strong&gt; will keep track of all the deployments by the labels.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Labels&lt;/strong&gt; are just a name given to a pod nothing else.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So this is the service discovery mechanism that uses labels &amp;amp; selectors.&lt;/p&gt;
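&lt;p&gt;You can try the same selection yourself with &lt;code&gt;kubectl&lt;/code&gt;, assuming pods labelled &lt;code&gt;app=myapp&lt;/code&gt; exist:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods -l app=myapp&lt;/code&gt;&lt;/p&gt;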

&lt;p&gt;So you create a deployment definition yaml file, and in its metadata section you define a label, which can be any name you choose for your deployed application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pod Discovery Using DNS🌐
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;CoreDNS&lt;/strong&gt; server is deployed as a POD in &lt;code&gt;kube-system&lt;/code&gt; namespace in the k8s cluster.&lt;/p&gt;

&lt;p&gt;It creates a service to make it available to other components within a cluster. And the service is named &lt;code&gt;kube-dns&lt;/code&gt; by default.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get service -n kube-system&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It uses a file called &lt;strong&gt;Corefile&lt;/strong&gt; which is located at &lt;code&gt;/etc/coredns&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cat /etc/coredns/Corefile&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In this file, you have all the configurations of plugins.&lt;/p&gt;

&lt;p&gt;One of the plugins that make &lt;code&gt;CoreDNS&lt;/code&gt; work with k8s is the Kubernetes plugin.&lt;/p&gt;

&lt;p&gt;A top-level domain name is set as a &lt;code&gt;cluster.local&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;DNS&lt;/strong&gt; configuration on &lt;strong&gt;PODS&lt;/strong&gt; is done by k8s automatically when the pod is created.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;config&lt;/code&gt; file of the &lt;code&gt;kubelet&lt;/code&gt; will give you the IP of the DNS server and the domain:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cat /var/lib/kubelet/config.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can access your &lt;code&gt;service&lt;/code&gt; by just&lt;/p&gt;

&lt;p&gt;&lt;code&gt;name-service&lt;/code&gt; or&lt;/p&gt;

&lt;p&gt;&lt;code&gt;name-service.default&lt;/code&gt; or&lt;/p&gt;

&lt;p&gt;&lt;code&gt;name-service.default.svc&lt;/code&gt; or&lt;/p&gt;

&lt;p&gt;&lt;code&gt;name-service.default.svc.cluster.local&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;service-name&amp;gt;.&amp;lt;namespace&amp;gt;.svc.cluster.local&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Example, &lt;code&gt;service-name=mywebapp&lt;/code&gt; and &lt;code&gt;namespace=default&lt;/code&gt; which is created automatically as default. Then,&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mywebapp.default.svc.cluster.local&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can also manually look up the service using &lt;code&gt;nslookup&lt;/code&gt; or the host command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;host &amp;lt;name-service&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pod Discovery Using Environment Variables🔗
&lt;/h3&gt;

&lt;p&gt;Kubernetes automatically sets environment variables inside each container with information about the Services and Pods running in the k8s cluster.&lt;/p&gt;

&lt;p&gt;These variables are used to discover the IP address, port numbers and hostname for the services and running pods.&lt;/p&gt;

&lt;p&gt;Example,&lt;/p&gt;

&lt;p&gt;the &lt;code&gt;&amp;lt;SERVICE_NAME&amp;gt;_SERVICE_HOST&lt;/code&gt; and &lt;code&gt;&amp;lt;SERVICE_NAME&amp;gt;_SERVICE_PORT&lt;/code&gt; environment variables are automatically set in containers for each active Service.&lt;/p&gt;

&lt;p&gt;the &lt;code&gt;HOSTNAME&lt;/code&gt; environment variable is set to the hostname of the Pod, and a variable like &lt;code&gt;MY_POD_IP&lt;/code&gt; can be set to the Pod's IP address via the downward API.&lt;/p&gt;

&lt;p&gt;Let's create a simple &lt;code&gt;service&lt;/code&gt; yaml file to use for further process with type &lt;code&gt;ClusterIP&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    app: myapp
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's create a &lt;code&gt;pod&lt;/code&gt; YAML file that runs &lt;code&gt;image=nginx&lt;/code&gt; by setting two &lt;code&gt;env&lt;/code&gt; variables &lt;code&gt;MYAPP_SERVICE_HOST&lt;/code&gt; and &lt;code&gt;MYAPP_POD_IP&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: nginx
    env:
    - name: MYAPP_SERVICE_HOST
      value: "myapp-service.default.svc.cluster.local"
    - name: MYAPP_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;MYAPP_SERVICE_HOST&lt;/code&gt; is set to &lt;code&gt;myapp-service.default.svc.cluster.local&lt;/code&gt; and&lt;/p&gt;

&lt;p&gt;&lt;code&gt;MYAPP_POD_IP&lt;/code&gt; is set using a &lt;code&gt;fieldRef&lt;/code&gt; that retrieves the Pod's IP address from its status.&lt;/p&gt;
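&lt;p&gt;Once the pod is running, you can confirm both variables are set (the output will depend on your cluster):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl exec myapp-pod -- printenv MYAPP_SERVICE_HOST MYAPP_POD_IP&lt;/code&gt;&lt;/p&gt;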

&lt;p&gt;Together, these files define a web application that can be accessed using the virtual IP address assigned to the Service.&lt;/p&gt;

&lt;p&gt;When a client sends a request to the Service's IP address on port 80, the request will be forwarded to one of the Pods with the &lt;code&gt;app: myapp&lt;/code&gt; label and the response will be sent back to the client.&lt;/p&gt;

&lt;p&gt;This is how you can discover pods using environment variables.&lt;/p&gt;




&lt;p&gt;Thank you!🖤&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>kcdchennai</category>
    </item>
    <item>
      <title>Kubernetes: Storage &amp; Security</title>
      <dc:creator>Poonam Pawar</dc:creator>
      <pubDate>Thu, 18 May 2023 10:48:47 +0000</pubDate>
      <link>https://forem.com/kcdchennai/kubernetes-storage-security-3bp</link>
      <guid>https://forem.com/kcdchennai/kubernetes-storage-security-3bp</guid>
      <description>&lt;h2&gt;
  
  
  Introduction✍️
&lt;/h2&gt;

&lt;p&gt;It's time to share Secrets!🤫&lt;/p&gt;

&lt;p&gt;Obviously not mine.😜&lt;/p&gt;

&lt;p&gt;In this blog let's talk about the &lt;strong&gt;storage&lt;/strong&gt;, storage &lt;strong&gt;classes&lt;/strong&gt;, &lt;strong&gt;security&lt;/strong&gt; policies, &lt;strong&gt;network&lt;/strong&gt; policies, security layers, and everything that comes under the store room and the security room of the Kubernetes. These topics play a very crucial role in the making of k8s clusters. Let's see how it does!&lt;/p&gt;

&lt;h2&gt;
  
  
  Storage🛢️
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Volumes🎚️
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Kubernetes, pods are not permanent. Once a pod has served its purpose it is destroyed, and a new one comes up in its place. Just like with docker containers, the data processed inside the pod is deleted when the pod is destroyed. This causes data loss, which is a big problem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To resolve this problem, volumes come in.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We attach a volume to the pod. Now, whenever the pod processes some data, that data also gets stored in the volume. Even if the pod is deleted, our data is still alive.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The data generated by the pod is now stored in the volume!&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's create a simple single node k8s cluster &lt;code&gt;volume-specific&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: random-number-generator
spec:
  containers:
  - image: alpine
    name: alpine
    command: ["/bin/sh","-c"]
    args: ["shuf -i 0-100 -n 1 &amp;gt;&amp;gt; /opt/number.out;"]
    volumeMounts:
    - mountPath: /opt
      name: data-volume
  volumes:
  - name: data-volume
    hostPath:
      path: /data
      type: Directory

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above yaml file, we generate a random number in a given range and save it in our volume. &lt;code&gt;volumeMounts&lt;/code&gt; makes the volume available at a path inside the pod, and &lt;code&gt;hostPath&lt;/code&gt; gives the directory on the host where the data will be stored.&lt;/p&gt;

&lt;p&gt;This is how even after pod deletion we can still have our data in &lt;code&gt;/data dir&lt;/code&gt;.&lt;/p&gt;
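&lt;p&gt;You can confirm this on the node after the pod exits, using the file name from the &lt;code&gt;args&lt;/code&gt; above:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cat /data/number.out&lt;/code&gt;&lt;/p&gt;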

&lt;p&gt;&lt;em&gt;Note: This is only for single-node k8s clusters.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;When you work on a large-scale project, there will not be a single-node cluster. With hundreds of Pods running at a time, giving /data as the volume path for all the pods' data is not recommended in a multi-node cluster, for obvious reasons.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This is because the PODs would use the /data directory on all the nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;So we need proper storage solutions. Kubernetes supports different standard storage solutions like NFS, glusterFS and many more.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will get a basic idea of how to define them in the further topics.&lt;/p&gt;

&lt;h4&gt;
  
  
  Persistent Volumes📻
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Whenever we create volumes, every time we need to configure them in a pod-definition file. So every configuration information is required to configure storage within the file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now imagine we are running hundreds of pods and each time when a user wants to deploy the pods, they would have to configure storage every time for each pod in their environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;So doing this every time is not a best practice for us.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Here, &lt;strong&gt;Persistent Volumes&lt;/strong&gt; come up. It is a cluster-wide room of storage volumes configured by an administrator to make use of the users deploying applications on the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now we can use the storage using persisting volume claims&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's create a definition yaml file named pv-definition.yaml for this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  accessModes:
    - ReadWriteOnce
  capacity: 
    storage: 1Gi
  awsElasticBlockStore:
    volumeID: &amp;lt;volume-id&amp;gt;
    fsType: ext4

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There can be different kinds of &lt;code&gt;accessMode&lt;/code&gt; like,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;ReadOnlyMany&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ReadWriteOnce&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ReadWriteMany&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;awsElasticBlockStore&lt;/code&gt; is one of the supported storage solutions we talked about above for multi-node clusters. It takes a specific &lt;code&gt;volumeID&lt;/code&gt; and &lt;code&gt;fsType&lt;/code&gt; instead of just a &lt;code&gt;/data&lt;/code&gt; dir path, which differentiates volumes better.&lt;/p&gt;

&lt;p&gt;Now run,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f pv-definition.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check the created persistent volumes, you know what command to use now:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get persistentvolume&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Persistent Volume Claims📑
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;After creating the persistent volume, it's time to create a persistent volume claim to make the storage available to a pod.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Persistent volumes and Persistent volume claims are two separate objects in k8s.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An administrator creates a persistent volume and a user creates a persistent volume claim to use the storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubernetes binds persistent volumes to claims based on the requests and the properties set on the volumes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubernetes checks for sufficient capacity while binding volumes to claims.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It also uses labels &amp;amp; selectors to bind to a specific persistent volume in case of multiple possible matches.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's create a claim definition file named &lt;code&gt;pvc-definition.yaml&lt;/code&gt; now,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources: 
    requests:
      storage: 500Mi

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When this file is created, Kubernetes looks for the &lt;code&gt;volume&lt;/code&gt; file which was created above.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f pvc-definition.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;accessModes&lt;/code&gt; match; then Kubernetes checks the storage capacity, and the &lt;code&gt;500Mi&lt;/code&gt; request fits in the &lt;code&gt;1Gi&lt;/code&gt; volume, which is the best match since there is no other volume available.&lt;/p&gt;

&lt;p&gt;So the claim is bound to the volume.&lt;/p&gt;

&lt;p&gt;To check the claims file use the get command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get persistentvolumeclaim&lt;/code&gt;&lt;/p&gt;
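&lt;p&gt;To actually use the bound claim, reference it from a pod definition. A minimal sketch (the nginx image and mount path are just for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: mypv
  volumes:
  - name: mypv
    persistentVolumeClaim:
      claimName: myclaim
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;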

&lt;h3&gt;
  
  
  Storage Classes🗑️
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We have created persistent volumes and persistent volume claims, but before creating the volume you must have created a disk on Google Cloud.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You have to manually provision the disk whenever your application needs storage on Google Cloud, and then manually create a persistent volume file using the same name defined in the disk specification.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This whole process is called Static Provisioning Volumes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To automate this process fully, we have Storage Class.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You just have to define a provisioner, like Google Storage, and everything else is handled by the provisioner. It automatically provisions the storage and attaches it to the pod when a claim is made.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This is called Dynamic Provisioning of Volumes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now you don't have to create persistent volumes manually.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There are many storage provisioners such as &lt;strong&gt;AWSEBS&lt;/strong&gt;, &lt;strong&gt;AzureFile&lt;/strong&gt;, &lt;strong&gt;AzureDisk&lt;/strong&gt;, &lt;strong&gt;CephFS&lt;/strong&gt; and many more.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Take a look at a storage class definition file as &lt;code&gt;sc-definition.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: google-storage
provisioner: kubernetes.io/gce-pd

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After defining this file, set the storage class name in the &lt;code&gt;pvc&lt;/code&gt; file to the same name you defined in the &lt;code&gt;sc&lt;/code&gt; file, &lt;code&gt;google-storage&lt;/code&gt; in this case.&lt;/p&gt;
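&lt;p&gt;For example, the earlier &lt;code&gt;myclaim&lt;/code&gt; definition would reference the class like this (a sketch; only the &lt;code&gt;storageClassName&lt;/code&gt; line is new):&lt;/p&gt;

```yaml
# pvc-definition.yaml, now requesting dynamic provisioning
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: google-storage   # must match the StorageClass name
  resources:
    requests:
      storage: 500Mi
```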

&lt;p&gt;You can create different kinds of classes in storage using different types of disks like&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Silver🔘 Storage Class with the standard disk,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Gold🟠 Storage Class with SSD drives and&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Platinum⚪ Storage Class with SSD drives and replication.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
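&lt;p&gt;As a sketch, a Gold class on Google Cloud could pass the disk type as a provisioner parameter; the class name here is illustrative, and &lt;code&gt;pd-ssd&lt;/code&gt;/&lt;code&gt;pd-standard&lt;/code&gt; are parameters of the &lt;code&gt;gce-pd&lt;/code&gt; provisioner:&lt;/p&gt;

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd   # a Silver class would use pd-standard instead
```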

&lt;h3&gt;
  
  
  StatefulSets🏗️
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It creates pods based on templates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can scale up and down as per requirements, and can perform rolling updates and rollbacks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Say you want the master pod deployed first, then worker-1 brought up completely and running, and only after that worker-2 brought up and joined into the k8s cluster. A StatefulSet can help you achieve this ordered startup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When a pod goes down and a new pod comes up, it gets the same stable name as the pod it replaces.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It maintains a sticky identity for each of its pods, which helps maintain the order of deployment of your pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you want StatefulSets just for stable pod identities and not for sequential deployment order, you can turn the ordering off with a small change in the YAML file (setting &lt;code&gt;podManagementPolicy: Parallel&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: If you have already learned this topic in my previous blog then feel free to skip this topic😇. Otherwise, continue learning!✊&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That said, you don't always need a StatefulSet; it totally depends on your application. If you have servers that must start in a particular order, or you need a stable naming convention for your pods, then it is the right choice.&lt;/p&gt;

&lt;p&gt;You can create a &lt;code&gt;StatefulSet&lt;/code&gt; yaml file just like the deployment file with some changes like the main one is kind as &lt;code&gt;StatefulSet&lt;/code&gt;. Take a look:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql-h

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;then run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f statefulset-definition.yml&lt;/code&gt;&lt;/p&gt;
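&lt;p&gt;Note that &lt;code&gt;serviceName: mysql-h&lt;/code&gt; above refers to a headless Service that must exist so the pods get stable DNS names; a minimal sketch (the port is an assumption for MySQL):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-h
spec:
  clusterIP: None      # headless: no load-balanced IP, per-pod DNS instead
  selector:
    app: mysql
  ports:
  - port: 3306
```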

&lt;p&gt;To scale up or down, use the &lt;code&gt;scale&lt;/code&gt; command with the number of replicas you want:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl scale statefulset mysql --replicas=5&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is how you can work with &lt;strong&gt;StatefulSets&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security🕵️
&lt;/h2&gt;

&lt;h3&gt;
  
  
  RBAC (Role-Based Access Control)🛂
&lt;/h3&gt;

&lt;p&gt;As the name suggests, it is used to define roles for users or groups, and to bind those roles to whoever is granted the permissions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simply put, it grants permissions that decide who can do what.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's quickly see how to create a role definition file named &lt;code&gt;developer-role.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "list", "create", "delete"]
- apiGroups: [""] # "" indicates the core API group
  resources: ["configmaps"]
  verbs: ["create"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above file, we create a role for a developer that can get, list, create and delete pods, and create ConfigMaps.&lt;/p&gt;

&lt;p&gt;Now run &lt;code&gt;create&lt;/code&gt; command to create the role:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f developer-role.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, link the user to that role. For this create another object file called &lt;strong&gt;RoleBinding&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: devuser-developer-binding
subjects:
- kind: User
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef: 
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run &lt;code&gt;create&lt;/code&gt; command to bind this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f devuser-developer-binding.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To view the created roles, run the &lt;code&gt;get&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get roles&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To see the bindings you have created run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get rolebindings&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To view the role in detail run &lt;code&gt;describe&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl describe role developer&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To check whether you can perform a particular action, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl auth can-i create deployments&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;By changing the verb and resource you can check any access you want to know about, for example &lt;code&gt;kubectl auth can-i delete nodes&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pod Security Policies (PSPs)📦
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It is a security policy defined for users or groups to control how pods run in the Kubernetes cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Yes! You guessed it right: it works together with RBAC, granting permissions and binding them to whoever they are granted to.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It defines a set of security policies that are applied to pods based on their labels and annotations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When a pod is created, Kubernetes checks the policy before letting the pod proceed from the creating to the running state.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: Removed feature&lt;br&gt;
PodSecurityPolicy was deprecated in Kubernetes v1.21, and removed from Kubernetes in v1.25.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Instead of using PodSecurityPolicy, you can enforce similar restrictions on Pods using either or both:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/security/pod-security-admission/"&gt;Pod Security Admission&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a 3rd party admission plugin, that you deploy and configure yourself&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These new Pod Security Standards define three different policies to broadly cover the security spectrum.&lt;/p&gt;

&lt;p&gt;These policies are &lt;em&gt;cumulative&lt;/em&gt; and range from &lt;strong&gt;highly-permissive to highly-restrictive&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Learn more &lt;a href="https://kubernetes.io/docs/concepts/security/pod-security-standards/"&gt;here&lt;/a&gt;!&lt;/p&gt;

&lt;h3&gt;
  
  
  Secrets🧑‍💻
&lt;/h3&gt;

&lt;p&gt;Finally, it's time to decrypt the secrets in Kubernetes.&lt;/p&gt;

&lt;p&gt;Never tell your secrets to anyone. Ok!&lt;/p&gt;

&lt;p&gt;But what is meant by secrets in k8s?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Let's say, you have a simple web application that is connected to a database that displays a successful message on the screen while getting connected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;First, you have coded your user name and passwords into the source code. Then you understand that it's not a good idea to provide credentials like this.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then you create an object file called ConfigMap and put those values inside the yaml file. ConfigMap stores data in a plain text format. And that's again not a good idea.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So here, &lt;strong&gt;Secrets&lt;/strong&gt; come in. They are used to store sensitive information which you don't want to share with anybody.&lt;/p&gt;

&lt;p&gt;It is similar to &lt;code&gt;configMaps&lt;/code&gt; but it stores the information in a base64-encoded format instead of plain text. (Note that base64 is encoding, not encryption; anyone with access can decode it.)&lt;/p&gt;

&lt;p&gt;So first encode your data. To do so, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo -n 'mysql' | base64
bXlzcWw=

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;vice-versa for decoding the text:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo -n 'bXlzcWw=' | base64 --decode
mysql

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check secrets, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get secrets&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To get detailed information on secrets, run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl describe secrets&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now, create a &lt;code&gt;secret-definition.yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret
data:
  DB_Host: bXlzcWw=
  DB_User: cm9vdA==
  DB_Password: cGFzd3Jk

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
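&lt;p&gt;You can verify the values stored above by decoding them the same way:&lt;/p&gt;

```shell
# Decode each value from secret-definition.yaml back to plain text
echo -n 'bXlzcWw=' | base64 --decode && echo    # DB_Host     -> mysql
echo -n 'cm9vdA==' | base64 --decode && echo    # DB_User     -> root
echo -n 'cGFzd3Jk' | base64 --decode && echo    # DB_Password -> paswrd
```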



&lt;p&gt;Now configure this secret with a &lt;code&gt;pod-definition.yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
  labels:
    name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
      - containerPort: 8080
    envFrom:
      - secretRef: 
          name: myapp-secret

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create -f pod-definition.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is how you can create your own secret file and configure it in the &lt;code&gt;pod-definition&lt;/code&gt; file by adding the &lt;code&gt;envFrom&lt;/code&gt; section.&lt;/p&gt;

&lt;p&gt;You can list as many variables as you need. The name must match the one you created in the &lt;code&gt;secret-definition&lt;/code&gt; file; in this case, &lt;code&gt;myapp-secret&lt;/code&gt;&lt;/p&gt;
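&lt;p&gt;Besides &lt;code&gt;envFrom&lt;/code&gt;, a secret can also be mounted as files inside the container, one file per key. A sketch of the relevant pod spec fragments (the mount path and volume name here are illustrative):&lt;/p&gt;

```yaml
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    volumeMounts:
    - name: app-secret-vol
      mountPath: /opt/app-secret   # DB_Host, DB_User, DB_Password appear as files here
      readOnly: true
  volumes:
  - name: app-secret-vol
    secret:
      secretName: myapp-secret
```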

&lt;h3&gt;
  
  
  Network Policies📋
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In Kubernetes, the rules for routing network traffic are set by network policies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;They define which pods in a cluster are allowed to communicate with each other.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is a powerful tool for securing network traffic in k8s clusters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Allowing traffic from one specific pod to another.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Restricting traffic to a specific set of ports and protocols.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implementation is done through the Kubernetes networking API and enforced by the network plugin.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It can be applied to namespaces or individual pods.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A simple network policy yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: demo-network-policy
    spec:
      podSelector:
        matchLabels:
          app: my-app
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: allowed-app
        ports:
        - protocol: TCP
          port: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the deployment of the Network Policy YAML file, use the &lt;code&gt;kubectl&lt;/code&gt; apply command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f demo-network-policy.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is how you can define your own network policies.&lt;/p&gt;
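&lt;p&gt;A common companion pattern is a default-deny policy: an empty &lt;code&gt;podSelector&lt;/code&gt; selects every pod in the namespace, and since no &lt;code&gt;ingress&lt;/code&gt; rules are listed, all incoming traffic is blocked. A sketch:&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}      # matches all pods in the namespace
  policyTypes:
  - Ingress            # no ingress rules given, so all ingress is denied
```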

&lt;h3&gt;
  
  
  TLS (Transport Layer Security)🌐
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Every communication over the internet must be secure; otherwise, there is a high risk of attackers intercepting the data being transferred.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Likewise, a Kubernetes cluster is a set of master and worker nodes that communicate with each other constantly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The communication between all the pods, nodes and API servers must be secured and encrypted.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Administrators communicating with the master node, via kubectl or through the APIs directly, must also be fully secure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The communication between the servers and the clients must be secure and encrypted.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To fulfil all these requirements we need security certificates; this is where TLS certificates come in.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;TLS uses asymmetric cryptography, where each end has its own public/private key pair to encrypt and decrypt the data.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let's understand how TLS can be defined in Kubernetes.&lt;/p&gt;

&lt;p&gt;In Kubernetes, there are two sides: the server side and the client side. Both must have security certificates signed by a CA (Certificate Authority) to verify their identity. Let's look at both of them.&lt;/p&gt;

&lt;h4&gt;
  
  
  Server Certificates of the Servers🔏
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;KUBE-API Server&lt;/strong&gt; - This exposes the HTTP service that other components and external users use to manage the cluster, so it is an important component that must be well secured.&lt;br&gt;
It has a certificate and key pair named &lt;code&gt;apiserver.cert&lt;/code&gt; &lt;br&gt;
and &lt;code&gt;apiserver.key&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ETCD Server&lt;/strong&gt; - It stores all the information about the cluster, so generate a certificate and key pair for this as well. The naming conventions are&lt;br&gt;
&lt;code&gt;etcdserver.cert&lt;/code&gt; and &lt;code&gt;etcdserver.key&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;KUBELET Server&lt;/strong&gt; - It runs on the worker node and also exposes HTTP API endpoints to interact with others. Its cert and key are&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;kubelet.cert&lt;/code&gt; and &lt;code&gt;kubelet.key&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Client Certificate of the Clients🔐
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Administrator&lt;/strong&gt; - The clients who access the services are the admins. They also require a certificate and key pair for access, named&lt;br&gt;
&lt;code&gt;admin.cert&lt;/code&gt; and &lt;code&gt;admin.key&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Scheduler&lt;/strong&gt; - Another client, which communicates with the &lt;code&gt;kube-api&lt;/code&gt; server to schedule objects as per the requirements, so it also needs verification to talk to the server.&lt;br&gt;
Naming them as &lt;code&gt;scheduler.cert&lt;/code&gt; and &lt;code&gt;scheduler.key&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;KUBE-CONTROLLER MANAGER&lt;/strong&gt; - It also communicates with the &lt;code&gt;kube-api&lt;/code&gt; server and requires the same security checks for authentication:&lt;br&gt;
&lt;code&gt;controller-manager.cert&lt;/code&gt; and &lt;code&gt;controller-manager.key&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;KUBE-PROXY&lt;/strong&gt; - Another client-side component with the&lt;br&gt;
certificate and key&lt;br&gt;
&lt;code&gt;kube-proxy.cert&lt;/code&gt; and &lt;code&gt;kube-proxy.key&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now it's time to generate a certificate for our cluster. There are many tools to do so like &lt;strong&gt;EASYRSA&lt;/strong&gt;, &lt;strong&gt;OpenSSL&lt;/strong&gt;, &lt;strong&gt;CFSSL&lt;/strong&gt; and many more. We will be using &lt;strong&gt;OpenSSL&lt;/strong&gt; in this.&lt;/p&gt;

&lt;p&gt;Create a private key using the &lt;code&gt;openssl&lt;/code&gt; command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ openssl genrsa -out apiserver.key 2048&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It will generate a key called &lt;code&gt;apiserver.key&lt;/code&gt;&lt;/p&gt;
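&lt;p&gt;The signing step below assumes the cluster CA files at &lt;code&gt;/etc/kubernetes/pki/ca.crt&lt;/code&gt; and &lt;code&gt;ca.key&lt;/code&gt; already exist (kubeadm creates them during cluster setup). For a local experiment you can create a self-signed CA the same way; the &lt;code&gt;KUBERNETES-CA&lt;/code&gt; common name here is just an example:&lt;/p&gt;

```shell
# Create a CA key and a self-signed CA certificate
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -subj "/CN=KUBERNETES-CA" -days 365 -out ca.crt

# Inspect the subject of the generated certificate
openssl x509 -in ca.crt -noout -subject
```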

&lt;p&gt;Now use the &lt;code&gt;req&lt;/code&gt; command to generate a certificate signing request for that key, and sign it with the cluster CA:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ openssl req -new -key apiserver.key -out apiserver.csr -subj "/CN=kube-apiserver"
$ openssl x509 -req -in apiserver.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out apiserver.crt -days 365

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, create a secret to store the private key and certificates&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl create secret tls apiserver-certs --key=apiserver.key --cert=apiserver.crt -n kube-system

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now modify the Kubernetes API server configuration to use the &lt;strong&gt;TLS certificates&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  $ vi /etc/kubernetes/manifests/kube-apiserver.yaml

  spec:
    containers:
    - name: kube-apiserver
      volumeMounts:
      - mountPath: /etc/kubernetes/pki/apiserver
        name: apiserver-certs
        readOnly: true
    volumes:
    - name: apiserver-certs
      secret:
        secretName: apiserver-certs

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the kubelet so the API server static pod picks up the new config:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ systemctl restart kubelet&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is how you can generate a certificate and configure TLS for the Kubernetes API server.&lt;/p&gt;




&lt;p&gt;Thank you!🖤&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>security</category>
      <category>devops</category>
      <category>kcdchennai</category>
    </item>
    <item>
      <title>Kubernetes: Architecture, Components, Installation &amp; Configuration</title>
      <dc:creator>Poonam Pawar</dc:creator>
      <pubDate>Mon, 15 May 2023 08:33:53 +0000</pubDate>
      <link>https://forem.com/kcdchennai/kubernetes-architecture-components-installation-configuration-3den</link>
      <guid>https://forem.com/kcdchennai/kubernetes-architecture-components-installation-configuration-3den</guid>
      <description>&lt;h2&gt;
  
  
  Introduction🧑‍🦯
&lt;/h2&gt;

&lt;p&gt;There are many ways to learn Kubernetes aka k8s. And one of the best ways to learn is by the official documentation itself. But for very beginners, it gets quite difficult to understand all the terms and technology directly through it.&lt;/p&gt;

&lt;p&gt;So, in this blog, I will break it down into as many pieces as I can. I will share how I learn in the simplest form.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes🕸️ -
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let me break down this definition.&lt;/p&gt;

&lt;p&gt;Container⛴️+ Orchestration🛟&lt;/p&gt;

&lt;p&gt;Let's imagine you have your application ready inside the Docker container🐬 to run. Now the next thing that should come to your mind is the further process of deployment, scaling and updating your application.&lt;/p&gt;

&lt;p&gt;The complete process of automatically deploying and managing is known as Container Orchestration.&lt;/p&gt;

&lt;p&gt;K8s provides you with an orchestration platform through which you can perform these tasks smoothly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture 👾
&lt;/h2&gt;

&lt;p&gt;Now that you have a basic understanding of what k8s does, let's jump into its architecture.&lt;/p&gt;

&lt;p&gt;Everything you're seeing in the below architecture is needed to set up your k8s cluster. And before going into the k8s cluster let us first understand the other main components of the architecture.&lt;/p&gt;

&lt;p&gt;This includes the two major nodes, i.e. the Master node and the Worker node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iGIwslYc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/axhndxtmlwwvg33w7xiz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iGIwslYc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/axhndxtmlwwvg33w7xiz.png" alt="Architecture" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before figuring out who is the &lt;strong&gt;master&lt;/strong&gt; and who is the &lt;del&gt;slave&lt;/del&gt; worker node, first, understand all about &lt;strong&gt;NODES&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Nodes📦 -
&lt;/h3&gt;

&lt;p&gt;A node is a worker machine where we install k8s, and where k8s launches the containers holding your application.&lt;/p&gt;

&lt;p&gt;To scale up and down as per the demand there has to be more than one node.&lt;/p&gt;

&lt;p&gt;So here Cluster comes!&lt;/p&gt;

&lt;h3&gt;
  
  
  Cluster🗂️ -
&lt;/h3&gt;

&lt;p&gt;A cluster is a set of nodes grouped together, which keeps your application in a running state even if some nodes fail.&lt;/p&gt;

&lt;p&gt;This means that if you have a cluster with multiple nodes running your application inside containers, and one node goes down, another one is up and running to save your application from crashing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Master Node📑 -
&lt;/h3&gt;

&lt;p&gt;Let's imagine you have a factory where all the machines are placed together in a room and working simultaneously as per the demand and requirements.&lt;/p&gt;

&lt;p&gt;Now if you noticed there is always a control room where everything is managed and controlled inside that room. This is the Master Node in our case scenario.&lt;/p&gt;

&lt;p&gt;This node keeps an eye on everything:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Keeps watches on the nodes in the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Responsible for the actual orchestration of containers on the worker node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Information about the member cluster node is stored in it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Responsible for managing the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Node monitoring configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Managing workload balance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Controls the scheduling of containers onto worker nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provides an API endpoint that can be used to interact with the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Worker Node🧾 -
&lt;/h3&gt;

&lt;p&gt;The factory room where all the machines are placed to work is the worker node in our scenario.&lt;/p&gt;

&lt;p&gt;Let's see what the worker nodes do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Responsible for running the containers where your application resides.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Receives commands from the master node about which containers to run, and when, on each node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reports its status and performance back to the master node.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Components🧰
&lt;/h3&gt;

&lt;p&gt;Now let's deep dive into all the components you have seen on the architecture diagram quickly.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;h3&gt;
  
  
  kube-apiserver🔗:
&lt;/h3&gt;

&lt;p&gt;To interact with the k8s cluster, the API server handles all the communication. It acts as the front end of k8s; the command-line interface, management devices, and everything else goes through this server.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h3&gt;
  
  
  etcd🖇️:
&lt;/h3&gt;

&lt;p&gt;All data needed to manage the cluster is stored here in key-value format. Basically, it is the backing store of your k8s cluster. It implements locks within the cluster to avoid conflicts between multiple nodes and masters.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h3&gt;
  
  
  kube-scheduler🧷:
&lt;/h3&gt;

&lt;p&gt;The distribution of workloads and containers is done by the scheduler; it assigns each container to the correct and required node. When anything goes down, it notices, reports to the master, and brings up a new container on a node as per the requirements.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All these above components come under the master node.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;h3&gt;
  
  
  Container runtime📎:
&lt;/h3&gt;

&lt;p&gt;It is the software that is responsible for running the containers, e.g. Docker.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h3&gt;
  
  
  kubelet🔎:
&lt;/h3&gt;

&lt;p&gt;It is the agent that runs on each node in the cluster and makes sure the containers assigned to that node are actually running.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h3&gt;
  
  
  kube-proxy📈:
&lt;/h3&gt;

&lt;p&gt;This handles network traffic between your containers.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All three of the above are worker node components.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation⚙️ &amp;amp; Configuration🌐
&lt;/h2&gt;

&lt;p&gt;To run a Kubernetes cluster on your local machine, you need to install some prerequisite software.&lt;/p&gt;

&lt;p&gt;Let's start with step-by-step guidance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update your Ubuntu package list:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;sudo apt-get update&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pre-requisite for installing k8s: Docker
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     sudo apt install docker.io -y
     sudo systemctl start docker
     sudo systemctl enable docker

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Download the Kubernetes apt signing key:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now, add the Kubernetes apt repository:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Update the system and install k8s:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  sudo apt update -y
  sudo apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Initialize the cluster (Master):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  sudo su
  kubeadm init

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run this command on the Master Node:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Deploy a Pod network add-on (Weave Net) on the Master Node:
&lt;/li&gt;
&lt;/ul&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Generate the Token for the configuration of Worker Node:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubeadm token create --print-join-command&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Paste the generated token output in the Worker Node:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  sudo su
  ---Paste the Join command on worker node with `--v=5`

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now execute this command in the Master Node :&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl get nodes&lt;/code&gt;&lt;/p&gt;




&lt;p&gt;Installation differs with different operating systems. If you are using another OS, please check out the steps in the official Kubernetes documentation!&lt;/p&gt;




&lt;p&gt;Thank you!🖤&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>architecture</category>
      <category>kcdchennai</category>
      <category>devops</category>
    </item>
    <item>
      <title>Cloud Journey</title>
      <dc:creator>Poonam Pawar</dc:creator>
      <pubDate>Sat, 13 May 2023 14:05:38 +0000</pubDate>
      <link>https://forem.com/kcdchennai/cloud-journey-5m5</link>
      <guid>https://forem.com/kcdchennai/cloud-journey-5m5</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Apnaa dhyaan ekatrit kar lijiye, ab ye vimaan(blog) apko aasmano ki unchaiyon me le jane vala hai.......&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;(translation)&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please fasten your seatbelt, now this airplane (blog) is going to take you to the heights of the sky...&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;First flight?✈️I mean new to cloud computing?😶‍🌫️&lt;/p&gt;

&lt;p&gt;No worries!😉I got you:)&lt;/p&gt;

&lt;p&gt;After reading this blog you will definitely gain some cloud computing fundamentals.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Cloud Computing?
&lt;/h2&gt;

&lt;p&gt;Cloud Computing is the delivery of on-demand computing services, including servers, storage, databases, networking, software and analytics, over the Internet, on a pay-as-you-go basis.&lt;/p&gt;

&lt;p&gt;Essentially, cloud computing allows users to access these computing resources remotely, without having to build or maintain their own infrastructure.&lt;/p&gt;

&lt;p&gt;This is what AI says about cc. &lt;code&gt;(alias: cloud computing ~ cc)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let's understand cc more simply,&lt;/p&gt;

&lt;p&gt;CC is like renting a house🏡 instead of buying one.&lt;/p&gt;

&lt;p&gt;Just like how you don't need to worry about the maintenance of the house when you rent it, in cc you don't need to worry about the maintenance of the physical server or the infrastructure.&lt;/p&gt;

&lt;p&gt;Instead, you can rent a virtual🏕️ space on the internet where you can store your data and run your applications. This virtual space is called the Cloud.☁️&lt;/p&gt;

&lt;p&gt;In the DevOps world, cc is important because it allows the team🤖 to quickly and easily deploy and manage applications without worrying about the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;With cc, developers can easily provision the required resources and test their applications, and operation teams can easily monitor🕵️ and manage the applications in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why cloud-computing is important in DevOps?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;CC is a key enabler of the continuous delivery and deployment of applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cloud platforms offer a variety of services, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), which can be leveraged to build, test, and deploy applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The cloud provides a flexible and scalable infrastructure that can be rapidly provisioned and de-provisioned to meet the demands of modern application development and deployment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What are some common use cases?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Application hosting&lt;/strong&gt;: Cloud platforms offer a range of services that can be used to host web applications, mobile applications, and APIs. These services can be scaled up or down based on demand, allowing companies to handle traffic spikes without overprovisioning their infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Continuous integration and delivery&lt;/strong&gt;: Cloud platforms provide tools and services that enable teams to build, test, and deploy applications in a continuous and automated fashion.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DevOps toolchain&lt;/strong&gt;: Cloud platforms offer a range of tools and services that can be used to manage the entire DevOps toolchain, from version control to monitoring and alerting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Disaster Recovery &amp;amp; Backup&lt;/strong&gt;: Cloud platforms can be used to store backups and replicate data to provide redundancy in case of a disaster.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How can you choose a cloud platform?
&lt;/h2&gt;

&lt;p&gt;To start deploying applications on the cloud, you first need to choose a cloud provider.&lt;/p&gt;

&lt;p&gt;Some popular options are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Web Services (&lt;strong&gt;AWS&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;Microsoft &lt;strong&gt;Azure&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Google Cloud Platform (&lt;strong&gt;GCP&lt;/strong&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J9CxaNF0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1olrmuqu49jv8eu7gflc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J9CxaNF0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1olrmuqu49jv8eu7gflc.png" alt="GCP" width="271" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each provider offers their own set of services, so it's important to do some research to find the one that best fits your needs.&lt;/p&gt;

&lt;p&gt;Here are some factors you can keep in mind while choosing one:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;💸&lt;strong&gt;Cost&lt;/strong&gt;: Different cloud platforms have different pricing models, and it is important to choose a platform that fits your budget.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;📡&lt;strong&gt;Services&lt;/strong&gt;: Different cloud platforms offer different services, and it is important to choose a platform that offers the services you need.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;⚙️&lt;strong&gt;Integration&lt;/strong&gt;: It is important to choose a cloud platform that integrates well with your existing tools and infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;📈&lt;strong&gt;Scalability&lt;/strong&gt;: It is important to choose a cloud platform that can scale up or down based on demand.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can also start for free for practice purposes.&lt;/p&gt;

&lt;p&gt;AWS, Microsoft Azure, GCP, Heroku, DigitalOcean and others offer free trials that allow you to access some of their popular services for a limited time period.&lt;/p&gt;

&lt;p&gt;Once you've chosen a cloud platform, you can deploy your applications to it by creating virtual machines or using containerization technologies like Docker.&lt;/p&gt;

&lt;p&gt;With cloud computing, you can easily scale your applications up or down as needed, and you only pay for the resources you use.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use a cloud platform?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Creating a virtual machine or container&lt;/strong&gt;: This involves creating a virtual machine or container that will host your application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configuring the virtual machine or container&lt;/strong&gt;: This involves configuring the virtual machine or container to run your application, including installing dependencies and configuring environment variables.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deploying the application&lt;/strong&gt;: This involves copying the application files to the virtual machine or container and starting the application.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
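
&lt;p&gt;As a rough, runnable sketch of those three steps (the app name, base image and port below are illustrative assumptions, not from any particular project), you could define a container like this:&lt;/p&gt;

```shell
# Sketch: prepare a container definition for a small app (illustrative names)
mkdir -p myapp

# A minimal Dockerfile describing how to build the container image
cat > myapp/Dockerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8000
CMD ["python", "app.py"]
EOF

# With Docker installed, you would then build and deploy it:
#   docker build -t myapp:latest myapp/
#   docker run -d -p 8000:8000 myapp:latest
echo "Dockerfile written"
```

&lt;p&gt;The same pattern applies on any cloud platform: define the environment once, then deploy the same image everywhere.&lt;/p&gt;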

&lt;h2&gt;
  
  
  Projects &amp;amp; Resources
&lt;/h2&gt;

&lt;p&gt;The best way to learn something is by reading their official documentation page.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cloud service provider documentation&lt;/strong&gt;: Cloud service providers like &lt;a href="https://docs.aws.amazon.com/?nc2=h_ql_doc_do"&gt;AWS&lt;/a&gt;, &lt;a href="https://learn.microsoft.com/en-us/azure/?wt.mc_id=rmskilling_docs_onboarding_inproduct_gdc&amp;amp;product=popular"&gt;Microsoft Azure&lt;/a&gt;, and &lt;a href="https://cloud.google.com/free?utm_source=google&amp;amp;utm_medium=cpc&amp;amp;utm_campaign=japac-IN-all-en-dr-BKWS-all-cloud-trial-EXA-dr-1605216&amp;amp;utm_content=text-ad-none-none-DEV_c-CRE_634320416318-ADGP_Hybrid%20%7C%20BKWS%20-%20EXA%20%7C%20Txt%20~%20GCP_General_gcp_misc-KWID_43700074200797688-aud-1644542956068%3Akwd-316837059054&amp;amp;userloc_9300152-network_g&amp;amp;utm_term=KW_gcp%20documentation&amp;amp;gclid=Cj0KCQjwocShBhCOARIsAFVYq0jxcuxUJHnQNhVfQlPjRW5LjnfVjE7hpYb80HqeqmJOqzr7VWuy52oaAp_FEALw_wcB&amp;amp;gclsrc=aw.ds"&gt;Google Cloud Platform&lt;/a&gt; offer extensive documentation on their platforms, including tutorials and guides.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start building on Google Cloud with $300 in free credits and free usage of 20+ products like Compute Engine and Cloud Storage, up to monthly limits.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/free"&gt;Google Cloud Free Trial&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;These can be great resources for learning how to use their services and getting certified in cloud computing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://youtu.be/k1RI5locZE4"&gt;AWS Tutorial by Edureka&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start applying what you have learned by doing it practically. Try these projects for a kickstart:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://youtu.be/XeoZstvyew8"&gt;Simple DevOps Project&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is just a demo project so that you can get a glimpse of how things actually work. You can deploy your own web application into the cloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What next?
&lt;/h2&gt;

&lt;p&gt;Now,&lt;/p&gt;

&lt;p&gt;💡 You know the fundamentals of cloud computing,&lt;/p&gt;

&lt;p&gt;🏁 You know where and how to start,&lt;/p&gt;

&lt;p&gt;🧰 You have the resources,&lt;/p&gt;

&lt;p&gt;🗯️ You have the projects,&lt;/p&gt;

&lt;p&gt;🏃 &lt;strong&gt;Go! Start your cloud journey now&lt;/strong&gt;.🚀&lt;/p&gt;

&lt;p&gt;Read the documentation, do hands-on practice, create your free account, and deploy applications.&lt;/p&gt;

&lt;p&gt;Just go and play around the clouds.&lt;/p&gt;

&lt;p&gt;Thank you for giving your valuable time!🖤&lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>devops</category>
      <category>kcdchennai</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Linux: A Super Hero</title>
      <dc:creator>Poonam Pawar</dc:creator>
      <pubDate>Thu, 04 May 2023 15:00:09 +0000</pubDate>
      <link>https://forem.com/kcdchennai/linux-a-super-hero-1c85</link>
      <guid>https://forem.com/kcdchennai/linux-a-super-hero-1c85</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Linux is a powerful and versatile open-source, Unix-like operating system that can be used in a variety of applications, from small-scale personal use to large-scale enterprise use.&lt;/p&gt;

&lt;p&gt;It offers many benefits, including stability, security and flexibility, making it a popular choice for many users and organisations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shell
&lt;/h2&gt;

&lt;p&gt;The Linux shell is a program that allows text-based interaction between the user and the operating system.&lt;/p&gt;

&lt;p&gt;This interaction is carried out by typing commands into the interface and receiving the response in the same way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of shell
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Bourne Shell (sh)&lt;/li&gt;
&lt;li&gt;Korn Shell (ksh)&lt;/li&gt;
&lt;li&gt;C Shell (csh or tcsh)&lt;/li&gt;
&lt;li&gt;Z Shell (zsh)&lt;/li&gt;
&lt;li&gt;Bourne Again Shell (bash)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These shells may differ in their features, but they all serve one common purpose: communication between the user and the OS.&lt;/p&gt;

&lt;p&gt;When we log into the shell, the first thing that shows up is our home directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Home Directory&lt;/strong&gt; - &lt;code&gt;/home&lt;/code&gt; is a system-created directory that contains the home directories for almost all users in the Linux system.&lt;/p&gt;

&lt;p&gt;It allows users to store their personal data in the form of files and folders.&lt;/p&gt;

&lt;p&gt;Each user in the system gets their own unique home directory with all access.&lt;/p&gt;

&lt;p&gt;Representation: it is represented by the tilde (&lt;code&gt;~&lt;/code&gt;) symbol in the command line.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Home Directory = ~ (tilde)
[~]$
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
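
&lt;p&gt;A quick, runnable check that the tilde and the &lt;code&gt;HOME&lt;/code&gt; variable point to the same place:&lt;/p&gt;

```shell
# The shell expands ~ to the value of $HOME
echo "$HOME"        # prints your home directory path
cd ~ && pwd         # prints the same path as $HOME
```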

&lt;h2&gt;
  
  
  Commands and Arguments
&lt;/h2&gt;

&lt;p&gt;To interact with the Linux system using the shell, a user has to give &lt;strong&gt;Commands&lt;/strong&gt;. Like-&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;echo&lt;/strong&gt;: to print a line of text on the screen use the &lt;code&gt;echo&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~]$ echo
[~]$

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;When you run the &lt;code&gt;echo&lt;/code&gt; command alone, you don't tell it what to print, and as a result, it prints nothing.&lt;/p&gt;

&lt;p&gt;That's where an &lt;strong&gt;Argument&lt;/strong&gt; comes into the picture.&lt;/p&gt;

&lt;p&gt;An &lt;em&gt;Argument&lt;/em&gt; acts as an input to the command.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~]$ echo Hello
Hello
[~]$

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;It's not always necessary to give arguments; many commands can run without any. For example:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~]$ uptime
12:53  up 2 days, 22:05, 3 users, load averages: 2.89 2.61 2.79

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gq_8YR-_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/100wh2mrpj3z6gjb4v0c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gq_8YR-_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/100wh2mrpj3z6gjb4v0c.png" alt="uptime" width="655" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;uptime&lt;/code&gt; command prints information about how long the system has been running since the last reboot.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;command &amp;lt;arguments&amp;gt;
echo = command
Hello = argument

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;To check your current shell use-&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~]$ echo $SHELL
/bin/bash

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JSZdHIFm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lfdugr6r2jy8pz9ze37i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JSZdHIFm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lfdugr6r2jy8pz9ze37i.png" alt="shell" width="657" height="199"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Fun Task:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Let's do a quick task with some basic and most important Linux commands so that you can learn by doing.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task&lt;/strong&gt; - Create a directory structure with 3 directories under the existing home directory &lt;code&gt;/home/dev&lt;/code&gt;. The directories' names represent the animal categories herbivorous, carnivorous and omnivorous, e.g. &lt;code&gt;/home/dev/herbivorous&lt;/code&gt;. Under each category there are some animals, and under each animal some foods. Each item is a directory.&lt;/p&gt;

&lt;p&gt;Let me explain this to you more easily with a diagram-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5M-KlsTo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wt2nxmz2446o3ok3v2ej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5M-KlsTo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wt2nxmz2446o3ok3v2ej.png" alt="fun task" width="800" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will be in our home directory by default&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~]$ pwd
/home/dev

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;pwd&lt;/code&gt; command prints the present working directory, which is our home directory; &lt;code&gt;dev&lt;/code&gt; is our username.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~]$ mkdir herbivorous
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;mkdir&lt;/code&gt; command is used to make a new directory at the given path.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~]$ mkdir carnivorous omnivorous
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We can create multiple directories in one go by listing them together, as done above.&lt;/p&gt;

&lt;p&gt;Now all three directories have been created in our home directory.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~]$ ls
herbivorous carnivorous omnivorous
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;ls&lt;/code&gt; command is used to show the list of files and folders in the current directory.&lt;/p&gt;

&lt;p&gt;Now we have to make directories of &lt;code&gt;cows&lt;/code&gt; and &lt;code&gt;giraffes&lt;/code&gt; in the &lt;code&gt;herbivorous&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;To do this, we have to change our dir from &lt;code&gt;home&lt;/code&gt; and go inside the &lt;code&gt;herbivorous&lt;/code&gt; dir.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~]$ cd herbivorous
[~/herbivorous]$

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The &lt;code&gt;cd&lt;/code&gt; command is used to change the directory.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~/herbivorous]$ pwd
/home/dev/herbivorous

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~/herbivorous]$ mkdir cow giraffe

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~/herbivorous]$ mkdir cow/grasses

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here we made a new &lt;code&gt;grasses&lt;/code&gt; directory inside the &lt;code&gt;cow&lt;/code&gt; directory without changing into it.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~/herbivorous]$ mkdir -p cow/grasses
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here the &lt;code&gt;-p&lt;/code&gt; option creates the &lt;code&gt;cow&lt;/code&gt; directory and &lt;code&gt;grasses&lt;/code&gt; under it together in a single command, creating any missing parent directories.&lt;/p&gt;
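
&lt;p&gt;With &lt;code&gt;mkdir -p&lt;/code&gt;, the whole fun-task tree can be built in a few commands. A runnable sketch (run from any scratch directory; the giraffe/leaves and lion/meat entries are assumed examples for the remaining branches):&lt;/p&gt;

```shell
# Build the category/animal/food tree with mkdir -p
mkdir -p herbivorous/cow/grasses herbivorous/giraffe/leaves  # leaves: assumed food
mkdir -p carnivorous/lion/meat                               # assumed animal/food
mkdir -p omnivorous/human/food

# List every directory that was created
find herbivorous carnivorous omnivorous -type d
```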

&lt;p&gt;&lt;strong&gt;Let's check out some more of the basic commands&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mv&lt;/code&gt; This command is used to move (or rename) a file or directory from source A to destination B&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~]$ mv /home/dev/source_dir /home/dev/destination_dir
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;cp&lt;/code&gt; This command is used to copy file/dir from one src to another&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ls
a.txt  b.txt  new

# Initially the directory new is empty
$ ls new
# (empty)

$ cp a.txt b.txt new
# copying files a.txt and b.txt into the directory named new

$ ls new
a.txt  b.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;rm&lt;/code&gt; This command is used to remove file/dir&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ls
a.txt b.txt new
$ rm -r new
$ ls
a.txt b.txt
# the directory named new is now removed (-r is needed for directories)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;cat&lt;/code&gt; This command can be used in several ways. Some of them are:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat a.txt
# shows the content of the file a.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat a.txt &amp;gt; b.txt
# copies the content of a.txt into b.txt (overwriting b.txt)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cat &amp;gt; newfile
# creates a new file named newfile (type content, then press Ctrl+D to save)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
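
&lt;p&gt;Putting those &lt;code&gt;cat&lt;/code&gt; usages together in a runnable example:&lt;/p&gt;

```shell
# Create a small file, display it, then copy it via redirection
printf 'hello from a\n' > a.txt

cat a.txt           # shows the content of a.txt
cat a.txt > b.txt   # redirects the content of a.txt into b.txt
cat b.txt           # b.txt now has the same content
```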


&lt;p&gt;&lt;code&gt;touch&lt;/code&gt; This command is used to create a new, empty file (or update the timestamps of an existing one)&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~]$ touch /home/dev/omnivorous/human/food.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Some Tips:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;An alternative to the &lt;code&gt;cd&lt;/code&gt; command is the &lt;code&gt;pushd&lt;/code&gt; command. It remembers the current working directory before changing to the directory you want.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;let's say you're in &lt;code&gt;home&lt;/code&gt; dir and want to go to &lt;code&gt;/etc&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~] pushd /etc
/etc ~

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;now you can change directories as many times as you want, e.g. &lt;code&gt;cd /var&lt;/code&gt;, &lt;code&gt;cd /tmp&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;and to go back to the original dir use &lt;code&gt;popd&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[/tmp] popd
[~]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;To change the prompt - If you want to change the way your prompt looks, for example to your server name or your own name
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~]$ echo $PS1
[~]$
# this is what the current prompt looks like

[~]$ PS1="ubuntu-server:"
ubuntu-server:
# now it changes to the server name

ubuntu-server: PS1="[\d \t \u@\h:\w ] $ "
[Mon Mar 06 13:30:54 dev@macair:~ ] $
# now displaying with current date and time with user name too
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;\d&lt;/code&gt;, &lt;code&gt;\t&lt;/code&gt; and &lt;code&gt;\u&lt;/code&gt; give the date, time and username before '@', while &lt;code&gt;\h&lt;/code&gt; and &lt;code&gt;\w&lt;/code&gt; give the hostname and the present working directory of the user, followed by '$' indicating a regular user.&lt;/p&gt;
&lt;h2&gt;
  
  
  Kernel
&lt;/h2&gt;

&lt;p&gt;The kernel is the major component of the operating system and the interface between the system's hardware and its processes.&lt;/p&gt;

&lt;p&gt;It can be thought of as a bridge between the hardware and the software components of a computer system.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~]$ uname
Linux
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Let's say you are running a program that needs access to the computer's memory. when the request is made, the kernel is responsible for granting the program access to the memory.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[~]$ uname -r
4.15.0.72-generic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This shows the kernel version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NWDQ7CG---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vohic2xrryzu9ggmlrsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NWDQ7CG---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vohic2xrryzu9ggmlrsg.png" alt="kernel" width="700" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The kernel looks up the four major tasks-&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Memory Management - Keeps track of how much memory is used. &lt;/li&gt;
&lt;li&gt;Process Management - Decides which processes can use the CPU, and when. &lt;/li&gt;
&lt;li&gt;Device Drivers - An interpreter between the hardware and the processes. &lt;/li&gt;
&lt;li&gt;System Calls &amp;amp; Security - Receive requests for services from the processes.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Some Core Concepts
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;File Compression:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To inspect the size of a file, use the &lt;code&gt;du&lt;/code&gt; command&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SQWUHHSJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gmphym8s618brjvz9opf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SQWUHHSJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gmphym8s618brjvz9opf.png" alt="filecom" width="713" height="143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;du&lt;/code&gt; stands for Disk Usage, and &lt;code&gt;-sh&lt;/code&gt; prints a summary in a human-readable size format.&lt;/p&gt;
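
&lt;p&gt;For example, you can create a file of a known size and inspect it with &lt;code&gt;du&lt;/code&gt; (the file name below is illustrative):&lt;/p&gt;

```shell
# Create a 2 MiB file of zeros, then check its size
dd if=/dev/zero of=big.file bs=1024 count=2048 2>/dev/null

du -sh big.file   # summarized, human-readable size
du -k big.file    # size in 1 KiB blocks
```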

&lt;ul&gt;
&lt;li&gt;Archive File:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To group multiple files or directories into a single archive file, you can use the &lt;code&gt;tar&lt;/code&gt; command&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S31hExsz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1j1vr7cwv6ir6m7ot3cl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S31hExsz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1j1vr7cwv6ir6m7ot3cl.png" alt="archive" width="699" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-c&lt;/code&gt; flag specifies that the tar should create a new archive.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-v&lt;/code&gt; flag specifies that tar will display the name of each file as it is added to the archive.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-f&lt;/code&gt; flag specifies the name of the archive file that tar should create.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-cvf&lt;/code&gt; together, it will create a new archive with the specified name and add the specified files to it while displaying their names as they are added.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-xf&lt;/code&gt; flag is used to extract the contents from the tar file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Kx6t4At2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8wrjkvp8vki4hkfthw1y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Kx6t4At2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8wrjkvp8vki4hkfthw1y.png" alt="xf" width="705" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-czvf&lt;/code&gt; flags are used to create a gzip-compressed tar file to reduce its size.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Soq9mfi4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qqyu5q6vc9g5ezlwphh8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Soq9mfi4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qqyu5q6vc9g5ezlwphh8.png" alt="czvf" width="694" height="122"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here the directory "mydoc" is archived and compressed with gzip in a single step, producing the compressed tar archive "mydoc.tar.gz"&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  $ tar -czvf mydoc.tar.gz mydoc/
  mydoc/
  mydoc/file1.txt
  mydoc/file2.txt
  mydoc/file3.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;-c&lt;/code&gt; creates a new archive file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-z&lt;/code&gt; compress the archive using gzip.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-v&lt;/code&gt; displays the verbose output during the creation of the archive.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-f&lt;/code&gt; specify the name of the archive file.&lt;/p&gt;
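
&lt;p&gt;A full runnable round trip of the commands above: create a directory, archive and compress it, then extract it back (the file contents are illustrative):&lt;/p&gt;

```shell
# Prepare a sample directory with a few files
mkdir -p mydoc
printf 'one\n' > mydoc/file1.txt
printf 'two\n' > mydoc/file2.txt

tar -czvf mydoc.tar.gz mydoc/   # create a gzip-compressed archive
rm -r mydoc                     # remove the originals
tar -xzf mydoc.tar.gz           # extract them back from the archive

ls mydoc                        # file1.txt file2.txt
```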

&lt;p&gt;source:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kodekloud.com/courses/the-linux-basics-course/"&gt;KodeKloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.geeksforgeeks.org/cp-command-linux-examples/"&gt;geeksforgeeks&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's all you need to start your Linux ride. I have completed my certification in the Linux Basics Course from KodeKloud.&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://kodekloud.com/certificate-verification/2D01B4BE1E89-2D01AEB884F1-2D01A92DE689/" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://res.cloudinary.com/practicaldev/image/fetch/s--PhJ1VWKT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kodekloud.com/certificates/course-certificate/image/2D01B4BE1E89-2D01AEB884F1-2D01A92DE689/" height="565" class="m-0" width="800"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://kodekloud.com/certificate-verification/2D01B4BE1E89-2D01AEB884F1-2D01A92DE689/" rel="noopener noreferrer" class="c-link"&gt;
          Certificate Verification – KodeKloud
        &lt;/a&gt;
      &lt;/h2&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://res.cloudinary.com/practicaldev/image/fetch/s--AceEhC3F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://kodekloud.com/wp-content/uploads/2022/07/favicon-48x48-1.png" width="48" height="48"&gt;
        kodekloud.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;I am not promoting anything, just sharing my learning resources if you want to check them out.&lt;/p&gt;

&lt;p&gt;You can also take the training and get certified by the Linux Foundation itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://training.linuxfoundation.org/technology-catalog/?creative=606935296934&amp;amp;keyword=linux%20class&amp;amp;matchtype=p&amp;amp;network=g&amp;amp;device=c&amp;amp;pi_ad_id=606935296934&amp;amp;utm_term=linux%20class&amp;amp;utm_campaign=EMEA:+Search:+June+21+-+Sitewide+Discount&amp;amp;utm_source=adwords&amp;amp;utm_medium=ppc&amp;amp;hsa_acc=8666746580&amp;amp;hsa_cam=13435710405&amp;amp;hsa_grp=133584209699&amp;amp;hsa_ad=606935296934&amp;amp;hsa_src=g&amp;amp;hsa_tgt=kwd-101364025&amp;amp;hsa_kw=linux%20class&amp;amp;hsa_mt=p&amp;amp;hsa_net=adwords&amp;amp;hsa_ver=3&amp;amp;gclid=Cj0KCQjwr82iBhCuARIsAO0EAZz7WB1T0rg7qIoE6FMY_6r0o9G2XDjeWaZcladir3ow99LY4PAD3sQaArJSEALw_wcB"&gt;Linux Training &amp;amp; Certifications&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for the read!🖤🐧&lt;/p&gt;

</description>
      <category>linux</category>
      <category>beginners</category>
      <category>devops</category>
      <category>kcdchennai</category>
    </item>
    <item>
      <title>DevOps</title>
      <dc:creator>Poonam Pawar</dc:creator>
      <pubDate>Mon, 01 May 2023 06:55:39 +0000</pubDate>
      <link>https://forem.com/kcdchennai/devops-5co5</link>
      <guid>https://forem.com/kcdchennai/devops-5co5</guid>
      <description>&lt;p&gt;Those who are new to DevOps often find it quite difficult to know where to start their journey; I did too. So, in this blog, I'll be sharing all my learnings (the prerequisites) so that you can start your own journey and get your basics done.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is DevOps?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;DevOps&lt;/strong&gt; combines &lt;strong&gt;development&lt;/strong&gt; and &lt;strong&gt;operations&lt;/strong&gt; to increase the efficiency, speed and security of software delivery, automating processes that are manual and slow in the traditional approach.&lt;/p&gt;

&lt;p&gt;The main key🗝 practices involved in DevOps are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CI/CD, IaC, and Monitoring and Logging&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let me break down this for you🫵 guys!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Dev&lt;/strong&gt;elopment + &lt;strong&gt;Op&lt;/strong&gt;erations = &lt;strong&gt;DevOps&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Development, "Dev" is the process of generating code that requires any software to work which involves practices like designing, writing and testing code.&lt;/p&gt;

&lt;p&gt;The operation, "Ops" is the process of deploying, maintaining and monitoring the software application.&lt;/p&gt;

&lt;p&gt;Both teams work closely to ensure that the code is properly integrated with the infrastructure and other components of the system. In DevOps, they work together as a single team to break down the traditional silos between these two groups so that development and operation can be more effective and efficient.&lt;/p&gt;

&lt;p&gt;More simply, you can think of it as building a toy house🏢.&lt;/p&gt;

&lt;p&gt;The building🏗 process is a team game: you need to work together with your friends to make a strong house with different tools and materials like toy bricks, glue, paint🎨 etc.&lt;/p&gt;

&lt;p&gt;DevOps is the same: working together as a team to build a software program using different tools and techniques like coding, testing and automation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learning Path:
&lt;/h2&gt;

&lt;p&gt;Now, the following is the step-by-step learning path to begin your journey:-&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Step 1&lt;/u&gt;: &lt;strong&gt;Getting started with Git &amp;amp; GitHub&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first and most important tool to learn when getting started with DevOps is Git, together with GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nqpc8W80--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qhvqa3di6rcij1e5xwoj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nqpc8W80--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qhvqa3di6rcij1e5xwoj.png" alt="Git&amp;amp;Github" width="798" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Learn the fundamentals of version control and collaboration with Git and Github.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Git:&lt;/strong&gt; An open-source distributed version control system that allows developers and operations teams to collaborate, keep records, and save the changes made to a project. It's like saving the history of your project so that you can make changes as many times as you want without losing your previous work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; A web-based platform that provides hosting for Git repositories and offers additional features for managing software development projects.&lt;/p&gt;

&lt;p&gt;I won't be going into depth here; I'll share the resources at the end instead.&lt;br&gt;
&lt;a href="https://youtu.be/apGV9Kg7ics"&gt;Git &amp;amp; Github&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Step 2&lt;/u&gt;: &lt;strong&gt;Understanding Linux and Shell Scripting&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Explore the power of the Linux operating system and learn how to automate tasks with shell scripting.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Linux&lt;/strong&gt; is a free and open-source operating system based on Unix.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shell Scripting&lt;/strong&gt; is a way of writing programs that can be run on a Linux or Unix command-line interface, called a shell.&lt;/p&gt;
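&lt;p&gt;A shell script is just commands saved in a file. As a tiny sketch, this one defines a function that loops over its arguments:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# greet.sh - loop over the script's arguments and greet each one
greet() {
  for name in "$@"; do
    echo "Hello, $name!"
  done
}

greet Alice Bob   # prints "Hello, Alice!" then "Hello, Bob!"
```

&lt;p&gt;Save it as &lt;code&gt;greet.sh&lt;/code&gt; and run it with &lt;code&gt;bash greet.sh Alice Bob&lt;/code&gt;.&lt;/p&gt;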

&lt;p&gt;&lt;u&gt;Step 3&lt;/u&gt;: &lt;strong&gt;Programming Fundamentals with Golang and Python&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The next step is to learn a programming language. Don't worry, you don't have to master either of them; just knowing the basics is enough to get started.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Learn these two popular programming languages for DevOps and how they can be used to automate infrastructure and application deployments.&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This program prints Hello, world!
package main
import "fmt"

func main() {
  fmt.Println("Hello World!")
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;u&gt;Step 4&lt;/u&gt;: &lt;strong&gt;Building and Deploying applications with Docker&lt;/strong&gt;&lt;br&gt;
An illustration of a Docker container: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gwpV7DgM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6175x05hwyhutjyb6tad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gwpV7DgM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6175x05hwyhutjyb6tad.png" alt="Docker" width="497" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Learn how to package, deploy and manage applications with a docker container.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Docker&lt;/strong&gt; is an essential tool in DevOps. It allows developers to package and deploy their applications using containerisation technology.&lt;/p&gt;

&lt;p&gt;Docker Deployment of your application:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qfnDHSj3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u2jgjo9ms9gbgah4cdtf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qfnDHSj3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u2jgjo9ms9gbgah4cdtf.png" alt="Docker Deployment" width="658" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Step 5&lt;/u&gt;: &lt;strong&gt;Automating workflows with Jenkins&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W_HgO94H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jumg9r3ogpw9l93s1kv0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W_HgO94H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jumg9r3ogpw9l93s1kv0.png" alt="Jenkins Dashboard" width="459" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Master the basics of CI and learn how to automate builds, tests, and deployments with Jenkins.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Creating a new item on the Jenkins dashboard to set up a job.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jenkins&lt;/strong&gt; is an open-source automation server that supports &lt;em&gt;continuous integration&lt;/em&gt; (CI) and &lt;em&gt;continuous delivery&lt;/em&gt; (CD) workflows, enabling teams to automatically build, test and deploy their code.&lt;/p&gt;
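&lt;p&gt;A pipeline is usually described in a &lt;code&gt;Jenkinsfile&lt;/code&gt; checked into the repository. A minimal declarative sketch might look like this (the stage commands are placeholders for your own build steps):&lt;/p&gt;

```groovy
// Minimal declarative pipeline: build, test, deploy in sequence
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }    // placeholder build command
        }
        stage('Test') {
            steps { sh 'make test' }     // placeholder test command
        }
        stage('Deploy') {
            steps { sh './deploy.sh' }   // placeholder deploy script
        }
    }
}
```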

&lt;p&gt;&lt;u&gt;Step 6&lt;/u&gt;: &lt;strong&gt;Orchestrating with Kubernetes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Dive into the world of orchestration and learn how to deploy, scale and manage containerised applications across a cluster of servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt; was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). It is an open-source platform for managing and orchestrating containerised applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3YDI84_b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ub45qj6n04dtr4q5ek45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3YDI84_b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ub45qj6n04dtr4q5ek45.png" alt="k8s cncf" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click here to: &lt;a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/"&gt;Learn Kubernetes Basics&lt;/a&gt;&lt;/p&gt;
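&lt;p&gt;For a first look at what Kubernetes manifests are like, here is a sketch of a Deployment with three replicas and a Service that load-balances traffic across them (names and image are illustrative):&lt;/p&gt;

```yaml
# Deployment: run three nginx pods labelled app=web
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
---
# Service: spread incoming traffic across the pods matching the selector
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

&lt;p&gt;Apply it with &lt;code&gt;kubectl apply -f web.yaml&lt;/code&gt;.&lt;/p&gt;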

&lt;p&gt;&lt;u&gt;Step 7&lt;/u&gt;: &lt;strong&gt;Infrastructure as Code with Ansible&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Explore the power of configuration management and learn how to automate infrastructure with Ansible.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Ansible is a configuration management tool that allows you to define infrastructure as code and automate tasks such as configuration updates, software installation and system updates. It uses a "push-based" configuration model: changes are pushed from a control node to the managed servers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update sudo apt install software-properties-common sudo add-apt-repository --yes --update ppa:ansible/ansible sudo apt install ansible

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
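&lt;p&gt;Once installed, you describe the desired state in a playbook. A small sketch (the &lt;code&gt;webservers&lt;/code&gt; host group is an illustrative name from your inventory):&lt;/p&gt;

```yaml
# site.yml - ensure nginx is installed and running on the web servers
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
    - name: Start and enable nginx
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

&lt;p&gt;Run it with &lt;code&gt;ansible-playbook -i inventory.ini site.yml&lt;/code&gt;.&lt;/p&gt;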



&lt;p&gt;&lt;u&gt;Step 8&lt;/u&gt;: &lt;strong&gt;Provisioning cloud resources with Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Discover how to define, provision and manage infrastructure with Terraform, a popular tool for infrastructure as code.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt; is a tool for provisioning and managing cloud resources that allows you to create, update and destroy resources such as virtual machines, databases and storage services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/l5k1ai_GBDE"&gt;Terraform&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Step 9&lt;/u&gt;: &lt;strong&gt;Automating configuration with Chef and Puppet&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Learn how to automate server configurations with two popular configuration tools, Chef and Puppet.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Both offer similar functionality and are used for automating the deployment and configuration of infrastructure. Chef is a Ruby-based automation tool used to configure, manage, deploy and orchestrate applications, and Puppet serves a similar purpose with its own declarative language. Both follow a "pull-based" configuration model, where an agent on each node pulls its configuration from a central server.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Step 10&lt;/u&gt;: &lt;strong&gt;Integrating Security into DevOps with DevSecOps&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Understand the principles of DevSecOps and how to integrate security into your DevOps workflow.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;DevSecOps is an approach to software development that builds security controls into every stage of the software development lifecycle, from development and testing to deployment and operations.&lt;/p&gt;

&lt;p&gt;That's all to get your basics done.&lt;/p&gt;

&lt;p&gt;Resource: &lt;a href="https://kubernetes.io/"&gt;https://kubernetes.io/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is not the exact path you have to follow; feel free to reorder the steps. There are a lot of DevOps tools out there to learn, and once you start your journey you will get to know about many more. I have just given you one way of learning. Start your journey and find your own way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learning Project Ideas:
&lt;/h2&gt;

&lt;p&gt;Some beginner-level project ideas that you can build while learning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;System Monitor&lt;/strong&gt; - Develop a shell script that monitors system usage such as CPU, memory, and disk usage. It can send an alert to the user when certain thresholds are reached.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multi-container application with Docker Compose&lt;/strong&gt; - Create a multi-container application using Docker Compose, which allows you to define and run multiple Docker containers as a single service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Load-balancing with Kubernetes&lt;/strong&gt; - Configure load balancing for a Kubernetes cluster by creating a service that distributes traffic across multiple pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ansible playbook for security hardening&lt;/strong&gt; - Develop an ansible playbook that automates security hardening tasks, such as disabling unnecessary services and setting up firewalls.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Jenkins pipeline&lt;/strong&gt; - Create a Jenkins pipeline that automates the entire software delivery process including building, testing and deploying applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
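&lt;p&gt;To show how small the first idea can start, here is a minimal sketch of a disk-usage check (the 80% threshold and the plain &lt;code&gt;echo&lt;/code&gt; "alert" are placeholders; a real script might email or page you):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Report whether root filesystem usage has crossed a threshold
THRESHOLD=80   # illustrative alert threshold, in percent

check_disk() {
  # df -P prints portable output; field 5 of line 2 is "Use%"
  local usage
  usage=$(df -P / | awk 'NR==2 { gsub("%", "", $5); print $5 }')
  if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "ALERT: / is ${usage}% full"
  else
    echo "OK: / is ${usage}% full"
  fi
}

check_disk
```

&lt;p&gt;Schedule it with &lt;code&gt;cron&lt;/code&gt; to run periodically, and extend it to CPU and memory with tools like &lt;code&gt;top&lt;/code&gt; or &lt;code&gt;free&lt;/code&gt;.&lt;/p&gt;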

&lt;p&gt;&lt;strong&gt;Learning Resources:&lt;/strong&gt;&lt;br&gt;
There are a few free amazing resources that you must check out and get started with:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtube.com/playlist?list=PL9gnSGHSqcnoqBXdMwUTRod4Gi3eac2Ak"&gt;Kunal Kushwaha&lt;/a&gt;: This is an amazing free DevOps boot camp by Kunal Kushwaha on his youtube channel to get started with.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtube.com/playlist?list=PLy7NrYWoggjwV7qC4kmgbgtFBsqkrsefG"&gt;TechWorld with Nana&lt;/a&gt;: To learn basic concepts of various tools, you can check this youtube channel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/@MarcelDempers/featured"&gt;That DevOps Guy&lt;/a&gt;: This youtube channel is completely based on DevOps learnings by MarcelDempers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Pradumnasaraf/DevOps"&gt;Pradumna Saraf&lt;/a&gt;: This is one of the best resources by Pradumna Saraf you can find on his GitHub to get notes, play labs or anything else to get started.&lt;/p&gt;

&lt;p&gt;Sometimes it is quite difficult to set up systems for running DevOps tools on your own, or to get your doubts cleared. A paid course can also be an option in that scenario.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kodekloud.com/learning-path-devops-basics/"&gt;Kodekloud&lt;/a&gt;: This is one of the best DevOps courses out there by Mumshad Mannambeth. They have hands-on lab practices after lecture videos and also have a playground where you can do whatever you have learned through the lectures. And that's what I'm doing too.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.techworld-with-nana.com/devops-bootcamp"&gt;TechWorld with Nana&lt;/a&gt;: Another amazing paid course by Nana you can do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EddieHub&lt;/strong&gt; (on github.com): Community of inclusive Open Source people - Collaboration 1st, Code 2nd!&lt;/p&gt;

&lt;p&gt;Join this amazing GitHub community to get started on your open-source journey while learning DevOps, and grow faster. They welcome newcomers with open arms.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Learn -&amp;gt; Contribute -&amp;gt; Collaborate -&amp;gt; Grow&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;That's All!! GO🏄‍♀️ and start your DevOps journey today!!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>beginners</category>
      <category>kcdchennai</category>
    </item>
  </channel>
</rss>
