<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: vivek kumar sahu</title>
    <description>The latest articles on Forem by vivek kumar sahu (@viveksahu26).</description>
    <link>https://forem.com/viveksahu26</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F496514%2F74cacb3e-53dd-4a0d-a7fa-772defb3edea.png</url>
      <title>Forem: vivek kumar sahu</title>
      <link>https://forem.com/viveksahu26</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/viveksahu26"/>
    <language>en</language>
    <item>
      <title>Operating System (OS) with 32-bit time will stop working after 19 January 2038</title>
      <dc:creator>vivek kumar sahu</dc:creator>
      <pubDate>Mon, 31 Oct 2022 13:48:56 +0000</pubDate>
      <link>https://forem.com/viveksahu26/operating-system-os-with-32-bit-will-stop-working-after-19-january-1938-3g14</link>
      <guid>https://forem.com/viveksahu26/operating-system-os-with-32-bit-will-stop-working-after-19-january-1938-3g14</guid>
      <description>&lt;h2&gt;
  
  
  Do you know when a 32-bit OS will stop working?
&lt;/h2&gt;

&lt;p&gt;Or&lt;/p&gt;

&lt;h2&gt;
  
  
  When will 32-bit computers stop keeping time?
&lt;/h2&gt;

&lt;p&gt;Before moving forward, here is an outline of the content:&lt;br&gt;
1) When exactly will a 32-bit Unix-based OS stop working?&lt;br&gt;
2) Why does it happen at that exact date and time?&lt;br&gt;
3) What are bits, and what is the maximum value of the standard bit widths?&lt;br&gt;
4) A 32-bit value can store quite a large number, so what kind of data manages to cross its maximum?&lt;br&gt;
5) Why does the OS have to keep track of time at all? Is time significant in itself?&lt;br&gt;
6) Is there a particular date from which the OS counts time?&lt;br&gt;
7) Why is 1970 called the origin of Unix time?&lt;br&gt;
8) In what unit is time stored in a 32-bit Unix-based OS?&lt;br&gt;
9) How many seconds accumulate between 1970 and 2038?&lt;br&gt;
10) Which devices will be impacted by this?&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Note:&lt;/code&gt; In this post, OS refers to a 32-bit Linux or Unix-based OS.&lt;/p&gt;

&lt;h2&gt;
  
  
  When exactly will a 32-bit Unix-based OS stop working?
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;19 January 2038, at 03:14:07 UTC&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why will it happen at that particular date and time, down to the exact second?
&lt;/h2&gt;

&lt;p&gt;Because at that moment the 32-bit signed counter that stores the time reaches its maximum value. One second later it overflows, and it can no longer represent the current time correctly.&lt;/p&gt;

&lt;h4&gt;
  
  
  Extra knowledge
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Techie detail:
&lt;/h4&gt;

&lt;p&gt;When the clock reaches Tue, Jan 19 03:14:07 2038, the next second (03:14:08) appears as Fri, Dec 13 20:45:52 1901. Notice the jump in day and date: at one second it is a Tuesday in 2038, and just one second later it is a Friday in 1901. Internally, however, if code calculates the difference from Tue, Jan 19 03:14:07 2038 to Fri, Dec 13 20:45:52 1901, it still gets just 1 second, because the underlying number only advanced by one; that wrap-around is what the roll-over is about.&lt;/p&gt;
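&lt;p&gt;The roll-over can be sketched in a few lines of Python (an illustration, not from the original post): wrap a counter into the signed 32-bit range, then convert it back to a calendar date.&lt;/p&gt;

```python
from datetime import datetime, timezone

def wrap_int32(n):
    """Wrap an integer into signed 32-bit range, like an old 32-bit time_t."""
    return (n + 2**31) % 2**32 - 2**31

last_ok = 2**31 - 1                  # maximum signed 32-bit value
one_later = wrap_int32(last_ok + 1)  # overflows to the most negative value

print(datetime.fromtimestamp(last_ok, tz=timezone.utc))    # 2038-01-19 03:14:07+00:00
print(datetime.fromtimestamp(one_later, tz=timezone.utc))  # 1901-12-13 20:45:52+00:00
```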

&lt;h2&gt;
  
  
  What are bits, and what are their maximum values?
&lt;/h2&gt;

&lt;p&gt;Before moving forward, let me introduce some basic terminology:&lt;br&gt;
&lt;code&gt;bit&lt;/code&gt;: a single block ([]), i.e. one container in which either &lt;code&gt;1&lt;/code&gt; or &lt;code&gt;0&lt;/code&gt; is stored.&lt;br&gt;
Similarly,&lt;br&gt;
2 bits means 2 blocks ([], []), i.e. two containers, each holding either &lt;code&gt;0&lt;/code&gt; or &lt;code&gt;1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;There are some standard bit widths in the OS world.&lt;/p&gt;

&lt;h3&gt;
  
  
  Standard bits:
&lt;/h3&gt;

&lt;p&gt;Let's look at the standard widths: 4-bit, 8-bit, 16-bit, 32-bit, 64-bit and 128-bit.&lt;/p&gt;

&lt;h4&gt;
  
  
  4 bit:
&lt;/h4&gt;

&lt;p&gt;4 containers ( [], [], [], [] ) &lt;/p&gt;

&lt;h4&gt;
  
  
  8 bit:
&lt;/h4&gt;

&lt;p&gt;8 containers ( [], [], [], [], [], [], [], [] )&lt;/p&gt;

&lt;h4&gt;
  
  
  16 bit:
&lt;/h4&gt;

&lt;p&gt;16 containers ( [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [] )&lt;/p&gt;

&lt;h4&gt;
  
  
  32 bit:
&lt;/h4&gt;

&lt;p&gt;32 containers ([], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [])&lt;/p&gt;

&lt;h4&gt;
  
  
  64 bit:
&lt;/h4&gt;

&lt;p&gt;64 containers,( [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [] . . . . . . .  [] )&lt;/p&gt;

&lt;p&gt;Each container or block stores either 1 or 0.&lt;/p&gt;

&lt;h4&gt;
  
  
  Extra Knowledge:
&lt;/h4&gt;

&lt;p&gt;To learn how numbers are converted into binary, see &lt;a href="https://www.tutorialspoint.com/how-to-convert-decimal-to-binary"&gt;here&lt;/a&gt;; for binary to decimal, see &lt;a href="https://www.tutorialspoint.com/how-to-convert-binary-to-decimal"&gt;here&lt;/a&gt;.&lt;/p&gt;
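&lt;p&gt;If you'd rather experiment than read, Python's built-ins do both conversions (a quick illustration, not from the post):&lt;/p&gt;

```python
n = 13
as_binary = bin(n)      # '0b1101', i.e. 8 + 4 + 0 + 1
back = int('1101', 2)   # parse binary digits back to decimal
print(as_binary, back)  # 0b1101 13
```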

&lt;h4&gt;
  
  
  But why does the OS store data in binary?
&lt;/h4&gt;

&lt;p&gt;It's because the hardware understands only two states: &lt;code&gt;True&lt;/code&gt; or &lt;code&gt;False&lt;/code&gt;, &lt;code&gt;1&lt;/code&gt; or &lt;code&gt;0&lt;/code&gt;, switch &lt;code&gt;ON&lt;/code&gt; or switch &lt;code&gt;OFF&lt;/code&gt;.&lt;br&gt;
A computer's hardware works only in &lt;code&gt;binary&lt;/code&gt; format (0 or 1); it does not directly understand the human-readable &lt;code&gt;decimal&lt;/code&gt; digits (0-9). Numbers are encoded straight into binary, while text characters are first mapped to numeric codes (for example ASCII) and those codes are then stored in binary.&lt;/p&gt;

&lt;p&gt;Let's get back to our topic and talk about the &lt;code&gt;maximum&lt;/code&gt; value that can be stored in standard bits.&lt;/p&gt;

&lt;h3&gt;
  
  
  Maximum values of the standard bit widths
&lt;/h3&gt;

&lt;p&gt;What are the maximum values that 4-bit, 16-bit and 32-bit can store?&lt;br&gt;
A value reaches its maximum &lt;code&gt;when every container or block is filled with 1&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  For example:
&lt;/h4&gt;

&lt;p&gt;In &lt;code&gt;4 bit&lt;/code&gt;, the &lt;code&gt;maximum&lt;/code&gt; value that can be stored is &lt;code&gt;15&lt;/code&gt;:&lt;br&gt;
[1][1][1][1] = 1 x 2^3 + 1 x 2^2 + 1 x 2^1 + 1 x 2^0 = 8 + 4 + 2 + 1 = 15&lt;/p&gt;

&lt;p&gt;Similarly in &lt;code&gt;16 bit&lt;/code&gt;:&lt;br&gt;
[1][1][1][1][1][1][1][1][1][1][1][1][1][1][1][1] =  1 x 2^15 + 1 x 2^14 + 1 x 2^13 + 1 x 2^12 + 1 x 2^11 +  1 x 2^10 + 1 x 2^9 + 1 x 2^8 + 1 x 2^7 + . . . . .  + 1 x 2^3 + 1 x 2^2 + 1 x 2^1 + 1 x 2^0 = &lt;code&gt;65535&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Similarly, in &lt;code&gt;32 bit&lt;/code&gt;:&lt;br&gt;
[1][1][1][1][1][1][1][1] . . . . . . . [1][1][1][1][1][1][1][1] = 1 x 2^31 + 1 x 2^30 + 1 x 2^29 + . . . + 1 x 2^3 + 1 x 2^2 + 1 x 2^1 + 1 x 2^0 = 4,294,967,295, i.e. approximately &lt;code&gt;429 crore&lt;/code&gt;.&lt;/p&gt;
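&lt;p&gt;Instead of summing powers of two by hand, note that the "all ones" maximum of an n-bit value is simply 2^n - 1, which a short Python loop confirms (illustration, not from the post):&lt;/p&gt;

```python
# With every container filled with 1, an n-bit value equals 2**n - 1.
for bits in (4, 8, 16, 32, 64):
    print(bits, "bits ->", 2**bits - 1)
# 4 bits -> 15, 8 bits -> 255, 16 bits -> 65535,
# 32 bits -> 4294967295, 64 bits -> 18446744073709551615
```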

&lt;p&gt;So now we know the maximum values that 4-bit, 16-bit and 32-bit can store. One important detail: Unix time is kept in a signed 32-bit integer, so one bit is reserved for the sign and the largest positive value is 2^31 - 1 = 2,147,483,647 (about 214 crore).&lt;/p&gt;

&lt;h2&gt;
  
  
  And what type of data will overflow it? After all, that is such a big number.
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;The data is time, stored as the number of seconds elapsed since 1 January 1970 (the Unix epoch)&lt;/code&gt;. And time always moves forward, so this count keeps increasing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why does the OS have to keep track of time? Is time significant in itself?
&lt;/h2&gt;

&lt;p&gt;For now, accept that time plays a very important role in operating systems. For details, see &lt;a href="https://en.wikipedia.org/wiki/System_time"&gt;system time&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is there a particular date from which the OS counts time?
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;Yes: 00:00:00 UTC on 1 January 1970, known as the Unix epoch.&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is 1970 considered the &lt;code&gt;birth&lt;/code&gt; or &lt;code&gt;origin&lt;/code&gt; of Unix time?
&lt;/h2&gt;

&lt;p&gt;Because Unix itself was created around 1970, and its designers chose 00:00:00 UTC on 1 January 1970 as the reference point (the epoch) from which time is counted. Just as a birthday is counted from the day you were born, Unix time is counted from the epoch. The year 1901 that appears in the roll-over is not the starting point; it is simply the earliest date a signed 32-bit counter can represent, reached when the counter wraps around to its most negative value.&lt;/p&gt;

&lt;h2&gt;
  
  
  And in what unit is the time stored?
&lt;/h2&gt;

&lt;p&gt;It's a small unit of time called a &lt;code&gt;second&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How many seconds accumulate from 1970 to 2038?
&lt;/h2&gt;

&lt;p&gt;So a 32-bit Unix-based OS has been counting time in seconds since 1 January 1970. For example, after the first day, it must hold 24 hrs x 60 min x 60 sec = 86,400 seconds. That number already exceeds what a 16-bit register can hold, since it crosses the 16-bit maximum of 65,535. Now suppose a whole year passes, so you are in 1971: the total number of seconds the OS has to store is 365 days x 24 hrs x 60 min x 60 sec = 31,536,000 (about 3 crore).&lt;/p&gt;

&lt;p&gt;Look at that number for a moment: can you appreciate just how big 31,536,000 is?&lt;/p&gt;

&lt;p&gt;Now let's calculate the total number of seconds in 10 years: 10 x 31,536,000 = 315,360,000, roughly 31 crore.&lt;/p&gt;

&lt;p&gt;And since Unix time lives in a signed 32-bit integer, its limit is 2,147,483,647 seconds, about 214 crore. Can you feel how quickly the passing years eat into that limit?&lt;br&gt;
It is this limit that gets exhausted on 19 Jan 2038.&lt;br&gt;
Let's verify it roughly.&lt;br&gt;
From 1970 to 19 Jan 2038 is 68 full years, plus 18 days, plus the time 03:14:07 UTC. Don't worry about the exact value; an approximation is enough:&lt;br&gt;
1) 68 full years ---&amp;gt; 68 x 31,536,000 = 2,144,448,000 seconds (~214 crore, ignoring leap days)&lt;br&gt;
2) Then 18 days into 2038 ---&amp;gt; 18 x 24 x 60 x 60 = 1,555,200 seconds (~15 lakh)&lt;/p&gt;

&lt;p&gt;Adding these, plus the 17 leap days in between and the final 11,647 seconds of 03:14:07, gives exactly 2,147,483,647 seconds from the epoch to 19 Jan 2038 03:14:07 UTC: the maximum value of a signed 32-bit integer, i.e. every usable bit filled with 1. Storing a larger count needs more containers or boxes, which is why 64-bit time was adopted over 32-bit.&lt;/p&gt;
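&lt;p&gt;The exact figure can be cross-checked with Python's datetime (an illustration, not from the original post): the seconds from the Unix epoch, 1 January 1970 UTC, to 19 Jan 2038 03:14:07 UTC come out to exactly the signed 32-bit maximum.&lt;/p&gt;

```python
from datetime import datetime, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
limit = datetime(2038, 1, 19, 3, 14, 7, tzinfo=timezone.utc)

seconds = int((limit - epoch).total_seconds())
print(seconds)               # 2147483647
print(seconds == 2**31 - 1)  # True
```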

&lt;p&gt;Once the counter would need to exceed that maximum, a 32-bit OS can no longer represent the time correctly; the fix is to move to a 64-bit time representation, whose limit is astronomically far away.&lt;/p&gt;

&lt;p&gt;That is exactly why 32-bit Unix-based operating systems will stop keeping correct time after 19 Jan 2038 at 03:14:07 UTC.&lt;/p&gt;

&lt;h2&gt;
  
  
  Which devices will be impacted by this?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;As far as operating systems are concerned, Windows is okay; it has used a different method for dates from the start.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There are still some issues for 32-bit versions of the Linux operating system, but they are being fixed. Things should be okay by 2038 as long as you are able to update to a 64-bit version of the operating system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Android is based on Linux, so a 32-bit Android device might not handle dates beyond 2038; if so, this should be fixed well before 2038.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That was all from this blog; I hope you learned something interesting today. Keep learning and keep upgrading yourself.&lt;/p&gt;

&lt;p&gt;And last but not least, let me know in the comments if you find any error in the blog. That would be really appreciated ☺️&lt;/p&gt;

&lt;p&gt;Keep this flow chart for future reference&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i0uqj8_S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6etfn9enlqxi9xtckuuo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i0uqj8_S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6etfn9enlqxi9xtckuuo.png" alt="Image description" width="800" height="1003"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://lucid.app/lucidspark/ba0c97cb-9a4c-42e5-8972-d7e539e1e378/edit?invitationId=inv_c1752b95-4a22-4f5a-be11-e59631353f0b#"&gt;https://lucid.app/lucidspark/ba0c97cb-9a4c-42e5-8972-d7e539e1e378/edit?invitationId=inv_c1752b95-4a22-4f5a-be11-e59631353f0b#&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  References:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://en.m.wikipedia.org/wiki/Year_2038_problem"&gt;https://en.m.wikipedia.org/wiki/Year_2038_problem&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://betterprogramming.pub/the-end-of-unix-time-what-will-happen-1b1a25ec1c20#:%7E:text=The%20most%20imminent%20overflow%20date,03%3A14%3A07%20UTC"&gt;https://betterprogramming.pub/the-end-of-unix-time-what-will-happen-1b1a25ec1c20#:~:text=The%20most%20imminent%20overflow%20date,03%3A14%3A07%20UTC&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How does the Kernel boot up?</title>
      <dc:creator>vivek kumar sahu</dc:creator>
      <pubDate>Sat, 23 Jul 2022 22:35:00 +0000</pubDate>
      <link>https://forem.com/viveksahu26/how-kernel-boot-up--bje</link>
      <guid>https://forem.com/viveksahu26/how-kernel-boot-up--bje</guid>
      <description>&lt;p&gt;Hey tech lovers. This time back with one of the interesting topic of Linux, &lt;strong&gt;how Operating System starts&lt;/strong&gt;. The reason behind writing this blog is curiousity within me pulls down to know what's going under the hood when OS starts. After knowing can stop myself to share with you all. What I believe that an individual can not learn everything but on sharing things its become easier for learner to understand things properly within short spam of time. Always try to share your knowledge. It enhances  your knowledge as well as saves time of others.&lt;/p&gt;

&lt;h3&gt;
  
  
  How the Linux Kernel Boots?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj7raa00deltn9so74hy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj7raa00deltn9so74hy.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's begin with the topic "how Linux System starts or boots up". &lt;/p&gt;

&lt;h3&gt;
  
  
  What happens after pressing the Power ON button of the computer?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0zch33kzrtg869omsq6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0zch33kzrtg869omsq6.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the user powers on the machine, the power supply energises the motherboard, and the motherboard starts the CPU. After starting, the CPU first clears old data from its registers. It then jumps to a fixed memory address known as the reset vector (for example 0xFFFFFFF0 on x86) to fetch its first instruction. That address is mapped not to the hard disk but to the ROM chip holding the firmware, i.e. the BIOS. The CPU executes the BIOS program, and that's how the BIOS starts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;-&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Basically, both BIOS and UEFI are firmware. The general difference between them is that BIOS is the traditional interface and UEFI is the modern one; your system has one or the other.&lt;br&gt;
To check which one your system uses, run the &lt;code&gt;efibootmgr&lt;/code&gt; command (or check whether &lt;code&gt;/sys/firmware/efi&lt;/code&gt; exists); on a BIOS system it will report that EFI variables are not supported.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  BIOS(basic input/output system) performs POST (Power On Self Test) checks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxzcyq7wmspv3fsembv5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxzcyq7wmspv3fsembv5.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the BIOS starts, it performs various checks on the hardware components, such as a CPU test, a RAM test, and so on. This testing process is known as POST (Power On Self Test); the main task of the BIOS here is to verify that the hardware is functioning correctly. If POST fails, the computer may not be usable, and the boot process does not continue. After POST completes successfully, the next job of the BIOS is to search the attached disks for the boot sector. And do you know what the boot sector contains? The boot record, popularly known as the Master Boot Record (MBR). The BIOS loads the MBR's code&lt;br&gt;
into RAM, executes it, and hands control over to the MBR.&lt;/p&gt;
&lt;h3&gt;
  
  
  What does the MBR do?
&lt;/h3&gt;

&lt;p&gt;The MBR is the first 512-byte sector on the hard drive and includes the partition table: 446 bytes are allocated for the actual boot code, and the rest is assigned to the partition table.&lt;br&gt;
Because the boot record must be so small, it is not very smart and does not understand filesystem structures. Therefore the sole purpose of the MBR is to locate and load GRUB. To make that possible, GRUB's core image is placed in the gap between the MBR and the first partition on the drive. After loading GRUB into RAM, the MBR hands control over to the GRUB program.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;At sector 0 ---&amp;gt; MBR is located of size 512 bytes
Between sector 0 and 2048  ---&amp;gt; GRUB is located of size which is of size 1MiB(2048*512 bytes = 1024KiB = 1MiB)
From 2049 sectors onwards ---&amp;gt; Partitioning of disk starts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: A hard disk is divided into sectors; the sector is its smallest addressable unit. Traditionally, 1 sector = 512 bytes.&lt;/p&gt;
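&lt;p&gt;The MBR layout described above can be sketched in Python (a toy illustration with an in-memory sector, since reading a real disk such as &lt;code&gt;/dev/sda&lt;/code&gt; needs root): 446 bytes of boot code, a 64-byte partition table (4 entries of 16 bytes), and a 2-byte boot signature, 0x55 0xAA.&lt;/p&gt;

```python
SECTOR = 512       # smallest unit of a (traditional) hard disk
BOOT_CODE = 446    # bytes reserved for the boot loader stub

sector = bytearray(SECTOR)              # stand-in for sector 0 of a disk
sector[510], sector[511] = 0x55, 0xAA   # mark it as a valid boot sector

boot_code = sector[:BOOT_CODE]          # the tiny loader that finds GRUB
partition_table = sector[BOOT_CODE:510] # 4 entries x 16 bytes = 64 bytes
signature_ok = bytes(sector[510:512]) == b'\x55\xaa'

print(len(boot_code), len(partition_table), signature_ok)  # 446 64 True
```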

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fooel4sfg2kiavee01b4y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fooel4sfg2kiavee01b4y.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqumju2w6xag4b47y62kq.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqumju2w6xag4b47y62kq.jpeg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  GRUB(Grand Unified Boot Loader)
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Do You Know !!&lt;/p&gt;
&lt;h4&gt;
  
  
  What is the use of GRUB?
&lt;/h4&gt;

&lt;p&gt;To know the answer, you first have to know what booting is.&lt;br&gt;
In technical terms, booting means copying the kernel image from the hard disk into memory and then executing it. That copying is performed by the GRUB program, and along with the kernel image, GRUB also passes some parameters to it. This is where GRUB comes into use.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But the biggest problem for GRUB is how to find the kernel image on the disk. Listen carefully and don't get confused. I know you are thinking that searching for a file in Linux is very easy, and that is 100% right, but only when your OS is running (i.e. the kernel plus the necessary drivers and programs are up).&lt;br&gt;
At this point, though, only GRUB is running; your system has not started yet. Neither the kernel nor the required drivers and programs are running, so searching for any file looks like an impossible task. How, then, does GRUB find the kernel image on the disk?&lt;/p&gt;

&lt;p&gt;Let’s see how GRUB solves these problems. &lt;/p&gt;
&lt;h4&gt;
  
  
  GRUB has to answer 2 questions:
&lt;/h4&gt;

&lt;p&gt;1) What is the kernel image and its parameters?&lt;br&gt;
2) And how can they be found on the hard disk?&lt;/p&gt;

&lt;p&gt;To load the kernel image into memory, GRUB has to find the image and its parameters on the disk. To access the hard disk, GRUB takes the help of the BIOS or UEFI, which exposes the disk through Logical Block Addressing (LBA): a universal, simple way to address data on any disk. (Recall how the BIOS itself searched the disks for the MBR.) In addition, GRUB can read the partition table (the index that maps partitions to their locations on the disk) and has built-in read-only support for common filesystems. That is how GRUB solves the problem of locating the kernel image. Once it finds the kernel image (e.g. &lt;strong&gt;vmlinuz&lt;/strong&gt;) and its parameters, it loads them into memory, and along with that it also loads the &lt;strong&gt;initramfs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Below, BOOT_IMAGE refers to the kernel image together with its parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BOOT_IMAGE=/boot/vmlinuz-4.15-generic  root=UUID=179hfy4955  ro  quiet splash  vt.handoff=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;kernel image ---&amp;gt; &lt;code&gt;/boot/vmlinuz-4.15-generic&lt;/code&gt;&lt;br&gt;
parameters ---&amp;gt; &lt;code&gt;root, ro (read only), quiet, splash, vt.handoff&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;initramfs&lt;/strong&gt; is an archive containing the kernel modules for the hardware required at boot time, along with initialization scripts, drivers, and more. These are the modules that connect software to hardware. You may wonder why the kernel image does not simply contain all its modules, and why they are kept separately as &lt;code&gt;initramfs&lt;/code&gt;. Suppose all kernel modules were inside the kernel image: then loading the kernel would drag every module into memory whether the kernel needed it or not, wasting RAM and CPU. So the modules are kept separate, and only those the kernel actually needs at that moment are taken from &lt;code&gt;initramfs&lt;/code&gt;. For example, the printer driver is needed when a printer is attached, but not during boot; loading it at boot would be a pure waste. To know more about &lt;code&gt;initramfs&lt;/code&gt;, &lt;a href="https://wiki.ubuntu.com/Initramfs" rel="noopener noreferrer"&gt;refer here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Once GRUB loads the kernel image and starts it, control transfers from GRUB to the kernel, and from here the kernel's work begins.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A high-level idea of what the kernel does:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Kernel initialization&lt;/li&gt;
&lt;li&gt;CPU inspection&lt;/li&gt;
&lt;li&gt;Memory inspection&lt;/li&gt;
&lt;li&gt;Device bus discovery&lt;/li&gt;
&lt;li&gt;Device discovery&lt;/li&gt;
&lt;li&gt;Auxiliary kernel subsystem setup&lt;/li&gt;
&lt;li&gt;Root filesystem mount&lt;/li&gt;
&lt;li&gt;User space start(systemd)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;During initialization, the kernel must mount a root filesystem before starting init or systemd; we don't need to go into the details here. The kernel's final job is to start systemd and hand control over to it. This is the end of the boot process. At this point, the Linux kernel and systemd are running, but they cannot yet perform productive tasks for the end user because nothing else is running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Systemd&lt;/strong&gt; is a system and service manager for Linux: the mother of all processes and programs. All your services and user-space programs are managed and controlled by it. In the next blog, we will look at systemd in detail.&lt;/p&gt;

&lt;h4&gt;
  
  
  Flowchart:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://lucid.app/lucidchart/4345e110-7988-438b-a961-a39113f2d341/edit?invitationId=inv_d794a834-d90d-4931-a62d-94b2220a8c71" rel="noopener noreferrer"&gt;https://lucid.app/lucidchart/4345e110-7988-438b-a961-a39113f2d341/edit?invitationId=inv_d794a834-d90d-4931-a62d-94b2220a8c71&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://opensource.com/article/17/2/linux-boot-and-startup" rel="noopener noreferrer"&gt;https://opensource.com/article/17/2/linux-boot-and-startup&lt;/a&gt;&lt;br&gt;
&lt;a href="https://learning.oreilly.com/library/view/hands-on-booting-learn/9781484258903/" rel="noopener noreferrer"&gt;https://learning.oreilly.com/library/view/hands-on-booting-learn/9781484258903/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://learning.oreilly.com/library/view/how-linux-works/9781098128913/" rel="noopener noreferrer"&gt;https://learning.oreilly.com/library/view/how-linux-works/9781098128913/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks for reading; I hope you liked it and gained some knowledge. I tried my best to walk you through the topic step by step. Thank you, and see you in the next blog!&lt;/p&gt;

</description>
      <category>linux</category>
      <category>boot</category>
      <category>grub</category>
      <category>bios</category>
    </item>
    <item>
      <title>GitHub Action</title>
      <dc:creator>vivek kumar sahu</dc:creator>
      <pubDate>Sun, 05 Jun 2022 10:47:01 +0000</pubDate>
      <link>https://forem.com/viveksahu26/github-action-d03</link>
      <guid>https://forem.com/viveksahu26/github-action-d03</guid>
      <description>&lt;p&gt;The name of the topic is GitHub and Action. GitHub represents GitHub repository, whereas Action represents action that is performed after some activity( in technical terms known as events) by the user on the github repository. &lt;br&gt;
Let's understand with examples: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Whenever someone pushes(activity or events) some commits to your repository, perform xyz action, or &lt;/li&gt;
&lt;li&gt;Whenever someone creates new issues(activity or events) in a repository, perform abc action, etc. &lt;/li&gt;
&lt;li&gt;In short, users perform activities such as: push commits, create PR, create new issues ----&amp;gt; in repository ----&amp;gt; and based on those activities action is performed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So what we get from these examples is that a user performs some activity on the repository, and according to that activity, a corresponding action runs. Whatever action is to be performed is written as a workflow, and workflows live under &lt;code&gt;.github/workflows&lt;/code&gt;. Which workflow runs depends entirely on the type of activity performed, because in the end it is the activity, or event, that triggers the workflow.&lt;br&gt;
Separate workflows are defined for different events or activities: for example, one workflow for push activity and another for new issues. It is up to the maintainers of the project what action is hooked to each event. Workflows are written in YAML.&lt;/p&gt;

&lt;p&gt;Now, let's understand GitHub Action in more &lt;code&gt;technical&lt;/code&gt; terms. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is a CI/CD platform that allows you to automate build, test and deploy steps. If you are a beginner and don't know the terms build, test and deploy, for now assume each is a bunch of commands wrapped together for simplicity.
Generally, build and test actions run for every PR activity, so when a new PR is created, those commands are executed in response. Deploy actions, in turn, run for every merged PR, executing their own bunch of commands.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Components of GitHub Action
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Workflows
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Workflows represent actions executed automatically when triggered by some kind of event, i.e. an activity performed by a user.&lt;/li&gt;
&lt;li&gt;Each workflow is defined for some activity or event; a workflow is a configurable automated process that will run one or more jobs.&lt;/li&gt;
&lt;li&gt;Defined under &lt;code&gt;.github/workflows&lt;/code&gt; and written in YAML.&lt;/li&gt;
&lt;li&gt;A repo can have multiple workflows for performing different sets of events.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Events
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Activity performed by the user triggers workflows to be executed.
&lt;code&gt;Examples:&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create PR&lt;/li&gt;
&lt;li&gt;Create issues&lt;/li&gt;
&lt;li&gt;Push commits&lt;/li&gt;
&lt;li&gt;Release cycle&lt;/li&gt;
&lt;li&gt;Merge successful PRs&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Jobs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Each workflow contains one or more jobs.&lt;/li&gt;
&lt;li&gt;Each job contains one or more steps.&lt;/li&gt;
&lt;li&gt;A step is a script or action to be executed.&lt;/li&gt;
&lt;li&gt;All steps of a job are executed inside the same runner.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Action
&lt;/h3&gt;

&lt;p&gt;Example: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When a user creates a new issue(activity or events), it triggers a corresponding workflow. As a result, the label of the issue is automatically added. &lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Runner
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A runner is a server on which workflows run when they get triggered.&lt;/li&gt;
&lt;li&gt;Each runner runs a single job at a time.&lt;/li&gt;
&lt;li&gt;Each workflow is executed in a fresh VM or container.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Understanding the workflow file:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yx13DiQo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6jsnux6d2us32kqbx8pe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yx13DiQo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6jsnux6d2us32kqbx8pe.png" alt="Image description" width="523" height="660"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: learn-github-actions
on: [push]
jobs:
  for-tutorial:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: echo "Hello, Setting up your first GA."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;name: learn-github-actions&lt;/code&gt; → Name of the GitHub Action workflow&lt;br&gt;
&lt;code&gt;on: [push]&lt;/code&gt; → Name of the event or activity. GitHub keeps an eye on the activity performed on the repo; as soon as the user performs the activity, the workflow file is triggered and run.&lt;br&gt;&lt;br&gt;
&lt;code&gt;jobs:&lt;/code&gt; → list of one or more jobs&lt;br&gt;
&lt;code&gt;for-tutorial:&lt;/code&gt; → name of the very first job&lt;br&gt;
&lt;code&gt;runs-on: ubuntu-latest&lt;/code&gt; → machine on which the job will run&lt;br&gt;
&lt;code&gt;steps:&lt;/code&gt; → list of one or more scripts or commands.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;uses: actions/checkout@v3&lt;/code&gt; → step 1 --&amp;gt; Pull the code base from GitHub onto the runner.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;run: echo "Hello, Setting up your first GA."&lt;/code&gt;  ---&amp;gt; Execute the command/script on the Ubuntu machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check out this small &lt;a href="https://github.com/viveksahu26/news-app/blob/main/.github/workflows/testing-github-action.yml"&gt;example&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;GitHub Actions are similar in concept to Newton's Third Law of Motion: every action performed by a user on a GitHub repo has a custom reaction.&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This was a basic overview of GitHub Actions for beginners. Hope you liked it. That's all from my side for this weekend. Do follow for more interesting blogs.   &lt;/p&gt;

&lt;p&gt;Comment below and let me know which topics feel complex; I will try my best to make them simple and interesting for you.&lt;/p&gt;

</description>
      <category>github</category>
      <category>beginners</category>
      <category>githubaction</category>
    </item>
    <item>
      <title>Linux Security Modules</title>
      <dc:creator>vivek kumar sahu</dc:creator>
      <pubDate>Thu, 02 Jun 2022 01:11:39 +0000</pubDate>
      <link>https://forem.com/viveksahu26/linux-security-modules-4e3a</link>
      <guid>https://forem.com/viveksahu26/linux-security-modules-4e3a</guid>
      <description>&lt;p&gt;What we will learn?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User dependent Security: DAC&lt;/li&gt;
&lt;li&gt;DAC&lt;/li&gt;
&lt;li&gt;LSM&lt;/li&gt;
&lt;li&gt;User-independent Security: MAC&lt;/li&gt;
&lt;li&gt;MAC&lt;/li&gt;
&lt;li&gt;Flow of command&lt;/li&gt;
&lt;li&gt;Types of LSM&lt;/li&gt;
&lt;li&gt;DAC vs MAC&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Linux is Secured?
&lt;/h2&gt;

&lt;p&gt;When we talk about security in Linux, users, files, and processes play the most important role in protecting your OS. In short, the OS is all about files. Among them, a few files are very critical and important, and therefore they must be secured. To me, securing files and securing running processes is what security means in Linux. To access files or processes, users must have permission; unlike the root user, other users have restricted and limited permissions. Securing files can be achieved in two ways: one is user-dependent and the other is user-independent. &lt;/p&gt;

&lt;h3&gt;
  
  
  What does it mean by user dependent security?
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Discretionary Access Control(DAC)
&lt;/h4&gt;

&lt;p&gt;In the early days, when security depended on the users of Linux, the user played the most important role in securing your environment. If the user is root, it has unlimited access to files and processes; otherwise, a normal user has restricted or limited access. User-dependent means that access to files and processes is decided by who the user is: if the user has the permission, it can access the file or process, otherwise it cannot. The user may be the owner of the file, the root user, or any other user belonging to a group which has access to the file. If an attacker gets access to the root user, your system will be controlled by the attacker; though getting root access is not easy, it's not an impossible task. &lt;br&gt;
The terminology given for user-dependent security in the Linux world is Discretionary Access Control (DAC). By definition, DAC-based access control is a means of restricting access to objects (files, processes, etc.) based on the identity of subjects or groups (users). &lt;/p&gt;
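&lt;p&gt;To make the DAC idea concrete, here is a rough Python sketch of the owner/group/other read check (a simplification: the real kernel also handles ACLs, capabilities, and more, and the function name here is my own):&lt;/p&gt;

```python
import stat

def dac_allows_read(mode, file_uid, file_gid, uid, gids):
    """Simplified DAC read check: root, then owner, then group, then other."""
    if uid == 0:                       # root bypasses ordinary DAC checks
        return True
    if uid == file_uid:                # owner class
        return bool(mode & stat.S_IRUSR)
    if file_gid in gids:               # group class
        return bool(mode & stat.S_IRGRP)
    return bool(mode & stat.S_IROTH)   # everyone else

# A 0o600 file is readable by its owner (uid 1000) but not by uid 1001:
print(dac_allows_read(0o600, 1000, 1000, 1000, [1000]))  # True
print(dac_allows_read(0o600, 1000, 1000, 1001, [1001]))  # False
```

&lt;p&gt;Notice that root (uid 0) passes unconditionally, which is exactly the weakness that user-independent security addresses.&lt;/p&gt;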

&lt;h4&gt;
  
  
  Example:
&lt;/h4&gt;

&lt;p&gt;Let’s take an example and try to understand through it.&lt;br&gt;
Suppose you are running a web server which hosts several websites. To allow access to the websites, we have to open several ports in the firewall. Hackers may use these ports to crack the system through security exploits. If that happens, the hacker gains the access permissions of the web server process, since each port is associated with the web server that serves the pages. To serve web pages, a web server process usually needs read permission on the document root and write permission on the /tmp and /var/tmp directories. With these permissions, a hacker can write a malicious script in the /tmp directory and execute it; from there, the system is in the attacker's control. We saw how one infected process can pose a huge security risk to all services running on the server. So, the conclusion of the story is that the attacker got in through the web server user, wrote a script, and executed it. Now the question arises: is there any way to protect against this? Yes, it's a policy-based solution known as Security Policies, Security Modules, or Linux Security Modules (LSMs). &lt;/p&gt;

&lt;h2&gt;
  
  
  Linux Security modules(LSM)
&lt;/h2&gt;

&lt;p&gt;Since these policies are used for securing your environment, they are known as Linux security policies, or in more technical terms, Linux Security Modules. Now one more question arises: where are these policies attached, or hooked? In short, these policies are fitted after the DAC checks. To understand this in more detail, you need to understand how commands are executed; until you do, you won't be able to see where the policy is added. &lt;br&gt;
For example, let's say a hacker, after getting access to the root user, wants to access files present in the /etc directory to change its configuration, and is also interested in seeing confidential files such as the passwd file and the shadow file. &lt;br&gt;
 As we know, each user in Linux is associated with some files, and each file is associated with a user. Suppose there is a policy whose operation is to block certain files from being read. Once that policy is attached after the DAC checks, it no longer matters whether the user has DAC permission: the policy blocks the opening of that particular file, independent of which user tries to read it. This is the exciting solution of securing Linux by a user-independent method. Basically, user-independent means that even though the root user (or any other user) has unlimited access, it can still be denied operations such as accessing files or processes if an external policy is applied. The terminology used for user-independent, policy-based security is MAC.&lt;/p&gt;

&lt;h3&gt;
  
  
  user-independent security
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Mandatory access control(MAC)
&lt;/h4&gt;

&lt;p&gt;Mandatory access control is a security method where what is allowed is explicitly defined by policy. A user (normal or root) or program cannot do any more than the policy confining it allows.&lt;br&gt;
MAC is a user-independent type of security.&lt;br&gt;
To understand it, you need to follow the flow chart.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oj9cBMyW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ckvg4cpyck8pd2czvyfs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oj9cBMyW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ckvg4cpyck8pd2czvyfs.png" alt="Image description" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Flow of open() system call:
&lt;/h3&gt;

&lt;p&gt;The LSM framework is integrated into the kernel and provides each LSM with hooks on essential kernel functions. The diagram above shows the coarse call flow of the open() system call invoked by some process: &lt;br&gt;
Whenever a user tries to open or view a file, the open() syscall is triggered.&lt;br&gt;
A process in user space calls open() on the file path. &lt;br&gt;
The system call is dispatched and the path string is used to obtain a kernel file object and inode object. You can relate the inode table to the table of contents on the first page of a notebook: remember how in childhood you wrote chapter names with their corresponding page numbers. Similarly, the inode table maps a file name to its corresponding address.&lt;br&gt;
The parameters and syntax of the call are checked; if they are incorrect, an error is returned. &lt;br&gt;
If everything above goes fine, the normal Discretionary Access Control (DAC) file permissions are checked. What does that mean? For whatever operation is performed (like file access or process access), the kernel checks whether the user has permission to execute it: is this user the owner of the file, a member of a group which has permission to access the file, or the root user? If the user doesn't have permission, the system call is terminated and an error is returned to user space.&lt;br&gt;
&lt;code&gt;Note&lt;/code&gt;: As we discussed above, since the attacker has access to the root user, it can easily pass the security checks up to here.&lt;br&gt;
Then the question arises: where does the policy fit, or where can it be hooked? The answer is just after the DAC checks. Now the attacker faces a problem, because the next security layer is user-independent: the Linux security modules, or Linux security policies. From here onward, any user, whether root or normal, can only run the operations the policy allows. &lt;br&gt;
To attach a policy to a hook, the LSM framework is used; it provides hooks on which to attach your policy. As above, we applied a policy which blocks execution of binary files and access to any file or folder inside the /etc directory. If a single LSM hook returns an error, the policy blocks the operation, the system call is terminated, and an error is returned to user space. &lt;br&gt;
Finally, when all security checks pass, the file is opened for the process and a new file descriptor is returned to the process in user space.&lt;/p&gt;
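&lt;p&gt;The call flow above can be sketched in a few lines of Python (purely illustrative: the hook and function names are invented for this sketch, and real LSM hooks live inside the kernel):&lt;/p&gt;

```python
# Hypothetical policy: deny every open() of files under /etc,
# regardless of which user (even root) makes the call.
def etc_blocker(path, uid):
    return path.startswith("/etc")        # True means deny

LSM_HOOKS = [etc_blocker]                 # policies hooked after DAC

def open_path(path, uid, dac_ok=True):
    """Mirrors the open() flow: DAC check first, then every LSM hook."""
    if not dac_ok:
        return "EACCES (DAC)"             # denied by user/group permissions
    for hook in LSM_HOOKS:
        if hook(path, uid):
            return "EPERM (LSM)"          # denied by policy, even for root
    return "fd"                           # all checks passed: file descriptor

print(open_path("/etc/shadow", 0))        # EPERM (LSM)
print(open_path("/home/user/notes.txt", 1000))  # fd
```

&lt;p&gt;Even uid 0 is denied by the policy hook, which is the whole point of MAC.&lt;/p&gt;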

&lt;h3&gt;
  
  
  Types of LSMs
&lt;/h3&gt;

&lt;p&gt;There are a total of 9 different LSMs: &lt;code&gt;SELinux&lt;/code&gt;, &lt;code&gt;SMACK&lt;/code&gt;, &lt;code&gt;AppArmor&lt;/code&gt;, &lt;code&gt;TOMOYO&lt;/code&gt;, &lt;code&gt;Yama&lt;/code&gt;, &lt;code&gt;LoadPin&lt;/code&gt;, &lt;code&gt;SafeSetID&lt;/code&gt;, &lt;code&gt;Lockdown&lt;/code&gt;, and &lt;code&gt;BPF&lt;/code&gt;. Of these, &lt;code&gt;SELinux&lt;/code&gt; and &lt;code&gt;AppArmor&lt;/code&gt; are the popular ones: &lt;code&gt;SELinux&lt;/code&gt; is used in the Red Hat Linux distribution, whereas &lt;code&gt;AppArmor&lt;/code&gt; is used in Ubuntu and SUSE. &lt;br&gt;
Complexity: different LSMs use different languages to write their policies, and believe me, those policies are hard to write. By the way, unless you are a professional or a Linux administrator, you don't need to add any policies: by default, all necessary policies are already applied by your distribution, but for that, the LSMs must be enabled.&lt;/p&gt;

&lt;h2&gt;
  
  
  DAC vs MAC
&lt;/h2&gt;

&lt;p&gt;DAC (Discretionary Access Control) based access control is a means of restricting access to objects based on the identity of subjects or groups. For decades, Linux only had DAC-based access controls in the form of user and group permissions. One of the problems with DAC is that its primitives are transitive in nature: a privileged user can create other privileged users, and those users gain access to restricted objects.&lt;br&gt;
With MAC (Mandatory Access Control), the subjects (e.g., users, processes, threads) and objects (e.g., files, sockets, memory segments) each have a set of security attributes. These security attributes are centrally managed through MAC policies. In the case of MAC, the user/group does not make any access decision; the access decision is managed by the security attributes and security policies.&lt;/p&gt;

</description>
      <category>lsm</category>
      <category>linux</category>
      <category>security</category>
      <category>policy</category>
    </item>
    <item>
      <title>Beginners guide to eBPF...</title>
      <dc:creator>vivek kumar sahu</dc:creator>
      <pubDate>Fri, 25 Feb 2022 09:42:03 +0000</pubDate>
      <link>https://forem.com/viveksahu26/beginners-guide-to-ebpf-4en3</link>
      <guid>https://forem.com/viveksahu26/beginners-guide-to-ebpf-4en3</guid>
      <description>&lt;p&gt;If you are really curious about knowing different technologies. You will definetily love this technology too.  &lt;/p&gt;

&lt;h2&gt;
  
  
  What is eBPF??
&lt;/h2&gt;

&lt;p&gt;eBPF is a mechanism to attach your own program, or any kind of custom program, to a particular event (here, an event refers to a system call). Each time the event gets triggered by some process, your custom program will run. &lt;/p&gt;

&lt;p&gt;I know learning anything with examples or practicals makes your concepts crystal clear. So, what are you waiting for? Let's start with an example, or practical.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Practical
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Step:1&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Write a simple hello world program. &lt;br&gt;
 &lt;em&gt;Note&lt;/em&gt;: The program should be written in the C language.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;int hello_world(void *ctx) {
    bpf_trace_printk("Hello World!! \\n");
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Step:2&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, the second step is to load your program with the help of &lt;code&gt;BPF&lt;/code&gt;. Basically, &lt;code&gt;BPF&lt;/code&gt; will compile your C language program into &lt;code&gt;bytecode&lt;/code&gt;. For now, please ignore the details of &lt;code&gt;BPF&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Loading program in BPF
b = BPF(text=program)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Step:3&lt;/strong&gt;
The third step is to attach your program to an event. Here we will attach it to the clone() or fork() event: whenever any process is forked or cloned, your program will get executed.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Getting system call name for fork or clone
clone = b.get_syscall_fnname("clone")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Attaching hello world program to an event i.e. clone or fork event.
b.attach_kprobe(event=clone, fn_name="hello_world")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above, the &lt;strong&gt;attach_kprobe&lt;/strong&gt; function is used. In simple terms, a &lt;code&gt;kprobe&lt;/code&gt; instruments kernel functions; in the kernel world, a &lt;code&gt;syscall&lt;/code&gt; is just such a function, or probe. Basically, &lt;code&gt;kprobe&lt;/code&gt; has knowledge of all &lt;code&gt;syscalls&lt;/code&gt; inside the kernel. That's why we call the &lt;strong&gt;attach_kprobe&lt;/strong&gt; function to bind our &lt;code&gt;hello world&lt;/code&gt; (or any custom) program to the &lt;code&gt;clone&lt;/code&gt; syscall event. &lt;/p&gt;

&lt;p&gt;Note: Each and every system call, or syscall, is referred to as an event. In general terminology, an event is just an action.  &lt;/p&gt;

&lt;p&gt;These are the 3 main steps. So, finally, our program looks like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;hello_world.py&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from bcc import BPF
from time import sleep

# Programs print hello world !!
program = """
int hello_world(void *ctx) {
    bpf_trace_printk("Hello World!! \\n");
    return 0;
}
"""
# Loading program in BPF
b = BPF(text=program)

# Getting system call name for fork or clone
clone = b.get_syscall_fnname("clone")

# Attaching function i.e. program to an event i.e. clone event.
b.attach_kprobe(event=clone, fn_name="hello_world")

# printing
b.trace_print()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note&lt;/em&gt;&lt;/strong&gt;: You must install the &lt;a href="https://www.brendangregg.com/blog/2019-01-01/learn-ebpf-tracing.html" rel="noopener noreferrer"&gt;bcc&lt;/a&gt; tools. &lt;/p&gt;

&lt;p&gt;Run the above program in a separate terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo python3 hello_world.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In another terminal, run any commands like date, ls, pwd, etc., and watch "Hello World!!" being printed in the terminal in which the &lt;code&gt;hello_world.py&lt;/code&gt; program is running.&lt;/p&gt;

&lt;p&gt;That's all for this practical. Hope you enjoyed it and got the basic idea.&lt;/p&gt;

&lt;p&gt;Let's go back in history to understand where the concept of &lt;code&gt;eBPF&lt;/code&gt; arises from.&lt;/p&gt;

&lt;p&gt;Basically, &lt;code&gt;eBPF&lt;/code&gt; is built on top of classic BPF (cBPF). &lt;br&gt;
  In the initial days, the &lt;strong&gt;Berkeley Packet Filter&lt;/strong&gt; (BPF) was &lt;br&gt;
  used to filter packets from user space.&lt;/p&gt;

&lt;h3&gt;
  
  
  But how ??
&lt;/h3&gt;

&lt;p&gt;By attaching pre-defined &lt;code&gt;bytecode&lt;/code&gt; programs to chains of &lt;br&gt;
   syscalls. In the above example we attached one program to &lt;br&gt;
   one system call. But in &lt;code&gt;BPF&lt;/code&gt;, many syscalls were &lt;br&gt;
  attached to different programs. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note&lt;/em&gt;&lt;/strong&gt;: At that time, &lt;code&gt;BPF&lt;/code&gt; didn't have a mechanism to attach any custom program to &lt;code&gt;syscalls&lt;/code&gt;. Only &lt;code&gt;pre-defined&lt;/code&gt; or &lt;code&gt;in-built&lt;/code&gt; programs were attached to events. It was totally static in nature, which means that users couldn't replace those built-in programs with their own custom programs, unlike what &lt;code&gt;eBPF&lt;/code&gt; provides. So, basically, &lt;code&gt;eBPF&lt;/code&gt; is an extended version of &lt;code&gt;BPF&lt;/code&gt; which allows running custom programs in the kernel.&lt;/p&gt;

&lt;p&gt;Now, let's go deeper and try to understand &lt;strong&gt;the working of eBPF&lt;/strong&gt;.&lt;br&gt;
&lt;code&gt;eBPF&lt;/code&gt; uses the bpf() syscall to take code (the user's custom program, written in the C language) from user space; just after the program is loaded by &lt;code&gt;BPF&lt;/code&gt;, it is compiled into &lt;code&gt;bytecode&lt;/code&gt;. After that, it injects the &lt;code&gt;BPF-bytecode&lt;/code&gt; program into the kernel and binds it to the specified event (in the above example, the &lt;code&gt;clone&lt;/code&gt; or &lt;code&gt;fork&lt;/code&gt; system call) with the help of &lt;code&gt;kprobe&lt;/code&gt; (which has knowledge of all system calls inside the kernel). In parallel, at the time of injecting the program into the kernel, the &lt;code&gt;BPF-bytecode&lt;/code&gt; program is checked and verified to make sure it is safe from a security perspective. At execution time, the &lt;code&gt;BPF-bytecode&lt;/code&gt; program is first compiled by the &lt;code&gt;JIT&lt;/code&gt; compiler into native instructions according to the architecture of the computer. And that's how a sandboxed custom program runs each time the event triggers; just after execution, the program is removed from the kernel space for security reasons. That's how &lt;code&gt;eBPF&lt;/code&gt; helps run user-space programs inside the kernel space, adding more programmability to the kernel.&lt;/p&gt;

&lt;p&gt;So, whenever the event or system call is triggered by any process running on your system, it will result in the execution of your custom program (hello world in the above example).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pqv2xioidlhwe2nn6d9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pqv2xioidlhwe2nn6d9.png" alt="Image description"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;h2&gt;
  
  
  Relating &lt;code&gt;eBPF&lt;/code&gt; with &lt;code&gt;Javascript&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;eBPF&lt;/code&gt; does to Linux what &lt;code&gt;JavaScript&lt;/code&gt; does to HTML. (Sort of.) &lt;/p&gt;

&lt;h4&gt;
  
  
  But how ??,
&lt;/h4&gt;

&lt;p&gt;In the initial days of static HTML websites, JavaScript let you define mini programs that run on events such as mouse clicks, mouse hover, page scrolling, etc. On any event, the corresponding JavaScript program runs in a safe virtual machine in the browser. Similarly with &lt;code&gt;eBPF&lt;/code&gt;: instead of a fixed kernel, you can now write mini programs that run on events like syscalls, such as forking, executing, reading, writing, etc., and that run in a safe virtual machine in the kernel. In reality, &lt;code&gt;eBPF&lt;/code&gt; is more like the V8 virtual machine that runs JavaScript than like JavaScript itself. eBPF is part of the Linux kernel.&lt;/p&gt;

&lt;p&gt;The more polished the JavaScript you add, the more attractive and beautiful your website looks. Similarly, with eBPF it depends on the user what type of program is added; it varies from use case to use case.&lt;/p&gt;

&lt;h2&gt;
  
  
  UseCase of eBPF
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Networking&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Hooks: Traffic Control, sockets, XDP&lt;/li&gt;
&lt;li&gt;Anti-DDoS&lt;/li&gt;
&lt;li&gt;Load balancing&lt;/li&gt;
&lt;li&gt;Routing, overlay, NAT&lt;/li&gt;
&lt;li&gt;TCP control
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Tracing and Monitoring&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Hooks: kprobes, uprobes, tracepoints and perf events&lt;/li&gt;
&lt;li&gt;Inspect, trace, and profile kernel or user-space functions.&lt;/li&gt;
&lt;li&gt;Aggregate and correlate metrics in the kernel, return meaningful data
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Others&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Security(LSM such as AppArmor)&lt;/li&gt;
&lt;li&gt;Infrared protocols&lt;/li&gt;
&lt;li&gt;File System, Storage&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Reference:&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=lrSExTfS-iQ" rel="noopener noreferrer"&gt;Beginner guide to eBPF by LizRice&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.brendangregg.com/blog/2019-01-01/learn-ebpf-tracing.html" rel="noopener noreferrer"&gt;Learn eBPF&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ebpf.io/what-is-ebpf/#what-is-ebpf" rel="noopener noreferrer"&gt;eBPF&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ebpf.io/summit-2020/" rel="noopener noreferrer"&gt;Keynotes&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ebpf</category>
      <category>linux</category>
      <category>beginners</category>
    </item>
    <item>
      <title>OSI Model</title>
      <dc:creator>vivek kumar sahu</dc:creator>
      <pubDate>Fri, 26 Nov 2021 21:54:49 +0000</pubDate>
      <link>https://forem.com/viveksahu26/osi-model-420f</link>
      <guid>https://forem.com/viveksahu26/osi-model-420f</guid>
      <description>&lt;h1&gt;
  
  
  History of Internet
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HI1hc7gA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3qokht5s22or18hp1vgr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HI1hc7gA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3qokht5s22or18hp1vgr.png" alt="Image description" width="700" height="585"&gt;&lt;/a&gt;&lt;br&gt;
There was a need to share information. But the computers at that time were large and immobile, and in order to make use of information stored in any one computer, one had to either travel to the site of the computer or have magnetic computer tapes sent through the conventional postal system.&lt;br&gt;
Another reason was the Cold War, which acted as a catalyst for the formation of the Internet. At that time, this Internet was known as ARPANET.&lt;br&gt;
To share information, it was decided to create networks between different computers present in different universities over a wide area, and that is where ARPANET arose.&lt;/p&gt;

&lt;h1&gt;
  
  
  ARPANET ??
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--87egdO0E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1xlfkuihnjgzmf43am0a.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--87egdO0E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1xlfkuihnjgzmf43am0a.jpg" alt="Image description" width="318" height="159"&gt;&lt;/a&gt;&lt;br&gt;
Advanced Research Projects Agency Network, ARPANET or ARPAnet began development in 1966 by the United States ARPA. ARPANET was a Wide Area Network (collection of computers connected by a communications network over a wide geographic area) linking many Universities and research centers, was first to use packet switching (information is broken into small segments of data known as packets and then reassembled when received at the destination), and was the beginning of what we consider the Internet today. ARPANET was created to make it easier for people to access computers, improve computer equipment, and to have a more effective communication method for the military. &lt;br&gt;
In short, ARPANET was a concept of connecting two or more computers to share information between them. It became possible after protocols came into existence.&lt;/p&gt;

&lt;h3&gt;
  
  
  FACT:
&lt;/h3&gt;

&lt;p&gt;January 1, 1983 is considered the official birthday of the Internet.&lt;/p&gt;

&lt;h1&gt;
  
  
  Protocols
&lt;/h1&gt;

&lt;p&gt;You can think of a protocol as a language which allows communication between two or more people. People can only share information among themselves if they speak the same language. As we all know, every language has its own set of rules and grammar; in a similar way, protocols also have a set of rules for communicating.&lt;br&gt;
Let's come to the bookish definition of a protocol. A protocol is a standard set of rules that allow electronic devices to communicate with each other. These rules include what type of data may be transmitted, what commands are used to send and receive data, and how data transfers are confirmed.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Protocols
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iZ7y45Z3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/meh8i4slj622ss5390q4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iZ7y45Z3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/meh8i4slj622ss5390q4.jpg" alt="Image description" width="800" height="815"&gt;&lt;/a&gt;&lt;br&gt;
Post Office Protocol (POP)  ---&amp;gt; designed for receiving incoming emails.&lt;br&gt;
Simple Mail Transfer Protocol (SMTP) ---&amp;gt; designed to send and distribute outgoing email.&lt;br&gt;
File Transfer Protocol (FTP)  ---&amp;gt; designed to move files from one computer to another.&lt;br&gt;
HyperText Transfer Protocol (HTTP) ---&amp;gt; designed for transferring hypertext between two or more systems.&lt;br&gt;
Transmission Control Protocol (TCP) ---&amp;gt; divides any message into a series of packets that are sent from source to destination and reassembled at the destination.&lt;br&gt;
Internet Protocol (IP) ---&amp;gt; the IP addresses in packets help route them through different nodes in a network until they reach the destination system. &lt;/p&gt;
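&lt;p&gt;The TCP behaviour described above, splitting a message into packets and reassembling them at the destination, can be sketched in a few lines of Python (an illustration of the idea only, not a real TCP implementation):&lt;/p&gt;

```python
def segment(message, size):
    """Split a message into (sequence number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Packets may arrive in any order; sequence numbers restore it."""
    return "".join(payload for _, payload in sorted(packets))

pkts = segment("hello, arpanet", 4)       # 4 packets of up to 4 chars each
print(reassemble(list(reversed(pkts))))   # hello, arpanet
```

&lt;p&gt;Even when the packets arrive in reverse order, the sequence numbers let the receiver rebuild the original message.&lt;/p&gt;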

&lt;p&gt;Let’s get back to the topic of ARPANET. Communication between these computers was not complete until 1970, when they began using the Network Control Protocol (NCP). NCP led to the development and use of the first computer-to-computer protocols like Telnet and File Transfer Protocol (FTP).&lt;/p&gt;

&lt;h1&gt;
  
  
  Fall of NCP and rise of TCP
&lt;/h1&gt;

&lt;p&gt;NCP could not keep up with the demands of the network and the variety of networks connected. Its downfall began, and TCP came into the picture to meet this demand. TCP allowed different kinds of computers on different networks to "talk" to each other. By 1983, TCP/IP had become the only approved protocol on ARPANET, replacing the earlier NCP because of its versatility and modularity. Hence, ARPANET and the Defense Data Network officially changed to the TCP/IP standard on January 1, 1983: the birth of the Internet.&lt;/p&gt;

&lt;h1&gt;
  
  
  OSI Model
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hEveifoM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30qe0amqs1n5corcovt5.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hEveifoM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30qe0amqs1n5corcovt5.jpeg" alt="Image description" width="500" height="615"&gt;&lt;/a&gt;&lt;br&gt;
The OSI model is a learning tool for understanding networking step by step. &lt;br&gt;
It describes how information or data from an application on one computer travels through a physical medium to an application on another computer. It is considered an architectural model for inter-computer communication.&lt;br&gt;
For example: much like a car is composed of independent functions that combine to accomplish the end goal of moving the car forward (a battery powers the electronics, an alternator recharges the battery, an engine rotates a driveshaft, an axle transfers the driveshaft’s rotation to the wheels, and so on),&lt;br&gt;
the OSI model is divided into seven layers, each of which fulfills a very specific function.&lt;/p&gt;

&lt;h1&gt;
  
  
  Characteristics of OSI Model
&lt;/h1&gt;

&lt;p&gt;The OSI model divides the whole task into seven layers. Each layer differs in terms of its Protocol Data Unit (PDU), the form the data takes at that layer. Before proceeding further, get familiar with the terms encapsulation (wrapping data with a header as it moves down the stack) and decapsulation (stripping those headers as it moves back up). &lt;/p&gt;

&lt;p&gt;The OSI model breaks the responsibilities of the network into seven distinct layers.&lt;br&gt;
Each layer takes data from the layer above and encapsulates it to form its Protocol Data Unit (PDU), the term used to describe the data at each layer. PDUs are also part of TCP/IP. At the Application, Presentation, and Session layers the PDU is simply called data: the application information being prepared for communication. The Transport layer PDU is the segment; this layer uses ports to distinguish which process on the local system is responsible for the data. The Network layer PDU is the packet; packets are distinct pieces of data routed between networks. The Data Link layer PDU is the frame: each packet is broken up into frames, checked for errors, and sent out on the local network. The Physical layer transmits the frame as bits over the medium.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5jDBHP4L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l93syu082d5lmqglb6pz.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5jDBHP4L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l93syu082d5lmqglb6pz.jpeg" alt="Image description" width="337" height="813"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Application Layer (PDU ---&amp;gt; data or message) → 7th
&lt;/h1&gt;

&lt;p&gt;The Application layer is the top layer of the OSI model and is the one the end user interacts with every day. This layer is not where actual applications live; rather, it provides the interface for applications that use it, such as a web browser.&lt;br&gt;
L7 has PDU as [data]&lt;/p&gt;

&lt;h1&gt;
  
  
  Presentation Layer (PDU ---&amp;gt; data or message) → 6th Layer
&lt;/h1&gt;

&lt;p&gt;It translates data from a sender-dependent format into a common format, and changes the common format into the receiver-dependent format. It acts as the data translator for a network. It also helps in compressing the data.&lt;br&gt;
L6  has PDU as [data]&lt;/p&gt;

&lt;h1&gt;
  
  
  Session Layer (PDU ---&amp;gt; data or message) → 5th layer
&lt;/h1&gt;

&lt;p&gt;It is the 5th layer of the OSI model. It establishes, maintains, and terminates the connection used to send and receive data. &lt;br&gt;
L5  has PDU as [data]&lt;/p&gt;

&lt;h1&gt;
  
  
  Transport Layer ( PDU ---&amp;gt; segment ) → 4th Layer
&lt;/h1&gt;

&lt;p&gt;It is the 4th layer. It transfers data between applications. It receives data from the upper layer and converts it into smaller units known as segments. It is also known as the end-to-end layer because it provides a point-to-point connection between source and destination. Two protocols can be used in this layer: TCP and UDP.&lt;/p&gt;

&lt;h4&gt;
  
  
  NOTE:
&lt;/h4&gt;

&lt;p&gt;The responsibility of the network layer is to transmit data from one computer to another, whereas the responsibility of the transport layer is to deliver the message to the correct process (i.e. the correct port).&lt;/p&gt;

&lt;p&gt;L4 has the segment as its PDU, consisting of port numbers + the data of the previous layer.&lt;/p&gt;
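&lt;p&gt;The role of ports can be seen with Python's standard socket module: a tiny UDP sender and receiver on localhost. The receiving socket's port number is what identifies the receiving process at the Transport layer. This is just a loopback illustration (the OS assigns the port here), not production code:&lt;/p&gt;

```python
import socket

# Receiver: bind to an OS-assigned port on localhost; this port number
# identifies the receiving process at the Transport layer.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(2)
port = recv.getsockname()[1]

# Sender: the datagram is addressed by IP address + port number.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", ("127.0.0.1", port))

data, addr = recv.recvfrom(1024)
print(data)  # b'hello'
recv.close()
send.close()
```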

&lt;h1&gt;
  
  
  Network Layer ( PDU ---&amp;gt; packet) → 3rd Layer
&lt;/h1&gt;

&lt;p&gt;It is the 3rd layer of the OSI model. It manages device addressing and tracks the location of devices on the network. It determines the best path to move data from source to destination based on network conditions. &lt;br&gt;
The protocols used to route network traffic are known as Network Layer protocols.&lt;br&gt;
It also adds the source and destination IP addresses to the packet header, which helps identify devices on the internet.&lt;/p&gt;

&lt;p&gt;L3 has the packet as its PDU, consisting of IP addresses + the data of the previous layer.&lt;/p&gt;
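&lt;p&gt;The "best path" decision can be illustrated with a toy forwarding table using Python's standard ipaddress module: the route whose prefix matches the destination IP with the longest prefix wins. The table entries and interface names here are made up for the example:&lt;/p&gt;

```python
import ipaddress

# Hypothetical forwarding table: network prefix -> outgoing interface.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "gateway",  # default route
}

def next_hop(dst):
    """Return the interface for the longest-prefix match on dst."""
    ip = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

assert next_hop("10.1.2.3") == "eth1"    # /16 beats /8
assert next_hop("10.9.9.9") == "eth0"
assert next_hop("8.8.8.8") == "gateway"  # only the default route matches
```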

&lt;h1&gt;
  
  
  Data-Link Layer ( PDU ---&amp;gt; frame) → 2nd Layer
&lt;/h1&gt;

&lt;p&gt;It groups 1’s and 0’s into chunks known as frames and hands them to the Physical layer to be put on the wire. The Data Link layer also adds a header and trailer to the frame. The header contains the destination and source hardware (MAC) addresses, and frames are delivered to the destination address mentioned in the header. &lt;/p&gt;

&lt;p&gt;This layer also performs flow control, keeping the rate at which data is transferred in step with the processing speed of the receiving computer.&lt;br&gt;
It is also responsible for forwarding frames within the local network; routing packets between networks is the job of the Network layer.&lt;/p&gt;

&lt;p&gt;L2 has the frame as its PDU, consisting of MAC addresses + the data of the previous layer.&lt;/p&gt;

&lt;h1&gt;
  
  
  Physical Layer ( PDU ---&amp;gt; bit ) → 1st Layer
&lt;/h1&gt;

&lt;p&gt;It is responsible for the transfer of bits. It carries 1’s and 0’s between two nodes. These bits are transferred in the form of electric pulses, pulses of light, radio waves, etc., depending on the type of medium used, because in the end computers/machines/nodes understand only bits (“0” or “1”). &lt;/p&gt;

&lt;p&gt;L1 has bits as its PDU.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G0eiUsVH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yquvnxjujlpy4962yucq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G0eiUsVH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yquvnxjujlpy4962yucq.png" alt="Image description" width="800" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Medium (wire) --------- data (bits)&lt;br&gt;
Ethernet --------&amp;gt; electric pulses&lt;br&gt;
Wifi --------&amp;gt; radio waves&lt;br&gt;
Fiber --------&amp;gt; pulses of light&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6Bi6SbQB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xz6ldo7pach5eio4uk6b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Bi6SbQB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xz6ldo7pach5eio4uk6b.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4vA3KFa_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/txr6vgicj6vx5l6jk015.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4vA3KFa_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/txr6vgicj6vx5l6jk015.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;During the Cold War there was a need to share information between computers (which at the time were immobile). This led to the birth of ARPANET: a group of computers connected by wire over a wide area. Once computers were connected through a medium such as wire or cable, they needed a common set of rules and grammar in order to send data, so the first protocol, NCP, was created. With the help of NCP it became possible to share information, but due to limitations in NCP, TCP was formed. &lt;br&gt;
The study of how data flows from one computer to another is described by the OSI MODEL. The model has 7 layers. Each layer has its own Protocol Data Unit (PDU), receiving data from the previous layer and adding more information to it, such as a port, an IP address, or a MAC address. This concept is known as encapsulation. &lt;/p&gt;

</description>
      <category>osi</category>
      <category>networkstack</category>
      <category>tutorial</category>
      <category>linux</category>
    </item>
    <item>
      <title>How to begin your contribution to open source projects?</title>
      <dc:creator>vivek kumar sahu</dc:creator>
      <pubDate>Sun, 24 Oct 2021 08:09:42 +0000</pubDate>
      <link>https://forem.com/viveksahu26/how-to-begin-your-contribution-to-open-source-projects-kg0</link>
      <guid>https://forem.com/viveksahu26/how-to-begin-your-contribution-to-open-source-projects-kg0</guid>
      <description>&lt;h2&gt;
  
  
  Why to contribute?
&lt;/h2&gt;

&lt;p&gt;From a &lt;code&gt;professional point of view&lt;/code&gt; it is said that you should always &lt;code&gt;apply what you have learnt&lt;/code&gt;. There are two main places where you can apply your knowledge: a &lt;code&gt;company&lt;/code&gt; and &lt;code&gt;open source projects&lt;/code&gt;. Being a student, you should choose an open source project, because a company is not going to hire you in the early stage of college.&lt;br&gt;
Other reasons, as everyone says, are the perks: experience working on real projects, an edge on your resume during placements, networking with people across different parts of the world, and so on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why does it seem overwhelming to contribute to open source projects for beginners?
&lt;/h2&gt;

&lt;p&gt;It’s because:&lt;br&gt;
You don't know about the project. &lt;br&gt;
You have never used the product. &lt;br&gt;
You have a fear of a huge code base (but in reality you don’t have to work with the whole codebase, only a very small part of it).&lt;br&gt;
There is a new way of communication, through Slack. Switching from &lt;code&gt;Instagram&lt;/code&gt; and &lt;code&gt;Facebook&lt;/code&gt; to &lt;code&gt;Slack&lt;/code&gt; feels like your world has completely changed. Initially, when you join Slack, you get bored and start excusing yourself, since you don’t understand anything about the discussion going on inside the channel, so you tell yourself you'd better leave, this place is not for you. I totally agree that switching to Slack is hard, but believe me: if you endure the talks, meetings, and discussions going on inside Slack for at least one month, you will see that each morning when you get up, you will open your &lt;code&gt;email&lt;/code&gt;, &lt;code&gt;slack&lt;/code&gt; or &lt;code&gt;github&lt;/code&gt; instead of &lt;code&gt;Insta&lt;/code&gt;, &lt;code&gt;Facebook&lt;/code&gt;, etc.&lt;br&gt;
&lt;code&gt;NOTE&lt;/code&gt;:&lt;br&gt;
Don’t assume you can do everything in one day: install the project, read its documentation, and so on. Please give it time. During installation of the project you may hit problems or get stuck on something, which may take a while, because at the beginning you don’t have any background knowledge. &lt;/p&gt;

&lt;h2&gt;
  
  
  How much time does it take for a complete beginner to get aware about the project?
&lt;/h2&gt;

&lt;p&gt;In short, you need at least one month to know what a project is all about, what its use case is, what problems it solves in real industries, and many more things. &lt;br&gt;
A rough structure for the time to give each part of the project in the beginning:&lt;br&gt;
Installation → 1-6 days (varies from person to person).&lt;br&gt;
Reading the project documentation → 1 month.&lt;br&gt;
Use the project until you get a feel for it. &lt;br&gt;
Watch hands-on sessions for the project and try to reproduce them in your local workspace.&lt;br&gt;
Join the meetings. If you don’t understand anything in a meeting, that’s ok; stay with it. &lt;br&gt;
Slack → 1-2 hours daily.&lt;br&gt;
Start asking your doubts in Slack. Don’t feel shy about it. &lt;/p&gt;

&lt;h2&gt;
  
  
  How to begin your contribution with Kyverno ??
&lt;/h2&gt;

&lt;p&gt;The best part about contributing to Kyverno is that there are two options available for contributors:&lt;br&gt;
&lt;code&gt;Firstly&lt;/code&gt;, contributing to policies. This doesn’t require any coding experience, only basic knowledge of Kubernetes resources, YAML, and Kyverno itself. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;Secondly&lt;/code&gt;, through code: fixing bugs, working on enhancements, etc.&lt;/p&gt;

&lt;p&gt;So, until you are ready to contribute through code, you can contribute through policies, as policies are written in the same language as Kubernetes resource manifests, i.e. YAML. This is the best part of Kyverno. &lt;/p&gt;

&lt;p&gt;That's all from my side. I hope this blog is helpful for complete beginners.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Internship at Nirmata on Kyverno(CNCF sanboxed project)...</title>
      <dc:creator>vivek kumar sahu</dc:creator>
      <pubDate>Mon, 18 Oct 2021 19:31:24 +0000</pubDate>
      <link>https://forem.com/viveksahu26/internship-at-nirmata-on-kyvernocncf-sanboxed-project-4b2c</link>
      <guid>https://forem.com/viveksahu26/internship-at-nirmata-on-kyvernocncf-sanboxed-project-4b2c</guid>
      <description>&lt;h2&gt;
  
  
  About myself ....
&lt;/h2&gt;

&lt;p&gt;Hello everyone!! I am Vivek Kumar Sahu, a sophomore (at the time of writing this blog) in Electronics and Telecommunication at Jabalpur Engineering College, India. My fields of interest are cloud-based technology and the Linux world. I started exploring these technologies just after lockdown started and kept learning Linux and cloud-based technologies. After learning, I wanted to apply my knowledge and skills through some projects. Then I came to know that the best way to make use of your skills and knowledge is by contributing to an open source project or by working in a company. Being a student, open source was the better choice, and from there open source took root in my mind. Seriously, if you are a complete beginner, for 1-2 months you have to roam around the projects, meetings, Slack channels, etc. Once you have an understanding of the project, you then have to roam around the code written in it. It's not easy in the beginning, but it becomes easier later on. &lt;br&gt;
After exploring different projects from CNCF, I came across a Sandbox project known as &lt;a href="https://kyverno.io/"&gt;Kyverno&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
Picking Kyverno for contribution turned out to be the right decision. For a newbie contributor it is one of the better projects to start with, and the reasons are:&lt;br&gt;
&lt;code&gt;Firstly&lt;/code&gt;, Kyverno policies for Kubernetes resources are written in the same language (i.e. YAML) as the resource manifests in Kubernetes. &lt;br&gt;
And &lt;code&gt;secondly&lt;/code&gt;, it is easy to relate different Kubernetes concepts while applying different types of policies to different resources. So it looked like quite a relatable project for my skills. &lt;/p&gt;

&lt;p&gt;After one month of exploring this project, I got an opportunity from Nirmata (the company that drives the Kyverno project) to work as an intern for 3 months on Kyverno. &lt;br&gt;
The mentor assigned to me was &lt;a href="https://www.linkedin.com/in/shuting-zhao-1a1aa912b/"&gt;shuting zhao&lt;/a&gt;, currently a maintainer of the Kyverno project. She is really calm, helpful and supportive in nature. She understands beginner contributors and gives them time to settle down. &lt;/p&gt;
&lt;h2&gt;
  
  
  About Project
&lt;/h2&gt;

&lt;p&gt;Kyverno is a policy engine for Kubernetes. In technical terms, Kyverno is the caretaker of Kubernetes in terms of security. If you want to understand Kyverno in more detail, I recommend you go through these blogs: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/pulse/kyverno-native-policy-manager-kubernetes-code-arun-singh/?trackingId=xlfXp3sLzav3R71xJ%2FtCEA%3D%3D"&gt;https://www.linkedin.com/pulse/kyverno-native-policy-manager-kubernetes-code-arun-singh/?trackingId=xlfXp3sLzav3R71xJ%2FtCEA%3D%3D&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/techloop/understanding-kyverno-policies-7e2d8651d7b1"&gt;https://medium.com/techloop/understanding-kyverno-policies-7e2d8651d7b1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;During the internship, I had to work on a &lt;code&gt;feature enhancement&lt;/code&gt;: &lt;a href="https://github.com/kyverno/kyverno/issues/1821"&gt;To extend the test command to support or handle mutate Policy&lt;/a&gt;, and also &lt;br&gt;
&lt;a href="https://github.com/kyverno/policies/issues/88"&gt;To cover all sample policies&lt;/a&gt;. I had the same reaction you probably have now when I first read this project headline: I didn't understand anything. So my recommendation to newbie contributors is that the best way to understand any project is to read the documentation first, then install the project on your local machine and use it, play with it, until you get a feel for it.&lt;/p&gt;
&lt;h3&gt;
  
  
  Description of the feature enhancement
&lt;/h3&gt;

&lt;p&gt;Let me explain the &lt;code&gt;enhancement&lt;/code&gt; I had to work on. The feature to be added to the &lt;code&gt;Kyverno-CLI&lt;/code&gt; was to extend the &lt;code&gt;test&lt;/code&gt; command to support or handle &lt;code&gt;mutate policy&lt;/code&gt;. At that time the &lt;code&gt;test&lt;/code&gt; command supported &lt;code&gt;validate&lt;/code&gt; policies but not &lt;code&gt;mutate&lt;/code&gt; and &lt;code&gt;generate&lt;/code&gt; policies. So it was an extension of the &lt;code&gt;test&lt;/code&gt; command to also support &lt;code&gt;mutate policy&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Let me break the topic into two parts, &lt;br&gt;
&lt;code&gt;What is the test command, its uses, and how it works&lt;/code&gt; and &lt;br&gt;
&lt;code&gt;What is mutate policy&lt;/code&gt;, and describe them separately.&lt;/p&gt;
&lt;h3&gt;
  
  
  What is a test and Why to use it and how does it work?
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;test&lt;/code&gt; is a command in the Kyverno-cli, like the other commands, which provides the facility to test policies on resources before deploying a policy directly to the cluster. More specifically, the &lt;code&gt;test&lt;/code&gt; command lets the user check whether a policy works properly on resources or not. Since this relates to the security of Kubernetes resources, it is recommended to check policies before applying them directly to live clusters.&lt;/p&gt;

&lt;p&gt;But how does the test command make sure that a policy applied to resources is applied correctly? &lt;br&gt;
Good question. Here comes the actual use of the &lt;code&gt;test&lt;/code&gt; command. Internally, the &lt;code&gt;test&lt;/code&gt; command compares the actual result generated by the Kyverno engine with the expected result provided by the user. &lt;br&gt;
&lt;code&gt;Note&lt;/code&gt;: the expected result provided by the user may be wrong, but the actual result obtained from the engine cannot be. If you want to see the actual result, use the &lt;code&gt;apply&lt;/code&gt; command. To know more about the apply command, &lt;a href="https://kyverno.io/docs/kyverno-cli/#apply"&gt;visit here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let's understand in more detail how the test command works.&lt;br&gt;
The user runs the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$kyverno  test  &amp;lt;path of folders containing test.yaml file&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First of all, the &lt;code&gt;test&lt;/code&gt; command looks for its configuration file, i.e. &lt;code&gt;test.yaml&lt;/code&gt;, in the folder provided by the user.&lt;/p&gt;

&lt;p&gt;Structure of &lt;code&gt;test.yaml&lt;/code&gt; file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: mytests
policies: (contains list of path of policies file)
  - &amp;lt;path/to/policy.yaml&amp;gt;
  - &amp;lt;path/to/policy.yaml&amp;gt;
resources: (contains list of path of resource file)
  - &amp;lt;path/to/resource.yaml&amp;gt;
  - &amp;lt;path/to/resource.yaml&amp;gt;
variables: variables.yaml (optional)
results:
- policy: &amp;lt;name&amp;gt;
  rule: &amp;lt;name&amp;gt;
  resource: &amp;lt;name&amp;gt;
  kind: &amp;lt;name&amp;gt;
  patchedResource: &amp;lt;path/to/patchedResource.yaml&amp;gt; (path of patchedResource file) 
  status/result: &amp;lt;pass/fail/skip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The policies listed under the &lt;code&gt;policies&lt;/code&gt; section of the &lt;code&gt;test.yaml&lt;/code&gt; file are fetched from the paths provided by the user, and similarly all resources are fetched from the paths listed under the resources section. After the policies and resources are fetched, the policy uses its &lt;code&gt;match/exclude&lt;/code&gt; block to &lt;code&gt;select&lt;/code&gt; the resources one by one. If the policy doesn’t select a resource, the policy will &lt;code&gt;skip&lt;/code&gt; that resource, which means the rule of the policy won’t be applied to it. &lt;br&gt;
But if the policy selects the resource, the rule of the policy will be applied to it. &lt;br&gt;
&lt;code&gt;Lastly&lt;/code&gt;, whether or not the policy was applied to a resource, the Kyverno engine generates a result known as the &lt;code&gt;actual result&lt;/code&gt; or &lt;code&gt;engine response&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;On the &lt;code&gt;other hand&lt;/code&gt;, the user also needs to provide the &lt;code&gt;expected result&lt;/code&gt; or &lt;code&gt;user-defined result&lt;/code&gt; under the &lt;code&gt;results&lt;/code&gt; section of the &lt;code&gt;test.yaml&lt;/code&gt; file. The &lt;code&gt;expected result&lt;/code&gt; is what the user thinks the result will be after the policy is applied to the resources. The &lt;code&gt;common fields&lt;/code&gt; on whose basis the &lt;code&gt;actual result&lt;/code&gt; or &lt;code&gt;engine response&lt;/code&gt; is compared with the &lt;code&gt;expected result&lt;/code&gt; or &lt;code&gt;user-defined&lt;/code&gt; result are: &lt;br&gt;
&lt;code&gt;policy&lt;/code&gt; name (name of the policy), &lt;br&gt;
&lt;code&gt;resource&lt;/code&gt; name (name of the resource), &lt;br&gt;
&lt;code&gt;rule&lt;/code&gt; name, &lt;br&gt;
&lt;code&gt;kind&lt;/code&gt; (type of resource), &lt;br&gt;
&lt;code&gt;patchedResource&lt;/code&gt; (path of the patched, i.e. updated, resource file), &lt;br&gt;
&lt;code&gt;namespace&lt;/code&gt; (optional), &lt;br&gt;
&lt;code&gt;result/status&lt;/code&gt; (pass/fail/skip).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Pass&lt;/code&gt; ----&amp;gt; the policy selects the resource and the rule is applied to it, and the patchedResource obtained from the engine response equals the patched resource provided by the user.&lt;br&gt;
&lt;code&gt;Fail&lt;/code&gt;  ----&amp;gt; the policy selects the resource and the rule is applied to it, but the patchedResource obtained from the engine response does not equal the patched resource provided by the user.&lt;br&gt;
&lt;code&gt;Skip&lt;/code&gt;  ----&amp;gt; the policy doesn’t select the resource because the resource description doesn’t match the match/exclude block of the policy's rule; the rule is therefore not applied and the policy skips the resource. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MOEQddXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zvwxp6jjvck27q49nohm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MOEQddXp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zvwxp6jjvck27q49nohm.png" alt="Alt Text" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  What is Mutate Policy ?
&lt;/h3&gt;

&lt;p&gt;As the name suggests, mutate means mutating or updating something. Here the update could be anything: removing a field, adding a field, or replacing a field on a Kubernetes resource. A resource updated through a mutate policy is known as a patched resource. &lt;/p&gt;
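&lt;p&gt;Conceptually, the mutation behaves like a recursive merge of the patch into the resource. Below is a simplified Python sketch of that idea; the real strategic merge patch has additional rules (for example, merge keys for lists) that are omitted here:&lt;/p&gt;

```python
import copy

# Simplified sketch of patchStrategicMerge semantics: recursively merge
# the patch dict into the resource dict. List handling is omitted.
def merge_patch(resource, patch):
    out = copy.deepcopy(resource)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_patch(out[key], value)  # merge nested maps
        else:
            out[key] = value                         # add or replace field
    return out

pod = {"metadata": {"name": "myapp-pod"}}
patch = {"metadata": {"labels": {"foo": "bar"}}}
patched = merge_patch(pod, patch)
assert patched == {"metadata": {"name": "myapp-pod",
                                "labels": {"foo": "bar"}}}
```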

&lt;p&gt;This was the overall design part of how the test command will work with  mutate policy.&lt;/p&gt;

&lt;p&gt;To know more about the &lt;code&gt;test&lt;/code&gt; command, &lt;a href="https://kyverno.io/docs/kyverno-cli/#test"&gt;switch here&lt;/a&gt;.&lt;br&gt;
To know more about writing mutate policies, &lt;a href="https://kyverno.io/docs/writing-policies/mutate/"&gt;switch here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Hands-on
&lt;/h2&gt;

&lt;p&gt;Let's see a hands-on example using the &lt;code&gt;test&lt;/code&gt; command with a &lt;code&gt;mutate policy&lt;/code&gt;. &lt;/p&gt;
&lt;h3&gt;
  
  
  For ClusterPolicy:-
&lt;/h3&gt;

&lt;p&gt;Let's try to understand the &lt;code&gt;fields&lt;/code&gt; in the policy first.&lt;br&gt;
1) &lt;code&gt;kind&lt;/code&gt;: &lt;code&gt;ClusterPolicy&lt;/code&gt; --&amp;gt; which means the policy selects resources in all namespaces, i.e. cluster-wide.&lt;br&gt;
2) &lt;code&gt;metadata: &lt;br&gt;
        name: add-labels&lt;/code&gt;  ---&amp;gt; this is the name of the policy&lt;/p&gt;

&lt;p&gt;3) &lt;code&gt;spec: &lt;br&gt;
      rules:&lt;br&gt;
        - name: add-labels&lt;/code&gt;  ---&amp;gt; name of the rule. &lt;br&gt;
Here, coincidentally, the name of the policy is the same as the rule name, but don't think they are related; they are totally independent.&lt;br&gt;
NOTE: under the rules section there can be one or more rules, and each rule is of one type, i.e. validate, mutate, or generate. &lt;/p&gt;

&lt;p&gt;Let's try to understand the policy through the below diagram.&lt;br&gt;
 &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vbnXSnlU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7i396xk1tzyel2y90d2s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vbnXSnlU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7i396xk1tzyel2y90d2s.png" alt="Alt Text" width="800" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The policy says: select all resources of &lt;code&gt;kind: Pod&lt;/code&gt; in the whole cluster (i.e. in all namespaces), and after selecting them, add the label &lt;code&gt;foo: bar&lt;/code&gt; to those resources.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;policy.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-labels
  annotations:
    pod-policies.kyverno.io/autogen-controllers: none
    policies.kyverno.io/description: &amp;gt;-
      Labels are used as an important source of metadata describing objects in various ways
      or triggering other functionality. Labels are also a very basic concept and should be
      used throughout Kubernetes. This policy performs a simple mutation which adds a label
      `foo=bar` to Pods, Services, ConfigMaps, and Secrets.
spec:
  rules:
  - name: add-labels
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            foo: bar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below is a resource of &lt;code&gt;kind: Pod&lt;/code&gt;, which means it will be selected by the policy.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;resource.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below is the patchedResource, with the label &lt;code&gt;foo: bar&lt;/code&gt; added. After the mutate rule is applied to the above resource, the label &lt;code&gt;foo: bar&lt;/code&gt; is added to it and the resource gets updated; the updated resource is known as a patchedResource. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;patchedResource.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    foo: bar
spec:
  containers:
  - name: nginx
    image: nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below is the configuration file for the &lt;code&gt;test&lt;/code&gt; command. It contains the paths of the above policy, resource, and patched resource, and the results defined by the user.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;test.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: add-labels
policies:
  - add_labels.yaml
resources:
  - resource.yaml
results:
  - policy: add-labels
    rule: add-labels
    resource: myapp-pod
    patchedResource: patchedResource.yaml
    kind: Pod
    result: pass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run the below command:-&lt;br&gt;
&lt;code&gt;$ kyverno test &amp;lt;path_of_folder_containing_test.yaml_file&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Executing add-labels...
applying 1 policy to 1 resource...
│───│────────────│────────────│───────────│────────│
│ # │ POLICY     │ RULE       │ RESOURCE  │ RESULT │
│───│────────────│────────────│───────────│────────│
│ 1 │ add-labels │ add-labels │ myapp-pod │ Pass   │
│───│────────────│────────────│───────────│────────│
(base)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  For Namespaced-policy ( Policy)
&lt;/h3&gt;

&lt;p&gt;A namespaced policy is applied only to a particular namespace.&lt;br&gt;
First of all, let's read the policy. It says: select resources of &lt;code&gt;kind: Pod&lt;/code&gt; from the &lt;code&gt;testing&lt;/code&gt; namespace, and after selecting them, mutate them by adding the field below.&lt;br&gt;
      &lt;code&gt;dnsConfig:&lt;br&gt;
            options:&lt;br&gt;
              - name: ndots&lt;br&gt;
                value: "1"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;policy.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: add-ndots
  namespace: testing
  annotations:
    policies.kyverno.io/title: Add ndots
    policies.kyverno.io/category: Sample
    policies.kyverno.io/subject: Pod
    policies.kyverno.io/description: &amp;gt;-
      The ndots value controls where DNS lookups are first performed in a cluster
      and needs to be set to a lower value than the default of 5 in some cases.
      This policy mutates all Pods to add the ndots option with a value of 1.
spec:
  background: false
  rules:
  - name: add-ndots
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        spec:
          dnsConfig:
            options:
              - name: ndots
                value: "1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;resource.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    foo: bar
  namespace: testing
spec:
  containers:
  - name: nginx
    image: nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The patched resource, with&lt;br&gt;
   &lt;code&gt;dnsConfig:&lt;br&gt;
    options:&lt;br&gt;
    - name: ndots&lt;br&gt;
      value: "1"&lt;/code&gt;&lt;br&gt;
added to it:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;patchedResource.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  labels:
    foo: bar
  name: myapp-pod
  namespace: testing
spec:
  containers:
  - image: nginx:latest
    name: nginx
  dnsConfig:
    options:
    - name: ndots
      value: "1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;test.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: add-nodeselector
policies:
  - policy.yaml
resources:
  - resource.yaml
results:
  - policy: testing/add-ndots   
    rule: add-ndots
    resource: myapp-pod
    patchedResource: patchedResource.yaml
    namespace: testing
    kind: Pod
    result: pass 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;$ kyverno test &amp;lt;path&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Executing add-nodeselector...
applying 2 policies to 1 resource...
│───│───────────────────│───────────│───────────────────────│────────│
│ # │ POLICY            │ RULE      │ RESOURCE              │ RESULT │
│───│───────────────────│───────────│───────────────────────│────────│
│ 1 │ testing/add-ndots │ add-ndots │ Pod/testing/myapp-pod │ Pass   │
│───│───────────────────│───────────│───────────────────────│────────│


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NOTE:-&lt;br&gt;
The difference in the &lt;code&gt;test.yaml&lt;/code&gt; between a &lt;code&gt;Namespaced&lt;/code&gt; policy and a &lt;code&gt;ClusterPolicy&lt;/code&gt; is the value of &lt;code&gt;policy&lt;/code&gt; under the &lt;code&gt;results&lt;/code&gt; section. For a &lt;code&gt;Namespaced&lt;/code&gt; policy, the user needs to provide the policy name as &lt;code&gt;&amp;lt;namespace&amp;gt;/&amp;lt;policy_name&amp;gt;&lt;/code&gt;:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6WrdEi2b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qqw7qmg2jwvkwz6typs1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6WrdEi2b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qqw7qmg2jwvkwz6typs1.png" alt="Image description" width="338" height="65"&gt;&lt;/a&gt;&lt;br&gt;
whereas for a &lt;code&gt;ClusterPolicy&lt;/code&gt; the policy name alone is used:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tuhOJwu4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lym9wb7r285y5n30ic9i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tuhOJwu4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lym9wb7r285y5n30ic9i.png" alt="Image description" width="268" height="64"&gt;&lt;/a&gt;&lt;br&gt;
A &lt;code&gt;Namespaced&lt;/code&gt; policy is applied to resources in that particular namespace, whereas a &lt;code&gt;ClusterPolicy&lt;/code&gt; is applied across all namespaces in the cluster.&lt;/p&gt;
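&lt;p&gt;In text form, the two naming conventions look like this in the &lt;code&gt;results&lt;/code&gt; section (the names here are taken from the examples earlier in this post):&lt;/p&gt;

```yaml
results:
  # Namespaced Policy: <namespace>/<policy_name>
  - policy: testing/add-ndots
  # ClusterPolicy: just the policy name
  - policy: add-labels
```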

&lt;h3&gt;
  
  
  Challenges I faced during the project:
&lt;/h3&gt;

&lt;p&gt;1) There are 2 types of policies:&lt;br&gt;
ClusterPolicy --&amp;gt; applied to the whole cluster&lt;br&gt;
Policy --&amp;gt; applied to a particular namespace, which means it only selects resources in the same namespace as the namespaced policy.&lt;br&gt;
Initially the solution worked fine for ClusterPolicy, but it didn't support the namespaced Policy. So I had to understand the concept of namespaced policies and then add separate checks to filter resources according to the namespace of the policy.&lt;/p&gt;

&lt;p&gt;I think you all have got the basic idea of this enhancement feature for the Kyverno CLI. To learn more about the Kyverno project, go to &lt;a href="https://kyverno.io/docs/introduction/"&gt;kubectl get kyverno project&lt;/a&gt;.&lt;br&gt;
I am highly thankful to &lt;a href="https://www.linkedin.com/in/jimbugwadia/"&gt;Jim Bugwadia&lt;/a&gt; for giving me this opportunity, and once again highly thankful to my mentor &lt;a href="https://github.com/realshuting"&gt;Shuting Zhao&lt;/a&gt; for her continuous support and guidance throughout the project. Special thanks also to &lt;a href="https://github.com/vyankyGH"&gt;Vyankatesh&lt;/a&gt;, &lt;a href="https://github.com/NoSkillGirl"&gt;Pooja&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/chipzoller/"&gt;Chip Zoller&lt;/a&gt; for their continuous help. It was really amazing to work with this community. I don't know how these three months passed; it seems like it all started only a few days ago. In these three months I learned many new things from the community members.&lt;/p&gt;

&lt;p&gt;That was all from my side. Thanks for reading; I hope you liked it. If you have any doubts regarding the Kyverno project, join the &lt;a href="//k8s.slack.io/#kyverno"&gt;Slack&lt;/a&gt; channel.&lt;/p&gt;

&lt;p&gt;Resources for beginners who want to contribute to the Kyverno project:&lt;br&gt;
Kyverno --&amp;gt; &lt;a href="https://github.com/kyverno/kyverno"&gt;https://github.com/kyverno/kyverno&lt;/a&gt;&lt;br&gt;
Kyverno policies --&amp;gt; &lt;a href="https://kyverno.io/policies/"&gt;https://kyverno.io/policies/&lt;/a&gt;&lt;br&gt;
YouTube channel --&amp;gt; &lt;a href="https://www.youtube.com/c/Nirmata/videos"&gt;https://www.youtube.com/c/Nirmata/videos&lt;/a&gt;&lt;br&gt;
Golang resources --&amp;gt; &lt;a href="https://gobyexample.com/"&gt;https://gobyexample.com/&lt;/a&gt; and &lt;a href="https://zetcode.com/all/#go"&gt;https://zetcode.com/all/#go&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kyverno</category>
      <category>internship</category>
      <category>kubernetes</category>
      <category>nirmata</category>
    </item>
    <item>
      <title>k-mean clustering and its real usecase in the security domain</title>
      <dc:creator>vivek kumar sahu</dc:creator>
      <pubDate>Mon, 19 Jul 2021 12:20:59 +0000</pubDate>
      <link>https://forem.com/viveksahu26/k-mean-clustering-and-its-real-usecase-in-the-security-domain-1fon</link>
      <guid>https://forem.com/viveksahu26/k-mean-clustering-and-its-real-usecase-in-the-security-domain-1fon</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hA-snusr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pj5qtaoqm6qoqs61w16c.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hA-snusr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pj5qtaoqm6qoqs61w16c.jpg" alt="Alt Text" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The k-means algorithm is one of the oldest and most commonly used clustering algorithms. It is a great starting point for new ML enthusiasts, given the simplicity of its implementation. Let's come to the definition first.&lt;/p&gt;

&lt;p&gt;K-means clustering is one of the simplest and most popular unsupervised machine learning algorithms.&lt;/p&gt;

&lt;p&gt;Typically, unsupervised algorithms make inferences from datasets using only input vectors without referring to known, or labelled, outcomes.&lt;/p&gt;

&lt;p&gt;“The objective of K-means is simple: group similar data points together and discover underlying patterns. To achieve this objective, K-means looks for a fixed number (k) of clusters in a dataset.”&lt;/p&gt;

&lt;p&gt;You’ll define a target number k, which refers to the number of centroids you need in the dataset. A centroid is the imaginary or real location representing the center of the cluster.&lt;/p&gt;

&lt;p&gt;Each data point is allocated to one of the clusters by minimizing the in-cluster sum of squares.&lt;/p&gt;

&lt;p&gt;In other words, the K-means algorithm identifies k centroids, and then allocates every data point to the nearest cluster, while keeping the clusters as compact as possible.&lt;/p&gt;

&lt;p&gt;The ‘means’ in the K-means refers to averaging of the data; that is, finding the centroid.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the K-means algorithm works
&lt;/h2&gt;

&lt;p&gt;To process the learning data, the K-means algorithm in data mining starts with a first group of randomly selected centroids, which are used as the beginning points for every cluster, and then performs iterative (repetitive) calculations to optimize the positions of the centroids.&lt;/p&gt;

&lt;p&gt;It halts creating and optimizing clusters when either:&lt;/p&gt;

&lt;p&gt;The centroids have stabilized — there is no change in their values because the clustering has been successful.&lt;br&gt;
The defined number of iterations has been achieved.&lt;/p&gt;

&lt;p&gt;The way the k-means algorithm works is as follows:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lpE7Is49--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i995mhy1rseuxujf3gd2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lpE7Is49--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i995mhy1rseuxujf3gd2.jpg" alt="Alt Text" width="605" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Specify the number of clusters K.&lt;/li&gt;
&lt;li&gt;Initialize centroids by first shuffling the dataset and then randomly selecting K data points for the centroids without replacement.&lt;/li&gt;
&lt;li&gt;Keep iterating until there is no change to the centroids, i.e. the assignment of data points to clusters isn't changing.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In each iteration:&lt;br&gt;
Compute the sum of the squared distances between the data points and all centroids.&lt;br&gt;
Assign each data point to the closest cluster (centroid).&lt;br&gt;
Compute the centroids of the clusters by taking the average of all data points that belong to each cluster.&lt;/p&gt;
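&lt;p&gt;The steps above can be sketched in a few lines of plain Python (a minimal, hypothetical illustration with made-up sample data; for real work you would use a library such as scikit-learn):&lt;/p&gt;

```python
import random

def kmeans(points, k, max_iters=100, seed=0):
    """A minimal k-means for 2-D points; returns (centroids, labels)."""
    rng = random.Random(seed)
    # Step 2: initialize centroids by randomly picking k distinct data points.
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(max_iters):
        # Assign each point to the nearest centroid (squared Euclidean distance).
        new_labels = [
            min(range(k),
                key=lambda j: (p[0] - centroids[j][0]) ** 2
                              + (p[1] - centroids[j][1]) ** 2)
            for p in points
        ]
        # Step 3: stop when the assignments no longer change.
        if new_labels == labels:
            break
        labels = new_labels
        # Recompute each centroid as the mean of the points assigned to it.
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, labels

# Two well-separated blobs: k=2 should put each blob in its own cluster.
data = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
        (5.0, 5.0), (5.1, 5.2), (5.2, 5.1)]
centroids, labels = kmeans(data, k=2)
print(labels)  # points within the same blob share a label
```

&lt;p&gt;On this toy data the loop converges in a few iterations: the assignments stop changing once each centroid settles at the mean of its blob, which is exactly the stopping condition described above.&lt;/p&gt;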

&lt;h2&gt;
  
  
  Use-cases of K-means Algorithm:-
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Document classification
&lt;/h3&gt;

&lt;p&gt;Cluster documents into multiple categories based on tags, topics, and the content of the document. This is a very standard classification problem, and k-means is a highly suitable algorithm for it. Initial processing of the documents is needed to represent each document as a vector, using term frequency to identify commonly used terms that help classify the document. The document vectors are then clustered to help identify similarity between document groups. Here is a sample implementation of k-means for document clustering.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Delivery store optimization
&lt;/h3&gt;

&lt;p&gt;Optimize the process of goods delivery using truck drones by using a combination of k-means to find the optimal number of launch locations and a genetic algorithm to solve the truck route as a traveling salesman problem. Here is a whitepaper on the same topic.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Identifying crime localities
&lt;/h3&gt;

&lt;p&gt;With data on crimes available for specific localities in a city, the category of crime, the area of the crime, and the association between the two can give quality insight into crime-prone areas within a city or a locality. Here is an interesting paper based on crime data from Delhi FIRs.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Customer segmentation
&lt;/h3&gt;

&lt;p&gt;Clustering helps marketers improve their customer base, work on target areas, and segment customers based on purchase history, interests, or activity monitoring. Here is a white paper on how telecom providers can cluster pre-paid customers to identify patterns in terms of money spent on recharging, sending SMS, and browsing the internet. The classification helps the company target specific clusters of customers for specific campaigns.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Fantasy league stat analysis
&lt;/h3&gt;

&lt;p&gt;Analyzing player stats has always been a critical element of the sporting world, and with increasing competition, machine learning has a critical role to play here. As an interesting exercise, if you would like to create a fantasy draft team and identify similar players based on player stats, k-means can be a useful option. Check out this article for details and a sample implementation.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Insurance fraud detection
&lt;/h3&gt;

&lt;p&gt;Machine learning has a critical role to play in fraud detection, with numerous applications in automobile, healthcare, and insurance fraud detection. Utilizing historical data on fraudulent claims, it is possible to isolate new claims based on their proximity to clusters that indicate fraudulent patterns. Since insurance fraud can potentially have a multi-million dollar impact on a company, the ability to detect fraud is crucial. Check out this white paper on using clustering in automobile insurance to detect fraud.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Rideshare data analysis
&lt;/h3&gt;

&lt;p&gt;The publicly available Uber ride information dataset provides a large amount of valuable data around traffic, transit time, peak pickup localities, and more. Analyzing this data is useful not just in the context of Uber but also in providing insight into urban traffic patterns and helping us plan for the cities of the future. Here is an article with links to a sample dataset and a process for analyzing Uber data.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Cyber-profiling criminals
&lt;/h3&gt;

&lt;p&gt;Cyber-profiling is the process of collecting data from individuals and groups to identify significant correlations. The idea of cyber-profiling is derived from criminal profiles, which provide information to the investigation division to classify the types of criminals who were at the crime scene. Here is an interesting white paper on how to cyber-profile users in an academic environment based on user data preferences.&lt;/p&gt;

&lt;h3&gt;
  
  
  9. Call record detail analysis
&lt;/h3&gt;

&lt;p&gt;A call detail record (CDR) is the information captured by telecom companies during the call, SMS, and internet activity of a customer. This information provides greater insight into the customer's needs when used with customer demographics. In this article, you will understand how you can cluster customer activities over 24 hours by using the unsupervised k-means clustering algorithm, in order to understand segments of customers with respect to their usage by hour.&lt;/p&gt;

&lt;h3&gt;
  
  
  10. Automatic clustering of IT alerts
&lt;/h3&gt;

&lt;p&gt;Large enterprise IT infrastructure components such as network, storage, or database systems generate large volumes of alert messages. Because alert messages potentially point to operational issues, they must be manually screened and prioritized for downstream processes. Clustering this data can provide insight into categories of alerts and mean time to repair, and help in failure prediction. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to configure CoreDNS in Linux(Part-1)..</title>
      <dc:creator>vivek kumar sahu</dc:creator>
      <pubDate>Fri, 26 Feb 2021 07:25:40 +0000</pubDate>
      <link>https://forem.com/viveksahu26/how-to-configure-coredns-in-linux-part-1-2k79</link>
      <guid>https://forem.com/viveksahu26/how-to-configure-coredns-in-linux-part-1-2k79</guid>
      <description>&lt;p&gt;For CoreDNS installation in Linux, you can proceed further.&lt;/p&gt;

&lt;p&gt;📌Go to this Link--&amp;gt;&lt;br&gt;
&lt;a href="https://github.com/coredns/coredns/releases/tag/v1.8.3"&gt;https://github.com/coredns/coredns/releases/tag/v1.8.3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now copy this link, as shown in the below image.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1Z4Zp347--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jowb65kjbmvr8q0rbmrf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1Z4Zp347--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jowb65kjbmvr8q0rbmrf.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📌Now run the below command to download the software.&lt;br&gt;
$ wget &lt;a href="https://github.com/coredns/coredns/releases/download/v1.8.3/coredns_1.8.3_linux_amd64.tgz"&gt;https://github.com/coredns/coredns/releases/download/v1.8.3/coredns_1.8.3_linux_amd64.tgz&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WOdQPhNP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/trfc0vlvnva46bjieoya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WOdQPhNP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/trfc0vlvnva46bjieoya.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The archive is downloaded, but it is compressed (a gzipped tarball).&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---ydY8J5P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v5xxwqs14bvomc56upfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---ydY8J5P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v5xxwqs14bvomc56upfd.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📌So you need to extract the file "coredns_1.8.3_linux_amd64.tgz".&lt;br&gt;
Now, run the command to extract it:&lt;br&gt;
$ tar -xvzf coredns_1.8.3_linux_amd64.tgz&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LGB5NMCI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q1mwgknq3ptn0cf8knja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LGB5NMCI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q1mwgknq3ptn0cf8knja.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And finally, the CoreDNS binary is extracted and ready to use. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cly3NuqA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1iflfvqfmuyss41my6ox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cly3NuqA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1iflfvqfmuyss41my6ox.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>coredns</category>
      <category>linux</category>
    </item>
    <item>
      <title>How to install HELM Version (above 3) in Linux..</title>
      <dc:creator>vivek kumar sahu</dc:creator>
      <pubDate>Thu, 25 Feb 2021 10:11:35 +0000</pubDate>
      <link>https://forem.com/viveksahu26/how-to-install-helm-version-above-3-in-linux-3bf1</link>
      <guid>https://forem.com/viveksahu26/how-to-install-helm-version-above-3-in-linux-3bf1</guid>
      <description>&lt;p&gt;For Helm installation in Linux, you can proceed further.&lt;/p&gt;

&lt;p&gt;📌Go to this Link--&amp;gt; &lt;a href="https://github.com/helm/helm/releases"&gt;https://github.com/helm/helm/releases&lt;/a&gt;&lt;br&gt;
Now copy this link, as shown in the below image.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P_NmvWCe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yhricrgm0vlgiv8zusjh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P_NmvWCe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yhricrgm0vlgiv8zusjh.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📌Now run the below command to download the software.&lt;br&gt;
$ wget &lt;a href="https://get.helm.sh/helm-v3.5.2-linux-amd64.tar.gz"&gt;https://get.helm.sh/helm-v3.5.2-linux-amd64.tar.gz&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2IexBZ18--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/54m119tn3ai7s5l4vdcf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2IexBZ18--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/54m119tn3ai7s5l4vdcf.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The archive is downloaded, but it is compressed (a gzipped tarball).&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GD97vA_8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vlqwgljyuvxkb3c9kwou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GD97vA_8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vlqwgljyuvxkb3c9kwou.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📌So you need to extract the file "helm-v3.5.2-linux-amd64.tar.gz".&lt;br&gt;
Now, run the command to extract it:&lt;br&gt;
$ tar -xvzf helm-v3.5.2-linux-amd64.tar.gz&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZwbV8sqo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/byuxzpktr1ixxh9bkyif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZwbV8sqo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/byuxzpktr1ixxh9bkyif.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📌In Linux, executables need to be in a directory on your PATH, such as "/usr/bin/". &lt;br&gt;
Copy the program file ("linux-amd64/helm") into that folder ("/usr/bin/").&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1b7JPjJv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfgfklu4gr28ozhr7yaq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1b7JPjJv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfgfklu4gr28ozhr7yaq.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📌Run the command to copy:&lt;br&gt;
$ cp linux-amd64/helm /usr/bin/&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5jEEniMd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2c06fo9tf4640qocfiz6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5jEEniMd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2c06fo9tf4640qocfiz6.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📌Now you can check whether it is installed:&lt;br&gt;
$ helm version&lt;br&gt;
If the version is shown, Helm was installed successfully.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8gMYJk4z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2sy30idrar53oakxzire.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8gMYJk4z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2sy30idrar53oakxzire.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kubernetes Part-1</title>
      <dc:creator>vivek kumar sahu</dc:creator>
      <pubDate>Mon, 07 Dec 2020 17:38:43 +0000</pubDate>
      <link>https://forem.com/viveksahu26/kubernetes-part-1-1pja</link>
      <guid>https://forem.com/viveksahu26/kubernetes-part-1-1pja</guid>
      <description>&lt;h1&gt;
  
  
  &lt;strong&gt;KUBERNETES ARCHITECTURE&lt;/strong&gt;
&lt;/h1&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Master Node&lt;/strong&gt;
&lt;/h3&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Worker Node&lt;/strong&gt;
&lt;/h3&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Etcd&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--61KPftda--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/khghjj2zak8nob9p2xeh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--61KPftda--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/khghjj2zak8nob9p2xeh.png" alt="Alt Text" width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are curious to know, Why need of Kubernetes &amp;amp; Docker and their basic definations, then feel free to go through it---&amp;gt; &lt;a href="https://dev.to/viveksahu26/kubernetes-for-beginners-basic-part-why-need-of-docker-k8-s-2bgf"&gt;https://dev.to/viveksahu26/kubernetes-for-beginners-basic-part-why-need-of-docker-k8-s-2bgf&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;MASTER NODE&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;What is a Node? A node refers to a computer, PC, or virtual machine. You can think of the master node as the manager of a company and a worker node as an employee of that company. It's up to you to assign nodes as workers or masters, but keep in mind that a cluster must have at least one master node and one or more worker nodes under it.&lt;br&gt;
A cluster is a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.&lt;br&gt;
The master node is responsible for managing the state of the k8s cluster: it manages and controls the worker nodes, and it provides an environment for the control plane.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;COMPONENTS OF MASTER NODES:-&lt;/strong&gt;
&lt;/h2&gt;

&lt;h4&gt;
  
  
  1)API Server
&lt;/h4&gt;

&lt;h4&gt;
  
  
  2)Scheduler
&lt;/h4&gt;

&lt;h4&gt;
  
  
  3)Controller Manager
&lt;/h4&gt;

&lt;h4&gt;
  
  
  4)Etcd
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;           &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P0X5sk2W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/qkpqr6ww7985jrayoha5.png" alt="Alt Text" width="517" height="501"&gt;&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;1)API Server:-&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;It is known as the "kube-apiserver", a central control plane component running on the master node. Every component of the master node communicates with it. The API Server is the only master node component that talks to the etcd data store, both to read from it and to save Kubernetes cluster state information. During processing, it reads the current state of the worker nodes from etcd and matches it against the desired state. For now, think of etcd as data storage; we will cover it in detail later.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2)Scheduler (kube-scheduler):-&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As soon as new pods are created, the Scheduler's job is to assign them, along with any unscheduled pods, to a feasible node. Every container in a pod has its own resource requirements, and every pod has requirements of its own as well. According to these requirements, the scheduler searches the cluster for nodes that fulfill them; a node that meets the requirements is known as a feasible node. The question arises: how does the scheduler know the configuration of the worker nodes? Etcd contains the configuration of all worker nodes in the cluster. The scheduler asks the api-server for the node configurations, the api-server collects the data from etcd and provides it to the scheduler, and the scheduler uses this data (the configuration of the nodes) to schedule the pod onto a feasible node. &lt;/p&gt;

&lt;h4&gt;
  
  
  NOTE:-
&lt;/h4&gt;

&lt;p&gt;If the scheduler doesn't find a feasible node for a pod, the pod remains as it is until the scheduler finds one; such pods are known as unscheduled pods.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3)Controller Manager:-&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here is one example of a control loop: a thermostat in a room.&lt;/p&gt;

&lt;p&gt;When you set the temperature, that's telling the thermostat about your desired state. The actual room temperature is the current state. The thermostat acts to bring the current state closer to the desired state by turning equipment on or off.&lt;br&gt;
Controllers are control loops, or watch loops (non-terminating loops that regulate the state of a system), which run continuously across the worker nodes. They observe the current state of each node and at the same time compare the cluster's desired state (provided by the objects' configuration data) with the current state. Here, the current state refers to the tasks the worker nodes are performing. After observing, a controller sends the current-state data of each worker node to the api-server, and the api-server stores this data in etcd for future use. The controllers are responsible for making the current state come closer to the desired state.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4) Etcd:-&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Etcd is a key-value data store that holds the Kubernetes cluster state. It also stores the configuration data of every node in the cluster. New data is written to the store only by appending; existing data is never replaced.&lt;br&gt;
Note:- It is accessible only to the kube-api server. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;WORKER NODE&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Once the scheduler assigns a feasible node (here, a worker node) to a Pod, the role of the worker node begins. A worker node knows its job very well and what to do next. &lt;/p&gt;

&lt;h4&gt;
  
  
  NOTE:-
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;By the way, in Kubernetes each component knows its job very well; there is no need to tell any component what to do next. For example, as we saw above, the master node doesn't have to tell the scheduler to schedule Pods onto feasible nodes: the scheduler knows its role and starts working as soon as it sees newly created or unscheduled Pods. Similarly, worker nodes know their job as soon as they receive Pods. A worker node provides the running environment for client applications. These applications are encapsulated in Pods, which are controlled by the control plane agents running on the master node.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A Pod is the smallest scheduling unit in Kubernetes. In other words, a Pod is a collection of one or more containers, and inside these containers your application runs.&lt;/p&gt;
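As a sketch, a Pod with two containers might look like this (the names and images are purely illustrative); both containers share the Pod's network namespace, so they can reach each other on localhost:

```yaml
# Hypothetical two-container Pod: the smallest schedulable unit.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar   # illustrative name
spec:
  containers:
    - name: app            # main application container
      image: nginx:1.25
    - name: log-agent      # sidecar sharing the Pod's network
      image: busybox:1.36
      command: ["sh", "-c", "tail -f /dev/null"]
```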

&lt;h3&gt;
  
  
  &lt;strong&gt;WORKER NODE COMPONENTS&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1) Container Runtime
&lt;/h4&gt;

&lt;h4&gt;
  
  
  2) Kubelet
&lt;/h4&gt;

&lt;h4&gt;
  
  
  3) Kube-proxy
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ds6jeiNU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/6ipaxo9xyur7yhfyukzh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ds6jeiNU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/6ipaxo9xyur7yhfyukzh.png" alt="Alt Text" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1) Container Runtime&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The container runtime is the software responsible for running containers. Although Kubernetes is described as a "container orchestration engine" or "container management engine", it does not handle containers directly. To manage a container's lifecycle, Kubernetes requires a container runtime on the node where a Pod and its containers are to be scheduled. Container runtimes supported by Kubernetes include:&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Docker&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Although Docker is a container platform that itself uses containerd as its runtime, it is the most popular container runtime used with Kubernetes.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;CRI-O&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;A lightweight container runtime for Kubernetes, it also supports Docker image registries.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;containerd&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;A simple and portable container runtime providing robustness.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;frakti&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;A hypervisor-based container runtime for Kubernetes.&lt;/p&gt;

&lt;h4&gt;
  
  
  NOTE:-
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;In simple language, Kubernetes doesn't have the capability to run containers itself. It needs tools to run them, and those tools are known as container runtimes. Kubernetes' job is to manage these containers.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2) Kubelet&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The kubelet is an agent that runs on each node and communicates with the control plane components (i.e., the api-server) on the master node. It receives instructions from the api-server and interacts with the container runtime on the node to run the containers that belong to the Pod.&lt;/p&gt;

&lt;h4&gt;
  
  
  NOTE:-
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Once the scheduler has assigned a Pod to a feasible node in the cluster, the worker node component, i.e. the kubelet, starts its job.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XXzmJgbH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/bflzlmnbeaz3f0grpgqo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XXzmJgbH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/bflzlmnbeaz3f0grpgqo.png" alt="Alt Text" width="800" height="313"&gt;&lt;/a&gt;&lt;br&gt;
The kubelet also monitors the health of the Pod's running containers. It connects to the container runtime through the Container Runtime Interface (CRI). &lt;br&gt;
The CRI implements two services:&lt;br&gt;
1) &lt;strong&gt;Image Service&lt;/strong&gt;:- responsible for all image-related operations.&lt;br&gt;
2) &lt;strong&gt;Runtime Service&lt;/strong&gt;:- responsible for all Pod- and container-related operations.&lt;/p&gt;
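The health monitoring mentioned above is typically expressed as probes in the Pod spec, which the kubelet executes against the running container. A minimal sketch (the name, port, and path here are assumed for illustration):

```yaml
# Hypothetical Pod with a liveness probe: the kubelet polls
# the /healthz endpoint; on repeated failure it asks the
# container runtime to restart the container.
apiVersion: v1
kind: Pod
metadata:
  name: probed-app         # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25
      livenessProbe:
        httpGet:
          path: /healthz   # assumed health endpoint
          port: 8080       # assumed container port
        initialDelaySeconds: 5
        periodSeconds: 10
```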

&lt;h3&gt;
  
  
  &lt;strong&gt;3) Kube-Proxy&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Kubernetes network proxy runs on each node. It is responsible for dynamic updates and maintenance of all networking rules on the node.&lt;/p&gt;
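These networking rules are what make a Service address reach its backend Pods. As a sketch (the names are illustrative), kube-proxy on every node programs forwarding rules so that traffic to this Service's port 80 is load-balanced across the Pods matching its selector:

```yaml
# Hypothetical Service: kube-proxy translates this object into
# per-node forwarding rules (e.g. iptables or IPVS).
apiVersion: v1
kind: Service
metadata:
  name: web-svc            # illustrative name
spec:
  selector:
    app: web               # Pods this Service fronts
  ports:
    - port: 80             # port clients connect to
      targetPort: 8080     # container port traffic is sent to
```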

</description>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
