<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Sachin Kumar</title>
    <description>The latest articles on Forem by Sachin Kumar (@hackcoderr).</description>
    <link>https://forem.com/hackcoderr</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F615030%2Fc1853404-61b5-4c7c-b2d8-1c7c1ea9cdfe.png</url>
      <title>Forem: Sachin Kumar</title>
      <link>https://forem.com/hackcoderr</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/hackcoderr"/>
    <language>en</language>
    <item>
      <title>Deploy an ML model inside docker container</title>
      <dc:creator>Sachin Kumar</dc:creator>
      <pubDate>Fri, 28 May 2021 14:06:41 +0000</pubDate>
      <link>https://forem.com/hackcoderr/deploy-an-ml-model-inside-docker-container-521c</link>
      <guid>https://forem.com/hackcoderr/deploy-an-ml-model-inside-docker-container-521c</guid>
      <description>&lt;p&gt;Welcome back to my another article. Here I will share you an idea how you can train &amp;amp; deploy your machine learning model inside docker container. So let's jumped directly on the installation part of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Installation
&lt;/h2&gt;

&lt;p&gt;To install Docker, I'm using an AWS instance, but you can use any OS you like. If you're doing the same, launch your instance and follow the steps below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note :&lt;/strong&gt; &lt;em&gt;If you are going to use RHEL8, you first have to run the script below, because RHEL8 doesn't ship Docker by default.&lt;/em&gt; On other distributions you can skip it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF &amp;gt;&amp;gt; /etc/yum.repos.d/docker.repo
[docker]
baseurl=https://download.docker.com/linux/centos/7/x86_64/stable/
gpgcheck=0
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can move ahead with the installation, so run the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yum install docker -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the case of &lt;em&gt;RHEL8&lt;/em&gt;, you have to run the command below instead; the command above doesn't work on RHEL8.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yum install docker-ce --nobest -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here &lt;em&gt;docker-ce&lt;/em&gt; is the package name, and &lt;em&gt;--nobest&lt;/em&gt; tells the package manager to accept a workable version even if the very latest one has unresolved dependencies.&lt;/p&gt;

&lt;p&gt;Let's check its version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@dockerhost ~]# docker --version
Docker version 20.10.4, build d3cb89e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now enable and start the Docker service with the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl enable --now docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then check its status with the command below; you should see output something like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@dockerhost ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-05-28 03:02:43 UTC; 5min ago
     Docs: https://docs.docker.com
  Process: 4168 ExecStartPre=/usr/libexec/docker/docker-setup-runtimes.sh (code=exited, status=0/SUCCESS)
  Process: 4158 ExecStartPre=/bin/mkdir -p /run/docker (code=exited, status=0/SUCCESS)
 Main PID: 4173 (dockerd)
    Tasks: 7
   Memory: 37.6M
   CGroup: /system.slice/docker.service
           └─4173 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --default-ulimit nofile=1024:4...

May 28 03:02:42 dockerhost dockerd[4173]: time="2021-05-28T03:02:42.986402644Z" level=info msg="scheme \"unix\" no...=grpc
May 28 03:02:42 dockerhost dockerd[4173]: time="2021-05-28T03:02:42.986690523Z" level=info msg="ccResolverWrapper:...=grpc
May 28 03:02:42 dockerhost dockerd[4173]: time="2021-05-28T03:02:42.986968399Z" level=info msg="ClientConn switchi...=grpc
May 28 03:02:43 dockerhost dockerd[4173]: time="2021-05-28T03:02:43.033773756Z" level=info msg="Loading containers...art."
May 28 03:02:43 dockerhost dockerd[4173]: time="2021-05-28T03:02:43.193735930Z" level=info msg="Default bridge (do...ress"
May 28 03:02:43 dockerhost dockerd[4173]: time="2021-05-28T03:02:43.246294570Z" level=info msg="Loading containers: done."
May 28 03:02:43 dockerhost dockerd[4173]: time="2021-05-28T03:02:43.260680982Z" level=info msg="Docker daemon" com....10.4
May 28 03:02:43 dockerhost dockerd[4173]: time="2021-05-28T03:02:43.261199789Z" level=info msg="Daemon has complet...tion"
May 28 03:02:43 dockerhost systemd[1]: Started Docker Application Container Engine.
May 28 03:02:43 dockerhost dockerd[4173]: time="2021-05-28T03:02:43.283719170Z" level=info msg="API listen on /run...sock"
Hint: Some lines were ellipsized, use -l to show in full.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; In the 4th line of the above output, you can see that Docker is in the running state.&lt;/p&gt;

&lt;p&gt;Now we can move on to deploying the ML model. Let's pull a &lt;strong&gt;centos&lt;/strong&gt; Docker image so we can build our own image on top of it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull centos:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To list the images, run the command below; you will see all your Docker images like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@dockerhost ~]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
centos       latest    300e315adb2f   5 months ago   209MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Train a ML Model
&lt;/h2&gt;

&lt;p&gt;Now you have two approaches to deploying an ML model inside a container: you can go through a manual approach, or you can build your own image containing the trained model. I am going with the second approach, so let's see how to deploy it.&lt;/p&gt;

&lt;p&gt;Now train your ML model with Jupyter or Colab. I have written a few lines of code to train a linear regression model, so you can take the idea from here and deploy your desired model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
import joblib
dataset= pd.read_csv('/content/salary.csv')
Y = dataset["Salary"]
X = dataset["YearsExperience"]
X = np.array(X).reshape(30,1)
Y = np.array(Y).reshape(30,1)
mind = LinearRegression()
mind.fit(X,Y)
print("Weight Is :",mind.coef_)
print("Bias Is : ",mind.intercept_)
joblib.dump(mind,"model_salary_predict.pk1")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code above, &lt;code&gt;joblib&lt;/code&gt; is used to save the model to a file so that we can reuse it in the prediction application. Let's see how I will use it.&lt;/p&gt;
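&lt;p&gt;To see the save-and-load round trip in isolation, here is a minimal, self-contained sketch (the synthetic data and file name are my own, just for illustration; it assumes scikit-learn and joblib are installed):&lt;/p&gt;

```python
import os
import tempfile

import numpy as np
import joblib
from sklearn.linear_model import LinearRegression

# Train a tiny model on synthetic data: salary = 2 * experience + 1
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 2 * X.ravel() + 1
model = LinearRegression().fit(X, y)

# Persist the fitted model, then load it back as a fresh object
path = os.path.join(tempfile.mkdtemp(), "model_demo.pk1")
joblib.dump(model, path)
loaded = joblib.load(path)

# The reloaded model predicts the same values as the original
print(float(loaded.predict([[5.0]])[0]))
```

&lt;p&gt;The file written by &lt;code&gt;joblib.dump&lt;/code&gt; is simply loaded back with &lt;code&gt;joblib.load&lt;/code&gt;, which is exactly what the prediction script relies on.&lt;/p&gt;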

&lt;p&gt;Now create an interactive script so that, after the user types a value, the application shows the prediction to the client. I built a basic prompt and load the &lt;code&gt;model_salary_predict.pk1&lt;/code&gt; file in which I saved my model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print("---------------------------------------------------------------------")
print("           Model:Salary Prediction from year of experience           ")
print("---------------------------------------------------------------------")
exp=input("What is the experience: ")
exp=float(exp)
mind=jb.load('model_salary_predict.pk1')
salary = mind.predict([[exp]])
print("Salary will be:",salary)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the ML model is ready to use, so save these scripts with the &lt;code&gt;.py&lt;/code&gt; extension.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build a Dockerfile.
&lt;/h2&gt;

&lt;p&gt;Before building the Docker image, copy your model's files onto your Docker host. You can use the &lt;code&gt;scp&lt;/code&gt; command to copy them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo scp -i &amp;lt;key.pem&amp;gt; &amp;lt;files_location&amp;gt; username@ip:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After copying these files onto the Docker host, the output will look like this:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RAVNW2OR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ymjua5bp7oob4v7wfeoh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RAVNW2OR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ymjua5bp7oob4v7wfeoh.jpg" alt="Alt Text" width="797" height="99"&gt;&lt;/a&gt;&lt;br&gt;
Now it's time to create our own Dockerfile with the trained ML model, so go back to your Docker host and create it.&lt;/p&gt;

&lt;p&gt;Create a Dockerfile with your favorite editor and write the code below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM centos:latest
RUN yum install python3 vim ncurses -y &amp;amp;&amp;amp; \
    python3 -m pip install --upgrade --force-reinstall pip


RUN pip install pandas scikit-learn
RUN mkdir ml
COPY linearregression.py salary.csv salary_predictor.py ml/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After writing the Dockerfile, run the &lt;code&gt;docker build&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -t &amp;lt;containername&amp;gt;:&amp;lt;version&amp;gt; .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running this command, the output will look like this:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gprxKHE8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lsfw2oqrexywmnr7fa3z.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gprxKHE8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lsfw2oqrexywmnr7fa3z.jpg" alt="Alt Text" width="795" height="95"&gt;&lt;/a&gt;&lt;br&gt;
Now everything is ready to work so let's see.&lt;/p&gt;
&lt;h2&gt;
  
  
  Launch a Docker container
&lt;/h2&gt;

&lt;p&gt;Now you can launch a container from the image you built previously.&lt;/p&gt;

&lt;p&gt;Launch the docker container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -it --name &amp;lt;conatinername&amp;gt; &amp;lt;imagename&amp;gt;:&amp;lt;version&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--smsYq_-9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1dumo7ckefvty0crgtpe.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--smsYq_-9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1dumo7ckefvty0crgtpe.jpg" alt="Alt Text" width="800" height="62"&gt;&lt;/a&gt;&lt;br&gt;
Now you are inside your Docker container, so let's look at our ML model.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xNGLh-io--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fceof9dyrv626btqs8ex.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xNGLh-io--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fceof9dyrv626btqs8ex.jpg" alt="Alt Text" width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After changing into the &lt;code&gt;ml&lt;/code&gt; directory, run the model with &lt;code&gt;python3&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 linearregression.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sMbly_Lg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/54r78cibzep46agib51t.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sMbly_Lg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/54r78cibzep46agib51t.jpg" alt="Alt Text" width="800" height="65"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's see the prediction.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IJRfAL-V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i02j529e7j3dz0tujc5d.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IJRfAL-V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i02j529e7j3dz0tujc5d.jpg" alt="Alt Text" width="800" height="152"&gt;&lt;/a&gt;&lt;br&gt;
Hopefully you enjoyed it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Here I have tried to give you an idea of how you can deploy your machine learning model inside a Docker container and enjoy the power of containerization. &lt;br&gt;
If you have any doubts, feel free to leave a comment in the comment section.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>machinelearning</category>
      <category>docker</category>
      <category>aws</category>
    </item>
    <item>
      <title>Heart diseases prediction app creation using cloud platforms &amp; MLOps tools</title>
      <dc:creator>Sachin Kumar</dc:creator>
      <pubDate>Fri, 14 May 2021 14:15:42 +0000</pubDate>
      <link>https://forem.com/hackcoderr/heart-diseases-prediction-app-creation-using-cloud-platforms-mlops-tools-15gp</link>
      <guid>https://forem.com/hackcoderr/heart-diseases-prediction-app-creation-using-cloud-platforms-mlops-tools-15gp</guid>
      <description>&lt;p&gt;Welcome back to my another projet based staff. Here I am going to discuss all this project from the very beginning to the end. So Hopefully, you will really enjoy it. So let's get started.&lt;/p&gt;

&lt;p&gt;As is clear from the name &lt;strong&gt;Heart diseases prediction app creation using cloud platforms &amp;amp; MLOps tools&lt;/strong&gt;, I am going to create a health-related application with an industry approach. Let's go step by step through everything needed to deploy this project in a production environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I am going to perform
&lt;/h3&gt;

&lt;p&gt;I will create the architecture below to deploy my web app, so let's go step by step.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuy6hsi5ak1i7b1z7nl2x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuy6hsi5ak1i7b1z7nl2x.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Note:
&lt;/h3&gt;

&lt;p&gt;Here is my Git repository, where you can find all the related code and files.&lt;br&gt;
&lt;a href="https://github.com/hackcoderr/heart-diseases-predictor" rel="noopener noreferrer"&gt;https://github.com/hackcoderr/heart-diseases-predictor&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Required knowledge
&lt;/h2&gt;

&lt;p&gt;To create this project, having good knowledge of the following tools and platforms is a prerequisite.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform&lt;/li&gt;
&lt;li&gt;Cloud Platforms

&lt;ul&gt;
&lt;li&gt;Amazon Web Services (AWS)&lt;/li&gt;
&lt;li&gt;Microsoft Azure&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Ansible&lt;/li&gt;
&lt;li&gt;Kubernetes&lt;/li&gt;
&lt;li&gt;Machine Learning&lt;/li&gt;
&lt;li&gt;Git and Github&lt;/li&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;Jenkins&lt;/li&gt;
&lt;li&gt;flask&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All these tools and platforms will help us automate this project, so let's look at each of them, and why we are using them here, one by one, starting with Terraform.&lt;/p&gt;
&lt;h1&gt;
  
  
  Terraform
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; is an open-source infrastructure as a code software tool that provides a consistent CLI workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files.&lt;/p&gt;
&lt;h3&gt;
  
  
  Why I'm using Terraform here.
&lt;/h3&gt;

&lt;p&gt;As mentioned in the introduction above, Terraform is used to manage cloud services. I want to use some cloud platforms (AWS, Azure, and GCP) here so that I can create the reproducible infrastructure shown below.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs37k5nv4xqs2m4yxjbd2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs37k5nv4xqs2m4yxjbd2.png" alt="Terraform  infrastructure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now it's time to install Terraform, so let's see its installation.&lt;/p&gt;
&lt;h3&gt;
  
  
  Install terraform.
&lt;/h3&gt;

&lt;p&gt;If you're using a Linux OS as your Terraform workstation, run the commands below; otherwise, follow the link and install Terraform for your OS.&lt;br&gt;
&lt;a href="https://www.terraform.io/downloads.html" rel="noopener noreferrer"&gt;https://www.terraform.io/downloads.html&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install wget -y
sudo wget https://releases.hashicorp.com/terraform/0.15.3/terraform_0.15.3_linux_amd64.zip 
sudo yum install unzip -y
sudo unzip terraform_0.15.3_linux_amd64.zip 
sudo mv terraform /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now check the Terraform version with the &lt;code&gt;terraform -version&lt;/code&gt; command.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fii1264fa9h3ldzffei1l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fii1264fa9h3ldzffei1l.png" alt="terraform -version"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;⚠️ Hopefully it's now clear what I am going to do with the help of Terraform, as shown in the diagram above. I'm going to use two cloud platforms (AWS and Azure), so let's start with AWS and then move on to Azure.&lt;/p&gt;

&lt;p&gt;Before going onward, let me create a workspace where I will keep everything related to this project.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fityumun4ssxlnk14xe3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fityumun4ssxlnk14xe3k.png" alt="workspace"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Amazon Web Services (AWS)
&lt;/h1&gt;

&lt;p&gt;Amazon Web Services is an online platform that provides scalable and cost-effective cloud computing solutions. It is a broadly adopted cloud platform that offers several on-demand operations, such as compute power, database storage, and content delivery, to help businesses scale and grow.&lt;/p&gt;

&lt;p&gt;If you want to know more about it, visit the link below.&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Amazon_Web_Services" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Amazon_Web_Services&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  AWS IAM
&lt;/h2&gt;

&lt;p&gt;AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.&lt;/p&gt;

&lt;p&gt;We will need an &lt;code&gt;access key&lt;/code&gt; and &lt;code&gt;secret key&lt;/code&gt; so that Terraform can create the VPC and launch AWS instances; that's why we have to create an &lt;strong&gt;AWS IAM user&lt;/strong&gt; with the &lt;code&gt;AmazonVPCFullAccess&lt;/code&gt; and &lt;code&gt;AmazonEC2FullAccess&lt;/code&gt; policies. Then download your IAM credentials file.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk06uaahwv0oztqjkos59.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk06uaahwv0oztqjkos59.png" alt="AWS IAM"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Install AWS CLI
&lt;/h3&gt;

&lt;p&gt;Now install the AWS CLI on your Terraform workstation; it will help with creating an AWS profile and other tasks. If you are using Linux, run the commands below; for other OSes, visit the link.&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating AWS Profile
&lt;/h3&gt;

&lt;p&gt;Now you can easily create an AWS CLI profile, which we will reference in the &lt;code&gt;aws.tf&lt;/code&gt; file. Let's see.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First of all, log in with the AWS CLI.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;After running the above command, enter the &lt;code&gt;access key&lt;/code&gt; &amp;amp; &lt;code&gt;secret key&lt;/code&gt; that you downloaded while creating the &lt;strong&gt;AWS IAM user&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F984ec7wjbdv082nsoojg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F984ec7wjbdv082nsoojg.png" alt="aws configure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now run the command below to create a named profile, and again enter your access and secret keys.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure --profile profilename
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbbtvxpj91ste4oocdxc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbbtvxpj91ste4oocdxc.png" alt="aws profile"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;⚠️ You can also check your profile with the help of the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure list [--profile profile-name]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now it's time to move on to the Terraform code, so set up your workspace.&lt;br&gt;
Note: I am keeping the Terraform files in the directory below for easy understanding, so you can follow the same layout.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/root/hdp-project/terraform/aws/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create an &lt;code&gt;aws.tf&lt;/code&gt; file inside the directory mentioned above and write the code below.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "ap-south-1"
  profile = "hackcoderr"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here you can set any &lt;code&gt;region&lt;/code&gt; in place of &lt;code&gt;ap-south-1&lt;/code&gt; according to your needs, and give your own profile name instead of mine, &lt;code&gt;hackcoderr&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Initializing terraform code
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;terraform init&lt;/code&gt; command initializes a working directory containing Terraform configuration files. It is the first command you should run after writing a new Terraform configuration or cloning an existing one from version control, and it is safe to run it multiple times.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hs1tmttqc7h73e79nrp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hs1tmttqc7h73e79nrp.png" alt="terraform-init"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Amazon VPC
&lt;/h3&gt;

&lt;p&gt;Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined. This virtual network closely resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "vpc" {
  cidr_block       = "192.168.0.0/16"
  instance_tenancy = "default"
  enable_dns_support   = "true"
  enable_dns_hostnames = "true"
  tags = {
    Name = "aws-heart-disease-predictor-vpc"
    Environment = "Production"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code above, you can choose the Classless Inter-Domain Routing &lt;strong&gt;(CIDR)&lt;/strong&gt; block range as desired; if you don't want &lt;strong&gt;DNS&lt;/strong&gt; support, set &lt;code&gt;enable_dns_support&lt;/code&gt; to &lt;code&gt;false&lt;/code&gt;, and add any &lt;code&gt;tags&lt;/code&gt; you want.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojyuseglxaya894ht3f9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojyuseglxaya894ht3f9.png" alt="AWS VPC"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating subnet
&lt;/h3&gt;

&lt;p&gt;A subnetwork, or subnet, is a logical subdivision of an IP network; the practice of dividing a network into two or more networks is called subnetting. AWS provides two types of subnets: public, which allows the internet to reach the machine, and private, which is hidden from the internet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_subnet" "subnet-1a" {
  vpc_id     = aws_vpc.vpc.id
  cidr_block = "192.168.0.0/24"
  availability_zone = "ap-south-1a"
  map_public_ip_on_launch = "true"

  tags = {
    Name = "aws-heart-disease-predictor-subnet"
    Environment = "Production"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here the &lt;code&gt;CIDR&lt;/code&gt; range must fall within your &lt;strong&gt;VPC CIDR&lt;/strong&gt; range, otherwise it won't work. &lt;code&gt;map_public_ip_on_launch&lt;/code&gt; assigns a public IP to an instance at launch, and you can choose any &lt;code&gt;availability_zone&lt;/code&gt; available in your selected &lt;code&gt;region&lt;/code&gt;. You can also give tags so the subnet is easy to recognize later.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzir7lz04482x878t4ie2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzir7lz04482x878t4ie2.png" alt="AWS subnet"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Creating Internet Gateway
&lt;/h3&gt;

&lt;p&gt;An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.vpc.id

  tags = {
    Name = "aws-heart-disease-predictor-internet-gateway"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above code creates the internet gateway. You need to specify the VPC in which to create it, and you can give it a name using the tags block.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodkofru6ov119mm5qidg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodkofru6ov119mm5qidg.png" alt="internet gateway"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Creating route table
&lt;/h3&gt;

&lt;p&gt;A routing table contains a set of rules, called routes, that are used to determine where network traffic from your subnet or gateway is directed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table" "route_table" {
  vpc_id = aws_vpc.vpc.id

  route {

gateway_id = aws_internet_gateway.gw.id
    cidr_block = "0.0.0.0/0"
  }

    tags = {
    Name = "aws-heart-disease-predictor-route-table"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You need to create a route table for the internet gateway created above. Here I am allowing the full IP range (0.0.0.0/0) so my EC2 instances can reach the internet. The &lt;code&gt;vpc_id&lt;/code&gt; allocates the route table to the respective VPC, and you can name the route table using the tags block.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97k3npq2td2unofqt2fs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97k3npq2td2unofqt2fs.png" alt="Route table"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Route Table Association To Subnets
&lt;/h3&gt;

&lt;p&gt;We need to associate the route table created for the internet gateway with the respective subnets inside the VPC.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Route Table Association
resource "aws_route_table_association" "route-association" {
  subnet_id      = aws_subnet.subnet-1a.id
  route_table_id = aws_route_table.route_table.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You need to specify which subnets should be exposed to the public internet. A subnet associated with the internet gateway's route table becomes a public subnet; a subnet without that association stays private. Instances launched in a private subnet cannot be reached from outside, since they have no public IP and no route through the internet gateway. If you don't specify a route table in the association block, the subnet falls back to the VPC's main route table, so to take your EC2 instances to the public world you must reference the route table above. Which IP range the instances may reach is up to you; here I have given 0.0.0.0/0, meaning the instances can access anything.&lt;/p&gt;
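&lt;p&gt;To see why 0.0.0.0/0 acts as the default route, note that the /0 prefix contains every IPv4 address; a quick check with the standard &lt;code&gt;ipaddress&lt;/code&gt; module:&lt;/p&gt;

```python
import ipaddress

# 0.0.0.0/0 is the default route: it contains every IPv4 address,
# so instances in the associated subnet can reach any destination.
default_route = ipaddress.ip_network("0.0.0.0/0")

print(ipaddress.ip_address("8.8.8.8") in default_route)       # True
print(ipaddress.ip_address("192.168.0.10") in default_route)  # True
print(default_route.num_addresses)                            # 4294967296
```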

&lt;h3&gt;
  
  
  Creating Security Group
&lt;/h3&gt;

&lt;p&gt;A security group acts as a virtual firewall for your EC2 instances to control incoming and outgoing traffic. If you don’t specify a security group, Amazon EC2 uses the default security group. You can add rules to each security group that allows traffic to or from its associated instances.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "SG" {
  name = "Heart-SG"
  vpc_id = "${aws_vpc.vpc.id}"
  ingress {
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      cidr_blocks = ["0.0.0.0/0"]
  }

 egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags ={
    Environment = "Production"
    Name= "aws-heart-disease-predictor-SG"
  }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above creates a security group, which works as a firewall: you define which traffic may &lt;code&gt;ingress&lt;/code&gt; &amp;amp; &lt;code&gt;egress&lt;/code&gt;. Here I allow all traffic: &lt;code&gt;protocol = "-1"&lt;/code&gt; means every protocol (with &lt;code&gt;from_port = 0&lt;/code&gt; and &lt;code&gt;to_port = 0&lt;/code&gt; the port range is effectively unrestricted), and &lt;code&gt;cidr_blocks = ["0.0.0.0/0"]&lt;/code&gt; matches every IP, so both inbound and outbound traffic are wide open. You can name the security group with the tags block.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxj8fvzp5oc8upv94xcgm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxj8fvzp5oc8upv94xcgm.png" alt="aws sg"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Creating code for AWS Instances
&lt;/h3&gt;

&lt;p&gt;An EC2 instance is a virtual server in Amazon Web Services terminology; EC2 stands for Elastic Compute Cloud. It is a web service through which an AWS subscriber can request and provision compute servers in the AWS cloud, and AWS provides multiple instance types for users' respective business needs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "Ansible_Controller_Node" {
  ami           = "ami-0a9d27a9f4f5c0efc"
  instance_type = "t2.micro"
  subnet_id = "${aws_subnet.subnet-1a.id}"
  vpc_security_group_ids = ["${aws_security_group.SG.id}"]
  key_name = "key"
 tags ={
    Environment = "${var.environment_tag}"
    Name= "Ansible_Controller_Node"
  }
}


resource "aws_instance" "K8S_Master_Node" {
  ami           = "ami-04bde106886a53080"
  instance_type = "t2.medium"
  subnet_id = "${aws_subnet.subnet-1a.id}"
  vpc_security_group_ids = ["${aws_security_group.SG.id}"]
  key_name = "key"
 tags ={
    Environment = "${var.environment_tag}"
    Name= "K8S_Master_Node"
  }

}
resource "aws_instance" "K8S_Slave1_Node" {
  ami           = "ami-04bde106886a53080"
  instance_type = "t2.medium"
  subnet_id = "${aws_subnet.subnet-1a.id}"
  vpc_security_group_ids = ["${aws_security_group.SG.id}"]
  key_name = "key"
 tags ={
    Environment = "${var.environment_tag}"
    Name= "K8S_Slave1_Node"
  }

}
resource "aws_instance" "K8S_Slave2_Node" {
  ami           = "ami-04bde106886a53080"
  instance_type = "t2.medium"
  subnet_id = "${aws_subnet.subnet-1a.id}"
  vpc_security_group_ids = ["${aws_security_group.SG.id}"]
  key_name = "key"
 tags ={
    Environment = "${var.environment_tag}"
    Name= "K8S_Slave2_Node"
  }

}
resource "aws_instance" "JenkinsNode" {
  ami           = "ami-0a9d27a9f4f5c0efc"
  instance_type = "t2.micro"
  subnet_id = "${aws_subnet.subnet-1a.id}"
  vpc_security_group_ids = ["${aws_security_group.SG.id}"]
  key_name = "key"
 tags ={
    Environment = "${var.environment_tag}"
    Name= "JenkinsNode"
  }

}

resource "aws_instance" "DockerNode" {
  ami           = "ami-0a9d27a9f4f5c0efc"
  instance_type = "t2.micro"
  subnet_id = "${aws_subnet.subnet-1a.id}"
  vpc_security_group_ids = ["${aws_security_group.SG.id}"]
  key_name = "key"
 tags ={
    Environment = "${var.environment_tag}"
    Name= "DockerNode"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above launches the EC2 instances; choose the &lt;code&gt;ami&lt;/code&gt; and &lt;code&gt;instance_type&lt;/code&gt; according to your needs and write &lt;code&gt;tags&lt;/code&gt; as you want. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If you want to see complete code at a time then go through my git repo.&lt;br&gt;
&lt;a href="https://github.com/hackcoderr/heart-diseases-predictor/blob/master/terraform/aws/aws.tf" rel="noopener noreferrer"&gt;https://github.com/hackcoderr/heart-diseases-predictor/blob/master/terraform/aws/aws.tf&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Build and deploy the infrastructure
&lt;/h3&gt;

&lt;p&gt;With your Terraform template created, the first step is to initialize Terraform. This step ensures that Terraform has all the prerequisites to build your template in AWS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to have Terraform review and validate the template. This step compares the requested resources to the state information saved by Terraform and then outputs the planned execution. The AWS resources aren't created at this point.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything looks correct and you're ready to build the infrastructure in AWS, apply the template in Terraform.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once Terraform completes, your VM infrastructure is ready.&lt;/p&gt;

&lt;h1&gt;
  
  
  Microsoft Azure
&lt;/h1&gt;

&lt;p&gt;Microsoft Azure is also a public cloud provider, offering resources and services much like AWS. Hopefully you already have an idea about it; if you want to learn more, visit the link below. &lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Microsoft_Azure" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Microsoft_Azure&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Install Azure CLI
&lt;/h3&gt;

&lt;p&gt;Here we also have to install the Azure CLI so that Terraform can authenticate and run the Terraform code against Azure. If you're using RHEL, CentOS, or Fedora, run the commands below; otherwise follow this &lt;a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
echo -e "[azure-cli]
name=Azure CLI
baseurl=https://packages.microsoft.com/yumrepos/azure-cli
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc" | sudo tee /etc/yum.repos.d/azure-cli.repo
sudo dnf install azure-cli -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So let's check the Azure CLI version just for confirmation.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1hnuq67fsnv20lmpdd8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1hnuq67fsnv20lmpdd8.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Login with Azure through CLI
&lt;/h3&gt;

&lt;p&gt;When working with Terraform, we have to supply Azure credentials for the Azure provider so that it can log in to Azure. There are several ways to authenticate, and the Azure CLI is the one I am going to use. So let's move ahead and log in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsifhg2y4g9ra7ppy4yv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsifhg2y4g9ra7ppy4yv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
When you run the above command, instructions appear (highlighted in yellow). Browse to the URL shown (underlined in red in the screenshot), and in the window that pops up, enter the given code. Your Azure credentials then appear in the CLI. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If you have more than one subscription, you can select one with the command below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az account set --subscription "My Demos"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now it's time to move on to writing the Terraform code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure the Microsoft Azure Provider
&lt;/h3&gt;

&lt;p&gt;The provider section tells Terraform to use the Azure provider. It uses your Azure credentials, such as &lt;code&gt;subscription_id&lt;/code&gt;, &lt;code&gt;client_id&lt;/code&gt;, &lt;code&gt;client_secret&lt;/code&gt;, and &lt;code&gt;tenant_id&lt;/code&gt;, behind the scenes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "azurerm" {
    features {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create a resource group
&lt;/h3&gt;

&lt;p&gt;A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_resource_group" "hdp-rg" {
    name     = "Azure-HDP-ResourceGroup"
    location = "Central India"

    tags = {
        Name = "Azure-HDP-RG"
        environment = "Production"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above section creates a resource group named &lt;code&gt;Azure-HDP-ResourceGroup&lt;/code&gt; in the &lt;code&gt;Central India&lt;/code&gt; location; adjust both to suit your needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffql45jzbdwyju7uub0pl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffql45jzbdwyju7uub0pl.png" alt="hdp-rg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a virtual network
&lt;/h3&gt;

&lt;p&gt;A virtual network is Azure's equivalent of an AWS VPC, so let's look at the template code for it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_virtual_network" "hdp-vnet" {
    name                = "Azure-HDP-Vnet"
    address_space       = ["192.168.0.0/16"]
    location            = azurerm_resource_group.hdp-rg.location
    resource_group_name = azurerm_resource_group.hdp-rg.name

    tags = {
        Name = "Azure-HDP-VNet"
        environment = "Production"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above section creates a virtual network named &lt;code&gt;Azure-HDP-Vnet&lt;/code&gt; in the &lt;code&gt;192.168.0.0/16&lt;/code&gt; address space.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmmg5xvy60otmojxptt4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmmg5xvy60otmojxptt4.png" alt="VNet"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create subnet
&lt;/h3&gt;

&lt;p&gt;An Azure subnet works like an AWS subnet, so let's go straight to the code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_subnet" "hdp-subnet" {
    name                 = "Azure-HDP-Subnet"
    resource_group_name  = azurerm_resource_group.hdp-rg.name
    virtual_network_name = azurerm_virtual_network.hdp-vnet.name
    address_prefixes       = ["192.168.0.0/24"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above section creates a subnet named &lt;code&gt;Azure-HDP-Subnet&lt;/code&gt; in the &lt;code&gt;Azure-HDP-Vnet&lt;/code&gt; virtual network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4q98puxfomtvly55goxi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4q98puxfomtvly55goxi.png" alt="azure-subnet"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create public IP address
&lt;/h3&gt;

&lt;p&gt;To access resources across the internet, create and assign a public IP address to your VM. Since I'm launching 3 VMs, I need 3 public IPs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_public_ip" "hdp-publicip-1" {
    name                         = "Azure-HDP-PublicIP-1"
    location                     = azurerm_resource_group.hdp-rg.location
    resource_group_name          = azurerm_resource_group.hdp-rg.name
    allocation_method            = "Dynamic"

    tags = {
        Name = "HDP-Public-IP-1"
        environment = "Production"
    }
}

resource "azurerm_public_ip" "hdp-publicip-2" {
    name                         = "Azure-HDP-PublicIP-2"
    location                     = azurerm_resource_group.hdp-rg.location
    resource_group_name          = azurerm_resource_group.hdp-rg.name
    allocation_method            = "Dynamic"

    tags = {
        Name = "HDP-Public-IP-2"
        environment = "Production"
    }
}

resource "azurerm_public_ip" "hdp-publicip-3" {
    name                         = "Azure-HDP-PublicIP-3"
    location                     = azurerm_resource_group.hdp-rg.location
    resource_group_name          = azurerm_resource_group.hdp-rg.name
    allocation_method            = "Dynamic"

    tags = {
        Name = "HDP-Public-IP-3"
        environment = "Production"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above section creates 3 public IP addresses, named &lt;code&gt;Azure-HDP-PublicIP-1&lt;/code&gt; and so on.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Network Security Group
&lt;/h3&gt;

&lt;p&gt;Network Security Groups control the flow of network traffic in and out of your VM.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_network_security_group" "hdp-sg" {
    name                = "Azure-HDP-SG"
    location            = azurerm_resource_group.hdp-rg.location
    resource_group_name = azurerm_resource_group.hdp-rg.name

    security_rule {
        name                       = "SSH"
        priority                   = 1001
        direction                  = "Inbound"
        access                     = "Allow"
        protocol                   = "Tcp"
        source_port_range          = "*"
        destination_port_range     = "22"
        source_address_prefix      = "*"
        destination_address_prefix = "*"
    }

    tags = {
        Name = "Azure-HDP-SG"
        environment = "Production"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above section creates a network security group named &lt;code&gt;Azure-HDP-SG&lt;/code&gt; and defines a rule to allow SSH traffic on &lt;code&gt;TCP port 22&lt;/code&gt;.&lt;/p&gt;
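&lt;p&gt;Azure evaluates NSG rules in ascending priority order (a lower number means higher priority) and applies the first match, falling back to the built-in deny-all inbound rule. A minimal sketch of that evaluation order, with illustrative field names rather than the Azure API:&lt;/p&gt;

```python
def evaluate(rules, port, protocol):
    """Return the access decision of the first matching rule, by ascending priority."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["protocol"] in ("*", protocol) and rule["dest_port"] in ("*", str(port)):
            return rule["access"]
    return "Deny"  # Azure's default DenyAllInbound rule catches everything else

# The SSH rule from the template above, priority 1001.
rules = [{"priority": 1001, "protocol": "Tcp", "dest_port": "22", "access": "Allow"}]
print(evaluate(rules, 22, "Tcp"))  # Allow
print(evaluate(rules, 80, "Tcp"))  # Deny
```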

&lt;h3&gt;
  
  
  Create virtual network interface card
&lt;/h3&gt;

&lt;p&gt;A virtual network interface card (NIC) connects your VM to a given virtual network, public IP address, and network security group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_network_interface" "hdp-nic-1" {
    name                      = "myNIC-1"
    location                  = azurerm_resource_group.hdp-rg.location
    resource_group_name       = azurerm_resource_group.hdp-rg.name

    ip_configuration {
        name                          = "myNicConfiguration"
        subnet_id                     = azurerm_subnet.hdp-subnet.id
        private_ip_address_allocation = "Dynamic"
        public_ip_address_id          = azurerm_public_ip.hdp-publicip-1.id
    }

    tags = {
        Name = "HDP-NIC-1"
        Environment = "Production"
    }
}

resource "azurerm_network_interface" "hdp-nic-2" {
    name                      = "myNIC-2"
    location                  = azurerm_resource_group.hdp-rg.location
    resource_group_name       = azurerm_resource_group.hdp-rg.name

    ip_configuration {
        name                          = "myNicConfiguration"
        subnet_id                     = azurerm_subnet.hdp-subnet.id
        private_ip_address_allocation = "Dynamic"
        public_ip_address_id          = azurerm_public_ip.hdp-publicip-2.id
    }

    tags = {
        Name = "HDP-NIC-2"
        Environment = "Production"
    }
}

resource "azurerm_network_interface" "hdp-nic-3" {
    name                      = "myNIC-3"
    location                  = azurerm_resource_group.hdp-rg.location
    resource_group_name       = azurerm_resource_group.hdp-rg.name

    ip_configuration {
        name                          = "myNicConfiguration"
        subnet_id                     = azurerm_subnet.hdp-subnet.id
        private_ip_address_allocation = "Dynamic"
        public_ip_address_id          = azurerm_public_ip.hdp-publicip-3.id
    }

    tags = {
        Name = "HDP-NIC-3"
        Environment = "Production"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above section in a Terraform template creates 3 virtual NICs, named &lt;code&gt;myNIC-1&lt;/code&gt; and so on, connected to the virtual networking resources you've created.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connect the security group to the network interface
&lt;/h3&gt;

&lt;p&gt;Now you can connect your NICs to the security group you created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_network_interface_security_group_association" "hdp-nic-sg-1" {
    network_interface_id      = azurerm_network_interface.hdp-nic-1.id
    network_security_group_id = azurerm_network_security_group.hdp-sg.id
}

resource "azurerm_network_interface_security_group_association" "hdp-nic-sg-2" {
    network_interface_id      = azurerm_network_interface.hdp-nic-2.id
    network_security_group_id = azurerm_network_security_group.hdp-sg.id
}

resource "azurerm_network_interface_security_group_association" "hdp-nic-sg-3" {
    network_interface_id      = azurerm_network_interface.hdp-nic-3.id
    network_security_group_id = azurerm_network_security_group.hdp-sg.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above section in a Terraform template creates 3 associations, attaching the security group to the NICs you've created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnyy6wuzs80f20ts9kibx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnyy6wuzs80f20ts9kibx.png" alt="nic card"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the virtual machines
&lt;/h3&gt;

&lt;p&gt;The final step is to create the VMs using all the resources created above. Here you see 3 VMs, named &lt;code&gt;az-hdp-vm-1&lt;/code&gt; and so on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; resource "azurerm_virtual_machine" "main-1" {
  name                  = "az-hdp-vm-1"
  location              = azurerm_resource_group.hdp-rg.location
  resource_group_name   = azurerm_resource_group.hdp-rg.name
  network_interface_ids = [azurerm_network_interface.hdp-nic-1.id]
  vm_size               = "Standard_DS1_v2"
  delete_os_disk_on_termination = true
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "RedHat"
    offer     = "RHEL"
    sku       = "8.1"
    version   = "latest"
  }
  storage_os_disk {
    name              = "hdp-disk-1"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }
  os_profile {
    computer_name  = "hostname"
    admin_username = "hdpAdmin"
    admin_password = "Password1234!"
  }
  os_profile_linux_config {
    disable_password_authentication = false
  }

  tags = {
    Name = "Az-HDP-Slave-1"
    Environment = "Production"
  }
}


resource "azurerm_virtual_machine" "main-2" {
  name                  = "az-hdp-vm-2"
  location              = azurerm_resource_group.hdp-rg.location
  resource_group_name   = azurerm_resource_group.hdp-rg.name
  network_interface_ids = [azurerm_network_interface.hdp-nic-2.id]
  vm_size               = "Standard_DS1_v2"
  delete_os_disk_on_termination = true
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "RedHat"
    offer     = "RHEL"
    sku       = "8.1"
    version   = "latest"
  }
  storage_os_disk {
    name              = "hdp-disk-2"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }
  os_profile {
    computer_name  = "hostname"
    admin_username = "hdpAdmin"
    admin_password = "Password1234!"
  }
  os_profile_linux_config {
    disable_password_authentication = false
  }
  tags = {
    Name = "Az-HDP-Slave-2"
    Environment = "Production"
  }
}


resource "azurerm_virtual_machine" "main-3" {
  name                  = "az-hdp-vm-3"
  location              = azurerm_resource_group.hdp-rg.location
  resource_group_name   = azurerm_resource_group.hdp-rg.name
  network_interface_ids = [azurerm_network_interface.hdp-nic-3.id]
  vm_size               = "Standard_DS1_v2"
  delete_os_disk_on_termination = true
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "RedHat"
    offer     = "RHEL"
    sku       = "8.1"
    version   = "latest"
  }
  storage_os_disk {
    name              = "hdp-disk-3"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }
 os_profile {
    computer_name  = "hostname"
    admin_username = "hdpAdmin"
    admin_password = "Password1234!"
  }
  os_profile_linux_config {
    disable_password_authentication = false
  }
  tags = {
    Name = "Az-HDP-Slave-3"
    Environment = "Production"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above section creates 3 VMs named &lt;code&gt;az-hdp-vm-1&lt;/code&gt;, &lt;code&gt;az-hdp-vm-2&lt;/code&gt;, and &lt;code&gt;az-hdp-vm-3&lt;/code&gt;, and attaches the virtual NICs named &lt;code&gt;myNIC-1&lt;/code&gt;, &lt;code&gt;myNIC-2&lt;/code&gt;, and &lt;code&gt;myNIC-3&lt;/code&gt; respectively. The latest &lt;code&gt;RHEL 8.1&lt;/code&gt; image is used, and an admin user named &lt;code&gt;hdpAdmin&lt;/code&gt; is created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5ugpjr0i52c6wjhy2m2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5ugpjr0i52c6wjhy2m2.png" alt="azure vm"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Build and deploy the infrastructure
&lt;/h3&gt;

&lt;p&gt;With your Terraform template created, the first step is to initialize Terraform. This step ensures that Terraform has all the prerequisites to build your template in Azure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to have Terraform review and validate the template. This step compares the requested resources to the state information saved by Terraform and then outputs the planned execution. The Azure resources aren't created at this point.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything looks correct and you're ready to build the infrastructure in Azure, apply the template in Terraform.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once Terraform completes, your VM infrastructure is ready.&lt;/p&gt;

&lt;h1&gt;
  
  
  Creating Machine learning Model:
&lt;/h1&gt;

&lt;p&gt;Now we have to create a machine learning model. Since the dataset poses a classification problem, we have to choose classification algorithms. Here I trained the model with &lt;code&gt;LogisticRegression&lt;/code&gt;, &lt;code&gt;RandomForestClassifier&lt;/code&gt;, &lt;code&gt;DecisionTreeClassifier&lt;/code&gt;, and &lt;code&gt;GradientBoostingClassifier&lt;/code&gt;.&lt;/p&gt;
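&lt;p&gt;All four models below are compared with &lt;code&gt;accuracy_score&lt;/code&gt;, which is simply the fraction of predictions that match the true labels. A stdlib-only equivalent, for intuition:&lt;/p&gt;

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions equal to the true labels (what accuracy_score computes)."""
    assert len(y_true) == len(y_pred)
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# 3 of 4 predictions match the labels
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```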

&lt;h1&gt;
  
  
  Logistic Regression:
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.linear_model import LogisticRegression
lr_model=LogisticRegression()
lr_model.fit(X_train, y_train)
lr_y_model= lr_model.predict(X_test)
lr_y_model
from sklearn.metrics import accuracy_score
print("Logistic Regression Accuracy: ", accuracy_score(y_test, lr_y_model))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Logistic Regression Accuracy:  0.9180327868852459/opt/conda/lib/python3.7/site-packages/sklearn/linear_model/_logistic.py:765: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
  extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
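&lt;p&gt;The &lt;code&gt;ConvergenceWarning&lt;/code&gt; above suggests scaling the data or raising &lt;code&gt;max_iter&lt;/code&gt;. A small sketch of both fixes combined (again with synthetic stand-in data):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: scale the features and raise max_iter so lbfgs converges cleanly.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=13, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.score(X, y))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;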



&lt;h1&gt;
  
  
  RandomForestClassifier:
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.ensemble import RandomForestClassifier
rfc_model = RandomForestClassifier(n_estimators=10000, max_depth=100)
rfc_model
rfc_model.fit(X_train, y_train)
rfc_y_pred = rfc_model.predict(X_test)
rfc_y_pred
from sklearn.metrics import accuracy_score
print("Random Forest Accuracy: ", accuracy_score(y_test, rfc_y_pred))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Random Forest Accuracy: 0.7704918032786885&lt;/code&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  DecisionTreeClasssifier:
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.tree import DecisionTreeClassifier
dt_model = DecisionTreeClassifier()
dt_model.fit(X_train, y_train)
dt_y_pred = dt_model.predict(X_test)
dt_y_pred
from sklearn.metrics import accuracy_score
print("Decision Tree Accuracy: ", accuracy_score(y_test, dt_y_pred))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Decision Tree Accuracy: 0.6721311475409836&lt;/code&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  GradientBoostingClassifier:
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.ensemble import GradientBoostingClassifier
GB_model = GradientBoostingClassifier(n_estimators=1000)
GB_model.fit(X_train, y_train)
y_pred_GB = GB_model.predict(X_test)
y_pred_GB
from sklearn.metrics import accuracy_score
print("GradientBoostingClassifier Accuracy: ", accuracy_score(y_test, y_pred_GB))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;GradientBoostingClassifier Accuracy: 0.7868852459016393&lt;/code&gt;&lt;br&gt;
From the above model creation and comparison, &lt;code&gt;Logistic Regression&lt;/code&gt; gives the highest accuracy, so I am taking the &lt;code&gt;Logistic Regression&lt;/code&gt; model and saving it with an &lt;code&gt;.h5&lt;/code&gt; extension.&lt;/p&gt;
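&lt;p&gt;The four trainings above can also be condensed into one loop. A sketch assuming the same &lt;code&gt;X_train&lt;/code&gt;/&lt;code&gt;X_test&lt;/code&gt; split (synthetic data is used here as a stand-in):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: evaluate several classifiers with one loop instead of repeated code.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=13, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(random_state=1),
    "Decision Tree": DecisionTreeClassifier(random_state=1),
    "Gradient Boosting": GradientBoostingClassifier(random_state=1),
}
scores = {}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, clf.predict(X_test))
    print(name, round(scores[name], 3))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;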

&lt;p&gt;
     &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Famit17133129%2FHeart_Diseases_Prediction_App_Creation_Using_MLOps_Tools%2Fmain%2FImages%2F2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Famit17133129%2FHeart_Diseases_Prediction_App_Creation_Using_MLOps_Tools%2Fmain%2FImages%2F2.gif"&gt;&lt;/a&gt;
&lt;/p&gt;

&lt;h1&gt;
  
  
  Saving the Logistic Regression Model:
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import joblib
joblib_file = "LogisticRegression_Heart_Prediction.h5"
joblib.dump(lr_model, joblib_file)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
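&lt;p&gt;To confirm the dump worked, you can load the file back with &lt;code&gt;joblib&lt;/code&gt; and predict with it. A quick round-trip sketch (a freshly trained synthetic model stands in for the real one):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: round-trip the saved model to confirm joblib.dump/joblib.load work.
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, n_features=13, random_state=0)
lr_model = LogisticRegression(max_iter=1000).fit(X, y)

joblib.dump(lr_model, "LogisticRegression_Heart_Prediction.h5")
loaded = joblib.load("LogisticRegression_Heart_Prediction.h5")
print((loaded.predict(X) == lr_model.predict(X)).all())  # True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;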



&lt;p&gt;The above code creates a file named &lt;code&gt;LogisticRegression_Heart_Prediction.h5&lt;/code&gt;, and we have to use this model while creating a Docker image in which Flask is installed. Below is the code for the &lt;code&gt;Dockerfile&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Complete code of my machine learning model:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://colab.research.google.com/drive/1hr2igd-gjnjGn335g8VWosY5i1LuEDZY" rel="noopener noreferrer"&gt;https://colab.research.google.com/drive/1hr2igd-gjnjGn335g8VWosY5i1LuEDZY&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we need to build the image using the below &lt;code&gt;Dockerfile&lt;/code&gt; code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM centos:latest
RUN yum install python3 python3-devel gcc-c++ -y &amp;amp;&amp;amp; \
    python3 -m pip install --upgrade --force-reinstall pip &amp;amp;&amp;amp; \
    yum install sudo -y &amp;amp;&amp;amp; \
    yum install --assumeyes python3-pip &amp;amp;&amp;amp; \
    pip install keras &amp;amp;&amp;amp; \
    pip install --no-cache-dir tensorflow &amp;amp;&amp;amp; \
    pip3 install flask &amp;amp;&amp;amp; \
    pip3 install joblib &amp;amp;&amp;amp; \
    pip3 install sklearn &amp;amp;&amp;amp; \
    mkdir /heart_app &amp;amp;&amp;amp; \
    mkdir /heart_app/templates
COPY  LogisticRegression_Heart_Prediction.h5    /heart_app
COPY  app.py  /heart_app
COPY  myform.html  /heart_app/templates
COPY  result.html   /heart_app/templates
EXPOSE  4444
WORKDIR  /heart_app
ENV FLASK_APP=app.py
ENTRYPOINT flask  run --host=0.0.0.0    --port=4444
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To build the Docker image, use the below command:  &lt;code&gt;docker build -t image_name:version   .&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  My Docker image link:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://hub.docker.com/repository/docker/hackcoderr/heart-diseases-predictor" rel="noopener noreferrer"&gt;https://hub.docker.com/repository/docker/hackcoderr/heart-diseases-predictor&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Ansible
&lt;/h1&gt;

&lt;p&gt;Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install Ansible
&lt;/h3&gt;

&lt;p&gt;I'm going to install the Ansible setup on an AWS instance named &lt;code&gt;ansible-controller-node&lt;/code&gt; which I launched earlier. So run the mentioned commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install python3 git -y
git clone https://github.com/hackcoderr/Ansible-Setup.git
cd Ansible-Setup/
python3 script.py
sudo hostnamectl set-hostname ansible-controller
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If you want to know more about it, you can visit my &lt;a href="https://github.com/hackcoderr/Ansible-Setup" rel="noopener noreferrer"&gt;Ansible Setup Repository&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Remaining Steps
&lt;/h2&gt;

&lt;p&gt;Please visit the mentioned link for the remaining steps and see its README.md file.&lt;br&gt;
&lt;a href="https://github.com/hackcoderr/heart-diseases-predictor" rel="noopener noreferrer"&gt;https://github.com/hackcoderr/heart-diseases-predictor&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>devops</category>
      <category>machinelearning</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Creating a Helm Chart for Grafana</title>
      <dc:creator>Sachin Kumar</dc:creator>
      <pubDate>Wed, 21 Apr 2021 07:29:29 +0000</pubDate>
      <link>https://forem.com/hackcoderr/creating-a-helm-chart-for-grafana-1c3f</link>
      <guid>https://forem.com/hackcoderr/creating-a-helm-chart-for-grafana-1c3f</guid>
      <description>&lt;p&gt;Welcome to my article. You will see all about the integration of Dockerfile, Helm, Grafana, etc, in this article. So let's get started without delay.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-requisite
&lt;/h2&gt;

&lt;p&gt;To perform this scenario, you will need the following platform.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/console/"&gt;AWS Account&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Kubernetes Setup
&lt;/h2&gt;

&lt;p&gt;To demonstrate this scenario, first of all, we have to install the Kubernetes setup; then we can move ahead with the further parts. So I am installing the Kubernetes cluster on top of AWS. Let's launch the instance with the mentioned configuration.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ubuntu Server 18.04 LTS (HVM), SSD Volume Type&lt;/li&gt;
&lt;li&gt;t2.xlarge Instance type&lt;/li&gt;
&lt;li&gt;Minimum Storage 20 GiB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After launching the AWS instance, connect to it with the help of any remote software (e.g. PuTTY) or the SSH protocol, and then follow these steps.&lt;/p&gt;

&lt;p&gt;🔸 Log in with root power.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo su -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 Install kubectl.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 Update the instance and install docker.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update -y
sudo apt-get install docker.io -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 Install curl software to install Minikube.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install curl -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What's Minikube?
&lt;/h3&gt;

&lt;p&gt;Minikube is a utility you can use to run Kubernetes (k8s) on your local machine. It creates a single node cluster contained in a virtual machine (VM). This cluster lets you demo Kubernetes operations without requiring the time and resource-consuming installation of full-blown K8s.&lt;/p&gt;

&lt;p&gt;🔸 So let's install Minikube.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo chmod +x minikube
sudo mv minikube /usr/local/bin/
sudo apt install conntrack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;⚠️ &lt;em&gt;Note&lt;/em&gt;: Now switch to the root shell if you haven't already.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo -i
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 Start the Minikube&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube start --vm-driver=none
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 Our single node cluster is ready to use so run the below command to check the minikube status.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💠 you will get output such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@ip-172-31-39-130:~# minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hopefully, your installation has completed. So now it's time to move towards creating a container image for Grafana. But before creating it, let's try to understand Grafana and Dockerfile.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's Grafana ?
&lt;/h3&gt;

&lt;p&gt;Grafana is a &lt;a href="https://en.wikipedia.org/wiki/Multi-platform"&gt;multi-platform open source&lt;/a&gt; analytics and &lt;a href="https://en.wikipedia.org/wiki/Interactive_visualization"&gt;interactive visualization&lt;/a&gt; web application. It provides charts, graphs, and alerts for the web when connected to supported data sources, and end users can create complex monitoring dashboards using interactive query builders. Grafana is divided into a front end and back end, written in TypeScript and Go, respectively.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's Dockerfile ?
&lt;/h3&gt;

&lt;p&gt;A Dockerfile is a simple text file that consists of instructions to build Docker images. Mentioned below is the syntax of a Dockerfile for creating the Grafana Docker image.&lt;/p&gt;

&lt;p&gt;🔸 So create a file named Dockerfile, and ensure the D is capital in Dockerfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and write the below code inside Dockerfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM centos:7
RUN yum install wget -y
RUN wget https://dl.grafana.com/oss/release/grafana-7.0.3-1.x86_64.rpm
RUN yum install grafana-7.0.3-1.x86_64.rpm -y
WORKDIR /usr/share/grafana
CMD /usr/sbin/grafana-server --homepath=/usr/share/grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 After writing the code, build the docker image with the help of following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t username/imagename:version .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;eg. # docker build -t hackcoderr/grafana:v1 .&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;🔸  Now log in to your DockerHub account. If you don't have a DockerHub account, first create it, then move ahead with the below command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 After it, just push your image to DockerHub so that you can use it in the future.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push username/imagename:version 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💠 After running above command, the output should be similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@ip-172-31-39-130:~# docker push hackcoderr/grafana:v1
The push refers to repository [docker.io/hackcoderr/grafana]
4b603ec3a2e0: Pushed
a1be9f0c6dee: Pushed
9d1af48bd5b4: Pushed
174f56854903: Mounted from library/centos
v1: digest: sha256:6bd02f99b6e905582286b344980b2f83c75348876a58eb15786fd5baab04ce0b size: 1166
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💠 But if you don't want to create this container image, then you can simply pull my pre-created image from &lt;em&gt;Dockerhub&lt;/em&gt; with the help of the &lt;code&gt;docker pull&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull hackcoderr/grafana:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;⚠️ We will use this image in the upcoming steps when we will create &lt;code&gt;deployment.yaml&lt;/code&gt; file in the Helm chart. But before it, let's know about Helm.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's Helm ?
&lt;/h3&gt;

&lt;p&gt;So let's try to understand what Helm is.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Helm is a package manager for Kubernetes.&lt;/li&gt;
&lt;li&gt;Helm packages are called Charts.&lt;/li&gt;
&lt;li&gt;Helm Charts help define, install, and upgrade complex Kubernetes applications.&lt;/li&gt;
&lt;li&gt;Helm Charts can be versioned, shared, and published.&lt;/li&gt;
&lt;li&gt;Helm Charts can accept input parameters.

&lt;ul&gt;
&lt;li&gt;kubectl needs a template engine to do this (e.g. Jinja)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Popular packages already available.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let's see how we can install Helm in the cluster.&lt;/p&gt;

&lt;p&gt;🔸 Run the below commands to install Helm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://get.helm.sh/helm-v3.5.2-linux-amd64.tar.gz
tar -xvzf helm-v3.5.2-linux-amd64.tar.gz

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💠 The output should be similar to the following, after running the &lt;code&gt;tar&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@ip-172-31-39-130:~# tar -xvzf helm-v3.5.2-linux-amd64.tar.gz
linux-amd64/
linux-amd64/helm
linux-amd64/LICENSE
linux-amd64/README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 After this, copy &lt;code&gt;linux-amd64/helm&lt;/code&gt; file in the &lt;code&gt;/usr/bin/&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp linux-amd64/helm /usr/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💠 Also, verify that Helm is installed with the help of the &lt;code&gt;helm version&lt;/code&gt; command; the output should be similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@ip-172-31-39-130:~# helm version
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Create a Helm Chart
&lt;/h4&gt;

&lt;p&gt;Let's create a new Helm chart from scratch. Helm can create a bunch of files for you that are usually important for a production-ready service in Kubernetes. To concentrate on the most important parts, we can remove a lot of the created files. Let's go through the only files required for this example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fecamIrn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8bg2vbgdbu7687novaki.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fecamIrn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8bg2vbgdbu7687novaki.jpg" alt="Helm file structure" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔸 Create a Helm Chart for Grafana.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir grafana
cd grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💠 But here we need a project file that is called Chart.yaml and contains all the metadata information. &lt;br&gt;
🔸 So, create this file. Also, C should be capital in the &lt;code&gt;Chart.yaml&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim Chart.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 Write the below code inside &lt;code&gt;Chart.yaml&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v2
name: Grafana
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: 1.16.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
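&lt;p&gt;💠 As a quick sanity check (a sketch that assumes PyYAML is available), you can parse this metadata and confirm the fields Helm requires are present:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: parse the Chart.yaml contents and check Helm's required fields.
import yaml

chart = yaml.safe_load("""
apiVersion: v2
name: Grafana
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: 1.16.0
""")
for field in ("apiVersion", "name", "version"):
    assert field in chart, f"missing required field: {field}"
print(chart["name"], chart["version"])  # Grafana 0.1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;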



&lt;p&gt;🔸 Make a &lt;code&gt;templates&lt;/code&gt; folder inside &lt;code&gt;grafana&lt;/code&gt; and go inside it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir templates
cd templates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 Use this command to create a code of the &lt;code&gt;deployment.yaml&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create deployment grafana --image=hackcoderr/grafana:v1 --dry-run -o yaml &amp;gt; deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💠 The output should be similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@ip-172-31-39-130:~/grafana/templates# kubectl create deployment grafana --image=hackcoderr/grafana:v1 --dry-run -o yaml &amp;gt; deployment.yaml
W0420 18:01:34.362388   20835 helpers.go:557] --dry-run is deprecated and can be replaced with --dry-run=client.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 Go outside the &lt;code&gt;grafana&lt;/code&gt; directory and install the helm chart.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd 
helm install grafana grafana/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;⚠️ Here &lt;code&gt;grafana&lt;/code&gt; is the name of Helm Chart and &lt;code&gt;grafana/&lt;/code&gt; is the path of the chart. &lt;/p&gt;

&lt;p&gt;💠 After running above command, the output should be similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@ip-172-31-44-81:~# helm install grafana grafana/
NAME: grafana
LAST DEPLOYED: Tue Apr 20 18:10:44 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 Now again go inside &lt;code&gt;grafana/templates&lt;/code&gt; and use the below command to create a code of the &lt;code&gt;service.yaml&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd grafana/templates/
kubectl expose deployment grafana --port=3000 --type=NodePort --dry-run -o yaml &amp;gt; service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💠 The output should be similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@ip-172-31-39-130:~/grafana/templates# kubectl expose deployment grafana --port=3000 --type=NodePort --dry-run -o yaml &amp;gt; service.yaml
W0420 18:12:55.289972   23635 helpers.go:557] --dry-run is deprecated and can be replaced with --dry-run=client.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 After it, run the below command for exposing the &lt;code&gt;grafana&lt;/code&gt; pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💠 To ensure your pod is working well, run the below commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
kubectl get deployment
kubectl get svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💠 After running these commands, the output should be similar to the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--71jIbWVY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y7h5wbhkvswrdlw7hi5a.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--71jIbWVY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y7h5wbhkvswrdlw7hi5a.jpg" alt="Alt Text" width="659" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can check the list of Helm releases you have using the &lt;code&gt;helm list&lt;/code&gt; command. Now we can check the release running in the Kubernetes cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@ip-172-31-39-130:~# helm list
NAME    NAMESPACE       REVISION        UPDATED                                STATUS   CHART           APP VERSION
grafana default         1               2021-04-20 18:10:44.189498387 +0000 UTC  deployed  Grafana-0.1.0   1.16.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hopefully, everything has worked well so far. So let's check whether Grafana is working fine or not. For this, you have to take the &lt;code&gt;public_ip_of_instance&lt;/code&gt; and the &lt;code&gt;port no.&lt;/code&gt; of your pod.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;eg.  15.207.72.25:31130&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Where &lt;code&gt;15.207.72.25&lt;/code&gt; is the public IP of my instance that contains all the setup done so far, and &lt;code&gt;31130&lt;/code&gt; is the port no. of the &lt;code&gt;grafana&lt;/code&gt; pod, which you can see in the above screenshot after running the &lt;code&gt;kubectl get svc&lt;/code&gt; command. So browse it.&lt;/p&gt;

&lt;p&gt;💠 After browsing it, a login page will pop up. So log in with the default username and password, &lt;code&gt;admin&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A3rfqtEU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ipxwk7fnmej4yuxvqb74.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A3rfqtEU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ipxwk7fnmej4yuxvqb74.jpg" alt="Alt Text" width="800" height="492"&gt;&lt;/a&gt;&lt;br&gt;
⚠️ After logging in, you will get a page for changing the username and password, so you can change them if you want.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tMHBcfpB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g24oqkaa44grybrl5w5n.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tMHBcfpB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g24oqkaa44grybrl5w5n.jpg" alt="Alt Text" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now Grafana is ready to use. So let's enjoy ☺️&lt;/p&gt;
&lt;h4&gt;
  
  
  Packing resources inside the Helm Chart.
&lt;/h4&gt;

&lt;p&gt;So the Helm chart is ready inside the &lt;code&gt;grafana/&lt;/code&gt; directory, but we can't publish it as it is. First, we have to create a package for this Helm chart.&lt;/p&gt;

&lt;p&gt;🔸 Create one directory named &lt;code&gt;charts&lt;/code&gt;. Make sure this directory is inside the &lt;code&gt;grafana&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir charts/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 Now, run the following command to package the chart and store it inside the &lt;code&gt;charts/&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm package /root/grafana -d charts/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💠 you will get output something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@ip-172-31-44-81:~/grafana# helm package /root/grafana -d charts/
Successfully packaged chart and saved it to: charts/Grafana-0.1.0.tgz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Creating an index.yaml file.
&lt;/h4&gt;

&lt;p&gt;Every Helm repository requires an index.yaml file. The index.yaml file contains information about the charts present inside the current repository/directory.&lt;/p&gt;

&lt;p&gt;🔸 To generate the &lt;code&gt;index.yaml&lt;/code&gt; file inside the &lt;code&gt;charts/&lt;/code&gt; directory, run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo index charts/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;💠 You can see an &lt;code&gt;index.yaml&lt;/code&gt; file generated with the details of the chart.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VZjr6u1a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dhociga8qk4shp2xyr0h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VZjr6u1a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dhociga8qk4shp2xyr0h.jpg" alt="Alt Text" width="659" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Hosting the chart with GitHub pages
&lt;/h4&gt;

&lt;p&gt;Now that everything is fine in our Helm chart, we can publish it to &lt;a href="https://artifacthub.io/"&gt;ArtifactHub&lt;/a&gt; so that we can use it in the future when we require it. &lt;/p&gt;

&lt;p&gt;But first we have to host this chart somewhere so that we can publish it to ArtifactHub. I am going to host it with GitHub Pages, but before doing this, we have to push the chart to GitHub. So let's follow these steps.&lt;/p&gt;

&lt;p&gt;🔸 Install git and config cluster with your GitHub account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install git -y
git config --global user.name 'username'
git config --global user.email 'usermail@gmail.com'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 After installing &lt;code&gt;git&lt;/code&gt;, go inside the &lt;code&gt;grafana&lt;/code&gt; directory and then initialize it with the help of the below command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 Now add, commit then push this directory to GitHub.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add .
git commit -m "give any msg according to you"
git branch -M main
git remote add origin URL
git push -u origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔸 After pushing it, go to the GitHub repository and click on &lt;code&gt;Settings&lt;/code&gt; then &lt;code&gt;GitHub Pages&lt;/code&gt;.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pKJ1DuEl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nl14kiypwvrosfiadxc2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pKJ1DuEl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nl14kiypwvrosfiadxc2.jpg" alt="Alt Text" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;⚠️ To activate the GitHub page, first select the &lt;code&gt;branch&lt;/code&gt; you want to activate, then &lt;code&gt;save&lt;/code&gt; it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kilUl_6I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cpx7mydtxe9lkamurju0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kilUl_6I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cpx7mydtxe9lkamurju0.jpg" alt="Alt Text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Publishing the Helm chart to ArtifactHub
&lt;/h4&gt;

&lt;p&gt;Artifact Hub includes a fine-grained authorization mechanism that allows organizations to define which actions their members can perform. It is based on customizable authorization policies enforced by the Open Policy Agent. Go to &lt;a href="https://artifacthub.io/"&gt;ArtifactHub&lt;/a&gt; and log in to your account.&lt;/p&gt;

&lt;p&gt;🔸 Now click on &lt;code&gt;profile icon &amp;gt; Control Panel &amp;gt; Add repository&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;⚠️ Make sure the repository URL is of the form &lt;code&gt;https://username.github.io/repository_name/chart/&lt;/code&gt; when you add the repository to ArtifactHub. Otherwise, it can cause &lt;code&gt;url&lt;/code&gt;-related errors.&lt;/p&gt;
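&lt;p&gt;For reference, one common way to produce the &lt;code&gt;index.yaml&lt;/code&gt; that ArtifactHub reads at that URL is &lt;code&gt;helm repo index&lt;/code&gt;. A minimal sketch, assuming the packaged chart lives in a &lt;code&gt;chart/&lt;/code&gt; sub-directory and &lt;code&gt;username&lt;/code&gt;/&lt;code&gt;repository_name&lt;/code&gt; are placeholders for your own values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# package the chart into a .tgz archive (run from the chart's parent directory)
helm package grafana -d chart/

# generate/refresh index.yaml so the repo is reachable at the GitHub Pages URL
helm repo index chart/ --url https://username.github.io/repository_name/chart/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After regenerating the index, commit and push it so GitHub Pages serves the updated &lt;code&gt;index.yaml&lt;/code&gt;.&lt;/p&gt;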

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HWrBtnnz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7q5837mznaju9etiwsxm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HWrBtnnz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7q5837mznaju9etiwsxm.jpg" alt="Alt Text" width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔸 After filling in the required information for your Helm repository, click &lt;code&gt;Add&lt;/code&gt;. If the information is correct, the Helm repository will be created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2Zt3wzTX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/172jmtzuwhb4l3zyla3n.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2Zt3wzTX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/172jmtzuwhb4l3zyla3n.jpg" alt="Alt Text" width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With that, we have successfully published our Helm chart to &lt;a href="https://artifacthub.io/"&gt;ArtifactHub&lt;/a&gt;.&lt;/p&gt;
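&lt;p&gt;To verify the published chart works end to end, it can be consumed like any other Helm repository. A sketch with placeholder repo name and URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# add the GitHub Pages-hosted repo (placeholder name and URL)
helm repo add myrepo https://username.github.io/repository_name/chart/
helm repo update

# confirm the chart is listed, then install it
helm search repo myrepo
helm install my-grafana myrepo/grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;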

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Helm's powerful templating engine, combined with the ability to install, upgrade, and roll back releases, is what makes it so useful. On top of that, the publicly available ArtifactHub offers thousands of production-ready charts. This makes Helm a must-have tool in your toolbox if you work with Kubernetes at scale!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>github</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
