<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jobin Keecheril</title>
    <description>The latest articles on Forem by Jobin Keecheril (@keecheriljobin).</description>
    <link>https://forem.com/keecheriljobin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F270341%2F8c8b9f95-4a6c-4aaa-bc5f-5bc457e9f6fd.jpeg</url>
      <title>Forem: Jobin Keecheril</title>
      <link>https://forem.com/keecheriljobin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/keecheriljobin"/>
    <language>en</language>
    <item>
      <title>Installing Kubernetes on CentOS</title>
      <dc:creator>Jobin Keecheril</dc:creator>
      <pubDate>Tue, 21 Apr 2020 14:09:19 +0000</pubDate>
      <link>https://forem.com/keecheriljobin/installing-kubernetes-on-centos-3m7b</link>
      <guid>https://forem.com/keecheriljobin/installing-kubernetes-on-centos-3m7b</guid>
      <description>&lt;h1&gt;
  
  
  Kubernetes
&lt;/h1&gt;

&lt;p&gt;Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.&lt;/p&gt;

&lt;p&gt;The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps for Installing Kubernetes on CentOS
&lt;/h2&gt;

&lt;p&gt;We will be creating a cluster of three machines: &lt;br&gt;
one master node and &lt;br&gt;
two worker nodes.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;em&gt;Perform the following steps on Master:&lt;/em&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SETTING UP KUBERNETES REPOSITORY&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a repository file inside /etc/yum.repos.d/, e.g. /etc/yum.repos.d/kube.repo. &lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The file name doesn’t matter, but the extension must be .repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
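Before running yum, it can help to sanity-check that the repo file defines the fields yum expects. A minimal sketch; it writes the same content to a temporary file so it can be tried anywhere, while on the real host the file lives at /etc/yum.repos.d/kube.repo:

```shell
# Write the repo definition to a temp file and confirm the required
# keys are present (on the real host this is /etc/yum.repos.d/kube.repo).
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
for key in name baseurl enabled gpgcheck gpgkey; do
    grep -q "^${key}=" "$tmp" || echo "missing key: $key"
done
rm -f "$tmp"
```

No output means every required key is present.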



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;DISABLING SELINUX&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First turn SELinux off for the running session, then disable it permanently in its config file:&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;[root@master ~]# setenforce 0&lt;br&gt;
[root@master ~]# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SWITCH OFF SWAP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Comment out the reference to swap in /etc/fstab.  Start by editing the file:&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;[root@master ~]# swapoff -a&lt;br&gt;
[root@master ~]# vi /etc/fstab&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then comment out the appropriate line, as in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/fstab: static file system information.
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
# &amp;lt;file system&amp;gt; &amp;lt;mount point&amp;gt;   &amp;lt;type&amp;gt;  &amp;lt;options&amp;gt;    &amp;lt;dump&amp;gt;  &amp;lt;pass&amp;gt;
# / was on /dev/sda1 during installation
UUID=1d343a19-bd75-47a6-899d-7c8bc93e28ff /            ext4 errors=remount-ro 0    1
# swap was on /dev/sda5 during installation
#UUID=d0200036-b211-4e6e-a194-ac2e51dfb27d none         swap sw           0    0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ALLOW IPTABLES TO SEE BRIDGED TRAFFIC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now we will configure the kernel so that iptables sees bridged network traffic. First edit the sysctl.conf file: &lt;br&gt;
&lt;code&gt;[root@master ~]# vi /etc/sysctl.conf&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And add these lines at the end of the file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Alternatively, run these two commands (note that this variant does not persist across reboots):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@master ~]# echo '1' &amp;gt; /proc/sys/net/bridge/bridge-nf-call-iptables&lt;br&gt;
[root@master ~]# echo '1' &amp;gt; /proc/sys/net/bridge/bridge-nf-call-ip6tables&lt;/code&gt;&lt;/p&gt;
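On a fresh CentOS install these /proc entries may not exist until the br_netfilter kernel module is loaded (`modprobe br_netfilter`). Since the echo variant above is lost on reboot, the conventional dotted-key form can also be made persistent in its own file under /etc/sysctl.d/ and applied with `sysctl --system`; the file name below is illustrative:

```
# /etc/sysctl.d/k8s.conf  (illustrative file name; any *.conf works)
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
```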

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;INSTALLING DOCKER AND KUBEADM&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the package repositories are configured, run the given command to install kubeadm and docker packages.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;[root@master ~]# yum install kubeadm docker -y&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;STARTING AND ENABLING DOCKER AND KUBELET SERVICE&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Start and enable the kubelet and docker services:&lt;br&gt;&lt;br&gt;
&lt;code&gt;[root@master ~]# systemctl restart docker &amp;amp;&amp;amp; systemctl enable docker&lt;br&gt;
[root@master ~]# systemctl restart kubelet &amp;amp;&amp;amp; systemctl enable kubelet&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;KUBEADM INITIALIZATION&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the given command to initialize and set up the Kubernetes master.&lt;br&gt;
&lt;code&gt;[root@master ~]# kubeadm init&lt;/code&gt;&lt;br&gt;
&lt;em&gt;Output:&lt;/em&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4q-H6ogk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6g5ies83y66mlk21k3ea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4q-H6ogk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6g5ies83y66mlk21k3ea.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Take note of the lines inside the red block in the output above. You have to execute these three commands on your master.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@master ~]# mkdir -p $HOME/.kube&lt;br&gt;
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config&lt;br&gt;
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;DEPLOY POD NETWORK TO THE CLUSTER&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;[root@master ~]# kubectl get nodes&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will show the master as NotReady. To bring the cluster to Ready status and get kube-dns running, deploy the pod network so that containers on different hosts can communicate with each other. The pod network is an overlay network spanning the cluster's nodes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Run the given commands to deploy the network.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@master ~]# export kubever=$(kubectl version | base64 | tr -d '\n')&lt;br&gt;
[root@master ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now again execute the same command:&lt;br&gt;
&lt;code&gt;[root@master ~]# kubectl get nodes&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This time you will see the master is Ready.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@master ~]# kubectl get nodes
NAME           STATUS    AGE       VERSION
master         Ready     2h        v1.7.5
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now let’s add the worker nodes to the Kubernetes cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;em&gt;Perform the following steps on each worker node:&lt;/em&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;DISABLE SELINUX&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before disabling SELinux, set the hostnames of the two nodes to ‘worker-node1’ and ‘worker-node2’ respectively, for clarity.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;[root@worker-node1 ~]# setenforce 0&lt;br&gt;
[root@worker-node1 ~]# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;CONFIGURE KUBERNETES REPOSITORIES ON BOTH THE WORKER NODES&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a repository file inside /etc/yum.repos.d/, e.g. /etc/yum.repos.d/kube.repo. &lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The file name doesn’t matter, but the extension must be .repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SWITCH OFF SWAP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Comment out the reference to swap in /etc/fstab.  Start by editing the file:&lt;br&gt;
&lt;code&gt;[root@worker-node1 ~]# swapoff -a&lt;br&gt;
[root@worker-node1 ~]# vi /etc/fstab&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then comment out the appropriate line, as in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/fstab: static file system information.
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
# &amp;lt;file system&amp;gt; &amp;lt;mount point&amp;gt;   &amp;lt;type&amp;gt;  &amp;lt;options&amp;gt;    &amp;lt;dump&amp;gt;  &amp;lt;pass&amp;gt;
# / was on /dev/sda1 during installation
UUID=1d343a19-bd75-47a6-899d-7c8bc93e28ff /            ext4 errors=remount-ro 0    1
# swap was on /dev/sda5 during installation
#UUID=d0200036-b211-4e6e-a194-ac2e51dfb27d none         swap sw           0    0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ALLOW IPTABLES TO SEE BRIDGED TRAFFIC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now we will configure the kernel so that iptables sees bridged network traffic. First edit the sysctl.conf file: &lt;br&gt;
&lt;code&gt;[root@worker-node1 ~]# vi /etc/sysctl.conf&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And add these lines at the end of the file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Alternatively, run these two commands (note that this variant does not persist across reboots):&lt;br&gt;
&lt;code&gt;[root@worker-node1 ~]# echo '1' &amp;gt; /proc/sys/net/bridge/bridge-nf-call-iptables&lt;br&gt;
[root@worker-node1 ~]# echo '1' &amp;gt; /proc/sys/net/bridge/bridge-nf-call-ip6tables&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;INSTALLING DOCKER AND KUBEADM ON BOTH WORKER NODES&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the package repositories are configured, run the given command to install kubeadm and docker packages.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;[root@worker-node1 ~]# yum install kubeadm docker -y&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;STARTING AND ENABLING DOCKER AND KUBELET SERVICE&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Start and enable the kubelet and docker services:    &lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;[root@worker-node1 ~]# systemctl restart docker &amp;amp;&amp;amp; systemctl enable docker&lt;br&gt;
[root@worker-node1 ~]# systemctl restart kubelet &amp;amp;&amp;amp; systemctl enable kubelet&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;NOW JOIN WORKER NODES TO MASTER&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To join worker nodes to the master node, a token is required. When the Kubernetes master is initialized, its output includes the join command with the required token. &lt;br&gt;
&lt;em&gt;Copy that command and run it on both nodes (we noted these commands while installing Kubernetes on the master):&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;[root@worker-node1 ~]# kubeadm join --token a3bd48.1bc42347c3b35851 192.168.1.30:6443&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The output of the above command will look something like this:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K__jRLd5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/q47uuxt2fc67ig7i9oj1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K__jRLd5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/q47uuxt2fc67ig7i9oj1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@worker-node2 ~]# kubeadm join --token a3bd48.1bc42347c3b35851 192.168.1.30:6443&lt;/code&gt;&lt;br&gt;
&lt;em&gt;Output&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nsLwmhcj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zmiv25veqx3h9sxt43ca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nsLwmhcj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zmiv25veqx3h9sxt43ca.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
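If the token printed during kubeadm init has been lost or has expired (bootstrap tokens default to a 24-hour TTL), a fresh one can be listed or created on the master with `kubeadm token list` / `kubeadm token create`. A hedged sketch that picks the first token out of sample `kubeadm token list` output and rebuilds the join command (the sample output, token, and endpoint are illustrative; on the master, use the real command output instead):

```shell
# Sample 'kubeadm token list' output; on the master use the real output:
#   sample=$(kubeadm token list)
sample='TOKEN                     TTL   EXPIRES
a3bd48.1bc42347c3b35851   23h   2020-04-22T14:09:19Z'

# Grab the first token (skipping the header row) and build the join command.
token=$(printf '%s\n' "$sample" | awk 'NR > 1 { print $1; exit }')
echo "kubeadm join --token $token 192.168.1.30:6443"
```

With the sample above this prints `kubeadm join --token a3bd48.1bc42347c3b35851 192.168.1.30:6443`, which is exactly the command run on the worker nodes.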

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Now verify status of Nodes from &lt;em&gt;Master node&lt;/em&gt; using kubectl command&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;[root@master ~]# kubectl get nodes&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME           STATUS    AGE       VERSION
master         Ready     2h        v1.7.5
worker-node1   Ready     20m       v1.7.5
worker-node2   Ready     18m       v1.7.5
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
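Once the cluster grows beyond a couple of machines, checking readiness by eye gets tedious. The check above can be scripted; a minimal sketch over `kubectl get nodes` output (sample output is inlined so the snippet is self-contained; on the master, pipe in the real command instead):

```shell
# Sample 'kubectl get nodes' output; on the master use:
#   nodes=$(kubectl get nodes)
nodes='NAME           STATUS     AGE       VERSION
master         Ready      2h        v1.7.5
worker-node1   Ready      20m       v1.7.5
worker-node2   NotReady   18m       v1.7.5'

# Print any node whose STATUS column is not Ready.
printf '%s\n' "$nodes" | awk 'NR > 1 && $2 != "Ready" { print $1 " is " $2 }'
```

With the sample above this prints `worker-node2 is NotReady`; no output means every node is Ready.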



&lt;p&gt;As we can see, the master and worker nodes are all in Ready status. This confirms that Kubernetes has been installed successfully and that both worker nodes have joined the cluster. Our Kubernetes cluster is ready! Now we can create pods and services.&lt;/p&gt;
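As a quick smoke test of the new cluster, a minimal pod manifest can be applied from the master with `kubectl apply -f nginx-pod.yml` and checked with `kubectl get pods`. The file name, pod name, and image are illustrative:

```yaml
# nginx-pod.yml -- minimal illustrative pod for a cluster smoke test
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```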

&lt;p&gt;&lt;em&gt;Do comment and let me know if I've missed any step or if any modifications are needed. Pointers are always welcome!&lt;/em&gt; ;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Thank you!
&lt;/h2&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>linux</category>
    </item>
    <item>
      <title>Monitoring single-node Ceph cluster using Prometheus &amp; Grafana on AWS</title>
      <dc:creator>Jobin Keecheril</dc:creator>
      <pubDate>Mon, 02 Mar 2020 17:39:44 +0000</pubDate>
      <link>https://forem.com/keecheriljobin/monitoring-single-node-ceph-cluster-using-prometheus-grafana-on-aws-3i77</link>
      <guid>https://forem.com/keecheriljobin/monitoring-single-node-ceph-cluster-using-prometheus-grafana-on-aws-3i77</guid>
      <description>&lt;h3&gt;
  
  
  Basic Information:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Ceph-&lt;/strong&gt;&lt;br&gt;
     Ceph is a free-software storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block- and file-level storage. Ceph aims primarily for completely distributed operation without a single point of failure, while remaining freely available. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prometheus-&lt;/strong&gt;&lt;br&gt;
     Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012, many companies and organizations have adopted Prometheus, and the project has a very active developer and user community.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grafana-&lt;/strong&gt;&lt;br&gt;
     Grafana allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. It lets you create, explore, and share dashboards with your team and fosters a data-driven culture.&lt;/p&gt;


&lt;h3&gt;
  
  
  Integration of Ceph with Prometheus and Grafana on AWS:
&lt;/h3&gt;

&lt;p&gt;Let's divide this into &lt;strong&gt;3 tasks&lt;/strong&gt; to build our monitoring setup for Ceph:&lt;br&gt;
&lt;strong&gt;1. Installation &amp;amp; configuration of Ceph&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;2. Installing &amp;amp; configuring Prometheus&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;3. Installing &amp;amp; configuring Grafana&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; I'm assuming y'all know the basics of AWS. You need to launch an instance with a few prerequisites:&lt;br&gt;
&lt;em&gt;1. Add 3 volumes for the OSDs.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;2. Security groups. (Allow All_TCP, HTTP and HTTPS from anywhere)&lt;/em&gt;&lt;/p&gt;


&lt;h4&gt;
  
  
  1. Installation &amp;amp; configuration of Ceph:
&lt;/h4&gt;
&lt;h4&gt;
  
  
  ssh-keygen:
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# vim /etc/ssh/sshd_config&lt;/code&gt;&lt;br&gt;
Make these changes in the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PermitRootLogin yes
PasswordAuthentication yes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# systemctl restart sshd&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# ssh-keygen&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# ssh-copy-id root@ip-10-0-0-100&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Configure Ceph repo:
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# vim /etc/yum.repos.d/ceph.repo&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;ceph&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="s"&gt;name=Ceph packages for $basearch&lt;/span&gt;
&lt;span class="s"&gt;baseurl=https://download.ceph.com/rpm-nautilus/el7/$basearch&lt;/span&gt;
&lt;span class="s"&gt;enabled=1&lt;/span&gt;
&lt;span class="s"&gt;priority=1&lt;/span&gt;
&lt;span class="s"&gt;gpgcheck=1&lt;/span&gt;
&lt;span class="s"&gt;gpgkey=https://download.ceph.com/keys/release.asc&lt;/span&gt;

&lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;ceph-noarch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="s"&gt;name=Ceph noarch packages&lt;/span&gt;
&lt;span class="s"&gt;baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch&lt;/span&gt;
&lt;span class="s"&gt;enabled=1&lt;/span&gt;
&lt;span class="s"&gt;priority=1&lt;/span&gt;
&lt;span class="s"&gt;gpgcheck=1&lt;/span&gt;
&lt;span class="s"&gt;gpgkey=https://download.ceph.com/keys/release.asc&lt;/span&gt;

&lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;ceph-source&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="s"&gt;name=Ceph source packages&lt;/span&gt;
&lt;span class="s"&gt;baseurl=https://download.ceph.com/rpm-nautilus/el7/SRPMS&lt;/span&gt;
&lt;span class="s"&gt;enabled=0&lt;/span&gt;
&lt;span class="s"&gt;priority=1&lt;/span&gt;
&lt;span class="s"&gt;gpgcheck=1&lt;/span&gt;
&lt;span class="s"&gt;gpgkey=https://download.ceph.com/keys/release.asc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  Install ceph-deploy
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# yum install ceph-deploy -y&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# mkdir ceph-deploy&lt;/code&gt;&lt;br&gt;
&lt;code&gt;[root@ip-10-0-0-100 ~]# cd ceph-deploy&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Create the cluster:
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ceph-deploy]# ceph-deploy new {initial-monitor-node(s)}&lt;/code&gt;&lt;br&gt;
&lt;em&gt;example: [root@ip-10-0-0-100 ceph-deploy]# ceph-deploy new ip-10-0-0-100&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Install Ceph packages:
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ceph-deploy]# ceph-deploy install {node(s)}&lt;/code&gt;&lt;br&gt;
&lt;em&gt;example: [root@ip-10-0-0-100 ceph-deploy]# ceph-deploy install ip-10-0-0-100&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Deploy the initial monitor(s) and gather the keys
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ceph-deploy]# ceph-deploy gatherkeys node&lt;/code&gt;&lt;br&gt;
&lt;em&gt;example: [root@ip-10-0-0-100 ceph-deploy]# ceph-deploy gatherkeys ip-10-0-0-100&lt;/em&gt; &lt;br&gt;
&lt;code&gt;[root@ip-10-0-0-100 ceph-deploy]# ceph-deploy mon create-initial&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Create admin
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ceph-deploy]# ceph-deploy admin ip-10-0-0-100&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Create Mgr
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ceph-deploy]# ceph-deploy mgr create ip-10-0-0-100&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Create Mds
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ceph-deploy]# ceph-deploy mds create ip-10-0-0-100&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Create Osds
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ceph-deploy]# ceph-deploy osd create --data /dev/xvdb ip-10-0-0-100&lt;/code&gt;&lt;br&gt;
&lt;code&gt;[root@ip-10-0-0-100 ceph-deploy]# ceph-deploy osd create --data /dev/xvdc ip-10-0-0-100&lt;/code&gt;&lt;br&gt;
&lt;code&gt;[root@ip-10-0-0-100 ceph-deploy]# ceph-deploy osd create --data /dev/xvdd ip-10-0-0-100&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Check your cluster's status
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ceph-deploy]# ceph -s&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt; cluster:
    id:     5e4cfca2-43b5-13fd2-aee2b4808d95
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ip-10-0-0-100 (age 3m)
    mgr: ip-10-0-0-100(active, since 3m)
    osd: 3 osds: 3 up (since 3m), 3 in (since 7h)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 18 GiB / 21 GiB avail
    pgs:     

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Installation &amp;amp; configuration of Prometheus:
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Update System
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# yum update -y&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Download Prometheus package
&lt;/h4&gt;

&lt;p&gt;Go to the official Prometheus &lt;a href="https://prometheus.io/download/"&gt;downloads&lt;/a&gt; page, and copy the URL of the Linux “tar” file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# wget https://github.com/prometheus/prometheus/releases/download/v2.16.0/prometheus-2.16.0.linux-amd64.tar.gz&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Configure Prometheus
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Add a Prometheus user.
&lt;/h5&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# useradd --no-create-home --shell /bin/false prometheus&lt;/code&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Create needed directories.
&lt;/h5&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# mkdir /etc/prometheus&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# mkdir /var/lib/prometheus&lt;/code&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Change the owner of the above directories.
&lt;/h5&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# chown prometheus:prometheus /etc/prometheus&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# chown prometheus:prometheus /var/lib/prometheus&lt;/code&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Now go to Prometheus downloaded location and extract it.
&lt;/h5&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# tar -xvzf prometheus-2.16.0.linux-amd64.tar.gz&lt;/code&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Rename it as per your preference.
&lt;/h5&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# mv prometheus-2.16.0.linux-amd64 prometheuspackage&lt;/code&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Copy “prometheus” and “promtool” binary from the “prometheuspackage” folder to “/usr/local/bin”.
&lt;/h5&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# cp prometheuspackage/prometheus /usr/local/bin/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# cp prometheuspackage/promtool /usr/local/bin/&lt;/code&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Change the ownership to Prometheus user.
&lt;/h5&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# chown prometheus:prometheus /usr/local/bin/prometheus&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# chown prometheus:prometheus /usr/local/bin/promtool&lt;/code&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Copy the “consoles” and “console_libraries” directories from “prometheuspackage” to the “/etc/prometheus” folder
&lt;/h5&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# cp -r prometheuspackage/consoles /etc/prometheus&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# cp -r prometheuspackage/console_libraries /etc/prometheus&lt;/code&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Change the ownership to Prometheus user
&lt;/h5&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# chown -R prometheus:prometheus /etc/prometheus/consoles&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# chown -R prometheus:prometheus /etc/prometheus/console_libraries&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Add and modify Prometheus configuration file.
&lt;/h4&gt;

&lt;p&gt;Configuration goes in “/etc/prometheus/prometheus.yml”.&lt;br&gt;
Now we will create the prometheus.yml file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# vim /etc/prometheus/prometheus.yml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;global&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;scrape_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;

&lt;span class="na"&gt;scrape_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;prometheus_master'&lt;/span&gt;
    &lt;span class="na"&gt;scrape_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5s&lt;/span&gt;
    &lt;span class="na"&gt;static_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;localhost:9090'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Save and exit the file.&lt;/p&gt;

&lt;h5&gt;
  
  
  Change the ownership of the file.
&lt;/h5&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# chown prometheus:prometheus /etc/prometheus/prometheus.yml&lt;/code&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Configure the Prometheus Service File.
&lt;/h5&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# vim /etc/systemd/system/prometheus.service&lt;/code&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Copy the following content to the file.
&lt;/h5&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
--config.file /etc/prometheus/prometheus.yml \
--storage.tsdb.path /var/lib/prometheus/ \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Save and exit the file.&lt;/p&gt;

&lt;h5&gt;
  
  
  Reload the systemd service.
&lt;/h5&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# systemctl daemon-reload&lt;/code&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Start the Prometheus service.
&lt;/h5&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# systemctl start prometheus&lt;/code&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Check service status.
&lt;/h5&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# systemctl status prometheus&lt;/code&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Access Prometheus Web Interface
&lt;/h5&gt;

&lt;p&gt;Use the following URL to access the UI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://Server-IP:9090/graph
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then you can see the following interface.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eKx97YF9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wvty1206id8s4cav5ok9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eKx97YF9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wvty1206id8s4cav5ok9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  3. Installation &amp;amp; configuration of Grafana:
&lt;/h4&gt;
&lt;h4&gt;
  
  
  Installing Grafana via YUM Repository
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# vim /etc/yum.repos.d/grafana.repo&lt;/code&gt;&lt;br&gt;
Add these lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  Install Grafana
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# yum install grafana -y&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Enable Grafana Service
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# systemctl start grafana-server&lt;/code&gt;&lt;br&gt;
&lt;code&gt;[root@ip-10-0-0-100 ~]# systemctl enable grafana-server&lt;/code&gt;&lt;br&gt;
&lt;code&gt;[root@ip-10-0-0-100 ~]# systemctl status grafana-server&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Browse Grafana
&lt;/h4&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://Your Server IP or Host Name:3000/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3xezJBa2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/krre0q8c03zcgtu1e11c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3xezJBa2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/krre0q8c03zcgtu1e11c.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Enable the Mgr Dashboard Module
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# ceph mgr module enable dashboard --force&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Enable the Mgr Prometheus Module
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# ceph mgr module enable prometheus --force&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Edit the ceph.conf
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# vim ceph-deploy/ceph.conf&lt;/code&gt;&lt;br&gt;
Add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[mon]
        mgr initial modules = dashboard

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  Check Mgr Services
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# ceph mgr services&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "prometheus": "http://ip-10-0-0-100.ec2.internal:9283/"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note this port number, i.e. 9283.&lt;/strong&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Edit the prometheus.yml
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# vim /etc/prometheus/prometheus.yml&lt;/code&gt;&lt;br&gt;
Modify the scrape target to use port 9283, taken from the output above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- targets: ['localhost:9283']
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
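&lt;p&gt;For context, the relevant section of &lt;code&gt;/etc/prometheus/prometheus.yml&lt;/code&gt; should end up looking roughly like this (the job name here is illustrative; only the target port matters):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scrape_configs:
  - job_name: 'ceph'
    static_configs:
      - targets: ['localhost:9283']
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;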



&lt;h4&gt;
  
  
  Restart prometheus Service
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;[root@ip-10-0-0-100 ~]# systemctl restart prometheus&lt;/code&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  You have now successfully installed Prometheus and Grafana for Ceph monitoring
&lt;/h3&gt;

&lt;p&gt;Let us begin the Grafana setup now:&lt;br&gt;
a. Log in to Grafana with the default username and password, both &lt;code&gt;admin&lt;/code&gt;&lt;br&gt;
b. Create a data source&lt;/p&gt;

&lt;p&gt;---&amp;gt;Select Prometheus&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E32iQQqg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dqmn1fwy09y0p6k5dupw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E32iQQqg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/dqmn1fwy09y0p6k5dupw.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Click on Save and Test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;c. Click on Create---&amp;gt; Import&lt;br&gt;
 &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YII34Ujo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jtrnj3h4nxekcugme89l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YII34Ujo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jtrnj3h4nxekcugme89l.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Enter 7056 (the dashboard ID) in the Grafana.com Dashboard field&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;d. Choose a name and select the Prometheus data source&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RUT6PLVh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3s8abu7tojghovkxdhss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RUT6PLVh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3s8abu7tojghovkxdhss.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  We have now successfully integrated our Ceph cluster with Prometheus and Grafana!
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TltL69SZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o4qw6ppk5raeb0ej4tjr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TltL69SZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o4qw6ppk5raeb0ej4tjr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this setup we can monitor our Ceph storage graphically, which gives us much richer insights!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>ceph</category>
      <category>grafana</category>
      <category>prometheus</category>
    </item>
    <item>
      <title>ANSIBLE ARCHITECTURE &amp; WORKING</title>
      <dc:creator>Jobin Keecheril</dc:creator>
      <pubDate>Fri, 03 Jan 2020 12:44:49 +0000</pubDate>
      <link>https://forem.com/keecheriljobin/ansible-architecture-working-co9</link>
      <guid>https://forem.com/keecheriljobin/ansible-architecture-working-co9</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IdLVmgo1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/skvvt051gys64k62ez0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IdLVmgo1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/skvvt051gys64k62ez0h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hey guys! Let's look at Ansible's architecture and how it works.&lt;/p&gt;

&lt;p&gt;Terminologies in Ansible:&lt;/p&gt;

&lt;p&gt;Control node&lt;/p&gt;

&lt;p&gt;Any machine with Ansible installed. You can run commands and playbooks, invoking /usr/bin/ansible or /usr/bin/ansible-playbook, from any control node. You can use any computer that has Python installed on it as a control node - laptops, shared desktops, and servers can all run Ansible. However, you cannot use a Windows machine as a control node. You can have multiple control nodes.&lt;/p&gt;

&lt;p&gt;Managed nodes&lt;/p&gt;

&lt;p&gt;The network devices (and/or servers) you manage with Ansible. Managed nodes are also sometimes called “hosts”. Ansible is not installed on managed nodes.&lt;/p&gt;

&lt;p&gt;Inventory&lt;/p&gt;

&lt;p&gt;A list of managed nodes. An inventory file is also sometimes called a “hostfile”. Your inventory can specify information such as the IP address of each managed node. An inventory can also organize managed nodes, creating and nesting groups for easier scaling. Inventories come in two types, static and dynamic; dynamic inventories are worth exploring once you have gone through Ansible thoroughly.&lt;/p&gt;

&lt;p&gt;Modules&lt;/p&gt;

&lt;p&gt;The units of code Ansible executes. Each module has a particular use, from administering users on a specific type of database to managing VLAN interfaces on a specific type of network device. You can invoke a single module with a task, or invoke several different modules in a playbook. &lt;/p&gt;

&lt;p&gt;Tasks&lt;/p&gt;

&lt;p&gt;The units of action in Ansible. You can execute a single task once with an ad-hoc command.&lt;/p&gt;

&lt;p&gt;Playbooks&lt;/p&gt;

&lt;p&gt;Ordered lists of tasks, saved so you can run those tasks in that order repeatedly. Playbooks can include variables as well as tasks. Playbooks are written in YAML and are easy to read, write, share and understand. &lt;/p&gt;

&lt;p&gt;WORKING:&lt;/p&gt;

&lt;p&gt;We can classify the Ansible architecture into 3 sections&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ansible Users and Playbooks&lt;/li&gt;
&lt;li&gt;Ansible Engine&lt;/li&gt;
&lt;li&gt;Hosts and Networking&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Ansible Engine consists of the Inventory, API, Modules and Plugins. A user writes playbooks, i.e. sets of tasks; Ansible then scans the inventory and matches the listed hosts or IP addresses where the tasks must be executed. Ansible copies the required modules to the managed nodes and, using Python API calls and plugins, completes the given tasks. Once the tasks are executed, all the modules are removed from the managed nodes. On Linux, Ansible executes the modules on managed hosts over SSH.&lt;/p&gt;
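&lt;p&gt;As a rough sketch of how these pieces fit together, a minimal static inventory and playbook might look like this (the hostnames, group name and file names are purely illustrative):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# inventory — a static list of managed nodes, grouped
[webservers]
10.0.0.101
10.0.0.102

# playbook.yml — an ordered list of tasks to run on those nodes
---
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure Apache is installed
      yum:
        name: httpd
        state: present
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Running &lt;code&gt;ansible-playbook -i inventory playbook.yml&lt;/code&gt; would then connect to each host over SSH, copy the required modules across, execute them, and clean up afterwards, as described above.&lt;/p&gt;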

&lt;p&gt;Please feel free to correct ;)&lt;/p&gt;

&lt;p&gt;#automation #devops #ansible #discuss #linux&lt;/p&gt;

</description>
      <category>devops</category>
      <category>python</category>
      <category>linux</category>
    </item>
    <item>
      <title>End of an era?!</title>
      <dc:creator>Jobin Keecheril</dc:creator>
      <pubDate>Tue, 19 Nov 2019 12:35:59 +0000</pubDate>
      <link>https://forem.com/keecheriljobin/end-of-an-era-3mj8</link>
      <guid>https://forem.com/keecheriljobin/end-of-an-era-3mj8</guid>
      <description>&lt;p&gt;Mirantis acquires Docker!&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4DqAuFQ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/nr25mincbjunp5lfe7vj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4DqAuFQ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/nr25mincbjunp5lfe7vj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The complete thread can be read here:&lt;br&gt;
&lt;a href="https://analyticsindiamag.com/mirantis-docker-acquisition-enterprise-kubernetes-containers/"&gt;https://analyticsindiamag.com/mirantis-docker-acquisition-enterprise-kubernetes-containers/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;#docker #mirantis #discuss&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Top Social Media platforms</title>
      <dc:creator>Jobin Keecheril</dc:creator>
      <pubDate>Mon, 18 Nov 2019 02:25:27 +0000</pubDate>
      <link>https://forem.com/keecheriljobin/top-social-media-platforms-3hfj</link>
      <guid>https://forem.com/keecheriljobin/top-social-media-platforms-3hfj</guid>
      <description>&lt;p&gt;October’s top #SocialMedia #apps by number of downloads 📲&lt;br&gt;
1️⃣ @WhatsApp&lt;br&gt;
2️⃣ @tiktok_us &lt;br&gt;
3️⃣ @facebook &lt;br&gt;
4️⃣ @messenger &lt;br&gt;
5️⃣ @instagram &lt;br&gt;
6️⃣ @bestSHAREit &lt;br&gt;
7️⃣ @clubfactoryapp &lt;br&gt;
8️⃣ @YouTube&lt;br&gt;
9️⃣ @likee_official&lt;br&gt;
🔟 @Snapchat&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nhh0kR6Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/7vc6hs0rj1q5ek38pudy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nhh0kR6Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/7vc6hs0rj1q5ek38pudy.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Check out: @SensorTower and&lt;br&gt;
&lt;a href="https://twitter.com/BrianHHough/status/1196162004026875912?s=19"&gt;https://twitter.com/BrianHHough/status/1196162004026875912?s=19&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Which is your fave? &lt;/p&gt;

&lt;p&gt;#EmergingTech #discuss&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Towards Automation</title>
      <dc:creator>Jobin Keecheril</dc:creator>
      <pubDate>Wed, 13 Nov 2019 09:06:47 +0000</pubDate>
      <link>https://forem.com/keecheriljobin/towards-automation-ecn</link>
      <guid>https://forem.com/keecheriljobin/towards-automation-ecn</guid>
<description>&lt;p&gt;Ansible is an open source automation platform. It is very simple to set up, yet powerful. Ansible can help you with configuration management, application deployment and task automation. It can also do IT orchestration, where you have to run tasks in sequence and create a chain of events which must happen on several different servers or devices. Unlike Puppet or Chef, it doesn’t use an agent on the remote host; instead, Ansible uses SSH, which is assumed to be installed on all the systems you want to manage. It is written in Python, which needs to be installed on the remote host. This means that you don’t have to set up a client-server environment before using Ansible: you can just run it from any of your machines, and from the client’s point of view there is no knowledge of any Ansible server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f3Z1RsR7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/1lcg568f2yskoud562pf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f3Z1RsR7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/1lcg568f2yskoud562pf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
IMPORTANCE OF ANSIBLE&lt;/p&gt;

&lt;p&gt;Well, before I tell you what Ansible is, it is of utmost importance to understand the problems that were faced before Ansible. Let us take a little flashback to the beginning of networked computing, when deploying and managing servers reliably and efficiently was a challenge. Previously, system administrators managed servers by hand: installing software, changing configurations, and administering services on individual servers. As data centres grew and hosted applications became more complex, administrators realized they couldn’t scale their manual systems management as fast as the applications they were enabling. This also hampered the velocity of the developers’ work: the development team was agile and releasing software frequently, but IT operations were spending more and more time configuring the systems. That’s why server provisioning and configuration management tools came to flourish.&lt;/p&gt;

&lt;p&gt;Consider the tedious routine of administering a server fleet: we always need to keep it updated, push changes, copy files onto it, and so on. These tasks make things very complicated and time-consuming.&lt;/p&gt;

&lt;p&gt;But let me tell you that there is a solution to the above-stated problem: Ansible. Before I go ahead and explain all about Ansible, let me get you familiarized with a few Ansible terminologies:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Controller Machine: The machine where Ansible is installed, responsible for running the provisioning on the servers you are managing.
2. Inventory: An initialization file that contains information about the servers you are managing.
3. Playbook: The real strength of Ansible lies in its Playbooks. A playbook is like a recipe or an instructions manual which tells Ansible what to do when it connects to each machine. Playbooks are written in YAML, which simplistically could be viewed as XML but human readable.
4. Task: A block that defines a single procedure to be executed, e.g. Install a package.
5. Module: A module typically abstracts a system task, like dealing with packages or creating and changing files. Ansible has a multitude of built-in modules, but you can also create custom ones.
6. Role: A pre-defined way for organizing playbooks and other files in order to facilitate sharing and reusing portions of a provisioning.
7. Play: A provisioning executed from start to finish is called a play. In simple words, execution of a playbook is called a play.
8. Facts: Global variables containing information about the system, like network interfaces or operating system.
9. Handlers: Used to trigger service status changes, like restarting or stopping a service.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
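&lt;p&gt;To make a few of these terms concrete, a single task can also be run as an ad-hoc command, without writing a playbook at all (the inventory file and group name here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run the ping module as a one-off task against the webservers group
ansible webservers -i inventory -m ping

# Run a single task with the yum module, escalating privileges
ansible webservers -i inventory -m yum -a "name=httpd state=present" --become
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;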

&lt;p&gt;ORGANIZATIONAL BENEFITS&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D2HRSRbr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/gwdf1pe4tjneq1vcffta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D2HRSRbr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/gwdf1pe4tjneq1vcffta.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;Ansible reduces the need for dedicated system administrators.&lt;/li&gt;
&lt;li&gt;It is an agentless delivery system that makes it a lot easier to manage switches and storage arrays.&lt;/li&gt;
&lt;li&gt;Easy to set up results in higher efficiency and reduced costs.&lt;/li&gt;
&lt;li&gt;Ansible lets the organization overcome all the project complexities to increase productivity.&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;USE OF ANSIBLE&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dfAKkm-f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/l8n80pa3b613pwm4bbwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dfAKkm-f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/l8n80pa3b613pwm4bbwq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XcoMpnf1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/sbad1iea0cjefjm27s6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XcoMpnf1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/sbad1iea0cjefjm27s6j.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
JOB OPPORTUNITIES&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;According to ZipRecruiter, Ansible developers are earning $121,045 annually.&lt;/li&gt;
&lt;li&gt;Top companies like Oracle, BOEING, IBM, Capgemini, Target, and so on are hiring Ansible developers.&lt;/li&gt;
&lt;li&gt;Over 27K jobs are available in India for Ansible developers.&lt;/li&gt;
&lt;li&gt;An individual without a graduate level computer science degree can also land a high-paying job as an Ansible expert.&lt;/li&gt;
&lt;li&gt;Roles such as Ansible Automation Engineer are in high demand.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Keep Automating!&lt;/p&gt;

&lt;p&gt;Written by Jobin Keecheril &amp;amp; Aditi Munde &lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
