<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Sami Alhaddad</title>
    <description>The latest articles on Forem by Sami Alhaddad (@rootsami).</description>
    <link>https://forem.com/rootsami</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F11767%2Fd05f77bc-b028-4943-8d8f-419f80e4c638.png</url>
      <title>Forem: Sami Alhaddad</title>
      <link>https://forem.com/rootsami</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rootsami"/>
    <language>en</language>
    <item>
      <title>Rancher Kubernetes on Openstack using Terraform</title>
      <dc:creator>Sami Alhaddad</dc:creator>
      <pubDate>Wed, 27 May 2020 03:33:00 +0000</pubDate>
      <link>https://forem.com/rootsami/rancher-kubernetes-on-openstack-using-terraform-1ild</link>
      <guid>https://forem.com/rootsami/rancher-kubernetes-on-openstack-using-terraform-1ild</guid>
      <description>&lt;p&gt;In this article we will walk through creating complete infrastructure pieces on OpenStack that are needed to have a fully provisioned Kubernetes cluster using Terraform and Rancher2. In addition to integration with &lt;a href="https://github.com/kubernetes/cloud-provider-openstack"&gt;cloud-provider-openstack&lt;/a&gt; and &lt;a href="https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md"&gt;cinder-csi-plugin&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started with Infrastructure
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Clone the repository &lt;a href="https://github.com/rootsami/terraform-rancher2"&gt;terraform-rancher2&lt;/a&gt; into a folder.&lt;/li&gt;
&lt;li&gt;Go into the openstack folder using &lt;code&gt;cd openstack/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Modify the variables in &lt;code&gt;terraform.tfvars&lt;/code&gt; to match your cloud environment. It is important to uncomment the variables &lt;code&gt;openstack_project&lt;/code&gt;, &lt;code&gt;openstack_username&lt;/code&gt; and &lt;code&gt;openstack_password&lt;/code&gt;, or export them as environment variables with the prefix TF_VAR_*, for example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TF_VAR_openstack_username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;myusername
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TF_VAR_openstack_password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mypassword
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TF_VAR_openstack_project&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;myproject
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Other variables, such as &lt;code&gt;rancher_node_image_id&lt;/code&gt;, &lt;code&gt;external_network&lt;/code&gt; and the flavors, can be obtained from the OpenStack CLI by running:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## image list .. pick an ubuntu image
openstack image list
## network name
openstack network list --external
## flavors
openstack flavor list
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The RKE configuration can be adjusted and customized in &lt;a href="https://github.com/rootsami/terraform-rancher2/blob/master/openstack/rancher2.tf"&gt;rancher2.tf&lt;/a&gt;; see the provider documentation at &lt;a href="https://www.terraform.io/docs/providers/rancher2/r/cluster.html"&gt;rancher_cluster&lt;/a&gt;.&lt;br&gt;
&lt;strong&gt;NOTE: It is really important to keep the kubelet extra_args for the external cloud provider in order to integrate with cloud-provider-openstack.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;terraform init&lt;/code&gt; to initialize a working directory containing Terraform configuration files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To create the environment, run &lt;code&gt;terraform apply --auto-approve&lt;/code&gt; and wait for the output once all resources have been created:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;Apply &lt;span class="nb"&gt;complete&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; Resources: 25 added, 0 changed, 0 destroyed.

Outputs:

rancher_url &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"https://xx.xx.xx.xx/"&lt;/span&gt;,
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
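&lt;p&gt;The kubelet &lt;code&gt;extra_args&lt;/code&gt; mentioned in the note above might look roughly like this inside &lt;code&gt;rancher2.tf&lt;/code&gt; (a sketch only, not the repo's exact code; check the rancher2 provider documentation for the precise schema of your provider version):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## rancher2.tf (sketch; resource name is illustrative)
resource "rancher2_cluster" "cluster" {
  name = "demo"
  rke_config {
    services {
      kubelet {
        extra_args = {
          ## tells the kubelet to wait for an external cloud controller
          "cloud-provider" = "external"
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;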


&lt;p&gt;At this point, use the &lt;code&gt;rancher_url&lt;/code&gt; from the output above and log in to the Rancher instance with the username &lt;code&gt;admin&lt;/code&gt; and the password defined in &lt;code&gt;rancher_admin_password&lt;/code&gt;. Wait for all Kubernetes nodes to be discovered, registered, and active.&lt;/p&gt;
&lt;h2&gt;
  
  
  Integration with &lt;a href="https://github.com/kubernetes/cloud-provider-openstack"&gt;cloud-provider-openstack&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;You may notice that all the nodes carry the taint &lt;code&gt;node.cloudprovider.kubernetes.io/uninitialized&lt;/code&gt;. Passing the &lt;code&gt;--cloud-provider=external&lt;/code&gt; flag to the kubelet makes it wait for the cloud provider to initialize the node; the taint marks the node as needing a second initialization from an external controller before work can be scheduled on it.&lt;/p&gt;
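&lt;p&gt;You can confirm the taints on the freshly registered nodes with a quick check (run against your own kubeconfig; output will vary):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;## expect node.cloudprovider.kubernetes.io/uninitialized on every node
kubectl describe nodes | grep Taints
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;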

&lt;ul&gt;
&lt;li&gt;Edit the file &lt;code&gt;manifests/cloud-config&lt;/code&gt; with the access information to your openstack environment.&lt;/li&gt;
&lt;li&gt;Create a secret containing the cloud configuration in the kube-system namespace:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create secret &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system generic cloud-config &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;manifests/cloud-config
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
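&lt;p&gt;The &lt;code&gt;manifests/cloud-config&lt;/code&gt; file follows the cloud-provider-openstack INI format; a minimal sketch with placeholder values might look like this (key names per the upstream docs, adjust for your cloud):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## manifests/cloud-config (placeholder values)
[Global]
auth-url=https://keystone.example.com:5000/v3
username=myusername
password=mypassword
tenant-name=myproject
domain-name=Default
region=RegionOne
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;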


&lt;ul&gt;
&lt;li&gt;Create the RBAC resources and the openstack-cloud-controller-manager daemonset, then wait until all the pods in the kube-system namespace are up and running.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/cloud-controller-manager-roles.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/cloud-controller-manager-role-bindings.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/openstack-cloud-controller-manager-ds.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Create the &lt;a href="https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md"&gt;cinder-csi-plugin&lt;/a&gt;, a set of cluster roles, cluster role bindings, statefulsets, and a storage class that communicate with OpenStack (Cinder).
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; manifests/cinder-csi-plugin.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
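&lt;p&gt;Once applied, a quick sanity check is to confirm the CSI pods came up and the storage class was registered (generic commands; names depend on the manifest):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;## CSI controller and node pods
kubectl get pods -n kube-system | grep csi
## the storage class created by the manifest
kubectl get storageclass
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;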


&lt;p&gt;At this point, openstack-cloud-controller-manager and cinder-csi-plugin have been deployed, and they're able to obtain valuable information such as external IP addresses and zone info.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide

NAME            STATUS   ROLES               AGE     VERSION   INTERNAL-IP     EXTERNAL-IP      OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
demo-master-1   Ready    controlplane,etcd   5h      v1.17.5   192.168.201.6   xx.xx.xx.xx      Ubuntu 18.04.2 LTS   4.15.0-45-generic   docker://19.3.9
demo-worker-1   Ready    worker              4h57m   v1.17.5   192.168.201.4   xx.xx.xx.xx      Ubuntu 18.04.2 LTS   4.15.0-45-generic   docker://19.3.9
demo-worker-2   Ready    worker              4h56m   v1.17.5   192.168.201.5   xx.xx.xx.xx      Ubuntu 18.04.2 LTS   4.15.0-45-generic   docker://19.3.9
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yMSPO8P5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/r6pkjtducqorho2ajrry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yMSPO8P5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/r6pkjtducqorho2ajrry.png" alt="cluster-overview"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, as shown in the nodes tab, all nodes are active and labeled by OpenStack zone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3zdenwGE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n60b0n664x68b44tcsz7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3zdenwGE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n60b0n664x68b44tcsz7.png" alt="node-details"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Scalability
&lt;/h2&gt;

&lt;p&gt;With infrastructure as code (IaC), reaching any desired state takes very little effort and time.&lt;br&gt;
All you have to do is change the number of nodes (&lt;code&gt;count_master&lt;/code&gt; or &lt;code&gt;count_worker_nodes&lt;/code&gt;) and run &lt;code&gt;terraform apply&lt;/code&gt; again.&lt;br&gt;
For example, let's increase &lt;code&gt;count_worker_nodes&lt;/code&gt; by 1.&lt;br&gt;
A few minutes later, after refreshing state and applying the update:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;
Apply &lt;span class="nb"&gt;complete&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

rancher_url &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"https://xx.xx.xx.xx"&lt;/span&gt;,
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;A couple of minutes later, the new node is registered:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME            STATUS   ROLES               AGE    VERSION   INTERNAL-IP     EXTERNAL-IP      OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
demo-master-1   Ready    controlplane,etcd   28h    v1.17.5   192.168.201.6   xx.xx.xx.xx      Ubuntu 18.04.2 LTS   4.15.0-45-generic   docker://19.3.9
demo-worker-1   Ready    worker              28h    v1.17.5   192.168.201.4   xx.xx.xx.xx      Ubuntu 18.04.2 LTS   4.15.0-45-generic   docker://19.3.9
demo-worker-2   Ready    worker              28h    v1.17.5   192.168.201.5   xx.xx.xx.xx      Ubuntu 18.04.2 LTS   4.15.0-45-generic   docker://19.3.9
demo-worker-3   Ready    worker              2m2s   v1.17.5   192.168.201.7   xx.xx.xx.xx      Ubuntu 18.04.2 LTS   4.15.0-45-generic   docker://19.3.9
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;NOTE: The cluster can be scaled down by decreasing the number of nodes in &lt;code&gt;terraform.tfvars&lt;/code&gt;. The node gets deleted, and &lt;code&gt;cloud-provider-openstack&lt;/code&gt; detects that and removes it from the cluster.&lt;/strong&gt; &lt;/p&gt;
&lt;h2&gt;
  
  
  Cleaning up
&lt;/h2&gt;

&lt;p&gt;To clean up all resources created by this Terraform configuration, just run &lt;code&gt;terraform destroy&lt;/code&gt;.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vWogaON8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-28d89282e0daa1e2496205e2f218a44c755b0dd6536bbadf5ed5a44a7ca54716.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/rootsami"&gt;
        rootsami
      &lt;/a&gt; / &lt;a href="https://github.com/rootsami/terraform-rancher2"&gt;
        terraform-rancher2
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Terraform manifests to create e2e production grade k8s cluster
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
terraform-rancher2&lt;/h1&gt;
&lt;p&gt;Terraform manifests to create e2e production-grade Kubernetes cluster on top of cloud providers&lt;/p&gt;
&lt;h2&gt;
Overview&lt;/h2&gt;
&lt;p&gt;This repo is intended to be for creating complete infrastructure pieces on OpenStack that are needed to have a fully provisioned Kubernetes cluster using Terraform and Rancher2. In addition to integration with &lt;a href="https://github.com/kubernetes/cloud-provider-openstack"&gt;cloud-provider-openstack&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
Getting started with Infrastructure&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Clone the repository &lt;a href="https://github.com/rootsami/terraform-rancher2"&gt;terraform-rancher2&lt;/a&gt; into a folder.&lt;/li&gt;
&lt;li&gt;Go into the openstack folder using &lt;code&gt;cd openstack/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Modify the variables in &lt;code&gt;terraform.tfvars&lt;/code&gt; to match your current cloud environment. it is important to uncomment the vars &lt;code&gt;openstack_project&lt;/code&gt; , &lt;code&gt;openstack_username&lt;/code&gt; and &lt;code&gt;openstack_password&lt;/code&gt; or export them as env variables with prefix TF_VAR_*  for example:&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;&lt;span class="pl-k"&gt;export&lt;/span&gt; TF_VAR_openstack_username=myusername
&lt;span class="pl-k"&gt;export&lt;/span&gt; TF_VAR_openstack_password=mypassword
&lt;span class="pl-k"&gt;export&lt;/span&gt; TF_VAR_openstack_project=myproject&lt;/pre&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Other variables can be obtained from openstack-cli such as &lt;code&gt;rancher_node_image_id&lt;/code&gt; , &lt;code&gt;external_network&lt;/code&gt; by invoking&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight highlight-source-shell"&gt;&lt;pre&gt;&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt;# image list&lt;/span&gt;
openstack image list
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt;# network name&lt;/span&gt;
openstack network list --external
&lt;span class="pl-c"&gt;&lt;span class="pl-c"&gt;#&lt;/span&gt;# flavors&lt;/span&gt;
openstack flavor list&lt;/pre&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;terraform init&lt;/code&gt; to initialize a working directory…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/rootsami/terraform-rancher2"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs/providers/rancher2/"&gt;Rancher2 Provider&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs/providers/openstack/"&gt;Openstack Provider&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes/cloud-provider-openstack"&gt;Cloud-provider-openstack&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/#running-cloud-controller-manager"&gt;Running-cloud-controller-manager&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>openstack</category>
      <category>rancher</category>
      <category>terraform</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Prometheus blackbox_exporter; Unconventional Way</title>
      <dc:creator>Sami Alhaddad</dc:creator>
      <pubDate>Wed, 13 May 2020 01:09:38 +0000</pubDate>
      <link>https://forem.com/rootsami/prometheus-blackboxexporter-unconventional-way-25g7</link>
      <guid>https://forem.com/rootsami/prometheus-blackboxexporter-unconventional-way-25g7</guid>
      <description>&lt;p&gt;Many of us have different requirements and different complicated setups.&lt;br&gt;
Prometheus has provided us the true power of monitoring and observability, Thus, I'm still learning and figuring things out every single day.&lt;/p&gt;
&lt;h3&gt;
  
  
  Case
&lt;/h3&gt;

&lt;p&gt;One of my most recent requirements was to monitor a certain accessibility route in order to keep applications performing correctly. With out-of-the-box tooling, the monitoring system is always the source of the probes; our case is different in that we need to make sure System-A is reachable from every network in the datacenter.&lt;/p&gt;

&lt;p&gt;To keep the example simple, let's have north-ntw, south-ntw and west-ntw reach the internet by probing &lt;a href="https://wikipedia.com" rel="noopener noreferrer"&gt;https://wikipedia.com&lt;/a&gt;.&lt;br&gt;
By this I mean we are monitoring egress traffic in each network, checking whether it can reach the internet through different routes with reasonable latency.&lt;/p&gt;
&lt;h3&gt;
  
  
  Strategy
&lt;/h3&gt;

&lt;p&gt;We're going to use &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; as the server and the &lt;a href="https://github.com/prometheus/blackbox_exporter" rel="noopener noreferrer"&gt;Blackbox Exporter&lt;/a&gt;.&lt;br&gt;
The blackbox exporter allows blackbox probing of endpoints over HTTP, HTTPS, DNS, TCP and ICMP. It's usually installed alongside the Prometheus server, which again makes the Prometheus server the source.&lt;br&gt;
Here is the trick: we're going to deploy a blackbox exporter on every monitored network and instruct Prometheus to scrape those exporters as probe sources, to assure that System-A (aka Wikipedia) is reachable from those networks.&lt;/p&gt;
&lt;h3&gt;
  
  
  Let's go
&lt;/h3&gt;

&lt;p&gt;You can follow any number of guides and tutorials on how to install Prometheus and its exporters; however, here's a quick win with &lt;a href="https://github.com/ansible/ansible" rel="noopener noreferrer"&gt;ansible&lt;/a&gt; playbooks using the roles from the great team at &lt;a href="https://github.com/cloudalchemy/" rel="noopener noreferrer"&gt;cloudalchemy&lt;/a&gt;: &lt;br&gt;
&lt;a href="https://github.com/cloudalchemy/ansible-prometheus" rel="noopener noreferrer"&gt;ansible-prometheus&lt;/a&gt; and &lt;a href="https://github.com/cloudalchemy/ansible-blackbox-exporter" rel="noopener noreferrer"&gt;ansible-blackbox-exporter&lt;/a&gt;. &lt;br&gt;
I'll leave the installation for you to do in your preferred way.&lt;/p&gt;
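&lt;p&gt;On each monitored network's exporter host, the &lt;code&gt;http_2xx&lt;/code&gt; module referenced in the scrape config is the blackbox exporter's stock HTTP prober; a minimal &lt;code&gt;blackbox.yml&lt;/code&gt; sketch would be:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## /etc/blackbox_exporter/blackbox.yml (sketch; path depends on your install)
modules:
  http_2xx:
    prober: http
    timeout: 5s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;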
&lt;h3&gt;
  
  
  Scraping configs
&lt;/h3&gt;

&lt;p&gt;This is the most important part of this journey, where we instruct Prometheus where to find its exporters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;## /etc/prometheus/prometheus.yml&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;blackbox_metadata&lt;/span&gt;
    &lt;span class="na"&gt;params&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;module&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;http_2xx&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://wikipedia.com&lt;/span&gt;
    &lt;span class="na"&gt;metrics_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/probe&lt;/span&gt;
    &lt;span class="na"&gt;scrape_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
    &lt;span class="na"&gt;scrape_timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
    &lt;span class="na"&gt;static_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;south.rootsami.dev:9115&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;north.rootsami.dev:9115&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;east.rootsami.dev:9115&lt;/span&gt;
    &lt;span class="na"&gt;relabel_configs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source_labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;__param_target&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;target_label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;target&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;source_labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;__address__&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;separator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;     &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;;'&lt;/span&gt;
        &lt;span class="na"&gt;regex&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;         &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;(.*):.*'&lt;/span&gt;
        &lt;span class="na"&gt;target_label&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;instance&lt;/span&gt;
        &lt;span class="na"&gt;replacement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;   &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;${1}:9115'&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;target&lt;/code&gt; under the params section defines the destination that you want to reach, which is &lt;a href="https://wikipedia.com" rel="noopener noreferrer"&gt;https://wikipedia.com&lt;/a&gt;, whereas the static_configs &lt;code&gt;targets&lt;/code&gt; are the little blackbox exporters that we are probing from.&lt;/p&gt;
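&lt;p&gt;You can exercise one of those exporters by hand to see exactly what Prometheus scrapes; the exporter's &lt;code&gt;/probe&lt;/code&gt; endpoint takes the same &lt;code&gt;module&lt;/code&gt; and &lt;code&gt;target&lt;/code&gt; parameters (hostname from the scrape config above):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;## ask the south exporter to probe wikipedia and return the metrics
curl -G 'http://south.rootsami.dev:9115/probe' \
    --data-urlencode 'module=http_2xx' \
    --data-urlencode 'target=https://wikipedia.com'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;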

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgurw52ho5xb7isok2d73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgurw52ho5xb7isok2d73.png" alt="running-targets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fd2he6va3ob0g169d9omk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fd2he6va3ob0g169d9omk.png" alt="scrap-latency"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown above, the target endpoints are the deployed exporters being scraped by the Prometheus server, showing that Wikipedia is reachable from each network; latency can also be measured from each source.&lt;/p&gt;
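&lt;p&gt;With the labels set up by the relabel_configs above, per-network reachability and latency can be queried directly in PromQL using the blackbox exporter's standard metrics:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## 1 if wikipedia was reachable from that network's exporter, 0 otherwise
probe_success{job="blackbox_metadata"}
## end-to-end probe latency, one series per source network
probe_duration_seconds{job="blackbox_metadata"}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;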

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Monitoring and observability have no limits, and the tools are out there! All you have to do is find the blind spots to keep systems up and running at all times.&lt;/p&gt;

</description>
      <category>prometheus</category>
      <category>monitoring</category>
      <category>observability</category>
      <category>sre</category>
    </item>
  </channel>
</rss>
