<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ladies In DevOps</title>
    <description>The latest articles on Forem by Ladies In DevOps (@ladiesindevops).</description>
    <link>https://forem.com/ladiesindevops</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F4096%2F1ad7317e-90fc-41ca-a35f-c302d1729373.jpeg</url>
      <title>Forem: Ladies In DevOps</title>
      <link>https://forem.com/ladiesindevops</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ladiesindevops"/>
    <language>en</language>
    <item>
      <title>Load Balancing</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Tue, 28 Sep 2021 05:58:15 +0000</pubDate>
      <link>https://forem.com/ladiesindevops/load-balancing-461l</link>
      <guid>https://forem.com/ladiesindevops/load-balancing-461l</guid>
      <description>&lt;p&gt;We encounter load balancers every day. Even when you are reading this article, your requests flow through multiple load balancers, before this content reaches your browser.&lt;/p&gt;

&lt;p&gt;Load balancing is one of the most important and basic concepts we encounter every single day. It is the process of distributing incoming requests across multiple servers/processes/machines at the backend.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do we need load balancing?
&lt;/h2&gt;

&lt;p&gt;Usually, when we build an application, clients route their requests to a single backend server, but as soon as traffic grows, that server reaches its limits. To overcome this, we can spin up another server to share the traffic. But how do we let the clients know to connect to the new machine?&lt;br&gt;
Load balancing is the technique used for discovery and decision-making in this routing. There are two ways of achieving this — server-side load balancing or client-side load balancing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ds-PEW5c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/4800/1%2AYxgXygvKUmCpYjKfXEeCzw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ds-PEW5c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/4800/1%2AYxgXygvKUmCpYjKfXEeCzw.png"&gt;&lt;/a&gt;&lt;br&gt;Single application server gets overloaded with request
  &lt;/p&gt;

&lt;h1&gt;
  
  
  Server-side Load Balancing:
&lt;/h1&gt;

&lt;p&gt;A load balancer sits as a middle layer in front of the backend, forwarding incoming requests to different servers and hiding that complexity from clients. All backend servers register with the load balancer, which then routes each request to one of the server instances using various algorithms. AWS ELB, Nginx, and Envoy are some examples of server-side load balancers.&lt;/p&gt;
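
&lt;p&gt;As a minimal sketch, server-side load balancing with Nginx boils down to an &lt;code&gt;upstream&lt;/code&gt; block listing the registered backends (the hostnames below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# nginx.conf (sketch) -- distribute requests across three app servers
upstream app_backend {
    least_conn;               # pick the server with the fewest active connections
    server app1.internal:8080;
    server app2.internal:8080;
    server app3.internal:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;   # Nginx chooses a backend per request
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Swapping &lt;code&gt;least_conn&lt;/code&gt; for the default round-robin, or for &lt;code&gt;ip_hash&lt;/code&gt;, changes the balancing algorithm without touching the clients.&lt;/p&gt;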

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rxAx41Pt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AM013EIjXPW81qIWWKbwU_w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rxAx41Pt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AM013EIjXPW81qIWWKbwU_w.png"&gt;&lt;/a&gt;&lt;br&gt;Server-side load balancing
  &lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;No need for client-side changes.&lt;/li&gt;
&lt;li&gt;Easy to make changes to load balancing algorithms and backend servers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Client-Side Load Balancing:
&lt;/h1&gt;

&lt;p&gt;In client-side load balancing, the client itself handles the load balancing. Let’s take an abstract look at how this can be achieved. To perform load balancing on the client side, we need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The client should be aware of all available web servers&lt;/li&gt;
&lt;li&gt;A library on the client-side to implement a load balancing algorithm&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The client routes the requests to one of the servers using client-side load balancing libraries like Ribbon. Client-side load balancing is also used for service discovery.&lt;/p&gt;

&lt;p&gt;Suppose Service A (the client side) wants to access Service B (the server side). Service B has three instances, all registered with the discovery server (X). Service A has the Ribbon client enabled, which performs the client-side load balancing: it fetches the available Service B instances from the discovery server, routes traffic from the client side, and constantly listens for any changes.&lt;/p&gt;
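
&lt;p&gt;As a sketch of the Ribbon side (Spring Cloud Netflix configuration properties; the service name and hosts below are placeholders, and with a discovery server such as Consul or Eureka the server list is fetched dynamically instead of being hard-coded):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# application.yml of Service A (the client)
service-b:
  ribbon:
    # static server list, for illustration only
    listOfServers: b1.internal:8080,b2.internal:8080,b3.internal:8080
    # round-robin is the default; other IRule implementations can be plugged in
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;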

&lt;p&gt;Here I have implemented client-side load balancing using consul service discovery: &lt;a href="https://github.com/Nancy-Chauhan/consul-service-discovery"&gt;https://github.com/Nancy-Chauhan/consul-service-discovery&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZQZpHQ6k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2ACtMtKBTIpfiKTNdRHD-ccQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZQZpHQ6k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2ACtMtKBTIpfiKTNdRHD-ccQ.png"&gt;&lt;/a&gt;&lt;br&gt;Server-side load balancing
  &lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;No need for additional infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Benefits of Load Balancing
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yBZvsiGh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AVtCPP8DOJX7XUhwr4Gp7jg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yBZvsiGh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AVtCPP8DOJX7XUhwr4Gp7jg.png"&gt;&lt;/a&gt;&lt;br&gt;Reference: &lt;a href="https://www.nginx.com/resources/glossary/load-balancing/"&gt;https://www.nginx.com/resources/glossary/load-balancing/&lt;/a&gt;
  &lt;/p&gt;

&lt;p&gt;Load balancers are the foundation of modern cloud-native applications. The concept of load balancing, and the ability to configure it dynamically, has enabled innovations such as the service mesh.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Branch Protection in GitHub</title>
      <dc:creator>Mbaoma</dc:creator>
      <pubDate>Sat, 12 Jun 2021 08:53:18 +0000</pubDate>
      <link>https://forem.com/ladiesindevops/branch-protection-in-github-5fn9</link>
      <guid>https://forem.com/ladiesindevops/branch-protection-in-github-5fn9</guid>
      <description>&lt;p&gt;Ever been in a position where you wish you could prevent your teammates from merging unapproved code from a development branch to the main branch?&lt;/p&gt;

&lt;p&gt;Do you want to prevent code whose build status you are unsure of from being merged into your main branch?&lt;/p&gt;

&lt;p&gt;Recently, I found myself in this situation and I plan to share a concept which helped me out - 'Branch Protection in GitHub'.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Branch Protection?
&lt;/h3&gt;

&lt;p&gt;Branch protection is the act of setting rules to prevent certain actions from occurring on your branch(es) without your approval.&lt;/p&gt;

&lt;p&gt;This article focuses on preventing branches (development, etc.) from being merged into the main branch, such that before any merge can occur, a pull request requires a selected reviewer to review the changes and then merge the commit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;It is expected that you have prior knowledge of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  GitHub&lt;/li&gt;
&lt;li&gt;  CI/CD tools (in this article, Travis CI)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check out these guides for &lt;a href="https://youtu.be/8JJ101D3knE" rel="noopener noreferrer"&gt;an introduction to GitHub&lt;/a&gt; and &lt;a href="https://docs.travis-ci.com/user/tutorial" rel="noopener noreferrer"&gt;creating a simple .travis.yml file&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up branch protection rules
&lt;/h3&gt;

&lt;p&gt;We take the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on the &lt;code&gt;Settings&lt;/code&gt; option in your repository and then &lt;code&gt;Branches&lt;/code&gt; (located on the left-hand side of the page)&lt;/li&gt;
&lt;li&gt;Click on &lt;code&gt;Add Rule&lt;/code&gt; to create the rule(s) for your branch of choice&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F49791498%2F121063732-9ad07b00-c7be-11eb-8832-609d2836485e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F49791498%2F121063732-9ad07b00-c7be-11eb-8832-609d2836485e.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next, under &lt;code&gt;Branch name pattern&lt;/code&gt;, type in the name of the branch you want to protect&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For this article, we choose the following rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;'Require pull request reviews before merging': we limit the number of required reviews to 1 (you can choose to increase the required reviews).&lt;/li&gt;
&lt;li&gt;Then, we select &lt;code&gt;Include administrators&lt;/code&gt;, to ensure that as owners of the branch, our pull requests will have to be reviewed before a merge can occur (I mean, nobody is above mistakes 🥴)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Finally, we click on the 'Save changes' button to save our settings.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F49791498%2F121062137-a3c04d00-c7bc-11eb-8da9-84605d19e07b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F49791498%2F121062137-a3c04d00-c7bc-11eb-8da9-84605d19e07b.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F49791498%2F121062224-baff3a80-c7bc-11eb-9d9a-3090708eac69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F49791498%2F121062224-baff3a80-c7bc-11eb-9d9a-3090708eac69.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up our Travis CI script
&lt;/h3&gt;

&lt;p&gt;According to the Travis CI documentation, 'Travis CI supports your development process by automatically building and testing code changes, providing immediate feedback on the success of the change. Travis CI can also automate other parts of your development process by managing deployments and notifications.'&lt;/p&gt;

&lt;p&gt;It is a Continuous Integration/Continuous Deployment tool that automatically runs the test(s) you specify in a .travis.yml file and sends you a report stating the build status of your commit. In this way, broken code is prevented from being pushed to production.&lt;/p&gt;

&lt;p&gt;A simple Travis script can be written as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;language: python
python:
  - "3.6"      # current default Python on Travis CI

# command to install dependencies
install:
  - pip install -r requirements.txt

# command to run tests
script:
  - python -m unittest test

# safelist
branches:
  only:
  - main
  - dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above script, as in other Travis scripts, top-level keys are used to configure different parts of the build. The ones used here are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;language: specifies the programming language in which our code is written (in this case Python).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;python: specifies the language version(s) to run our tests against.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;install: specifies the language-specific command to install the dependencies our code relies on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;script: specifies the language-specific command to run our pre-defined tests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;branches: the 'only' option lists the branches we want to build using a safelist (in this case 'main' and 'dev').&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Demo Time
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Now, to check that all our branch protection and CI/CD rules work, we push some code to our secondary branch and open up a pull request.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F49791498%2F121061520-d9186b00-c7bb-11eb-9d8b-33dd0dd7a9ac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F49791498%2F121061520-d9186b00-c7bb-11eb-9d8b-33dd0dd7a9ac.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
The pull request cannot be merged yet.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F49791498%2F121061790-34e2f400-c7bc-11eb-81a2-8efa6c84e8bd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F49791498%2F121061790-34e2f400-c7bc-11eb-81a2-8efa6c84e8bd.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;voila, we are unable to merge our pull request to the main branch (it's the audacity for me😁).&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We are told that our pull request needs to be reviewed, so we add a reviewer by clicking on the icon next to 'Reviewers'.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Also, our builds passed (yay!), so our reviewer will be more confident in merging our pull request.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More information can be found in the &lt;a href="https://docs.github.com/en/github/administering-a-repository/defining-the-mergeability-of-pull-requests/about-protected-branches" rel="noopener noreferrer"&gt;GitHub Docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Feel free to check out my &lt;a href="https://github.com/Mbaoma/ci-cd-tutorial.git" rel="noopener noreferrer"&gt;repository&lt;/a&gt; on which this article was built &lt;/p&gt;

&lt;p&gt;I hope we protect our branches better from now onwards. &lt;/p&gt;

&lt;p&gt;Feel free to reach out to me via &lt;a href="https://www.linkedin.com/in/mbaoma-chioma-mary" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Selah!!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>github</category>
      <category>programming</category>
    </item>
    <item>
      <title>Enforcing Coding Best Practices using CI</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Sun, 30 May 2021 11:37:02 +0000</pubDate>
      <link>https://forem.com/ladiesindevops/enforcing-coding-best-practices-using-ci-44n5</link>
      <guid>https://forem.com/ladiesindevops/enforcing-coding-best-practices-using-ci-44n5</guid>
      <description>&lt;p&gt;High-performing teams usually ship faster, better, and often! Organizations irresepctive of their level, focusing on stability and continuous delivery, will deploy frequently. Hundreds of continuous integration build run for every organization on a typical day. It indicates how CI has become an integral part of our development process. Hence to ensure that we are shipping quality code, we should integrate code quality checking in our CI. &lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1368692809436303360-796" src="https://platform.twitter.com/embed/Tweet.html?id=1368692809436303360"&gt;
&lt;/iframe&gt;

 &lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1281102326929874944-545" src="https://platform.twitter.com/embed/Tweet.html?id=1281102326929874944"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;Continuous integration ensures easier bug fixes, improves software quality, and reduces project risk. This blog will show what steps we should integrate into our CI pipelines to ensure that we ship better code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2A3cmnfnMsSS8u4kfcP1v2wg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2A3cmnfnMsSS8u4kfcP1v2wg.png"&gt;&lt;/a&gt;&lt;br&gt;CI Pipeline integrating code quality checks
  &lt;/p&gt;

&lt;p&gt;Traditionally, code reviews were used to enforce code quality. However, checking for things like missing spaces or missing parameters becomes a burden for code reviewers. It would be great to have tools that automate these checks. We can set some mandatory steps in our CI to run static analysis on the code for every push. This creates a better development lifecycle by providing early feedback without human intervention.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1202%2F1%2AGnbnoXtaOeYZx5ijucW2mA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1202%2F1%2AGnbnoXtaOeYZx5ijucW2mA.png"&gt;&lt;/a&gt;&lt;br&gt;Source: &lt;a href="https://xkcd.com/1285/" rel="noopener noreferrer"&gt;https://xkcd.com/1285/&lt;/a&gt;
  &lt;/p&gt;

&lt;h1&gt;
  
  
  Unit Testing
&lt;/h1&gt;

&lt;p&gt;Unit testing is the process of testing discrete functions at the source-code level. A CI pipeline almost always contains a test job that verifies your code. If the tests fail, the pipeline fails and users get notified, which allows the code to be fixed earlier. Unit tests should be fast and should aim to cover 100% of the codebase. This gives enough confidence that the application is functioning correctly at this point. If unit tests are not automated, the feedback cycle will be slow.&lt;/p&gt;

&lt;h1&gt;
  
  
  Code coverage
&lt;/h1&gt;

&lt;p&gt;Code coverage is a metric that can help you understand how comprehensive your unit tests are. It’s a handy metric for assessing the quality of your test suite. Code coverage reporting is how we know that the lines of code we have written are actually exercised by the tests.&lt;/p&gt;
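
&lt;p&gt;As a minimal sketch, a CI job that runs the unit tests and reports coverage could look like this (shown in Travis CI syntax, assuming a Python project using coverage.py; the &lt;code&gt;tests&lt;/code&gt; directory name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;language: python
python:
  - "3.9"

install:
  - pip install coverage

# fail the build if any unit test fails
script:
  - coverage run -m unittest discover tests

# print per-file coverage after a successful run
after_success:
  - coverage report
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;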

&lt;h1&gt;
  
  
  Static code analysis
&lt;/h1&gt;

&lt;p&gt;Static code analysis parses and checks the source code and gives feedback about potential issues in code. It acts as a powerful tool to detect common security vulnerabilities, possible runtime errors, and other general coding errors. It can also enforce your coding guidelines or naming conventions along with your maintainability requirements.&lt;/p&gt;

&lt;p&gt;Static code analysis accelerates the feedback cycle in the development process. It gives feedback on new coding issues specific to the branch or commits containing them. It quickly exposes the block of code that we can optimize in terms of quality. By integrating these checks into the CI workflow, we can tackle these code quality issues in the early stages of the delivery.&lt;/p&gt;

&lt;h2&gt;
  
  
  Linting
&lt;/h2&gt;

&lt;p&gt;A linter is a tool that analyzes source code to flag programming errors, bugs, stylistic errors, and suspicious constructs. It helps in enforcing a standard code style.&lt;/p&gt;

&lt;p&gt;We can introduce linter checks in our CI pipelines according to our project setup. There is a vast number of linters out there; depending on the programming language, there is often more than one linter to choose from.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2A-WosNzXumx9wbyGbgpcIlA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2A-WosNzXumx9wbyGbgpcIlA.png"&gt;&lt;/a&gt;&lt;br&gt;Source: &lt;a href="https://xkcd.com/1513/" rel="noopener noreferrer"&gt;https://xkcd.com/1513/&lt;/a&gt;
  &lt;/p&gt;

&lt;h3&gt;
  
  
  Linters for Static Analysis
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.sourcelevel.io/engines/pep8/" rel="noopener noreferrer"&gt;pep8&lt;/a&gt; for Python&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.sourcelevel.io/engines/pmd/" rel="noopener noreferrer"&gt;PMD&lt;/a&gt; for Java&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://eslint.org/" rel="noopener noreferrer"&gt;ESLint&lt;/a&gt; for Javascript&lt;/li&gt;
&lt;/ul&gt;
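
&lt;p&gt;As an illustration, a lint step for a Python project can be a single pipeline command (using &lt;code&gt;pycodestyle&lt;/code&gt;, the tool that implements the pep8 style guide, as one option among many; the &lt;code&gt;src/&lt;/code&gt; path is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# install the linter, then fail the CI job on any style violation
pip install pycodestyle
pycodestyle --max-line-length=100 src/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A non-zero exit code fails the pipeline, so style violations surface before code review.&lt;/p&gt;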

&lt;h3&gt;
  
  
  Linters focused on Security
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.sourcelevel.io/engines/bandit/" rel="noopener noreferrer"&gt;Bandit&lt;/a&gt; for Python&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.sourcelevel.io/engines/nodesecurity/" rel="noopener noreferrer"&gt;Node Security&lt;/a&gt; for JavaScript&lt;/li&gt;
&lt;li&gt;SpotBugs with &lt;a href="https://find-sec-bugs.github.io/" rel="noopener noreferrer"&gt;Find sec bugs&lt;/a&gt; for Java&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Docker lint check
&lt;/h2&gt;

&lt;p&gt;Considering that dockerizing applications is the norm, it is evident how important it is to introduce Docker lint checks in our CI pipelines. We should make sure that the Docker image generated for our application is optimized and secure.&lt;/p&gt;

&lt;p&gt;There are many open-source Docker linters available:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/hadolint/hadolint" rel="noopener noreferrer"&gt;hadolint&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/RedCoolBeans/dockerlint" rel="noopener noreferrer"&gt;dockerlint&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
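
&lt;p&gt;For example, hadolint can be dropped into a pipeline step with no configuration; it reads the Dockerfile from standard input and exits non-zero on rule violations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# lint the Dockerfile using the hadolint container image
docker run --rm -i hadolint/hadolint &amp;lt; Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;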

&lt;h2&gt;
  
  
  Secrets checks
&lt;/h2&gt;

&lt;p&gt;Sometimes developers leak GitHub tokens and various other secrets in codebases, which should be avoided. We should prevent secrets from being leaked when committing code. We can integrate Yelp’s &lt;a href="https://github.com/Yelp/detect-secrets" rel="noopener noreferrer"&gt;detect-secrets&lt;/a&gt; into our workflow to scan files for secrets and whitelist false positives to reduce the noise.&lt;/p&gt;
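
&lt;p&gt;As a sketch of the basic detect-secrets workflow: first generate a baseline of known (whitelisted) findings, then fail CI when new secrets appear:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create a baseline of existing findings, committed to the repo
detect-secrets scan &amp;gt; .secrets.baseline

# in CI: check changed files against the baseline; new findings fail the job
git diff --name-only HEAD~1 | xargs detect-secrets-hook --baseline .secrets.baseline
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;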

&lt;h2&gt;
  
  
  Dependency Checks
&lt;/h2&gt;

&lt;p&gt;Our code often uses many open-source dependencies from public repositories such as Maven, PyPI, or npm. These dependencies are maintained by third-party developers who regularly discover security vulnerabilities in their code. Such vulnerabilities are usually assigned a CVE number and disclosed publicly, so that developers using the affected code know to update their packages.&lt;/p&gt;

&lt;p&gt;Dependency checkers use information from CVE databases to check for vulnerable dependencies in our codebase. There are different tools for this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://snyk.io/" rel="noopener noreferrer"&gt;Snyk&lt;/a&gt; for many languages&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://owasp.org/www-project-dependency-check/" rel="noopener noreferrer"&gt;OWASP Dependency-Check&lt;/a&gt; for Java and Python&lt;/li&gt;
&lt;li&gt;npm comes with a built-in dependency check (&lt;code&gt;npm audit&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  All-in-one tools
&lt;/h1&gt;

&lt;p&gt;Some tools aggregate these different static code analysis tools into a single, easy-to-use package, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.sonarqube.org/" rel="noopener noreferrer"&gt;Sonarqube&lt;/a&gt;: Broad analysis tool&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/returntocorp/semgrep" rel="noopener noreferrer"&gt;Semgrep&lt;/a&gt;: Used for Go, Java, Python, and other languages&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tools provide an easy-to-use GUI to find, track and assign issues to developers.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Having the above-discussed steps as a part of your CI/CD pipeline will allow you to monitor, quickly rectify, and grow your code with much higher code quality.&lt;/p&gt;

&lt;p&gt;Originally Posted at &lt;a href="https://medium.com/@_nancychauhan/enforcing-coding-best-practices-using-ci-b3287e362202" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Designing Idempotent APIs</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Fri, 14 May 2021 07:09:11 +0000</pubDate>
      <link>https://forem.com/ladiesindevops/designing-idempotent-apis-17o2</link>
      <guid>https://forem.com/ladiesindevops/designing-idempotent-apis-17o2</guid>
      <description>&lt;p&gt;Networks fail! Timeouts, outages, and routing problems are bound to happen at any time. It challenges us to design our APIs and clients that will be robust in handling failures and ensuring consistency.&lt;/p&gt;

&lt;p&gt;We can design our APIs and systems to be idempotent, which means that they can be called any number of times while guaranteeing that side effects occur only once. Let’s take a deeper look at why incorporating idempotency is essential, how it works, and how to implement it.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why is idempotency critical in backend applications?
&lt;/h1&gt;

&lt;p&gt;Consider the design of a social networking site like Instagram, where a user can share a post with all their followers. Let’s assume that we are hosting the app server and the database server on two different machines for better performance and scalability, and that we are using PostgreSQL to store the data. A post will have the following model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE public.posts (
   id integer PRIMARY KEY,
   user_id integer REFERENCES users,
   image_id integer REFERENCES images NULL,
   content character varying(2048) COLLATE pg_catalog."default",
   create_timestamp timestamp with time zone NOT NULL DEFAULT CURRENT_TIMESTAMP
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Failures and Retries
&lt;/h2&gt;

&lt;p&gt;If we have our database on a separate server from our application server, sometimes posts will fail because of network issues. There could be the following issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The initial connection could fail as the application server tries to connect to a database server.&lt;/li&gt;
&lt;li&gt;The call could fail midway while the app server is fulfilling the operation, leaving the work in limbo.&lt;/li&gt;
&lt;li&gt;The call could succeed, but the connection breaks before the database server can tell the application server about it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2AgtfaDxb3P6Ut-rznIwy78g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fmax%2F1400%2F1%2AgtfaDxb3P6Ut-rznIwy78g.png" alt="Retry"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can fix this with retry logic, but it is very hard to determine the real cause of a network failure. This can lead to a scenario where the post has been inserted into the database, but the acknowledgment never reached the app server. The app server then unknowingly keeps retrying and creating duplicate posts, which would eventually lead to business loss. There are many other critical systems, such as payments and shopping sites, where idempotency is just as important.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;The solution to this is to retry, but make the operation idempotent. If an operation is idempotent, the app server can make that same call repeatedly while producing the same result.&lt;/p&gt;

&lt;p&gt;In our design, we can use universally unique identifiers. Each post will be given its own UUID by our application server. We can change our models to have a unique key constraint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
CREATE TABLE public.posts (
   id uuid PRIMARY KEY,
   user_id uuid REFERENCES users,
   image_id uuid REFERENCES images NULL,
   content character varying(2048) COLLATE pg_catalog."default",
   create_timestamp timestamp with time zone NOT NULL DEFAULT CURRENT_TIMESTAMP
);
-- string literals use single quotes; ON CONFLICT must come before RETURNING
INSERT INTO posts (id, user_id, image_id, content)
VALUES ('DC2FB40E-058F-4208-B9A3-EB1790C532C8', '20C5ADC5-D1A5-4A1F-800F-1AADD1E4E954', '3CC32CAE-B6AC-4C53-97EC-25EB49F2E7F3', 'Hello-world')
ON CONFLICT (id) DO NOTHING RETURNING id;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our application server generates the UUID when it wants to create a post and retries the INSERT statement until it gets a successful response from the database server. We handle constraint violations by returning the existing post. Hence, there will always be exactly one post created.&lt;/p&gt;

&lt;h1&gt;
  
  
  Idempotency in HTTP
&lt;/h1&gt;

&lt;p&gt;One of the important aspects of HTTP is the concept that some methods are idempotent. Take GET, for example: no matter how many times you call it, it results in the same outcome. On the other hand, POST is not expected to be idempotent; calling it multiple times may result in incorrect updates.&lt;/p&gt;

&lt;p&gt;Safe methods don’t change the representation of the resource on the server, e.g. a GET request should not change the content of the page you’re accessing. Safe methods are read-only, while a PUT will update the resource but is still idempotent in nature. For idempotency, only the actual back-end state of the server is considered; the status code returned by each request may differ: the first call of a DELETE will likely return a 200, while successive ones will likely return a 404.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DELETE /idX/delete HTTP/1.1   -&amp;gt; Returns 200 if idX exists
DELETE /idX/delete HTTP/1.1   -&amp;gt; Returns 404 as it just got deleted
DELETE /idX/delete HTTP/1.1   -&amp;gt; Returns 404
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;GET is both safe and idempotent.&lt;/li&gt;
&lt;li&gt;HEAD is also both safe and idempotent.&lt;/li&gt;
&lt;li&gt;OPTIONS is also safe and idempotent.&lt;/li&gt;
&lt;li&gt;PUT is not safe but idempotent.&lt;/li&gt;
&lt;li&gt;DELETE is not safe but idempotent.&lt;/li&gt;
&lt;li&gt;POST is neither safe nor idempotent.&lt;/li&gt;
&lt;li&gt;PATCH is also neither safe nor idempotent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The HTTP specification defines certain methods to be idempotent, but it is up to the server to actually implement them. For example, the client can send a request-id header containing a UUID, which the server uses to deduplicate PUT requests. Likewise, when serving a GET request, the server should not change any server-side data.&lt;/p&gt;
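&lt;p&gt;A minimal sketch of such server-side deduplication, assuming the client sends a UUID as a request-id (the class and field names here are hypothetical):&lt;/p&gt;

```python
class IdempotentCounterService:
    """Toy service that deduplicates mutating requests by their request-id."""
    def __init__(self):
        self.total = 0
        self._responses = {}  # maps request-id to the cached response

    def handle_put(self, request_id, amount):
        # Replays of the same request-id return the cached response
        # instead of applying the update a second time.
        if request_id in self._responses:
            return self._responses[request_id]
        self.total += amount
        response = {"status": 200, "total": self.total}
        self._responses[request_id] = response
        return response
```

&lt;p&gt;Sending the same request-id twice returns the same response and applies the update only once.&lt;/p&gt;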

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Designing idempotent systems is important for building a resilient microservice-based architecture. It helps solve many of the problems caused by the network, which is inherently lossy. Leveraging a replayable message queue such as Kafka makes sure your operations can be retried after a long outage. This lets you design systems that never lose data: any missing data can be recovered by replaying the message queue, and if all operations are idempotent, the system converges to the same state regardless of how many times messages are processed.&lt;/p&gt;
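&lt;p&gt;For instance, an idempotent consumer lets you replay a whole queue safely (a hedged sketch; in a real system the set of processed ids would live in a durable store, not a Python set):&lt;/p&gt;

```python
def consume(messages, processed_ids, state):
    """Apply each message at most once, so replays converge to the same state."""
    for msg_id, amount in messages:
        if msg_id in processed_ids:
            continue  # already applied on a previous pass
        state["balance"] = state.get("balance", 0) + amount
        processed_ids.add(msg_id)
    return state
```

&lt;p&gt;Replaying the same messages a second time leaves the state unchanged.&lt;/p&gt;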

&lt;p&gt;Originally published at &lt;a href="https://medium.com/@_nancychauhan/idempotency-in-api-design-bc4ea812a881" rel="noopener noreferrer"&gt;https://medium.com/@_nancychauhan/idempotency-in-api-design-bc4ea812a881&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building my Website-as-a-Service project</title>
      <dc:creator>Pauline P. Narvas</dc:creator>
      <pubDate>Thu, 13 May 2021 11:51:24 +0000</pubDate>
      <link>https://forem.com/ladiesindevops/building-my-website-as-a-service-project-444j</link>
      <guid>https://forem.com/ladiesindevops/building-my-website-as-a-service-project-444j</guid>
      <description>&lt;p&gt;The first time I found out about Terraform was in 2019 during the third rotation of my graduate scheme. At the time, I hadn’t realised how powerful infrastructure as code was but now that I’ve had a good amount of exposure to these technologies, I can’t think of anything better! Spinning up infrastructure in minutes and it all just working (after multiple config changes because nothing ever works first time 😂) is a dream!&lt;/p&gt;

&lt;p&gt;One of the projects I worked on during that rotation was what we called “website as a service.” The idea was to give teams a way to spin up their own static websites, backed by AWS’ Simple Storage Service (S3), using a pipeline. All they had to do was enter a few parameters corresponding to the details of their website, for example the domain name. &lt;/p&gt;

&lt;p&gt;In my new company, I recently participated in a company-wide Hackathon. The team I joined had a similar idea in mind, and it may be cheating, but I decided to join them to get my hands on Terraform again after not doing so for a while. With that said, although the solution was similar, it wasn’t exactly the same (I had to plug things together that worked with our internal tech stacks), and I actually ended up learning some new things, especially on the Jenkins/Groovy-scripting front. &lt;/p&gt;

&lt;p&gt;So for this blog post, I wanted to share the solution I hacked together after reading various documentation and seeing what other Engineers had done (thank you, Google!). I won’t be going over the Jenkins part in detail, but I will have some pointers for you to get started.&lt;/p&gt;

&lt;p&gt;I’ll be re-using code that I’ve seen other Engineers implement - a list of original posts can be found at the end of this blog post.&lt;/p&gt;




&lt;h2&gt;
  
  
  Setting up
&lt;/h2&gt;

&lt;p&gt;There are a few things that you need to make sure that you have on hand:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An AWS account with AWS CLI access&lt;/li&gt;
&lt;li&gt;Terraform installed on your laptop (&lt;a href="https://www.terraform.io/downloads.html"&gt;Download Terraform - Terraform by HashiCorp&lt;/a&gt;) &lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Set up your project
&lt;/h3&gt;

&lt;p&gt;The best part of starting a project - getting set-up and organised with your fresh motivation to do the thing. Relatable? Yeah, I feel ya. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;mkdir website-terraform&lt;/code&gt;&lt;br&gt;
&lt;code&gt;cd website-terraform&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let’s also make sure that our Terraform version is up-to-date.&lt;br&gt;
&lt;code&gt;terraform version&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Initialising Terraform
&lt;/h3&gt;

&lt;p&gt;Coolio. We’re ready! Let’s get writing some Terraform. Let’s create some files!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform.tf
dist
    - index.html
    - style.css
    - about
        - about-me.html
AWS-modules
    - s3.tf
    - s3-iam.tf
    - route53.tf
    - variables.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, you may want to just copy and paste the Terraform code, which is completely fine if you want to get set up quickly. &lt;/p&gt;

&lt;p&gt;But if you’re a newbie learning Terraform for the first time,  I found that typing some Terraform for the first time line-by-line even though I had the code in front of me helped me understand the syntax better. I guess this is true for learning to code in general. &lt;em&gt;Newbie learning hack&lt;/em&gt;! 😊&lt;/p&gt;

&lt;p&gt;In your code editor (my personal favourite is VS Code), open up your &lt;code&gt;terraform.tf&lt;/code&gt; file. This is where we create our first Terraform configuration! &lt;a href="https://learn.hashicorp.com/tutorials/terraform/aws-build?in=terraform/aws-get-started"&gt;Build Infrastructure | Terraform - HashiCorp Learn&lt;/a&gt; Terraform is built up of various blocks; here we have a provider block that states the plugin we are going to use. In our case it’s AWS, because that is the cloud provider that will host all our resources. If you’re familiar with other cloud providers, you can use those too. The &lt;code&gt;required_providers&lt;/code&gt; block is required for recent versions of Terraform. &lt;/p&gt;

&lt;p&gt;In the provider block, you’ll see a &lt;code&gt;profile&lt;/code&gt; and &lt;code&gt;region&lt;/code&gt; key/value pair. These refer to which AWS credentials profile to use and which region you will be deploying your resources to (usually you can leave the profile as default, but if you have multiple AWS accounts, make sure that it points to the right one). &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remember that it is IMPORTANT not to hard-code any credentials here&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 2.70.0"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = var.aws_region
}

module "website" {
  source = "./AWS-modules"
  domain_name = var.domain_name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;module&lt;/code&gt; block links to all the resources that make up our static website.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;variables.tf&lt;/code&gt; is where all the variables we reference across our Terraform files will live! They don’t need any defaults, but if you already know what value you want every single time (e.g. a region you definitely want to use for all your resources), then it’s a good idea to set a default here. Otherwise, you can leave it blank. In our case, it’s best to leave it blank because this will later be parameterised in our Jenkins job.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "aws_region" {
  type = string
    default = "eu-west-1"
}

variable "domain_name" {
  type = string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Whilst we are here, let’s create a simple HTML website that also has a link to an “About Me” page.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;head&amp;gt;Welcome to my static website!&amp;lt;/head&amp;gt;
&amp;lt;a href="/pages/about.html" &amp;gt;About me&amp;lt;/a&amp;gt;

&amp;lt;p&amp;gt;This site was deployed via cool Terraform magic&amp;lt;/p&amp;gt;
&amp;lt;/html&amp;gt;

&amp;lt;html&amp;gt;
&amp;lt;h1&amp;gt;About Me&amp;lt;/h1&amp;gt;
&amp;lt;p&amp;gt;I'm a page!&amp;lt;/p&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setting up your S3 Bucket
&lt;/h3&gt;

&lt;p&gt;Resource blocks are used to define components of your infrastructure. The documentation states that &lt;em&gt;“a resource might be a physical or virtual component such as an EC2 instance, or it can be a logical resource such as a Heroku application. Resource blocks have two strings before the block: the resource type and the resource name.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In our case, we’re creating an S3 bucket, IAM policy for the S3 bucket and Route53 resource. As you look through the code for creating the different parts of infrastructure as code, try and see if you can understand what the Terraform code is relating to. &lt;/p&gt;

&lt;p&gt;Those who have some AWS knowledge may be able to see how a line relates to something you may have seen on the AWS Management Console. This is how I like to think of it when I’m writing my Terraform code! &lt;/p&gt;

&lt;p&gt;&lt;code&gt;s3.tf&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates a bucket that corresponds to the domain name you input (linking back to the variables.tf file)&lt;/li&gt;
&lt;li&gt;Adds a policy to the bucket and has an ACL that is set to public-read (so that anyone with the link can access it)&lt;/li&gt;
&lt;li&gt;With the &lt;code&gt;website {}&lt;/code&gt; block, this S3 bucket now knows, “Oh, I’m going to be used as a website! Yay!” It maps the index_document in the root folder to index.html, as well as the error_document (i.e. what happens when a user goes to your-website.com/random-made-up-page)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "my-website" {
  bucket = "${var.domain_name}"
  acl = "public-read"
  policy = data.aws_iam_policy_document.my-website_policy.json
  website {
    index_document = "index.html"
    error_document = "error.html"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;s3-iam.tf&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This creates the policy for your S3 bucket. You can read up on AWS policies here.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_iam_policy_document" "my-website_policy" {
  statement {
    actions = [
      "s3:GetObject"
    ]
    principals {
      identifiers = ["*"]
      type = "AWS"
    }
    resources = [
      "arn:aws:s3:::${var.domain_name}*"
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setting up Route53
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;route53.tf&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What could this Terraform code be creating, I wonder? 🤔 That’s right! A Route53 resource! Again, if you’re familiar with navigating the AWS Management Console, some of these fields may look familiar. In this case, we’re creating an Alias record that corresponds to the S3 generated URL from the bucket we created. &lt;/li&gt;
&lt;li&gt;Bear in mind - you do need to purchase a domain for this to work. &lt;/li&gt;
&lt;li&gt;Notice how I’ve used the domain name variable here again.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route53_record" "www" {
  name = "${var.domain_name}"
  type = "A"
  alias {
    name =  aws_s3_bucket.my-website_bucket.website_domain
    zone_id = aws_s3_bucket.my-website_bucket.hosted_zone_id
    evaluate_target_health = false
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Running a Terraform plan / apply
&lt;/h2&gt;

&lt;p&gt;Now for the fun part! Let’s start initiating the Terraform and see all the code that you just wrote in action.&lt;/p&gt;

&lt;p&gt;Some key Terraform commands:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform init&lt;/code&gt;&lt;br&gt;
This initialises Terraform based on the config in &lt;code&gt;terraform.tf&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform plan&lt;/code&gt;&lt;br&gt;
This command creates a plan of the infrastructure that you’re about to deploy. It’s a good idea to always run a plan to make sure that you see exactly what resources are going to be created on AWS.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform apply&lt;/code&gt;&lt;br&gt;
If you’re all happy with what you see in the plan then go ahead and run an apply. This creates all the resources on AWS.&lt;/p&gt;

&lt;p&gt;On your AWS Console, you should see all your resources built! Magic, right? &lt;/p&gt;

&lt;p&gt;As easy as it was to create your infrastructure, you can also delete it all at once too with…&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform destroy&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;🥳🎉🍾 Powerful, right? That is Infrastructure as Code for you! 🎉🍾🥳&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Jenkins - some pointers
&lt;/h2&gt;

&lt;p&gt;Now I have to put my hand up and be honest here… Groovy scripts aren’t my strong point. I also don’t enjoy using Jenkins, if we’re being even more real over here. 😆 But as part of this project, Jenkins was in scope - after all, it wasn’t just for me to run it all locally but for it to benefit both technical and non-technical colleagues from other squads too. &lt;/p&gt;

&lt;p&gt;I won’t dive into groovy in this blog post (there’s already quite a bit of Terraform here to get your head around) but here are some pointers to think about: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Parameterise the domain name so that it takes in input from users&lt;/li&gt;
&lt;li&gt;A simple bash script that uses the AWS CLI to run an &lt;code&gt;aws s3 sync&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;Or if your index.html or application build is not committed to a repo (e.g. GitHub or BitBucket) then potentially a way to upload and unzip your application build onto S3 via a File Upload parameter.&lt;/li&gt;
&lt;/ul&gt;
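&lt;p&gt;For the second pointer, the sync step could look something like this (a dry-run sketch that only prints the command rather than running it; the domain would come from the parameterised Jenkins input, and the aws CLI is assumed to be configured on the agent):&lt;/p&gt;

```shell
sync_site() {
  # Print the aws CLI command the Jenkins job would execute to push
  # the built site into the bucket created by Terraform.
  local domain="$1" build_dir="$2"
  echo "aws s3 sync ${build_dir} s3://${domain} --acl public-read --delete"
}

sync_site "example.com" "dist"
```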




&lt;h2&gt;
  
  
  Other things to think about
&lt;/h2&gt;

&lt;p&gt;The solution above doesn’t cover using HTTPS, but it’s wise to do so. You could add CloudFront in front of your S3 website! Give it a go. 😀&lt;/p&gt;

&lt;p&gt;Woohoo! What a wild ride, right? I hope that this inspires you to dabble in some Terraform. I think that this project was a fantastic way for me to get my head round Infrastructure as Code and other Cloud native technologies to make playing around with AWS much more streamlined and fun.  ☁️&lt;/p&gt;

&lt;p&gt;Let me know if this was useful to you, happy playing around in da Clouds!&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>jenkins</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Building a Prometheus Exporter</title>
      <dc:creator>Nancy Chauhan</dc:creator>
      <pubDate>Mon, 03 May 2021 11:34:39 +0000</pubDate>
      <link>https://forem.com/ladiesindevops/building-a-prometheus-exporter-1cb9</link>
      <guid>https://forem.com/ladiesindevops/building-a-prometheus-exporter-1cb9</guid>
      <description>&lt;p&gt;&lt;a href="https://prometheus.io/docs/introduction/overview/"&gt;Prometheus&lt;/a&gt; is an open-source monitoring tool for collecting metrics from your application and infrastructure. As one of the foundations of the cloud-native environment, Prometheus has become the de-facto standard for visibility in the cloud-native landscape.&lt;/p&gt;

&lt;h1&gt;
  
  
  How Prometheus Works
&lt;/h1&gt;

&lt;p&gt;Prometheus is a &lt;a href="https://www.influxdata.com/time-series-database/"&gt;time-series database&lt;/a&gt; and a pull-based monitoring system. It periodically scrapes HTTP endpoints (targets) to retrieve metrics. It can monitor targets such as servers, databases, standalone virtual machines, etc.&lt;br&gt;
Prometheus reads metrics exposed by targets using a simple &lt;a href="https://prometheus.io/docs/instrumenting/exposition_formats/#text-based-format"&gt;text-based&lt;/a&gt; exposition format. There are client libraries that help your application expose metrics in Prometheus format.&lt;/p&gt;
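&lt;p&gt;For example, a target’s &lt;code&gt;/metrics&lt;/code&gt; endpoint returns plain text along these lines (the metric name and values here are illustrative):&lt;/p&gt;

```text
# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="400"} 3
```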

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K2vVJzzp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AUhSRulXaVEDoQQRL4nPu6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K2vVJzzp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AUhSRulXaVEDoQQRL4nPu6g.png" alt="How Prometheus Works?"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Prometheus Metrics
&lt;/h1&gt;

&lt;p&gt;While working with Prometheus it is important to know about Prometheus metrics. These are the four types of metrics that will help in instrumenting your application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Counter (the only way is up): Use counters for counting events, jobs, money, HTTP requests, etc., where a cumulative value is useful.&lt;/li&gt;
&lt;li&gt;Gauges (the current picture): Use where the current value is important: CPU, RAM, JVM memory usage, queue levels, etc.&lt;/li&gt;
&lt;li&gt;Histograms (sampling observations): Generally used for timings, where an overall picture over a time frame is required, such as query times and HTTP response times.&lt;/li&gt;
&lt;li&gt;Summaries (client-side quantiles): Similar in spirit to the histogram, with the difference that quantiles are calculated on the client side as well. Use when you frequently need quantile values from one or more histogram metrics.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Using Prometheus
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus provides client libraries that you can use to add instrumentation to your applications.&lt;/li&gt;
&lt;li&gt;The client library exposes your metrics at URLs such as &lt;a href="http://localhost:8000/metrics"&gt;http://localhost:8000/metrics&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Configure the URL as one of the targets in Prometheus. Prometheus will then scrape the metrics at periodic intervals. You can use visualization tools such as Grafana to view your metrics, or configure alerts with Alertmanager via custom rules defined in configuration files.&lt;/li&gt;
&lt;/ul&gt;
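&lt;p&gt;The target configuration is a short entry in &lt;code&gt;prometheus.yml&lt;/code&gt; (the job name and port here are illustrative):&lt;/p&gt;

```yaml
scrape_configs:
  - job_name: 'my_application'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:8000']
```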
&lt;h1&gt;
  
  
  Prometheus Exporters
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rfWelX0B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A1OpRRb67QvRVg4nx" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rfWelX0B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/0%2A1OpRRb67QvRVg4nx" alt="Exporter"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prometheus has a huge ecosystem of &lt;a href="https://awesomeopensource.com/projects/prometheus-exporter"&gt;exporters&lt;/a&gt;. Prometheus exporters bridge the gap between Prometheus and applications that don’t export metrics in the Prometheus format. For example, Linux does not expose Prometheus-formatted metrics. That’s why Prometheus exporters, like &lt;a href="https://github.com/prometheus/node_exporter"&gt;the node exporter&lt;/a&gt;, exist.&lt;/p&gt;

&lt;p&gt;Some applications like Spring Boot, Kubernetes, etc. expose Prometheus metrics out of the box. On the other hand, exporters consume metrics from an existing source and utilize the Prometheus client library to export metrics to Prometheus.&lt;/p&gt;

&lt;p&gt;Prometheus exporters can be stateful or stateless. A stateful exporter gathers data itself and exports it using the general metric types such as counter, gauge, etc. Stateless exporters translate metrics from another format into the Prometheus metrics format using the counter metric family, gauge metric family, etc. They do not maintain any local state; instead, they show a view derived from another metric source such as JMX. For example, Jenkins Jobmon is a Prometheus exporter for Jenkins that calls the Jenkins API to fetch the metrics on every scrape.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i3JOwpme--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/grofers"&gt;
        grofers
      &lt;/a&gt; / &lt;a href="https://github.com/grofers/jenkins-jobmon"&gt;
        jenkins-jobmon
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Prometheus exporter to monitor Jenkins jobs
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
Jenkins Jobmon&lt;/h1&gt;
&lt;p&gt;&lt;a href="https://github.com/grofers/jenkins-jobmon/actions"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cRWrnscq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/grofers/jenkins-jobmon/workflows/ci/badge.svg" alt="CI Actions Status"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Jenkins exporter for Prometheus in python.&lt;/p&gt;
&lt;p&gt;It uses &lt;a href="https://github.com/prometheus/client_python#custom-collectors"&gt;Prometheus custom collector API&lt;/a&gt;, which allows making custom
collectors by proxying metrics from other systems.&lt;/p&gt;
&lt;p&gt;Currently we fetch following metrics:&lt;/p&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Labels&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_total_duration_seconds_sum&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Jenkins build total duration in millis&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_fail_count&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Jenkins build fail counts&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_total_count&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Jenkins build total counts&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_pass_count&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Jenkins build pass counts&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_pending_count&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Jenkins build pending counts&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_stage_duration&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Gauge&lt;/td&gt;
&lt;td&gt;Jenkins build stage duration in ms&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;, &lt;code&gt;stagename&lt;/code&gt;, &lt;code&gt;build&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_stage_pass_count&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Counter&lt;/td&gt;
&lt;td&gt;Jenkins build stage pass count&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;, &lt;code&gt;stagename&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;jenkins_job_monitor_stage_fail_count&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Counter&lt;/td&gt;
&lt;td&gt;Jenkins build stage fail count&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;jobname&lt;/code&gt;, &lt;code&gt;group&lt;/code&gt;, &lt;code&gt;repository&lt;/code&gt;, &lt;code&gt;stagename&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
Usage&lt;/h2&gt;
&lt;h3&gt;
Configuration&lt;/h3&gt;
&lt;p&gt;Create a file &lt;code&gt;config.yml&lt;/code&gt; using this template:&lt;/p&gt;
&lt;div class="highlight highlight-source-yaml position-relative js-code-highlight"&gt;
&lt;pre&gt;&lt;span class="pl-ent"&gt;jobs&lt;/span&gt;
  &lt;span class="pl-ent"&gt;example&lt;/span&gt;:          &lt;/pre&gt;…
&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/grofers/jenkins-jobmon"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h1&gt;
  
  
  Let’s build a generic HTTP server metrics exporter!
&lt;/h1&gt;

&lt;p&gt;We will build a Prometheus exporter for monitoring HTTP servers from logs. It extracts data from HTTP logs and exports it to Prometheus. We will be using a &lt;a href="https://github.com/prometheus/client_python"&gt;python client library&lt;/a&gt;, &lt;code&gt;prometheus_client&lt;/code&gt;, to define and expose metrics via an HTTP endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RBqc8oLO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AtnVyecPLcTgwQY0LbChBxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RBqc8oLO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AtnVyecPLcTgwQY0LbChBxw.png" alt="One of the metrics from httpd_exporter"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our HTTP exporter will repeatedly follow server logs to extract useful information such as HTTP requests, status codes, bytes transferred, and request timing information. HTTP logs are structured and standardized across different servers such as Apache, Nginx, etc. You can read more about the format &lt;a href="https://publib.boulder.ibm.com/tividd/td/ITWSA/ITWSA_info45/en_US/HTML/guide/c-logs.html"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;We will use a counter metric to store the HTTP requests using status code as a label.&lt;/li&gt;
&lt;li&gt;We will use a counter metric to store bytes transferred.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is the script, which collects data from apache logs indefinitely and exposes metrics to Prometheus:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
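&lt;p&gt;The core of that script can be sketched as follows (a stdlib-only sketch: plain dicts stand in for the &lt;code&gt;prometheus_client&lt;/code&gt; counters the real script increments, and the function and variable names are illustrative):&lt;/p&gt;

```python
import re

# Common Log Format, e.g.
# 127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326
# Groups: 1 host, 2 timestamp, 3 request line, 4 status code, 5 bytes sent.
LOG_LINE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d{3}) (\d+|-)')

def gather_metrics(line, requests_by_status, bytes_total):
    """Parse one log line and update the counters (dicts standing in for
    prometheus_client Counter objects labelled by status code)."""
    match = LOG_LINE.match(line)
    if not match:
        return
    status = match.group(4)
    requests_by_status[status] = requests_by_status.get(status, 0) + 1
    if match.group(5) != "-":
        bytes_total["total"] = bytes_total.get("total", 0) + int(match.group(5))
```

&lt;p&gt;The real exporter wraps this in a loop that tails the log file and serves the counters over HTTP via &lt;code&gt;prometheus_client&lt;/code&gt;.&lt;/p&gt;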



&lt;p&gt;The &lt;code&gt;follow_log&lt;/code&gt; function tails the apache logs stored at var/log/apache on your system indefinitely. &lt;code&gt;gather_metrics()&lt;/code&gt; uses a regular expression to fetch the useful information from the logs, like status_code and total_bytes_sent, and increments the counters accordingly.&lt;/p&gt;

&lt;p&gt;If you run the script, it will start a server at &lt;a href="http://localhost:8000"&gt;http://localhost:8000&lt;/a&gt;, and the collected metrics will show up there. Set up &lt;a href="https://github.com/Nancy-Chauhan/httpd_exporter/blob/master/prometheus/prometheus.yml"&gt;Prometheus&lt;/a&gt; to scrape the endpoint. Over time, Prometheus will build the time series for the collected metrics. Set up &lt;a href="https://github.com/Nancy-Chauhan/httpd_exporter/blob/master/docker-compose.yml"&gt;Grafana&lt;/a&gt; to visualize the data within Prometheus.&lt;/p&gt;

&lt;p&gt;You can find the code here and run the exporter:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i3JOwpme--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-ba8488d21cd8ee1fee097b8410db9deaa41d0ca30b004c0c63de0a479114156f.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/Nancy-Chauhan"&gt;
        Nancy-Chauhan
      &lt;/a&gt; / &lt;a href="https://github.com/Nancy-Chauhan/httpd_exporter"&gt;
        httpd_exporter
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Prometheus exporter for monitoring apache
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
httpd_exporter&lt;/h1&gt;
&lt;p&gt;Prometheus exporter for monitoring http servers from logs.&lt;/p&gt;
&lt;p&gt;It extracts data from http logs and export to prometheus.&lt;/p&gt;
&lt;h3&gt;
Requirements&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;python 3.6 +&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
Usage&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Clone the repo&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;docker-compose up&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
Grafana Dashboard&lt;/h2&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://raw.githubusercontent.com/Nancy-Chauhan/httpd_exporter/master/docs/grafana_dashboard.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W-VP-KO2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/Nancy-Chauhan/httpd_exporter/master/docs/grafana_dashboard.png" alt="Grafana Dashboard"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/Nancy-Chauhan/httpd_exporter"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;p&gt;Originally Posted at &lt;a href="https://medium.com/@_nancychauhan/building-a-prometheus-exporter-8a4bbc3825f5"&gt;https://medium.com/@_nancychauhan/building-a-prometheus-exporter-8a4bbc3825f5&lt;/a&gt; &lt;/p&gt;

</description>
      <category>devops</category>
      <category>monitoring</category>
    </item>
  </channel>
</rss>
