<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: karan singh</title>
    <description>The latest articles on Forem by karan singh (@ksingh7).</description>
    <link>https://forem.com/ksingh7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F802596%2Faff52955-37e9-4bd2-9ae0-ee086bd6f95f.png</url>
      <title>Forem: karan singh</title>
      <link>https://forem.com/ksingh7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ksingh7"/>
    <language>en</language>
    <item>
      <title>A Free Tool to Check VRAM Requirements for Any HuggingFace Model</title>
      <dc:creator>karan singh</dc:creator>
      <pubDate>Wed, 07 Jan 2026 12:03:51 +0000</pubDate>
      <link>https://forem.com/ksingh7/a-free-tool-to-check-vram-requirements-for-any-huggingface-model-4p7h</link>
      <guid>https://forem.com/ksingh7/a-free-tool-to-check-vram-requirements-for-any-huggingface-model-4p7h</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; I got tired of guessing whether models would fit on my GPU. So I built &lt;a href="https://vramio.ksingh.in" rel="noopener noreferrer"&gt;vramio&lt;/a&gt; — a free API that tells you exactly how much VRAM any HuggingFace model needs. One curl command. Instant answer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4q0kzir3wmdk2az7c76.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4q0kzir3wmdk2az7c76.png" alt="VRAMIO in Action" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem Every ML Engineer Knows
&lt;/h2&gt;

&lt;p&gt;You're browsing HuggingFace. You find a model that looks perfect for your project. Then the questions start:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Will this fit on my 24GB RTX 4090?"&lt;/li&gt;
&lt;li&gt;"Do I need to quantize it?"&lt;/li&gt;
&lt;li&gt;"What's the actual memory footprint?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the answers? They're nowhere.&lt;/p&gt;

&lt;p&gt;Some model cards mention it. Most don't. You could download the model and find out the hard way. Or dig through config files, count parameters, multiply by bytes per dtype, add overhead for KV cache...&lt;/p&gt;

&lt;p&gt;I've done this calculation dozens of times and blogged about it in &lt;a href="https://medium.com/@ksingh7/calculate-how-much-gpu-memory-you-need-to-serve-any-llm-67301a844f21" rel="noopener noreferrer"&gt;Calculate vRAM for LLM&lt;/a&gt;. It's tedious. It shouldn't be.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: One API Call
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s2"&gt;"https://vramio.ksingh.in/model?hf_id=mistralai/Mistral-7B-v0.1"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. You get back:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mistralai/Mistral-7B-v0.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"total_parameters"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"7.24B"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"memory_required"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"13.49 GB"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"recommended_vram"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"16.19 GB"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"other_precisions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"fp32"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"26.99 GB"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"fp16"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"13.49 GB"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"int8"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"6.75 GB"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"int4"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"3.37 GB"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;code&gt;recommended_vram&lt;/code&gt;&lt;/strong&gt; includes the standard 20% overhead for activations and KV cache during inference. This is what you actually need.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;No magic. No downloads. Just math.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fetch safetensors metadata from HuggingFace (just the headers, ~50KB)&lt;/li&gt;
&lt;li&gt;Parse tensor shapes and data types&lt;/li&gt;
&lt;li&gt;Calculate: &lt;code&gt;parameters × bytes_per_dtype&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Add 20% for inference overhead&lt;/li&gt;
&lt;/ol&gt;
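
&lt;p&gt;The arithmetic behind steps 3 and 4 is small enough to sketch directly. This is a simplified illustration, not the actual vramio source, and the function name is mine:&lt;/p&gt;

```python
def estimate_vram(num_params: float, bytes_per_param: float) -> dict:
    """Weights memory (in GiB) plus ~20% overhead for activations and KV cache."""
    gib = 1024 ** 3
    weights_gb = num_params * bytes_per_param / gib
    return {
        "memory_required_gb": round(weights_gb, 2),
        "recommended_vram_gb": round(weights_gb * 1.2, 2),
    }

# Mistral-7B: 7.24B parameters at fp16 (2 bytes per parameter)
print(estimate_vram(7.24e9, 2))
# → {'memory_required_gb': 13.49, 'recommended_vram_gb': 16.18}
```

&lt;p&gt;The &lt;code&gt;other_precisions&lt;/code&gt; figures follow the same formula with a different byte width: 4 bytes for fp32, 1 for int8, 0.5 for int4.&lt;/p&gt;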

&lt;p&gt;The entire thing is &lt;strong&gt;160 lines of Python&lt;/strong&gt; with a single dependency (&lt;code&gt;httpx&lt;/code&gt;).&lt;/p&gt;
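
&lt;p&gt;Step 1 works because a safetensors file starts with an 8-byte little-endian length prefix followed by a JSON header listing every tensor's dtype and shape, so parameters can be counted without downloading any weights. A minimal parser might look like this (an illustrative helper over locally fabricated bytes; the real service fetches this prefix over HTTP):&lt;/p&gt;

```python
import json

def count_params(data: bytes) -> int:
    """Sum parameter counts from a safetensors header:
    8-byte little-endian length prefix, then a JSON tensor table."""
    hdr_len = int.from_bytes(data[:8], "little")
    header = json.loads(data[8 : 8 + hdr_len])
    total = 0
    for name, meta in header.items():
        if name == "__metadata__":  # non-tensor bookkeeping entry
            continue
        n = 1
        for dim in meta["shape"]:
            n *= dim
        total += n
    return total

# Tiny fabricated header for illustration (one 4x8 fp16 tensor)
hdr = json.dumps({
    "wte.weight": {"dtype": "F16", "shape": [4, 8], "data_offsets": [0, 64]},
    "__metadata__": {"format": "pt"},
}).encode()
blob = len(hdr).to_bytes(8, "little") + hdr
print(count_params(blob))  # → 32
```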

&lt;h2&gt;
  
  
  Why I Built This
&lt;/h2&gt;

&lt;p&gt;I run models locally. A lot. Every time I wanted to try something new, I'd waste 10 minutes figuring out if it would even fit.&lt;/p&gt;

&lt;p&gt;I wanted something dead simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No signup&lt;/li&gt;
&lt;li&gt;No rate limits&lt;/li&gt;
&lt;li&gt;No bloated web UI&lt;/li&gt;
&lt;li&gt;Just an API endpoint&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I built it over a weekend and deployed it for free on Render.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Live API:&lt;/strong&gt; &lt;a href="https://vramio.ksingh.in/model?hf_id=YOUR_MODEL_ID" rel="noopener noreferrer"&gt;https://vramio.ksingh.in/model?hf_id=YOUR_MODEL_ID&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Examples:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Llama 2 7B&lt;/span&gt;
curl &lt;span class="s2"&gt;"https://vramio.ksingh.in/model?hf_id=meta-llama/Llama-2-7b"&lt;/span&gt;

&lt;span class="c"&gt;# Phi-2&lt;/span&gt;
curl &lt;span class="s2"&gt;"https://vramio.ksingh.in/model?hf_id=microsoft/phi-2"&lt;/span&gt;

&lt;span class="c"&gt;# Mistral 7B&lt;/span&gt;
curl &lt;span class="s2"&gt;"https://vramio.ksingh.in/model?hf_id=mistralai/Mistral-7B-v0.1"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Self-Host It
&lt;/h2&gt;

&lt;p&gt;It's open source. Run your own:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/ksingh-scogo/vramio.git
&lt;span class="nb"&gt;cd &lt;/span&gt;vramio
pip &lt;span class="nb"&gt;install &lt;/span&gt;httpx[http2]
python server_embedded.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;This solves my immediate problem. If people find it useful, I might add:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Batch queries for multiple models&lt;/li&gt;
&lt;li&gt;Training memory estimates (not just inference)&lt;/li&gt;
&lt;li&gt;Browser extension for HuggingFace&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But honestly? The current version does exactly what I needed. Sometimes simple is enough.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/ksingh-scogo/vramio" rel="noopener noreferrer"&gt;https://github.com/ksingh-scogo/vramio&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Built with help from &lt;a href="https://github.com/alvarobartt/hf-mem" rel="noopener noreferrer"&gt;hf-mem&lt;/a&gt; by @alvarobartt.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If this saved you time, consider starring the repo. And if you have ideas for improvements, open an issue — I'd love to hear them.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>vram</category>
      <category>gpu</category>
      <category>llm</category>
      <category>inferencing</category>
    </item>
    <item>
      <title>Deploy MongoDB on OpenShift using Helm</title>
      <dc:creator>karan singh</dc:creator>
      <pubDate>Tue, 15 Mar 2022 13:16:42 +0000</pubDate>
      <link>https://forem.com/ksingh7/deploy-mongodb-on-openshift-using-helm-4a6k</link>
      <guid>https://forem.com/ksingh7/deploy-mongodb-on-openshift-using-helm-4a6k</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohmvabe402n7pe2tmloj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohmvabe402n7pe2tmloj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;This is yet another blog post on deploying Blahblahblah on OpenShift, and this time it's MongoDB. In this post you will learn how to deploy MongoDB on OpenShift using a Helm chart.&lt;/p&gt;

&lt;h3&gt;
  
  
  Let's get started
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Install helm CLI on your local machine (&lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;see docs&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Login to OpenShift CLI&lt;/li&gt;
&lt;li&gt;Create a new Project on OpenShift
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc new-project ksingh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Add helm repository
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add bitnami https://charts.bitnami.com/bitnami
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Set &lt;code&gt;root&lt;/code&gt; user password and &lt;code&gt;replica-set-key&lt;/code&gt; as environment variables
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export MONGODB_ROOT_PASSWORD=root123
export MONGODB_REPLICA_SET_KEY=root123
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install MongoDB on OpenShift using helm. Make sure to set the required SecurityContext so that helm can deploy MongoDB on OpenShift
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install mongodb bitnami/mongodb --set podSecurityContext.fsGroup="",containerSecurityContext.runAsUser="1001080001",podSecurityContext.enabled=false,architecture=replicaset,auth.replicaSetKey=$MONGODB_REPLICA_SET_KEY,auth.rootPassword=$MONGODB_ROOT_PASSWORD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
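
&lt;p&gt;If you prefer, the same settings can be kept in a values file instead of a long &lt;code&gt;--set&lt;/code&gt; string (equivalent keys, just in YAML; the filename is mine):&lt;/p&gt;

```yaml
# values-openshift.yaml — mirrors the --set flags above
architecture: replicaset
auth:
  replicaSetKey: root123   # value of MONGODB_REPLICA_SET_KEY from the earlier step
  rootPassword: root123    # value of MONGODB_ROOT_PASSWORD from the earlier step
podSecurityContext:
  enabled: false
  fsGroup: ""
containerSecurityContext:
  runAsUser: 1001080001
```

&lt;p&gt;Then install with &lt;code&gt;helm install mongodb bitnami/mongodb -f values-openshift.yaml&lt;/code&gt;.&lt;/p&gt;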



&lt;ul&gt;
&lt;li&gt;Wait for the deployment to be ready, you can run &lt;code&gt;oc get po&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Get the root password (optional)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace ksingh mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a MongoDB client container and verify connectivity and DB access
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl run --namespace ksingh mongodb-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:4.4.13-debian-10-r9 --command -- bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;From the client container shell, connect to the MongoDB cluster
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Option-1 Using host addres
mongo admin --host "mongodb-0.mongodb-headless.ksingh.svc.cluster.local:27017,mongodb-1.mongodb-headless.ksingh.svc.cluster.local:27017" --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Option-2 Using MongoDB URI
mongo "mongodb://mongodb-0.mongodb-headless.ksingh.svc.cluster.local:27017,mongodb-1.mongodb-headless.ksingh.svc.cluster.local:27017" --authenticationDatabase admin  -u root -p $MONGODB_ROOT_PASSWORD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;List the databases &lt;code&gt;show dbs&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create a database &lt;code&gt;mydb&lt;/code&gt; and create a document in collection &lt;code&gt;post&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use mydb

db.post.insert([
  {
    title: "MongoDB to Kafka testing",
    description: "Debezium connector",
    by: "Karan",
    url: "http://redhat.com",
    tags: ["mongodb", "debezium", "ROSAK"],
    likes: 100
  }
])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify the document is created
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;show dbs
db.post.find()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Exit the mongodb client container &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you want to connect to the MongoDB cluster from localhost, forward the port
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward service/mongodb-external-0 27017 &amp;amp;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Connect using MongoDB CLI from localhost
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongo --host 127.0.0.1 --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;(optionally) Connect using MongoDB Compass or MongoDB Shell
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# MongoDB Compass &amp;gt; New Connection
mongodb://root:&lt;MONGODB_ROOT_PASSWORD&gt;@127.0.0.1:27017
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fksingh7%2Fblogs%2Fmain%2Fposts%2Fassets%2Fthats-all-folks.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fksingh7%2Fblogs%2Fmain%2Fposts%2Fassets%2Fthats-all-folks.gif"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>openshift</category>
      <category>helm</category>
    </item>
    <item>
      <title>Golang automatic code formatting : Code like a Pro</title>
      <dc:creator>karan singh</dc:creator>
      <pubDate>Tue, 22 Feb 2022 07:31:02 +0000</pubDate>
      <link>https://forem.com/ksingh7/golang-automatic-code-formatting-code-like-a-pro-205a</link>
      <guid>https://forem.com/ksingh7/golang-automatic-code-formatting-code-like-a-pro-205a</guid>
      <description>&lt;h3&gt;
  
  
  Why Format your code?
&lt;/h3&gt;

&lt;p&gt;Everyone loves clean, readable, beautifully organized code: consistent tabs/spaces (whatever you like), short lines, and so on. As a developer, you should not spend time counting tabs and spaces while writing code; let the tools handle your code formatting for you, automatically.&lt;/p&gt;

&lt;p&gt;In this short post, you will learn how to use &lt;code&gt;golines&lt;/code&gt; to automagically format all your Golang code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementing Golines: a Golang formatter
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Install golines
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go install github.com/segmentio/golines@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Use golines from VSCode: go into the VSCode settings menu, scroll down to the section for the &lt;code&gt;Run on Save&lt;/code&gt; extension, click the &lt;code&gt;Edit in settings.json&lt;/code&gt; link, and set the &lt;code&gt;emeraldwalk.runonsave&lt;/code&gt; key as follows
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"emeraldwalk.runonsave": {
    "commands": [
        {
            "match": "\\.go$",
            "cmd": "golines ${file} -w"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Save the settings and restart VSCode&lt;/li&gt;
&lt;li&gt;(optional) Use golines from the CLI
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;golines -w "path to *.go files"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;golines together with VSCode helps you autoformat your code.&lt;/p&gt;

&lt;h4&gt;From&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myMap := map[string]string{"first key": "first value", "second key": "second value", "third key": "third value", "fourth key": "fourth value", "fifth key": "fifth value"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;To&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myMap := map[string]string{
    "first key": "first value",
    "second key": "second value",
    "third key": "third value",
    "fourth key": "fourth value",
    "fifth key": "fifth value",
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Isn't this beautiful? I know it is ;)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fksingh7%2Fblogs%2Fmain%2Fposts%2Fassets%2Fthats-all-folks.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fksingh7%2Fblogs%2Fmain%2Fposts%2Fassets%2Fthats-all-folks.gif"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>go</category>
      <category>formatting</category>
      <category>vscode</category>
    </item>
    <item>
      <title>MongoDB Change Streams Implementation in Golang</title>
      <dc:creator>karan singh</dc:creator>
      <pubDate>Sat, 19 Feb 2022 15:03:42 +0000</pubDate>
      <link>https://forem.com/ksingh7/mongodb-change-streams-implementation-in-golang-49lp</link>
      <guid>https://forem.com/ksingh7/mongodb-change-streams-implementation-in-golang-49lp</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KjZkwYhL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/ksingh7/blogs/main/posts/assets/mongodb-change-streams.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KjZkwYhL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/ksingh7/blogs/main/posts/assets/mongodb-change-streams.png" alt="" width="838" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Change Streams?
&lt;/h2&gt;

&lt;p&gt;A change stream is a near real-time, ordered flow of information about changes to items in a database, a table/collection, or a row/document. For example, whenever a write (insert, update, or delete) occurs in a specific collection/table, the database triggers a change event carrying the data that was modified.&lt;/p&gt;

&lt;h2&gt;
  
  
  MongoDB Change Streams
&lt;/h2&gt;

&lt;p&gt;MongoDB change streams provide a high-level API that can notify an application of changes to a MongoDB database, collection, or cluster, without resorting to polling (which would come with much higher overhead). Characteristics of MongoDB change streams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Filterable

&lt;ul&gt;
&lt;li&gt;Applications can filter changes to receive only those change notifications they need.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Resumable 

&lt;ul&gt;
&lt;li&gt;Change streams are resumable because each response comes with a resume token. Using the token, an application can start the stream where it left off (if it ever disconnects).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Ordered

&lt;ul&gt;
&lt;li&gt;Change notifications occur in the same order that the database was updated.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Durable

&lt;ul&gt;
&lt;li&gt;Change streams only include majority-committed changes. This is so every change seen by listening applications is durable in failure scenarios, such as electing a new primary.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Secure 

&lt;ul&gt;
&lt;li&gt;Only users with rights to read a collection can create a change stream on that collection.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Easy to use

&lt;ul&gt;
&lt;li&gt;The syntax of the change streams API uses the existing MongoDB drivers and query language.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Experimenting with MongoDB Change Stream using Golang
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;MongoDB Atlas Cluster, get it for free at &lt;a href="https://www.mongodb.com/cloud/atlas"&gt;https://www.mongodb.com/cloud/atlas&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Codebase is available at &lt;a href="https://github.com/ksingh7/mongodb-change-events-go.git"&gt;https://github.com/ksingh7/mongodb-change-events-go.git&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Getting Started with MongoDB Streams: Golang Implementation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# export MongoDB URI

export MONGODB_URI="mongodb+srv://admin:xxxxx@cluster0.ii90w.mongodb.net/myFirstDatabase?retryWrites=true&amp;amp;w=majority"

git clone https://github.com/ksingh7/mongodb-change-events-go.git
cd mongodb-change-events-go
go mod tidy
go run main.go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Demo Video
&lt;/h3&gt;

&lt;p&gt;Here is my demo video recording, which can help you understand this implementation.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/kJw8gYh5-7s"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Walkthrough
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;The &lt;code&gt;main.go&lt;/code&gt; file is already annotated with explanatory comments. In this section, I will highlight the parts that I think are crucial.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Declaring the structs returned by the MongoDB change stream API
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;DbEvent&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;DocumentKey&lt;/span&gt;     &lt;span class="n"&gt;documentKey&lt;/span&gt;     &lt;span class="s"&gt;`bson:"documentKey"`&lt;/span&gt;
    &lt;span class="n"&gt;OperationType&lt;/span&gt;    &lt;span class="kt"&gt;string&lt;/span&gt;                   &lt;span class="s"&gt;`bson:"operationType"`&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;documentKey&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ID&lt;/span&gt;      &lt;span class="n"&gt;primitive&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ObjectID&lt;/span&gt;      &lt;span class="s"&gt;`bson:"_id"`&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Declaring a struct that mirrors the collection's documents
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ID&lt;/span&gt;               &lt;span class="n"&gt;primitive&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ObjectID&lt;/span&gt;       &lt;span class="s"&gt;`bson:"_id"`&lt;/span&gt;
    &lt;span class="n"&gt;UserID&lt;/span&gt;        &lt;span class="kt"&gt;string&lt;/span&gt;                            &lt;span class="s"&gt;`bson:"userID"`&lt;/span&gt;
    &lt;span class="n"&gt;DeviceType&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;                            &lt;span class="s"&gt;`bson:"deviceType"`&lt;/span&gt;
    &lt;span class="n"&gt;GameState&lt;/span&gt;   &lt;span class="kt"&gt;string&lt;/span&gt;                            &lt;span class="s"&gt;`bson:"gameState"`&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Connect to MongoDB
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;    &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;mongo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TODO&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ApplyURI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"MONGODB_URI"&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nb"&gt;panic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Set DB and Collection names
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;    &lt;span class="n"&gt;database&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Database&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"summit-demo"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;collection&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;database&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"bike-factory"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a change stream
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;    &lt;span class="n"&gt;changeStream&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;collection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Watch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TODO&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;mongo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Pipeline&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nb"&gt;panic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Iterate over the change stream
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;changeStream&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Next&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TODO&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;change&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;changeStream&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Current&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"%+v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;change&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Detect the change type (insert or update) and fetch the document accordingly
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;        &lt;span class="c"&gt;// Print out the document that was inserted or updated&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;DbEvent&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;OperationType&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s"&gt;"insert"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt;  &lt;span class="n"&gt;DbEvent&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;OperationType&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s"&gt;"update"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c"&gt;// Find the mongodb document based on the objectID&lt;/span&gt;
            &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;
            &lt;span class="n"&gt;err&lt;/span&gt;  &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;collection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;FindOne&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TODO&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;DbEvent&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DocumentKey&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="c"&gt;// Convert changd MongoDB document from BSON to JSON&lt;/span&gt;
            &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;writeErr&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;bson&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MarshalExtJSON&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;writeErr&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;writeErr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="c"&gt;// Print the changed document in JSON format&lt;/span&gt;
            &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Close the change stream
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;changeStream&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Close&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TODO&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nb"&gt;panic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Bonus: A function to insert records into a MongoDB collection
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;insertRecord&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;collection&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;mongo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Collection&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c"&gt;// pre-populated values for DeviceType and GameState    &lt;/span&gt;
        &lt;span class="n"&gt;DeviceType&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;DeviceType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;DeviceType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"mobile"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"laptop"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"karan-board"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"tablet"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"desktop"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"smart-watch"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;GameState&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;GameState&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;GameState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"playing"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"paused"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"stopped"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"finished"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s"&gt;"failed"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c"&gt;// insert new records to MongoDB every 5 seconds&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;primitive&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewObjectID&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
                &lt;span class="n"&gt;UserID&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;strconv&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Itoa&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rand&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Intn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
                &lt;span class="n"&gt;DeviceType&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;DeviceType&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;rand&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Intn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;DeviceType&lt;/span&gt;&lt;span class="p"&gt;))],&lt;/span&gt;
                &lt;span class="n"&gt;GameState&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;GameState&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;rand&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Intn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;GameState&lt;/span&gt;&lt;span class="p"&gt;))],&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;collection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;InsertOne&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TODO&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;

            &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Hope this post gives you a better understanding of MongoDB Change Streams and how to use them in your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mMblqx4b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://raw.githubusercontent.com/ksingh7/blogs/main/posts/assets/thats-all-folks.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mMblqx4b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://raw.githubusercontent.com/ksingh7/blogs/main/posts/assets/thats-all-folks.gif" alt="" width="249" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>go</category>
    </item>
    <item>
      <title>Demo : Twitter streaming and sentiment analysis using Kafka, OCS, MongoDB &amp; OpenShift (Kubernetes)</title>
      <dc:creator>karan singh</dc:creator>
      <pubDate>Tue, 15 Feb 2022 20:25:09 +0000</pubDate>
      <link>https://forem.com/ksingh7/demo-twitter-streaming-and-sentiment-analysis-using-kafka-ocs-mongodb-openshift-kubernetes-161f</link>
      <guid>https://forem.com/ksingh7/demo-twitter-streaming-and-sentiment-analysis-using-kafka-ocs-mongodb-openshift-kubernetes-161f</guid>
      <description>&lt;p&gt;You know tech tools are cool, but unless you have a defined use case it's hard to put things into perspective and understand how different tools can interact with each other, help solve a problem or explore new use cases.&lt;/p&gt;

&lt;p&gt;So, to educate and motivate our technical buyers, sellers, and customers, I created a fancy use case: ingesting live Twitter tweets and applying sentiment analysis to them. For this demo, I used the following tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Twitter API&lt;/strong&gt; : Realtime streaming data source&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Red Hat AMQ Streams&lt;/strong&gt; : Apache Kafka cluster to store real-time streaming data coming into the system&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MongoDB&lt;/strong&gt; : Storing tweets for long term persistence from Kafka into a schema-less NoSQL database&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Red Hat OpenShift Container Storage&lt;/strong&gt; : Used for providing RWO (in this project), RWX, Object Storage persistence storage for Kafka and MongoDB apps running on OpenShift&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Red Hat OpenShift Container Platform&lt;/strong&gt; : Enterprise-grade k8s distribution for container apps&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Aylien&lt;/strong&gt; : Sentiment analysis backend&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Python&lt;/strong&gt; : Backend API app to trigger data sourcing from Twitter, move data from Kafka to MongoDB, and serve data to the frontend app&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Frontend&lt;/strong&gt; : Basic HTML, CSS, and JavaScript frontend to plot some graphs&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This slide deck should give you a glimpse of what the demo looks like (YouTube and GitHub links below).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.slideshare.net/alohamora/demo-twitter-sentiment-analysis-on-kubernetes-using-kafka-mongodb-with-openshift-container-storage"&gt;Slide Deck&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is the actual demo recording, where I explain how these components work together and make this a viable solution if you have a real-world use case along the same lines:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/ngolragtNto"&gt;YouTube Video Link&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;If you are interested in running this demo yourself, you can find the code in my GitHub repo: &lt;a href="https://github.com/ksingh7/twitter_streaming_app_on_openshift_OCS"&gt;https://github.com/ksingh7/twitter_streaming_app_on_openshift_OCS&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Happy Analysing Live Tweets&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>twitter</category>
    </item>
    <item>
      <title>Connecting to a Kafka Cluster running on Kubernetes from your Local Machine: CLI &amp; Programmatic Access</title>
      <dc:creator>karan singh</dc:creator>
      <pubDate>Tue, 15 Feb 2022 20:25:06 +0000</pubDate>
      <link>https://forem.com/ksingh7/connecting-to-kafka-cluster-running-on-kubernetes-from-your-local-machine-cli-programatic-access-37ld</link>
      <guid>https://forem.com/ksingh7/connecting-to-kafka-cluster-running-on-kubernetes-from-your-local-machine-cli-programatic-access-37ld</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Why do you need this?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For local development, you may want to connect to a remote Kafka cluster running on OpenShift that was deployed using the Strimzi Operator&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisite
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;OpenShift Container Platform or OKD&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Strimzi Operator deployed&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Deploy Kafka Cluster&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a YAML file with these contents (only for dev/test clusters)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
      namespace: nestjs-testing
    spec:
      entityOperator:
        topicOperator: {}
        userOperator: {}
      kafka:
        config:
          inter.broker.protocol.version: "2.8"
          log.message.format.version: "2.8"
          offsets.topic.replication.factor: 3
          transaction.state.log.min.isr: 2
          transaction.state.log.replication.factor: 3
        listeners:
        - name: plain
          port: 9092
          tls: false
          type: internal
        - name: tls
          port: 9093
          tls: true
          type: internal
        - name: route
          port: 9094
          tls: true
          type: route
        replicas: 3
        storage:
          type: ephemeral
        version: 2.8.0
      zookeeper:
        replicas: 3
        storage:
          type: ephemeral
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
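&lt;p&gt;Then apply the manifest and wait for the Strimzi Operator to report the cluster as ready (this sketch assumes the YAML above is saved as &lt;code&gt;kafka-cluster.yaml&lt;/code&gt;; the filename is arbitrary):&lt;/p&gt;

```shell
# Apply the Kafka custom resource; the Strimzi Operator reconciles it
# into broker and ZooKeeper pods. Assumes you are logged in with `oc`.
oc apply -f kafka-cluster.yaml

# Block until the operator marks the cluster Ready (up to 5 minutes).
oc wait kafka/my-cluster --for=condition=Ready --timeout=300s -n nestjs-testing
```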



&lt;h3&gt;
  
  
  Preparing to Connect
&lt;/h3&gt;

&lt;p&gt;Extract the cluster CA certificate and import it into a Java truststore so that TLS clients can trust the route listener:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d &amp;gt; ca.crt

    keytool -import -trustcacerts -alias root -file ca.crt -keystore truststore.jks -storepass password -noprompt

    # This should create 2 files in PWD

    ls -l *.crt *.jks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Grab Kafka Endpoint
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;KAFKA_ENDPOINT=$(oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.type=="route")].bootstrapServers}{"\n"}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Connecting from CLI (Kafka Console Producer/Consumer)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Get Kafka Console Producer &amp;amp; Consumer script files
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    wget [https://dlcdn.apache.org/kafka/3.0.0/kafka_2.13-3.0.0.tgz](https://dlcdn.apache.org/kafka/3.0.0/kafka_2.13-3.0.0.tgz) ; tar -xvf kafka_2.13-3.0.0.tgz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Console Producer
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    kafka_2.13-3.0.0/bin/kafka-console-producer.sh --broker-list $KAFKA_ENDPOINT --producer-property security.protocol=SSL --producer-property ssl.truststore.password=password --producer-property ssl.truststore.location=truststore.jks --topic my-topic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Console Consumer
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    kafka_2.13-3.0.0/bin/kafka-console-consumer.sh --bootstrap-server $KAFKA_ENDPOINT --topic my-topic --from-beginning  --consumer-property security.protocol=SSL --consumer-property ssl.truststore.password=password --consumer-property ssl.truststore.location=truststore.jks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Connecting from Python Client (running locally)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from kafka import KafkaProducer, KafkaConsumer
import json
from bson import json_util

bootstrap_server = 'my-cluster-kafka-route-bootstrap-nestjs-testing.apps.ocp.ceph-s3.com:443'

print("Producing messages to Kafka topic ...")
producer = KafkaProducer(bootstrap_servers=bootstrap_server, ssl_cafile='ca.crt', security_protocol="SSL")

for i in range(10):
    message = {'value': i}
    producer.send('my-topic', json.dumps(message, default=json_util.default).encode('utf-8'))

print("Consuming messages from Kafka topic ...")

consumer = KafkaConsumer('my-topic',  group_id='my-group', bootstrap_servers=bootstrap_server, ssl_cafile='ca.crt', security_protocol="SSL", consumer_timeout_ms=10000, enable_auto_commit=True)
for message in consumer:
    # message value and key are raw bytes -- decode if necessary!
    # e.g., for unicode: `message.value.decode('utf-8')`
    print ("%s:%d:%d: value=%s" % (message.topic, message.partition,message.offset,message.value))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AIs-Hc882G5EdnS4nb4vZFA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AIs-Hc882G5EdnS4nb4vZFA.png" alt="Output of Kafka Python Producer &amp;amp; Consumer example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is how you can connect to a remote Kafka cluster from your local machine. This is handy when you are developing locally and eventually deploying that to your OpenShift environment.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>kafka</category>
      <category>strimzi</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>The most elegant way to performance test your microservices running on Kubernetes</title>
      <dc:creator>karan singh</dc:creator>
      <pubDate>Tue, 15 Feb 2022 20:24:17 +0000</pubDate>
      <link>https://forem.com/ksingh7/the-most-elegant-way-to-performance-test-your-microservices-running-on-kubernetes-2mo2</link>
      <guid>https://forem.com/ksingh7/the-most-elegant-way-to-performance-test-your-microservices-running-on-kubernetes-2mo2</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“If you cannot measure it, you cannot improve it.” — Lord Kelvin&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Application programming interfaces ( &lt;a href="https://developers.redhat.com/topics/api-management" rel="noopener noreferrer"&gt;APIs&lt;/a&gt;) are the core system of most services. Client, web, and mobile applications are all built from APIs. They sit on the critical path between an end-user and a service, and they’re also used for intra-service communication.&lt;/p&gt;

&lt;p&gt;Because APIs are so critical, API performance is also essential. It doesn’t matter how well-built your front-end application is if the API data sources it accesses take several seconds to respond. This is especially true in a world of &lt;a href="https://developers.redhat.com/topics/microservices" rel="noopener noreferrer"&gt;microservices&lt;/a&gt;, where services depend on each other to provide data. In my opinion, the best feature your API can offer is great performance.&lt;/p&gt;

&lt;p&gt;To measure API performance, you need to benchmark your APIs as reliably as possible, which can be challenging. The optimal approach depends on your performance objectives. In this article, I’ll guide you through an elegant process for measuring the performance of backend applications running on &lt;a href="https://developers.redhat.com/openshift" rel="noopener noreferrer"&gt;Red Hat OpenShift&lt;/a&gt; or &lt;a href="https://developers.redhat.com/topics/kubernetes" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;. You’ll also learn how to use &lt;a href="https://github.com/tsenart/vegeta" rel="noopener noreferrer"&gt;Vegeta&lt;/a&gt;, a versatile HTTP load testing and benchmarking tool written in &lt;a href="https://developers.redhat.com/topics/go" rel="noopener noreferrer"&gt;Golang&lt;/a&gt;. We will deploy Vegeta on OpenShift and run performance tests in both standalone and distributed modes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Standalone benchmarking with Vegeta
&lt;/h2&gt;

&lt;p&gt;To run performance tests, you’ll need an API endpoint to test. I’ve provided a simple Go-based application that you will deploy on OpenShift. Once the application is deployed, we’ll apply various loads using Vegeta, as Figure 1 illustrates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2A6ByDVg-sZHKZgMkhB6YY_w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2A6ByDVg-sZHKZgMkhB6YY_w.png" alt="Figure 1. Triggering a performance test from your local machine."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can follow along with this example on your own OpenShift cluster if you have access to one; otherwise, you can use the &lt;a href="https://developers.redhat.com/developer-sandbox/get-started" rel="noopener noreferrer"&gt;Developer Sandbox for Red Hat OpenShift&lt;/a&gt;, which is free of charge with a Red Hat account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set up the example application
&lt;/h2&gt;

&lt;p&gt;To begin, log in to your OpenShift cluster from the command line and run the following commands to create a simple GET API in Go.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    oc new-project perf-testing 
    oc new-app golang~https://github.com/sclorg/golang-ex.git --name=golang-service1 
    oc expose deployment/golang-service1 --port=8888 
    oc expose service/golang-service1 
    oc get route golang-service1 
    curl http://$(oc get route golang-service1 -o json | jq -r .spec.host)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Set up the Vegeta benchmarking environment
&lt;/h2&gt;

&lt;p&gt;Next, install Vegeta on your local machine. Use this command if you’re running macOS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    brew update &amp;amp;&amp;amp; brew install vegeta
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use this command on &lt;a href="https://developers.redhat.com/topics/linux" rel="noopener noreferrer"&gt;Linux&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    wget https://github.com/tsenart/vegeta/releases/download/v12.8.4/vegeta_12.8.4_linux_amd64.tar.gz -O /tmp/vegeta.tar.gz 

    tar -xvf /tmp/vegeta.tar.gz 
    sudo mv vegeta /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Launch your benchmarking process
&lt;/h2&gt;

&lt;p&gt;Now, you’re ready to launch the benchmarking process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    echo "GET http://$(oc get route golang-service1 -o json | jq -r .spec.host)" | vegeta attack -duration=60s | vegeta report
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to read the Vegeta output
&lt;/h2&gt;

&lt;p&gt;Vegeta’s output is largely straightforward; here’s an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ## Output

    Requests      [total, rate, throughput]         3000, 50.02, 49.84
    Duration      [total, attack, wait]             1m0s, 59.978s, 214.968ms
    Latencies     [min, mean, 50, 90, 95, 99, max]  204.638ms, 217.337ms, 214.49ms, 222.256ms, 227.075ms, 394.248ms, 492.278ms
    Bytes In      [total, mean]                     51000, 17.00
    Bytes Out     [total, mean]                     0, 0.00
    Success       [ratio]                           100.00%
    Status Codes  [code:count]                      200:3000
    Error Set:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some of the more important metrics here are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Requests&lt;/strong&gt;: The total number of requests, their rate per second, and their throughput.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Latencies&lt;/strong&gt;: The minimum, mean, percentile (50th, 90th, 95th, 99th), and maximum response latencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Success&lt;/strong&gt;: The percentage of requests that were successful.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Status Codes&lt;/strong&gt;: Each HTTP status code returned and the number of requests that received it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To summarize the output of this first test: When we attempted to access the single pod of the service golang-service1 over the internet, we found a mean latency of around 217 milliseconds at 50 requests per second. This is a good indication that the application is working as expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing API performance with Vegeta
&lt;/h2&gt;

&lt;p&gt;Now, let’s get more serious. Run a test with 64 parallel workers, without any throttling or rate-limiting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    echo "GET http://$(oc get route golang-service1 -o json | jq -r .spec.host)"| vegeta attack -duration=60s -rate=0 -max-workers=64 | vegeta report
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s the output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ## Output

    Requests      [total, rate, throughput]         17908, 298.44, 297.38
    Duration      [total, attack, wait]             1m0s, 1m0s, 214.692ms
    Latencies     [min, mean, 50, 90, 95, 99, max]  201.543ms, 214.793ms, 214.57ms, 222.524ms, 224.861ms, 228.75ms, 563.581ms
    Bytes In      [total, mean]                     304436, 17.00
    Bytes Out     [total, mean]                     0, 0.00
    Success       [ratio]                           100.00%
    Status Codes  [code:count]                      200:17908
    Error Set:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the load increased to 64 threads, we got a mean latency of around 214 milliseconds at 298 requests per second, a rate six times higher than what we saw in the previous test. The latency basically stayed constant (it actually dipped just a bit) as the number of requests per second increased, which is great.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: We are stress-testing a Golang app running on a single pod, hosted on a shared OpenShift cluster over the internet (in this case, the cluster is hosted on the Developer Sandbox). This is just an example to show you how to quickly run a performance test against your own application; it does not represent the real-world performance of any component.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmarking Kubernetes service names in a cluster
&lt;/h2&gt;

&lt;p&gt;In the previous test, you benchmarked an internet-facing service endpoint. In this test, you’ll use the locally accessible Kubernetes service name and run a performance test against that, as illustrated in Figure 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2ALk8Wr4ZcOCGFKNaiz-5Usg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2ALk8Wr4ZcOCGFKNaiz-5Usg.png" alt="Figure 2. Triggering a microservices performance test from within an OpenShift cluster."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Launch Vegeta as a pod in the same namespace (project) as your service, then run the same test that you ran previously:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    oc run vegeta --rm --attach --restart=Never --image="quay.io/karansingh/vegeta-ubi" -- sh -c \
    "echo 'GET http://golang-service1:8888' | vegeta attack -duration=60s -rate=0 -max-workers=64 | vegeta report"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s the output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ## Output

    If you don't see a command prompt, try pressing enter.
    Requests      [total, rate, throughput]         732977, 12205.54, 12205.23
    Duration      [total, attack, wait]             1m0s, 1m0s, 1.514ms
    Latencies     [min, mean, 50, 90, 95, 99, max]  201.313µs, 3.133ms, 472.767µs, 1.751ms, 3.585ms, 80.58ms, 102.89ms
    Bytes In      [total, mean]                     12460609, 17.00
    Bytes Out     [total, mean]                     0, 0.00
    Success       [ratio]                           100.00%
    Status Codes  [code:count]                      200:732977
    Error Set:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The service is now seeing about 12,000 requests per second, and the mean latency is 3 milliseconds. These improved results should come as no surprise: All the traffic is staying within OpenShift, unlike the previous test in which Vegeta connected to the Golang service over the internet.&lt;/p&gt;

&lt;h2&gt;
  
  
  A distributed load test for parallel containerized workloads
&lt;/h2&gt;

&lt;p&gt;Next, let’s try a benchmarking test that’s closer to a real-world example. You’ll run the same test again, using the Golang application’s Kubernetes service name. But this time, you’ll launch multiple Vegeta pods, all hammering your backend microservice in parallel, as illustrated in Figure 3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AuLUmWyNivtjvIUN3WBVJWw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AuLUmWyNivtjvIUN3WBVJWw.png" alt="Figure 3. Triggering a distributed performance test on your microservices."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start by scaling the Go application deployment to 10 replicas, which should make things more interesting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    oc scale deployment/golang-service1 --replicas=10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The best way to launch a distributed load test is to use OpenShift’s Job object, which provides the flexibility to launch parallel containerized workloads. Create a YAML file named vegeta-job.yaml with the following content. It sets parallelism to 10, which launches 10 Vegeta pods that attack the Golang service in parallel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    apiVersion: batch/v1
    kind: Job
    metadata:
      name: vegeta
    spec:
      parallelism: 10
      completions: 10
      backoffLimit: 0
      template:
        metadata:
          name: vegeta
        spec:
          containers:
          - name: vegeta
            image: quay.io/karansingh/vegeta-ubi
            command: ["/bin/sh","-c"]
            args: ["echo 'GET http://golang-service1:8888' | vegeta attack -duration=60s -rate=0 -max-workers=64 | tee /tmp/results.bin ; sleep 600" ]
          restartPolicy: OnFailure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply this file to the OpenShift cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    oc create -f vegeta-job.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait a minute or two for Vegeta to complete its test run. Then execute the following command, which copies the binary output files from all 10 Vegeta pods onto your local machine (where you installed the Vegeta binary at the beginning of this article) and generates a final aggregated performance report:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    for i in $(oc get po | grep -i vegeta | awk '{print $1}') ; do oc cp $i:tmp/results.bin $i.bin &amp;amp; done ; fg vegeta report *.bin ;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s the output from the distributed test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ## Output

    Requests [total, rate, throughput] 5651742, 88071.28, 88070.66
    Duration [total, attack, wait] 1m4s, 1m4s, 449.276µs
    Latencies [min, mean, 50, 90, 95, 99, max] 69.075µs, 5.538ms, 1.476ms, 16.563ms, 27.235ms, 47.554ms, 333.121ms
    Bytes In [total, mean] 96079614, 17.00
    Bytes Out [total, mean] 0, 0.00
    Success [ratio] 100.00%
    Status Codes [code:count] 200:5651742
    Error Set:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this test, the Golang service delivered a mean latency of around 5.5 milliseconds at 88,070 requests per second, which works out to about 5.2 million requests per minute. That’s pretty impressive performance.&lt;/p&gt;
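
The conversion from requests per second to requests per minute is simple arithmetic; as a quick shell sanity check of the figures above:

```shell
# Back-of-the-envelope check: requests per second times 60 seconds
# gives requests per minute.
rps=88071
rpm=$(( rps * 60 ))
echo "$rpm requests per minute"   # 5284260, roughly 5.2 million
```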

&lt;h2&gt;
  
  
  Tidy up your environment
&lt;/h2&gt;

&lt;p&gt;After my experiments, I like to clean up my system. You can tidy up your own machine with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    oc delete -f vegeta-job.yaml oc delete project perf-testing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There is a great saying from the physicist and engineer Lord Kelvin: “If you cannot measure it, you cannot improve it.” In this article, you’ve learned an elegant method for testing API performance in your distributed microservices applications. You can use the techniques introduced here to benchmark your next great backend microservice application running on OpenShift or Kubernetes.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>performance</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Backing up Prometheus using TSDB Snapshots : Kubernetes/OpenShift</title>
      <dc:creator>karan singh</dc:creator>
      <pubDate>Tue, 15 Feb 2022 11:30:00 +0000</pubDate>
      <link>https://forem.com/ksingh7/backing-up-prometheus-using-tsdb-snapshots-kubernetesopenshift-2pdi</link>
      <guid>https://forem.com/ksingh7/backing-up-prometheus-using-tsdb-snapshots-kubernetesopenshift-2pdi</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;These are my quick-and-dirty brain-dump notes to myself on how to back up a Prometheus database running on Kubernetes or OpenShift&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Get Token for API Authentication and Prometheus API Route URL
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    oc whoami -t
    oc get route -n openshift-monitoring | grep -i prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run sample curl request
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    curl -ks -H ‘Authorization: Bearer 0za4LjX9xPcqDjhWaufkgcQGo4grqA7ws4zvHrqgfY4’ ‘[https://prometheus-k8s-openshift-monitoring.apps.ocp4.cp4d.com/api/v1/query?query=ALERTS'](https://prometheus-k8s-openshift-monitoring.apps.ocp4.cp4d.com/api/v1/query?query=ALERTS') | python -m json.tool
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create TSDB Snapshot
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    curl -X ‘POST’ -ks -H ‘Authorization: Bearer 0za4LjX9xPcqDjhWaufkgcQGo4grqA7ws4zvHrqgfY4’ ‘[https://prometheus-k8s-openshift-monitoring.apps.ocp4.cp4d.com/api/v2/admin/tsdb/snapshot'](https://prometheus-k8s-openshift-monitoring.apps.ocp4.cp4d.com/api/v2/admin/tsdb/snapshot') | python -m json.tool
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;You might get an error because the Admin APIs are disabled by default; in that case, you first need to enable them
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    {
     “error”: “Admin APIs are disabled”,
     “message”: “Admin APIs are disabled”,
     “code”: 14
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Enable AdminAPI
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    oc -n openshift-monitoring patch prometheus k8s \
     — type merge — patch ‘{“spec”:{“enableAdminAPI”:true}}’
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify Admin API is enabled
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    oc describe po prometheus-k8s-1 | grep -i admin
     — web.enable-admin-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Hit TSDB snapshot API to take snapshot
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    curl -X ‘POST’ -ks -H ‘Authorization: Bearer 0za4LjX9xPcqDjhWaufkgcQGo4grqA7ws4zvHrqgfY4’ ‘[https://prometheus-k8s-openshift-monitoring.apps.ocp4.cp4d.com/api/v2/admin/tsdb/snapshot'](https://prometheus-k8s-openshift-monitoring.apps.ocp4.cp4d.com/api/v2/admin/tsdb/snapshot')
     | python -m json.tool
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    {
     “name”: “20210512T162601Z-33415dbd315ae6af”
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Find the snapshot and copy it locally&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The default folder is /prometheus/snapshots/ but you can find the data folder by finding the &lt;code&gt;--storage.tsdb.path&lt;/code&gt; config in your deployment.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
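
As a minimal sketch of extracting that flag, here is the parsing step run over a canned sample of a Prometheus container's arguments (the sample values are illustrative; on a real cluster you would read the actual arguments from the pod spec, e.g. via oc describe po prometheus-k8s-0):

```shell
# Illustrative only: a sample Prometheus argument list, one flag per line.
args='--config.file=/etc/prometheus/config_out/prometheus.env.yaml
--storage.tsdb.path=/prometheus
--web.enable-admin-api'

# Pull the data directory out of the --storage.tsdb.path flag.
tsdb_path=$(printf '%s\n' "$args" | sed -n 's/^--storage\.tsdb\.path=//p')
echo "$tsdb_path"             # /prometheus
echo "$tsdb_path/snapshots"   # where snapshots are written
```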

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    curl -X ‘POST’ -ks -H ‘Authorization: Bearer 0za4LjX9xPcqDjhWaufkgcQGo4grqA7ws4zvHrqgfY4’ ‘[https://prometheus-k8s-openshift-monitoring.apps.ocp4.cp4d.com/api/v1/admin/tsdb/snapshot'](https://prometheus-k8s-openshift-monitoring.apps.ocp4.cp4d.com/api/v1/admin/tsdb/snapshot') | python -m json.tool
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;List the snapshot directory
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    oc -n openshift-monitoring exec -it prometheus-k8s-0 -c prometheus — /bin/sh -c “ls /prometheus/snapshots/20210512T162601Z-33415dbd315ae6af”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Copy the Snapshot from prometheus container to local machine
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    oc project openshift-monitoring
    oc rsync prometheus-k8s-0:/prometheus/snapshots/ /home/prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;And this is how you back up a Prometheus snapshot to a local machine&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>prometheus</category>
      <category>backup</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>LMDTFY Series : MultiCloud Vs Hybrid Cloud</title>
      <dc:creator>karan singh</dc:creator>
      <pubDate>Tue, 15 Feb 2022 09:16:02 +0000</pubDate>
      <link>https://forem.com/ksingh7/lmdtfy-series-multicloud-vs-hybrid-cloud-52a7</link>
      <guid>https://forem.com/ksingh7/lmdtfy-series-multicloud-vs-hybrid-cloud-52a7</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aFAl2idN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/ksingh7/blogs/main/posts/assets/multi-cloud-hybrid-cloud-min.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aFAl2idN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/ksingh7/blogs/main/posts/assets/multi-cloud-hybrid-cloud-min.png" alt="" width="880" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Difference between MultiCloud and Hybrid Cloud
&lt;/h2&gt;

&lt;h3&gt;
  
  
  TL;DR
&lt;/h3&gt;

&lt;p&gt;These two terms are very similar yet different. Lol ;D&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MultiCloud&lt;/strong&gt; : Refers to using multiple clouds from multiple public or private cloud providers for different workloads or tasks, without any interconnectivity between the clouds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hybrid Cloud&lt;/strong&gt; : Refers to the combination of public and private clouds with some degree of connectivity, integration, portability and unified management.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mMblqx4b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://raw.githubusercontent.com/ksingh7/blogs/main/posts/assets/thats-all-folks.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mMblqx4b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://raw.githubusercontent.com/ksingh7/blogs/main/posts/assets/thats-all-folks.gif" alt="" width="249" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>hybridcloud</category>
      <category>multicloud</category>
      <category>difference</category>
    </item>
    <item>
      <title>Allow Containers to run as root on OpenShift 4 : Hack</title>
      <dc:creator>karan singh</dc:creator>
      <pubDate>Mon, 14 Feb 2022 09:30:01 +0000</pubDate>
      <link>https://forem.com/ksingh7/allow-containers-to-run-as-root-on-openshift-4-hack-3gp7</link>
      <guid>https://forem.com/ksingh7/allow-containers-to-run-as-root-on-openshift-4-hack-3gp7</guid>
      <description>&lt;p&gt;🤫 Don’t tell anyone that i shared this trick with you&lt;/p&gt;

&lt;p&gt;Let me tell you that OpenShift is the most secure Kubernetes distribution on this planet. So OpenShift has the responsibility to secure your apps, which is why OpenShift does not allow containers to run as root.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“ First Principles : Never ever run your containers as root user”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Having said that, there are some instances when you find a pokemon container image on some random container repository and want to run it on your OpenShift homelab/dev/test clusters.&lt;/p&gt;

&lt;p&gt;Well, to do so, you need to allow the container image to run as root, and this is how you can do it.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Login to OpenShift as system:admin
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc login -u system:admin -n default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. Create a new project where you will be running that insecure container&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc new-project pokemon-prj
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Add the security policy &lt;code&gt;anyuid&lt;/code&gt; to the service account responsible for creating your deployment; by default this is the &lt;code&gt;default&lt;/code&gt; service account. The &lt;code&gt;-z&lt;/code&gt; flag indicates that we want to manipulate a service account&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc adm policy add-scc-to-user anyuid -z default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4. You are all set. Deploy or re-deploy your containers in the &lt;code&gt;pokemon-prj&lt;/code&gt; project; they should work now&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;  Don’t ever run containers as root in production environments&lt;/li&gt;
&lt;li&gt;  Don’t tell anyone that you learned this hack from this blog&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>openshift</category>
      <category>root</category>
      <category>container</category>
    </item>
    <item>
      <title>Export Medium Stories to Markdown format</title>
      <dc:creator>karan singh</dc:creator>
      <pubDate>Sun, 13 Feb 2022 20:54:42 +0000</pubDate>
      <link>https://forem.com/ksingh7/export-medium-stories-to-markdown-format-108b</link>
      <guid>https://forem.com/ksingh7/export-medium-stories-to-markdown-format-108b</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;If you are a frequent tech blogger like me, you would want to keep all your blog posts in Markdown format. If you have not thought about this until now, I invite you to give it some serious thought. &lt;/p&gt;

&lt;p&gt;Nothing beats a blog post in Markdown format. It is simple to read and write, works well for blogs, and brownie points if you store those posts in a git repository.&lt;/p&gt;

&lt;p&gt;I have been writing on Medium for a while now, using its built-in editor. Lately I have started to use Hashnode and dev.to, and I thought, why not move my Medium blog posts there (both platforms support Markdown)? However, Medium provides no easy way to export posts to Markdown.&lt;/p&gt;

&lt;h2&gt;
  
  
  mediumexporter to the rescue
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Install Node.js on your computer&lt;/li&gt;
&lt;li&gt;Then install the mediumexporter npm package
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g mediumexporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Finally, export your medium stories to Markdown format using mediumexporter
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mediumexporter https://ksingh7.medium.com/kubernetes-endpoint-object-your-bridge-to-external-services-3fc48263b776 &amp;gt; exported-blog.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This will create &lt;code&gt;exported-blog.md&lt;/code&gt; file in your present working directory, which is the markdown file you were looking for.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Like everything, this is not a perfect solution, but it is a good start. You might want to review the code and image formatting before re-posting the blog to other platforms.&lt;/p&gt;

&lt;p&gt;Special thanks to the creator of &lt;code&gt;mediumexporter&lt;/code&gt;, which I used to export my Medium stories to Markdown format. Thanks a lot and well done my friend :)&lt;/p&gt;

</description>
      <category>medium</category>
      <category>markdown</category>
      <category>export</category>
    </item>
    <item>
      <title>Minimalistic guide to Launch Azure Red Hat Openshift</title>
      <dc:creator>karan singh</dc:creator>
      <pubDate>Sun, 13 Feb 2022 10:19:54 +0000</pubDate>
      <link>https://forem.com/ksingh7/minimalistic-guide-to-launch-azure-red-hat-openshift-2jhe</link>
      <guid>https://forem.com/ksingh7/minimalistic-guide-to-launch-azure-red-hat-openshift-2jhe</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fksingh7%2Fblogs%2Frefs%2Ftags%2Faro-blog-v4%2Fposts%2Fassets%2Faro-1.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fksingh7%2Fblogs%2Frefs%2Ftags%2Faro-blog-v4%2Fposts%2Fassets%2Faro-1.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is ARO
&lt;/h2&gt;

&lt;p&gt;Azure Red Hat OpenShift (ARO) is a fully managed Red Hat OpenShift service on Azure, jointly engineered, managed, and supported by Microsoft and Red Hat. &lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Azure account with portal access&lt;/li&gt;
&lt;li&gt;Make sure your Azure User account has &lt;code&gt;Microsoft.Authorization/roleAssignments/write&lt;/code&gt; permissions, such as &lt;code&gt;User Access Administrator&lt;/code&gt; or &lt;code&gt;Owner&lt;/code&gt; &lt;a href="https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles" rel="noopener noreferrer"&gt;more info here&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fksingh7%2Fblogs%2Frefs%2Ftags%2Faro-blog-v4%2Fposts%2Fassets%2Faro-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Fksingh7%2Fblogs%2Frefs%2Ftags%2Faro-blog-v4%2Fposts%2Fassets%2Faro-2.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The default Azure resource quota for a new Azure subscription is 10 vCPUs, which does not meet ARO's minimum requirement of 40. Increase the quota from 10 to at least 40 &lt;a href="https://docs.microsoft.com/en-us/azure/azure-portal/supportability/per-vm-quota-requests" rel="noopener noreferrer"&gt;by following this guide&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Launch Azure Cloud Shell from Azure Portal (top right). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Export some variables that we will often use in the rest of the tutorial.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export LOCATION=centralindia
export RESOURCEGROUP=ksingh-resource-group-india
export CLUSTER=azureopenstack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify the quota
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az vm list-usage -l $LOCATION \
--query "[?contains(name.value, 'standardDSv3Family')]" \
-o table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt; Grab subscription ID from Azure Portal
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az account set --subscription &amp;lt;SUBSCRIPTION ID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Launching ARO Cluster
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Register the resource providers

az provider register -n Microsoft.RedHatOpenShift --wait
az provider register -n Microsoft.Compute --wait
az provider register -n Microsoft.Storage --wait
az provider register -n Microsoft.Authorization --wait

# Create a resource group

az group create --name $RESOURCEGROUP --location $LOCATION

# Create a virtual network

az network vnet create --resource-group $RESOURCEGROUP --name aro-vnet --address-prefixes 10.0.0.0/22

# Create two subnets in aro-vnet network for OpenShift control plane (master) and worker nodes

az network vnet subnet create --resource-group $RESOURCEGROUP --vnet-name aro-vnet --name master-subnet --address-prefixes 10.0.0.0/23 --service-endpoints Microsoft.ContainerRegistry

az network vnet subnet create --resource-group $RESOURCEGROUP --vnet-name aro-vnet --name worker-subnet --address-prefixes 10.0.2.0/23 --service-endpoints Microsoft.ContainerRegistry

# Update master node subnet network policy

az network vnet subnet update --name master-subnet --resource-group $RESOURCEGROUP --vnet-name aro-vnet --disable-private-link-service-network-policies true

# Finally, create ARO cluster with default configuration

az aro create --resource-group $RESOURCEGROUP --name $CLUSTER --vnet aro-vnet --master-subnet master-subnet --worker-subnet worker-subnet 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Connect to ARO
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;(GUI) Grab OpenShift Console URL and credentials
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az aro show --name $CLUSTER --resource-group $RESOURCEGROUP --query "consoleProfile.url" -o tsv
az aro list-credentials --name $CLUSTER --resource-group $RESOURCEGROUP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;(CLI) Install OpenShift Client &lt;code&gt;oc&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz

mkdir openshift
tar -zxvf openshift-client-linux.tar.gz -C openshift
echo 'export PATH=$PATH:~/openshift' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; source ~/.bashrc

apiServer=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.url -o tsv)
oc login $apiServer -u kubeadmin -p &amp;lt;kubeadmin password&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Launching an OpenShift cluster with the &lt;code&gt;az aro&lt;/code&gt; commands from Azure Cloud Shell is simple and easy. &lt;br&gt;
Hope this guide helps you. See you next time o/&lt;/p&gt;

</description>
      <category>azure</category>
      <category>redhat</category>
      <category>openshift</category>
      <category>aro</category>
    </item>
  </channel>
</rss>
