<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: murugan</title>
    <description>The latest articles on Forem by murugan (@krpmuruga).</description>
    <link>https://forem.com/krpmuruga</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F347730%2Fc0e24c04-ff70-442d-b551-f8823a7c5a99.png</url>
      <title>Forem: murugan</title>
      <link>https://forem.com/krpmuruga</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/krpmuruga"/>
    <language>en</language>
    <item>
      <title>Kubernetes Helm Chart</title>
      <dc:creator>murugan</dc:creator>
      <pubDate>Tue, 13 Feb 2024 04:50:08 +0000</pubDate>
      <link>https://forem.com/krpmuruga/kubernets-helm-chart-412d</link>
      <guid>https://forem.com/krpmuruga/kubernets-helm-chart-412d</guid>
      <description>&lt;h3&gt;
  
  
  Definition:
&lt;/h3&gt;

&lt;p&gt;Helm is a package manager for Kubernetes, similar to yum but for Kubernetes clusters. It bundles all related manifests (such as a deployment, a service, etc.) into a chart. When you install a chart, Helm creates a release. The benefits of Helm are templating, repeatability, reliability, support for multiple environments, and ease of collaboration.&lt;/p&gt;

&lt;p&gt;Helm uses a packaging format called Charts. A Helm Chart in Kubernetes is a collection of files that describe a set of Kubernetes resources. The Helm Charts can be sent to a Helm Chart Repository. The details specified in the Helm Chart are used to enable a more consistent Kubernetes deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Components of a Helm Chart
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Chart: A Helm chart is a collection of files that define a specific application deployment on Kubernetes. It includes templates, which are parameterized YAML files that generate Kubernetes manifests. Charts also contain a Chart.yaml file that describes the chart, and a values.yaml file that holds configurable parameters.&lt;/li&gt;
&lt;li&gt;Release: It is a specific instance of a chart which has been deployed to the Kubernetes cluster using Helm.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Repository: Helm repositories are collections of charts hosted on a server or a remote location. Charts can be published to a public repository like the official Helm Hub or to private repositories.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helm Client: The Helm client is the command-line interface (CLI) tool used to interact with Helm. It allows you to create, package, and manage charts, as well as install and upgrade releases.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Installing Helm
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 |bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;#Verify helm version&lt;/th&gt;
&lt;th&gt;#To check the current state of helm locally&lt;/th&gt;
&lt;th&gt;#helm use the Kubernetes cluster/host via the config file&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Comments&lt;/td&gt;
&lt;td&gt;helm version&lt;/td&gt;
&lt;td&gt;helm env&lt;/td&gt;
&lt;td&gt;~/.kube/config&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Advantage of using Helm
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Traditional Kubernetes deployments are done with kubectl, file by file, as separately managed objects. Helm instead deploys units called charts as managed releases.&lt;/li&gt;
&lt;li&gt;We can search for charts at &lt;a href="https://helm.sh/"&gt;https://helm.sh/&lt;/a&gt;. Charts can be pulled (downloaded) and optionally unpacked (untarred).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Chart Repo:
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;#To show all the repos&lt;/th&gt;
&lt;th&gt;#To add a repo&lt;/th&gt;
&lt;th&gt;#Search a repo&lt;/th&gt;
&lt;th&gt;#Search for hub&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;helm repo list&lt;/td&gt;
&lt;td&gt;helm repo add  url&lt;/td&gt;
&lt;td&gt;helm search repo mysql&lt;/td&gt;
&lt;td&gt;helm search hub&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Creating your first chart
&lt;/h3&gt;

&lt;p&gt;The easiest way to get started with a chart is the helm create command. The command below creates a new chart named firsthelmchart:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;helm create firsthelmchart&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;If you look inside the firsthelmchart directory, you will see a structure like the one below:&lt;/p&gt;

&lt;p&gt;firsthelmchart&lt;br&gt;
├── charts&lt;br&gt;
├── Chart.yaml&lt;br&gt;
├── templates&lt;br&gt;
│   ├── deployment.yaml&lt;br&gt;
│   ├── _helpers.tpl&lt;br&gt;
│   ├── hpa.yaml&lt;br&gt;
│   ├── ingress.yaml&lt;br&gt;
│   ├── NOTES.txt&lt;br&gt;
│   ├── serviceaccount.yaml&lt;br&gt;
│   ├── service.yaml&lt;br&gt;
│   └── tests&lt;br&gt;
│       └── test-connection.yaml&lt;br&gt;
└── values.yaml&lt;/p&gt;

&lt;p&gt;Chart.yaml Example&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v2
name: my-web-app
description: A Helm chart for a simple web application
version: 1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;values.yaml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;replicaCount: 3
image:
  repository: my-web-app
  tag: latest
  pullPolicy: IfNotPresent
service:
  name: my-web-app
  type: ClusterIP
  port: 80
ingress:
  enabled: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside the templates directory, create a file called deployment.yaml with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create another file called service.yaml inside the templates directory with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.service.name }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
  selector:
    app: {{ .Chart.Name }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, package your chart by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm package .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  templates
&lt;/h4&gt;

&lt;p&gt;templates is one of the most important directories: this is where Helm looks for your Kubernetes object YAML definitions (Deployments, Services, etc.).&lt;/p&gt;

&lt;h4&gt;
  
  
  values.yaml
&lt;/h4&gt;

&lt;p&gt;If templates holds the resource definitions, then values.yaml provides the way to parameterize them.&lt;/p&gt;

&lt;h4&gt;
  
  
  _helpers.tpl
&lt;/h4&gt;

&lt;p&gt;Helm allows the use of Go templating in the Kubernetes resource files. The _helpers.tpl file is used to define Go template helpers.&lt;/p&gt;
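Under the hood these helpers run on Go's text/template package. The following standalone sketch mimics a {{ .Values.* }} lookup so the templating mechanics are visible outside of Helm; the struct and field names are illustrative Go identifiers (real Helm exposes the lowercase keys of values.yaml, not Go struct fields):

```go
package main

import (
	"bytes"
	"text/template"
)

// Values mirrors a tiny slice of a chart's values.yaml (illustrative only).
type Values struct {
	ReplicaCount int
}

// Render parses and executes a Helm-style template against the values,
// exposing them under the .Values key as Helm does.
func Render(tmpl string, v Values) (string, error) {
	t, err := template.New("manifest").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, map[string]any{"Values": v}); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := Render("replicas: {{ .Values.ReplicaCount }}", Values{ReplicaCount: 3})
	println(out)
}
```

In a real chart you never write this code yourself; helm template renders the files for you. The sketch only shows what the {{ ... }} syntax resolves to.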

&lt;h4&gt;
  
  
  NOTES.txt
&lt;/h4&gt;

&lt;p&gt;This is a plain text file that gets printed out when the chart is successfully deployed. Usually it contains the next steps for using the chart.&lt;/p&gt;

&lt;h4&gt;
  
  
  Chart.yaml
&lt;/h4&gt;

&lt;p&gt;This file contains metadata such as the chart version, the application version, and constraints like the minimum version of Kubernetes/Helm required to manage this chart. Some of its fields are required.&lt;/p&gt;

&lt;p&gt;Now that you understand the chart structure and the corresponding files, it’s time to create your first chart. For that I will start with a simple nginx application, and later on we will parameterize it using values.yaml.&lt;/p&gt;

&lt;p&gt;Step 1: Clean up all the files inside the templates directory so that we start from scratch:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;rm -rf templates/*&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Step 2: Create the deployment file:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl create deployment nginx --image=nginx --dry-run=client -o yaml &amp;gt;&amp;gt; templates/deployment.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Step 3: Expose your deployment:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl expose deploy nginx --port 80 --type NodePort --dry-run=client -o yaml &amp;gt; /tmp/service.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;kubectl expose needs a running deployment, so create one temporarily (the final deployment will be done with Helm):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create deployment nginx --image=nginx
kubectl expose deploy nginx --port 80 --type NodePort --dry-run=client -o yaml &amp;gt; templates/service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As this is my first chart, I want to keep it bare-bones and define only the required fields inside Chart.yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat Chart.yaml               
apiVersion: v2
name: firsthelmchart
description: A Helm chart for Kubernetes
version: 0.1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One more file I need to create is NOTES.txt inside the templates directory:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;echo "This is my first Helm chart and it will deploy an nginx application" &amp;gt;&amp;gt; templates/NOTES.txt&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;I also clean up the rest of the files and directories for a clean slate:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;rm -rf values.yaml charts&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;h4&gt;
  
  
  Deploying your first chart
&lt;/h4&gt;

&lt;p&gt;With all the configuration in place, it’s time to deploy the first Helm chart. We can also set the name of the release so that we can refer back to it at later stages.&lt;/p&gt;

&lt;p&gt;It is always a good idea to run the linter before deploying your chart, to make sure there are no syntax errors and you are following best practices:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm lint ./firsthelmchart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s now start with a dry run to make sure everything looks good:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install demochart ./firsthelmchart --dry-run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything looks good, deploy your Helm chart without the --dry-run option:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install demochart ./firsthelmchart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify the release:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or you can access your application by hitting any of the Kubernetes nodes on the service’s NodePort:&lt;/p&gt;

&lt;p&gt;curl &lt;a href="http://175.12.0.1:32543/"&gt;http://175.12.0.1:32543/&lt;/a&gt;  &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Singlestore, MemSQL Basic understanding</title>
      <dc:creator>murugan</dc:creator>
      <pubDate>Wed, 07 Feb 2024 12:18:34 +0000</pubDate>
      <link>https://forem.com/krpmuruga/singlestore-memsql-basic-understanding-4nal</link>
      <guid>https://forem.com/krpmuruga/singlestore-memsql-basic-understanding-4nal</guid>
      <description>&lt;h1&gt;
  
  
  Single Store
&lt;/h1&gt;

&lt;p&gt;SingleStore DB is a distributed, relational database that handles both transactions and real-time analytics at scale. Querying is done through standard SQL drivers and syntax, leveraging a broad ecosystem of drivers and applications. Read the links below to get familiar with SingleStore DB:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://archived.docs.singlestore.com/v6.5/introduction/documentation-overview/"&gt;Reference&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How SingleStore DB Works
&lt;/h2&gt;

&lt;p&gt;SingleStore DB is a distributed, relational database that handles both transactions and real-time analytics at scale. It is accessible through standard SQL drivers and supports ANSI SQL syntax including joins, filters, and analytical capabilities (e.g. aggregates, group by, and windowing functions).&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Ingestion
&lt;/h3&gt;

&lt;p&gt;SingleStore DB can load data continuously or in bulk from a variety of sources. Popular loading sources include: files, a Kafka cluster, cloud repositories like Amazon S3, HDFS, or from other databases. As a distributed system, SingleStore DB ingests data streams using parallel loading to maximize throughput.&lt;/p&gt;

&lt;p&gt;SingleStore Pipelines is an easy-to-use built-in capability that extracts, transforms, and loads external data using sources such as Kafka, S3, Azure Blob, and filesystems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy
&lt;/h2&gt;

&lt;p&gt;SingleStore DB can be deployed on bare metal, on virtual machines, or in the cloud by using SingleStore Tools&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.singlestore.com/db/v7.5/en/deploy/linux.html"&gt;Reference&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Managed Service

&lt;ul&gt;
&lt;li&gt;The scalable cloud database for data-intensive applications, deployed on AWS, Azure, or GCP&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;DB Software

&lt;ul&gt;
&lt;li&gt;Manually deploy and manage a SingleStore database cluster on your own hardware.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  DB Creation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DROP TABLE IF EXISTS play_game;
CREATE DATABASE play_game;
USE play_game;

CREATE TABLE play_game (
    msgId INT,
    msgDateTime DATETIME NOT NULL,
    gameName  varchar(60),
    SORT KEY (timeSince),
    SHARD KEY (msgId)
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Pipelines
&lt;/h2&gt;

&lt;p&gt;MemSQL Pipelines is a MemSQL Database feature that natively ingests real-time data from external sources. As a built-in component of the database, Pipelines can extract, transform, and load external data without the need for third-party tools or middleware. Pipelines is robust, scalable, highly performant, and supports fully distributed workloads.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Alter pipeline&lt;/li&gt;
&lt;li&gt;Create Pipeline&lt;/li&gt;
&lt;li&gt;Create Pipeline with Transform&lt;/li&gt;
&lt;li&gt;Create Pipeline into Procedure&lt;/li&gt;
&lt;li&gt;Extract Pipeline into outfile&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://docs.singlestore.com/db/v7.5/en/reference/sql-reference/pipelines-commands/alter-pipeline.html"&gt;Reference&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Pipeline
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE PIPELINE play_game
AS LOAD DATA KAFKA '127.0.0.1:9092/play-games-1'
SKIP DUPLICATE KEY ERRORS
INTO TABLE play_game
FORMAT JSON (
    msgId &amp;lt;- msgId,
    msgDateTime &amp;lt;- msgDateTime,
    gameName &amp;lt;- gameName,
    timeSince &amp;lt;- timeSince
);

START PIPELINE play_game;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Basic Commands&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Delete Pipeline&lt;/td&gt;
&lt;td&gt;DROP PIPELINE play_game;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Delete Database&lt;/td&gt;
&lt;td&gt;DROP DATABASE IF EXISTS play_game;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Delete Table&lt;/td&gt;
&lt;td&gt;DROP TABLE IF EXISTS play_game;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stop Pipeline&lt;/td&gt;
&lt;td&gt;STOP PIPELINE play_game;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Test Pipeline&lt;/td&gt;
&lt;td&gt;TEST PIPELINE play_game;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Extract Pipeline&lt;/td&gt;
&lt;td&gt;EXTRACT PIPELINE play_game INTO OUTFILE 'file_name.json';&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://archived.docs.singlestore.com/v6.5/concepts/pipelines/kafka-pipeline-quickstart/"&gt;Reference&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Load Data
&lt;/h3&gt;

&lt;p&gt;The easiest way to load data is to first upload it to Amazon S3 or Azure Blob Storage. Then, use SingleStore Pipelines to extract your data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Load Data from Amazon Web Services (AWS)&lt;/li&gt;
&lt;li&gt;Load Data from Microsoft Azure&lt;/li&gt;
&lt;li&gt;Load Data from the Filesystem using a Pipeline&lt;/li&gt;
&lt;li&gt;Load Data from Kafka&lt;/li&gt;
&lt;li&gt;Load Data from MySQL etc&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Load Data from Kafka
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bin/kafka-topics.sh --create --topic play-games-1 --partitions 1 --bootstrap-server localhost:9092 --replication-factor 1


bin/kafka-topics.sh --describe --topic play-games-1 --bootstrap-server localhost:9092

bin/kafka-console-producer.sh --topic play-games-1 --bootstrap-server localhost:9092

Insert into the data from Kafka usimng JSON

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic play-games-1 &amp;lt; PlayActivity.json

bin/kafka-console-consumer.sh --topic play-games-1 --from-beginning --bootstrap-server localhost:9092

List Topic

bin/kafka-topics.sh --list --bootstrap-server localhost:9092

Delete Topic

bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic DummyTopic

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Single Store Stored Procedure Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;USE quickstart_kafka;
CREATE TABLE test2 (id int, fname varchar(50), lname varchar(50), addr varchar(50));

DELIMITER //
CREATE OR REPLACE PROCEDURE process_users(GENERIC_BATCH query(GENERIC_JSON json)) AS
BEGIN
INSERT INTO test3(id,fname,lname,addr)
SELECT GENERIC_JSON::id, GENERIC_JSON::fname,GENERIC_JSON::lname,GENERIC_JSON::addr
FROM GENERIC_BATCH;
END //
DELIMITER ;

CREATE or replace PIPELINE jsonproce1 AS LOAD DATA KAFKA '127.0.0.1:9092/test2'
INTO PROCEDURE process_users (GENERIC_JSON &amp;lt;- %)FORMAT JSON ;

test pipeline jsonproce1;

start pipeline jsonproce1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Manage Data
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Local and Unlimited Database Storage Concepts&lt;/li&gt;
&lt;li&gt;Unlimited Data Storage&lt;/li&gt;
&lt;li&gt;Backing Up and Restoring Data&lt;/li&gt;
&lt;li&gt;Exporting a particular database: &lt;code&gt;mysqldump -h 127.0.0.1 -u root -p -P 3306 foo &amp;gt; foo.sql&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Exporting all databases: &lt;code&gt;mysqldump -h 127.0.0.1 -u root -p -P 3306 --all-databases &amp;gt; full_backup.sql&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Additional Concepts:
&lt;/h3&gt;

&lt;p&gt;Open MemSQL Studio&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/quickstart/architecture/memsql/"&gt;https://aws.amazon.com/quickstart/architecture/memsql/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws-quickstart.s3.amazonaws.com/quickstart-memsql/doc/memsql-on-the-aws-cloud.pdf"&gt;https://aws-quickstart.s3.amazonaws.com/quickstart-memsql/doc/memsql-on-the-aws-cloud.pdf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;--&amp;gt; SQL Editor&lt;br&gt;
--&amp;gt; Master Aggregator Node (1 master): the aggregators share the metadata&lt;br&gt;
--&amp;gt; Child Aggregator Nodes (2 nodes)&lt;br&gt;
--&amp;gt; Leaf Nodes (4 nodes): the leaf nodes store the data&lt;/p&gt;

&lt;p&gt;Every cluster has at least one master aggregator and one leaf node.&lt;/p&gt;

&lt;h4&gt;
  
  
  Cluster
&lt;/h4&gt;

&lt;p&gt;A cluster encompasses all of the nodes that are included in a complete SingleStore DB installation. A cluster contains aggregator nodes and leaf nodes.&lt;/p&gt;

&lt;h4&gt;
  
  
  Connect to your Cluster
&lt;/h4&gt;

&lt;p&gt;You have three ways to connect to your SingleStore DB cluster: SingleStore DB Studio, the singlestore client application, or through any compatible third-party applications.&lt;/p&gt;

&lt;h4&gt;
  
  
  Node
&lt;/h4&gt;

&lt;p&gt;A node is a server that has an installation of a SingleStore DB instance.&lt;/p&gt;

&lt;h4&gt;
  
  
  Leaf
&lt;/h4&gt;

&lt;p&gt;A leaf is a node that stores a subset of a cluster’s data. A cluster typically contains many leaves.&lt;/p&gt;

&lt;h4&gt;
  
  
  Partition
&lt;/h4&gt;

&lt;p&gt;A partition contains a subset (a shard) of a database’s data. A leaf contains multiple partitions. When you run CREATE DATABASE, SingleStore DB splits the database into partitions, &lt;br&gt;
which are distributed evenly among available leaves. With CREATE DATABASE, you can specify the number of partitions with the PARTITIONS=X option.&lt;/p&gt;

&lt;p&gt;If you don’t specify the number of partitions explicitly, the default is used (the number of leaves times the value of the default_partitions_per_leaf engine variable).&lt;/p&gt;

&lt;h4&gt;
  
  
  Aggregator
&lt;/h4&gt;

&lt;p&gt;An aggregator is a node that routes queries to the leaves, aggregates the intermediate results, and sends the results back to the client. &lt;br&gt;
There are two types of aggregators: master and child. A cluster contains exactly one master aggregator, a specialized aggregator responsible for cluster monitoring and failover. &lt;br&gt;
A cluster may also contain zero or more child aggregators (depending on query volume).&lt;/p&gt;

&lt;h3&gt;
  
  
  Time Series Functions:
&lt;/h3&gt;

&lt;p&gt;For storing and manipulating time series data, SingleStore supports the following functions:&lt;/p&gt;

&lt;p&gt;FIRST&lt;br&gt;
LAST&lt;br&gt;
TIME_BUCKET&lt;/p&gt;

&lt;p&gt;FIRST&lt;br&gt;
An aggregate function that returns the first value of a set of input values, defined as the value associated with the minimum time.&lt;/p&gt;

&lt;p&gt;LAST&lt;br&gt;
An aggregate function that returns the last value of a set of input values, defined as the value associated with the maximum time.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT TIME_BUCKET('4d'), gameName
FROM play_activity ORDER BY 2, 1;

SELECT TIME_BUCKET('1d'), gameName
FROM play WHERE timeSince BETWEEN '2021-10-06T00:00:10.530+00:00' AND '2021-10-07T23:00:10.530+00:00' ORDER BY 2, 1;

SELECT TIME_BUCKET('6h', timeSince) AS timeSince, gameName
FROM play
WHERE timeSince &amp;gt; now() - INTERVAL 1 day ORDER BY 2, 1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Additional comments
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Restart Studio&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;sudo systemctl restart singlestoredb-studio&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installation Steps
&lt;/h2&gt;

&lt;p&gt;sudo apt-get update&lt;/p&gt;

&lt;p&gt;&lt;a href="http://localhost:9000/cluster"&gt;http://localhost:9000/cluster&lt;/a&gt;&lt;br&gt;
&lt;a href="http://localhost:8080/cluster/localhost/dashboard"&gt;http://localhost:8080/cluster/localhost/dashboard&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;


&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.singlestore.com/blog/spin-up-a-memsql-cluster-on-windows-in-20-minutes/"&gt;Single Store Installation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/memsql/singlestore-go-template"&gt;Go Code Example with Docker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/singlestore-labs/singlestore-workshop-data-intensive-app"&gt;Git Reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.singlestore.com/"&gt;Singlestore Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.tutorialkart.com/apache-kafka-tutorial/"&gt;Kafka Tutorial&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/pulse/memsql-installtion-local-ubuntu-vm-vishwajeet-dabholkar"&gt;Learning 1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/quickstart/architecture/memsql/"&gt;Learning 2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws-quickstart.s3.amazonaws.com/quickstart-memsql/doc/memsql-on-the-aws-cloud.pdf"&gt;Learning 3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@VeryFatBoy/how-to-use-singlestore-pipelines-with-kafka-a86df67e48ec"&gt;Medium&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>singlestore</category>
      <category>memsql</category>
    </item>
    <item>
      <title>Golang Design Patterns</title>
      <dc:creator>murugan</dc:creator>
      <pubDate>Wed, 07 Feb 2024 12:18:16 +0000</pubDate>
      <link>https://forem.com/krpmuruga/golang-design-patterns-2lo</link>
      <guid>https://forem.com/krpmuruga/golang-design-patterns-2lo</guid>
      <description>&lt;h1&gt;
  
  
  Design Patterns in Go
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Creational design pattern
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Abstract design Pattern

&lt;ul&gt;
&lt;li&gt;Singleton Pattern&lt;/li&gt;
&lt;li&gt;Builder&lt;/li&gt;
&lt;li&gt;Factory&lt;/li&gt;
&lt;li&gt;Object Pool&lt;/li&gt;
&lt;li&gt;Prototype
&lt;/li&gt;
&lt;/ul&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;


Abstract Factory Design Pattern in Go
&lt;/h3&gt;
&lt;h4&gt;


Definition:
&lt;/h4&gt;


&lt;p&gt;Abstract Factory Design Pattern is a creational design pattern that lets you create a family of related objects. It is an abstraction over the factory pattern.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example use cases:
&lt;/h4&gt;

&lt;p&gt;Imagine you need to buy a sports kit consisting of a shoe and a short. Most of the time you would want to buy the full kit from the same factory, i.e. either nike or adidas. This is where the abstract factory comes into the picture: the concrete products you want are a shoe and a short, and these products are created by the nike and adidas abstract factories.&lt;br&gt;
Both factories – nike and adidas – implement the iSportsFactory interface.&lt;br&gt;
We have two product interfaces.&lt;/p&gt;

&lt;p&gt;iShoe – implemented by the nikeShoe and adidasShoe concrete products.&lt;br&gt;
iShort – implemented by the nikeShort and adidasShort concrete products.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://golangbyexample.com/abstract-factory-design-pattern-go/"&gt;Note&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
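The kit example can be sketched in Go as follows; the constructor name GetSportsFactory and the Brand methods are illustrative choices, not necessarily those of the linked article:

```go
package main

import "fmt"

// iShoe and iShort are the abstract products.
type iShoe interface{ Brand() string }
type iShort interface{ Brand() string }

type nikeShoe struct{}
type nikeShort struct{}
type adidasShoe struct{}
type adidasShort struct{}

func (nikeShoe) Brand() string    { return "nike" }
func (nikeShort) Brand() string   { return "nike" }
func (adidasShoe) Brand() string  { return "adidas" }
func (adidasShort) Brand() string { return "adidas" }

// iSportsFactory is the abstract factory: one creation method per product.
type iSportsFactory interface {
	MakeShoe() iShoe
	MakeShort() iShort
}

type nike struct{}

func (nike) MakeShoe() iShoe   { return nikeShoe{} }
func (nike) MakeShort() iShort { return nikeShort{} }

type adidas struct{}

func (adidas) MakeShoe() iShoe   { return adidasShoe{} }
func (adidas) MakeShort() iShort { return adidasShort{} }

// GetSportsFactory picks a concrete factory by brand; the client never
// touches the concrete product types directly.
func GetSportsFactory(brand string) (iSportsFactory, error) {
	switch brand {
	case "nike":
		return nike{}, nil
	case "adidas":
		return adidas{}, nil
	}
	return nil, fmt.Errorf("unknown brand %q", brand)
}

func main() {
	f, _ := GetSportsFactory("nike")
	fmt.Println(f.MakeShoe().Brand(), f.MakeShort().Brand())
}
```

Because both products come from the same factory, the shoe and the short are guaranteed to be of the same brand.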

&lt;h3&gt;
  
  
  Builder Pattern in GoLang
&lt;/h3&gt;

&lt;p&gt;Builder Pattern is a creational design pattern used for constructing complex objects.&lt;/p&gt;

&lt;p&gt;When To Use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use the Builder pattern when the object being constructed is big and requires multiple steps; it keeps the constructor small, and the construction of the house becomes simple without a large constructor.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When different versions of the same product need to be created. For example, in the house-building example, an igloo and a normal house are constructed by iglooBuilder and normalBuilder respectively.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When a half-constructed final object should not exist. The house is either created fully or not created at all; the concrete builder struct holds the temporary state of the house object being built.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
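A minimal sketch of the house example in Go, assuming illustrative step names (setWindowType, setDoorType, setNumFloors); the director runs every step in order, so a half-built House never escapes:

```go
package main

import "fmt"

// House is the complex object under construction.
type House struct {
	WindowType string
	DoorType   string
	Floors     int
}

// iBuilder lists the construction steps; concrete builders fill them in.
type iBuilder interface {
	setWindowType()
	setDoorType()
	setNumFloors()
	getHouse() House
}

type normalBuilder struct{ h House }

func (b *normalBuilder) setWindowType()  { b.h.WindowType = "wooden" }
func (b *normalBuilder) setDoorType()    { b.h.DoorType = "wooden" }
func (b *normalBuilder) setNumFloors()   { b.h.Floors = 2 }
func (b *normalBuilder) getHouse() House { return b.h }

type iglooBuilder struct{ h House }

func (b *iglooBuilder) setWindowType()  { b.h.WindowType = "snow" }
func (b *iglooBuilder) setDoorType()    { b.h.DoorType = "snow" }
func (b *iglooBuilder) setNumFloors()   { b.h.Floors = 1 }
func (b *iglooBuilder) getHouse() House { return b.h }

// director drives all steps; the builder holds the temporary state.
type director struct{ b iBuilder }

func (d director) buildHouse() House {
	d.b.setDoorType()
	d.b.setWindowType()
	d.b.setNumFloors()
	return d.b.getHouse()
}

func main() {
	igloo := director{&iglooBuilder{}}.buildHouse()
	fmt.Printf("%+v\n", igloo)
}
```

Swapping the builder swaps the product version; the client code in main does not change.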

&lt;h3&gt;
  
  
  Factory Design Pattern in Go
&lt;/h3&gt;

&lt;p&gt;The Factory design pattern is a creational design pattern and one of the most commonly used patterns. It provides a way to hide the creation logic of the instances being created.&lt;br&gt;
The client interacts only with a factory struct and specifies the kind of instance it needs. The factory interacts with the corresponding concrete structs and returns the correct instance.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We have an iGun interface which defines all the methods a gun should have.&lt;/li&gt;
&lt;li&gt;A gun struct implements the iGun interface.&lt;/li&gt;
&lt;li&gt;Two concrete guns, ak47 and maverick, both embed the gun struct and hence indirectly implement all methods of iGun, so they are of type iGun.&lt;/li&gt;
&lt;li&gt;We have a gunFactory struct which creates guns of type ak47 or maverick.&lt;/li&gt;
&lt;li&gt;main.go acts as a client: instead of directly interacting with ak47 or maverick, it relies on gunFactory to create instances of them.&lt;/li&gt;
&lt;/ul&gt;
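A compact sketch of the gun example in Go; the function name GetGun and the power values are illustrative:

```go
package main

import "fmt"

// iGun defines everything a gun should expose.
type iGun interface {
	Name() string
	Power() int
}

// gun is the base struct that implements iGun.
type gun struct {
	name  string
	power int
}

func (g gun) Name() string { return g.name }
func (g gun) Power() int   { return g.power }

// ak47 and maverick embed gun, so they satisfy iGun automatically.
type ak47 struct{ gun }
type maverick struct{ gun }

// GetGun hides which concrete struct is instantiated; the client only
// names the kind it wants and receives an iGun back.
func GetGun(kind string) (iGun, error) {
	switch kind {
	case "ak47":
		return ak47{gun{name: "AK47", power: 4}}, nil
	case "maverick":
		return maverick{gun{name: "Maverick", power: 5}}, nil
	}
	return nil, fmt.Errorf("unknown gun type %q", kind)
}

func main() {
	g, _ := GetGun("ak47")
	fmt.Println(g.Name(), g.Power())
}
```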

&lt;h3&gt;
  
  
  Object Pool Design Pattern in Go
&lt;/h3&gt;

&lt;p&gt;The Object Pool Design Pattern is a creational design pattern in which a pool of objects is initialized and created beforehand and kept in a pool. As and when needed, a client can request an object from the pool, use it, and return it to the pool. The object in the pool is never destroyed.&lt;/p&gt;

&lt;p&gt;When to Use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;When the cost of creating an object of the class is high and the number of such objects needed at any particular time is small.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Take the example of DB connections. Creating each connection object is costly because network calls are involved, and at any given time no more than a certain number of connections may be needed. The object pool design pattern is perfectly suited to such cases.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;When the pooled object is immutable.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Again take the example of a DB connection: it is an immutable object, and almost none of its properties need to change.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For performance reasons: it boosts application performance significantly since the pool is already created.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
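A minimal object pool in Go using a buffered channel as the queue of pre-created objects; conn stands in for an expensive resource such as a DB connection, and all names here are illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// conn stands in for an expensive-to-create resource such as a DB connection.
type conn struct{ id int }

// pool hands out pre-created conns; a buffered channel is an idiomatic
// thread-safe queue in Go.
type pool struct{ free chan *conn }

// newPool creates every object up front, before any client asks for one.
func newPool(size int) *pool {
	p := &pool{free: make(chan *conn, size)}
	for i := 0; i < size; i++ {
		p.free <- &conn{id: i}
	}
	return p
}

// Borrow returns a conn, or an error when the pool is exhausted.
func (p *pool) Borrow() (*conn, error) {
	select {
	case c := <-p.free:
		return c, nil
	default:
		return nil, errors.New("pool exhausted")
	}
}

// Return puts the conn back; pooled objects are reused, never destroyed.
func (p *pool) Return(c *conn) { p.free <- c }

func main() {
	p := newPool(2)
	c1, _ := p.Borrow()
	c2, _ := p.Borrow()
	if _, err := p.Borrow(); err != nil {
		fmt.Println(err)
	}
	p.Return(c1)
	p.Return(c2)
}
```

A production pool would also handle blocking waits and broken connections; this sketch keeps only the borrow/return core of the pattern.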

&lt;h3&gt;
  
  
  Prototype Pattern in Go
&lt;/h3&gt;

&lt;p&gt;It is a creational design pattern that lets you create copies of objects. In this pattern, the responsibility of creating the clone objects is delegated to the actual object to clone.&lt;/p&gt;

&lt;p&gt;The object to be cloned exposes a clone method which returns a clone copy of the object.&lt;/p&gt;

&lt;p&gt;When to Use&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We use prototype pattern when the object to be cloned creation process is complex i.e the cloning may involve vases of handling deep copies, hierarchical copies, etc. Moreover, there may be some private members too which cannot be directly accessed.&lt;/li&gt;
&lt;li&gt;A copy of the object is created instead of a new instance being built from scratch. This avoids costly operations involved in creating a new object, such as database operations.&lt;/li&gt;
&lt;li&gt;When you want to create a copy of an existing object, but it is only available to you as an interface, so you cannot create the copy directly.&lt;/li&gt;
&lt;/ul&gt;
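&lt;p&gt;The file-system case mentioned above is the classic illustration: a folder clones itself by recursively cloning its children, and callers work only through an interface. A minimal sketch, with all type names illustrative:&lt;/p&gt;

```go
package main

import "fmt"

// inode is the interface through which nodes are cloned; callers
// never need access to the concrete types or their private state.
type inode interface {
	clone() inode
	print(indent string)
}

type file struct{ name string }

func (f *file) clone() inode        { return &file{name: f.name + "_clone"} }
func (f *file) print(indent string) { fmt.Println(indent + f.name) }

type folder struct {
	name     string
	children []inode
}

// clone performs a deep, hierarchical copy by recursively cloning
// every child node.
func (f *folder) clone() inode {
	c := &folder{name: f.name + "_clone"}
	for _, child := range f.children {
		c.children = append(c.children, child.clone())
	}
	return c
}

func (f *folder) print(indent string) {
	fmt.Println(indent + f.name)
	for _, child := range f.children {
		child.print(indent + "  ")
	}
}

func main() {
	root := &folder{name: "root", children: []inode{&file{name: "a.txt"}}}
	copyRoot := root.clone() // deep copy via the interface
	copyRoot.print("")
}
```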

&lt;h3&gt;
  
  
  Singleton Pattern in Go
&lt;/h3&gt;

&lt;p&gt;The Singleton Design Pattern is a creational design pattern and one of the most commonly used patterns. It is used when only a single instance of a struct should exist. This single instance is called a singleton object. Some cases where a singleton object is applicable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DB instance – we want to create only one instance of the DB object, and that instance is used throughout the application.&lt;/li&gt;
&lt;li&gt;Logger instance – again only one instance of the logger should be created and it should be used throughout the application.&lt;/li&gt;
&lt;/ul&gt;
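&lt;p&gt;In Go, the idiomatic way to guarantee a single instance even under concurrent access is &lt;code&gt;sync.Once&lt;/code&gt;. A minimal sketch (the &lt;code&gt;db&lt;/code&gt; type is a placeholder for whatever should be a singleton):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sync"
)

// db is the type we want exactly one instance of.
type db struct{}

var (
	once     sync.Once
	instance *db
)

// getInstance lazily creates the singleton; sync.Once guarantees the
// initialization function runs exactly once, even when called from
// many goroutines at the same time.
func getInstance() *db {
	once.Do(func() {
		instance = &db{}
	})
	return instance
}

func main() {
	a, b := getInstance(), getInstance()
	fmt.Println("same instance:", a == b) // prints "same instance: true"
}
```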

&lt;h2&gt;
  
  
  Behavioural Design Pattern
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Observer Design Pattern in Go
&lt;/h3&gt;

&lt;p&gt;Observer Design Pattern is a behavioral design pattern. This pattern allows an instance (called the subject) to publish events to multiple other instances (called observers). These observers subscribe to the subject and are notified whenever a change happens in the subject.&lt;/p&gt;

&lt;p&gt;Let’s take an example. On an e-commerce website, items regularly go out of stock, and there are customers interested in a particular item that is out of stock. There are three possible solutions to this problem:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The customer keeps polling the availability of the item at some frequency.&lt;/li&gt;
&lt;li&gt;The site bombards the customer with notifications for every item that comes back in stock.&lt;/li&gt;
&lt;li&gt;The customer subscribes only to the particular item they are interested in and is notified when that item becomes available. Multiple customers can subscribe to the same product.&lt;/li&gt;
&lt;/ol&gt;
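&lt;p&gt;The third option is the Observer pattern: the item is the subject, and interested customers are observers. A minimal sketch of that relationship (the type and field names are illustrative):&lt;/p&gt;

```go
package main

import "fmt"

// observer is notified by the subject when something changes.
type observer interface {
	update(itemName string)
}

// item is the subject: it keeps a list of subscribed observers
// and publishes events to them.
type item struct {
	name      string
	observers []observer
}

func (i *item) subscribe(o observer) { i.observers = append(i.observers, o) }

// restock notifies every subscriber that the item is back in stock.
func (i *item) restock() {
	for _, o := range i.observers {
		o.update(i.name)
	}
}

// customer is a concrete observer; it remembers what it was notified about.
type customer struct {
	email    string
	received []string
}

func (c *customer) update(itemName string) {
	c.received = append(c.received, itemName)
	fmt.Printf("notifying %s: %s is back in stock\n", c.email, itemName)
}

func main() {
	shirt := &item{name: "Nike Shirt"}
	shirt.subscribe(&customer{email: "a@example.com"})
	shirt.subscribe(&customer{email: "b@example.com"})
	shirt.restock() // both subscribers are notified
}
```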

&lt;h3&gt;
  
  
  Strategy pattern
&lt;/h3&gt;

&lt;p&gt;The Strategy pattern is a behavioral design pattern that allows changing the behavior of an object at runtime, which is useful in certain cases.&lt;/p&gt;
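&lt;p&gt;A common textbook illustration is a cache whose eviction policy can be swapped at runtime. This is a minimal sketch under that assumption; the policy names are illustrative:&lt;/p&gt;

```go
package main

import "fmt"

// evictionAlgo is the strategy interface: any eviction policy can be
// plugged into the cache at runtime.
type evictionAlgo interface {
	evict() string
}

type lru struct{}

func (*lru) evict() string { return "evicting by LRU" }

type fifo struct{}

func (*fifo) evict() string { return "evicting by FIFO" }

// cache holds a reference to the current strategy and delegates to it.
type cache struct{ algo evictionAlgo }

// setEvictionAlgo swaps the behavior at runtime without changing cache.
func (c *cache) setEvictionAlgo(a evictionAlgo) { c.algo = a }
func (c *cache) evict() string                  { return c.algo.evict() }

func main() {
	c := &cache{algo: &lru{}}
	fmt.Println(c.evict()) // prints "evicting by LRU"
	c.setEvictionAlgo(&fifo{})
	fmt.Println(c.evict()) // prints "evicting by FIFO"
}
```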

&lt;p&gt;&lt;a href="https://golangbyexample.com/facade-design-pattern-in-golang/"&gt;https://golangbyexample.com/facade-design-pattern-in-golang/&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Facade Design Pattern in Go
&lt;/h3&gt;

&lt;p&gt;The Facade pattern is classified as a structural design pattern. It is meant to hide the complexities of the underlying system and provide a simple interface to the client. It offers a unified interface over the many interfaces in the system so that, from the client's perspective, it is easier to use. Basically, it provides a higher-level abstraction over a complicated system.&lt;/p&gt;

&lt;p&gt;Consider a wallet application. A single debit/credit operation involves:&lt;br&gt;
Check Account&lt;br&gt;
Check Security Pin&lt;br&gt;
Credit/Debit Balance&lt;br&gt;
Make Ledger Entry&lt;br&gt;
Send Notification&lt;/p&gt;

&lt;p&gt;As can be noticed, a lot happens behind a single debit/credit operation. This is where the Facade pattern comes into the picture. As a client, one only needs to enter the wallet number, security pin, amount, and the type of operation. The rest is taken care of in the background. Here we create a WalletFacade that provides a simple interface to the client and deals with all the underlying operations.&lt;/p&gt;
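&lt;p&gt;A minimal sketch of such a WalletFacade. The ledger-entry and notification steps are elided, and the subsystem types are simplified for illustration:&lt;/p&gt;

```go
package main

import (
	"errors"
	"fmt"
)

// The subsystems a debit operation touches.
type account struct{ id string }

func (a *account) check(id string) error {
	if a.id != id {
		return errors.New("account not found")
	}
	return nil
}

type securityCode struct{ pin int }

func (s *securityCode) check(pin int) error {
	if s.pin != pin {
		return errors.New("wrong pin")
	}
	return nil
}

type wallet struct{ balance int }

func (w *wallet) debit(amount int) error {
	if w.balance < amount {
		return errors.New("insufficient balance")
	}
	w.balance -= amount
	return nil
}

// walletFacade hides the subsystems behind one simple call.
type walletFacade struct {
	account      *account
	securityCode *securityCode
	wallet       *wallet
}

func newWalletFacade(id string, pin int) *walletFacade {
	return &walletFacade{
		account:      &account{id: id},
		securityCode: &securityCode{pin: pin},
		wallet:       &wallet{balance: 100},
	}
}

// debit runs the whole sequence: check account, check pin, debit the
// balance (ledger entry and notification would follow in a real system).
func (f *walletFacade) debit(id string, pin, amount int) error {
	if err := f.account.check(id); err != nil {
		return err
	}
	if err := f.securityCode.check(pin); err != nil {
		return err
	}
	return f.wallet.debit(amount)
}

func main() {
	f := newWalletFacade("abc", 1234)
	fmt.Println(f.debit("abc", 1234, 30)) // prints "<nil>" on success
	fmt.Println(f.wallet.balance)         // prints "70"
}
```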

&lt;h2&gt;
  
  
  Structural Design Pattern
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Adapter Pattern
&lt;/h3&gt;

&lt;p&gt;Adapter is a structural design pattern that allows incompatible objects to collaborate by letting the interface of one class be used as another interface.&lt;/p&gt;

&lt;p&gt;This is a frequently used pattern in ORM libraries (ActiveRecord, GORM, etc), because it allows for connections and queries to different data backends (databases) while keeping the same client interface.&lt;/p&gt;

&lt;p&gt;Another common occurrence of this pattern is in hardware drivers. Take printers, for instance: most printers can be used via USB, serial connections, or over the network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;We have client code that expects some functionality from an object (a Lightning port), but we have another object, called the adaptee (a Windows laptop), which offers the same functionality through a different interface (a USB port).&lt;/p&gt;
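&lt;p&gt;That example can be sketched as follows: the adapter satisfies the interface the client expects and translates the call into the one the adaptee actually offers. All type names are illustrative:&lt;/p&gt;

```go
package main

import "fmt"

// computer is the interface the client expects (a Lightning port).
type computer interface {
	insertIntoLightningPort() string
}

// mac natively satisfies the client interface.
type mac struct{}

func (*mac) insertIntoLightningPort() string {
	return "lightning connector plugged into mac"
}

// windows is the adaptee: it only exposes a USB port.
type windows struct{}

func (*windows) insertIntoUSBPort() string {
	return "usb connector plugged into windows"
}

// windowsAdapter makes a windows machine usable through the computer
// interface by translating the lightning call into a USB call.
type windowsAdapter struct{ machine *windows }

func (a *windowsAdapter) insertIntoLightningPort() string {
	return "adapter converts lightning to usb; " + a.machine.insertIntoUSBPort()
}

func main() {
	var c computer = &mac{}
	fmt.Println(c.insertIntoLightningPort())
	c = &windowsAdapter{machine: &windows{}} // incompatible object, same interface
	fmt.Println(c.insertIntoLightningPort())
}
```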

&lt;h3&gt;
  
  
  Bridge in Go
&lt;/h3&gt;

&lt;p&gt;Bridge is a structural design pattern that divides business logic or a huge class into separate class hierarchies that can be developed independently.&lt;/p&gt;

&lt;p&gt;One of these hierarchies (often called the Abstraction) holds a reference to an object of the second hierarchy (the Implementation). The abstraction delegates some (sometimes most) of its calls to the implementation object. Since all implementations share a common interface, they are interchangeable inside the abstraction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Say, you have two types of computers: Mac and Windows. Also, two types of printers: Epson and HP. Both computers and printers need to work with each other in any combination. The client doesn’t want to worry about the details of connecting printers to computers.&lt;/p&gt;

&lt;p&gt;If we introduce new printers, we don’t want our code to grow exponentially. Instead of creating four structs for the 2*2 combination, we create two hierarchies:&lt;/p&gt;

&lt;p&gt;Abstraction hierarchy: this will be our computers&lt;br&gt;
Implementation hierarchy: this will be our printers&lt;br&gt;
These two hierarchies communicate with each other via a Bridge, where the Abstraction (computer) contains a reference to the Implementation (printer). Both the abstraction and implementation can be developed independently without affecting each other.&lt;/p&gt;

&lt;p&gt;See the example code &lt;a href="https://refactoring.guru/design-patterns/bridge/go/example"&gt;https://refactoring.guru/design-patterns/bridge/go/example&lt;/a&gt;&lt;/p&gt;
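&lt;p&gt;A condensed sketch of the two hierarchies described above (types simplified for illustration; see the linked example for the full version):&lt;/p&gt;

```go
package main

import "fmt"

// printer is the implementation hierarchy.
type printer interface {
	printFile() string
}

type epson struct{}

func (*epson) printFile() string { return "printing by epson" }

type hp struct{}

func (*hp) printFile() string { return "printing by hp" }

// computer is the abstraction: it holds a reference to a printer
// (the bridge) and delegates the actual printing to it.
type computer struct {
	name    string
	printer printer
}

func (c *computer) print() string        { return c.name + ": " + c.printer.printFile() }
func (c *computer) setPrinter(p printer) { c.printer = p }

func main() {
	mac := &computer{name: "mac", printer: &epson{}}
	fmt.Println(mac.print()) // prints "mac: printing by epson"
	mac.setPrinter(&hp{})    // swap the implementation without touching the abstraction
	fmt.Println(mac.print()) // prints "mac: printing by hp"
}
```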

&lt;h3&gt;
  
  
  Composite in Go
&lt;/h3&gt;

&lt;p&gt;Composite is a structural design pattern that allows composing objects into a tree-like structure and working with it as if it were a single object.&lt;/p&gt;

&lt;p&gt;Composite has become a popular solution for most problems that require building a tree structure. Its great feature is the ability to run methods recursively over the whole tree structure and aggregate the results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;br&gt;
Let’s try to understand the Composite pattern with the example of an operating system’s file system. In the file system, there are two types of objects: files and folders. There are cases when files and folders should be treated the same way. This is where the Composite pattern comes in handy.&lt;/p&gt;

&lt;p&gt;Imagine that you need to run a search for a particular keyword in your file system. This search operation applies to both files and folders. For a file, it will just look into the contents of the file; for a folder, it will go through all files of that folder to find that keyword.&lt;/p&gt;
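&lt;p&gt;The keyword search described above can be sketched like this: files and folders share one interface, and a folder's search simply recurses into its children. For illustration, the "file contents" check is reduced to a name match:&lt;/p&gt;

```go
package main

import "fmt"

// component is the common interface for files and folders, so the
// client can treat both uniformly.
type component interface {
	search(keyword string) []string
}

type fileNode struct{ name string }

func (f *fileNode) search(keyword string) []string {
	// A real file would scan its contents; here we just match the name.
	if f.name == keyword {
		return []string{f.name}
	}
	return nil
}

type folderNode struct {
	name     string
	children []component
}

// search recurses into every child and aggregates the results,
// whether the child is a file or another folder.
func (f *folderNode) search(keyword string) []string {
	var hits []string
	for _, c := range f.children {
		hits = append(hits, c.search(keyword)...)
	}
	return hits
}

func main() {
	root := &folderNode{name: "root", children: []component{
		&fileNode{name: "report"},
		&folderNode{name: "sub", children: []component{&fileNode{name: "report"}}},
	}}
	fmt.Println(root.search("report")) // finds matches at both levels
}
```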

</description>
    </item>
    <item>
      <title>GraphQL with Golang</title>
      <dc:creator>murugan</dc:creator>
      <pubDate>Thu, 13 Apr 2023 13:00:25 +0000</pubDate>
      <link>https://forem.com/krpmuruga/graphql-with-golang-b05</link>
      <guid>https://forem.com/krpmuruga/graphql-with-golang-b05</guid>
      <description>&lt;h1&gt;
  
  
  GraphQL with Golang
&lt;/h1&gt;

&lt;p&gt;GraphQL is an open-source data query and manipulation language for APIs, and a runtime for fulfilling queries with existing data. It was developed internally by Facebook in 2012 before being publicly released in 2015.&lt;/p&gt;

&lt;p&gt;GraphQL is designed to make APIs fast, flexible, and developer-friendly. As an alternative to REST, GraphQL lets developers construct requests that pull data from multiple data sources in a single API call. GraphQL typically uses the HTTP POST method to submit queries: there is a single service endpoint, with variations in the HTTP body driven by the schema and query patterns.&lt;/p&gt;

&lt;p&gt;API developers create a schema that defines the types, queries, and mutations that clients can use&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Todo {
  id: ID!
  text: String!
  done: Boolean!
}
type Query {
  todos: [Todo!]!
  findTodo(id: String!): Todo
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Consider the above definition. We define the Todo type and the queries exposed as part of this API. When a query request comes in, the GraphQL resolver validates the request against the pre-defined schema and invokes the connected methods for resolution. Another benefit of this approach is the flexibility for clients to request only the fields they are interested in, which is better for the network, security, and processing. If we don’t want “lastName”, we don’t need to fetch it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query {
  todos {
    id
    text
    done
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Golang &amp;amp; GraphQL
&lt;/h2&gt;

&lt;p&gt;Let's start with the bootstrap process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir gographql
cd gographql
go mod init gographql
go get github.com/99designs/gqlgen
mkdir tools &amp;amp;&amp;amp; cd tools
touch tools.go
cd ..
code .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open file tools.go and add the following content to it&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package tools
import _ "github.com/99designs/gqlgen"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save your changes in VSCode and go back to the terminal. Let's now generate a sample GraphQL schema and the Go glue code to run it&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go run github.com/99designs/gqlgen init
Creating gqlgen.yml
Creating graph/schema.graphqls
Creating server.go
Generating…
go: downloading gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15
go: downloading github.com/kr/pretty v0.1.0
go: downloading github.com/kr/text v0.1.0
Exec "go run ./server.go" to start GraphQL server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should now see a bunch of files added. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;model/models_gen.go – Go data types for the GraphQL types created in schema.graphqls&lt;/li&gt;
&lt;li&gt;generated/generated.go – generated code that injects context and middleware for each query and mutation&lt;/li&gt;
&lt;li&gt;schema.graphqls – the GraphQL schema file where types, queries, and mutations are defined&lt;/li&gt;
&lt;li&gt;schema.resolvers.go – a Go file with wrapper code for the queries and mutations defined in schema.graphqls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to make any changes in schema.graphqls, make them, delete schema.resolvers.go, and run the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go run github.com/99designs/gqlgen generate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Back in the terminal, run the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go run ./server.go
2022/06/22 18:20:03 connect to http://localhost:8080/ for GraphQL playground
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Head over to &lt;a href="http://localhost:8080"&gt;http://localhost:8080&lt;/a&gt;. You’ll see the playground.&lt;/p&gt;

&lt;p&gt;Let’s introduce a custom schema and see how it works. Let’s say we want to have a simple CRUD ops on Todo model. Nothing complicated in Todo model — Id, Text and Done should be good for us to start.&lt;/p&gt;

&lt;p&gt;Head over to “schema.graphqls” replace file content with the following&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Todo {
  id: ID!
  text: String!
  done: Boolean!
  todo: Todo
}

type Query {
  todos: [Todo!]!
  findTodo(id: String!): Todo
}

input NewTodo {
  text: String!
  Id: String!
}

input DeleteTodo {
 Id: String!
}

type Mutation {
  createTodo(input: NewTodo!): Todo!
  removeTodo(input: DeleteTodo!): Todo!
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;We have defined a Todo type&lt;/li&gt;
&lt;li&gt;Couple of queries — One to list all todos and other to search for todo&lt;/li&gt;
&lt;li&gt;Input types — Types for mutation requests.&lt;/li&gt;
&lt;li&gt;Mutation — Capability to modify data/model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To proceed, delete "schema.resolvers.go" and head back to the terminal&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go run github.com/99designs/gqlgen generate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should not see any errors, and a new “schema.resolvers.go” file should be generated.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// CreateTodo is the resolver for the createTodo field.
func (r *mutationResolver) CreateTodo(ctx context.Context, input model.NewTodo) (*model.Todo, error) {
    log.Println("Create a new Todo")
    uuidValue := uuid.NewString()
    todo := &amp;amp;model.Todo{ID: uuidValue, Text: input.Text, Done: true}
    todos = append(todos, todo)
    return todo, nil
}

// RemoveTodo is the resolver for the removeTodo field.
func (r *mutationResolver) RemoveTodo(ctx context.Context, input model.DeleteTodo) (*model.Todo, error) {
    index := -1
    for i, todo := range todos {
        if todo.ID == input.ID {
            index = i
        }
    }
    if index == -1 {
        return nil, errors.New("cannot find the todo you are looking for")
    }
    todo := todos[index]
    todos = append(todos[:index], todos[index+1:]...)

    return todo, nil
}

// Todos is the resolver for the todos field.
func (r *queryResolver) Todos(ctx context.Context) ([]*model.Todo, error) {
    return todos, nil
}

// FindTodo is the resolver for the findTodo field.
func (r *queryResolver) FindTodo(ctx context.Context, id string) (*model.Todo, error) {
    for _, todo := range todos {
        if todo.ID == id {
            return todo, nil
        }
    }
    return nil, errors.New("cannot find the todo you are looking for")
}

// Mutation returns MutationResolver implementation.
func (r *Resolver) Mutation() MutationResolver { return &amp;amp;mutationResolver{r} }

// Query returns QueryResolver implementation.
func (r *Resolver) Query() QueryResolver { return &amp;amp;queryResolver{r} }

type mutationResolver struct{ *Resolver }
type queryResolver struct{ *Resolver }

// !!! WARNING !!!
// The code below was going to be deleted when updating resolvers. It has been copied here so you have
// one last chance to move it out of harms way if you want. There are two reasons this happens:
//   - When renaming or deleting a resolver the old code will be put in here. You can safely delete
//     it when you're done.
//   - You have helper methods in this file. Move them out to keep these resolver files clean.
func (r *queryResolver) Todo(ctx context.Context, id string) (*model.Todo, error) {
    panic(fmt.Errorf("not implemented: Todo - todo"))
}

var todos []*model.Todo

func init() {
    log.Println("Init - Todo array to be created")
    todos = make([]*model.Todo, 0)
    todos = append(todos, &amp;amp;model.Todo{ID: "1", Text: "Hello One", Done: true, Todo: &amp;amp;model.Todo{Text: "Text1"}})
    todos = append(todos, &amp;amp;model.Todo{ID: "2", Text: "Hello Two", Done: true, Todo: &amp;amp;model.Todo{Text: "Text2"}})
    todos = append(todos, &amp;amp;model.Todo{ID: "3", Text: "Hello Three", Done: true, Todo: &amp;amp;model.Todo{Text: "Text3"}})
    log.Println("Init - Todo array has been created")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run server.go and see how the application works using the queries below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query{
    todos{
        id
        text
        done
    }
}

Find Todo

query{
    findTodo(id:"1"){
        id
        text
        done
    }
}

Create Todo

mutation createTodo($todo: NewTodo!){
    createTodo(input:$todo){
        id
        text
        done
    }
}

Query Variables:

{
    "todo": {
        "Id": "4",
        "text": "Hello 4"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Reference:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://medium.com/@krishnan.srm/graphql-with-golang-331de956d956"&gt;GraphQL with Golang Reference&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.apollographql.com/blog/graphql/golang/using-graphql-with-golang/"&gt;GraphQL&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>golang</category>
      <category>graphql</category>
    </item>
    <item>
      <title>NATS WITH GOLANG USING DOCKER</title>
      <dc:creator>murugan</dc:creator>
      <pubDate>Thu, 21 Jul 2022 09:52:00 +0000</pubDate>
      <link>https://forem.com/krpmuruga/nats-with-golang-using-docker-dhj</link>
      <guid>https://forem.com/krpmuruga/nats-with-golang-using-docker-dhj</guid>
      <description>&lt;h1&gt;
  
  
  NATS
&lt;/h1&gt;

&lt;p&gt;NATS is an open-source, lightweight, high-performance messaging system written in Go. It provides an abstraction layer between an application or service and the underlying physical network. Data is encoded and sent as a message by a publisher; messages are received, decoded, and processed by one or more subscribers.&lt;br&gt;
NATS makes it easy for programs to communicate across different environments, languages, cloud providers, and internal systems.&lt;/p&gt;
&lt;h3&gt;
  
  
  Advantages
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Ease of use&lt;/li&gt;
&lt;li&gt;Highly performant&lt;/li&gt;
&lt;li&gt;Zero downtime scaling&lt;/li&gt;
&lt;li&gt;Supports edge, cloud or hybrid deployments&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Install NATS in windows Locally
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;choco install nats-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;To start a simple demonstration server locally, simply run:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;nats-server&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;nats-server -m 8222 (if you want to enable the HTTP monitoring functionality)&lt;br&gt;
You can also test the monitoring endpoint, viewing &lt;a href="http://localhost:8222" rel="noopener noreferrer"&gt;http://localhost:8222&lt;/a&gt; with a browser.&lt;/p&gt;

&lt;p&gt;Installing via Docker&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;docker pull nats:latest&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To run NATS on docker&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;docker run -p 4222:4222 -ti nats:latest&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;NATS Server Containerization&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;docker pull nats&lt;br&gt;
docker run nats&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;By default the NATS server exposes multiple ports:&lt;br&gt;
4222 is for clients.&lt;br&gt;
8222 is an HTTP management port for information reporting.&lt;br&gt;
6222 is a routing port for clustering.&lt;br&gt;
Use -p or -P to customize.&lt;/p&gt;

&lt;p&gt;Or&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;docker run -it nats:alpine sh&lt;br&gt;
docker run -itd nats:alpine&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To run a server with the ports exposed on a docker network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker network create nats
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --name nats --network nats --rm -p 4222:4222 -p 8222:8222 nats --http_port 8222

or 

docker run -itd --restart always --name nats-alpine --network nats -p 4222:4222 -p 8222:8222 nats:alpine --http_port 8222
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  NATS CLI
&lt;/h3&gt;

&lt;p&gt;Downloading Release Build&lt;/p&gt;

&lt;p&gt;You can find the latest release of nats-server at &lt;a href="https://github.com/nats-io/nats-server/releases"&gt;https://github.com/nats-io/nats-server/releases&lt;/a&gt;.&lt;br&gt;
You can manually download the zip file matching your system's architecture and unzip it, or use curl to download a specific version.&lt;/p&gt;

&lt;p&gt;Download the build matching your machine (for example "nats-0.0.33-windows-amd64") from the releases page (&lt;a href="https://github.com/nats-io/nats-server/releases" rel="noopener noreferrer"&gt;https://github.com/nats-io/nats-server/releases&lt;/a&gt;), unzip it on your machine, navigate into the extracted folder, and run the nats account info command, which shows the account information. Run the commands below to publish and subscribe to messages&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;nats-server -js&lt;br&gt;
nats account info&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Publish Message&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;nats publish hello my-data&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Subscribe&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;nats subscribe hello&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Request-Reply&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;nats request hello&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Reply&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;nats reply hello data&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Benchmark&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;nats bench --msgs 1000 --pub 2 --sub 2 test&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Subject based Message
&lt;/h3&gt;

&lt;p&gt;Fundamentally, NATS is about publishing and listening to messages, both of which depend heavily on the message's subject. Simply put, a subject is a string that publishers and subscribers use to find each other.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrc9n2r8q6oqfbq11sn9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsrc9n2r8q6oqfbq11sn9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Publish Subscribe
&lt;/h3&gt;

&lt;p&gt;NATS implements a publish-subscribe message distribution model for one-to-many communication. A publisher sends a message on a subject and any active subscriber listening on that subject receives the message. Subscribers can also register interest in wildcard subjects that work a bit like a regular expression (but only a bit). This one-to-many pattern is sometimes called a fan-out.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faiaw7j0kms3og2yks1de.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faiaw7j0kms3og2yks1de.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example&lt;/p&gt;

&lt;p&gt;Pub&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nc, _ := nats.Connect(nats.DefaultURL)
defer nc.Close()

nc.Publish("foo", []byte("Hello World!"))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sub&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nc, _ := nats.Connect(nats.DefaultURL)
defer nc.Close()

nc.Subscribe("foo", func(m *nats.Msg) {
    fmt.Printf("Received a message: %s\n", string(m.Data))
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Request Reply
&lt;/h3&gt;

&lt;p&gt;Request-reply is a common pattern in modern distributed systems. When sending a request, the application either waits for a response with a specific timeout or receives the response asynchronously. With the increasing complexity of modern systems, many technologies need additional components to provide the complete feature set.&lt;br&gt;
NATS supports this pattern through its core communication mechanisms (publish and subscribe). A request is published on a given subject together with a reply subject; a responder listens on that subject and sends its response to the reply subject. Reply subjects are usually dynamically created “inbox” subjects that are routed back to the requester, regardless of the location of either party.&lt;br&gt;
NATS even allows multiple responses, where the first one is used and the system efficiently discards the rest. This lets a pattern with multiple responders reduce response latency and jitter.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5583usftbzrof7btyxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5583usftbzrof7btyxy.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Go to the /nats-0.0.33-windows-amd64 path and run the commands below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Start the first member of the queue group (in a new window)
&amp;gt; nats reply hello "service instance A Reply# {{Count}}"

2. Start the second member of the queue group (in a new window)
&amp;gt; nats reply hello "service instance B Reply# {{Count}}"

3. Start the third member of the queue group (in a new window)
&amp;gt; nats reply hello "service instance C Reply# {{Count}}"

4. Publish a NATS request (in a new window)
&amp;gt; nats request hello "Simple request"

5. Publish another message (in a new window)
&amp;gt; nats pub hello "Another simple request"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see that only one of the hello subscribers receives each message and replies to it, and from the reply received you can tell which of the available subscribers processed the request (i.e. service instance A, B, or C).&lt;/p&gt;

&lt;p&gt;Example&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nc, _ := nats.Connect(nats.DefaultURL)
defer nc.Close()

nc.Subscribe("foo", func(m *nats.Msg) {
    nc.Publish(m.Reply, []byte("I will help you"))
})

reply, _ := nc.Request("foo", []byte("help"), 50*time.Millisecond)

fmt.Println(string(reply.Data))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Queue Group
&lt;/h3&gt;

&lt;p&gt;NATS provides a built-in load-balancing feature called distributed queues. Queue subscribers balance message delivery across a set of subscribers, which can be used to provide application fault tolerance and scale workload processing.&lt;br&gt;
To create a queue subscription, the subscriber only needs to register a queue name. All subscribers with the same queue name form a queue group; no configuration is required. When a message is published on a registered subject, one member of the group is chosen at random to receive it. Although the queue group has multiple subscribers, each message is consumed by only one of them.&lt;br&gt;
An important feature of NATS is that queue groups are defined by the application and its queue subscribers, not by server configuration.&lt;br&gt;
Queue subscribers are ideal for scaling services. Scaling up is as simple as running another instance of the application, and scaling down means terminating an instance with a signal that drains the running requests. This flexibility and lack of configuration changes make NATS an excellent service communication technology that can work with all platform technologies.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fba0xojiunlg8l4yr6efi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fba0xojiunlg8l4yr6efi.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nc, _ := nats.Connect(nats.DefaultURL)
defer nc.Close()

received := 0

nc.QueueSubscribe("foo", "worker_group", func(_ *nats.Msg) {
    received++
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Acknowledgements
&lt;/h3&gt;

&lt;p&gt;In systems with at-most-once delivery semantics, messages are sometimes lost. If your application performs request-reply, it should use a timeout to handle any network or application failures. It’s always a good idea to set a timeout on a request and have code that handles the timeout. When publishing an event or data stream, one way to ensure message delivery is to convert it into a request-reply with an ACK concept. In NATS, an ACK can simply be an empty message, i.e. a message without a payload&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4j25ze29mfrpwu4vgasf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4j25ze29mfrpwu4vgasf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nc, _ := nats.Connect(nats.DefaultURL)
defer nc.Close()

nc.Subscribe("foo", func(m *nats.Msg) {
    //nc.Publish(m.Reply, []byte("I will help you"))
    m.Respond([]byte(""))
})

reply, _ := nc.Request("foo", []byte("help"), 50*time.Millisecond)

fmt.Println("ack:", string(reply.Data))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  NATS-TOP
&lt;/h3&gt;

&lt;p&gt;nats-top is a top-like tool for monitoring NATS servers.&lt;br&gt;
It provides a dynamic real-time view of a NATS server, displaying a variety of summary information such as subscriptions, pending bytes, and message counts in real time.&lt;/p&gt;

&lt;p&gt;Installation with Go&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go install github.com/nats-io/nats-top
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To start the NATS server with monitoring enabled, use the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nats-server -m 8222
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start nats-top&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nats-top
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="http://thinkmicroservices.com/blog/2021/nats/nats.html" rel="noopener noreferrer"&gt;Reference 1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.nats.io/" rel="noopener noreferrer"&gt;Reference 2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developpaper.com/an-introduction-to-the-messaging-model-of-golang-nats/" rel="noopener noreferrer"&gt;Reference 3&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>nats</category>
      <category>go</category>
      <category>docker</category>
    </item>
    <item>
      <title>Containerizing your Go Applications with Docker</title>
      <dc:creator>murugan</dc:creator>
      <pubDate>Thu, 03 Mar 2022 14:52:23 +0000</pubDate>
      <link>https://forem.com/krpmuruga/go-applications-with-docker-2f0g</link>
      <guid>https://forem.com/krpmuruga/go-applications-with-docker-2f0g</guid>
      <description>&lt;h1&gt;
  
  
  Containerizing your Go Applications with Docker
&lt;/h1&gt;

&lt;p&gt;Create a &lt;code&gt;docker-go&lt;/code&gt; folder and build a simple web server in Go. Create a main.go file and add the code below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "fmt"
    "html"
    "log"
    "net/http"
)

func main() {

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Hello, %q", html.EscapeString(r.URL.Path))
    })

    http.HandleFunc("/hi", func(w http.ResponseWriter, r *http.Request){
        fmt.Fprintf(w, "Hi")
    })

    log.Fatal(http.ListenAndServe(":8083", nil))

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run this, use &lt;code&gt;go run main.go&lt;/code&gt;, which will start a server on &lt;a href="http://localhost:8083"&gt;http://localhost:8083&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Write Dockerfile
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## We specify the base image we need for our
## go application
FROM golang:1.16-alpine
## We create an /app directory within our
## image that will hold our application source
## files
RUN mkdir /app
## We copy everything in the root directory
## into our /app directory
ADD . /app
## We specify that we now wish to execute 
## any further commands inside our /app
## directory
WORKDIR /app
## we run go build to compile the binary
## executable of our Go program
RUN go build -o main .
## Our start command which kicks off
## our newly created binary executable
CMD ["/app/main"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the command below to create the Docker image&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build -t docker-go .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It will build the code and show output like the following in your terminal&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;C:\work\latest\docker-go&amp;gt;docker build -t docker-go . 
[+] Building 38.4s (10/10) FINISHED
 =&amp;gt; [internal] load build definition from Dockerfile                                                                    0.2s 
 =&amp;gt; =&amp;gt; transferring dockerfile: 643B                                                                                    0.0s 
 =&amp;gt; [internal] load .dockerignore                                                                                       0.2s 
 =&amp;gt; =&amp;gt; transferring context: 2B                                                                                         0.0s 
 =&amp;gt; [internal] load metadata for docker.io/library/golang:1.12.0-alpine3.9                                              3.2s 
 =&amp;gt; [1/5] FROM docker.io/library/golang:1.12.0-alpine3.9@sha256:6c143f415448f883ed034529162b3dc1c85bb2967fdd1579a8735  29.4s 
 =&amp;gt; =&amp;gt; resolve docker.io/library/golang:1.12.0-alpine3.9@sha256:6c143f415448f883ed034529162b3dc1c85bb2967fdd1579a87356  0.1s 
 =&amp;gt; =&amp;gt; sha256:69371d496b2b4e99120216fa3c5057b0c5468411370ab24ea99cd87d7b1d9203 1.36kB / 1.36kB                          0.0s 
 =&amp;gt; =&amp;gt; sha256:2205a315f9c751a8c205aa42f29ad0ff29918c40d85c8ddaabac99e0cb46b5d8 3.80kB / 3.80kB                          0.0s 
 =&amp;gt; =&amp;gt; sha256:6c143f415448f883ed034529162b3dc1c85bb2967fdd1579a873567b22bcb790 2.37kB / 2.37kB                          0.0s 
 =&amp;gt; =&amp;gt; sha256:8e402f1a9c577ded051c1ef10e9fe4492890459522089959988a4852dee8ab2c 2.75MB / 2.75MB                          0.7s 
 =&amp;gt; =&amp;gt; sha256:de1a1e452942df2228b914d2ce9be43f18b137f4ebc3dce9491bc08c2630a2ea 154B / 154B                              0.7s 
 =&amp;gt; =&amp;gt; sha256:ce7779d8bfe3415e27ec3bf5950b2ab67a854c608595f0f2e49066fb5806fd12 301.88kB / 301.88kB                      0.8s 
 =&amp;gt; =&amp;gt; extracting sha256:8e402f1a9c577ded051c1ef10e9fe4492890459522089959988a4852dee8ab2c                               0.7s 
 =&amp;gt; =&amp;gt; sha256:a8c461e224a623234c9f2ff60e4249678c9e6847add7152ac80239b13d14df4c 126B / 126B                              1.0s 
 =&amp;gt; =&amp;gt; sha256:1bdc943bc000449a960c5121688afe0a9b51837407bf0478391b6bad6642d36f 124.28MB / 124.28MB                     15.7s 
 =&amp;gt; =&amp;gt; extracting sha256:ce7779d8bfe3415e27ec3bf5950b2ab67a854c608595f0f2e49066fb5806fd12                               0.5s 
 =&amp;gt; =&amp;gt; extracting sha256:de1a1e452942df2228b914d2ce9be43f18b137f4ebc3dce9491bc08c2630a2ea                               0.0s 
 =&amp;gt; =&amp;gt; extracting sha256:1bdc943bc000449a960c5121688afe0a9b51837407bf0478391b6bad6642d36f                              11.8s 
 =&amp;gt; =&amp;gt; extracting sha256:a8c461e224a623234c9f2ff60e4249678c9e6847add7152ac80239b13d14df4c                               0.0s 
 =&amp;gt; [internal] load build context                                                                                       0.2s 
 =&amp;gt; =&amp;gt; transferring context: 1.06kB                                                                                     0.1s 
 =&amp;gt; [2/5] RUN mkdir /app                                                                                                1.7s
 =&amp;gt; [3/5] ADD . /app                                                                                                    0.3s 
 =&amp;gt; [4/5] WORKDIR /app                                                                                                  0.2s 
 =&amp;gt; [5/5] RUN go build -o main .                                                                                        2.1s 
 =&amp;gt; exporting to image                                                                                                  0.6s 
 =&amp;gt; =&amp;gt; exporting layers                                                                                                 0.5s 
 =&amp;gt; =&amp;gt; writing image sha256:9bbcb2070c03ab1affa9f2dc62292f1cea589a60c05d4796c4490c0fa31afedb                            0.0s 
 =&amp;gt; =&amp;gt; naming to docker.io/library/docker-go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now verify that our image exists on our machine by typing docker images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which will list the local images, including our &lt;code&gt;docker-go&lt;/code&gt; image.&lt;/p&gt;

&lt;p&gt;In order to run this newly created image, we can use the docker run command and pass in the ports we want to map to and the image we wish to run.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 8083:8083 -it docker-go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;-p 8083:8083 - This exposes our application, which is running on port 8083 within our container, on &lt;a href="http://localhost:8083"&gt;http://localhost:8083&lt;/a&gt; on our local machine.&lt;/li&gt;
&lt;li&gt;-it - This flag specifies that we want to run this image in interactive mode with a tty for this container process.&lt;/li&gt;
&lt;li&gt;docker-go - This is the name of the image that we want to run in a container.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If we open up &lt;a href="http://localhost:8083"&gt;http://localhost:8083&lt;/a&gt; within our browser, we should see that our application is successfully responding with Hello, "/".&lt;/p&gt;

&lt;p&gt;The resulting Docker image is roughly 800MB in size, which is absolutely massive for such a small Go application. To reduce this, follow the steps below.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Simple Multi-Stage Dockerfile
&lt;/h2&gt;

&lt;p&gt;In order to see why multi-stage Dockerfiles are useful, we'll be creating a simple Dockerfile that features one stage to both build and run our application, and a second Dockerfile which features both a builder stage and a production stage.&lt;/p&gt;

&lt;p&gt;Once we've created these two distinct Dockerfiles, we should be able to compare them and hopefully see for ourselves just how multi-stage Dockerfiles are preferred over their simpler counterparts!&lt;/p&gt;

&lt;p&gt;With the Dockerfile above, we created a really simple Docker image in which our Go application was both built and run.&lt;/p&gt;

&lt;p&gt;You should notice that the last column of docker images states that this image is 800MB in size. This is absolutely massive for something that builds and runs a very simple Go application.&lt;/p&gt;

&lt;p&gt;Within this image will be all the packages and dependencies that are needed to both compile and run our Go applications. With multi-stage dockerfiles, we can actually reduce the size of these images dramatically by splitting things up into two distinct stages.&lt;/p&gt;

&lt;p&gt;Let's take a look at how we could define a real multi-stage Dockerfile that will first compile our application and subsequently run it in a lightweight Alpine image.&lt;/p&gt;

&lt;p&gt;Next, we'll create a Dockerfile in the same directory as our main.go file above. This will feature a builder stage and a production stage which will be built from two distinct base images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## We'll choose the incredibly lightweight
## Go alpine image to work with
FROM golang:1.16-alpine AS builder

## We create an /app directory in which
## we'll put all of our project code
RUN mkdir /app
ADD . /app
WORKDIR /app
## We want to build our application's binary executable
RUN CGO_ENABLED=0 GOOS=linux go build -o main ./...

## the lightweight alpine image we'll
## run our application within
FROM alpine:latest AS production
## We have to copy the output from our
## builder stage to our production stage
COPY --from=builder /app .
## we can then kick off our newly compiled
## binary executable!!
CMD ["./main"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
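&lt;p&gt;One common refinement of this builder stage, assuming the project uses Go modules (i.e. go.mod and go.sum exist), is to copy the module files and download dependencies before copying the rest of the source, so Docker can cache the dependency layer across code-only rebuilds. A sketch:&lt;/p&gt;

```dockerfile
## Builder stage: cache module downloads separately
FROM golang:1.16-alpine AS builder
WORKDIR /app
## Copy only the module files first; this layer is
## reused as long as go.mod/go.sum are unchanged
COPY go.mod go.sum ./
RUN go mod download
## Now copy the source and build a static binary
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o main .

## Production stage: small runtime image
FROM alpine:latest AS production
COPY --from=builder /app/main .
CMD ["./main"]
```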



&lt;p&gt;Now that we've defined this multi-stage Dockerfile, we can proceed to build it using the standard docker build command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t go-multi-stage .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, when we compare the sizes of our simple image against our multi-stage image, we should see a dramatic difference in sizes. Our previous, docker-go image was roughly 800MB in size, whereas this multi-stage image is about 1/80th the size.&lt;/p&gt;

&lt;p&gt;If we want to try running this to verify it all works, we can do so using the following docker run command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -p 8083:8083 go-multi-stage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we open up &lt;a href="http://localhost:8083"&gt;http://localhost:8083&lt;/a&gt; within our browser, we should see that our application is successfully responding with Hello, "/".&lt;/p&gt;

</description>
      <category>go</category>
      <category>docker</category>
    </item>
    <item>
      <title># Golang Profiling</title>
      <dc:creator>murugan</dc:creator>
      <pubDate>Tue, 22 Feb 2022 05:48:04 +0000</pubDate>
      <link>https://forem.com/krpmuruga/-golang-profiling-3j</link>
      <guid>https://forem.com/krpmuruga/-golang-profiling-3j</guid>
      <description>&lt;h2&gt;
  
  
  Golang Profiling
&lt;/h2&gt;

&lt;p&gt;Profiling is a form of analyzing the program for optimizable code or functions. In software engineering, it is an essential task since optimization is a key factor when developing an application. &lt;br&gt;
Avoiding memory leaks and optimizing for better performance is almost always a target for enterprise-level software.&lt;/p&gt;

&lt;p&gt;Profiling is an important task that cannot be avoided for larger applications. It helps us understand CPU- and memory-intensive code and write better-optimized code.&lt;/p&gt;

&lt;p&gt;To create any profile we first need a test file. Here we will profile a Fibonacci function&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// main.go
package main

func Fib2(n int) uint64 {
    if n == 0 {
        return 0
    } else if n == 1 {
        return 1
    } else {
        return Fib2(n-1) + Fib2(n-2)
    }
}

func main() {
    // fmt.Println(Fib2(30)) // 832040
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, the test file is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// main_test.go
package main

import "testing"

func TestGetVal(t *testing.T) {
    for i := 0; i &amp;lt; 1000; i++ {             // running it a 1000 times
        if Fib2(30) != 832040 {
            t.Error("Incorrect!")
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
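&lt;p&gt;Note that the -bench . flag used below only does extra work when the package defines a Benchmark function. A minimal one for Fib2 might look like the body below; it is shown via testing.Benchmark so it can run as a standalone program, but in main_test.go you would declare it as func BenchmarkFib(b *testing.B):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"testing"
)

// Fib2 is the same exponential-time recursive Fibonacci from main.go.
func Fib2(n int) uint64 {
	if n < 2 {
		return uint64(n)
	}
	return Fib2(n-1) + Fib2(n-2)
}

func main() {
	// testing.Benchmark runs a benchmark body outside `go test`;
	// inside a test file this loop would live in BenchmarkFib.
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			Fib2(20)
		}
	})
	fmt.Println(res.N > 0) // the benchmark ran at least one iteration
}
```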



&lt;p&gt;Now when we run &lt;code&gt;go test&lt;/code&gt; we get the output below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output:

go test
Pass
Ok
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It took almost 7.25s to complete. Now let’s create a CPU profile. We will use the command shown below to generate a profile file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go test -cpuprofile cpu.prof -bench . 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which will return the below output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS C:\work\latest\go-tutorial\profile&amp;gt; go test -cpuprofile cpu.prof -bench .
PASS
ok      profile 8.052s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we will view it using the pprof tool. The command will be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go tool pprof cpu.prof
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we run the above command we get the output below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS C:\work\latest\go-tutorial\profile&amp;gt; go tool pprof cpu.prof
Type: cpu
Time: Feb 22, 2022 at 10:53am (IST)
Duration: 7.64s, Total samples = 4.86s (63.60%)
Entering interactive mode (type "help" for commands, "o" for options)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Typing help will show all available commands. We will run the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;top5 -cum
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The topN command shows the top N entries, and the -cum flag sorts by cumulative time taken&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(pprof) top5 -cum
Showing nodes accounting for 4700ms, 96.71% of 4860ms total
Dropped 33 nodes (cum &amp;lt;= 24.30ms)
Showing top 5 nodes out of 15
      flat  flat%   sum%        cum   cum%
    4700ms 96.71% 96.71%     4700ms 96.71%  profile.Fib2
         0     0% 96.71%     4700ms 96.71%  profile.TestGetVal
         0     0% 96.71%     4700ms 96.71%  testing.tRunner
         0     0% 96.71%       90ms  1.85%  runtime.mcall
         0     0% 96.71%       90ms  1.85%  runtime.park_m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Based on these results, we can optimize our code.&lt;/p&gt;
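&lt;p&gt;For example, since the profile attributes roughly 97% of CPU time to profile.Fib2, the natural optimization is to replace the exponential-time recursion with an iterative version (FibIter is an illustrative name):&lt;/p&gt;

```go
package main

import "fmt"

// FibIter computes the nth Fibonacci number in O(n) time and O(1)
// space, replacing the exponential-time recursive Fib2.
func FibIter(n int) uint64 {
	var a, b uint64 = 0, 1
	for i := 0; i < n; i++ {
		a, b = b, a+b
	}
	return a
}

func main() {
	fmt.Println(FibIter(30)) // 832040, same result as Fib2(30)
}
```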

&lt;p&gt;To create a memory profile we simply use this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go test -memprofile mem.prof -bench . 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which will return the below output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS C:\work\latest\go-tutorial\profile&amp;gt; go test -memprofile mem.prof -bench .
PASS
ok      profile 0.644s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can generate both profiles (CPU and memory) in one run using the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go test -cpuprofile cpu.prof -memprofile mem.prof -bench . 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which will return the below output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS C:\work\latest\go-tutorial\profile&amp;gt; go test -cpuprofile cpu.prof -memprofile mem.prof -bench .
PASS
ok      profile 0.655s 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>go</category>
      <category>profiling</category>
    </item>
    <item>
      <title>Basic Linux commands for developers</title>
      <dc:creator>murugan</dc:creator>
      <pubDate>Thu, 10 Feb 2022 09:25:00 +0000</pubDate>
      <link>https://forem.com/krpmuruga/basic-linux-commands-for-developers-3210</link>
      <guid>https://forem.com/krpmuruga/basic-linux-commands-for-developers-3210</guid>
      <description>&lt;h1&gt;
  
  
  Basic Linux Commands for developers
&lt;/h1&gt;

&lt;h3&gt;
  
  
  FILE AND DIRECTORY COMMANDS
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;S.No&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Commands&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;List all files in a long listing (detailed) format&lt;/td&gt;
&lt;td&gt;ls -al&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Display the present working directory&lt;/td&gt;
&lt;td&gt;pwd&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Create a directory&lt;/td&gt;
&lt;td&gt;mkdir&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Remove (delete) file&lt;/td&gt;
&lt;td&gt;rm file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Remove the directory and its contents recursively&lt;/td&gt;
&lt;td&gt;rm -r directory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;Force removal of file without prompting for confirmation&lt;/td&gt;
&lt;td&gt;rm -f file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;Copy file1 to file2&lt;/td&gt;
&lt;td&gt;cp file1 file2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;Copy source_directory recursively to destination. If destination exists, copy source_directory into destination, otherwise create destination with the contents of source_directory&lt;/td&gt;
&lt;td&gt;cp -r source_directory destination&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;Rename or move file1 to file2. If file2 is an existing directory, move file1 into directory file2&lt;/td&gt;
&lt;td&gt;mv file1 file2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;View the contents of file&lt;/td&gt;
&lt;td&gt;cat file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;Browse through a text file&lt;/td&gt;
&lt;td&gt;less file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;Display the first 10 lines of file&lt;/td&gt;
&lt;td&gt;head file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;td&gt;Display the last 10 lines of file&lt;/td&gt;
&lt;td&gt;tail file&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  ARCHIVES (TAR FILES)
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;S.No&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Commands&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Create tar named archive.tar containing directory&lt;/td&gt;
&lt;td&gt;tar cf archive.tar directory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Extract the contents from archive.tar&lt;/td&gt;
&lt;td&gt;tar xf archive.tar&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Create a gzip compressed tar file name archive.tar.gz.&lt;/td&gt;
&lt;td&gt;tar czf archive.tar.gz directory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Extract a gzip compressed tar file.&lt;/td&gt;
&lt;td&gt;tar xzf archive.tar.gz&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Create a tar file with bzip2 compression&lt;/td&gt;
&lt;td&gt;tar cjf archive.tar.bz2 directory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;Extract a bzip2 compressed tar file&lt;/td&gt;
&lt;td&gt;tar xjf archive.tar.bz2&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  FILE PERMISSIONS
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;S.No&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Commands&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;chmod 777 filename&lt;/td&gt;
&lt;td&gt;rwx rwx rwx&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;chmod 775 filename&lt;/td&gt;
&lt;td&gt;rwx rwx r-x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;chmod 755 filename&lt;/td&gt;
&lt;td&gt;rwx r-x r-x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;chmod 664 filename&lt;/td&gt;
&lt;td&gt;rw- rw- r--&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;chmod 644 filename&lt;/td&gt;
&lt;td&gt;rw- r-- r--&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  NETWORKING
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;S.No&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Commands&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Display DNS information for domain&lt;/td&gt;
&lt;td&gt;dig domain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Display DNS IP address for domain&lt;/td&gt;
&lt;td&gt;host domain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Display all local IP addresses of the host.&lt;/td&gt;
&lt;td&gt;hostname -I&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Display listening tcp and udp ports and corresponding programs&lt;/td&gt;
&lt;td&gt;netstat -nutlp&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  INSTALLING PACKAGES
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;S.No&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Commands&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Search for a package by keyword&lt;/td&gt;
&lt;td&gt;yum search keyword&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Install package&lt;/td&gt;
&lt;td&gt;yum install package&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Display description and summary information about package&lt;/td&gt;
&lt;td&gt;yum info package&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Install package from local file named package.rpm&lt;/td&gt;
&lt;td&gt;rpm -i package.rpm&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Remove/uninstall package&lt;/td&gt;
&lt;td&gt;yum remove package&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  SEARCH
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;S.No&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Commands&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Search for pattern in file&lt;/td&gt;
&lt;td&gt;grep pattern file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Search recursively for pattern in directory&lt;/td&gt;
&lt;td&gt;grep -r pattern directory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Find files and directories by name&lt;/td&gt;
&lt;td&gt;locate name&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Find files in /home/john that start with "prefix".&lt;/td&gt;
&lt;td&gt;find /home/john -name 'prefix*'&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  VIM Exiting
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;S.No&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Commands&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;write (save) the file, but don’t exit&lt;/td&gt;
&lt;td&gt;:w&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;write out the current file using sudo&lt;/td&gt;
&lt;td&gt;:w !sudo tee %&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;write (save) and quit&lt;/td&gt;
&lt;td&gt;:wq or :x or ZZ&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;quit (fails if there are unsaved changes)&lt;/td&gt;
&lt;td&gt;:q&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;quit and throw away unsaved changes&lt;/td&gt;
&lt;td&gt;:q! or ZQ&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;search for pattern&lt;/td&gt;
&lt;td&gt;/pattern&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
      <category>linux</category>
      <category>vim</category>
    </item>
    <item>
      <title>Docker-compose basic tutorial</title>
      <dc:creator>murugan</dc:creator>
      <pubDate>Mon, 07 Feb 2022 12:18:51 +0000</pubDate>
      <link>https://forem.com/krpmuruga/docker-compose-basic-tutorial-5hhh</link>
      <guid>https://forem.com/krpmuruga/docker-compose-basic-tutorial-5hhh</guid>
      <description>&lt;h1&gt;
  
  
  Docker Compose Tutorial
&lt;/h1&gt;

&lt;p&gt;Docker simplifies the process of managing application processes in containers. While containers are similar to virtual machines in certain ways, they are more lightweight and resource-friendly. This allows developers to break down an application environment into multiple isolated services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Compose
&lt;/h2&gt;

&lt;p&gt;Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.&lt;br&gt;
Compose works in all environments: production, staging, development, and testing, as well as CI workflows.&lt;/p&gt;

&lt;p&gt;Setting up a docker-compose.yml file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mkdir ~/compose-demo&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd ~/compose-demo&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mkdir app&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;nano app/index.html&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!doctype html&amp;gt;
&amp;lt;html lang="en"&amp;gt;
&amp;lt;head&amp;gt;
    &amp;lt;meta charset="utf-8"&amp;gt;
    &amp;lt;title&amp;gt;Docker Compose Demo&amp;lt;/title&amp;gt;
    &amp;lt;link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/kognise/water.css@latest/dist/dark.min.css"&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;

    &amp;lt;h1&amp;gt;This is a Docker Compose Demo Page.&amp;lt;/h1&amp;gt;
    &amp;lt;p&amp;gt;This content is being served by an Nginx container.&amp;lt;/p&amp;gt;

&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;nano docker-compose.yml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.7'
services:
  web:
    image: nginx:alpine
    ports:
      - "8000:80"
    volumes:
      - ./app:/usr/share/nginx/html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then have the services block, where we set up the services that are part of this environment. In our case, we have a single service called web. This service uses the nginx:alpine image and sets up a port redirection with the ports directive. All requests on port 8000 of the host machine (the system from where you’re running Docker Compose) will be redirected to the web container on port 80, where Nginx will be running. &lt;/p&gt;

&lt;p&gt;The volumes directive will create a shared volume between the host machine and the container. This will share the local app folder with the container, and the volume will be located at /usr/share/nginx/html inside the container, which will then overwrite the default document root for Nginx.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose up -d&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The following command will show you information about the running containers and their state, as well as any port redirections currently in place:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose ps&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To check the logs produced by your Nginx container, you can use the logs command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose logs&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you want to pause the environment execution without changing the current state of your containers, you can use:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose pause&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To resume execution after issuing a pause:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose unpause&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The stop command will terminate the container execution, but it won’t destroy any data associated with your containers:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose stop&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you want to remove the containers, networks, and volumes associated with this containerized environment, use the down command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose down&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Notice that this won’t remove the base image used by Docker Compose to spin up your environment (in our case, nginx:alpine). This way, whenever you bring your environment up again with a docker-compose up, the process will be much faster since the image is already on your system.&lt;/p&gt;

&lt;p&gt;In case you want to also remove the base image from your system, you can use:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker image rm nginx:alpine&lt;/code&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>dockercompose</category>
    </item>
    <item>
      <title>Building Web Applications using Beego + Mysql+ORM</title>
      <dc:creator>murugan</dc:creator>
      <pubDate>Thu, 29 Apr 2021 09:53:47 +0000</pubDate>
      <link>https://forem.com/krpmuruga/building-web-applications-using-beego-mysql-orm-4d2e</link>
      <guid>https://forem.com/krpmuruga/building-web-applications-using-beego-mysql-orm-4d2e</guid>
      <description>&lt;h1&gt;
  
  
  Beego:
&lt;/h1&gt;

&lt;p&gt;Beego is a RESTful HTTP framework for the rapid development of Go applications, including APIs, web apps, and backend services, with integrated Go-specific features such as interfaces and struct embedding.&lt;/p&gt;

&lt;p&gt;The project layout follows MVC: models (M), views (V), and controllers (C) each have a top-level folder, and main.go is the entry point.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Beego
&lt;/h2&gt;

&lt;p&gt;Run the commands below in your GOPATH&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go get -u github.com/beego/beego/v2
go get -u github.com/beego/bee/v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Beego has built-in scaffolding support via the command-line tool bee, which can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create new applications&lt;/li&gt;
&lt;li&gt;Run an application&lt;/li&gt;
&lt;li&gt;Test the application&lt;/li&gt;
&lt;li&gt;Create routes and more&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create The Core Project
&lt;/h2&gt;

&lt;p&gt;Once installed, from your $GOPATH directory, run the following command, which will scaffold the application, called sitepointgoapp:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bee new sitepointgoapp&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Looking at the files, we have:&lt;br&gt;
1. Our bootstrap file main.go&lt;br&gt;
2. The core configuration file conf/app.conf&lt;br&gt;
3. A default controller controllers/default.go&lt;br&gt;
4. A default set of tests in tests/default_test.go&lt;br&gt;
5. A default view template in views/index.tpl&lt;/p&gt;

&lt;p&gt;The basic app is now ready; run the command below&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bee run&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;This loads our new application.&lt;/p&gt;

&lt;p&gt;Router settings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;beego.Router("/user/home", &amp;amp;controllers.UserController{}, "*:User")
beego.Router("/user/add", &amp;amp;controllers.UserController{}, "get,post:Add")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here I've used MySQL with Beego's ORM. First, we need to install the MySQL driver for Go with the command below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go get -u github.com/go-sql-driver/mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first import brings in Beego’s ORM library &lt;strong&gt;&lt;em&gt;"github.com/astaxie/beego/orm"&lt;/em&gt;&lt;/strong&gt;, the second provides support for MySQL, required because we’re using a MySQL database. The third imports the models we just created, giving them the alias models.&lt;/p&gt;

&lt;p&gt;Create the orm_test database in MySQL and run the query below to create the users table.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE `users` (
  `id` int(11) NOT NULL,
  `name` varchar(200) NOT NULL,
  `client` varchar(200) NOT NULL,
  `url` varchar(200) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
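&lt;p&gt;For reference, the users table maps to a Go model struct along these lines (a sketch; in the scaffolded app it would live in models/user.go and be registered with orm.RegisterModel):&lt;/p&gt;

```go
package main

import "fmt"

// User mirrors the columns of the users table above.
// Fields are exported so the ORM can access them; Beego's ORM
// treats an Id field as the auto primary key by default.
type User struct {
	Id     int
	Name   string
	Client string
	Url    string
}

func main() {
	u := User{Id: 1, Name: "murugan", Client: "forem", Url: "https://forem.com"}
	fmt.Println(u.Name, u.Client)
}
```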



&lt;p&gt;In main.go, we need to register the driver along with the database settings for MySQL, like below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;orm.RegisterDriver("mysql", orm.DRMySQL)
orm.RegisterDataBase("default", "mysql", "root:@/orm_test?charset=utf8")
orm.RegisterModel(new(models.User))

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the ORM package using the command below&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;go get github.com/beego/beego/v2/client/orm&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 With it installed, we can insert a record into the database using the ORM&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;id, err := o.Insert(&amp;amp;user)
if err == nil {
    msg := fmt.Sprintf("User inserted with id:", id)
    beego.Debug(msg)
    flash.Notice(msg)
    flash.Store(&amp;amp;manage.Controller)
} else {
    msg := fmt.Sprintf("Couldn't insert new User. Reason: ", err)
    beego.Debug(msg)
    flash.Error(msg)
    flash.Store(&amp;amp;manage.Controller)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we use SQLite, we need to install GCC (the sqlite3 driver requires cgo) and then set up the database settings in the main.go file like below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;orm.RegisterDriver("sqlite", orm.DR_Sqlite)
orm.RegisterDataBase("default", "sqlite3", "database/orm_test.db")
orm.RegisterModel(new(models.User))

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;a href="http://localhost:8086/user/home"&gt;http://localhost:8086/user/home&lt;/a&gt; to see the home page we have built.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beego commands:
&lt;/h2&gt;

&lt;p&gt;1.&lt;strong&gt;&lt;em&gt;bee pack&lt;/em&gt;&lt;/strong&gt; - The pack command compresses the project into a single file. The compressed archive can be deployed by uploading it to the server and extracting it there&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;______
| ___ \
| |_/ /  ___   ___   
| ___ \ / _ \ / _ \  
| |_/ /|  __/|  __/  
\____/  \___| \___| v1.12.0
2021/04/29 15:15:17 INFO     ▶ 0001 Packaging application on 'D:\Go-work\src\github.com\aws-learning\bee-go\sitepointgoapp'...
2021/04/29 15:15:17 INFO     ▶ 0002 Building application (sitepointgoapp)...
2021/04/29 15:15:17 INFO     ▶ 0003 Using: GOOS=windows GOARCH=amd64
2021/04/29 15:15:21 SUCCESS  ▶ 0004 Build Successful!
2021/04/29 15:15:21 INFO     ▶ 0005 Writing to output: D:\Go-work\src\github.com\aws-learning\bee-go\sitepointgoapp\sitepointgoapp.tar.gz
2021/04/29 15:15:21 INFO     ▶ 0006 Excluding relpath prefix: .
2021/04/29 15:15:21 INFO     ▶ 0007 Excluding relpath suffix: .go:.DS_Store:.tmp:go.mod:go.sum
2021/04/29 15:15:24 SUCCESS  ▶ 0008 Application packed!    
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2.&lt;strong&gt;&lt;em&gt;bee bale&lt;/em&gt;&lt;/strong&gt; - This command is currently only available to the developer team. It compresses all static files into a single binary file, so the project does not need to carry static files such as js, css, images, and views when it is published. Those files self-extract, without overwriting existing ones, when the program starts&lt;/p&gt;

&lt;p&gt;3.&lt;strong&gt;&lt;em&gt;bee version&lt;/em&gt;&lt;/strong&gt; - This command displays the version of bee, beego, and go&lt;/p&gt;

&lt;p&gt;4.&lt;strong&gt;&lt;em&gt;bee generate&lt;/em&gt;&lt;/strong&gt; - generates scaffolding (such as models, controllers, and routers) for the given name&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://beego.me/docs/intro/"&gt;What is Beego&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sitepoint.com/go-building-web-applications-beego/"&gt;Go: Building Web Applications with Beego&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sitepoint.com/go-building-web-applications-beego-part-2/"&gt;Go: Building Web Applications With Beego - Part 2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/krpmurugan/golearning/tree/master/bee-go"&gt;Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://beego.me/docs/install/bee.md#:~:text=Command%20new,%3E%20under%20%24GOPATH%2Fsrc%20"&gt;Beego Commands&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>go</category>
      <category>orm</category>
      <category>mysql</category>
      <category>beego</category>
    </item>
    <item>
      <title>Docker Basic</title>
      <dc:creator>murugan</dc:creator>
      <pubDate>Tue, 13 Apr 2021 09:25:25 +0000</pubDate>
      <link>https://forem.com/krpmuruga/docker-basic-24id</link>
      <guid>https://forem.com/krpmuruga/docker-basic-24id</guid>
      <description>&lt;h1&gt;
  
  
  Docker
&lt;/h1&gt;

&lt;p&gt;Docker is a platform used to containerize our software, using which we can easily build our applications and package them, with the dependencies required, into containers, and further these containers are easily shipped to run on other machines.&lt;/p&gt;

&lt;p&gt;In simple words, Docker is a software containerization platform: you can build your application, package it along with its dependencies into a container, and then ship that container to run on other machines.&lt;/p&gt;

&lt;p&gt;For example: let's consider a Linux-based application written in both Ruby and Python. This application requires specific versions of Linux, Ruby, and Python. In order to avoid any version conflicts on the user's end, a Linux Docker container can be created with the required versions of Ruby and Python installed along with the application. Now the end users can run the application easily via this container, without worrying about dependencies or version conflicts.&lt;/p&gt;

&lt;p&gt;Docker is an open-source centralized platform designed to create, deploy, and run applications. Docker uses containers on the host's operating system to run applications. It allows applications to use the same Linux kernel as the host computer, rather than creating a whole virtual operating system. Containers ensure that our application works the same in any environment, such as development, test, or production.&lt;/p&gt;

&lt;p&gt;Docker includes components such as the Docker client, Docker server, Docker Machine, Docker Hub, Docker Compose, etc.&lt;/p&gt;

&lt;p&gt;Docker simplifies the DevOps Methodology by allowing developers to create templates called ‘images’ using which we can create lightweight virtual machines called ‘containers.’ Docker makes things easier for software developers giving them the capability to automate infrastructure, isolate applications, maintain consistency, and improve resource utilization.&lt;/p&gt;

&lt;p&gt;Docker is a containerization platform that packages your application and all its dependencies together in the form of a docker container to ensure that your application works seamlessly in any environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Virtualization
&lt;/h2&gt;

&lt;p&gt;Virtualization is the technique of importing a guest operating system on top of a host operating system. This technique was a revelation at the beginning because it allowed developers to run multiple operating systems in different virtual machines, all running on the same host. This eliminated the need for extra hardware resources. The advantages of virtual machines, or virtualization, are:&lt;br&gt;
    1. Multiple operating systems can run on the same machine&lt;br&gt;
    2. Maintenance and recovery are easy in case of failure conditions&lt;br&gt;
    3. The total cost of ownership is lower due to the reduced need for infrastructure&lt;/p&gt;

&lt;h2&gt;
  
  
  Containerization
&lt;/h2&gt;

&lt;p&gt;Containerization is the technique of bringing virtualization to the operating system level. While virtualization brings abstraction to the hardware, containerization brings abstraction to the operating system. Do note that containerization is also a type of virtualization. Containerization is, however, more efficient because there is no guest OS: a container utilizes the host's operating system and shares relevant libraries and resources as and when needed, unlike a virtual machine. Application-specific binaries and libraries of containers run on the host kernel, which makes processing and execution very fast. Even booting up a container takes only a fraction of a second. Because all the containers share the host operating system and hold only the application-related binaries and libraries, they are lightweight and faster than virtual machines.&lt;/p&gt;

&lt;h3&gt;
  
  
  Advantages of Containerization over Virtualization:
&lt;/h3&gt;

&lt;p&gt;1. Containers on the same OS kernel are lighter and smaller&lt;br&gt;
2. Better resource utilization compared to VMs&lt;br&gt;
3. The boot-up process is short, taking only a few seconds&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Containers
&lt;/h3&gt;

&lt;p&gt;Docker containers are lightweight alternatives to virtual machines. They allow developers to package up the application with all its libraries and dependencies and ship it as a single package. An advantage of using a Docker container is that you don't need to pre-allocate RAM and disk space for the applications; storage and space are allocated automatically according to the application's requirements.&lt;/p&gt;

&lt;p&gt;It is basically the instance of an image. Multiple containers can exist for a single image.&lt;/p&gt;

&lt;p&gt;Docker Containers are the ready applications created from Docker Images. Or you can say they are running instances of the Images and they hold the entire package needed to run the application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Images
&lt;/h3&gt;

&lt;p&gt;Images are nothing but read-only binary templates from which containers are built.&lt;br&gt;
A Docker image can be compared to a template which is used to create Docker containers. They are the building blocks of a Docker container. Docker images are created using the build command, and these read-only templates are then used to create containers using the run command.&lt;/p&gt;

&lt;h3&gt;
  
  
  Virtual Machine
&lt;/h3&gt;

&lt;p&gt;A virtual machine is software that allows us to install and use other operating systems (Windows, Linux, Debian) simultaneously on our machine. The operating systems that run inside virtual machines are called virtualized operating systems. These virtualized operating systems can run programs and perform tasks as we would in a real operating system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Containers Vs. Virtual Machine
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Containers&lt;/th&gt;
&lt;th&gt;Virtual Machine&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Integration in a container is faster and cheaper.&lt;/td&gt;
&lt;td&gt;Integration in a virtual machine is slower and costlier.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Little wastage of memory.&lt;/td&gt;
&lt;td&gt;Significant wastage of memory.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;It uses the same kernel, but different distribution.&lt;/td&gt;
&lt;td&gt;It uses multiple independent operating systems.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Why Docker:
&lt;/h3&gt;

&lt;p&gt;1. Docker allows us to easily install and run software without worrying about setup or dependencies.&lt;br&gt;
2. Developers use Docker to eliminate machine-specific problems, i.e. "but the code works on my laptop", when working on code together with co-workers.&lt;br&gt;
3. Operators use Docker to run and manage apps in isolated containers for better compute density.&lt;br&gt;
4. Enterprises use Docker to build secure, agile software delivery pipelines to ship new application features faster.&lt;br&gt;
5. Since Docker is not only used for deployment but is also a great platform for development, it helps us efficiently increase customer satisfaction.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dockerfile, Docker Image And Docker Container:
&lt;/h3&gt;

&lt;p&gt;1. A Docker image is created from the sequence of commands written in a file called a Dockerfile.&lt;br&gt;
2. When this Dockerfile is executed using a docker command, it results in a Docker image with a name.&lt;br&gt;
3. When this image is executed by the “docker run” command, it will by itself start whatever application or service it is meant to start on its execution.&lt;/p&gt;
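&lt;p&gt;As a minimal sketch of that workflow (the image name myapp, the nginx base image, and index.html are illustrative assumptions, not part of the original post):&lt;/p&gt;

```dockerfile
# Dockerfile — each instruction adds a layer to the resulting image
FROM nginx:alpine                        # start from a base image
COPY index.html /usr/share/nginx/html/   # add our application files

# Build the image:      docker build -t myapp .
# Start a container:    docker run -d -p 8080:80 myapp
```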

&lt;h3&gt;
  
  
  Docker Compose:
&lt;/h3&gt;

&lt;p&gt;Docker Compose is basically used to run multiple Docker containers as a single service. Let me give you an example:&lt;/p&gt;

&lt;p&gt;Suppose I have an application which requires WordPress, MariaDB, and phpMyAdmin. I can create one file which would start all of these containers as a service, without the need to start each one separately. This is really useful, especially if you have a microservice architecture.&lt;/p&gt;
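&lt;p&gt;A hedged sketch of such a Compose file (the image tags, host ports, and example password are assumptions for illustration, not values from a real project):&lt;/p&gt;

```yaml
# docker-compose.yml — WordPress + MariaDB + phpMyAdmin as one service
version: '3.7'
services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: example      # assumed password for the sketch
  wordpress:
    image: wordpress
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: example
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - "8081:80"
    environment:
      PMA_HOST: db
```

&lt;p&gt;A single &lt;code&gt;docker-compose up -d&lt;/code&gt; then starts all three containers together.&lt;/p&gt;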

&lt;h3&gt;
  
  
  Docker daemon:
&lt;/h3&gt;

&lt;p&gt;A daemon creates, runs, and monitors containers, along with building and storing images.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Registry
&lt;/h3&gt;

&lt;p&gt;The Docker Registry is where Docker images are stored. The registry can be either a user's local repository or a public repository like Docker Hub, allowing multiple users to collaborate in building an application. Multiple teams within the same organization can also exchange or share containers by uploading them to Docker Hub, which is a cloud repository similar to GitHub.&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/get-started/overview/"&gt;Docker Overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.edureka.co/blog/what-is-docker-container"&gt;Docker Basic&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.javatpoint.com/docker-php-example"&gt;Baisc Docker example&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>container</category>
    </item>
    <item>
      <title>Docker commands with simple PHP example</title>
      <dc:creator>murugan</dc:creator>
      <pubDate>Tue, 13 Apr 2021 09:19:53 +0000</pubDate>
      <link>https://forem.com/krpmuruga/docker-commands-with-simple-php-example-108h</link>
      <guid>https://forem.com/krpmuruga/docker-commands-with-simple-php-example-108h</guid>
      <description>&lt;h1&gt;
  
  
  Docker Commands
&lt;/h1&gt;

&lt;p&gt;1.docker --version&lt;/p&gt;

&lt;p&gt;This command is used to get the currently installed version of docker&lt;/p&gt;

&lt;p&gt;2.docker pull&lt;/p&gt;

&lt;p&gt;docker pull &amp;lt;image name&amp;gt; - This command is used to pull images from the docker repository(hub.docker.com)&lt;/p&gt;

&lt;p&gt;3.docker run&lt;/p&gt;

&lt;p&gt;Usage: docker run -it -d &amp;lt;image name&amp;gt; - This command is used to create a container from an image&lt;/p&gt;

&lt;p&gt;4.docker ps&lt;/p&gt;

&lt;p&gt;This command is used to list the running containers&lt;/p&gt;

&lt;p&gt;5.docker ps -a&lt;/p&gt;

&lt;p&gt;This command is used to show all the running and exited containers&lt;/p&gt;

&lt;p&gt;6.docker exec&lt;/p&gt;

&lt;p&gt;docker exec -it &amp;lt;container id&amp;gt; bash - This command is used to access the running container&lt;/p&gt;

&lt;p&gt;7.docker stop&lt;/p&gt;

&lt;p&gt;docker stop &amp;lt;container id&amp;gt; - This command stops a running container&lt;/p&gt;

&lt;p&gt;8.docker kill&lt;/p&gt;

&lt;p&gt;docker kill &amp;lt;container id&amp;gt; - This command kills the container by stopping its execution immediately. The difference between ‘docker kill’ and ‘docker stop’ is that ‘docker stop’ gives the container time to shut down gracefully; in situations when it is taking too much time to stop, one can opt to kill it&lt;/p&gt;

&lt;p&gt;9.docker commit&lt;/p&gt;

&lt;p&gt;docker commit &amp;lt;container id&amp;gt; &amp;lt;username/image name&amp;gt; - This command creates a new image of an edited container on the local system&lt;/p&gt;

&lt;p&gt;10.docker login&lt;/p&gt;

&lt;p&gt;This command is used to login to the docker hub repository&lt;/p&gt;

&lt;p&gt;11.docker push&lt;/p&gt;

&lt;p&gt;docker push &amp;lt;username/image name&amp;gt; - This command is used to push an image to the docker hub repository&lt;/p&gt;

&lt;p&gt;12.docker images&lt;/p&gt;

&lt;p&gt;This command lists all the locally stored docker images&lt;/p&gt;

&lt;p&gt;13.docker rm&lt;/p&gt;

&lt;p&gt;docker rm &amp;lt;container id&amp;gt; - This command is used to delete a stopped container&lt;/p&gt;

&lt;p&gt;14.docker rmi&lt;/p&gt;

&lt;p&gt;docker rmi &amp;lt;image id&amp;gt; - This command is used to delete an image from local storage&lt;/p&gt;

&lt;p&gt;15.docker build&lt;/p&gt;

&lt;p&gt;docker build &amp;lt;path to dockerfile&amp;gt; - This command is used to build an image from a specified docker file&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Php Application Example
&lt;/h2&gt;

&lt;p&gt;A Dockerfile is a text document that contains commands that are used to assemble an image&lt;/p&gt;

&lt;p&gt;1.Create a directory&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mkdir php-docker-app&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;2.Create a Php File&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?php  
    echo ?Hello, Php?;  
?&amp;gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3.Create a DockerFile&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM php:7.0-apache  
COPY . /var/www/html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4.Create Docker Image&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build -t php-docker-app .&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;This builds the Docker image&lt;/p&gt;

&lt;p&gt;5.Run the Docker image&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -p 8080:80 php-docker-app&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We can see that our Docker image is running and its output is served to the browser on the local host.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>php</category>
    </item>
  </channel>
</rss>
