<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: vaibhav aggarwal</title>
    <description>The latest articles on Forem by vaibhav aggarwal (@alakazam03).</description>
    <link>https://forem.com/alakazam03</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F223622%2Ffe3d2d94-b7bc-44b7-9061-67e0432bf4ca.png</url>
      <title>Forem: vaibhav aggarwal</title>
      <link>https://forem.com/alakazam03</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/alakazam03"/>
    <language>en</language>
    <item>
      <title>Automating Continuous Integration through Jenkins</title>
      <dc:creator>vaibhav aggarwal</dc:creator>
      <pubDate>Thu, 11 Jun 2020 17:57:36 +0000</pubDate>
      <link>https://forem.com/alakazam03/automating-continuous-integration-through-jenkins-448b</link>
      <guid>https://forem.com/alakazam03/automating-continuous-integration-through-jenkins-448b</guid>
      <description>&lt;p&gt;In the last post, &lt;a href="https://dev.to/alakazam03/setting-up-jenkins-on-aws-21pf"&gt;Setting up Jenkins on AWS&lt;/a&gt;  jenkins was installed and hosted on aws. It was accessible at a hosted public address.&lt;/p&gt;

&lt;p&gt;Now, let's connect Jenkins to our VCS (GitHub, in our case). &lt;/p&gt;

&lt;p&gt;While implementing a project, many components are developed in parallel. These components are (or should be) loosely coupled for better scalability. &lt;/p&gt;

&lt;p&gt;Let's understand this through an example:&lt;br&gt;
Suppose a team is working on an e-commerce portal. After high-level design and tech-stack discussions, a basic portal will have services such as a UI, a backend, a database, and a payment system. &lt;/p&gt;

&lt;p&gt;Implementation starts, and after significant progress the services need to be integrated with each other. The code has to be gathered, compiled, and built to check that the integration works. Now imagine doing this every day without any tools: somebody has to sit down, gather the code in one place, compile it, and test it. If any error occurs, that person has to track down the implementer and coordinate on the errors. This slows down the process and adds overhead to the functioning of the team.&lt;/p&gt;

&lt;p&gt;A version control system with a continuous integration pipeline will automate and speed up the process.&lt;/p&gt;

&lt;p&gt;Jenkins, with its many plugins, will pull the code, then build and test it. If any errors occur, the concerned developer or team is notified with the log output.&lt;/p&gt;
&lt;h3&gt;
  
  
  Connect with GitHub
&lt;/h3&gt;

&lt;p&gt;To automate continuous integration, the code should be compiled and built on every new change pushed to the VCS repository. The CI server (Jenkins, here) has to be notified of each change via some mechanism. This is where webhooks come in handy: they allow external services to be notified when certain events happen.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;One of the most powerful aspects of Jenkins is the range of plugins available. Go to the Jenkins console and proceed to the Manage Jenkins section. Select Manage Plugins and make sure the Git plugin is installed; install it otherwise.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QSjhctMt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80520518-2b773d80-89a7-11ea-9b64-8a3d7f6bdb11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QSjhctMt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80520518-2b773d80-89a7-11ea-9b64-8a3d7f6bdb11.png" alt="Screenshot 2020-04-28 at 11 22 36 PM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the concerned GitHub project and proceed to Settings. In the options, you will find the Webhooks section. Select "Add webhook" and enter http://&amp;lt;jenkins_url&amp;gt;/github-webhook/ as the payload URL. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XrDg4g41--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80518888-befb3f00-89a4-11ea-8874-339bfde9489a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XrDg4g41--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80518888-befb3f00-89a4-11ea-8874-339bfde9489a.png" alt="Screenshot 2020-04-27 at 6 17 52 PM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As soon as you save the webhook, GitHub makes a test POST call to the Jenkins webhook. Make sure the call to Jenkins succeeds; this can be verified from the status icon to the right of the added webhook. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xbrRwm7V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80519653-ef8fa880-89a5-11ea-8aa3-2d2d819518e7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xbrRwm7V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80519653-ef8fa880-89a5-11ea-8aa3-2d2d819518e7.png" alt="Screenshot 2020-04-27 at 6 17 26 PM"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Pipeline steps
&lt;/h3&gt;

&lt;p&gt;In the last section, the GitHub repository was connected to the Jenkins server. Every time there is a new update (in our case, only push events), the Jenkins GitHub webhook is notified.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new freestyle project and name it "build" (or anything you like). &lt;/li&gt;
&lt;li&gt;It's always better to give a meaningful description, so other team members can understand it. &lt;/li&gt;
&lt;li&gt;Under source code management, select Git and enter the GitHub repository URL. Provide credentials if it is a private repository (which is preferable anyway).&lt;/li&gt;
&lt;li&gt;In build triggers, select "GitHub hook trigger for GITScm polling". It will trigger a build of this Jenkins project any time the Jenkins webhook receives a push event from GitHub.&lt;/li&gt;
&lt;li&gt;Under the build section, select "Execute shell" and enter the commands to build your project. For now, just put a placeholder:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt; echo "building"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;This completes the build stage of our pipeline. Save and exit the configuration.&lt;/p&gt;

&lt;p&gt;Finally, everything is set up and ready to test. Let's test the first stage of the pipeline by pushing a commit to the corresponding GitHub repo.&lt;/p&gt;

&lt;p&gt;As soon as the commit is pushed, GitHub will post a push event to the Jenkins webhook. That triggers the build project created above. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TEonl7Zk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80521959-772ae680-89a9-11ea-9e51-4931c1e538f5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TEonl7Zk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80521959-772ae680-89a9-11ea-9e51-4931c1e538f5.png" alt="Screenshot 2020-04-28 at 11 38 54 PM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check the console output and confirm that our statement "building" appears on stdout.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MV5TTDtk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80522756-b3ab1200-89aa-11ea-8fac-bdc5a2002661.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MV5TTDtk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80522756-b3ab1200-89aa-11ea-8fac-bdc5a2002661.png" alt="Screenshot 2020-04-28 at 11 47 08 PM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that nothing was specified in the post-build stage of the Jenkins build project. The build stage above completed, and on success the pipeline should proceed to the testing stage.&lt;/p&gt;
&lt;h3&gt;
  
  
  Install Docker
&lt;/h3&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo amazon-linux-extras install docker
sudo service docker start
sudo usermod -a -G docker ec2-user
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4N7yueia--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80628616-3d241800-8a6f-11ea-9731-a58741b5bf99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4N7yueia--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80628616-3d241800-8a6f-11ea-9731-a58741b5bf99.png" alt="docker running"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If Jenkins is getting the error "dial unix /var/run/docker.sock: connect: permission denied", it is because the jenkins user does not have permission to access docker.sock. There are two workarounds for this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Change the file permissions (not recommended): change the access permissions of the socket file
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   chmod 777 /var/run/docker.sock
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Using groups (recommended): add the jenkins user to the docker group so it can access the socket. &lt;/li&gt;
&lt;/ol&gt;
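&lt;p&gt;A sketch of the group-based fix, assuming the Jenkins service runs as the default jenkins user (the services are restarted so the new group membership takes effect):&lt;/p&gt;

```shell
# add the jenkins user to the docker group
sudo usermod -aG docker jenkins

# restart docker and jenkins so the membership is picked up
sudo systemctl restart docker
sudo systemctl restart jenkins
```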

&lt;p&gt;After that, configure the build stage of your Jenkins project. In the execute-shell commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build desiredname -t .
docker push dockerhubname/desiredname
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SPDoBE-5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80631254-1962d100-8a73-11ea-8cb0-8cd4432cb87c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SPDoBE-5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80631254-1962d100-8a73-11ea-8cb0-8cd4432cb87c.png" alt="Screenshot 2020-04-29 at 11 41 12 PM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Final run
&lt;/h3&gt;

&lt;p&gt;Everything is set up, and it's time to do a test run. Push a commit to the GitHub repo and the pipeline will start running. At the end, the desired result can be cross-checked in the console output.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SAJ_ni23--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80631707-c9d0d500-8a73-11ea-8eca-836a440fed87.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SAJ_ni23--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/23367724/80631707-c9d0d500-8a73-11ea-8eca-836a440fed87.png" alt="Screenshot 2020-04-29 at 11 47 08 PM"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>opensource</category>
      <category>github</category>
    </item>
    <item>
      <title>var vs let vs const</title>
      <dc:creator>vaibhav aggarwal</dc:creator>
      <pubDate>Fri, 05 Jun 2020 18:20:45 +0000</pubDate>
      <link>https://forem.com/alakazam03/var-vs-let-vs-const-4ki1</link>
      <guid>https://forem.com/alakazam03/var-vs-let-vs-const-4ki1</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;var, let and const are the ways to declare a new variable in JavaScript. Before ES2015 (ES6), only var was available, and it provided limited scoping capabilities. let and const were introduced in ES6. &lt;/p&gt;

&lt;p&gt;There are two scopes in JS: global scope and function scope. A globally scoped variable is accessible everywhere, whereas a function-scoped variable is accessible only inside the function that declares it. &lt;/p&gt;
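&lt;p&gt;A minimal sketch of the two scopes (the variable names are illustrative):&lt;/p&gt;

```javascript
var globalVar = "visible everywhere"; // global scope

function demo() {
  var localVar = "only inside demo"; // function scope
  return globalVar + " / " + localVar;
}

console.log(demo());          // both variables are reachable inside demo
console.log(typeof localVar); // "undefined": localVar does not exist outside demo
```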

&lt;p&gt;"In JavaScript, variables are initialized with the value of undefined when they are created.". The JavaScript interpreter will assign variable declarations a default value of undefined during what's called the "Creation" phase.&lt;/p&gt;

&lt;h4&gt;
  
  
  var
&lt;/h4&gt;

&lt;p&gt;For var, it does not matter where it is first declared inside the function. The creation phase happens before anything else, and a var declaration is assigned the value undefined until it is initialized. (Think of every var in the function being hoisted to the first line and declared there, unassigned.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function app() {
  console.log(declare); //undefined
  console.log(i); //undefined

  var declare;
  declare  = "initialize";

  for(var i = 0; i &amp;lt; 5; i++){
    var sum = i;
  }

  console.log(declare); //initialize
  console.log(i); //5
  console.log(sum); //4
}

app();

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Notice that declare was given the default value undefined and is accessible even before its declaration. The variables i and sum are accessible outside the loop because var is function-scoped, not block-scoped. (Remember: every var in the function is hoisted to the first line.)&lt;/p&gt;

&lt;p&gt;Also, I do not think it is good practice to access a variable before declaring it, as that can lead to subtle issues.&lt;/p&gt;

&lt;p&gt;To solve this problem, let and const were introduced in ES6. &lt;/p&gt;

&lt;h4&gt;
  
  
  let
&lt;/h4&gt;

&lt;p&gt;let is block-scoped, rather than function-scoped like var. Block-scoped, in the simplest terms, means accessible inside the enclosing {} and the nested code below the declaration. Variables declared using let are not accessible before their declaration. Imagine a box starting at the let declaration and ending at the corresponding closing bracket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function app() {
  console.log(declare); //undefined
  console.log(i); //ReferenceError: i is not defined

  var declare;
  declare  = "initialize";

  for(let i = 0; i &amp;lt; 5; i++){
    let sum = i;
  }

  console.log(declare); //initialize
  // console.log(i);

}

app();
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When we try to access the variable i before declaring it, a ReferenceError is thrown, as opposed to the undefined we saw with variables declared using var. This difference arises from the difference in scope between let and var.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function app() {
  console.log(declare); //undefined

  var declare;
  declare  = "initialize";

  for(let i = 0; i &amp;lt; 5; i++){
    let sum = i;
  }

  console.log(declare); //initialize
  console.log(i); //ReferenceError: i is not defined

}

app();
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Notice that the variable i is accessible only inside the for loop. Outside its block, accessing it throws a ReferenceError.&lt;/p&gt;
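&lt;p&gt;The reason early access to a let variable throws is that the binding is hoisted to the top of its block but left uninitialized until its declaration line (often called the temporal dead zone). A small sketch:&lt;/p&gt;

```javascript
function tdzDemo() {
  try {
    console.log(x); // x is hoisted to the top of the block, but uninitialized
  } catch (err) {
    console.log(err.name); // "ReferenceError", unlike the undefined we get with var
  }
  let x = "initialized";
  return x;
}

console.log(tdzDemo()); // "initialized"
```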

&lt;h4&gt;
  
  
  const
&lt;/h4&gt;

&lt;p&gt;const is very similar to let, the only difference being that it cannot be reassigned.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let firstName = "vaibhav";
const secondName = "aggarwal";

firstName = "changeMyName";
secondName = "youCantChangeMyName"; //TypeError: Assignment to constant variable.

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Don't confuse reassignment with mutation. A const object's properties can still be changed; the only restriction is on reassigning the binding itself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const name = {
  firstName: "vaibhav",
  secondName: "aggarwal"
}

console.log(name);

name.firstName = "changeMyName";

console.log(name); 
// {
//   firstName: "changeMyName",
//   secondName: "aggarwal"
// }

name = {}; //TypeError: Assignment to constant variable.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;There are many important concepts involved here, such as scope and hoisting. I have tried to explain them in simple terms for better understanding. &lt;/p&gt;

&lt;h3&gt;
  
  
  Reference
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide"&gt;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>javascript</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Setting up Jenkins on AWS</title>
      <dc:creator>vaibhav aggarwal</dc:creator>
      <pubDate>Thu, 04 Jun 2020 18:16:27 +0000</pubDate>
      <link>https://forem.com/alakazam03/setting-up-jenkins-on-aws-21pf</link>
      <guid>https://forem.com/alakazam03/setting-up-jenkins-on-aws-21pf</guid>
      <description>&lt;p&gt;This tutorial walks you through the process of deploying a Jenkins application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You will launch an EC2 instance (CentOS), then install and configure Jenkins.&lt;/li&gt;
&lt;li&gt;Jenkins can automatically spin up build agent instances if build capacity
on the instance needs to be augmented. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Launch and Setup an EC2 instance
&lt;/h3&gt;

&lt;p&gt;First, launch an EC2 instance on which to install and run Jenkins. The steps involved are: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Launch an EC2 instance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F80366539-0ef3dc00-88a7-11ea-9ce6-7cab66d8f8ea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F80366539-0ef3dc00-88a7-11ea-9ce6-7cab66d8f8ea.png" alt="Screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add a security group.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F80366758-6b56fb80-88a7-11ea-996b-40755fcf14f4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F80366758-6b56fb80-88a7-11ea-996b-40755fcf14f4.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;SSH into the instance: select the launched instance, proceed to Connect, and follow the instructions. Make sure you are in the same folder as your downloaded .pem key.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    chmod 400 jenkins-ec2.pem //if you named the instance same
    ssh -i "jenkins-ec2.pem" ec2-user@ec2-13-127-31-75.ap-south-1.compute.amazonaws.com //please check your details
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Install and run Jenkins
&lt;/h3&gt;

&lt;p&gt;Here, we will connect to the instance and run Jenkins after all the necessary installation.&lt;br&gt;
Steps covered:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Connect to your instance.&lt;/li&gt;
&lt;li&gt;Download and install Jenkins.&lt;/li&gt;
&lt;li&gt;Configure Jenkins.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After connecting to your EC2 instance via SSH, Jenkins needs to be downloaded and installed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    sudo yum update -y //update system
    sudo yum install java-1.8.0-openjdk-devel //install jdk
    curl --silent --location http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo | sudo tee /etc/yum.repos.d/jenkins.repo //get GPG key
    sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key //add repo to system 
    sudo yum install jenkins //install jenkins

    // start the service
    sudo systemctl start jenkins
    //check service status
    systemctl status jenkins.service

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On checking the Jenkins service status, output like this is expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F80367483-d0f7b780-88a8-11ea-84be-81e49ca1c995.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F80367483-d0f7b780-88a8-11ea-84be-81e49ca1c995.png" alt="jenkins running screenshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Voila! Jenkins is installed and running on port 8080 of your instance. &lt;br&gt;
But remember, the outside world has no access to port 8080 yet. Security groups protect running instances from outside attack. Let's go to the attached security group and edit the inbound rules. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F80371669-e3c1ba80-88af-11ea-8c8c-4253f054dbae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F80371669-e3c1ba80-88af-11ea-8c8c-4253f054dbae.png" alt="Screenshot 2020-04-27 at 5 30 36 PM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open a browser window and access your Jenkins instance running at your_ec2_public_ip:8080.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Retrieve your initial administrator password for Jenkins from the EC2 terminal:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    sudo cat /var/lib/jenkins/secrets/initialAdminPassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Select the plugins to install and wait for the installation to finish. It's almost done.
Create a user other than admin if needed. The "Jenkins is ready" screen will appear. We are all set to automate our CI process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F80369152-6b58fa80-88ab-11ea-965d-39a07817094f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F80369152-6b58fa80-88ab-11ea-965d-39a07817094f.png" alt="create-jenkins-user"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to Manage Jenkins to install any additional plugins. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F80369405-d73b6300-88ab-11ea-9fea-2b299922542d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F80369405-d73b6300-88ab-11ea-9fea-2b299922542d.png" alt="Screenshot 2020-04-27 at 5 23 19 PM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jenkins is up and running on AWS.&lt;br&gt;
Please note: for security reasons, never expose your public IP directly. Always run Jenkins behind a reverse proxy.&lt;/p&gt;
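&lt;p&gt;As a sketch, a reverse proxy in front of Jenkins might look like the following nginx server block (assuming nginx runs on the same instance; jenkins.example.com is a placeholder domain):&lt;/p&gt;

```nginx
# /etc/nginx/conf.d/jenkins.conf -- hypothetical minimal reverse proxy
server {
    listen 80;
    server_name jenkins.example.com;   # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:8080;   # Jenkins listens locally on 8080
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```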

&lt;p&gt;In the next article, we will create a Jenkins project and integrate it with GitHub. That will be the first step in creating a continuous integration pipeline.&lt;/p&gt;

&lt;p&gt;P.S. "Above written are my views. I am always learning and exploring new things. Please comment and criticize wherever possible. Stay Happy."&lt;/p&gt;

&lt;h3&gt;
  
  
  Helpful links
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://jenkins.io/" rel="noopener noreferrer"&gt;https://jenkins.io/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://linuxize.com/post/how-to-install-git-on-centos-7/" rel="noopener noreferrer"&gt;github on centOS&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Sending emails in  NodeJs with Nodemailer</title>
      <dc:creator>vaibhav aggarwal</dc:creator>
      <pubDate>Thu, 04 Jun 2020 12:45:43 +0000</pubDate>
      <link>https://forem.com/alakazam03/sending-emails-in-nodejs-with-nodemailer-1jn1</link>
      <guid>https://forem.com/alakazam03/sending-emails-in-nodejs-with-nodemailer-1jn1</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;When building a product, communication with customers is one of the most important factors. Formal, recurring communication, such as upcoming events and newsletters, happens through email servers. There are also cases where an email needs to be sent based on a specific action performed by the customer.&lt;/p&gt;

&lt;p&gt;Consider these specific user actions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Registering for the product&lt;/li&gt;
&lt;li&gt;Buying or availing a service&lt;/li&gt;
&lt;li&gt;Transaction updates&lt;/li&gt;
&lt;li&gt;Queries related to the product &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Hence, emails need to be sent, triggered through some APIs. An email server needs to be contacted to carry out the communication. An SMTP (Simple Mail Transfer Protocol) server is an application whose primary purpose is to send, receive, and/or relay outgoing mail between email senders and receivers. Read more about &lt;a href="https://sendgrid.com/blog/what-is-an-smtp-server/" rel="noopener noreferrer"&gt;SMTP&lt;/a&gt; servers.&lt;/p&gt;

&lt;p&gt;After setting up a server (an article for another day), a transporter is needed to send emails through it.&lt;/p&gt;

&lt;p&gt;Nodemailer is a zero-dependency module for Node.js applications that makes sending email easy. It is flexible, supporting SMTP and other transport mechanisms, and can be configured for AWS SES, SendGrid, and other SMTP providers. Read more about Nodemailer at &lt;a href="//nodemailer.com"&gt;nodemailer.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Some of the features of nodemailer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zero dependency on other modules&lt;/li&gt;
&lt;li&gt;Secure email delivery with TLS and DKIM email authentication&lt;/li&gt;
&lt;li&gt;HTML content and embedded image attachments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's integrate Nodemailer into our project and start sending some emails.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  npm install nodemailer 

  // If to include in package.json
  npm install --save nodemailer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create a transporter
&lt;/h3&gt;

&lt;p&gt;SMTP is the most common transport for sending mail, and it makes integration very simple: set the host, port, authentication details, and method.&lt;/p&gt;

&lt;p&gt;SMTP can be set up with various services such as AWS SES, Gmail, etc. But since our main focus here is using Nodemailer, let's use Mailtrap. Mailtrap acts as an SMTP server and lets you debug emails in a pre-production environment.&lt;/p&gt;

&lt;p&gt;Go to &lt;a href="//www.mailtrap.io"&gt;mailtrap.io&lt;/a&gt; and sign up; it takes a second. On opening, you will see the page shown in the figure below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F83756364-97c31c00-a68c-11ea-8d71-e1f312f68d8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F83756364-97c31c00-a68c-11ea-8d71-e1f312f68d8a.png" alt="Figure 1"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 1. Mailtrap account to set up and use as a pre-production SMTP server&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now that we have SMTP credentials, let's set up our transporter for Nodemailer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;//server.js&lt;/span&gt;
&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;transport&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;nodemailer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createTransport&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;smtp.mailtrap.io&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2525&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;2a591f5397e74b&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;pass&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;c8115f6368ceb0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nodemailer uses the transporter to send mail. The next step is setting up the email configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;//server.js&lt;/span&gt;
&lt;span class="cm"&gt;/**
 * Sends mail through aws-ses client
 * @param options Contains emails recipient, subject and text
 */&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;send&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fromName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; &amp;lt;&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fromEmail&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;&amp;gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userEmail&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;subject&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;subject&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;info&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;transporter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sendMail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;info&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;messageId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;info&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One thing to notice is that the text field of the message object holds plain text, whereas the emails you usually receive are formatted HTML rather than just plain text. As mentioned above, nodemailer also provides options to send HTML bodies and image attachments. (I will write another article covering all the features of nodemailer and how to send beautiful HTML-based emails.)&lt;/p&gt;
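&lt;p&gt;The plain-text message above can be upgraded using nodemailer's html and attachments fields. Here is a minimal sketch; the addresses and the file path are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// A message with an HTML body and an image attachment (illustrative values)
const message = {
  from: 'Auffr &amp;lt;no-reply@example.com&amp;gt;',
  to: 'user@example.com',
  subject: 'Welcome',
  text: 'We are excited to have you',   // plain-text fallback
  html: '&amp;lt;h1&amp;gt;Welcome&amp;lt;/h1&amp;gt;',  // rendered by most mail clients
  attachments: [{ filename: 'logo.png', path: './logo.png' }]
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;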

&lt;p&gt;For the to field, use an email address from mailtrap.io, which provides temporary inboxes for testing. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F83756333-8bd75a00-a68c-11ea-87f9-9556b6a16ae4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F83756333-8bd75a00-a68c-11ea-87f9-9556b6a16ae4.png" alt="Figure 2"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 2. Temporary Email address for testing by mailtrap&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;//server.js&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;nodemailer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;nodemailer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Router&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/email&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
  &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;sucess&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;
   &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;
   &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;transporter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;nodemailer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createTransport&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;smtp.mailtrap.io&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2525&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;2a591f5397e74b&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;pass&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;c8115f6368ceb0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="cm"&gt;/*
Use in your req.body
 const options = {
   userEmail: &amp;lt;mailtrapEmail&amp;gt;,
   subject: 'Welcome to Auffr',
   message: 'We are excited to have you in the family'
 }
*/&lt;/span&gt;
&lt;span class="cm"&gt;/**
 * Sends mail through aws-ses client
 * @param options Contains emails recipient, subject and text
 */&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;send&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fromName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; &amp;lt;&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fromEmail&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;&amp;gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userEmail&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;subject&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;subject&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;info&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;transporter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sendMail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;info&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;messageId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;info&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createServer&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;statusCode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setHeader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;text/plain&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;end&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;This is the Main App!&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Server running at http://localhost:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On running server.js, the Express server will be up and running at &lt;a href="http://localhost:3000/" rel="noopener noreferrer"&gt;http://localhost:3000/&lt;/a&gt;. As mentioned above, a POST request to the /email endpoint will send an email to the mailtrap account. &lt;/p&gt;

&lt;p&gt;I used Postman to make a simple POST request to &lt;a href="http://localhost:3000/email" rel="noopener noreferrer"&gt;http://localhost:3000/email&lt;/a&gt;. Voila, the email is received and visible in our mailtrap inbox.&lt;/p&gt;
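&lt;p&gt;The same call can be made from the command line. A curl equivalent of the Postman request, with placeholder values, looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl -X POST http://localhost:3000/email \
  -H "Content-Type: application/json" \
  -d '{"fromName": "Auffr", "fromEmail": "no-reply@example.com", "userEmail": "&amp;lt;mailtrapEmail&amp;gt;", "subject": "Welcome to Auffr", "message": "We are excited to have you in the family"}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;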

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F83756683-04d6b180-a68d-11ea-90d3-e1791c9acbd7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23367724%2F83756683-04d6b180-a68d-11ea-90d3-e1791c9acbd7.png" alt="Figure 3"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure 3. Email received through /email endpoint&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the next article, I will share how nodemailer can achieve much more than discussed here. &lt;/p&gt;

&lt;p&gt;"I am not an expert, please comment and correct me if I am wrong anywhere. Always love to have a discussion."&lt;/p&gt;

</description>
      <category>node</category>
      <category>smtp</category>
      <category>aws</category>
      <category>javascript</category>
    </item>
    <item>
      <title>How to Analyse Big Data Using the ELK Stack</title>
      <dc:creator>vaibhav aggarwal</dc:creator>
      <pubDate>Tue, 03 Mar 2020 12:08:08 +0000</pubDate>
      <link>https://forem.com/alakazam03/how-to-analyse-big-data-using-the-elk-stack-1n82</link>
      <guid>https://forem.com/alakazam03/how-to-analyse-big-data-using-the-elk-stack-1n82</guid>
      <description>&lt;p&gt;This article introduces users to the ELK stack. Apart from discussing how it helps in better data analytics, there are detailed explanations about the internal workings of Elasticsearch, as well as insights into how Logstash and Kibana complement Elasticsearch’s capabilities. The article was originally published in Open Source For You, Asia’s largest open-source magazine: &lt;a href="https://opensourceforu.com/2020/02/how-to-analyse-big-data-using-the-elk-stack/"&gt;https://opensourceforu.com/2020/02/how-to-analyse-big-data-using-the-elk-stack/&lt;/a&gt;&lt;br&gt;
Big Data refers to large data sets that combine structured, semi-structured and non-structured data from various sources. Big Data is not just big in size but can also provide various insights about running and improving businesses when it is analysed. It can give insights on user behavioural patterns, changing trends, a system’s health or any anomalies. The analysis helps improve customer integration, detect abnormalities and mitigate fraud.&lt;br&gt;
Increased Internet penetration and a mobile phone in every hand are generating loads of user data for companies. This data is of high velocity, volume and variety. A lot of it comes from mobiles, media devices, traffic surveillance systems, messages, etc. This data needs to be analysed in real time to provide users with consumable insights.&lt;/p&gt;

&lt;p&gt;Imagine an e-commerce company that aims to provide high-quality customer service. When users search for a product on the website, they want the best-suited results to be shown. For better user experience, various other factors like user age, brand, budget and location are added to the user search.&lt;br&gt;
Relational databases have predefined schema to store structured data, whereas data being generated today can be non-structured. The real power of data can be harnessed only when we are able to aggregate the data collected from various sources. Data can be structured (has a defined schema) or unstructured (has no defined schema). Examples like user number, location and email can be stored in an SQL database as structured. User information, such as payment mode, wish lists and previous orders, is stored in an unstructured format. To provide the best user experience, users should be shown recommendations and search results based on an analysis of the data collected about them — for example, combining users’ product search with their purchase history and budget range. No matter how complex data is, it needs to be combined and analysed properly for the best decision making.&lt;/p&gt;

&lt;p&gt;Fig 1. Sample Logstash config file&lt;br&gt;
ELK is an acronym for three open-source projects. The first is Elasticsearch, which is the distributed search and analytics engine. Then comes Logstash, which is the data pipeline that helps ingest data from multiple sources, transforms it and sends it to data stashes like Elasticsearch. And finally, Kibana lets users visualise data stored in Elasticsearch for better understanding and business interactions. ELK is easy to set up, scalable and efficient and provides near real-time search analytics. Its capabilities include data feature enhancing, visual analytics and alerting mechanisms with the help of Logstash and Kibana.&lt;br&gt;
A hands-on guide on the ELK stack can be found at github.com/ELK-Tutorial.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Logstash&lt;br&gt;
Logstash is a server-side data processing pipeline used to ingest, parse and transform data before outputting it to Elasticsearch. It provides various filters that enhance insights and human readability.&lt;br&gt;
Logstash ingests structured and unstructured data from various sources, such as logs, system metrics and Web applications, using the available input plugins. It transforms and combines the data, using the many available filters, into a common format for more powerful analysis and business value.&lt;br&gt;
Filter examples:&lt;br&gt;
grok: derives structure from unstructured data&lt;br&gt;
geo-ip: extracts location information from IP addresses&lt;br&gt;
date: extracts date and time from standard timestamp formats&lt;br&gt;
Logstash structures and enhances the data collected from different sources, and outputs it into Elasticsearch, a NoSQL database that can store data and provide real-time search over that multi-valued data. Logstash has a variety of outputs that let you route transformed data wherever you want it, flexibly.&lt;/li&gt;
&lt;/ol&gt;
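&lt;p&gt;The sample config file referred to in Fig 1 can be sketched roughly as follows. The plugin names (file, grok, geoip, date, elasticsearch) are real; the log path and timestamp pattern are placeholders for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;# logstash.conf (illustrative sketch)
input {
  file { path =&amp;gt; "/var/log/nginx/access.log" }
}
filter {
  grok  { match =&amp;gt; { "message" =&amp;gt; "%{COMBINEDAPACHELOG}" } }   # derive structure
  geoip { source =&amp;gt; "clientip" }                                 # enrich with location info
  date  { match =&amp;gt; [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ] }   # parse the event time
}
output {
  elasticsearch { hosts =&amp;gt; ["localhost:9200"] }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;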

&lt;p&gt;Fig 2. Data after enhancing through Logstash&lt;/p&gt;

&lt;p&gt;Fig 3. Data after enhancing through Logstash&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Elasticsearch
Elasticsearch is a distributed open source search engine released under the Apache licence. It is a REST API layer over Apache’s Lucene (&lt;a href="https://lucene.apache.org/"&gt;https://lucene.apache.org/&lt;/a&gt;). It provides horizontal scalability, reliability and real-time search through documents. Elasticsearch is able to search quickly because it uses indexing to go over documents. Lucene (the storage engine of Elasticsearch) stores the data using a technique called inverted indexing. This storage architecture lets Elasticsearch provide functionalities like aggregation (similar to GROUP BY in SQL), which helps when doing complex queries on data.
Indexing
Indexing is jargon for inserting data in Elasticsearch, which stores data as documents contained under an index. The index can be thought of as a table in SQL. An Elasticsearch cluster can have multiple indices, which in turn contain multiple types. These types then have multiple documents. A document is similar to a row in SQL. Documents can have multiple fields that provide information about a data object.
Here is an example:
A POST statement is used for creating an index.
Here, iiitb is an index.
‘Student’ is one of the types under iiitb. Others are professors, staff and security.
Inside the brackets, we have a document under the type ‘student’.
Looking at the example above, we observe that no schema work was needed beforehand, such as creating the index or defining field types as in an RDBMS. Elasticsearch stores this data as documents using inverted indexing.
Inverted indexing
Elasticsearch uses an inverted index to achieve a fast full-text search over documents. An inverted index consists of all the unique words in the document, and each unique word is mapped to the list of documents containing it.&lt;/li&gt;
&lt;/ol&gt;
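&lt;p&gt;The example described in the list above could look like the following sketch, using Elasticsearch's REST API (the document's field values are invented for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;POST /iiitb/student
{
  "name": "A. Student",
  "programme": "M.Tech",
  "year": 2019
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Elasticsearch creates the iiitb index and infers the field types on the fly, which is why no schema had to be defined beforehand.&lt;/p&gt;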

&lt;p&gt;Fig 4. Data stored by Elasticsearch before any indexing stage&lt;br&gt;
Table 1 considers three documents and six words. Each column shows if the document contains the corresponding word. The way in which Elasticsearch stores this data is called inverted indexing.&lt;br&gt;
The inverted index may hold a lot more information than just the list of the documents containing a unique term. It may have a count of documents, the number of times the term appears in the document, the length of the document, and many other valuable insights.&lt;br&gt;
Indexing involves tokenisation and normalisation, which is essentially an analysis of the document. The data from each field is broken into terms, and then normalised into a standard form to improve ‘searchability’.&lt;br&gt;
Token filters used in the analysis to increase search efficiency are listed below.&lt;br&gt;
Stop words: Removing frequently occurring words like the, is, they, etc.&lt;br&gt;
Lower casing: All words are changed into a lower case for better results.&lt;br&gt;
Stemming: This involves using only the root word. The extra tense-defining part is removed; e.g., swimming and swimmer can be stemmed to swim.&lt;br&gt;
Synonyms: This filter merges the occurrences of words with the same meaning.&lt;br&gt;
A separate word known as the token is mapped to the documents carrying that word. Therefore, when we are searching for a document with a particular word or phrase in our data, we don’t have to search the whole document. Instead, indexes will directly give the document associated with the desired word or phrase.&lt;br&gt;
Indexed documents are first sent into a buffer, where they wait for a certain period before being stored as segments. Segment data is divided and stored in small blocks called shards. This reduces the time taken for a document to become available for search after indexing. In versions before 6.7.0, Elasticsearch used to write directly to disk with the fsync() function, which is very costly. In Elasticsearch 6.7.0, the latest version at the time of writing, this was changed to a file cache system that buffers data first and then commits it to disk, improving performance.&lt;/p&gt;
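&lt;p&gt;The tokenisation and filtering steps above can be sketched in a few lines of JavaScript. This is a toy illustration of the idea, not Elasticsearch's actual implementation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Toy inverted index: lowercase each term, drop stop words,
// then map every unique term to the list of documents containing it.
const STOP_WORDS = new Set(['the', 'is', 'they', 'a', 'an']);

function tokenise(text) {
  return text
    .toLowerCase()
    .split(/\W+/)
    .filter(Boolean)                         // drop empty tokens
    .filter(term =&amp;gt; !STOP_WORDS.has(term)); // stop-word filter
}

function buildIndex(docs) {
  const index = {};
  docs.forEach((text, id) =&amp;gt; {
    for (const term of new Set(tokenise(text))) {
      (index[term] = index[term] || []).push(id);
    }
  });
  return index;
}

// buildIndex(['The cat sat', 'A cat ran'])
//   -&amp;gt; { cat: [ 0, 1 ], sat: [ 0 ], ran: [ 1 ] }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;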

&lt;p&gt;Fig 5. Data indexing optimisation stage&lt;br&gt;
As can be seen in Table 2, after analysis, stop words (like ‘the’) are removed, synonyms are combined (like ‘sad’ and ‘upset’) and stemming is done (‘swimming’ and ‘swimmer’ to ‘swim’).&lt;br&gt;
A shard is a low-level worker unit that holds just a slice of all the data. An index, as mentioned above, is just a logical namespace that points to one or more shards. Elasticsearch distributes the given data across the cluster by storing it as shards, which are then allocated to the nodes present in the cluster. Elasticsearch needs to find the shard where it has saved a particular document, so the process must be deterministic. It can be described as follows:&lt;br&gt;
shard number = Hash( _id ) % number_of_primary_shards&lt;br&gt;
Once the shard is determined, we have a way to find the location of any document. This means that any node in the cluster knows about every document and can forward a call to the correct node.&lt;/p&gt;
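&lt;p&gt;The routing formula above can be sketched as follows. Any stable hash works for the illustration; Elasticsearch itself uses a murmur3 hash of the document's routing value:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Deterministic routing: the same _id always lands on the same shard,
// so any node can compute the location of any document.
const NUMBER_OF_PRIMARY_SHARDS = 2;

function hash(id) {            // toy stable string hash, not murmur3
  let h = 0;
  for (const ch of id) {
    h = (h * 31 + ch.charCodeAt(0)) % 1000000007;
  }
  return h;
}

function shardFor(id) {
  return hash(id) % NUMBER_OF_PRIMARY_SHARDS;
}

// shardFor('doc-42') always returns the same shard number, which is
// why the primary shard count cannot change after index creation.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;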

&lt;p&gt;Fig 6. Data storage and fetching cycle&lt;br&gt;
Under the hood&lt;br&gt;
The Lucene engine stores data in shards, i.e., data blocks. A node can hold many shards, but on its own this creates a single point of failure. Elasticsearch handles failure by replicating shards across the cluster. We have to specify the number of primary shards when the index is created; this fixes the number of primary shards and helps coordinate finding the appropriate nodes.&lt;br&gt;
Let us say that we have two primary indexes and a two-node architecture.&lt;br&gt;
The master node finds the shard containing the required documents (here, Node 2) and forwards the request to the corresponding node, as shown in Figure 6.&lt;br&gt;
The effect of replication on queries&lt;br&gt;
Insert: We send the request to the master node (each cluster has one), which locates the primary shard for the document with the help of _id. The document is saved to the primary shard and, if successful, the request is forwarded to the replica shards in parallel.&lt;br&gt;
Search: The related shard is retrieved based on the formula above. Round robin scheduling between the primary and replica nodes chooses one to carry out the current query, which balances the load on the search engine.&lt;br&gt;
Delete: The delete request follows similar steps to inserting. The master node finds the associated primary shard and deletes the document from there; afterwards, it is also deleted from the replica shards.&lt;/p&gt;

&lt;p&gt;Fig 7. Stored data as an index in Elasticsearch&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Kibana
Kibana is a tool to visualise Elasticsearch data in real-time. It provides a variety of visualisations to get the best meaning out of your data. Dashboards can be created according to use cases, and get updated with new data automatically.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Fig 8. Visualisations based on data stored&lt;br&gt;
Kibana is a visualisation tool built on top of Elasticsearch, which connects to the latter using Elasticsearch’s REST APIs. It helps us understand data better from a non-developer perspective by providing various forms such as tables, rows, charts, geo maps, heat maps, and so on.&lt;br&gt;
Some of the main features of Kibana are:&lt;br&gt;
Leverages the aggregation capabilities of Elasticsearch&lt;br&gt;
Has many predefined visualisations and it is very easy to define new ones using Vega&lt;br&gt;
Can plot complex data like geo-data, time series metrics, relationship graphs, etc&lt;br&gt;
Unsupervised machine learning has been added to detect anomalies in Elasticsearch data&lt;br&gt;
Dashboards can be created and shared as PDFs, PNGs or permission-controlled links&lt;br&gt;
Alerts and notifications can be set up for any specific query&lt;br&gt;
The ELK stack helps users collect data from various sources, enhance it and store it in a self-replicating, distributed manner. Stored data can be accessed in real time and presented to the user in various forms, such as pie charts or geo-mapping, with the help of Kibana. It helps enterprises collect data and serve it to businesses and users in real time.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>bigdata</category>
      <category>elk</category>
    </item>
  </channel>
</rss>
