<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Siben Nayak</title>
    <description>The latest articles on Forem by Siben Nayak (@theawesomenayak).</description>
    <link>https://forem.com/theawesomenayak</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F453688%2F5d2b59fb-9792-4a6a-8bc1-71c10f478dda.jpg</url>
      <title>Forem: Siben Nayak</title>
      <link>https://forem.com/theawesomenayak</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/theawesomenayak"/>
    <language>en</language>
    <item>
      <title>Garbage Collection in Java - What is GC and How it Works in the JVM</title>
      <dc:creator>Siben Nayak</dc:creator>
      <pubDate>Sat, 23 Jan 2021 16:29:38 +0000</pubDate>
      <link>https://forem.com/theawesomenayak/garbage-collection-in-java-what-is-gc-and-how-it-works-in-the-jvm-k71</link>
      <guid>https://forem.com/theawesomenayak/garbage-collection-in-java-what-is-gc-and-how-it-works-in-the-jvm-k71</guid>
      <description>&lt;p&gt;Garbage Collection is the process of reclaiming the runtime unused memory by destroying the unused objects.&lt;/p&gt;

&lt;p&gt;In languages like C and C++, the programmer is responsible for both the creation and destruction of objects. Sometimes the programmer forgets to destroy objects that are no longer needed, and the memory allocated to them is never released. The application’s memory usage keeps growing until there is no memory left to allocate. Such applications suffer from “memory leaks”.&lt;/p&gt;

&lt;p&gt;Java Garbage Collection is the process by which Java programs perform automatic memory management. Java programs compile into bytecode that can be run on a Java Virtual Machine (JVM).&lt;/p&gt;

&lt;p&gt;Garbage collection makes Java memory-efficient: it removes unreferenced objects from heap memory, freeing up space for new objects.&lt;/p&gt;

&lt;p&gt;The Java Virtual Machine has many types of garbage collectors.&lt;/p&gt;

&lt;p&gt;In these videos, I’ve discussed garbage collection in Java, how it works, and the various types of collectors available.&lt;/p&gt;

&lt;p&gt;Please subscribe to the channel and like the video if you find it helpful.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/X1DkoRGVRp4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/4sBhc-pSILs"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>java</category>
      <category>programming</category>
      <category>technology</category>
      <category>software</category>
    </item>
    <item>
      <title>JVM Tutorial - Java Virtual Machine Architecture Explained for Beginners</title>
      <dc:creator>Siben Nayak</dc:creator>
      <pubDate>Sun, 17 Jan 2021 18:08:41 +0000</pubDate>
      <link>https://forem.com/theawesomenayak/jvm-tutorial-java-virtual-machine-architecture-explained-for-beginners-5hlj</link>
      <guid>https://forem.com/theawesomenayak/jvm-tutorial-java-virtual-machine-architecture-explained-for-beginners-5hlj</guid>
      <description>&lt;p&gt;Whether you have used Java to develop programs or not, you might have heard about the Java Virtual Machine (JVM) at some point or another.&lt;/p&gt;

&lt;p&gt;JVM is the core of the Java ecosystem, and makes it possible for Java-based software programs to follow the “write once, run anywhere” approach. You can write Java code on one machine, and run it on any other machine using the JVM.&lt;/p&gt;

&lt;p&gt;JVM was initially designed to support only Java. However, over time, many other languages such as Scala, Kotlin, and Groovy were adopted on the Java platform. All of these languages are collectively known as JVM languages.&lt;/p&gt;

&lt;p&gt;In this video, I’ve discussed the JVM, how it works, and the various components that it is made of. Please subscribe to the channel and like the video if you find it helpful.&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/jnpuRvRdTgI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>java</category>
      <category>jvm</category>
      <category>programming</category>
      <category>software</category>
    </item>
    <item>
      <title>How to Build a Serverless Application Using AWS Chalice</title>
      <dc:creator>Siben Nayak</dc:creator>
      <pubDate>Tue, 20 Oct 2020 18:27:32 +0000</pubDate>
      <link>https://forem.com/theawesomenayak/how-to-build-a-serverless-application-using-aws-chalice-3360</link>
      <guid>https://forem.com/theawesomenayak/how-to-build-a-serverless-application-using-aws-chalice-3360</guid>
      <description>&lt;p&gt;I recently came across AWS Chalice and was fascinated by the simplicity and usability it offers.&lt;/p&gt;

&lt;p&gt;AWS Chalice is a serverless framework that allows you to build serverless applications using Python, and deploy them on AWS using Amazon API Gateway and AWS Lambda.&lt;/p&gt;

&lt;p&gt;I decided to play around with it and was actually able to create and deploy a sample REST API on AWS within a few minutes.&lt;/p&gt;

&lt;p&gt;In this article, I will walk you through the steps required to build and deploy a serverless application that gets the latest news from Google News using Chalice.&lt;/p&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;p&gt;This tutorial requires an AWS account. If you don’t have one already, go ahead and &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/"&gt;create one&lt;/a&gt;. Our application is going to use only the free-tier resources, so cost shouldn’t be an issue.&lt;/p&gt;

&lt;p&gt;You also need to configure security by creating users and roles for your access.&lt;/p&gt;

&lt;h1&gt;
  
  
  How to Configure AWS Credentials
&lt;/h1&gt;

&lt;p&gt;Chalice uses the AWS Command Line Interface (CLI) behind the scenes to deploy the project. If you haven’t used the AWS CLI before, you can install it by following the guidelines &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once installed, you need to &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html"&gt;configure&lt;/a&gt; your AWS CLI to use the credentials from your AWS account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ aws configure                       
AWS Access Key ID [****************OI3G]:
AWS Secret Access Key [****************weRu]:
Default region name [us-west-2]:
Default output format [None]:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h1&gt;
  
  
  How to Install Chalice
&lt;/h1&gt;

&lt;p&gt;Next, you need to install Chalice. We will be using Python 3 in this tutorial, but you can use any version of Python supported by AWS Lambda.&lt;/p&gt;
&lt;h2&gt;
  
  
  Verify Python Installation
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ python3 --version
Python 3.8.6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Install Chalice
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ python3 -m pip install chalice
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Verify Chalice Installation
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ chalice --version
chalice 1.20.0, python 3.8.6, darwin 19.6.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h1&gt;
  
  
  How to Create a Project
&lt;/h1&gt;

&lt;p&gt;Next, run the &lt;code&gt;chalice new-project&lt;/code&gt; command to create a new project.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ chalice new-project daily-news
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This will create a &lt;code&gt;daily-news&lt;/code&gt; folder in your current directory. You can see that Chalice has created several files in this folder. We'll be working with the &lt;code&gt;app.py&lt;/code&gt; and &lt;code&gt;requirements.txt&lt;/code&gt; files only in this article.&lt;/p&gt;

&lt;p&gt;Let’s take a look at the contents of the &lt;code&gt;app.py&lt;/code&gt; file:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;new-project&lt;/code&gt; command created a sample app named daily-news. It defines a single view, &lt;code&gt;/&lt;/code&gt;, which returns the JSON body &lt;code&gt;{"hello": "world"}&lt;/code&gt; when called. You can now modify this template and add more code to read news from Google.&lt;/p&gt;

&lt;p&gt;We will be using Google’s RSS feed to get our news. Since RSS feeds consist of data in XML format, we will need a Python library called Beautiful Soup for parsing the XML data.&lt;/p&gt;

&lt;p&gt;You can install Beautiful Soup and its XML parsing library using &lt;code&gt;pip&lt;/code&gt;, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ python3 -m pip install bs4
$ python3 -m pip install lxml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Next, add the following imports to &lt;code&gt;app.py&lt;/code&gt;. They bring in &lt;code&gt;urllib&lt;/code&gt; to make HTTP calls and &lt;code&gt;bs4&lt;/code&gt; to parse XML.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
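The imports gist isn't rendered in this feed; going by the description, the added lines are simply:

```python
# HTTP client (standard library) and the XML parser installed above.
from urllib.request import urlopen

from bs4 import BeautifulSoup
```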



&lt;p&gt;Next, you need to add a method to fetch the RSS feed from Google. We will use urllib to make an HTTP call to Google's RSS endpoint and get the response. You can then parse the response to extract the news title and publication date, and create a list of news items.&lt;/p&gt;

&lt;p&gt;To do this, add the following code to your &lt;code&gt;app.py&lt;/code&gt;:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
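Since the gist isn't rendered here, this is a hedged sketch of what such a fetch method could look like (the function and variable names are my own, not from the original gist):

```python
from urllib.request import urlopen

from bs4 import BeautifulSoup

GOOGLE_NEWS_RSS = 'https://news.google.com/rss'


def parse_news(xml_text):
    # Extract {title: publication date} pairs from the RSS payload.
    soup = BeautifulSoup(xml_text, 'xml')
    return {item.find('title').text: item.find('pubDate').text
            for item in soup.find_all('item')}


def fetch_news():
    # Call Google's RSS endpoint and parse the response.
    with urlopen(GOOGLE_NEWS_RSS) as response:
        return parse_news(response.read())
```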


&lt;p&gt;Update the index method in &lt;code&gt;app.py&lt;/code&gt; to invoke this method and return the list of news items as a result.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Note that you installed a few dependencies to make the code work. These dependencies were installed locally, and will not be available to the AWS Lambda container at runtime.&lt;/p&gt;

&lt;p&gt;To make them available to AWS Lambda, you will need to package them along with your code.&lt;/p&gt;

&lt;p&gt;To do that, add the following to the &lt;code&gt;requirements.txt&lt;/code&gt; file. Chalice packs these dependencies as part of your code during build and uploads them as part of the Lambda function.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
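The gist isn't rendered in this feed; given the libraries installed earlier with pip, requirements.txt would list:

```
bs4
lxml
```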


&lt;h1&gt;
  
  
  How to Deploy the Project
&lt;/h1&gt;

&lt;p&gt;Let’s deploy this app. From the &lt;code&gt;daily-news&lt;/code&gt; folder, run the &lt;code&gt;chalice deploy&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oiZ0weNi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/90x1xb5439agxpa4yjqp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oiZ0weNi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/90x1xb5439agxpa4yjqp.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This deploys your API on Amazon API Gateway and creates a new function on AWS Lambda.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TYqd-EGZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/c0wyg1yhielm0yzjskqz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TYqd-EGZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/c0wyg1yhielm0yzjskqz.png" alt="Daily News API"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0Uuxxo00--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fvzsqk1mt7626es7cymv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0Uuxxo00--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fvzsqk1mt7626es7cymv.png" alt="Daily News Lambda"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's try accessing the API now. You can use curl to invoke the API Gateway URL that you received during &lt;code&gt;chalice deploy&lt;/code&gt;. The API call returns a list of news items, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ipyzmoZ4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yfave1aj77kx26rlog7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ipyzmoZ4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yfave1aj77kx26rlog7r.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  How to Clean Up Resources
&lt;/h1&gt;

&lt;p&gt;You can also use the &lt;code&gt;chalice delete&lt;/code&gt; command to delete all the resources created when you ran &lt;code&gt;chalice deploy&lt;/code&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Congratulations! You just deployed a serverless application on AWS using Chalice. It wasn’t too hard, was it?&lt;/p&gt;

&lt;p&gt;You can now go ahead and make any modifications to your &lt;code&gt;app.py&lt;/code&gt; file and rerun &lt;code&gt;chalice deploy&lt;/code&gt; to redeploy your changes.&lt;/p&gt;

&lt;p&gt;You can also use Chalice to integrate your serverless app with Amazon S3, Amazon SNS, Amazon SQS, and other AWS services. Take a look at the &lt;a href="https://aws.github.io/chalice/tutorials/index.html"&gt;tutorials&lt;/a&gt; and keep exploring. The full source code for this tutorial can be found &lt;a href="https://github.com/theawesomenayak/daily-news"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thank you for staying with me so far. Hope you liked the article. You can connect with me on &lt;a href="https://www.linkedin.com/in/theawesomenayak/"&gt;LinkedIn&lt;/a&gt; where I regularly discuss technology and life. Also take a look at some of my other articles on &lt;a href="https://medium.com/@theawesomenayak"&gt;Medium&lt;/a&gt;. Happy reading 🙂&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>programming</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Microservice Observability - Metrics</title>
      <dc:creator>Siben Nayak</dc:creator>
      <pubDate>Sat, 03 Oct 2020 16:29:00 +0000</pubDate>
      <link>https://forem.com/theawesomenayak/microservice-observability-metrics-1l12</link>
      <guid>https://forem.com/theawesomenayak/microservice-observability-metrics-1l12</guid>
      <description>&lt;p&gt;In my previous &lt;a href="https://dev.to/theawesomenayak/microservice-observability-logging-3n7n"&gt;article&lt;/a&gt;, I talked about the importance of logs and the differences between structured and unstructured logging. Logs are easy to integrate in your application and provide the ability to represent any type of data in the form of strings.&lt;/p&gt;

&lt;p&gt;Metrics, on the other hand, are a numerical representation of data. They are often used to count or measure a value, and are aggregated over a period of time. Metrics give us insight into the historical and current state of a system. Since they are just numbers, they can also be used for statistical analysis and predictions about future system behaviour. Metrics are also used to trigger alerts that notify us about issues in system behaviour.&lt;/p&gt;

&lt;h1&gt;
  
  
  Logs vs Metrics
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Format
&lt;/h3&gt;

&lt;p&gt;Logs are represented as strings. They can be simple text, JSON payloads, or key-value pairs (like we discussed in structured logging).&lt;/p&gt;

&lt;p&gt;Metrics are represented as numbers. They measure something (like CPU usage, number of errors, etc.) and are numeric in nature.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resolution
&lt;/h3&gt;

&lt;p&gt;Logs contain high-resolution data. This includes complete information about an event, and can be used to correlate the flow (or path) that the event took through the system. In case of errors, logs contain the entire stack trace of the exception, which allows us to view and debug issues originating from downstream systems as well. In short, logs can tell you &lt;strong&gt;&lt;em&gt;what happened&lt;/em&gt;&lt;/strong&gt; in the system at a certain time.&lt;/p&gt;

&lt;p&gt;Metrics contain low-resolution data. This may include count of parameters (such as requests, errors, etc.) and measures of resources (such as CPU and memory utilization). In short, metrics can give you &lt;strong&gt;&lt;em&gt;a count of something that happened&lt;/em&gt;&lt;/strong&gt; in the system at a certain time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost
&lt;/h3&gt;

&lt;p&gt;Logs are expensive to store. The storage overhead of logs also increases over time and is directly proportional to the increase in traffic.&lt;/p&gt;

&lt;p&gt;Metrics have a near-constant storage overhead. The cost of storing and retrieving metrics does not increase much with traffic. It does, however, depend on the number of variables we emit with each metric.&lt;/p&gt;

&lt;h1&gt;
  
  
  Cardinality
&lt;/h1&gt;

&lt;p&gt;Metrics are identified by two key pieces of information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A metric name&lt;/li&gt;
&lt;li&gt;A set of key-value pairs called tags or labels&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each combination of these values gives the metric its cardinality. For example, if we are measuring the CPU utilization of a system with 3 hosts, the metric has a cardinality of 3, and can have the following 3 values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(name=pod.cpu.utilization, host=A)
(name=pod.cpu.utilization, host=B)
(name=pod.cpu.utilization, host=C)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Similarly, if we introduced another tag identifying the AWS region of the hosts (say us-west-1 and us-west-2), we would have a metric with a cardinality of 6.&lt;/p&gt;
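To make the arithmetic concrete, cardinality is just the product of the number of distinct values of each tag. A small sketch (tag names are hypothetical):

```python
from itertools import product


def cardinality(tag_values):
    # Number of distinct time series: product of distinct values per tag.
    total = 1
    for values in tag_values.values():
        total *= len(set(values))
    return total


tags = {'host': ['A', 'B', 'C'], 'region': ['us-west-1', 'us-west-2']}

# Enumerate every (host, region) combination, one per time series.
series = [dict(zip(tags, combo)) for combo in product(*tags.values())]
```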

&lt;h1&gt;
  
  
  Types of Metrics
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Golden Signals
&lt;/h3&gt;

&lt;p&gt;Golden signals are an effective way of monitoring the overall state of the system and identifying problems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Availability:&lt;/strong&gt; the state of your system measured from the perspective of clients, e.g. the percentage of errors out of total requests&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Health:&lt;/strong&gt; the state of your system measured using periodic pings&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Request Rate:&lt;/strong&gt; the rate of incoming requests to the system&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Saturation:&lt;/strong&gt; how free or loaded the system is, e.g. queue depth or available memory&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Utilization:&lt;/strong&gt; how busy the system is, e.g. CPU load or memory usage, represented as a percentage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error Rate:&lt;/strong&gt; the rate of errors being produced in the system&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Latency:&lt;/strong&gt; the response time of the system, usually measured at the 95th or 99th percentile&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Resource Metrics
&lt;/h3&gt;

&lt;p&gt;Resource metrics are almost always made available by default from the infrastructure provider (AWS CloudWatch or Kubernetes metrics) and are used to monitor infrastructure health.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CPU/Memory Utilization:&lt;/strong&gt; usage of core resources of the system&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Host Count:&lt;/strong&gt; number of hosts/pods that are running your system (used to detect availability issues due to pod crashes)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Live Threads:&lt;/strong&gt; threads spawned by your service (used to detect issues in multi-threading)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Heap Usage:&lt;/strong&gt; heap memory usage statistics (can help debug memory leaks)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Business Metrics
&lt;/h3&gt;

&lt;p&gt;Business metrics can be used to monitor granular interaction with core APIs or functionality in your services.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Request Rate:&lt;/strong&gt; rate of requests to the APIs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error Rate:&lt;/strong&gt; rate of errors being thrown by the APIs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Latency:&lt;/strong&gt; time taken to process requests by the APIs&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Dashboards and Alerts
&lt;/h1&gt;

&lt;p&gt;Since metrics are stored in a time-series database, it’s more efficient and reliable to run queries against them for measuring the state of the system.&lt;/p&gt;

&lt;p&gt;These queries can be used to build dashboards for representing the historical state of the system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ePhZtGuH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5bfzkduwc2dd7tcu5dya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ePhZtGuH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5bfzkduwc2dd7tcu5dya.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;They can also be used to trigger alerts when there is an issue with the system, e.g. an increase in the number of errors observed, or a sudden spike in CPU utilization.&lt;/p&gt;

&lt;p&gt;Due to their numeric nature, we can also create complex mathematical queries (such as X% of errors in the last Y minutes) to monitor system health.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;In this article, we saw the differences between metrics and logs, and how metrics can help us monitor the health of our system more efficiently. Metrics can also be used to create dashboards and alerts using monitoring software like Wavefront and Grafana.&lt;/p&gt;

&lt;p&gt;The biggest challenge in handling metrics, however, is deciding the right amount of cardinality that makes a metric useful while keeping its cost under control. It is necessary to use metrics and logs in coordination to accurately detect and debug issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  Note
&lt;/h3&gt;

&lt;p&gt;This is the second part of my Microservice Observability Series. The first part has been posted &lt;a href="https://dev.to/theawesomenayak/microservice-observability-logging-3n7n"&gt;here&lt;/a&gt;. I’ll be adding links to the next articles when they go live. Stay tuned!!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>softwaredevelopment</category>
      <category>programming</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Microservice Observability - Logs</title>
      <dc:creator>Siben Nayak</dc:creator>
      <pubDate>Sun, 27 Sep 2020 06:27:14 +0000</pubDate>
      <link>https://forem.com/theawesomenayak/microservice-observability-logging-3n7n</link>
      <guid>https://forem.com/theawesomenayak/microservice-observability-logging-3n7n</guid>
      <description>&lt;p&gt;Logs are one of the most important parts of software systems. Whether you have just started working on a new piece of software, or your system is running in a large scale production environment, you'll always find yourself seeking help from log files. Logs are the first thing developers look for when something goes wrong, or something doesn't work as expected. &lt;/p&gt;

&lt;p&gt;Logging the right information in the right way makes the life of developers so much easier. To get better at logging, developers need to know two things: what to log and how to log it. In this article, we'll take a look at some of the basic logging etiquette that can get us the best out of our logs.&lt;/p&gt;

&lt;h1&gt;
  
  
  The What and How of Logging
&lt;/h1&gt;

&lt;p&gt;Let's take the example of an e-commerce system and take a look at logging in Order Management Service (OMS).&lt;/p&gt;

&lt;p&gt;Suppose a customer order fails due to an error from Inventory Management Service (IMS), a downstream service that OMS uses to verify the available inventory.&lt;/p&gt;

&lt;p&gt;Let's assume that OMS has already accepted an order, but during the final verification step, IMS returns the following error because the product is no longer available in the inventory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;404 Product Not Available
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  What to Log
&lt;/h2&gt;

&lt;p&gt;Normally, you would log the error in this way&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Exception in fetching product information - {}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getResponseBodyAsString&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will output a log in the following format&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[2020-09-27T18:54:41,500+0530]-[ERROR]-[InventoryValidator]-[13] Exception in fetching product information - Product Not Available
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Well, there isn't much information available in this log statement, is there? A log like this doesn't serve much purpose because it lacks any contextual information about the error.&lt;/p&gt;

&lt;p&gt;Can we add more information to this log to make it more relevant for debugging? How about adding the Order Id and Product Id?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Exception in processing Order #{} for Product #{} due to exception - {}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;orderId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;productId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getResponseBodyAsString&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will output a log in the following format&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[2020-09-27T18:54:41,500+0530]-[ERROR]-[InventoryValidator]-[13] Exception in processing Order #182726 for Product #21 due to exception - Product Not Available
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now this makes a lot more sense! Looking at the logs, we can see that an error occurred while processing Order #182726 because Product #21 was not available.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Log
&lt;/h2&gt;

&lt;p&gt;While the above log makes perfect sense to us humans, it may not be the best format for machines. Let's take an example to understand why.&lt;/p&gt;

&lt;p&gt;Suppose there is some issue in the availability of a certain product (say Product #21) due to which all orders containing that product are failing. You have been assigned the task to find all the failed orders for this product. &lt;/p&gt;

&lt;p&gt;You happily do a grep for Product #21 in your logs and excitedly wait for the results. When the search completes, you get something like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[2020-09-27T18:54:41,500+0530]-[ERROR]-[InventoryValidator]-[13] Exception in processing Order #182726 for Product #21 due to exception - Product Not Available
[2020-09-27T18:53:29,500+0530]-[ERROR]-[InventoryValidator]-[13] Exception in processing Order #972526 for Product #217 due to exception - Product Not Available
[2020-09-27T18:52:34,500+0530]-[ERROR]-[InventoryValidator]-[13] Exception in processing Order #46675754 for Product #21 due to exception - Product Not Available
[2020-09-27T18:52:13,500+0530]-[ERROR]-[InventoryValidator]-[13] Exception in processing Order #332254 for Product #2109 due to exception - Product Not Available
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Not quite what we were expecting, right? So how can we improve this? Structured logging to the rescue.&lt;/p&gt;

&lt;h1&gt;
  
  
  Structured Logging
&lt;/h1&gt;

&lt;p&gt;Structured logging solves these common problems and allows log analysis tools to provide additional capabilities. Logs written in a structured format are more machine-friendly, i.e. they can be easily parsed by a machine. This can be helpful in the following scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Developers can search logs and correlate events, which is invaluable both during development as well as for troubleshooting production issues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Business teams can parse these logs and perform analysis over certain fields (for e.g. unique product count per day) by extracting and summarising these fields.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can build dashboards (both business and technical) by parsing the logs and performing aggregates over relevant fields.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's use our earlier log statement and make a small change to make it structured.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Exception in processing OrderId={} for ProductId={} due to Error={}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;orderId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;productId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ex&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getResponseBodyAsString&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will output a log in the following format&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[2020-09-27T18:54:41,500+0530]-[ERROR]-[InventoryValidator]-[13] Exception in processing OrderId=182726 for ProductId=21 due to Error=Product Not Available
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This log message can now be parsed easily by a machine, using "=" as a delimiter to extract the OrderId, ProductId and Error fields. We can then run an exact search over ProductId=21 to accomplish our task.&lt;/p&gt;
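&lt;p&gt;As a minimal sketch of that machine parsing (the class name is illustrative, and the pattern is specific to the three fields in our example; real systems typically delegate this to a log shipper or indexer), the line above could be parsed in Java like this:&lt;/p&gt;

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StructuredLogParser {

  // The three fields our example log line carries, in order of appearance.
  // Single-word values stop at whitespace; the trailing Error field takes
  // the rest of the line.
  private static final Pattern FIELDS =
      Pattern.compile("OrderId=(\\S+).*ProductId=(\\S+).*Error=(.+)");

  // Returns {orderId, productId, error}, or null if the line doesn't match.
  public static String[] parse(final String line) {
    final Matcher m = FIELDS.matcher(line);
    return m.find() ? new String[] { m.group(1), m.group(2), m.group(3) } : null;
  }

  public static void main(final String[] args) {
    final String[] fields = parse(
        "Exception in processing OrderId=182726 for ProductId=21 due to Error=Product Not Available");
    System.out.println(fields[0]); // 182726
    System.out.println(fields[1]); // 21
    System.out.println(fields[2]); // Product Not Available
  }
}
```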

&lt;p&gt;This also allows us to perform more advanced analytics on the logs, such as preparing a report of all the orders that failed with this error.&lt;/p&gt;

&lt;p&gt;If you use a log management system like Splunk, the query &lt;code&gt;Error="Product Not Available" | stats count by ProductId&lt;/code&gt; can now produce the following result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+-----------+-------+
| ProductId | count |
+-----------+-------+
| 21        | 5     |
| 27        | 12    |
+-----------+-------+
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
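&lt;p&gt;If you don't have a log management system at hand, the same count-by-field aggregation can be sketched with plain Java streams over the ProductId values extracted from failed-order logs (the class name and values below are illustrative):&lt;/p&gt;

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ErrorStats {

  // Mirrors "stats count by ProductId": group the extracted IDs and count each group
  public static Map<String, Long> countByProductId(final List<String> productIds) {
    return productIds.stream()
        .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
  }

  public static void main(final String[] args) {
    final Map<String, Long> counts =
        countByProductId(List.of("21", "27", "21", "27", "27"));
    System.out.println(counts.get("21")); // 2
    System.out.println(counts.get("27")); // 3
  }
}
```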



&lt;p&gt;We could also use a JSON layout to print our logs in JSON format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"2020-09-27T18:54:41,500+0530"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"level"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"ERROR"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"class"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"InventoryValidator"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"line"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"13"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"OrderId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"182726"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ProductId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"21"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"Product Not Available"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
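&lt;p&gt;How the JSON gets produced depends on your logging library. As one example, if you log with Log4j2, its built-in JsonLayout can emit events in this shape; a minimal appender configuration might look like the sketch below (attribute names follow Log4j2's JsonLayout; adapt it to your own setup):&lt;/p&gt;

```xml
<Configuration>
  <Appenders>
    <Console name="Console">
      <!-- compact + eventEol prints one JSON object per line;
           properties="true" includes ThreadContext (MDC) fields such as OrderId -->
      <JsonLayout compact="true" eventEol="true" properties="true"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
```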



&lt;p&gt;It's important to understand the approach behind structured logging. There is no fixed standard for it, and it can be done in many different ways. &lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;In this article, we saw the pitfalls of unstructured logging and the benefits offered by structured logging. Log management systems such as Splunk benefit hugely from well-structured log messages and can offer easy search and analytics on log events.&lt;/p&gt;

&lt;p&gt;The biggest challenge with structured logging, however, is establishing a standard set of fields for your software. This can be achieved by following a custom logging model or centralised logging, which ensures that all developers use the same fields in their log messages.&lt;/p&gt;

&lt;h4&gt;
  
  
  Note
&lt;/h4&gt;

&lt;p&gt;This is the first part of my Microservice Observability Series. I'll be adding links to the next articles when they go live. Stay tuned!!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>softwaredevelopment</category>
      <category>programming</category>
      <category>microservices</category>
    </item>
    <item>
      <title>10-Step Guide to GitHub Contributions</title>
      <dc:creator>Siben Nayak</dc:creator>
      <pubDate>Mon, 31 Aug 2020 12:58:05 +0000</pubDate>
      <link>https://forem.com/theawesomenayak/10-step-guide-to-github-contributions-1fo0</link>
      <guid>https://forem.com/theawesomenayak/10-step-guide-to-github-contributions-1fo0</guid>
      <description>&lt;p&gt;You've finally decided to make your first GitHub contribution and want to quickly start making changes. Whether it's an Open Source Software (OSS) on public GitHub or your organization's internal project on GitHub Enterprise, there's a well-defined process for contributing that makes your life easier and keeps your codebase clean. In this article, I'll give you ten simple steps to ensure a quick and clean GitHub contribution.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Contribution Lifecycle
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Fork the main repository
&lt;/h3&gt;

&lt;p&gt;Forking the repository creates a copy of it in your account. You can make changes and push any code to this fork, without worrying about messing up the original code base. Click on the fork button at the top of the page to create a new fork.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WGbFO4Ht--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/j3lbwo20rlakkqgsg624.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WGbFO4Ht--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/j3lbwo20rlakkqgsg624.png" alt="Fork"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The forked repository will now be available in the "Repositories" section in your account.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Clone the forked repository to your machine
&lt;/h3&gt;

&lt;p&gt;Now we need to clone the forked repository to our machine so that we have a local copy of the code. Click on the clipboard icon next to the SSH or HTTPS URL of your forked repository to copy it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M0JBFwcH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2csyhbfg3ikynrh0ojkj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M0JBFwcH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2csyhbfg3ikynrh0ojkj.png" alt="Clone"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now open a terminal on your machine and run the following command to clone the forked repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone git@github.com:theawesomenayak/guava.git
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Create a feature branch
&lt;/h3&gt;

&lt;p&gt;When making any change to the code, it's a best practice to create a new feature branch for it. &lt;/p&gt;

&lt;p&gt;This ensures that we keep the master branch clean, and are able to simply revert our code or make updates when necessary.&lt;/p&gt;

&lt;p&gt;Switch to the directory that was created after you cloned the forked repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd guava
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Create a new feature branch with a name that identifies with the changes you are planning to do. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout -b fix-npe-issue
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Commit your changes to the feature branch
&lt;/h3&gt;

&lt;p&gt;If you have created any new files as part of your change, you will need to add them to the branch you just created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add &amp;lt;filename&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You then have to commit all your changes to the branch. Make sure you add a valid commit message (as per the conventions of the project):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git commit -m "Fixed the NPE issue due to a null key used in cache"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Push the feature branch to your fork
&lt;/h3&gt;

&lt;p&gt;Now it's time to push your commit to your forked repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push origin fix-npe-issue
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  6. Raise a Pull Request against the main repository
&lt;/h3&gt;

&lt;p&gt;Once you have pushed your code to your forked repository, it's time to raise a PR against the main repository. Click on the "Pull Request" button to start a new PR.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mkxiupNY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7rac4t3kwmvertujtsr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mkxiupNY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7rac4t3kwmvertujtsr5.png" alt="PR"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This takes you to a screen where the changes in your forked repository are compared with the code in the main repository. You can review the changes and provide a valid description of your changes before submitting them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oJy3xODV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xdxwqlcwnr31aa158wy0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oJy3xODV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xdxwqlcwnr31aa158wy0.png" alt="PR Description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Address review comments and get your PR merged
&lt;/h3&gt;

&lt;p&gt;The code maintainers will often come back with comments on the changes you have made. These could propose functional changes, or cosmetic ones such as formatting. Once you make these changes, simply push them to your branch and the PR will be updated automatically.&lt;/p&gt;

&lt;p&gt;Once the changes are good, a maintainer will merge them to the main repository. &lt;/p&gt;

&lt;p&gt;Congratulations!!! You are now officially an open-source contributor.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Add the main repository as an upstream to your cloned repository
&lt;/h3&gt;

&lt;p&gt;Apart from you, many other developers also keep merging their code into the main repository. You need to sync your forked repository with it continuously to get the latest code.&lt;/p&gt;

&lt;p&gt;Your cloned repository is linked to your forked repository. &lt;/p&gt;

&lt;p&gt;To keep your fork in sync with the main repository, you'll need to connect them by adding the main repository as an upstream in your cloned repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git remote add upstream git@github.com:google/guava.git
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Verify that the upstream has been set correctly using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git remote -v
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It should display the following values to confirm that the origin and upstream are pointing to the correct repositories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;origin  git@github.com:theawesomenayak/guava.git (fetch)
origin  git@github.com:theawesomenayak/guava.git (push)
upstream        git@github.com:google/guava.git (fetch)
upstream        git@github.com:google/guava.git (push)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  9. Update your master branch from upstream
&lt;/h3&gt;

&lt;p&gt;Once the upstream is set, you can pull in the changes that other developers have made to the main repository. This updates the cloned repository on your local machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git pull upstream master
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  10. Push the master branch to your fork
&lt;/h3&gt;

&lt;p&gt;Once you have all the updates on your local machine, you will need to push them to your forked repository so that it's in sync with the main repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push origin master
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  (Optional) Delete your feature branch
&lt;/h3&gt;

&lt;p&gt;Once the feature branch has been merged into the main repository, it's no longer needed and can be deleted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git branch -d fix-npe-issue
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can also delete the remote branch from your forked repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push origin --delete fix-npe-issue
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
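&lt;p&gt;If you'd like to rehearse this whole lifecycle without touching GitHub, the flow can be simulated locally with bare repositories standing in for the main repository and your fork (all names and commit messages here are illustrative):&lt;/p&gt;

```shell
# Local, network-free simulation: "upstream.git" plays the main repository,
# "fork.git" plays your fork on GitHub.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Identity for the demo commits (normally set once in your global config)
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# The main repository, seeded with one commit on master
git init -q --bare upstream.git
git -C upstream.git symbolic-ref HEAD refs/heads/master
git clone -q "$workdir/upstream.git" seed
(cd seed && git symbolic-ref HEAD refs/heads/master \
  && git commit -q --allow-empty -m "initial commit" \
  && git push -q origin master)

# Step 1: "forking" is essentially a server-side copy of the repository
git clone -q --bare "$workdir/upstream.git" fork.git

# Steps 2-5: clone the fork, create a feature branch, commit, push
git clone -q "$workdir/fork.git" work
cd work
git checkout -q -b fix-npe-issue
git commit -q --allow-empty -m "Fixed the NPE issue"
git push -q origin fix-npe-issue

# Step 8: add the main repository as upstream
git remote add upstream "$workdir/upstream.git"
git remote -v

# Meanwhile, another developer's change lands in the main repository
(cd "$workdir/seed" && git commit -q --allow-empty -m "another change" \
  && git push -q origin master)

# Steps 9-10: update master from upstream and push it to your fork
git checkout -q master
git pull -q upstream master
git push -q origin master
```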






&lt;h2&gt;
  
  
  Finish Line
&lt;/h2&gt;

&lt;p&gt;Contributing to GitHub projects can be tricky depending on how many developers are working on them concurrently. Hopefully, this article demystifies the process of making GitHub contributions and makes your development cycle a bit easier.&lt;/p&gt;

&lt;p&gt;Thank you for spending your time reading my article. Keep building!&lt;/p&gt;

</description>
      <category>github</category>
      <category>beginners</category>
      <category>opensource</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
