<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Rushal Verma</title>
    <description>The latest articles on Forem by Rushal Verma (@rusrushal13).</description>
    <link>https://forem.com/rusrushal13</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F17697%2Fca737cdc-6511-4df5-af41-9c1287b69d8c.jpeg</url>
      <title>Forem: Rushal Verma</title>
      <link>https://forem.com/rusrushal13</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rusrushal13"/>
    <language>en</language>
    <item>
      <title>Hello, I'm a Junior DevOps looking for a job. Is there anyone who is looking for Junior DevOps?</title>
      <dc:creator>Rushal Verma</dc:creator>
      <pubDate>Mon, 23 Jul 2018 11:39:21 +0000</pubDate>
      <link>https://forem.com/rusrushal13/hello-im-a-junior-devops-looking-for-a-job-is-there-anyone-who-is-looking-for-junior-devops-2eki</link>
      <guid>https://forem.com/rusrushal13/hello-im-a-junior-devops-looking-for-a-job-is-there-anyone-who-is-looking-for-junior-devops-2eki</guid>
      <description>&lt;p&gt;Here is the link to my resume: &lt;a href="https://www.dropbox.com/s/yl0s6clvvo60zay/Rushal_Resume.pdf?dl=0"&gt;Resume&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I recently graduated and am currently residing in India, but I am willing to relocate or work remotely! I have a great interest in learning about infrastructure, containers, data-intensive technologies, and DevOps. I am currently reading &lt;code&gt;Kubernetes: Up and Running&lt;/code&gt; and posting notes from it on my GitHub, which you can find here: &lt;a href="https://github.com/rusrushal13/Kubernetes-Up-and-Running-Notes"&gt;Kubernetes: Up and Running&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Also, to pay the bills, I am currently doing an internship with The Linux Foundation under their &lt;a href="https://github.com/openmainframeproject/tsc/blob/master/projects/internship.md"&gt;outreach program&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;My email: &lt;a href="mailto:rusrushal13@gmail.com"&gt;rusrushal13@gmail.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So if anyone is interested in hiring, please let me know!&lt;/p&gt;

</description>
      <category>discuss</category>
    </item>
    <item>
      <title>Learning about the Druid Architecture</title>
      <dc:creator>Rushal Verma</dc:creator>
      <pubDate>Thu, 11 Jan 2018 18:59:57 +0000</pubDate>
      <link>https://forem.com/rusrushal13/learning-about-the-druid-architecture-184c</link>
      <guid>https://forem.com/rusrushal13/learning-about-the-druid-architecture-184c</guid>
      <description>&lt;p&gt;Learning about the Druid Architecture&lt;br&gt;
This post distills the material presented in the paper titled "&lt;a href="http://static.druid.io/docs/druid.pdf" rel="noopener noreferrer"&gt;Druid: A Real-time Analytical Data Store&lt;/a&gt;", published in 2014 by F. Yang and others.&lt;/p&gt;

&lt;p&gt;The paper presents the architecture of Druid, explains what problem it solves in the world of analytical processing, and details how it supports fast aggregations, flexible filters, and low-latency data ingestion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The paper starts by discussing how much Hadoop has grown over time, and the pain of dealing with Hadoop's performance. Hadoop excels at storing and providing access to large amounts of data; however, it makes no performance guarantees around how quickly that data can be accessed. While Hadoop works well for storing data, it is not optimised for ingesting data and making that data immediately readable.&lt;/p&gt;

&lt;p&gt;The system combines a column-oriented storage layout, a distributed, shared-nothing architecture, and an advanced indexing structure to allow for the arbitrary exploration of billion-row tables with sub-second latencies.&lt;/p&gt;

&lt;p&gt;Every section of the paper describes a piece of the problem and how Druid's storage solves it; the paper also describes what the authors learned running Druid in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Druid solves the problem of ingesting and exploring large quantities of transactional events (log data). The need for Druid was motivated by the fact that existing open source Relational Database Management Systems (RDBMS) and NoSQL key/value stores were unable to provide a low latency data ingestion and query platform for interactive applications. In addition to low query latency, the system needs to be multi-tenant and highly available. The problem of data exploration, ingestion, and availability that Druid tries to solve spans multiple industries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A Druid cluster consists of different types of nodes, and each node type is designed to perform a specific set of tasks. The node types are loosely coupled, giving the cluster a distributed, shared-nothing architecture in which intra-cluster communication failures have minimal impact on availability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AvH-x1zx1fAHZdsNJb8Gh0Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AvH-x1zx1fAHZdsNJb8Gh0Q.png" alt="Composition and flow of data in Druid Cluster"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-Time Nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Real-Time Nodes encapsulate the functionality to ingest and query event streams. Events indexed via these nodes are immediately available for querying. The nodes announce their online state and the data they serve in Zookeeper (used for coordination). For recently ingested events, Druid behaves as a row store, buffering them in memory; the buffered events are periodically persisted to a column-oriented storage format. Each persisted index is immutable, and nodes load the indexes into off-heap memory for querying.&lt;/p&gt;
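&lt;p&gt;As a rough illustration (a Python sketch, not Druid's actual code), converting a buffer of row-oriented events into a column-oriented layout looks like this:&lt;/p&gt;

```python
def rows_to_columns(rows):
    """Convert a buffer of row dicts into a column-oriented layout.

    Assumes all rows share the same keys, so the column lists stay aligned.
    """
    columns = {}
    for row in rows:
        for key, value in row.items():
            columns.setdefault(key, []).append(value)
    return columns

buffer = [
    {"timestamp": 1, "page": "Home", "clicks": 3},
    {"timestamp": 2, "page": "About", "clicks": 1},
]
print(rows_to_columns(buffer))
# {'timestamp': [1, 2], 'page': ['Home', 'About'], 'clicks': [3, 1]}
```

&lt;p&gt;Scanning one column list at a time is what lets a columnar store aggregate a single field without touching the rest of each row.&lt;/p&gt;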

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F600%2F1%2AedKVl4RKNeNMnyCR3JDtZQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F600%2F1%2AedKVl4RKNeNMnyCR3JDtZQ.png" alt="Processes in Real-Time Nodes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A segment here refers to the immutable block containing all the events that have been ingested by a real-time node over some span of time. Deep Storage usually refers to S3 or HDFS. The ingest, persist, merge, and handoff steps are fluid; there is no data loss during any of these processes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F600%2F1%2AAE3stpHEr7-Wp3FhsEO6AQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F600%2F1%2AAE3stpHEr7-Wp3FhsEO6AQ.png" alt="Real-time nodes coordination with Kafka(or any message bus)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Real-Time Nodes are consumers of data and require a corresponding producer to provide the data stream; Kafka (or any other message bus) sits between the producer and the real-time node, and real-time nodes ingest data by reading events from the message bus. The time from event creation to event consumption is ordinarily on the order of hundreds of milliseconds. The message bus acts as a buffer for incoming events and as a single endpoint from which multiple real-time nodes can read events.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Historical Nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Historical Nodes encapsulate the functionality to load and serve the immutable blocks of data (segments) created by real-time nodes. Most of the data in Druid is immutable, so historical nodes are typically the main workers in the cluster. These nodes also follow a shared-nothing architecture, i.e. the nodes have no knowledge of one another; they simply know how to load, drop, and serve immutable segments. They, too, use Zookeeper for coordination.&lt;/p&gt;

&lt;p&gt;Historical Nodes can support read consistency because they only deal with immutable data. Immutable data blocks also enable a simple parallelization model: nodes can concurrently scan and aggregate immutable blocks without blocking.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Broker Nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Broker Nodes are query routers to historical and real-time nodes. They understand the metadata published in Zookeeper about which segments are queryable and where those segments are located, and they route incoming queries so that each query hits the right historical or real-time nodes. Broker nodes also merge partial results from historical and real-time nodes before returning a final consolidated result to the caller. These nodes contain a cache with an LRU invalidation strategy; the cache can use local heap memory or an external distributed key/value store such as Memcached. Real-time data is never cached. During Zookeeper outages, broker nodes use their last known state and keep forwarding queries to the real-time and historical nodes.&lt;/p&gt;
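&lt;p&gt;For intuition, the LRU invalidation strategy can be sketched in a few lines of Python (a generic sketch; Druid's actual cache keys and storage differ):&lt;/p&gt;

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put(("segment1", "query1"), [10, 20])
cache.put(("segment2", "query1"), [30])
cache.get(("segment1", "query1"))   # touch it, so segment2 is now LRU
cache.put(("segment3", "query1"), [40])
print(cache.get(("segment2", "query1")))  # None (evicted)
```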

&lt;ul&gt;
&lt;li&gt;Coordinator Nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Coordinator Nodes are in charge of data management and distribution on historical nodes. They tell the historical nodes to load new data, drop outdated data, replicate data, and move data to balance load. Coordinator nodes undergo a leader-election process that determines a single node to run the coordinator functionality; the remaining coordinator nodes act as redundant backups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Storage Format&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Data tables in Druid (called data sources) are collections of timestamped events, partitioned into a set of segments. Segments represent the fundamental storage unit in Druid, and replication and distribution are done at the segment level. Druid creates additional lookup indices for string columns so that only those rows that pertain to a particular query filter are ever scanned. The paper also discusses how storing column indices helps maximise compression. Druid allows pluggable storage engines, such as the JVM heap or memory-mapped structures (the default).&lt;/p&gt;
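&lt;p&gt;A minimal sketch of time-based segment assignment (illustrative only; real Druid segments also carry data source, version, and partition metadata):&lt;/p&gt;

```python
from collections import defaultdict

HOUR = 3600  # segment span in seconds

def assign_segments(events, span=HOUR):
    """Bucket (timestamp, payload) events into fixed time spans."""
    segments = defaultdict(list)
    for ts, payload in events:
        start = (ts // span) * span  # start of the span this event falls in
        segments[start].append(payload)
    return dict(segments)

events = [(3600, "edit-a"), (3700, "edit-b"), (7300, "edit-c")]
print(assign_segments(events))
# {3600: ['edit-a', 'edit-b'], 7200: ['edit-c']}
```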

&lt;p&gt;&lt;strong&gt;5. Query API&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Druid has its own query language and accepts queries as POST requests; broker, historical, and real-time nodes all share the same query API. It also supports filter sets. Druid supports many types of aggregations, including sums on floating-point and integer types, minimums, maximums, and complex aggregations (cardinality estimation and others). The one main drawback of Druid is that it does not support join queries; at the time of the paper, research was still ongoing to resolve this.&lt;/p&gt;
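&lt;p&gt;To give a flavor of the query API, a native query has roughly this shape (the field values below are illustrative, modeled on the examples in the paper):&lt;/p&gt;

```python
import json

# A timeseries query: count rows per day for one page over one week.
query = {
    "queryType": "timeseries",
    "dataSource": "wikipedia",
    "intervals": ["2013-01-01/2013-01-08"],
    "granularity": "day",
    "filter": {"type": "selector", "dimension": "page", "value": "Ke$ha"},
    "aggregations": [{"type": "count", "name": "rows"}],
}

# The JSON body is POSTed to a broker node, which fans it out to the
# historical and real-time nodes that serve the relevant segments.
print(json.dumps(query, indent=2))
```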

&lt;p&gt;&lt;strong&gt;6. Performance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The paper shares many insights (graphs and tables) from running Druid in production. The results:&lt;/p&gt;

&lt;p&gt;Average query latency: 550 milliseconds, with 90% of queries returning in less than 1 second, 95% in less than 2 seconds, and 99% in less than 10 seconds.&lt;br&gt;
For the most basic datasets, the cluster ingested 800,000 events/second/core. Ingestion rates depend heavily on the complexity of the data source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Druid in Production&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Druid is often used to explore data and generate reports on it; users tend to explore short time intervals of recent data.&lt;br&gt;
Concurrent queries can be problematic, which they address with query prioritization.&lt;/p&gt;

&lt;p&gt;The authors assume that many nodes failing at once is unlikely, and they leave enough spare capacity to completely reassign the data from two historical nodes.&lt;br&gt;
In the case of data center outages, Druid relies on Deep Storage.&lt;br&gt;
Operational metrics are provisioned on the nodes too, and include system-level data (CPU usage, available memory, JVM statistics, and disk capacity). The metrics are used to monitor the performance and stability of the cluster, and also to understand how users interact with the data.&lt;br&gt;
Druid is also paired with a stream processor (Apache Storm) to serve both real-time and historical data: Storm handles the streaming data processing work, and Druid's columnar storage responds to queries.&lt;br&gt;
Segments are the essence of Druid, and they are distributed; they can be exactly replicated across multiple data centers. Such a setup may be desired if one data center is situated closer to users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Conclusions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The authors try to provide all the essential information about Druid, so anyone who wants to get started with it can start with this paper. They reference many more papers that help you understand OLAP databases, columnar versus row storage, distributed and real-time analytical stores, and much more, all from a single paper. It's an essential read for people getting started with Druid.&lt;/p&gt;

</description>
      <category>druid</category>
      <category>infrastructure</category>
      <category>distributedsystems</category>
      <category>bigdata</category>
    </item>
    <item>
      <title>How to document the learning on Internship? </title>
      <dc:creator>Rushal Verma</dc:creator>
      <pubDate>Wed, 10 Jan 2018 03:37:37 +0000</pubDate>
      <link>https://forem.com/rusrushal13/how-to-document-the-learning-on-internship-k7m</link>
      <guid>https://forem.com/rusrushal13/how-to-document-the-learning-on-internship-k7m</guid>
      <description>&lt;p&gt;I am a Data Engineering Intern at a startup in India. I am learning new things every day because of switching teams back n forth. How to document these learning day to day or week to week so that I will remember these stuffs after a year or two? Any ideas will help a lot :D&lt;/p&gt;

</description>
      <category>discuss</category>
    </item>
    <item>
      <title>Twitter University test preparation</title>
      <dc:creator>Rushal Verma</dc:creator>
      <pubDate>Tue, 08 Aug 2017 10:54:40 +0000</pubDate>
      <link>https://forem.com/rusrushal13/twitter-university-test-preparation</link>
      <guid>https://forem.com/rusrushal13/twitter-university-test-preparation</guid>
      <description>&lt;p&gt;I applied for Twitter University Test. Is there anyone here from twitter or knows about the process?&lt;br&gt;
What kind of questions should I look at / practice with for Twitter University - Test 2 specifically?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>interview</category>
      <category>beginners</category>
      <category>career</category>
    </item>
    <item>
      <title>How do I know if I’m good at programming?</title>
      <dc:creator>Rushal Verma</dc:creator>
      <pubDate>Sun, 23 Jul 2017 15:37:40 +0000</pubDate>
      <link>https://forem.com/rusrushal13/how-do-i-know-if-im-good-at-programming</link>
      <guid>https://forem.com/rusrushal13/how-do-i-know-if-im-good-at-programming</guid>
      <description>&lt;p&gt;I am reading a article(&lt;a href="http://www.danielslater.net/2017/07/how-do-i-know-if-im-good-at-programming.html"&gt;http://www.danielslater.net/2017/07/how-do-i-know-if-im-good-at-programming.html&lt;/a&gt;) where a senior developer try to explain the answer of it. I also want to know more answers? &lt;br&gt;
How you all know that you are good?&lt;/p&gt;

</description>
      <category>discuss</category>
    </item>
    <item>
      <title>How to be a junior software developer? Tips for students to grab a good job, what was your story?</title>
      <dc:creator>Rushal Verma</dc:creator>
      <pubDate>Mon, 10 Jul 2017 09:59:45 +0000</pubDate>
      <link>https://forem.com/rusrushal13/how-to-be-a-junior-software-developer-tips-for-students-to-grab-a-good-job-what-was-your-story</link>
      <guid>https://forem.com/rusrushal13/how-to-be-a-junior-software-developer-tips-for-students-to-grab-a-good-job-what-was-your-story</guid>
      <description>

&lt;p&gt;I am a student from India entering the last year of my computer science engineering degree. I'd be glad to hear some advice on how to grab a good job in the field of junior software development. I am currently learning about Docker and would like to know more about it too, and if possible a job referral would be great :D&lt;/p&gt;


</description>
      <category>discuss</category>
      <category>jobhunting</category>
    </item>
    <item>
      <title>Publish your first image to Docker Hub</title>
      <dc:creator>Rushal Verma</dc:creator>
      <pubDate>Sun, 09 Jul 2017 21:02:15 +0000</pubDate>
      <link>https://forem.com/rusrushal13/publish-your-first-image-to-docker-hub</link>
      <guid>https://forem.com/rusrushal13/publish-your-first-image-to-docker-hub</guid>
      <description>

&lt;p&gt;As you are familiar with Docker from my previous post, let's dive in and explore more.&lt;/p&gt;

&lt;p&gt;Now that you know how to run a container and pull an image, we should publish our image for others too. Why should you have all the fun? ;)&lt;/p&gt;

&lt;p&gt;So what do we need to publish our Docker image?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Dockerfile&lt;/li&gt;
&lt;li&gt;Your App&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Yeah, that’s it.&lt;/p&gt;

&lt;p&gt;Why do we need our app the Docker way?&lt;/p&gt;

&lt;p&gt;Historically, if we have an app (say, a Python app), we need a Python runtime environment (with all its dependencies) on our machine. This creates a situation where the environment on your machine has to be just so in order for your app to run as expected, and the same is true for the server where you deploy it. With Docker, you don't need to set up any environment on the host. You can just grab a portable Python runtime as an image, no installation necessary. Then your build can include the base Python image right alongside your app code, ensuring that your app, its dependencies, and the runtime all travel together. These portable images are defined by something called a Dockerfile.&lt;/p&gt;

&lt;p&gt;A Dockerfile describes the environment inside your container: it creates an isolated environment for your container, declares which ports will be exposed to the outside world, and specifies which files you want to "copy in" to that environment. After doing that, you can expect the build of your app defined in this Dockerfile to behave exactly the same wherever it runs.&lt;/p&gt;

&lt;p&gt;So let's create a directory and make a Dockerfile.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.6
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
EXPOSE 80
ENV NAME world
CMD [“python”, “app.py”]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So you have your Dockerfile. You can see the syntax is pretty easy and self-explanatory.&lt;/p&gt;

&lt;p&gt;Now we need our app. Let's create one, a python app ;)&lt;/p&gt;

&lt;p&gt;&lt;code&gt;app.py&lt;/code&gt;&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from flask import Flask
import os
import socket
app = Flask(__name__)
@app.route("/")
def hello():
    html = "&amp;lt;h3&amp;gt;Hello {name}!&amp;lt;/h3&amp;gt;" \
           "&amp;lt;b&amp;gt;Hostname:&amp;lt;/b&amp;gt; {hostname}&amp;lt;br/&amp;gt;"
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname())
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;code&gt;requirements.txt&lt;/code&gt;&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Flask
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now you have everything you need in order to proceed. Let's build the app.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ls&lt;/code&gt; will now show you this:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ls
app.py        requirements.txt        Dockerfile
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now create the image.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t imagebuildinginprocess .
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Where is your image? It’s in your local image registry.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
imagebuildinginprocess    latest              4728a04a9d39        14 minutes ago      694MB
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Let's run it too:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 4000:80 imagebuildinginprocess
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;What we did here is map host port 4000 to the container's exposed port 80. You should see a notice that Python is serving your app at &lt;a href="http://0.0.0.0:80"&gt;http://0.0.0.0:80&lt;/a&gt;. But that message comes from inside the container, which doesn't know you mapped port 80 of that container to 4000, making the actual URL &lt;a href="http://localhost:4000"&gt;http://localhost:4000&lt;/a&gt;. Go to that URL in a web browser to see the content served up on a web page, including the "Hello World" text and the container ID.&lt;/p&gt;

&lt;p&gt;Let's Share it :D&lt;/p&gt;

&lt;p&gt;We will push our built image to the registry so that we can use it anywhere. The Docker CLI uses Docker's public registry by default.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log into the Docker public registry from your local machine. (If you don't have an account, create one at cloud.docker.com.)&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker login
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Tag the image: this is like naming the version of the image. It's optional, but recommended, as it helps in maintaining versions (just like ubuntu:16.04 and ubuntu:17.04).&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag imagebuildinginprocess rusrushal13/get-started:part1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Publish the image: upload your tagged image to the repository. Once complete, the results of this upload are publicly available. If you log into Docker Hub, you will see the new image there, with its pull command.&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push rusrushal13/get-started:part1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Yeah, that's it, you are done. Now you can go to Docker Hub and check it out too ;). You published your first image.&lt;/p&gt;

&lt;p&gt;I found this GitHub repository really awesome. Have a look at it: &lt;a href="https://github.com/jessfraz/dockerfiles"&gt;https://github.com/jessfraz/dockerfiles&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Do give me feedback for improvement ;)&lt;/p&gt;


</description>
      <category>digitalproductschool</category>
      <category>devops</category>
      <category>docker</category>
      <category>dockerimage</category>
    </item>
    <item>
      <title>Hello Docker</title>
      <dc:creator>Rushal Verma</dc:creator>
      <pubDate>Sat, 08 Jul 2017 05:18:30 +0000</pubDate>
      <link>https://forem.com/rusrushal13/hello-docker</link>
      <guid>https://forem.com/rusrushal13/hello-docker</guid>
      <description>&lt;p&gt;Docker is a new craze which is quite popular nowadays. I don’t know much before about it but at my internship, I attended a workshop from Dieter Reuter(Docker Captain)and Niclas Mietz from bee42 solutions gmbh.&lt;/p&gt;

&lt;p&gt;It was quite a fun two-day workshop with them, and I got to know a lot. So let's get started with Docker.&lt;br&gt;
Before getting started, you should install Docker on your machine; Google can help with that.&lt;/p&gt;

&lt;p&gt;Okay, now your machine has Docker, so you may be wondering what Docker really is.&lt;/p&gt;

&lt;p&gt;Docker is an open platform for building, shipping, and running applications. With Docker, you can manage your infrastructure in the same ways you manage your applications. It’s easy, simple and quite powerful too.&lt;/p&gt;

&lt;p&gt;Okay as everyone knows we always start with Hello-world&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker container run hello-world&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In the output, you can see that many things are happening (what Docker is doing). But before we go through it, I want to illustrate something regarding the Docker Engine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Engine
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fz41mmtmgm2cqrf5bspwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fz41mmtmgm2cqrf5bspwg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker is like your car: it has an engine that manages everything. It consists of a server known as the Docker daemon (the &lt;code&gt;dockerd&lt;/code&gt; command) which manages everything for the client (the Docker CLI). In between, an API handles requests from the client to the server. The server responds to the client by managing images, containers, networks, and volumes.&lt;/p&gt;

&lt;p&gt;Docker Daemon: The Docker daemon listens to API requests and manages Docker objects(images, containers, networks, and volumes). A daemon can also communicate with other daemons to manage Docker services(swarm mode).&lt;/p&gt;

&lt;p&gt;Docker Client: The Docker client is the way that many Docker users interact with Docker. When you use commands such as docker run the client sends these commands to dockerd which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.&lt;/p&gt;

&lt;p&gt;Docker Registries: A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry. You can upgrade the application by pulling the new version of the image and redeploying the containers.&lt;/p&gt;

&lt;p&gt;Images: An image is a read-only template for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you may build an image which is based on the ubuntu image but installs the Nginx web server and your application, as well as the configuration details needed to make your application run. Images are lightweight, small, and fast when compared to other virtualization technologies.&lt;/p&gt;

&lt;p&gt;Containers: A container is a runnable instance of an image. You can create, run, stop, move, or delete a container using the CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state. By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container's network, storage, or other underlying subsystems are from other containers or from the host machine. A container is defined by its image as well as any configuration options you provide to it when you create or run it. When a container stops, any changes to its state that are not stored in persistent storage disappear.&lt;/p&gt;

&lt;p&gt;Services: A service allows you to scale containers across multiple Docker daemons, which all work together as a swarm with multiple managers and workers. Each member of a swarm is a Docker daemon, and the daemons all communicate using the Docker API. A service lets you define the desired state, such as the number of replicas of the service that must be available at any given time. By default, the service is load-balanced across all worker nodes. To the consumer, the Docker service appears to be a single application.&lt;/p&gt;

&lt;p&gt;After some theory let’s see the output now,&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Hello from Docker!&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Docker client contacted the Docker daemon.&lt;/li&gt;
&lt;li&gt;The Docker daemon pulled the "hello-world" image from Docker Hub.&lt;/li&gt;
&lt;li&gt;The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.&lt;/li&gt;
&lt;li&gt;The Docker daemon streamed that output to the Docker client, which sent it to your terminal.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That’s it, pretty simple and easy. Enjoy the easiness of it.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>digitalproductschool</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
