<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: P. Acharya</title>
    <description>The latest articles on Forem by P. Acharya (@p_acharya_cb32943b1cb6a0).</description>
    <link>https://forem.com/p_acharya_cb32943b1cb6a0</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1778919%2F6282f0aa-3f96-4b14-bfe0-1ef9662b3640.jpg</url>
      <title>Forem: P. Acharya</title>
      <link>https://forem.com/p_acharya_cb32943b1cb6a0</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/p_acharya_cb32943b1cb6a0"/>
    <language>en</language>
    <item>
      <title>From DNS to Containers: How AWS Routes Traffic Using Route 53 and Application Load Balancer</title>
      <dc:creator>P. Acharya</dc:creator>
      <pubDate>Sun, 14 Dec 2025 17:17:26 +0000</pubDate>
      <link>https://forem.com/p_acharya_cb32943b1cb6a0/from-dns-to-containers-how-aws-routes-traffic-using-route-53-and-application-load-balancer-3eo</link>
      <guid>https://forem.com/p_acharya_cb32943b1cb6a0/from-dns-to-containers-how-aws-routes-traffic-using-route-53-and-application-load-balancer-3eo</guid>
<description>&lt;h2&gt;
  
  
  Why modern apps don’t expose EC2 directly
&lt;/h2&gt;

&lt;p&gt;In theory, you can deploy an application on an EC2 instance, grab its public IP, and let users hit it directly. In practice, this fails almost immediately in real-world systems because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We want users to access the web application through a human-readable domain name such as google.com. The public IP keeps changing as instances are stopped, scaled, or replaced, and users can’t reliably establish a secure HTTPS connection to a bare IP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As traffic to our web application grows, a single server becomes a bottleneck. You need a mechanism to add or remove backend instances without users noticing the change.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Exposing EC2 instances directly also makes high availability hard: you need to distribute traffic across multiple Availability Zones so that a single data center failure doesn’t bring down your entire application.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS solves the above problems through a carefully designed pipeline. This article explains the complete end-to-end request flow, from the moment a user types your domain in their browser to the moment a Docker container receives the request.&lt;/p&gt;

&lt;h2&gt;
  
  
  How DNS Resolution Works
&lt;/h2&gt;

&lt;p&gt;What happens when a user enters a domain?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user's browser (stub resolver) doesn't know the IP address of the domain you typed (for example, app.example.com), so it sends a DNS query to the configured DNS resolver, usually your ISP's resolver or a public resolver such as 1.1.1.1.&lt;/li&gt;
&lt;li&gt;This DNS resolver performs a recursive lookup on the browser's behalf. It first queries the root nameservers to find the authoritative nameserver for the .com TLD.&lt;/li&gt;
&lt;li&gt;The root nameserver responds with the address of the TLD authoritative nameserver for .com.&lt;/li&gt;
&lt;li&gt;The TLD authoritative nameserver then responds with the address of the authoritative nameserver for example.com.&lt;/li&gt;
&lt;li&gt;This is where AWS Route 53 comes into play: Route 53 acts as the authoritative nameserver for example.com, managing all the DNS records for the domain. When a resolver queries Route 53 for app.example.com, Route 53 returns the IP address(es) associated with that subdomain.&lt;/li&gt;
&lt;li&gt;The recursive resolver caches this response and sends it back to the user's browser.&lt;/li&gt;
&lt;li&gt;The browser then establishes a TCP connection to the resolved IP address.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This whole process typically takes 10 to 100 milliseconds, and caching at multiple levels makes subsequent requests faster because they skip several of these steps.&lt;/p&gt;
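&lt;p&gt;The steps above can be sketched in a few lines of Python. This is a toy model, not a real resolver: the nameserver registry, names, and IPs below are all invented for illustration (real resolvers speak the DNS wire protocol over UDP/TCP).&lt;/p&gt;

```python
# Toy sketch of recursive DNS resolution against a made-up registry.
# Each "nameserver" maps a name suffix to either a referral or a final answer.
NAMESERVERS = {
    "root":    {"com.": ("referral", "tld-com")},
    "tld-com": {"example.com.": ("referral", "route53")},
    "route53": {"app.example.com.": ("answer", ["203.0.113.10", "203.0.113.11"])},
}

def resolve(qname, cache):
    """Walk root, then TLD, then authoritative nameserver, caching the answer."""
    if qname in cache:                    # cache hit: skip every step below
        return cache[qname]
    server = "root"
    for _ in range(10):                   # bound the referral chain
        zone = NAMESERVERS[server]
        match = next((s for s in zone if qname.endswith(s)), None)
        if match is None:
            raise LookupError("no data for " + qname + " at " + server)
        kind, value = zone[match]
        if kind == "answer":
            cache[qname] = value          # the recursive resolver caches results
            return value
        server = value                    # follow the referral downward
    raise LookupError("referral chain too long")
```

&lt;p&gt;The first call to resolve walks all three nameservers; a second call for the same name returns straight from the cache, which is exactly why repeat visits resolve so much faster.&lt;/p&gt;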

&lt;h1&gt;
  
  
  Difference between Authoritative and Recursive DNS Servers
&lt;/h1&gt;

&lt;p&gt;Authoritative DNS servers store the actual DNS records for a domain and respond to queries about the domains they are responsible for. Route 53 acts as your authoritative DNS server.&lt;/p&gt;

&lt;p&gt;Recursive DNS servers on the other hand perform the full resolution process on behalf of clients, querying other servers and caching results. Your ISP typically provides these.&lt;/p&gt;

&lt;p&gt;Route 53 can also function as a recursive resolver (through Route 53 Resolver) when you need to resolve private VPC domain names or forward queries to on-premises DNS servers.&lt;/p&gt;

&lt;h1&gt;
  
  
  Difference between Route 53 Alias Records and CNAME Records
&lt;/h1&gt;

&lt;p&gt;A CNAME ("canonical name") record points to another domain name. For example, app.example.com can have a CNAME record pointing to the ALB's domain (my-alb-123456.us-east-1.elb.amazonaws.com). In this case the resolver must perform another DNS query to resolve the ALB's domain name to an IP address.&lt;/p&gt;

&lt;p&gt;Using a CNAME increases latency (two DNS lookups instead of one) and cost: AWS charges per query, so a CNAME effectively doubles your query cost.&lt;/p&gt;

&lt;p&gt;An alias record is a Route 53 feature built specifically for AWS resources. When Route 53 receives a query for app.example.com, it internally resolves the ALB's DNS name to its current IP addresses and returns those addresses directly to the client. This means a single DNS lookup (lower latency) and lower cost, since alias queries to AWS resources are free.&lt;/p&gt;
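&lt;p&gt;As a sketch, this is roughly the change batch you would hand to Route 53's change_resource_record_sets API (for example via boto3) to create an alias record. The domain, ALB DNS name, and hosted zone ID below are placeholders, not real resources. Note what's absent: no TTL and no literal IP addresses, because Route 53 tracks the ALB's IPs itself.&lt;/p&gt;

```python
# Sketch of a Route 53 change batch for an alias record. All names and IDs
# here are placeholders; in practice you'd pass this dict as ChangeBatch to
# boto3's route53.change_resource_record_sets(...).

def alias_upsert(domain, alb_dns, alb_zone_id):
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",                      # an alias to an ALB is an A record
                "AliasTarget": {
                    "HostedZoneId": alb_zone_id,  # the ALB's zone ID, not your own
                    "DNSName": alb_dns,
                    "EvaluateTargetHealth": True,
                },
                # No TTL and no ResourceRecords: Route 53 resolves the ALB's
                # current IPs internally, so the client needs only one lookup.
            },
        }]
    }

batch = alias_upsert(
    "app.example.com.",
    "my-alb-123456.us-east-1.elb.amazonaws.com.",
    "Z00000000000000000000",  # placeholder for the ALB's regional hosted zone ID
)
```

&lt;p&gt;A plain A record would need literal IPs and a TTL; the alias delegates both to Route 53, which is what makes it track the load balancer automatically.&lt;/p&gt;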

&lt;h2&gt;
  
  
  Application Load Balancer Architecture
&lt;/h2&gt;

&lt;h1&gt;
  
  
  Layer 7 Load Balancing
&lt;/h1&gt;

&lt;p&gt;The Application Load Balancer operates at Layer 7 (Application Layer) of the OSI model, meaning it can read and understand HTTP/HTTPS traffic. &lt;/p&gt;

&lt;p&gt;At Layer 7, the ALB can make routing decisions based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;HTTP Host header (e.g., route api.example.com differently from &lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HTTP path (e.g., route /api/* to API servers, /static/* to static file servers)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HTTP methods (GET, POST, etc.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Query parameters&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HTTP headers (custom headers, user-agent, etc.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Hostname patterns and regex matching&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Listeners
&lt;/h1&gt;

&lt;p&gt;Each ALB has one or more listeners that wait for incoming connections.&lt;/p&gt;

&lt;p&gt;A listener defines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Protocol: HTTP or HTTPS (the ALB only supports application protocols, not TCP/UDP)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Port: The port the ALB listens on (typically 80 for HTTP, 443 for HTTPS)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Certificate (for HTTPS): The SSL/TLS certificate from AWS Certificate Manager (ACM)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rules: Conditions that determine how traffic is routed&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When the ALB receives a connection on port 443, it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Completes the TLS handshake using the configured certificate&lt;/li&gt;
&lt;li&gt;Decrypts the incoming HTTPS Traffic&lt;/li&gt;
&lt;li&gt;Reads the HTTP headers to evaluate listener rules&lt;/li&gt;
&lt;li&gt;Routes the request based on matching rules&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  SSL/TLS Termination with AWS Certificate Manager (ACM)
&lt;/h1&gt;

&lt;p&gt;SSL termination means the ALB decrypts HTTPS traffic, reads the unencrypted HTTP headers to make routing decisions, and then sends traffic to the backend targets.&lt;/p&gt;

&lt;p&gt;AWS Certificate Manager (ACM) provides free SSL/TLS certificates with automatic renewal. The process to obtain a certificate is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Request a certificate for example.com and all subdomains.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ACM validates ownership either via email or via a DNS record (a CNAME that Route 53 can create for you).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once validated, attach the certificate to your ALB listener.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ACM renews the certificate automatically before expiration (no configuration needed).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After decrypting the HTTPS traffic, the ALB can forward the unencrypted HTTP to the backend, or re-encrypt it and send it to the backend for end-to-end encryption.&lt;/p&gt;

&lt;p&gt;For most applications, plaintext HTTP between ALB and backend is acceptable because both are within your VPC (network isolation), security groups restrict who can communicate with the backend, and the expensive crypto happens once (at the user-facing edge).&lt;/p&gt;

&lt;h1&gt;
  
  
  Listener Rules
&lt;/h1&gt;

&lt;p&gt;Listener rules determine which target group receives a request. Rules are evaluated in priority order (lowest number first), and the first matching rule wins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqgje8og9rgvt8l0wugz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqgje8og9rgvt8l0wugz.png" alt=" " width="555" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example Listener rule set:&lt;br&gt;
Rule 1 (Priority: 1):&lt;br&gt;
  Condition: Host header = "api.example.com"&lt;br&gt;
  Action: Forward to API Target Group&lt;/p&gt;

&lt;p&gt;Rule 2 (Priority: 2):&lt;br&gt;
  Condition: Path = "/health"&lt;br&gt;
  Action: Return fixed response (200 OK)&lt;/p&gt;

&lt;p&gt;Rule 3 (Priority: 3):&lt;br&gt;
  Condition: Path starts with "/uploads" AND Host = "example.com"&lt;br&gt;
  Action: Forward to Uploads Target Group&lt;/p&gt;

&lt;p&gt;Default Rule (Priority: 10000):&lt;br&gt;
  Action: Return 404 Not Found&lt;/p&gt;

&lt;p&gt;Listener rules are particularly useful when you want the ALB to route traffic to different environment target groups, such as production and development, based on host headers.&lt;/p&gt;

&lt;p&gt;Some caveats for Listener Rules are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Conditions use pattern matching. For example, path starts with "/api" matches /api/users, /api/products, /api/v1/health, etc.&lt;/li&gt;
&lt;li&gt;When a rule has multiple conditions, all of them must be true (AND logic). If you need OR logic, you have to create separate rules.&lt;/li&gt;
&lt;/ul&gt;
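&lt;p&gt;The priority ordering and AND semantics can be modeled in a few lines of Python. This is an illustrative re-implementation, not how the ALB is actually built; the rule data mirrors the example rule set shown earlier.&lt;/p&gt;

```python
# Sketch of ALB-style listener rule evaluation: rules are checked in priority
# order (lowest first), every condition in a rule must match (AND logic), and
# the first matching rule wins. The default rule has no conditions.

RULES = [
    {"priority": 1, "conditions": {"host": "api.example.com"},
     "action": "forward:api-target-group"},
    {"priority": 2, "conditions": {"path": "/health"},
     "action": "fixed-response:200"},
    {"priority": 3, "conditions": {"path_prefix": "/uploads", "host": "example.com"},
     "action": "forward:uploads-target-group"},
    {"priority": 10000, "conditions": {}, "action": "fixed-response:404"},  # default
]

def matches(cond, host, path):
    if "host" in cond and host != cond["host"]:
        return False
    if "path" in cond and path != cond["path"]:
        return False
    if "path_prefix" in cond and not path.startswith(cond["path_prefix"]):
        return False
    return True  # an empty condition set matches everything (the default rule)

def route(host, path):
    for rule in sorted(RULES, key=lambda r: r["priority"]):
        if matches(rule["conditions"], host, path):
            return rule["action"]
```

&lt;p&gt;Note how a request for /uploads/cat.png on a host other than example.com falls through rule 3 (AND logic) all the way to the 404 default, which is why OR conditions need separate rules.&lt;/p&gt;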

&lt;h2&gt;
  
  
  Target Groups and Traffic Forwarding
&lt;/h2&gt;

&lt;p&gt;A target group is the component that turns a routing decision into an actual backend call. Once a listener rule decides where a request should go, the target group defines how the load balancer delivers it: which port and protocol to use, how to determine whether a backend is healthy, and how to choose one healthy target among many at runtime. Instead of pointing traffic at fixed servers, the ALB forwards every request through a target group, which constantly tracks backend health and availability and dynamically routes traffic to EC2 instances, IPs, or containers as they scale, fail, or get replaced.&lt;/p&gt;

&lt;p&gt;Example target group configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: "production-api"&lt;/li&gt;
&lt;li&gt;Protocol: HTTP (or HTTPS for re-encryption)&lt;/li&gt;
&lt;li&gt;Port: 8080 (where the application listens inside containers)&lt;/li&gt;
&lt;li&gt;VPC: vpc-12345678&lt;/li&gt;
&lt;li&gt;Health Check:

&lt;ul&gt;
&lt;li&gt;Protocol: HTTP&lt;/li&gt;
&lt;li&gt;Path: /health&lt;/li&gt;
&lt;li&gt;Port: 8080&lt;/li&gt;
&lt;li&gt;Interval: 30 seconds&lt;/li&gt;
&lt;li&gt;Timeout: 5 seconds&lt;/li&gt;
&lt;li&gt;Healthy threshold: 2 successful checks&lt;/li&gt;
&lt;li&gt;Unhealthy threshold: 3 failed checks&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Stickiness: Enabled (AWSALB cookie, 24 hour duration)&lt;/li&gt;

&lt;li&gt;Targets:

&lt;ul&gt;
&lt;li&gt;i-0a1b2c3d4e5f6g7h8 (EC2 instance, port 8080)&lt;/li&gt;
&lt;li&gt;i-0x1y2z3a4b5c6d7e8 (EC2 instance, port 8080)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Target Groups are used for health checks as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Every 30 seconds (configurable), the ALB sends an HTTP GET request to each registered target.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The target must respond with a 200-299 HTTP status code (configurable).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Threshold Logic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If 2 consecutive health checks succeed (healthy threshold), mark as healthy.&lt;/li&gt;
&lt;li&gt;If 3 consecutive health checks fail (unhealthy threshold), mark as unhealthy.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Traffic Routing: The ALB only routes traffic to targets marked as healthy.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Graceful Degradation: If ALL targets in a target group become unhealthy, the ALB enters "fail-open" mode and routes traffic to all targets regardless (better than returning 503 Service Unavailable to users).&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
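&lt;p&gt;The threshold logic above can be sketched as a small state machine. This is only an illustration of the counting behavior described in the list; the default thresholds and the assumption that a freshly registered target starts out healthy are ours, for the sake of the example.&lt;/p&gt;

```python
# Sketch of the healthy/unhealthy threshold state machine: a target flips
# state only after a streak of consecutive passes or failures.

class TargetHealth:
    def __init__(self, healthy_threshold=2, unhealthy_threshold=3):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy = True          # assumption: target starts out healthy
        self.passes = 0
        self.failures = 0

    def record(self, check_passed):
        """Record one health-check result and return the current state."""
        if check_passed:
            self.passes += 1
            self.failures = 0        # streaks are consecutive: reset the other
            if self.passes >= self.healthy_threshold:
                self.healthy = True
        else:
            self.failures += 1
            self.passes = 0
            if self.failures >= self.unhealthy_threshold:
                self.healthy = False
        return self.healthy
```

&lt;p&gt;The consecutive-streak requirement is what keeps a single slow response or one lucky success from flapping a target in and out of rotation.&lt;/p&gt;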

&lt;p&gt;Your application must implement a "/health" endpoint that responds quickly (before the 5-second timeout), returns a 200 status code when healthy, performs meaningful health checks (database connectivity, cache availability, disk space, etc.), and returns 500+ status when unhealthy.&lt;/p&gt;

&lt;p&gt;After a request is mapped to a target group, the ALB chooses a destination from the set of currently healthy targets using a load-balancing strategy.&lt;/p&gt;

&lt;p&gt;By default the ALB uses a round-robin strategy, routing requests sequentially across all healthy targets. This is simple and works well for stateless applications.&lt;/p&gt;

&lt;p&gt;For workloads with uneven or long-running requests, the ALB can instead prefer the target with the fewest in-flight requests, reducing tail latency under load.&lt;/p&gt;

&lt;p&gt;In deployment scenarios like canary releases or blue-green rollouts, traffic can be split using weighted routing so only a controlled percentage of users reach a new version.&lt;/p&gt;

&lt;p&gt;Optionally, the ALB can enable sticky sessions, where a client is consistently routed to the same backend via an ALB-managed cookie; this can be useful for stateful or expensive-to-initialize applications, but it weakens fault tolerance and requires a clear strategy for handling backend failures.&lt;/p&gt;
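&lt;p&gt;As a sketch, round-robin selection over the healthy subset, with the fail-open behavior mentioned earlier, might look like this. The instance IDs are made up, and the real ALB runs this logic independently on each load-balancer node.&lt;/p&gt;

```python
# Round-robin selection over the currently healthy targets -- an illustration
# of the ALB's default strategy, including fail-open when nothing is healthy.
from itertools import count

class RoundRobin:
    def __init__(self, targets):
        self.targets = targets            # e.g. registered instance IDs
        self._counter = count()           # monotonically increasing pick index

    def pick(self, healthy):
        candidates = [t for t in self.targets if t in healthy]
        if not candidates:
            candidates = self.targets     # fail-open: all unhealthy, try everyone
        return candidates[next(self._counter) % len(candidates)]
```

&lt;p&gt;Unhealthy targets simply drop out of the candidate list, so traffic rotates over whatever is healthy at that moment without any reconfiguration.&lt;/p&gt;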

&lt;h1&gt;
  
  
  Container Port Mapping
&lt;/h1&gt;

&lt;p&gt;Docker containers listen on internal ports and are mapped to host ports:&lt;br&gt;
&lt;code&gt;Host:8080 → Container:8080&lt;/code&gt;&lt;br&gt;
ALB → EC2 (8080) → Docker bridge → Container → App process&lt;br&gt;
The target group port must match the exposed container port.&lt;/p&gt;

&lt;h2&gt;
  
  
  EC2 and Docker: Where Your Application Actually Runs
&lt;/h2&gt;

&lt;h1&gt;
  
  
  EC2 as the compute layer
&lt;/h1&gt;

&lt;p&gt;The EC2 instance is the host machine where your Docker containers run. From the ALB's perspective, the EC2 instance is identified by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Private IP Address: The IP within the VPC (e.g., 10.0.1.100)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Port: The port the container is listening on (e.g., 8080)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The ALB sends traffic directly to this private IP, assuming network connectivity exists. This requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;VPC configuration: The ALB and EC2 instance must be in the same VPC or connected via VPC peering/Transit Gateway.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Subnet routing: The ALB can route to targets in any subnet within the VPC.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security groups: The EC2 instance's security group must allow inbound traffic from the ALB's security group.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Referencing security groups (rather than IP addresses) ensures the routing keeps working even if the ALB's IP addresses change, because security groups dynamically resolve to all ENIs in that group.&lt;/p&gt;

&lt;h1&gt;
  
  
  Docker Containers Receiving Traffic
&lt;/h1&gt;

&lt;p&gt;Inside the EC2 instance, Docker attaches each container to a virtual network bridge.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodflm7owshg00so850vh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodflm7owshg00so850vh.png" alt=" " width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Full End-to-End Request Lifecycle
&lt;/h1&gt;

&lt;p&gt;Let's trace a complete HTTPS request from a user in London to your application in us-east-1:&lt;/p&gt;

&lt;p&gt;A user in London requests &lt;a href="https://app.example.com" rel="noopener noreferrer"&gt;https://app.example.com&lt;/a&gt;. The browser resolves the domain via a recursive DNS resolver, which reaches Route 53 and receives an alias answer for the Application Load Balancer. The ALB hostname resolves to multiple IP addresses, cached for a short TTL (60 seconds).&lt;/p&gt;

&lt;p&gt;The browser connects to one IP on port 443, completes TCP and TLS handshakes using SNI app.example.com, and establishes an encrypted connection.&lt;/p&gt;

&lt;p&gt;The browser sends GET /api/users?id=123. The ALB terminates TLS, evaluates listener rules, and forwards the request to the API target group.&lt;/p&gt;

&lt;p&gt;A healthy EC2 target is selected using round robin, and the request is forwarded, with X-Forwarded-For and X-Forwarded-Proto: https headers, to a Docker container running the Node.js application. The application validates the session, processes the request, and returns a JSON response.&lt;/p&gt;

&lt;p&gt;The ALB sends the response back to the browser over the established TLS connection. Subsequent requests include the AWSALB cookie, ensuring traffic is routed to the same backend instance for up to 24 hours.&lt;/p&gt;

&lt;p&gt;To the user, this appears as a single HTTPS request, while internally it involves DNS resolution, secure transport, rule-based routing, load balancing, container execution, and session persistence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The architecture connecting Route 53 to ALB to EC2/Docker represents decades of distributed systems engineering. Each component solves a specific problem:&lt;/p&gt;

&lt;p&gt;Route 53 provides dynamic DNS resolution that automatically tracks your load balancer's changing IP addresses.&lt;/p&gt;

&lt;p&gt;ALB provides Layer 7 routing, SSL termination, and health-based traffic steering across multiple backends.&lt;/p&gt;

&lt;p&gt;EC2 and Docker provide isolated, scalable compute with port mapping that connects load-balanced traffic to your application.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>aws</category>
      <category>network</category>
    </item>
    <item>
      <title>What Exactly Is Docker and Why It's Necessary</title>
      <dc:creator>P. Acharya</dc:creator>
      <pubDate>Sat, 06 Dec 2025 09:26:53 +0000</pubDate>
      <link>https://forem.com/p_acharya_cb32943b1cb6a0/what-exactly-is-docker-and-why-its-necessary-33p9</link>
      <guid>https://forem.com/p_acharya_cb32943b1cb6a0/what-exactly-is-docker-and-why-its-necessary-33p9</guid>
      <description>&lt;h2&gt;
  
  
  Why Docker Matters
&lt;/h2&gt;

&lt;p&gt;As rookie developers we have all been there: "It works on my system, so why did it not work on the server?" We don't know how to send the entire application code along with the installed dependencies (services) to our colleagues so that they can test it without "dependency hell." That is where Docker and containers come in.&lt;/p&gt;

&lt;p&gt;Before containers, when a team of developers needed to test and run the application code from other developers in the team, they would need to install the required services and libraries the application code uses, with their correct versions. For example, if your app is a Java app that uses PostgreSQL as a database, Redis for caching, and RabbitMQ for messaging, the developers would need to install the precise versions of each of these services on their local system. Another problem is that the installation process is different on every operating system, involving multiple steps where many things can go wrong. This may not sound like a big problem, but if your application uses ten services, you would need to install all ten on your local system with the correct versions, which can lead to unexpected errors.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Docker Affected the Deployment Process
&lt;/h2&gt;

&lt;p&gt;The development team would create an application artifact or package along with instructions on how to set up the application on your server. Along with this were the other services that the application needed, with instructions on how to set up those services, which would be handed to the operations team. The problem with this approach is that there might be multiple services that depend on the same library but each requires different versions to work, leading to dependency conflicts. Now with Docker, everything that the app needs is packaged inside the Docker artifact and sent to the operations team. This process does not require any configuration in the server itself.&lt;/p&gt;

&lt;p&gt;Docker containers improved deployment as well as the development process because the container, being portable, can be easily shared with your DevOps and development team, streamlining the process. It solves dependency hell by packaging the relevant dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Container?
&lt;/h2&gt;

&lt;p&gt;A container is a way to package services with all their required dependencies in a single “box.” All the required configuration files, start scripts, and other dependencies for the service (for example PostgreSQL) are packaged in a container and installed with a single Docker command. Now, instead of downloading binaries for ten different services and going through a tedious installation process for each, you can just run ten Docker commands to start the ten services that your application depends on.&lt;/p&gt;

&lt;p&gt;Docker containers enable you to spend more of your time and energy on development rather than being stuck installing and fixing dependencies of multiple services for your application to run.&lt;/p&gt;

&lt;h2&gt;
  
  
  Virtual Machines vs. Docker
&lt;/h2&gt;

&lt;p&gt;Docker is a virtualization tool. You might be wondering how Docker enables us to run the services in a container on any operating system with a single command. To understand this, let us dissect the OS. The OS has two main layers: the OS kernel and the OS applications layer. The kernel interacts with the hardware and applications layer to enable communication between them.&lt;/p&gt;

&lt;p&gt;Docker virtualizes the applications layer. When you run a Docker container, it contains the OS applications layer and uses the OS kernel of the host machine to interact with the hardware.&lt;/p&gt;

&lt;p&gt;The virtual machine, on the other hand, has the OS applications layer and its own kernel. You can save a lot of disk space when using Docker.&lt;/p&gt;

&lt;p&gt;The size of virtual machines is much larger than the size of Docker containers (images). Docker can start within seconds, while virtual machines can take minutes to start because they need to boot up their own kernel.&lt;/p&gt;

&lt;p&gt;You can run a virtual machine of any OS on any other operating system, but you cannot do that in Docker, at least not directly. Let us say you have a Windows-based host machine and you want to run a Linux-based Docker container. The problem is that the virtual Linux applications layer will not be compatible with the Windows kernel. But there is a workaround for this. You can download Docker Desktop, which uses a hypervisor layer with a lightweight Linux distro providing the Linux kernel, letting you run Docker containers on Windows and Mac hosts easily.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Images vs. Docker Containers
&lt;/h2&gt;

&lt;p&gt;A Docker image can be thought of as an executable application artifact that not only includes the app source code but also the complete environment configuration. It includes the OS applications layer, any services the app needs, and the main app source code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8majuslefk9xudxd5jtg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8majuslefk9xudxd5jtg.png" alt=" " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also add environment variables in this Docker image (though not recommended for security) and create directories inside the Docker image.&lt;/p&gt;

&lt;p&gt;A container, on the other hand, is nothing but a running instance of the image. A container is basically when you are actually running the application source code inside the Docker image. The advantage of this is that we can create multiple Docker containers, running instances, from a single Docker image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Registry
&lt;/h2&gt;

&lt;p&gt;Now you might wonder where we get the images for a service in the first place. That is where registries come into play: storage specifically for Docker-image artifacts. Registries contain official images maintained by the companies behind the software. Docker hosts one of the biggest registries, called Docker Hub, where you can find many images that companies and individual developers have created and shared.&lt;/p&gt;

&lt;h2&gt;
  
  
  Image Versioning
&lt;/h2&gt;

&lt;p&gt;Technology changes, and as new features are added to a service, its Docker image changes too. Likewise, as you add new features to your application code, you can publish a new version of its image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F744trcts666vf4jsc2bx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F744trcts666vf4jsc2bx.png" alt=" " width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can track and name the different versions of the Docker images using tags.&lt;/p&gt;

&lt;h2&gt;
  
  
  Main Docker Commands
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;docker pull {image name}:{tag}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This command is used to pull an image from the registry into your local machine. Docker uses DockerHub as the default image registry to pull Docker images from.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run {image name}:{tag}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Creates a container for the given image and starts it, pulling the image first if it is not available on the host machine. Running this command will block your terminal while the container runs in the foreground. You can use the “-d” or “--detach” flag to run the container in the background, and the “--name” flag to give the container a specific name instead of the default one Docker generates.&lt;/p&gt;

&lt;p&gt;Even when running the container in detached mode, you may want to see the container logs. For that you can run this command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker logs {container ID}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Using this command, you can view the logs of the service running inside the Docker container.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker ps&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This command is used to list the running containers. The “-a” flag is used with this command to see all the containers, even the ones that were stopped.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker stop {container name or container ID}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This command stops the running container.&lt;/p&gt;

&lt;h2&gt;
  
  
  Port Binding
&lt;/h2&gt;

&lt;p&gt;Docker solves the problem of running different versions of the same application with the help of port binding. Applications inside containers run in an isolated Docker network. We need to expose the container port to the host machine port so the application inside the container can be accessed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00wdzyt99l24t4mndfcj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F00wdzyt99l24t4mndfcj.png" alt=" " width="800" height="1069"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can bind the container port to the host port at the time of running a Docker image:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -d -p {host port}:{container port} {image name}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Two containers that each listen on the same internal port can be bound to different host ports. For example, container 1's port 80 can be bound to host port 80, while container 2's port 80 is bound to host port 3000.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;At the end of the day, Docker just makes life easier. No more dependency nightmares, no more “but it worked on my system,” and no more hours wasted setting up services on every machine. You package everything once, run it anywhere, and focus on actually building stuff. That is the real power of Docker and why every developer should at least know the basics. Happy containerizing!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>docker</category>
      <category>containers</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
