<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Deborah Emeni</title>
    <description>The latest articles on Forem by Deborah Emeni (@deborahemeni1).</description>
    <link>https://forem.com/deborahemeni1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F209078%2Fe011bd79-3876-4031-bec4-6761c9fdb154.jpg</url>
      <title>Forem: Deborah Emeni</title>
      <link>https://forem.com/deborahemeni1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/deborahemeni1"/>
    <language>en</language>
    <item>
      <title>How to run a multi-container app with Docker Compose</title>
      <dc:creator>Deborah Emeni</dc:creator>
      <pubDate>Sun, 01 Feb 2026 09:16:49 +0000</pubDate>
      <link>https://forem.com/deborahemeni1/how-to-run-a-multi-container-app-with-docker-compose-25i7</link>
      <guid>https://forem.com/deborahemeni1/how-to-run-a-multi-container-app-with-docker-compose-25i7</guid>
      <description>&lt;h2&gt;
  
  
  What you'll learn
&lt;/h2&gt;

&lt;p&gt;By the end of this section, you'll:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Understand what Docker Compose is and why it’s useful for managing multi-container applications
&lt;/li&gt;
&lt;li&gt;Learn how a &lt;code&gt;docker-compose.yml&lt;/code&gt; file defines and runs an entire application
&lt;/li&gt;
&lt;li&gt;Run multiple services together as a single system instead of starting containers manually
&lt;/li&gt;
&lt;li&gt;Configure environment variables inside Docker Compose
&lt;/li&gt;
&lt;li&gt;Enable communication between services using Docker’s built-in networking
&lt;/li&gt;
&lt;li&gt;Persist database data using volumes
&lt;/li&gt;
&lt;li&gt;Deploy a Node.js API with MongoDB using one command
&lt;/li&gt;
&lt;li&gt;Start, stop, and manage your whole application using Docker Compose commands&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Why managing containers manually doesn’t scale
&lt;/h2&gt;

&lt;p&gt;Before Docker Compose, the usual way to run a multi-container setup was to start each container separately, then manually wire everything together.&lt;/p&gt;

&lt;p&gt;If you’ve been following along in this series, you already know the building blocks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dockerfiles&lt;/strong&gt; package your application into an image
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Networking&lt;/strong&gt; allows containers to communicate
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Volumes&lt;/strong&gt; preserve your data between restarts
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Individually, these are simple.&lt;/p&gt;

&lt;p&gt;But once your project has more than one container, the setup quickly turns into a long list of commands you need to remember and repeat every time you run the app.&lt;/p&gt;

&lt;p&gt;Take a look at what the manual approach can feel like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker network create app-net
docker run mongo
docker run node-api
docker run &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MONGO_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This works, but there are some clear problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;too many commands to manage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;easy to forget flags or the correct order&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;harder for teammates to follow&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;not easily reproducible across machines&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At some point, it stops feeling like “running an application” and starts feeling like manually configuring containers to work together.&lt;/p&gt;

&lt;p&gt;There has to be a simpler way to treat these containers as one application.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Docker Compose?
&lt;/h2&gt;

&lt;p&gt;Docker Compose is a tool that lets you define and run multiple containers as a single application.&lt;/p&gt;

&lt;p&gt;Instead of starting containers individually and remembering a long list of commands, you describe everything in one place, then start the entire system together.&lt;/p&gt;

&lt;p&gt;In practice, that means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;you define your services in one file
&lt;/li&gt;
&lt;li&gt;Docker sets up the network for you
&lt;/li&gt;
&lt;li&gt;Docker creates volumes for you
&lt;/li&gt;
&lt;li&gt;and you start everything with a single command
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s the mental model I want you to keep in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a Dockerfile describes one container
&lt;/li&gt;
&lt;li&gt;a &lt;code&gt;docker-compose.yml&lt;/code&gt; file describes your whole application
&lt;/li&gt;
&lt;/ul&gt;
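
&lt;p&gt;To make that concrete, a two-service application is just two entries under &lt;code&gt;services&lt;/code&gt;. Here's an illustrative skeleton (the service names here are placeholders, not our final file):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  api:      # one container, built from its own Dockerfile
    build: .
  db:       # another container, started from a public image
    image: mongo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
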

&lt;p&gt;So rather than managing containers separately, you treat them as one system.&lt;/p&gt;

&lt;p&gt;You’ll see terms like &lt;strong&gt;services&lt;/strong&gt;, &lt;strong&gt;networks&lt;/strong&gt;, and &lt;strong&gt;volumes&lt;/strong&gt; as we go. Don’t worry about them yet. We’ll learn them naturally in the hands-on sections.&lt;/p&gt;

&lt;p&gt;For now, just remember: Docker Compose helps you run multiple containers together like one application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the docker-compose.yml structure
&lt;/h2&gt;

&lt;p&gt;When most people see a YAML file for the first time, it can feel a little intimidating, with all the indentation and keys.&lt;/p&gt;

&lt;p&gt;It usually looks more complicated than it really is.&lt;/p&gt;

&lt;p&gt;So instead of pasting a huge file and trying to explain everything at once, let’s build it gradually.&lt;/p&gt;

&lt;p&gt;We’ll start small and add each part step by step.&lt;/p&gt;

&lt;p&gt;Let’s start with the smallest possible Compose file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it.&lt;/p&gt;

&lt;p&gt;Let’s break this down together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;services&lt;/code&gt; → the containers we want Docker to run&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;app&lt;/code&gt; → the name of our service (you can choose this)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;image&lt;/code&gt; → the image Docker should use to create the container&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So with just these few lines, we’ve already told Docker:&lt;/p&gt;

&lt;p&gt;“Start one container called app using the Node image.”&lt;/p&gt;

&lt;p&gt;From here, we’ll keep extending the same file.&lt;/p&gt;

&lt;p&gt;As our application becomes more complex, we’ll add things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ports&lt;/code&gt; to expose the app to our browser&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;environment&lt;/code&gt; for configuration values&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;volumes&lt;/code&gt; for persistent data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;depends_on&lt;/code&gt; to control startup order&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
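
&lt;p&gt;As a quick preview of where those keys will sit, a fleshed-out service might eventually look something like this (every value below is a placeholder, not our final setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  app:
    image: node
    ports:
      - "3000:3000"            # expose the app to the browser
    environment:
      SOME_SETTING: value      # configuration values
    volumes:
      - some_volume:/data      # persistent data
    depends_on:
      - some_other_service     # startup order
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
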

&lt;p&gt;The key idea is simple: we don’t write everything at once.&lt;/p&gt;

&lt;p&gt;We add one piece, understand it, test it, then move on.&lt;/p&gt;

&lt;p&gt;That way, the file never feels overwhelming, and you always know exactly what each line is doing.&lt;/p&gt;

&lt;p&gt;Next, we’ll put all of this into practice with a hands-on project and apply these concepts in an actual setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the demo project
&lt;/h2&gt;

&lt;p&gt;Before we start writing our &lt;code&gt;docker-compose.yml&lt;/code&gt; file, we need a small project to run. The goal is not to build a feature-rich API. We just need something simple that can connect to a database so we can focus on learning Docker Compose.&lt;/p&gt;

&lt;p&gt;If you prefer to skip the setup, you can clone the complete project here: &lt;a href="https://github.com/d-emeni/node-api-compose-demo" rel="noopener noreferrer"&gt;https://github.com/d-emeni/node-api-compose-demo&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/d-emeni/node-api-compose-demo.git
&lt;span class="nb"&gt;cd &lt;/span&gt;node-api-compose-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To follow along, you should already have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker installed and working on your machine (follow this &lt;a href="https://dev.to/deborahemeni1/getting-started-with-docker-how-to-install-docker-and-set-it-up-correctly-4knb"&gt;guide&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;a basic understanding of Docker images and containers (read up &lt;a href="https://dev.to/deborahemeni1/understanding-virtualization-containers-in-the-simplest-way-18m3"&gt;here&lt;/a&gt; if you're new to containers)&lt;/li&gt;
&lt;li&gt;familiarity with Dockerfiles (you have already seen this earlier in the &lt;a href="https://dev.to/deborahemeni1/how-to-dockerize-a-nodejs-application-with-a-custom-dockerfile-7ji"&gt;series&lt;/a&gt;)
&lt;/li&gt;
&lt;li&gt;basic Node.js knowledge (enough to run a small API)
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have gone through the earlier posts in this series, you are in a good place to continue.&lt;/p&gt;

&lt;h3&gt;
  
  
  What we’re building
&lt;/h3&gt;

&lt;p&gt;We’ll build a small Node.js API that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;connects to MongoDB
&lt;/li&gt;
&lt;li&gt;saves data
&lt;/li&gt;
&lt;li&gt;reads it back
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That gives us a realistic setup for Docker Compose without adding unnecessary complexity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project structure
&lt;/h3&gt;

&lt;p&gt;This is the structure we’ll work with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node-api/
 index.js
 package.json
 package-lock.json
 Dockerfile
 .dockerignore
 .gitignore
 docker-compose.yml
 README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don’t worry if some of these files look unfamiliar. We’ll walk through the important ones as we go.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Creating our &lt;code&gt;docker-compose.yml&lt;/code&gt; file
&lt;/h2&gt;

&lt;p&gt;Now that we have a working Node.js API, the next step is to run the API and MongoDB as a single application using Docker Compose.&lt;/p&gt;

&lt;p&gt;This is where Compose starts to feel useful.&lt;/p&gt;

&lt;p&gt;Instead of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;starting MongoDB manually&lt;/li&gt;
&lt;li&gt;starting the API manually&lt;/li&gt;
&lt;li&gt;remembering flags like ports, environment variables, and container names&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We’re going to describe the entire setup in one file, then run everything together.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the file
&lt;/h3&gt;

&lt;p&gt;In the root of your project (the same level as your &lt;code&gt;Dockerfile&lt;/code&gt;), create a file named:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the next section, we’ll start with the smallest Compose configuration and build it up step by step.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Start with the smallest working Compose file
&lt;/h2&gt;

&lt;p&gt;Let’s start small and build up from there.&lt;/p&gt;

&lt;p&gt;Add this to your &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000:3000"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break down what this means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;services&lt;/code&gt; is where we define the containers our application needs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;app&lt;/code&gt; is the name of our first service (this is your Node.js API)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;build: .&lt;/code&gt; tells Docker Compose to build an image using the Dockerfile in the current folder&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ports: "3000:3000"&lt;/code&gt; maps port 3000 on your machine to port 3000 in the container (the format is &lt;code&gt;host:container&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Run it
&lt;/h3&gt;

&lt;p&gt;From the project root, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything is set up correctly, Docker will build the image and start your API container.&lt;/p&gt;

&lt;p&gt;You should see Docker building the image, creating the container, and then streaming the Node.js server's startup logs.&lt;/p&gt;

&lt;p&gt;Don’t worry if you see a MongoDB connection error at the end. That’s expected for now, because we haven’t added MongoDB yet.&lt;/p&gt;

&lt;p&gt;Your output should look similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8zcbkwqtd8fbxpcx5ez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8zcbkwqtd8fbxpcx5ez.png" alt="docker-compose-app-only-startup" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Add MongoDB as a service
&lt;/h2&gt;

&lt;p&gt;Right now, our Compose file only starts the API container. That’s why the app fails to connect to MongoDB.&lt;/p&gt;

&lt;p&gt;The fix is simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;add a MongoDB service&lt;/li&gt;
&lt;li&gt;point our Node.js app to that service using an environment variable&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Update your &lt;code&gt;docker-compose.yml&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Replace the contents of your &lt;code&gt;docker-compose.yml&lt;/code&gt; with this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000:3000"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;MONGO_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb://mongo:27017&lt;/span&gt;
      &lt;span class="na"&gt;DB_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;compose_demo&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mongo&lt;/span&gt;

  &lt;span class="na"&gt;mongo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo:latest&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;27017:27017"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What changed?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;mongo&lt;/code&gt; is a new service running the official MongoDB image&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;MONGO_URL&lt;/code&gt; is now &lt;code&gt;mongodb://mongo:27017&lt;/code&gt; (the service name becomes the hostname)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;depends_on&lt;/code&gt; tells Docker Compose to start MongoDB before starting the API (note that it only controls start order; it doesn't wait for MongoDB to be ready to accept connections)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Run it again
&lt;/h3&gt;

&lt;p&gt;If your previous Compose run is still active, stop it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Ctrl + C
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then start everything again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run the command, Docker Compose will go through two phases.&lt;/p&gt;

&lt;p&gt;First, it builds and prepares everything for you.&lt;/p&gt;

&lt;p&gt;You’ll see Docker pulling the MongoDB image (if you don’t already have it), building your Node.js image from the Dockerfile, and creating the containers.&lt;/p&gt;

&lt;p&gt;It should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzfo3cjjhuj6a08fflv8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzfo3cjjhuj6a08fflv8b.png" alt="docker-compose-build-phase" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the images are built, Docker starts both containers and begins streaming their logs.&lt;/p&gt;

&lt;p&gt;You should see MongoDB starting up first, followed by your API connecting to it.&lt;/p&gt;

&lt;p&gt;If everything is working, look for a message like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Connected to MongoDB at mongodb://mongo:27017&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Your output should look similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngy6fg1whqofzrkg1173.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngy6fg1whqofzrkg1173.png" alt="docker-compose-running-phase" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: A quick recap of what just happened
&lt;/h2&gt;

&lt;p&gt;Before we move on, let’s pause for a moment and connect the dots.&lt;/p&gt;

&lt;p&gt;With one &lt;code&gt;docker-compose.yml&lt;/code&gt; file, we:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;built our Node.js image
&lt;/li&gt;
&lt;li&gt;started a MongoDB container
&lt;/li&gt;
&lt;li&gt;created a shared network automatically
&lt;/li&gt;
&lt;li&gt;connected both services together
&lt;/li&gt;
&lt;li&gt;and launched the entire application with a single command
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of running multiple &lt;code&gt;docker run&lt;/code&gt; commands and configuring networking manually, you defined the system once and Docker Compose handled the setup for you.&lt;/p&gt;

&lt;p&gt;That’s really the core idea behind Docker Compose.&lt;/p&gt;

&lt;p&gt;You describe your application in one place, and Docker takes care of creating and running the containers.&lt;/p&gt;

&lt;p&gt;Now that everything is up and running, let’s test the API and confirm our application behaves as expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Testing the application
&lt;/h2&gt;

&lt;p&gt;At this point, both containers are running:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the Node.js API
&lt;/li&gt;
&lt;li&gt;the MongoDB database
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They’re connected through Docker Compose, and your app should already be talking to MongoDB in the background.&lt;/p&gt;

&lt;p&gt;Now let’s quickly verify that everything works as expected.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Check the health endpoint
&lt;/h3&gt;

&lt;p&gt;Open a new terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:3000/health
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ok"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fao2kuvg53hyv925qwzql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fao2kuvg53hyv925qwzql.png" alt="status ok" width="800" height="47"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This confirms that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the API is running&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the server is reachable&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the container started correctly&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2: Save some data
&lt;/h3&gt;

&lt;p&gt;Now let’s store something in the database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:3000/notes &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"text":"hello from docker compose"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should get a response similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Note created"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"note"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"hello from docker compose"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wzmpqpd7xgnfg3kbtf1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wzmpqpd7xgnfg3kbtf1.png" alt=" " width="800" height="71"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This tells us the API successfully wrote data to MongoDB.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Read the data back
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:3000/notes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see your saved note returned in the response.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79kw4ufurfgr6py88w0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79kw4ufurfgr6py88w0m.png" alt="read-data-docker-compose" width="800" height="39"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you can create and read notes successfully, your containers are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;running&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;connected&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and communicating correctly&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this point, you officially have a multi-container application running with Docker Compose.&lt;/p&gt;

&lt;p&gt;In the next section, we’ll make this setup more practical by adding persistent storage so your database data survives container restarts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Adding persistent storage with volumes
&lt;/h2&gt;

&lt;p&gt;Right now, everything works.&lt;/p&gt;

&lt;p&gt;You can create notes, read them back, and the API communicates with MongoDB correctly.&lt;/p&gt;

&lt;p&gt;But there’s one problem.&lt;/p&gt;

&lt;p&gt;If you stop and remove the containers, all your database data disappears.&lt;/p&gt;

&lt;p&gt;That’s because containers are ephemeral by default. When a container is deleted, its filesystem is deleted too.&lt;/p&gt;

&lt;p&gt;Let’s prove that quickly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stop everything
&lt;/h3&gt;

&lt;p&gt;Press:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Ctrl + C
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then remove the containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xw92dy0908vdl5iueh8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xw92dy0908vdl5iueh8.png" alt="remove-containers-docker-compose" width="800" height="67"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now start everything again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5u3s7oa5gx3j38509c1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5u3s7oa5gx3j38509c1.png" alt="start-container-docker-compose" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you try:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:3000/notes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll notice your notes are gone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7yeuzauoi5xcx1zerop.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7yeuzauoi5xcx1zerop.png" alt="notes-gone-docker-compose" width="800" height="56"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The database started fresh.&lt;/p&gt;

&lt;p&gt;This is not what we want in our applications.&lt;/p&gt;

&lt;p&gt;We need our data to survive container restarts.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are volumes?
&lt;/h3&gt;

&lt;p&gt;Docker volumes are persistent storage managed by Docker.&lt;/p&gt;

&lt;p&gt;Think of them as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;storage outside the container&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;that containers can mount and reuse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;even after they are stopped or deleted&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So instead of storing MongoDB data inside the container, we store it in a volume.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you need more practical knowledge about volumes, we covered them earlier in this series: "&lt;a href="https://dev.to/deborahemeni1/run-a-mysql-container-with-persistent-storage-using-docker-volumes-49ma"&gt;Run a MySQL container with persistent storage using Docker volumes&lt;/a&gt;"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Update your &lt;code&gt;docker-compose.yml&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Add a volume to the MongoDB service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000:3000"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;MONGO_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb://mongo:27017&lt;/span&gt;
      &lt;span class="na"&gt;DB_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;compose_demo&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mongo&lt;/span&gt;

  &lt;span class="na"&gt;mongo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo:latest&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;27017:27017"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mongo_data:/data/db&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mongo_data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What this does&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mongo_data:/data/db&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;mongo_data&lt;/code&gt; → Docker volume&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;/data/db&lt;/code&gt; → where MongoDB stores its files inside the container&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So MongoDB now saves data to the volume instead of the container filesystem.&lt;/p&gt;

&lt;p&gt;Even if the container is removed, the volume remains.&lt;/p&gt;

&lt;h3&gt;
  
  
  Test it
&lt;/h3&gt;

&lt;p&gt;Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker will rebuild the containers and create a new volume for MongoDB. Watch for a line that says the volume was created. Your output should look similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnu93foowpstn8a5qmv91.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnu93foowpstn8a5qmv91.png" alt=" " width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a note again to test persistence (so we have some data to test with):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:3000/notes &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"text":"hello again after restart"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zvsiqkubijzhx5lek2k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zvsiqkubijzhx5lek2k.png" alt="note-created-docker-compose" width="800" height="87"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then stop everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzt18qx8mwqa9g06xiyr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzt18qx8mwqa9g06xiyr.png" alt="stop-container-again-docker-compose" width="800" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start it again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0o332ps3xywhut1kuu3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0o332ps3xywhut1kuu3.png" alt="start-container-again-docker-compose" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now check:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:3000/notes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your data should still be there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc84bh12bf84sqtzpki9j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc84bh12bf84sqtzpki9j.png" alt="create-notes-again-docker-compose" width="800" height="80"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s persistence working.&lt;/p&gt;

&lt;p&gt;In the next section, we’ll tidy the configuration further by moving environment variables into a &lt;code&gt;.env&lt;/code&gt; file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 7: Managing configuration with a &lt;code&gt;.env&lt;/code&gt; file
&lt;/h2&gt;

&lt;p&gt;So far, everything works.&lt;/p&gt;

&lt;p&gt;But our &lt;code&gt;docker-compose.yml&lt;/code&gt; is starting to look a little noisy.&lt;/p&gt;

&lt;p&gt;Right now we have configuration values hardcoded directly inside the file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;database URL
&lt;/li&gt;
&lt;li&gt;database name
&lt;/li&gt;
&lt;li&gt;ports
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This works for small demos, but in your actual projects it can quickly become chaotic.&lt;/p&gt;

&lt;p&gt;As your app becomes more complex, you don’t want to edit the Compose file every time you change:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a port&lt;/li&gt;
&lt;li&gt;an environment variable&lt;/li&gt;
&lt;li&gt;or a setting for a different environment (dev, staging, production)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead, we separate &lt;strong&gt;configuration from infrastructure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That’s where a &lt;code&gt;.env&lt;/code&gt; file helps.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is a &lt;code&gt;.env&lt;/code&gt; file?
&lt;/h3&gt;

&lt;p&gt;A &lt;code&gt;.env&lt;/code&gt; file stores environment variables in one place.&lt;/p&gt;

&lt;p&gt;Docker Compose automatically reads this file and replaces variables inside &lt;code&gt;docker-compose.yml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So instead of hardcoding values, we reference them.&lt;/p&gt;

&lt;p&gt;This keeps your setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cleaner&lt;/li&gt;
&lt;li&gt;easier to change&lt;/li&gt;
&lt;li&gt;and more portable&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Create a &lt;code&gt;.env&lt;/code&gt; file
&lt;/h3&gt;

&lt;p&gt;In your project root, create:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;touch&lt;/span&gt; .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PORT=3000
MONGO_URL=mongodb://mongo:27017
DB_NAME=compose_demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
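&lt;p&gt;On the application side, these are just plain environment variables. Here’s a minimal sketch of how a Node.js app might read them from &lt;code&gt;process.env&lt;/code&gt; (the variable names match our setup, but the fallback defaults are illustrative assumptions, not taken from the project):&lt;/p&gt;

```javascript
// config.js - read settings from the environment (illustrative sketch).
// MONGO_URL and DB_NAME are injected by Compose's `environment` block;
// inside the container the app still listens on 3000 (PORT only changes
// the host-side mapping), so the fallbacks matter when running without Docker.
function getConfig(env = process.env) {
  return {
    port: Number(env.PORT || 3000),
    mongoUrl: env.MONGO_URL || "mongodb://localhost:27017",
    dbName: env.DB_NAME || "compose_demo",
  };
}

module.exports = { getConfig };
```

&lt;p&gt;Inside the container, Compose supplies &lt;code&gt;MONGO_URL=mongodb://mongo:27017&lt;/code&gt;, so the localhost fallback is only used when the app runs outside Docker.&lt;/p&gt;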



&lt;h3&gt;
  
  
  Update &lt;code&gt;docker-compose.yml&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Now replace the hardcoded values with variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${PORT}:3000"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;MONGO_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${MONGO_URL}&lt;/span&gt;
      &lt;span class="na"&gt;DB_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${DB_NAME}&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mongo&lt;/span&gt;

  &lt;span class="na"&gt;mongo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo:latest&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mongo_data:/data/db&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mongo_data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What changed?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;MONGO_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb://mongo:27017&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We now use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;MONGO_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${MONGO_URL}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker Compose reads the value from &lt;code&gt;.env&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So if you ever want to change something, you only edit one file.&lt;/p&gt;

&lt;p&gt;No touching the Compose config.&lt;/p&gt;

&lt;h3&gt;
  
  
  Run it again
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This time, Docker doesn’t rebuild everything from scratch: it reuses the cached image layers, recreates the containers, and attaches the existing volume, so startup is much faster. Your output should look similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faw6k7toblhcvts1bxsax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faw6k7toblhcvts1bxsax.png" alt="run-docker-compose-again" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Everything should behave exactly the same.&lt;/p&gt;

&lt;p&gt;The difference is that your configuration is now cleaner and easier to manage, and you can change values without editing the &lt;code&gt;docker-compose.yml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;This becomes more valuable as your application becomes more complex or when you deploy to different environments.&lt;/p&gt;

&lt;p&gt;In the next section, we’ll look at a few everyday Docker Compose commands that make managing multi-container apps much easier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Everyday Docker Compose commands
&lt;/h2&gt;

&lt;p&gt;Now that everything is running, let’s look at a few commands you’ll use regularly when working with Docker Compose.&lt;/p&gt;

&lt;p&gt;These make it much easier to start, stop, rebuild, and debug your application during development.&lt;/p&gt;

&lt;h3&gt;
  
  
  Start all services
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Builds (if needed) and starts every service defined in &lt;code&gt;docker-compose.yml&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Start in the background (detached mode)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Runs containers in the background so your terminal stays free.&lt;/p&gt;

&lt;p&gt;This is how you’ll usually run your app during development.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rebuild after code changes
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Forces Docker to rebuild images before starting containers.&lt;/p&gt;

&lt;p&gt;Useful when you change your Dockerfile or dependencies.&lt;/p&gt;

&lt;h3&gt;
  
  
  View logs
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See logs from all services.&lt;/p&gt;

&lt;p&gt;Follow logs live:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose logs &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Very helpful for debugging.&lt;/p&gt;

&lt;h3&gt;
  
  
  Check running containers
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Shows which services are running and their ports.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stop everything
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Stops and removes containers and the network.&lt;/p&gt;

&lt;p&gt;Your data stays safe because it’s stored in volumes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reset everything (including database data)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose down &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Removes containers and volumes.&lt;/p&gt;

&lt;p&gt;This wipes the database and gives you a completely fresh start.&lt;/p&gt;

&lt;p&gt;Use this when testing or troubleshooting.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;At this point, you should feel comfortable managing your entire multi-container app with just a few simple commands.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Wrap up
&lt;/h2&gt;

&lt;p&gt;You started with a single container and gradually built up to a complete multi-container setup.&lt;/p&gt;

&lt;p&gt;Along the way, you learned how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;run multiple services with Docker Compose&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;connect containers using service names&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;persist data with volumes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;clean up configuration using environment variables&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;manage everything with simple Compose commands&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You now have a small but realistic Node.js + MongoDB application running exactly how many real-world projects run in development.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s next in the Docker learning series?
&lt;/h2&gt;

&lt;p&gt;So far, everything has been running locally on your machine.&lt;/p&gt;

&lt;p&gt;You now know how to package an app into containers, run multiple services with Docker Compose, connect them together, and manage them in development.&lt;/p&gt;

&lt;p&gt;But local environments are only half the story.&lt;/p&gt;

&lt;p&gt;In the next part of this series, we’ll move beyond your laptop and deploy containers to the cloud.&lt;/p&gt;

&lt;p&gt;You’ll:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;understand cloud container platforms like AWS ECS, Google Cloud Run, and Azure’s container services
&lt;/li&gt;
&lt;li&gt;deploy a containerized app to the cloud
&lt;/li&gt;
&lt;li&gt;manage and run containers in cloud environments
&lt;/li&gt;
&lt;li&gt;hands-on: deploy an API container to Google Cloud Run (and try the same on AWS and Azure)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end, you’ll be able to deploy your containers beyond your local machine and into the cloud.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>dockercompose</category>
      <category>devops</category>
      <category>containers</category>
    </item>
    <item>
      <title>Run a MySQL container with persistent storage using Docker volumes</title>
      <dc:creator>Deborah Emeni</dc:creator>
      <pubDate>Tue, 02 Dec 2025 07:17:37 +0000</pubDate>
      <link>https://forem.com/deborahemeni1/run-a-mysql-container-with-persistent-storage-using-docker-volumes-49ma</link>
      <guid>https://forem.com/deborahemeni1/run-a-mysql-container-with-persistent-storage-using-docker-volumes-49ma</guid>
      <description>&lt;h2&gt;
  
  
  What you'll learn
&lt;/h2&gt;

&lt;p&gt;By the end of this section, you'll:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Understand what happens to data inside a container and why it does not stay once the container is removed.&lt;/li&gt;
&lt;li&gt;Learn what Docker volumes are and how they help you keep data safe even when containers stop or get recreated.&lt;/li&gt;
&lt;li&gt;See the difference between bind mounts and named volumes, and know which one to use in simple development situations.&lt;/li&gt;
&lt;li&gt;Learn how a container reads and writes data to a location outside its own filesystem using volumes.&lt;/li&gt;
&lt;li&gt;Complete a hands-on project where you run a MySQL container with persistent storage so your database stays intact across restarts.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We’ll begin by looking at how container storage works and why volumes are needed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why do containers lose data?
&lt;/h2&gt;

&lt;p&gt;Before we start working with volumes, it helps to understand how container storage works.&lt;/p&gt;

&lt;p&gt;Every container has its own isolated filesystem. It's like a temporary workspace that only that container can see.&lt;/p&gt;

&lt;p&gt;When you start a container, Docker creates a thin writable layer on top of the image. Any changes the container makes, such as logs, uploaded files, and database entries, go into this writable layer.&lt;/p&gt;

&lt;p&gt;The key point is that this layer is not designed to stay around forever. Once you stop or remove the container, Docker wipes the writable layer along with anything stored inside it. The image remains, but your data does not.&lt;/p&gt;

&lt;h3&gt;
  
  
  A quick example to visualize this
&lt;/h3&gt;

&lt;p&gt;Let's say you run a MySQL container, create a database table, and insert some rows.&lt;/p&gt;

&lt;p&gt;As long as the container is running, everything looks fine. But if you delete that container and start a new one using the same image, MySQL starts up as if it is a fresh installation.&lt;/p&gt;

&lt;p&gt;The data you added earlier is gone because it lived only inside the old container’s temporary filesystem.&lt;/p&gt;

&lt;p&gt;This is why we need a way to store data outside the container’s lifecycle. And that is exactly what Docker volumes provide.&lt;/p&gt;




&lt;h2&gt;
  
  
  What are Docker volumes?
&lt;/h2&gt;

&lt;p&gt;Docker volumes give containers a place to store data that lives beyond the container itself.&lt;/p&gt;

&lt;p&gt;So, instead of writing everything to the container’s short-lived filesystem, a volume provides a separate storage location that Docker manages for you.&lt;/p&gt;

&lt;p&gt;A better way to understand it is this: the container has its own internal workspace, but a volume sits outside that workspace. When you attach a volume to a container, the container can read from it and write to it as if it were part of its own filesystem, even though the data is actually stored elsewhere.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9n7dmjg20wll1tzoc67h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9n7dmjg20wll1tzoc67h.png" alt="docker-volumes" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why are Docker volumes used?
&lt;/h3&gt;

&lt;p&gt;Docker handles all the behind-the-scenes work. It creates the volume, keeps track of where it lives on your machine, and makes sure it is available whenever a container needs it. You do not have to manage the actual folder path or worry about accidentally deleting it.&lt;/p&gt;

&lt;p&gt;Volumes are used because writing data directly inside a container is not reliable. If the container is removed, everything inside it disappears.&lt;/p&gt;

&lt;p&gt;By using a volume, the data stays safe, and you can start fresh containers that still have access to the same information. This is very helpful for databases, logs, or anything you expect to keep long-term.&lt;/p&gt;




&lt;h2&gt;
  
  
  Bind mounts vs named volumes - what’s the difference?
&lt;/h2&gt;

&lt;p&gt;When you attach external storage to a container, you’re choosing how the container should access data that lives outside its own filesystem. The two ways to do this are &lt;strong&gt;bind mounts&lt;/strong&gt; and &lt;strong&gt;named volumes&lt;/strong&gt;, and each one fits a different situation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bind mounts
&lt;/h3&gt;

&lt;p&gt;A bind mount directly links a folder on your host machine into the container, so both sides see the same files immediately.&lt;/p&gt;

&lt;h3&gt;
  
  
  Named volumes
&lt;/h3&gt;

&lt;p&gt;A named volume is storage created and managed by Docker, giving your container a stable place to store data without relying on your host’s folder structure.&lt;/p&gt;

&lt;h3&gt;
  
  
  A quick comparison table
&lt;/h3&gt;

&lt;p&gt;Let's look at a simple way to compare both approaches so you can decide when to use each one:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Bind mounts&lt;/th&gt;
&lt;th&gt;Named volumes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Where data lives&lt;/td&gt;
&lt;td&gt;A folder on your host machine&lt;/td&gt;
&lt;td&gt;Docker-managed storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Local development, hot-reloading&lt;/td&gt;
&lt;td&gt;Databases, production workloads&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Portability&lt;/td&gt;
&lt;td&gt;Low (path depends on the host)&lt;/td&gt;
&lt;td&gt;High (Docker manages it)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reliability&lt;/td&gt;
&lt;td&gt;Tied to host system setup&lt;/td&gt;
&lt;td&gt;More stable and predictable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Visibility&lt;/td&gt;
&lt;td&gt;Easy to browse directly on your machine&lt;/td&gt;
&lt;td&gt;Harder to inspect manually&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Permissions&lt;/td&gt;
&lt;td&gt;Can be tricky across OSes&lt;/td&gt;
&lt;td&gt;Handled by Docker&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  How do containers store and retrieve data using volumes?
&lt;/h2&gt;

&lt;p&gt;Before we run our MySQL example, I'll explain what happens when a container uses a volume.&lt;/p&gt;

&lt;p&gt;Containers normally write data into their own internal filesystem, but when you attach a volume, you’re telling the container:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Store this data somewhere safer, outside your temporary filesystem.”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every application stores its data in a specific directory inside the container. For MySQL, that directory is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/var/lib/mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is where MySQL writes tables, logs, metadata, and all database files.&lt;/p&gt;

&lt;p&gt;If this path stays inside the container, the data disappears when the container is removed. But if we mount a volume to this path, something different happens.&lt;/p&gt;

&lt;h3&gt;
  
  
  What exactly happens when a volume is mounted?
&lt;/h3&gt;

&lt;p&gt;So, for example, when you attach a volume to &lt;code&gt;/var/lib/mysql&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker gives the container a storage location that lives outside the container.&lt;/li&gt;
&lt;li&gt;Anything MySQL writes goes into the volume instead of the container’s own filesystem.&lt;/li&gt;
&lt;li&gt;If you delete the container and start a new one with the same volume, the new container can see all the old files immediately.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why volumes are used for databases. They let the container come and go, while the data stays in place.&lt;/p&gt;

&lt;h3&gt;
  
  
  What does the data flow look like, in simple terms?
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;The container starts.&lt;/li&gt;
&lt;li&gt;Docker mounts the volume into the container at a specific path.&lt;/li&gt;
&lt;li&gt;The application (like MySQL) reads/writes files as if the folder was part of its normal filesystem.&lt;/li&gt;
&lt;li&gt;Docker handles the actual storage behind the scenes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The key idea is this: &lt;strong&gt;the container thinks it’s writing to its own filesystem, but the data is safely stored in the volume instead.&lt;/strong&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  Hands-on project: Run a MySQL container with persistent storage using Docker volumes
&lt;/h1&gt;

&lt;p&gt;Now that you understand what volumes are and how containers use them, let’s put it all together by running a MySQL container whose data survives restarts and rebuilds.&lt;/p&gt;

&lt;p&gt;We’ll go step by step so you not only run the container but also understand the importance of each step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Before you begin
&lt;/h3&gt;

&lt;p&gt;To follow along with the hands-on MySQL project, make sure you have the following set up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker installed and running&lt;/strong&gt;&lt;br&gt;
You should have Docker Desktop (macOS/Windows) or Docker Engine (Linux) installed.&lt;br&gt;
If you’ve completed the earlier parts of this series, your setup should already be ready. (If not, follow this &lt;a href="https://dev.to/deborahemeni1/getting-started-with-docker-how-to-install-docker-and-set-it-up-correctly-4knb"&gt;guide&lt;/a&gt; to make sure Docker is installed and running.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Basic familiarity with running containers&lt;/strong&gt;&lt;br&gt;
You should know how to use commands like &lt;code&gt;docker run&lt;/code&gt;, &lt;code&gt;docker stop&lt;/code&gt;, and &lt;code&gt;docker exec&lt;/code&gt;.&lt;br&gt;
We’ll build on these, but you don’t need advanced knowledge.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;A terminal or command prompt open&lt;/strong&gt;&lt;br&gt;
All the commands in this section will be run from your terminal.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Port 3306 available on your machine&lt;/strong&gt;&lt;br&gt;
MySQL uses port 3306. If another MySQL server is already running locally, you may need to stop it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;At least a few hundred MB of free disk space&lt;/strong&gt;&lt;br&gt;
The MySQL image and its data files need some space to run smoothly.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once you have these ready, you can safely move on to the first step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Pull the MySQL image
&lt;/h3&gt;

&lt;p&gt;Open your terminal or command prompt. This can be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terminal&lt;/strong&gt; on macOS or Linux&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Command Prompt&lt;/strong&gt; or &lt;strong&gt;PowerShell&lt;/strong&gt; on Windows&lt;/li&gt;
&lt;li&gt;Or the built-in terminal inside tools like VS Code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Make sure Docker Desktop is running in the background.&lt;br&gt;
You can check by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If Docker responds with a version number, you’re good to continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9w6zxz29m75pybxgeng2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9w6zxz29m75pybxgeng2.png" alt="docker version output" width="800" height="49"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let’s download the official &lt;a href="https://hub.docker.com/_/mysql" rel="noopener noreferrer"&gt;MySQL image from Docker Hub&lt;/a&gt;. This image contains everything needed to run a MySQL server inside a container.&lt;/p&gt;

&lt;p&gt;Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull mysql:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first time you pull it, Docker will download all the layers it needs.&lt;/p&gt;

&lt;p&gt;If you already have it, Docker will simply check for updates and use the cached version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgh57dswrbeuezw865zgf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgh57dswrbeuezw865zgf.png" alt="docker pull mysql output" width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the pull completes, the MySQL image is available locally and ready to run in a container.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Create a named Docker volume
&lt;/h3&gt;

&lt;p&gt;Before we run MySQL, we need a place for it to store its data. Instead of letting it write into the container’s temporary filesystem, we’ll give it a &lt;strong&gt;named Docker volume&lt;/strong&gt;. This ensures the data stays safe even if the container is deleted.&lt;/p&gt;

&lt;p&gt;In your terminal, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker volume create mysql_data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a volume named &lt;strong&gt;&lt;code&gt;mysql_data&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw32cnkcul5ozx6v7rcd3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw32cnkcul5ozx6v7rcd3.png" alt=" " width="800" height="76"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You won’t see a folder appear on your machine, because Docker manages the volume internally. The important thing is that the volume now exists, and we can attach it to the MySQL container in the next step.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Why create the volume ahead of time instead of letting Docker create it automatically?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Because doing it explicitly helps you see the whole flow: &lt;strong&gt;you&lt;/strong&gt; create the storage location, and MySQL simply uses it.&lt;/p&gt;

&lt;p&gt;This makes the idea of persistent storage much clearer when you start working with real databases and production workloads.&lt;/p&gt;

&lt;p&gt;You can confirm that the volume was created by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker volume &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Look for &lt;code&gt;mysql_data&lt;/code&gt; in the list. If it’s there, you’re ready to move on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgf1m7dafactxdk2q76k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgf1m7dafactxdk2q76k.png" alt=" " width="800" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Run a MySQL container using the volume
&lt;/h3&gt;

&lt;p&gt;Now that the volume is ready, the next step is to run a MySQL container and attach that volume to MySQL’s data directory.&lt;/p&gt;

&lt;p&gt;This is the point where the container and the volume start working together.&lt;/p&gt;

&lt;p&gt;In your terminal, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; mysql_container &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;my-secret-password &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; mysql_data:/var/lib/mysql &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 3306:3306 &lt;span class="se"&gt;\&lt;/span&gt;
  mysql:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the command succeeds, Docker prints a long container ID, which confirms the container has started in the background.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40fpnhm5rzerzp0jljav.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40fpnhm5rzerzp0jljav.png" alt=" " width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This single command does a lot, so here’s what each part means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;-d&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
Runs the container in detached mode, so it keeps running in the background.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;--name mysql_container&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
Assigns a readable name to the container so it’s easy to start, stop, or access later.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;-e MYSQL_ROOT_PASSWORD=my-secret-password&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
Sets the root password for MySQL inside the container.&lt;br&gt;
(You’ll need this password later when connecting to the database.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;-v mysql_data:/var/lib/mysql&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
Mounts the volume created in the previous step into MySQL’s internal data directory.&lt;br&gt;
This is what makes the data persist outside the container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;-p 3306:3306&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
Maps container port 3306 to port 3306 on your host so you can connect using local tools or clients.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once this container is running, &lt;strong&gt;every table, row, and record MySQL writes from now on will live inside the &lt;code&gt;mysql_data&lt;/code&gt; volume instead of the container’s temporary filesystem.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That means even if we delete the container, the data remains.&lt;/p&gt;

&lt;p&gt;If you’d like, you can run the next command to confirm that the container is up and running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If &lt;code&gt;mysql_container&lt;/code&gt; appears in the list, you’re ready for Step 4.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4gp02ol9fmgttiupyvi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4gp02ol9fmgttiupyvi.png" alt=" " width="800" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Connect to MySQL inside the container
&lt;/h3&gt;

&lt;p&gt;Now that MySQL is running inside Docker, let’s connect to it using the MySQL CLI built into the container.&lt;/p&gt;

&lt;p&gt;Run this command from your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; mysql_container mysql &lt;span class="nt"&gt;-u&lt;/span&gt; root &lt;span class="nt"&gt;-p&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This opens an interactive MySQL session inside the container.&lt;/p&gt;

&lt;p&gt;Right after running the command, MySQL asks for the root password you set earlier. You should see a password prompt like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyre1a5tcs5owma9jvv9j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyre1a5tcs5owma9jvv9j.png" alt="Example output showing the password prompt" width="800" height="35"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the password and press Enter.&lt;/p&gt;

&lt;p&gt;If authentication works, you’ll be dropped into the MySQL shell and shown the MySQL welcome banner. A successful login looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3rw1ept45p7qo55434a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3rw1ept45p7qo55434a.png" alt="Example of a successful login" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you’re inside MySQL and can run SQL commands normally. Let’s create some data so we can verify persistence later. Run the following inside the MySQL prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;DATABASE&lt;/span&gt; &lt;span class="n"&gt;testdb&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;USE&lt;/span&gt; &lt;span class="n"&gt;testdb&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;INT&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'Alice'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The final &lt;code&gt;SELECT&lt;/code&gt; should return the row you inserted. Here’s what the output looks like when the row is returned:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiittmv8p5cotaovouyga.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiittmv8p5cotaovouyga.png" alt=" " width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This confirms two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;MySQL is running correctly inside your container.&lt;/li&gt;
&lt;li&gt;The database files are being written into the &lt;code&gt;mysql_data&lt;/code&gt; volume.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When you’re done, exit the MySQL shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exit;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The container will continue running in the background.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Stop and remove the container
&lt;/h3&gt;

&lt;p&gt;Now that we’ve added data to MySQL, let’s demonstrate what happens when the container itself is removed.&lt;/p&gt;

&lt;p&gt;The goal here is to show that even if the container is removed, the data stored in the volume remains intact.&lt;/p&gt;

&lt;p&gt;From your terminal, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stop mysql_container
docker &lt;span class="nb"&gt;rm &lt;/span&gt;mysql_container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s what each command does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker stop&lt;/code&gt; shuts the container down safely.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker rm&lt;/code&gt; deletes the container entirely.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you run those two commands, you should see output similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frq2j3hu6ezvv99z65ijz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frq2j3hu6ezvv99z65ijz.png" alt=" " width="800" height="84"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even though the container is now gone, your &lt;code&gt;mysql_data&lt;/code&gt; volume still exists untouched, and it still contains every database file MySQL wrote earlier.&lt;/p&gt;
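&lt;p&gt;If you want a quick sanity check at this point, listing volumes should still show it (&lt;code&gt;mysql_data&lt;/code&gt; is the volume name from Step 2):&lt;/p&gt;

```shell
# The container is gone, but the named volume is still listed.
docker volume ls --filter name=mysql_data
```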

&lt;p&gt;This sets us up perfectly for the next step, where we’ll start a new container and confirm that the data is still there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Start a new container using the same volume
&lt;/h2&gt;

&lt;p&gt;Now that the previous container has been removed, let’s start a brand-new MySQL container and attach the same volume (&lt;code&gt;mysql_data&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;This is where you’ll see Docker volumes doing exactly what they’re designed for.&lt;/p&gt;

&lt;p&gt;Run the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; mysql_container &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;my-secret-password &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; mysql_data:/var/lib/mysql &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 3306:3306 &lt;span class="se"&gt;\&lt;/span&gt;
  mysql:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what it looks like when the container starts successfully:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfcd0zx8x44hul15fbzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfcd0zx8x44hul15fbzu.png" alt="Screenshot of docker run command starting a new MySQL container" width="800" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the same command you ran earlier. The container is new, but the volume still contains all of MySQL’s data files from before.&lt;/p&gt;

&lt;h4&gt;
  
  
  Connect back into MySQL
&lt;/h4&gt;

&lt;p&gt;Now that the container is running again, open a MySQL shell inside it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; mysql_container mysql &lt;span class="nt"&gt;-u&lt;/span&gt; root &lt;span class="nt"&gt;-p&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;MySQL will prompt you for the root password:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh94ffplnxxhxh6nou7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh94ffplnxxhxh6nou7f.png" alt=" " width="800" height="39"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the same password you set earlier.&lt;/p&gt;

&lt;h4&gt;
  
  
  Verify that your data is still there
&lt;/h4&gt;

&lt;p&gt;Once inside the MySQL shell, run the following SQL commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;DATABASES&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;USE&lt;/span&gt; &lt;span class="n"&gt;testdb&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should now see the same database and table you created earlier, including the “Alice” row:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fulwe240lyj12b76zgn4u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fulwe240lyj12b76zgn4u.png" alt=" " width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your new container didn’t have to recreate anything. It simply reused the data stored in the volume. This is the entire point of Docker volumes: containers may be temporary, but your data is not.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7: Inspect the volume (optional)
&lt;/h3&gt;

&lt;p&gt;If you want to see where Docker is storing your MySQL data on your machine, you can inspect the volume directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker volume inspect mysql_data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command prints the full details of the &lt;code&gt;mysql_data&lt;/code&gt; volume, including the path on your system where Docker keeps the files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sdhf14o1cixx8565twd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sdhf14o1cixx8565twd.png" alt="docker volume inspect output for mysql_data" width="800" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pay attention to the &lt;code&gt;"Mountpoint"&lt;/code&gt; field in the output. That path is where Docker stores all of MySQL’s internal data files, and it remains available even after containers are removed. This confirms that the volume persists independently of any container.&lt;/p&gt;
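&lt;p&gt;As a shortcut, &lt;code&gt;docker volume inspect&lt;/code&gt; also accepts a Go-template &lt;code&gt;--format&lt;/code&gt; flag, so you can print just that one field:&lt;/p&gt;

```shell
# Print only the Mountpoint field of the mysql_data volume.
docker volume inspect --format '{{ .Mountpoint }}' mysql_data
```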




&lt;h2&gt;
  
  
  So what you just learned (and why it’s useful)
&lt;/h2&gt;

&lt;p&gt;In this part of the series, I walked you through how Docker volumes keep your data safe even when containers are stopped or removed.&lt;/p&gt;

&lt;p&gt;You went through the process step by step:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You created a named volume for MySQL’s data.&lt;/li&gt;
&lt;li&gt;You ran a MySQL container and attached that volume to its data directory.&lt;/li&gt;
&lt;li&gt;You created a database, table, and record inside MySQL.&lt;/li&gt;
&lt;li&gt;You removed the container and started a new one using the same volume.&lt;/li&gt;
&lt;li&gt;You confirmed that the data was still there.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By doing this, you now know how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep important data outside a container’s temporary filesystem.&lt;/li&gt;
&lt;li&gt;Reuse the same data across multiple containers.&lt;/li&gt;
&lt;li&gt;Inspect a volume to see where Docker stores it.&lt;/li&gt;
&lt;li&gt;Prevent data loss when containers are rebuilt during development.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This understanding becomes very practical once you start working with databases, background workers, message queues, or any service that needs to persist data.&lt;/p&gt;

&lt;p&gt;In the next part of the series, I’ll show you how to manage everything with &lt;strong&gt;Docker Compose&lt;/strong&gt;, so you can start multiple containers with a single command.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>dockervolumes</category>
      <category>mysql</category>
      <category>devops</category>
    </item>
    <item>
      <title>Docker networking: How to connect containers in a full-stack project</title>
      <dc:creator>Deborah Emeni</dc:creator>
      <pubDate>Tue, 04 Nov 2025 09:10:30 +0000</pubDate>
      <link>https://forem.com/deborahemeni1/docker-networking-how-to-connect-containers-in-a-full-stack-project-3l98</link>
      <guid>https://forem.com/deborahemeni1/docker-networking-how-to-connect-containers-in-a-full-stack-project-3l98</guid>
      <description>&lt;h2&gt;
  
  
  What you'll learn
&lt;/h2&gt;

&lt;p&gt;By the end of this section, you'll:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Understand how Docker networking works and why it's important for multi-container apps.&lt;/li&gt;
&lt;li&gt;Learn the difference between bridge, host, and overlay networks in Docker.&lt;/li&gt;
&lt;li&gt;Know how to expose ports using &lt;code&gt;-p&lt;/code&gt; so services can communicate across containers or expose endpoints to your host.&lt;/li&gt;
&lt;li&gt;See how to connect multiple containers so they can communicate internally.&lt;/li&gt;
&lt;li&gt;Complete a hands-on project where a React app communicates with a Node.js backend over a shared custom bridge network in Docker.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We'll start by understanding why Docker networking is important.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Docker networking is important in multi-container applications
&lt;/h2&gt;

&lt;p&gt;When you're working on a project, it's rarely a single container doing all the work. You'll usually have a:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;backend API&lt;/li&gt;
&lt;li&gt;frontend application&lt;/li&gt;
&lt;li&gt;database&lt;/li&gt;
&lt;li&gt;sometimes a message broker&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;... all running as separate containers.&lt;/p&gt;

&lt;p&gt;The key is that these containers need to communicate with each other over a network.&lt;/p&gt;

&lt;p&gt;For example, if your frontend can't reach the backend, or your API can't access the database, the entire application breaks. That's where Docker networking comes in.&lt;/p&gt;

&lt;p&gt;It enables containers to communicate reliably and securely, without exposing everything to the internet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Okay, let me give you a common scenario.
&lt;/h3&gt;

&lt;p&gt;Let's say you're building a React application that fetches data from a Node.js backend. During development, you might call the backend using "&lt;a href="http://localhost:4000" rel="noopener noreferrer"&gt;http://localhost:4000&lt;/a&gt;", right?&lt;/p&gt;

&lt;p&gt;That works when both apps run directly on your local machine. But once they run in separate containers, &lt;em&gt;localhost&lt;/em&gt; no longer refers to the same environment. The React container’s localhost is not the same as the backend’s.&lt;/p&gt;

&lt;p&gt;Now you need a way for those containers to discover and communicate with each other.&lt;/p&gt;

&lt;h3&gt;
  
  
  So, what's the solution?
&lt;/h3&gt;

&lt;p&gt;Docker solves this by creating a virtual network and allowing containers to discover each other by name, like an internal DNS system.&lt;/p&gt;

&lt;p&gt;So if you name your backend container "backend", your frontend can make requests to "&lt;a href="http://backend:4000" rel="noopener noreferrer"&gt;http://backend:4000&lt;/a&gt;".&lt;/p&gt;

&lt;p&gt;Without any need for IPs or manual linking, it just works as long as both containers are on the same Docker network.&lt;/p&gt;

&lt;p&gt;In this project, you'll see how this works in practice. You’ll:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a shared custom network&lt;/li&gt;
&lt;li&gt;Run both containers on it&lt;/li&gt;
&lt;li&gt;Configure the frontend to communicate with the backend using its container name&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This workflow is foundational for larger containerized systems and directly applies to more advanced tooling like Docker Compose or Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Now that you have a clear understanding of why containers need to communicate, in the next section you'll learn how Docker makes that communication possible behind the scenes.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How Docker networking works
&lt;/h2&gt;

&lt;p&gt;Now that you understand why containers need to communicate, it’s important to see how Docker enables that communication.&lt;/p&gt;

&lt;p&gt;When Docker is installed, it creates a default network called &lt;strong&gt;bridge&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This &lt;strong&gt;bridge&lt;/strong&gt; is a virtual network that Docker uses to connect containers on the same host. If you run containers without explicitly assigning them to a custom network, Docker attaches them to this default bridge network.&lt;/p&gt;
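&lt;p&gt;You can see this for yourself by listing the networks on your machine; a standard installation includes &lt;code&gt;bridge&lt;/code&gt;, &lt;code&gt;host&lt;/code&gt;, and &lt;code&gt;none&lt;/code&gt;:&lt;/p&gt;

```shell
# List all Docker networks; bridge, host, and none exist by default.
docker network ls
```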

&lt;p&gt;Every container connected to a network gets its own internal IP address. More importantly, when containers share a &lt;strong&gt;user-defined&lt;/strong&gt; network, Docker’s embedded DNS lets them resolve each other by name. (The default bridge only provides IP-level connectivity, which is one reason we’ll create a custom network for this project.)&lt;/p&gt;

&lt;p&gt;This means that instead of using &lt;em&gt;localhost&lt;/em&gt; or an IP address, a container can reach another container simply by using its name.&lt;/p&gt;

&lt;p&gt;For example, if a container is started with &lt;code&gt;--name backend&lt;/code&gt;, any other container on the same network can reach it at: "&lt;a href="http://backend" rel="noopener noreferrer"&gt;http://backend&lt;/a&gt;"&lt;/p&gt;

&lt;p&gt;This feature is what makes internal container communication seamless. You don’t need to hardcode IPs or expose every service to the outside world.&lt;/p&gt;

&lt;h3&gt;
  
  
  The 3 types of Docker networks
&lt;/h3&gt;

&lt;p&gt;Now, let's break down the three main network types to know when working with Docker:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Bridge network (default for single-host setups)
&lt;/h3&gt;

&lt;p&gt;This is the most commonly used network type for local development. When multiple containers are attached to the same user-defined bridge network, they can communicate with each other using their container names.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this project, a custom bridge network will be used. Creating a named network gives more control and ensures both the frontend and backend containers are communicating within a shared, isolated environment.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  2. Host network
&lt;/h3&gt;

&lt;p&gt;This mode removes network isolation between the container and the host. The container shares the host’s network stack. It is typically used when maximum network performance is needed or when the container must bind directly to host ports.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We won't use this in our project, but it's important to be aware of its use cases and limitations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  3. Overlay network
&lt;/h3&gt;

&lt;p&gt;This one is for multi-host setups, such as a Docker Swarm cluster (Kubernetes has its own networking model that serves a similar purpose). It allows containers running on different physical machines to communicate over a secure virtual network.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It's not needed for single-machine setups, but essential when deploying distributed systems in production.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, to recap:&lt;/p&gt;

&lt;p&gt;For most development environments and projects running on a single host, the bridge network is the most practical choice. It provides container name resolution, clean isolation, and is simple to configure.&lt;/p&gt;

&lt;p&gt;This is why the upcoming steps in the project will use a custom bridge network, which ensures that the frontend and backend containers can communicate securely and reliably.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Coming up next, I'll show you how to expose container ports with the &lt;code&gt;-p&lt;/code&gt; flag so services running in containers can be accessed from the host machine or other tools. Let's walk through that now.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Exposing container ports with &lt;code&gt;-p&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Now that you’ve seen how containers can communicate with each other on a Docker network, it’s also important to understand how your host machine (such as your browser, Postman, or terminal) can communicate with those containers.&lt;/p&gt;

&lt;p&gt;By default, Docker containers are isolated. Even if a service is running correctly inside a container, it cannot be accessed from outside unless a port is explicitly exposed.&lt;/p&gt;

&lt;p&gt;This is where the &lt;code&gt;-p&lt;/code&gt; flag comes in.&lt;/p&gt;

&lt;p&gt;When starting a container, the &lt;code&gt;-p&lt;/code&gt; option maps a port inside the container to a port on the host machine. This allows external tools, like your browser, to access services running inside the container.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells Docker to map port 3000 on your host machine to port 3000 inside the container. The format is &lt;code&gt;host:container&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So if your React app is running inside the container on port 3000, you can open your browser and visit "&lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;"&lt;/p&gt;

&lt;p&gt;If you forget to use the &lt;code&gt;-p&lt;/code&gt; flag, the container might still be running and responding internally, but you will not be able to access it from your host environment.&lt;/p&gt;
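&lt;p&gt;Note that the two sides of the mapping don’t have to match. As a sketch (using a hypothetical image name, &lt;code&gt;my-react-app&lt;/code&gt;), this publishes container port 3000 on host port 8080, so the app would be reachable at &lt;code&gt;http://localhost:8080&lt;/code&gt;:&lt;/p&gt;

```shell
# Map host port 8080 to container port 3000 (format: -p host:container).
# "my-react-app" is a hypothetical image name.
docker run -d -p 8080:3000 my-react-app
```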

&lt;p&gt;You'll find this mapping most important for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Frontend applications that need to be opened in a browser&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;APIs that should be tested using Postman, curl, or other tools&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Any service that should be exposed to the outside world during development or testing&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;In our project, we'll expose both the backend (on port 4000) and the frontend (on port 3000) using &lt;code&gt;-p&lt;/code&gt;. This will allow us to view the React app in the browser and test the Node.js API externally.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Up next, I’ll walk you through how containers communicate internally without needing to expose their ports to the host at all. This is particularly useful when two services need to communicate entirely within Docker.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How containers communicate with each other internally
&lt;/h2&gt;

&lt;p&gt;Earlier, you learned how to expose a container’s port to your host machine using the &lt;code&gt;-p&lt;/code&gt; flag. That setup is useful when you want to access a service from your browser, testing tools like Postman, or terminal utilities like curl.&lt;/p&gt;

&lt;p&gt;But what happens when two containers need to communicate internally, without routing through your host machine?&lt;/p&gt;

&lt;p&gt;For example, say your React frontend container needs to fetch data from a Node.js backend container. Both are running inside Docker. In this case, you do not need to expose any ports with &lt;code&gt;-p&lt;/code&gt; to enable communication between them.&lt;/p&gt;

&lt;p&gt;This works because Docker automatically sets up internal networking for containers that are on the same network. Docker provides an internal DNS system that allows containers to resolve each other by name.&lt;/p&gt;

&lt;p&gt;If your backend container is named &lt;strong&gt;backend&lt;/strong&gt;, your React app can send a request like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;http://backend:4000
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There’s no need to use IP addresses or expose backend ports to the host. The container name acts as the hostname, and Docker handles the rest.&lt;/p&gt;

&lt;p&gt;To enable this setup, both containers must be attached to the same Docker network. You can do this by creating a custom bridge network and passing the &lt;code&gt;--network&lt;/code&gt; flag when running each container.&lt;/p&gt;

&lt;p&gt;In this project, that’s exactly what we’ll do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a custom bridge network&lt;/li&gt;
&lt;li&gt;Connect both containers to it&lt;/li&gt;
&lt;li&gt;Configure the React app to communicate with the backend using the container name (backend) and the correct port (4000)&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;This internal communication model is how multi-container systems typically work. Services communicate over a private network without needing to expose every service to the outside.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Alright, now let’s move on to the project where we’ll put all of this into practice. You’re going to use a React frontend + Node.js backend running in separate containers, communicating over Docker’s internal network.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let’s go.&lt;/p&gt;




&lt;h2&gt;
  
  
  Project: Connect a React frontend container to a Node.js backend container
&lt;/h2&gt;

&lt;p&gt;You’ve now seen the theory behind Docker networking. Let’s put that knowledge into practice with a hands-on project.&lt;/p&gt;

&lt;p&gt;In this section, you’ll build (or clone) a two-service application using Docker: a React frontend that fetches data from a Node.js backend. Both services will run in separate containers, and you’ll configure them to communicate over a shared Docker network.&lt;/p&gt;

&lt;p&gt;Here’s what you’ll do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Containerize each application with a custom Dockerfile&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The React app will be built and served using Nginx&lt;/li&gt;
&lt;li&gt;The backend will run on Node.js 22&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Create a custom Docker bridge network&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Configure the frontend to communicate with the backend using the backend’s container name, not &lt;em&gt;localhost&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Verify that everything works in your browser and from inside the containers&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;If you don’t already have a project set up, you can clone these two minimal demo repositories to follow along:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;React frontend: &lt;a href="https://github.com/d-emeni/react-demo" rel="noopener noreferrer"&gt;https://github.com/d-emeni/react-demo&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Node.js backend: &lt;a href="https://github.com/d-emeni/node-api-demo" rel="noopener noreferrer"&gt;https://github.com/d-emeni/node-api-demo&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s get started.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Understand the project structure
&lt;/h3&gt;

&lt;p&gt;Let’s start by walking through what each part of the application does and how they interact once containerized.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The React app (frontend) runs on port 3000 in development. It fetches a list of users from a backend API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Node.js backend listens on port 4000, serving a JSON response at the &lt;code&gt;/api/users&lt;/code&gt; endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a typical local development setup, the frontend would send requests like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;http://localhost:4000/api/users
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That works because both the frontend and backend are running directly on your machine.&lt;/p&gt;

&lt;p&gt;However, once you move both apps into containers, that changes. Each container has its own isolated environment, including its own version of "localhost". So if the frontend container tries to send a request to "localhost:4000", it's actually trying to call itself, not the backend.&lt;/p&gt;

&lt;p&gt;To solve this, we’ll:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Connect both containers to a shared Docker network&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update the frontend to communicate with the backend using the container name, like "&lt;a href="http://backend:4000" rel="noopener noreferrer"&gt;http://backend:4000&lt;/a&gt;"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;And in production (when the frontend is served via Nginx), we’ll configure it to call the backend using a relative path like &lt;code&gt;/api&lt;/code&gt;, which Nginx will proxy to the backend container&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach allows the two services to communicate reliably within Docker, without hardcoding IP addresses or exposing unnecessary ports.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Add Dockerfiles to both apps
&lt;/h3&gt;

&lt;p&gt;To run your applications inside Docker containers, you need to define how each one should be built. That’s what a Dockerfile does: it’s a step-by-step recipe that tells Docker how to package your app into a runnable image.&lt;/p&gt;

&lt;h4&gt;
  
  
  Dockerfile for the React frontend
&lt;/h4&gt;

&lt;p&gt;In the root of your React project (&lt;strong&gt;react-demo/&lt;/strong&gt;), create a file called &lt;strong&gt;Dockerfile&lt;/strong&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stage 1: Build the React app&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:22-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm run build

&lt;span class="c"&gt;# Stage 2: Serve the app with Nginx&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; nginx:alpine&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app/dist /usr/share/nginx/html&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; nginx.conf /etc/nginx/nginx.conf&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 80&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["nginx", "-g", "daemon off;"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break this down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stage 1 (builder)&lt;/strong&gt;: Uses Node.js 22 to install dependencies and run the Vite build, which outputs the static files into the &lt;strong&gt;dist/&lt;/strong&gt; folder.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stage 2 (Nginx)&lt;/strong&gt;: Copies the build output into the default Nginx web root and starts the Nginx server to serve the static files.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This multi-stage Dockerfile keeps your final image small and production-optimized. You're only shipping the compiled frontend and not the entire Node environment.&lt;/p&gt;

&lt;p&gt;Make sure to also include an &lt;strong&gt;nginx.conf&lt;/strong&gt; file in your project root. This ensures that API requests like &lt;strong&gt;/api/users&lt;/strong&gt; are correctly forwarded to the backend container. You’ll add that later in the tutorial.&lt;/p&gt;
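&lt;p&gt;To give you a concrete idea of what that file does, here’s a minimal sketch of such an &lt;strong&gt;nginx.conf&lt;/strong&gt; (illustrative only, not the final file; it assumes the backend container is named &lt;strong&gt;backend&lt;/strong&gt; and listens on port 4000):&lt;/p&gt;

```nginx
# Minimal sketch: serve the React build and proxy /api requests
# to the backend container over Docker's internal network.
events {}

http {
  include /etc/nginx/mime.types;

  server {
    listen 80;

    # Serve the compiled React app
    location / {
      root /usr/share/nginx/html;
      try_files $uri /index.html;
    }

    # Forward API calls to the backend container by name
    location /api {
      proxy_pass http://backend:4000;
    }
  }
}
```

&lt;p&gt;The &lt;code&gt;proxy_pass http://backend:4000&lt;/code&gt; line is where Docker’s internal DNS comes in: Nginx resolves the hostname &lt;strong&gt;backend&lt;/strong&gt; to the backend container’s IP on the shared network.&lt;/p&gt;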

&lt;h4&gt;
  
  
  Dockerfile for the Node.js backend
&lt;/h4&gt;

&lt;p&gt;In the root of your backend project (&lt;strong&gt;node-api-demo/&lt;/strong&gt;), create a file named &lt;strong&gt;Dockerfile&lt;/strong&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use Node.js 22 Alpine image&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:22-alpine&lt;/span&gt;

&lt;span class="c"&gt;# Set the working directory&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Copy the backend code&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="c"&gt;# Install dependencies&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# Start the server&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["node", "server.js"]&lt;/span&gt;

&lt;span class="c"&gt;# Expose the backend port&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 4000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Dockerfile defines everything Docker needs to run your backend service:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Installs dependencies&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Runs your &lt;strong&gt;server.js&lt;/strong&gt; file using Node.js&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Exposes port 4000 for incoming API requests&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’d like a more detailed walkthrough of Dockerizing a Node.js backend, check out my step-by-step &lt;a href="https://dev.to/deborahemeni1/how-to-dockerize-a-nodejs-application-with-a-custom-dockerfile-7ji"&gt;guide&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Build Docker images
&lt;/h3&gt;

&lt;p&gt;Now that both applications have Dockerfiles, it’s time to package them into Docker images. These images will serve as the blueprints for running your containers.&lt;/p&gt;

&lt;p&gt;Open your terminal and run the following commands from the root of each project:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Make sure your Docker daemon is running before you begin. If you're not sure, follow &lt;a href="https://dev.to/deborahemeni1/getting-started-with-docker-how-to-install-docker-and-set-it-up-correctly-4knb"&gt;this setup guide&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Inside react-demo/&lt;/span&gt;
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; react-app &lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="c"&gt;# Inside node-api-demo/&lt;/span&gt;
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; backend-api &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break that down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;-t&lt;/code&gt; flag assigns a name (or “tag”) to your image. In this case, we're naming them &lt;code&gt;react-app&lt;/code&gt; and &lt;code&gt;backend-api&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;.&lt;/code&gt; at the end tells Docker to use the current directory as the build context, where it will look for the Dockerfile and app code.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once these build steps are complete, you’ll have two ready-to-run Docker images:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;react-app&lt;/code&gt; — a production-ready build of your React frontend, served with Nginx.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;backend-api&lt;/code&gt; — your Node.js server listening on port 4000.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ll see output logs during the build process. Here's what the end of the build typically looks like for each app:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;React frontend image build&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqbkwbwys8ysn4kt8w7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqbkwbwys8ysn4kt8w7q.png" alt="React frontend image build" width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Node.js backend image build&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7858gbngfkb3tb4ctnhy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7858gbngfkb3tb4ctnhy.png" alt="Node.js backend image build" width="800" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you see something like this, it means your images were built successfully and are now ready to be run in containers.&lt;/p&gt;

&lt;p&gt;Next up, we’ll create a Docker network to allow both containers to communicate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Create a custom Docker network
&lt;/h3&gt;

&lt;p&gt;For the frontend and backend containers to communicate by name, they must be connected to the same Docker network.&lt;/p&gt;

&lt;p&gt;Open your terminal (you can run this from any directory) and create a custom bridge network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker network create react-backend-net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command sets up a new bridge network named &lt;code&gt;react-backend-net&lt;/code&gt;. If successful, Docker will return a long alphanumeric string that represents the network ID:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwh2aqc0jj1i11rp4sb1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwh2aqc0jj1i11rp4sb1d.png" alt="network ID" width="800" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You won’t need to interact with this ID directly. What matters is the name of the network (&lt;code&gt;react-backend-net&lt;/code&gt;), because that’s what you’ll reference when running containers.&lt;/p&gt;

&lt;p&gt;Once both containers are connected to this network, Docker will enable them to resolve each other by container name. For example, the frontend can reach the backend simply using "&lt;a href="http://backend:4000" rel="noopener noreferrer"&gt;http://backend:4000&lt;/a&gt;" without any IP addresses or exposed ports.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you get an error saying the network already exists, that means it was previously created. You can safely skip this step and continue with the next one.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Next, we’ll run the backend container and connect it to this custom network.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Run the backend container
&lt;/h3&gt;

&lt;p&gt;Start by running the Node.js backend as a container. This will ensure the API is running and ready to handle requests before we launch the React app.&lt;/p&gt;

&lt;p&gt;Before running the command below, ensure the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You’re not already running the backend server locally on port 4000.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Port 4000 is free (not being used by another process).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Docker daemon is running properly on your system.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, open your terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; backend &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--network&lt;/span&gt; react-backend-net &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 4000:4000 &lt;span class="se"&gt;\&lt;/span&gt;
  backend-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--name backend&lt;/code&gt; assigns the container a name. Other containers on the same network (like the frontend) can refer to it using this name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--network react-backend-net&lt;/code&gt; attaches the container to the custom Docker network we created in Step 4.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-p 4000:4000&lt;/code&gt; maps port 4000 inside the container to port 4000 on your host, so you can access the API via "&lt;a href="http://localhost:4000" rel="noopener noreferrer"&gt;http://localhost:4000&lt;/a&gt;".&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If everything starts correctly, Docker will output a long container ID like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl81exphttl8xx7hab3ry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl81exphttl8xx7hab3ry.png" alt="container ID" width="800" height="130"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This confirms the backend container is running in the background.&lt;/p&gt;

&lt;h4&gt;
  
  
  Just in case you run into any errors &amp;amp; how to resolve them…
&lt;/h4&gt;

&lt;p&gt;If you run the command and Docker throws an error, don’t worry. These are the two most common ones you might see during this step, and how to fix them quickly:&lt;/p&gt;

&lt;p&gt;(1). &lt;strong&gt;Port already in use&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:4000...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means something (like your local server) is already using port 4000.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Stop the process using the port (e.g., your locally running Node server), or&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Change the host port in the Docker command:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-p 4001:4000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This maps port 4001 on your machine to port 4000 inside the container.&lt;/p&gt;

&lt;p&gt;(2). &lt;strong&gt;Container name already in use&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Error response from daemon: Conflict. The container name &lt;span class="s2"&gt;"/backend"&lt;/span&gt; is already &lt;span class="k"&gt;in &lt;/span&gt;use...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker doesn’t allow duplicate container names. You can fix this in one of two ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Option A&lt;/strong&gt;: Remove the existing container:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Option B&lt;/strong&gt;: Run the new container with a different name:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nt"&gt;--name&lt;/span&gt; backend-v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once your backend container is running successfully, you’re ready to launch the frontend container and configure it to communicate with the backend through the Docker network.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Configure the React frontend to communicate with the backend container
&lt;/h3&gt;

&lt;p&gt;Before running the React container, we need to update the frontend code so it knows how to communicate with the backend service inside Docker.&lt;/p&gt;

&lt;p&gt;During local development, you may have written a request like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:4000/api/users&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That works when both apps run on your machine. However, once they’re in containers, this URL will no longer work, because each container has its own isolated environment. Inside the React container, &lt;strong&gt;localhost&lt;/strong&gt; refers to itself, not the backend.&lt;/p&gt;

&lt;p&gt;To resolve this, you have two important steps:&lt;/p&gt;

&lt;p&gt;(1). &lt;strong&gt;Use a relative path in api.js&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Update the API request to use a relative path instead of a hardcoded URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/api/users&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures the React app can stay agnostic of the backend's full URL, letting us handle routing through Docker or Nginx.&lt;/p&gt;

&lt;p&gt;In your &lt;strong&gt;src/services/api.js&lt;/strong&gt; file, your updated code might look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;API_BASE_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VITE_API_BASE_URL&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;fetchUsers&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;API_BASE_URL&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/api/users`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Failed to fetch users: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(2). &lt;strong&gt;Set &lt;code&gt;VITE_API_BASE_URL&lt;/code&gt; in the .env file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To make this work inside Docker, you should set the environment variable in your React app’s &lt;strong&gt;.env&lt;/strong&gt; file like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;VITE_API_BASE_URL=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means: use a relative path like &lt;strong&gt;/api&lt;/strong&gt; (which Nginx will proxy internally to the backend container). In our container setup, the Nginx configuration ensures that requests to &lt;strong&gt;/api&lt;/strong&gt; are forwarded to the backend.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Inside the container, this relies on Docker’s internal networking and DNS resolution. Since the backend container is named &lt;strong&gt;backend&lt;/strong&gt;, Nginx knows how to forward requests to "&lt;a href="http://backend:4000" rel="noopener noreferrer"&gt;http://backend:4000&lt;/a&gt;".&lt;/p&gt;
&lt;/blockquote&gt;
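&lt;p&gt;To see why the empty base URL works, remember how the browser resolves a relative path against the page’s origin. A quick illustration using the standard &lt;code&gt;URL&lt;/code&gt; API (the origins here match our port mappings):&lt;/p&gt;

```javascript
// How a relative API path resolves in the browser. With VITE_API_BASE_URL
// empty, fetch("/api/users") targets the same origin that served the app,
// so the request goes through Nginx's /api proxy.
const pageOrigin = "http://localhost:3000"; // where Nginx serves the React app

// Empty base URL -> same-origin request, proxied by Nginx to the backend
const relative = new URL("/api/users", pageOrigin).href;

// A hardcoded absolute base URL would bypass the proxy entirely
const hardcoded = new URL("/api/users", "http://localhost:4000").href;

console.log(relative);  // http://localhost:3000/api/users
console.log(hardcoded); // http://localhost:4000/api/users
```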

&lt;p&gt;Once these updates are made:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Your React app will send API requests to &lt;strong&gt;/api/users&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Nginx inside the container will forward them to "&lt;a href="http://backend:4000/api/users" rel="noopener noreferrer"&gt;http://backend:4000/api/users&lt;/a&gt;"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The backend will respond, and the data will be rendered in your React UI&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup keeps your frontend clean, avoids hardcoding environment-specific URLs, and works seamlessly inside Docker.&lt;/p&gt;

&lt;p&gt;Next, we’ll run the React container and connect it to the backend via the shared Docker network.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7: Run the React container
&lt;/h3&gt;

&lt;p&gt;With your Docker image for the frontend built and the backend container already running, you're ready to launch the React app inside a container.&lt;/p&gt;

&lt;p&gt;Run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; frontend &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--network&lt;/span&gt; react-backend-net &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:80 &lt;span class="se"&gt;\&lt;/span&gt;
  react-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break down what this does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--name frontend&lt;/code&gt; assigns the container a name for internal communication and easier reference&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--network react-backend-net&lt;/code&gt; connects the container to the same Docker network as the backend, enabling internal communication&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-p 3000:80&lt;/code&gt; maps port 80 inside the container (used by Nginx to serve the app) to port 3000 on your machine, so you can access it at "&lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;"&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the container is running, visit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see your React app in the browser. If everything is configured correctly, it will fetch the user data from the backend container and display the list.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tt6su6ijo2xnwwojxmg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tt6su6ijo2xnwwojxmg.png" alt="backend running" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you run into any issues, don’t worry. We'll cover debugging in the next step.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Step 8: Debug or verify container communication
&lt;/h3&gt;

&lt;p&gt;If your React app displays an error like "Failed to fetch", it means the frontend was unable to reach the backend API. Here are several ways to diagnose and resolve the issue:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Check the backend logs
&lt;/h4&gt;

&lt;p&gt;Run the following command to inspect the backend container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker logs backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will show whether the request from the frontend reached the backend, and whether the server responded successfully or encountered an error (e.g., route not found or internal server error).&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Use browser developer tools (DevTools)
&lt;/h4&gt;

&lt;p&gt;In your browser:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open the Network tab (inside DevTools)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Refresh the page&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Look for the request to:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;   http://backend:4000/api/users
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Inspect the status code, response, and any error message in the preview or console.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This helps you determine if the request was blocked, returned a 404/500, or failed due to CORS or DNS issues.&lt;/p&gt;
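&lt;p&gt;The same decision tree can be sketched in code. This helper is purely illustrative (not part of the tutorial app): it maps what you observe in the Network tab to a likely cause:&lt;/p&gt;

```javascript
// Illustrative helper: map an observed fetch outcome to a likely cause.
// "networkError" means the request failed with no HTTP status at all
// (e.g. DNS failure, blocked CORS preflight, or unreachable backend).
function classifyFetchResult(result) {
  if (result.networkError) return 'network-or-cors';
  if (result.status === 404) return 'route-not-found';
  if (result.status >= 500) return 'server-error';
  return 'ok';
}
```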

&lt;h4&gt;
  
  
  3. Ping the backend container from inside the frontend container
&lt;/h4&gt;

&lt;p&gt;You can enter the frontend container's shell like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; frontend sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ping backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the containers are correctly connected to the same network, you'll see successful ping responses like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;PING&lt;/span&gt; &lt;span class="nf"&gt;backend &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;172.18&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="mi"&gt;56&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="nb"&gt;bytes&lt;/span&gt;
&lt;span class="mi"&gt;64&lt;/span&gt; &lt;span class="nb"&gt;bytes&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="mf"&gt;172.18&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;icmp_seq&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="n"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.102&lt;/span&gt; &lt;span class="n"&gt;ms&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;This confirms that the frontend can resolve and reach the backend container by name.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  (Optional) Test the backend API from within the frontend container
&lt;/h4&gt;

&lt;p&gt;Still inside the frontend shell, try:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget http://backend:4000/api/users
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or if &lt;code&gt;curl&lt;/code&gt; is available:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://backend:4000/api/users
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This helps you verify that a valid HTTP response is returned from the backend endpoint.&lt;/p&gt;

&lt;p&gt;If the containers can communicate but the React app still fails to fetch, check for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Typos in the API URL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Missing environment variables&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CORS issues in your backend (if applicable)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
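&lt;p&gt;To rule out the first two causes at once, it can help to build the API URL in one place from an environment variable with an explicit fallback. A minimal sketch (the variable name &lt;code&gt;REACT_APP_API_URL&lt;/code&gt; is an assumption; Create React App only exposes variables with the &lt;code&gt;REACT_APP_&lt;/code&gt; prefix):&lt;/p&gt;

```javascript
// Sketch: resolve the backend base URL from the environment, falling
// back to the container name used on the shared Docker network.
// REACT_APP_API_URL is a hypothetical variable name for illustration.
function resolveUsersUrl(env) {
  const base = env.REACT_APP_API_URL || 'http://backend:4000';
  // strip a trailing slash so the path joins cleanly
  return base.replace(/\/$/, '') + '/api/users';
}
```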

&lt;h3&gt;
  
  
  Step 9: Clean up
&lt;/h3&gt;

&lt;p&gt;Before we wrap up this project, let’s clean up everything we created: the containers and the custom network.&lt;/p&gt;

&lt;p&gt;This helps avoid conflicts when you’re working on future Docker projects, and keeps your environment tidy.&lt;/p&gt;

&lt;p&gt;Follow the steps below to remove both containers and the network.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Remove the frontend container
&lt;/h4&gt;

&lt;p&gt;We’ll start by stopping and removing the frontend container.&lt;/p&gt;

&lt;p&gt;Run this from any terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; frontend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If it works correctly, Docker will stop and remove the container, and you’ll see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fml87is5ae32tttl56l3o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fml87is5ae32tttl56l3o.png" alt="remove frontend" width="800" height="120"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Remove the backend container
&lt;/h4&gt;

&lt;p&gt;Next, remove the backend container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will stop and delete the backend container. Here’s what it looks like when it works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2chv3j059s8y1wifq2ym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2chv3j059s8y1wifq2ym.png" alt="remove backend" width="800" height="38"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Remove the custom Docker network
&lt;/h4&gt;

&lt;p&gt;Now let’s remove the network that connected the containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker network &lt;span class="nb"&gt;rm &lt;/span&gt;react-backend-net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If successful, Docker will simply return the name of the network:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesjcb2pv9tck19lxzens.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesjcb2pv9tck19lxzens.png" alt="docker network remove" width="800" height="55"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This confirms the network has been deleted.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Confirm everything is gone
&lt;/h4&gt;

&lt;p&gt;To double-check that no containers are still running, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see no active containers, just an empty table like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8tulvt50milxaby4b79.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8tulvt50milxaby4b79.png" alt="no-containers-running" width="800" height="30"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  So what you just built (and why it’s useful)
&lt;/h2&gt;

&lt;p&gt;You’ve just completed a hands-on Docker networking project using a frontend and backend app.&lt;/p&gt;

&lt;p&gt;Let’s walk through what you did:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You containerized two apps: a React frontend and a Node.js backend&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You created a custom Docker network so the containers could communicate&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You updated the frontend to connect to the backend by container name, not localhost&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You verified everything worked, using your browser, terminal, and Docker commands&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By doing this step-by-step, you've learned how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Run separate services inside containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Connect them on the same Docker network&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Avoid common issues developers face when containers can’t reach each other&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;In the next tutorial, I’ll show you how to simplify this setup using a &lt;strong&gt;docker-compose.yml&lt;/strong&gt; file, so you can launch everything with one command. Follow me on &lt;a href="https://dev.to/deborahemeni1"&gt;dev.to&lt;/a&gt; to get notified.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>dockernetworking</category>
      <category>docker</category>
      <category>containers</category>
      <category>networking</category>
    </item>
    <item>
<title>How to dockerize a Node.js application with a custom Dockerfile</title>
      <dc:creator>Deborah Emeni</dc:creator>
      <pubDate>Thu, 25 Sep 2025 07:58:48 +0000</pubDate>
      <link>https://forem.com/deborahemeni1/how-to-dockerize-a-nodejs-application-with-a-custom-dockerfile-7ji</link>
      <guid>https://forem.com/deborahemeni1/how-to-dockerize-a-nodejs-application-with-a-custom-dockerfile-7ji</guid>
      <description>&lt;p&gt;&lt;em&gt;In this guide, you'll learn how to Dockerize a Node.js application.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you are a total beginner, start with this: &lt;a href="https://dev.to/deborahemeni1/understanding-virtualization-containers-in-the-simplest-way-18m3"&gt;Getting Started with Docker (Understanding virtualization &amp;amp; containers in the simplest way)&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Containers let you package your app with everything it needs. But how do you build one from scratch? In this guide, we'll dockerize a Node.js app using a custom Dockerfile.&lt;/p&gt;

&lt;p&gt;We'll build a Node.js app that fetches live data from GitHub and serves it via an Express server. Then we'll walk through every line of the Dockerfile that packages it.&lt;/p&gt;

&lt;p&gt;Before you begin, make sure you have Docker installed and properly set up. You can follow the steps in this &lt;a href="https://dev.to/deborahemeni1/getting-started-with-docker-how-to-install-docker-and-set-it-up-correctly-4knb"&gt;article&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why should I Dockerize my Node.js app?
&lt;/h2&gt;

&lt;p&gt;Dockerizing your app lets you run it consistently across development, testing, and production. You won’t have to worry about differences across machines. It works the same wherever it runs.&lt;/p&gt;

&lt;p&gt;Let's look at a few reasons why it’s a good idea:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Environment consistency&lt;/strong&gt;: When you package Node.js, your code, and all dependencies into one container, everything runs in a controlled setup. This reduces errors caused by mismatched versions or missing packages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplified deployment&lt;/strong&gt;: You can move your app from your local machine to a server or cloud provider using the same image. There’s no need to set up the environment from scratch each time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Team collaboration&lt;/strong&gt;: Your teammates can run the exact same version of the app with one command. They don’t need to install Node.js, configure ports, or manually set up dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Better isolation&lt;/strong&gt;: Containers run independently from your main system. This helps avoid conflicts with other apps or services running on your machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Portability&lt;/strong&gt;: Docker runs your app the same way across all operating systems, like macOS, Windows, and Linux. You don’t need to adjust for each platform.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most importantly, Docker is useful when you're preparing for CI/CD pipelines, working with cloud platforms, or building apps with a team. It goes beyond packaging your app: it gives you a reliable, predictable way to run it anywhere.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is a Dockerfile?
&lt;/h2&gt;

&lt;p&gt;A Dockerfile is a plain text file that defines how your application will be built and run inside a Docker image. It includes everything the image needs: the base image to start from, any files to copy in, commands to run during setup, and how to start the app when the container runs.&lt;/p&gt;

&lt;p&gt;You can think of it as a build script; each instruction adds a new layer that eventually forms the final image.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Node.js app we'll be dockerizing
&lt;/h2&gt;

&lt;p&gt;We'll containerize a Node.js app built with Express. It fetches data from GitHub using Axios and serves it on the homepage. You’ll see how everything works step by step, from setting up the project to running it locally before we dockerize it.&lt;/p&gt;

&lt;p&gt;Start by creating a folder named &lt;strong&gt;stars_generator&lt;/strong&gt; and adding two files inside:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;package.json&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;server.js&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  1. The "package.json" file
&lt;/h3&gt;

&lt;p&gt;This file defines your app’s metadata, including the dependencies it needs and how to start it. We’ll use &lt;code&gt;express&lt;/code&gt; and &lt;code&gt;axios&lt;/code&gt;, and tell Node.js to run server.js when the app starts.&lt;/p&gt;

&lt;p&gt;Create &lt;strong&gt;package.json&lt;/strong&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docker-node-advanced"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"start"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"node server.js"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"dependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"express"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^4.18.2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"axios"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^1.4.0"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. The "server.js" file
&lt;/h3&gt;

&lt;p&gt;This is the main entry point of the app. It uses Express to serve a homepage, fetches GitHub repo data using Axios, and displays the star count in your browser.&lt;/p&gt;

&lt;p&gt;Add this code to &lt;strong&gt;server.js&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;axios&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;PORT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PORT&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://api.github.com/repos/nodejs/node&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`&amp;lt;h1&amp;gt;Node.js GitHub Stars&amp;lt;/h1&amp;gt;&amp;lt;p&amp;gt;This repo has &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stargazers_count&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; ⭐️ stars.&amp;lt;/p&amp;gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Failed to fetch data from GitHub&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;PORT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Server running on http://localhost:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;PORT&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What each part of this file does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;const express = require('express');&lt;/code&gt; – loads the Express framework to handle HTTP requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;const axios = require('axios');&lt;/code&gt; – loads Axios to make API calls.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;const app = express();&lt;/code&gt; – creates a new Express app.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;const PORT = process.env.PORT || 3000;&lt;/code&gt; – sets the port from your environment or defaults to 3000.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;app.get('/', async (req, res) =&amp;gt; { ... })&lt;/code&gt; – defines a route for the root URL (/). When visited, it fetches data from GitHub.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;const response = await axios.get(...)&lt;/code&gt; – sends a GET request to GitHub’s API for the Node.js repo.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;res.send(...)&lt;/code&gt; – sends back HTML with the current star count.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;catch (...)&lt;/code&gt; – catches any errors and returns a 500 status code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;app.listen(...)&lt;/code&gt; – starts the server and logs a message with the local URL.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Run the application to test it
&lt;/h3&gt;

&lt;p&gt;Before we containerize anything, test the app locally to make sure everything works as expected.&lt;/p&gt;

&lt;p&gt;In your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install
&lt;/span&gt;node server.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then open your browser and go to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3h8kcmx42gdh6rg4jdga.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3h8kcmx42gdh6rg4jdga.png" alt="Running application" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If the app crashes or doesn’t respond correctly, fix that first before moving forward. It’s better to start with a working app before introducing Docker.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In the next step, we'll containerize the app.&lt;/p&gt;




&lt;h2&gt;
  
  
  Containerizing the application
&lt;/h2&gt;

&lt;p&gt;We’re now going to containerize the Node.js app so it runs consistently anywhere (your machine, someone else’s, or a production server) without worrying about system differences. The goal is to package the app and its environment into a reusable Docker image.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create a Dockerfile
&lt;/h3&gt;

&lt;p&gt;To containerize the app, you’ll write a Dockerfile that gives Docker step-by-step instructions for building an image.&lt;/p&gt;

&lt;p&gt;Create a file named &lt;strong&gt;Dockerfile&lt;/strong&gt; in the root of your project folder (same level as &lt;strong&gt;server.js&lt;/strong&gt; and &lt;strong&gt;package.json&lt;/strong&gt;). Your folder structure should now look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stars_generator/
├── Dockerfile
├── package.json
└── server.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can create it by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, add this content to the Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start with Node.js 22 Alpine base image
FROM node:22-alpine

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json and install dependencies
COPY package.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

# Expose the port your app runs on
EXPOSE 3000

# Start the app
CMD ["npm", "start"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break it down line by line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:22-alpine
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells Docker to start from the official Node.js v22 Alpine image:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Alpine keeps things lightweight (around 40MB).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Node 22 is a stable version with long-term support (LTS).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The image includes both Node.js and npm.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WORKDIR /usr/src/app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets the working directory to &lt;code&gt;/usr/src/app&lt;/code&gt; inside the container. All following commands will be executed from this path. It helps keep things clean and consistent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY package.json ./
RUN npm install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;COPY&lt;/code&gt; command copies your local &lt;strong&gt;package.json&lt;/strong&gt; into the container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then &lt;strong&gt;RUN npm install&lt;/strong&gt; installs your dependencies inside the container.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;We do this before copying the rest of the code so Docker can cache this step. That way, if your code changes but your dependencies don’t, Docker won’t re-run &lt;code&gt;npm install&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
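&lt;p&gt;If your project also has a &lt;strong&gt;package-lock.json&lt;/strong&gt;, a common refinement of this step (an optional variant, not required for this tutorial) is to copy both manifest files and use &lt;code&gt;npm ci&lt;/code&gt;, which installs exactly what the lockfile specifies:&lt;/p&gt;

```dockerfile
# Optional variant of the dependency step, assuming a package-lock.json
# exists: npm ci gives reproducible installs and still benefits from
# Docker's layer cache.
COPY package*.json ./
RUN npm ci --omit=dev
```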

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY . .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This copies all the remaining files, including your &lt;strong&gt;server.js&lt;/strong&gt;, into the container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXPOSE 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;This includes files like &lt;code&gt;app.js&lt;/code&gt;, &lt;code&gt;routes/&lt;/code&gt;, &lt;code&gt;controllers/&lt;/code&gt;, etc.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Expose the port your app runs on
EXPOSE 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It informs Docker (and anyone running this image) that the app listens on port 3000.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This doesn't publish the port; it's just documentation within the Docker image. You'll still need to map it using &lt;code&gt;-p&lt;/code&gt; or &lt;code&gt;--publish&lt;/code&gt; when running the container.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start the app
CMD ["npm", "start"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It defines the default command Docker will run when a container is created from this image.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This runs &lt;code&gt;npm start&lt;/code&gt;, which typically starts your Node.js server.&lt;/li&gt;
&lt;li&gt;Make sure you've defined a &lt;code&gt;start&lt;/code&gt; script in your &lt;strong&gt;package.json&lt;/strong&gt;, like this:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"scripts": {
  "start": "node server.js"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Add a ".dockerignore" file
&lt;/h3&gt;

&lt;p&gt;This step prevents unnecessary files (like &lt;strong&gt;node_modules&lt;/strong&gt;, &lt;strong&gt;.env&lt;/strong&gt;, &lt;strong&gt;.git&lt;/strong&gt;, and logs) from being copied into the image. Do it before building the image: you get smaller, faster builds, fewer security risks, and cleaner containers.&lt;/p&gt;

&lt;p&gt;Create a &lt;strong&gt;.dockerignore&lt;/strong&gt; file in your project root with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_modules
npm-debug.log
Dockerfile
.dockerignore
.git
*.md
.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See the reasons why we added those to the &lt;strong&gt;.dockerignore&lt;/strong&gt; file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;node_modules&lt;/code&gt;: You'll run &lt;code&gt;npm install&lt;/code&gt; inside the container instead, so there's no need to copy bulky local dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;npm-debug.log&lt;/code&gt;: Log files aren't needed inside the image.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Dockerfile&lt;/code&gt; &amp;amp; &lt;code&gt;.dockerignore&lt;/code&gt;: These aren't usually needed inside the image.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.git&lt;/code&gt;: You don't need Git history or config inside a Docker image.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;*.md&lt;/code&gt;: Docs aren't needed unless they're part of the app logic.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.env&lt;/code&gt;: For security reasons, secrets and environment variables should be passed in at runtime.&lt;/li&gt;
&lt;/ul&gt;
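&lt;p&gt;If you prefer the terminal, you can create the same file in one step from your project root:&lt;/p&gt;

```shell
# Write the .dockerignore entries listed above in one go (run from the project root).
cat > .dockerignore <<'EOF'
node_modules
npm-debug.log
Dockerfile
.dockerignore
.git
*.md
.env
EOF

# Confirm the contents:
cat .dockerignore
```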

&lt;h3&gt;
  
  
  Step 3: Build and run the container
&lt;/h3&gt;

&lt;p&gt;Now that the Dockerfile and &lt;code&gt;.dockerignore&lt;/code&gt; are ready, let's move on to the following:&lt;/p&gt;

&lt;h4&gt;
  
  
  Build the Docker image
&lt;/h4&gt;

&lt;p&gt;We're building the Docker image to package our Node.js app, its dependencies, and environment into a single, reusable unit, so that it can run consistently anywhere, regardless of the host system.&lt;/p&gt;

&lt;p&gt;In your terminal, from the root of your project, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t &amp;lt;name_of_your_app&amp;gt; .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace the &lt;code&gt;&amp;lt;name_of_your_app&amp;gt;&lt;/code&gt; placeholder with the name of your app, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t stars_generator .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See what each part of this command means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-t stars_generator&lt;/code&gt;: This tags the image with the name &lt;code&gt;stars_generator&lt;/code&gt;, so you can later refer to it easily when running or managing containers (e.g., &lt;code&gt;docker run stars_generator&lt;/code&gt;). It's like giving your image a label or shortcut.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.&lt;/code&gt;: This tells Docker to look in the current directory (where your terminal is open and where the Dockerfile is located) for everything it needs to build the image, including the Dockerfile and source code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the command runs successfully, Docker processes the Dockerfile line by line, creating an image that can later be run as a container.&lt;/p&gt;

&lt;p&gt;Make sure your Docker daemon is running; otherwise you'll get this error when you run the command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiur1ccpyopm0slwt70jf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiur1ccpyopm0slwt70jf.png" alt="docker daemon not running" width="800" height="33"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This error means Docker is installed, but the Docker daemon (background service) isn't running, so your system can't build images or manage containers.&lt;/p&gt;

&lt;p&gt;To avoid or fix this error, make sure your Docker daemon is running by following the steps in this &lt;a href="https://dev.to/deborahemeni1/getting-started-with-docker-how-to-install-docker-and-set-it-up-correctly-4knb"&gt;article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Or you can simply do the following to fix it:&lt;/p&gt;

&lt;p&gt;1) If you're on macOS and using Docker Desktop:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open Docker Desktop and wait for it to fully start. You'll usually see the Docker icon in your menu bar with a green dot once it's ready.&lt;/li&gt;
&lt;li&gt;Then rerun the command.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2) If you're on Linux:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You may need to run:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; sudo systemctl start docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And, if you hit permission errors, add your user to the &lt;code&gt;docker&lt;/code&gt; group (log out and back in for it to take effect):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; sudo usermod -aG docker $USER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) If Docker Desktop isn't installed, install it from &lt;a href="https://www.docker.com/products/docker-desktop/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;
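&lt;p&gt;You can also check the daemon from a script before building. This sketch probes it with &lt;code&gt;docker info&lt;/code&gt;, which only succeeds when the daemon is reachable (it reads status and changes nothing):&lt;/p&gt;

```shell
# Probe the Docker daemon: `docker info` exits non-zero whenever the daemon
# (or the docker CLI itself) is unavailable, so it works as a readiness check.
if docker info > /dev/null 2>&1; then
  echo "Docker daemon is running"
else
  echo "Docker daemon is NOT running - start Docker Desktop, or on Linux: sudo systemctl start docker"
fi
```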

&lt;p&gt;When you run the command, you should see the following to show that the build was successful:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zq2tyl8cozglgh107sx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zq2tyl8cozglgh107sx.png" alt="docker build successful" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This shows that Docker built your custom image from the Dockerfile without any errors.&lt;/p&gt;

&lt;h5&gt;
  
  
  What the build output means, line by line
&lt;/h5&gt;

&lt;p&gt;Let's walk through the major steps you see in the terminal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;[internal] load build definition from Dockerfile&lt;/code&gt;: Docker is reading your Dockerfile and preparing the build context.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;transferring dockerfile: 423B&lt;/code&gt;: It reads the Dockerfile (423 bytes in size).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;load metadata for docker.io/library/node:22-alpine&lt;/code&gt;: It pulls the Node.js v22 Alpine base image from Docker Hub.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;extracting sha256:...&lt;/code&gt;: Docker unpacks (extracts) layers of the Node.js base image.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;[2/5] WORKDIR /usr/src/app&lt;/code&gt;: Sets the working directory in the container to &lt;code&gt;/usr/src/app&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;[3/5] COPY package.json ./&lt;/code&gt;: Copies only &lt;code&gt;package.json&lt;/code&gt; so Docker can cache dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;[4/5] RUN npm install&lt;/code&gt;: Installs the dependencies listed in your &lt;code&gt;package.json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;[5/5] COPY . .&lt;/code&gt;: Copies the rest of your project files (like &lt;code&gt;server.js&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;exporting to image&lt;/code&gt;: Your custom image is finalized and saved with the tag &lt;code&gt;stars_generator&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary: you now have a reusable Docker image called &lt;strong&gt;stars_generator&lt;/strong&gt; that contains your Node.js app and its dependencies.&lt;/p&gt;
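&lt;p&gt;For reference, the numbered build steps map one-to-one onto a Dockerfile along these lines (a sketch reconstructed from the build output above; it should match the Dockerfile you wrote earlier):&lt;/p&gt;

```dockerfile
# [1/5] Base image pulled from Docker Hub
FROM node:22-alpine
# [2/5] Working directory inside the container
WORKDIR /usr/src/app
# [3/5] Copy only package.json first so the npm install layer can be cached
COPY package.json ./
# [4/5] Install dependencies
RUN npm install
# [5/5] Copy the rest of the source (filtered by .dockerignore)
COPY . .
# Default command when a container starts from this image
CMD ["npm", "start"]
```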

&lt;h4&gt;
  
  
  Run the container
&lt;/h4&gt;

&lt;p&gt;Now that the image is built, your next step is to run the container. This step is important because it allows you to start and test your Node.js app inside an isolated Docker environment using the image you just built.&lt;/p&gt;

&lt;p&gt;So, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 3000:3000 stars_generator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What this command means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker run&lt;/code&gt;: Tells Docker to start a new container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-p 3000:3000&lt;/code&gt;: Maps port 3000 on your machine (host) to port 3000 inside the container, so you can access your app at &lt;code&gt;http://localhost:3000&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;stars_generator&lt;/code&gt;: The name of the image you built earlier.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In simple terms: you're launching your app in Docker and making it available at &lt;code&gt;localhost:3000&lt;/code&gt; in your browser.&lt;/p&gt;

&lt;p&gt;This is what you should see when you run the command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatwtuetotbv0fce1b1bo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatwtuetotbv0fce1b1bo.png" alt=" " width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What this output means is that your Docker container successfully started and ran the Node.js app inside it. Specifically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; docker-node-advanced@1.0.0 start
&amp;gt; node server.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows that the container is executing the &lt;code&gt;start&lt;/code&gt; script from your &lt;code&gt;package.json&lt;/code&gt;, which runs &lt;code&gt;server.js&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Then this line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Server running on http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;...means your app is now live and listening on port 3000. Since you mapped that port to your local machine with &lt;code&gt;-p 3000:3000&lt;/code&gt;, you can open your browser and visit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to test the app running inside Docker.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;But wait, we're not done. You might be asking: "we could already run the Node.js app locally, so what's the point of these steps?" You'll find out in the next section.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why we containerized the Node.js app
&lt;/h2&gt;

&lt;p&gt;You might wonder: we could already run the app with &lt;code&gt;node server.js&lt;/code&gt;, so why go through the trouble of Dockerizing it?&lt;/p&gt;

&lt;p&gt;Let's see the reasons why:&lt;/p&gt;

&lt;h3&gt;
  
  
   1. Consistent environment everywhere
&lt;/h3&gt;

&lt;p&gt;Docker lets you package your app with its runtime, dependencies, and system libraries into a single image. That means:&lt;/p&gt;

&lt;p&gt;It runs exactly the same on any system like Windows, macOS, or Linux.&lt;/p&gt;

&lt;p&gt;No more “works on my machine” issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Easy sharing and deployment
&lt;/h3&gt;

&lt;p&gt;Once you have a container image:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You can push it to &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;Docker Hub&lt;/a&gt; (a public or private container registry).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Anyone can pull and run it without needing to clone your code or install Node.js.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is especially useful in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Teams with different dev machines&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CI/CD pipelines&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploying to production (on cloud or Kubernetes)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Safe isolation
&lt;/h3&gt;

&lt;p&gt;The app runs inside a container, isolated from the rest of your system. No conflicts with global Node versions or ports. You can stop and delete containers without affecting anything on your machine.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to push your Docker image to Docker Hub
&lt;/h2&gt;

&lt;p&gt;Since this is a small project for personal learning, this step is optional. However, it's worth doing at least once to understand the workflow. Here's why:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You'll see how teams share and deploy containerized apps.&lt;/li&gt;
&lt;li&gt;You'll be able to pull your image from any machine without copying  your code.&lt;/li&gt;
&lt;li&gt;You'll get familiar with real-world Docker usage, which is very useful for DevOps, backend, or platform roles.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you're using the free tier on Docker Hub, just be mindful not to push too many unused images. But for learning? It's definitely not a waste.&lt;/p&gt;

&lt;p&gt;Now let's go through the steps to push your Docker image to Docker Hub.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create an account on Docker Hub
&lt;/h3&gt;

&lt;p&gt;If you haven't already, go to the Docker Hub &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;website&lt;/a&gt;, sign up and create a Docker ID (which is your username).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0yzisa7f1uov1ml1yf6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0yzisa7f1uov1ml1yf6.png" alt="docker-hub-homepage" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Tag the image with your Docker Hub username
&lt;/h3&gt;

&lt;p&gt;Let’s say your Docker Hub username is &lt;code&gt;your_dockerhub_username&lt;/code&gt;. You’ll tag the image like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag stars_generator &amp;lt;your_dockerhub_username&amp;gt;/stars_generator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This names your image so Docker knows where to push it.&lt;/p&gt;

&lt;p&gt;Make sure your terminal is in the project directory (where your Dockerfile is).&lt;/p&gt;

&lt;p&gt;Then tag the image for Docker Hub (replace &lt;code&gt;&amp;lt;your_dockerhub_username&amp;gt;&lt;/code&gt; with your actual Docker Hub username).&lt;/p&gt;

&lt;p&gt;This command doesn’t depend on any specific files in the directory; it works because Docker knows the image you built (by name) and assigns it a new tag that includes your Docker Hub namespace.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Why tagging is important&lt;/strong&gt;:&lt;br&gt;
Docker needs to know where to push the image. &lt;strong&gt;stars_generator&lt;/strong&gt; is your local image name, but &lt;strong&gt;&amp;lt;your_dockerhub_username&amp;gt;/stars_generator&lt;/strong&gt; tells Docker to push it to your personal namespace on Docker Hub.&lt;/p&gt;
&lt;/blockquote&gt;
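&lt;p&gt;The new name is just a longer image reference; conceptually it composes like this (the username below is a placeholder):&lt;/p&gt;

```shell
# A full image reference is <registry>/<namespace>/<repository>:<tag>.
# Docker Hub is the default registry and "latest" the default tag, so tagging
# for Docker Hub only adds your username as the namespace.
USERNAME="your_dockerhub_username"   # placeholder - use your real Docker ID
IMAGE="${USERNAME}/stars_generator:latest"
echo "$IMAGE"   # prints your_dockerhub_username/stars_generator:latest
```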

&lt;h3&gt;
  
  
  Step 3: Log in to Docker from your terminal
&lt;/h3&gt;

&lt;p&gt;Before pushing the image to Docker Hub, you need to authenticate your Docker CLI with your Docker Hub account. Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll be prompted to enter your Docker Hub &lt;strong&gt;username&lt;/strong&gt; and &lt;strong&gt;password&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This step is necessary because Docker won’t let you push images to your account unless you’re logged in; it needs to confirm that you have permission to upload to the namespace you tagged the image with (e.g., &lt;strong&gt;&amp;lt;your_dockerhub_username&amp;gt;/stars_generator&lt;/strong&gt;).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: If you're using Docker Desktop, you may already be signed in there, but it's still best to run &lt;code&gt;docker login&lt;/code&gt; from the terminal to be sure your CLI is authenticated.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once you run the docker login command in your terminal, Docker may prompt you to authenticate using a web-based login flow, especially if you're using Docker Desktop or haven’t logged in recently.&lt;/p&gt;

&lt;p&gt;You’ll see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49jyb2eo1nbp5nhsaano.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49jyb2eo1nbp5nhsaano.png" alt="docker login command output" width="800" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What this means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker generated a one-time device confirmation code (like AKBT-IEBG) that links your terminal session to your Docker account.&lt;/li&gt;
&lt;li&gt;It also gives you a link: &lt;a href="https://login.docker.com/activate" rel="noopener noreferrer"&gt;https://login.docker.com/activate&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;When you press &lt;strong&gt;ENTER&lt;/strong&gt;, your browser will open that page.&lt;/li&gt;
&lt;li&gt;Once there, log in with your Docker Hub credentials and enter the code shown in your terminal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This links your terminal session with your Docker Hub account, allowing you to push images.&lt;/p&gt;

&lt;p&gt;Once logged in successfully, you should see this confirmation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcni6uguat1qgzw8pftlf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcni6uguat1qgzw8pftlf.png" alt="confirmation-showing-docker-login" width="800" height="843"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your terminal session will be authorized to push images under your username until the session expires or you log out. You will see "Login Succeeded" in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnie9mtni4xn6tu62z79z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnie9mtni4xn6tu62z79z.png" alt="terminal-docker-login-success" width="800" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Push the image to Docker Hub
&lt;/h3&gt;

&lt;p&gt;Once you’re logged in, it’s time to upload your Docker image to Docker Hub. You’ll use the docker push command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push &amp;lt;your_dockerhub_username&amp;gt;/stars_generator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells Docker to push the image you previously tagged (&lt;code&gt;&amp;lt;your_dockerhub_username&amp;gt;/stars_generator&lt;/code&gt;) to your public Docker Hub repository under your account.&lt;/p&gt;

&lt;p&gt;You’ll see Docker upload the image layer by layer. It only pushes the layers that don’t already exist in your Docker Hub account, which saves time.&lt;/p&gt;

&lt;p&gt;Once it’s done, your terminal should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzgc8m858187bcb7jepe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzgc8m858187bcb7jepe.png" alt="result of pushing the image to docker hub" width="800" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, visit your image directly at:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://hub.docker.com/repository/docker/your-dockerhub-username/stars_generator/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;your-dockerhub-username&lt;/code&gt; in the URL with your actual Docker Hub username.&lt;/p&gt;

&lt;p&gt;You’ll see a page like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qaypwsccvcj1jl8c6q6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qaypwsccvcj1jl8c6q6.png" alt="inside-docker-hub" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What you’re looking at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tag section&lt;/strong&gt;: Shows which versions of your image are available (e.g., latest)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Push and pull stats&lt;/strong&gt;: Tells you how recently the image was pushed and how many times it’s been pulled&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker command prompt&lt;/strong&gt;: Gives you the exact docker pull command anyone can use to download the image&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;General settings&lt;/strong&gt;: Lets you add a description, set categories, and control collaboration&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes your image publicly accessible, meaning anyone can now pull it using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull &amp;lt;your_dockerhub_username&amp;gt;/stars_generator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why this is important:&lt;br&gt;
Pushing to Docker Hub is useful if you want to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Share your app with teammates&lt;/li&gt;
&lt;li&gt;Use your image in a CI/CD pipeline&lt;/li&gt;
&lt;li&gt;Deploy it to a cloud platform like AWS&lt;/li&gt;
&lt;li&gt;Archive a version of your containerized app&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Step 5: Anyone can now run your app
&lt;/h3&gt;

&lt;p&gt;Now that your image is on Docker Hub, anyone can pull and run it, no setup or dependencies required.&lt;/p&gt;

&lt;p&gt;All they need to do is run these two commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull &amp;lt;your_dockerhub_username&amp;gt;/stars_generator
docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000 &amp;lt;your_dockerhub_username&amp;gt;/stars_generator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;your_dockerhub_username&amp;gt;&lt;/code&gt; with your Docker Hub username.&lt;/p&gt;

&lt;p&gt;This will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Download the image from your Docker Hub repository&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the container and expose port 3000 on their local machine&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once it’s running, they can open:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;…and they’ll see your Node.js app in action - without needing to install Node, run &lt;code&gt;npm install&lt;/code&gt;, or clone any repo.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;So, now your app becomes portable and self-contained. Anyone on any OS can run it the same way, whether on their laptop, a VM, or a production server.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  When should you Dockerize and push your app?
&lt;/h2&gt;

&lt;p&gt;You might have this question in mind:&lt;br&gt;
“Should I finish building and testing my app before Dockerizing and pushing it to Docker Hub?”&lt;/p&gt;

&lt;p&gt;In real-world projects, the answer depends on your team’s workflow, deployment goals, and stage of development. Let’s break it down.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Finish building and testing first, then Dockerize and push
&lt;/h3&gt;

&lt;p&gt;This is the most common and practical approach in real-world scenarios, especially for production services.&lt;/p&gt;

&lt;p&gt;You typically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Build the core functionality of your app.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test it locally (unit tests, integration tests, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confirm that it works as expected outside of a container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then, write a Dockerfile and containerize it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test the container (does it build, does it start up, are all env vars respected?).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Only then do you push it to Docker Hub or your private registry.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example:&lt;br&gt;
Let's say you're building a payment microservice for an e-commerce system. You'd want to make sure the service can talk to your payment gateway, handle retries, fail gracefully, and return correct responses before packaging it in a container. Once it’s reliable, Dockerizing it ensures it behaves the same across staging and production.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Dockerize early during development
&lt;/h3&gt;

&lt;p&gt;In some teams, especially those working on cloud-native platforms (like Kubernetes), you might Dockerize the app early, even while it's still being built. This is useful if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Your team is standardizing development across machines using Docker containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You’re testing deployments continuously on a platform like AWS ECS or GKE.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You're writing infrastructure-as-code (IaC) alongside the app.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example:&lt;br&gt;
A team building a GraphQL backend might scaffold the app and immediately create a Dockerfile so every developer can spin it up using &lt;code&gt;docker-compose&lt;/code&gt; and work in a consistent environment. They may push early test builds to a registry like GitHub Container Registry, even before the app is "done."&lt;/p&gt;

&lt;h3&gt;
  
  
  So what’s the best approach?
&lt;/h3&gt;

&lt;p&gt;If your goal is to publish a reliable, working app on Docker Hub, it’s better to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Build it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then containerize it and push.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This keeps your registry clean and avoids wasting storage on incomplete or broken containers.&lt;/p&gt;

&lt;p&gt;But if you're iterating fast, and container consistency is a priority, Dockerizing early can help, as long as you're comfortable handling container-related issues while still building the app.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>dockerfile</category>
      <category>node</category>
      <category>containerize</category>
    </item>
    <item>
      <title>Getting Started with Docker - How to install Docker and set it up correctly</title>
      <dc:creator>Deborah Emeni</dc:creator>
      <pubDate>Sun, 22 Jun 2025 20:30:55 +0000</pubDate>
      <link>https://forem.com/deborahemeni1/getting-started-with-docker-how-to-install-docker-and-set-it-up-correctly-4knb</link>
      <guid>https://forem.com/deborahemeni1/getting-started-with-docker-how-to-install-docker-and-set-it-up-correctly-4knb</guid>
      <description>&lt;p&gt;Before running any docker commands, you need to install Docker and ensure it’s running properly. You’ll need to do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Docker on your system&lt;/li&gt;
&lt;li&gt;Verify that Docker is running&lt;/li&gt;
&lt;li&gt;Run a test container to confirm everything is set up&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 1: Install Docker
&lt;/h3&gt;

&lt;p&gt;Go to the official Docker website and download Docker Desktop for your operating system.&lt;/p&gt;

&lt;p&gt;Follow the steps &lt;a href="https://www.docker.com/get-started/" rel="noopener noreferrer"&gt;here&lt;/a&gt; to download Docker or click the “Download Docker Desktop” button on the website:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqgmn6xsebuxxilxyutz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqgmn6xsebuxxilxyutz.png" alt="Download docker" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose the version that matches your operating system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Windows&lt;/strong&gt;: Install &lt;strong&gt;Docker Desktop for Windows&lt;/strong&gt; (Follow the steps &lt;a href="https://docs.docker.com/desktop/setup/install/windows-install/" rel="noopener noreferrer"&gt;here&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mac&lt;/strong&gt;: Install &lt;strong&gt;Docker Desktop for Mac&lt;/strong&gt; (Follow the steps &lt;a href="https://docs.docker.com/desktop/setup/install/mac-install/" rel="noopener noreferrer"&gt;here&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linux&lt;/strong&gt;: Install &lt;strong&gt;Docker Engine&lt;/strong&gt; manually (Follow the steps &lt;a href="https://docs.docker.com/engine/install/ubuntu/" rel="noopener noreferrer"&gt;here&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2: Verify that Docker is running
&lt;/h3&gt;

&lt;p&gt;After installation, Docker needs to be running before you can use it.&lt;/p&gt;

&lt;p&gt;On Windows &amp;amp; Mac, open Docker Desktop and wait for it to say &lt;strong&gt;"Docker is running."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On Linux, start Docker manually by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, verify it’s running with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If Docker is running, you’ll see detailed information about your system. For example, if you run the command, you’ll see something like this in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlsn4g5uhgv8wuy4szbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlsn4g5uhgv8wuy4szbf.png" alt="docker info terminal output" width="800" height="820"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The terminal output for the &lt;code&gt;docker info&lt;/code&gt; command above tells you a few things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your system is running Docker version &lt;strong&gt;27.5.1&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The Docker daemon (the background service that runs on your machine and manages Docker containers, images, networks, and volumes) is active and responding (shown by the full server information being available).&lt;/li&gt;
&lt;li&gt;Docker reports &lt;strong&gt;0 containers running&lt;/strong&gt;, which is expected if you haven’t started any yet.&lt;/li&gt;
&lt;/ul&gt;
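&lt;p&gt;If you ever want to script this check, you don’t need to parse that output: &lt;code&gt;docker info&lt;/code&gt; exits with status 0 when the daemon is reachable and non-zero otherwise. Here’s a minimal sketch of that idea; &lt;code&gt;true&lt;/code&gt; and &lt;code&gt;false&lt;/code&gt; stand in for a reachable and an unreachable daemon so the example runs even on a machine without Docker:&lt;/p&gt;

```shell
# Report whether a daemon-check command succeeds, based only on its exit status.
check_daemon() {
  if "$@" >/dev/null 2>&1; then
    echo "running"
  else
    echo "not running"
  fi
}

# Real usage would be:  check_daemon docker info
check_daemon true    # a command that succeeds, like a reachable daemon
check_daemon false   # a command that fails, like an unreachable daemon
```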

&lt;p&gt;Now, see the screenshot below of Docker Desktop showing &lt;strong&gt;“Engine running”&lt;/strong&gt; in the lower-left corner, confirming the Docker engine is active:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmb9kuxifoix0naidaipb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmb9kuxifoix0naidaipb.png" alt="Engine running docker desktop" width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Run a test container
&lt;/h3&gt;

&lt;p&gt;To confirm everything works, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download the &lt;a href="https://hub.docker.com/_/hello-world" rel="noopener noreferrer"&gt;hello-world&lt;/a&gt; container from &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;Docker Hub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Run it as a container&lt;/li&gt;
&lt;li&gt;Display a message confirming that Docker is set up correctly&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Note: &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;Docker Hub&lt;/a&gt; is an online repository where you can find and download container images for different software.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A Docker image is a lightweight, standalone package that contains everything needed to run a container, including the OS, system tools, and application dependencies. When you run a container, it is created from an image.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you see an output saying &lt;strong&gt;"Hello from Docker!"&lt;/strong&gt;, then Docker is working.&lt;/p&gt;

&lt;p&gt;You’ll see something like this in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qpz7z3kylzwq7tjy0zp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qpz7z3kylzwq7tjy0zp.png" alt="docker run helloworld command" width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Again, here’s what this command does:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you run &lt;code&gt;docker run hello-world&lt;/code&gt;, you’re asking Docker to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;download a small test image called hello-world (if it's not already on your machine)&lt;/li&gt;
&lt;li&gt;create and start a container from that image&lt;/li&gt;
&lt;li&gt;run a program inside the container that prints a message to confirm everything is working&lt;/li&gt;
&lt;/ul&gt;
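&lt;p&gt;To make that first step concrete, here’s a toy shell sketch of the “check locally, pull if missing, then run” flow. This is &lt;em&gt;not&lt;/em&gt; real Docker code; a plain directory stands in for Docker’s local image store:&lt;/p&gt;

```shell
# Toy model of docker run's image lookup (a directory stands in for the image store)
CACHE=/tmp/toy-image-cache
IMAGE=hello-world
mkdir -p "$CACHE"

if [ ! -e "$CACHE/$IMAGE" ]; then
  echo "Unable to find image '$IMAGE' locally"
  touch "$CACHE/$IMAGE"        # stands in for pulling the image from Docker Hub
  echo "Pull complete"
fi

echo "Hello from Docker!"      # stands in for running the container's program
```

&lt;p&gt;Run it twice and you’ll see the “pull” messages only the first time, which is exactly how Docker reuses a cached image on later runs.&lt;/p&gt;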

&lt;p&gt;Let’s go into it in more detail, okay?&lt;/p&gt;

&lt;h3&gt;
  
  
  Step-by-step breakdown of what happened
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Docker checked for the image locally&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The terminal above said:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Unable to find image &lt;span class="s1"&gt;'hello-world:latest'&lt;/span&gt; locally
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That means Docker looked on your computer for the hello-world image but didn’t find it, so it had to pull it from Docker Hub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Docker pulled the image from Docker Hub&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;latest: Pulling from library/hello-world

Pull &lt;span class="nb"&gt;complete

&lt;/span&gt;Status: Downloaded newer image &lt;span class="k"&gt;for &lt;/span&gt;hello-world:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows that Docker successfully downloaded the latest version of the hello-world image from Docker Hub. As I said earlier, Docker Hub is like a public library for container images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Docker ran the container&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the image was downloaded, Docker created a container and ran it. The container contains a tiny program that simply prints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Hello from Docker!

This message shows that your installation appears to be working correctly.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This message means Docker is installed correctly, the Docker daemon is running, and Docker can download, create, and run containers successfully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Docker explains what it just did&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The message continues:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;1. The Docker client contacted the Docker daemon.

2. The Docker daemon pulled the &lt;span class="s2"&gt;"hello-world"&lt;/span&gt; image from the Docker Hub.

3. The Docker daemon created a new container from that image...

4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives you insight into how Docker works behind the scenes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;Docker client&lt;/strong&gt; (you) sends a command&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Docker daemon&lt;/strong&gt; (background service) does the work: pulling the image, creating the container, and running the code&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;output of the container&lt;/strong&gt; is sent back to your terminal&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. It gives you a next step&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At the bottom, Docker suggests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;To try something more ambitious, you can run an Ubuntu container with:

docker run &lt;span class="nt"&gt;-it&lt;/span&gt; ubuntu bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You have now successfully installed Docker and ensured it's running correctly. Congrats!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>gettingstartedwithdocker</category>
      <category>webdev</category>
      <category>devops</category>
    </item>
    <item>
      <title>Understanding virtualization &amp; containers in the simplest way</title>
      <dc:creator>Deborah Emeni</dc:creator>
      <pubDate>Sun, 22 Jun 2025 20:19:06 +0000</pubDate>
      <link>https://forem.com/deborahemeni1/understanding-virtualization-containers-in-the-simplest-way-18m3</link>
      <guid>https://forem.com/deborahemeni1/understanding-virtualization-containers-in-the-simplest-way-18m3</guid>
      <description>&lt;h2&gt;
  
  
  What you’ll learn
&lt;/h2&gt;

&lt;p&gt;By the end of this section, you’ll:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand what virtual machines (VMs) are and why they were created.&lt;/li&gt;
&lt;li&gt;Learn the problems VMs solve and their limitations&lt;/li&gt;
&lt;li&gt;See why containers exist and how they compare to VMs&lt;/li&gt;
&lt;li&gt;Get an introduction to Docker and why it is used&lt;/li&gt;
&lt;li&gt;Complete a hands-on project to run an Ubuntu container and execute basic commands inside it.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  How were applications traditionally run?
&lt;/h2&gt;

&lt;p&gt;Before we get into virtual machines and containers, let’s step back and talk about how teams used to run software in the early days.&lt;/p&gt;

&lt;p&gt;Now, every application, as you might know already, needs to run somewhere, right?&lt;/p&gt;

&lt;p&gt;And that means it requires a computer, which in turn needs an operating system (OS) such as Linux, Windows, or macOS.&lt;/p&gt;

&lt;p&gt;On top of that, applications rely on what we call “dependencies,” like runtime libraries or language versions, to function properly.&lt;/p&gt;

&lt;p&gt;Now, before what we call “virtualization” (which you’ll soon understand), each workload had its own server.&lt;/p&gt;

&lt;p&gt;By “workload”, I mean any software or service running on a server, like a web app, database, or file server, that uses system resources such as CPU, memory, and storage.&lt;/p&gt;

&lt;p&gt;To better understand this, let me explain what this looked like in practice:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm46m2vs1sgdecgm4s6h6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm46m2vs1sgdecgm4s6h6.png" alt="Three separate physical servers running a web app, a database, and a file server, each with its own OS and dependencies to avoid software conflicts" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s say a company wants to run three different parts of its system:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;a web application that customers interact with&lt;/li&gt;
&lt;li&gt;a database that stores all the data&lt;/li&gt;
&lt;li&gt;a file server for internal documents and media&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, this company wants to keep things simple and avoid problems like one service breaking another.&lt;/p&gt;

&lt;p&gt;For instance, the database might need a different version of a library than the web app can work with, a problem usually called a “software version conflict”.&lt;/p&gt;

&lt;p&gt;So what do they do?&lt;/p&gt;

&lt;p&gt;They set up three separate physical servers, with one for each.&lt;/p&gt;

&lt;p&gt;Meaning that each server has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;its own operating system&lt;/li&gt;
&lt;li&gt;its own set of dependencies&lt;/li&gt;
&lt;li&gt;and it only runs one service, so that nothing conflicts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Okay, now that you get the “gist”, let me tell you what was wrong with this setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  What were the limitations?
&lt;/h3&gt;

&lt;p&gt;Don’t get me wrong, this setup worked, okay? But it came with serious issues like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;It was expensive&lt;/strong&gt;: You had to buy and maintain separate hardware for each workload, even if it didn’t use all the resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resources were wasted&lt;/strong&gt;: Most servers sat idle most of the time, using only 10–30% of their total capacity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scaling was hard&lt;/strong&gt;: If you needed more resources for the web app, for example, you couldn’t just tweak something. You had to buy a whole new server, install everything again, and configure it from scratch.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, the question became:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;How can we run more than one workload on the same machine, without creating conflicts or wasting resources?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s what led to the rise of “Virtualization”, which I will define in the next section.&lt;/p&gt;




&lt;h2&gt;
  
  
  So… what is virtualization &amp;amp; virtual machines (VMs)?
&lt;/h2&gt;

&lt;p&gt;Now that you understand what brought about virtualization, it’s time to understand what it is.&lt;/p&gt;

&lt;p&gt;Virtualization allows multiple operating systems to run on the same physical machine. &lt;/p&gt;

&lt;p&gt;So, in place of having one OS per machine, you can create multiple virtual machines on a single physical server.&lt;/p&gt;

&lt;p&gt;Each virtual machine acts like a separate computer with its own operating system, memory, and storage, even though they all share physical hardware.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example showing where virtualization is applied
&lt;/h3&gt;

&lt;p&gt;Next, I’ll give you a real-world use case in cloud computing so that you can better understand how virtualization works in practice.&lt;/p&gt;

&lt;p&gt;Have you heard of cloud providers? Like AWS, Google Cloud, or Microsoft Azure? They all use virtualization to rent out virtual machines in place of physical machines.&lt;/p&gt;

&lt;p&gt;So when you create a cloud server with any of these cloud providers, what’s really happening is you’re getting a virtual machine running inside a massive data center (which is a facility filled with thousands of interconnected physical servers that host VMs for multiple users).&lt;/p&gt;

&lt;p&gt;Now, without virtualization, as you can see, cloud computing wouldn’t exist, and companies would have to buy and maintain their own physical servers. If you don’t understand what cloud computing means here, this is how I’d define it:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cloud computing is the ability to access computing resources like servers, storage, and databases over the internet without owning physical hardware.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So, what makes all this (virtualization) possible is a special piece of software called a “hypervisor”, which allows multiple VMs to run on the same physical machine. See the illustration below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ppv8by4kwuj1126h6or.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ppv8by4kwuj1126h6or.png" alt="Diagram showing virtualization: one physical server running multiple virtual machines using a hypervisor. Each VM includes its own OS, memory, and storage, while sharing the same physical hardware." width="800" height="693"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What problems did virtual machines solve?
&lt;/h3&gt;

&lt;p&gt;As you can see, VMs solved the wasted resources problem of physical servers because with virtualization, one physical server could host multiple VMs, each running different applications.&lt;/p&gt;

&lt;p&gt;And with this came several benefits like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Better resource utilization: A single machine can run multiple applications while maximizing resources.&lt;/li&gt;
&lt;li&gt;Cost savings: Fewer physical machines are needed which automatically reduces the cost of hardware and maintenance.&lt;/li&gt;
&lt;li&gt;Isolation: Each application runs its own VM, thereby preventing conflicts between applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, even though VMs obviously improved things, they still had limitations and that’s what I’ll talk about next.&lt;/p&gt;

&lt;h3&gt;
  
  
  What were the limitations of virtual machines?
&lt;/h3&gt;

&lt;p&gt;There are several reasons why VMs are not always the best solution. Let’s quickly run through them:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Heavy resource usage&lt;/strong&gt;: Each VM runs a full OS, which takes up a lot of RAM and CPU.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slow startup&lt;/strong&gt;: Booting a VM can take minutes, just like starting a computer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inefficient scaling&lt;/strong&gt;: Spinning up new VMs requires significant computing power and takes time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OS redundancy&lt;/strong&gt;: If you run 10 Ubuntu VMs, you’re running separate copies of Ubuntu, wasting storage.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These limitations led to the need for “containers”, which we’ll discuss next.&lt;/p&gt;




&lt;h2&gt;
  
  
  What are containers, and how do they compare to VMs?
&lt;/h2&gt;

&lt;p&gt;Containers solve many of the problems VMs have. In place of running a full operating system for each application, containers share the host OS, making them lightweight, fast, and resource-friendly. See the illustration below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhkig5j8v4966mdyti3c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhkig5j8v4966mdyti3c.png" alt="vms vs containers" width="800" height="793"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, how are containers different from VMs? Look at the table below to understand their differences.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Feature&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;VMs&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;Containers&lt;/strong&gt; (Docker)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Startup time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Minutes&lt;/td&gt;
&lt;td&gt;Seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Resource usage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High (full OS per VM)&lt;/td&gt;
&lt;td&gt;Low (shared OS)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Portability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Limited (OS-dependent)&lt;/td&gt;
&lt;td&gt;High (runs anywhere)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Slow&lt;/td&gt;
&lt;td&gt;Fast (instantly spin up containers)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Isolation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Strong (separate OS)&lt;/td&gt;
&lt;td&gt;Strong but lightweight&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;With containers, applications start in seconds instead of minutes, and they use fewer resources since they don’t need a full OS for each instance.&lt;/p&gt;

&lt;p&gt;Next we’ll talk about the platform that makes this possible.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is Docker, and why use it?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; is a containerization platform that allows developers to create, deploy, and manage containers easily. In place of setting up separate VMs, you can package an application with all its dependencies into a lightweight, portable container.&lt;/p&gt;

&lt;p&gt;A common use case: developers use Docker to make sure that applications run exactly the same way in development, testing, and production environments. It takes out the “works on my machine” problem by making software behave the same way everywhere.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Next, we’ll do a mini project to put all you’ve learned so far into practice.&lt;/em&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  Mini-project: Run an Ubuntu container using Docker
&lt;/h1&gt;

&lt;p&gt;Now that you understand the difference between VMs and containers, it’s time to get hands-on and run your first container using Docker.&lt;/p&gt;

&lt;p&gt;But before we do that, let’s talk about what an Ubuntu container is and why you would want to run an Ubuntu container in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an Ubuntu container?
&lt;/h2&gt;

&lt;p&gt;An Ubuntu container is a lightweight, minimal version of Ubuntu that runs inside a container. It does not include a full desktop environment, but it does have all the essential Linux utilities needed to run software.&lt;/p&gt;

&lt;p&gt;When you install Ubuntu on a computer, it comes with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Linux kernel (which interacts with the hardware).&lt;/li&gt;
&lt;li&gt;System files and utilities&lt;/li&gt;
&lt;li&gt;Preinstalled software&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When would you want to run an Ubuntu container?
&lt;/h2&gt;

&lt;p&gt;Let’s see some reasons why you would want to run an Ubuntu container in a practical scenario:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Testing software in a clean environment
&lt;/h3&gt;

&lt;p&gt;Let’s say you’re developing an application and need to test it on Ubuntu 22.04, but your computer runs Windows or macOS. Instead of setting up a virtual machine, you can launch an Ubuntu container in seconds and test your application inside it.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Running Linux tools on a non-Linux system
&lt;/h3&gt;

&lt;p&gt;If you use Windows or macOS, you may sometimes need access to Linux commands or tools that are only available on Ubuntu. Running an Ubuntu container gives you access to an Ubuntu terminal without installing Ubuntu on your machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Experimenting with a different Linux distribution
&lt;/h3&gt;

&lt;p&gt;You might be working on a server that runs Ubuntu, but your computer runs another Linux distribution like Fedora or Arch. Running an Ubuntu container allows you to test commands in an Ubuntu-specific environment before applying them to a real server.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Learning Linux without installing a new OS
&lt;/h3&gt;

&lt;p&gt;If you want to practice Linux commands but don’t want to reinstall your operating system, running an Ubuntu container gives you a safe place to try out Linux without affecting your main system.&lt;/p&gt;

&lt;h2&gt;
  
  
  What other containers can you run?
&lt;/h2&gt;

&lt;p&gt;Ubuntu is just one example of a container you can run with Docker. There are many different container images available for different purposes, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Alpine Linux: A lightweight Linux container for minimal environments.&lt;/li&gt;
&lt;li&gt;Nginx: A web server container to serve web pages.&lt;/li&gt;
&lt;li&gt;PostgreSQL: A database container for managing data.&lt;/li&gt;
&lt;li&gt;Node.js: A container with Node.js preinstalled for JavaScript development.&lt;/li&gt;
&lt;li&gt;Python: A container with Python and all necessary dependencies for scripting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can pull and run any of these containers using Docker, just like you will with the Ubuntu container.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: Follow the steps in this &lt;a href="https://dev.to/deborahemeni1/getting-started-with-docker-how-to-install-docker-and-set-it-up-correctly-4knb"&gt;article&lt;/a&gt; to properly install and set up Docker before you go on.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Running an Ubuntu container using Docker
&lt;/h2&gt;

&lt;p&gt;Let’s now run an Ubuntu container and interact with it as if it were a real Ubuntu system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Download the Ubuntu container image
&lt;/h3&gt;

&lt;p&gt;Before you can run an Ubuntu container, you need to download the official &lt;a href="https://hub.docker.com/_/ubuntu" rel="noopener noreferrer"&gt;Ubuntu image&lt;/a&gt; from Docker Hub.&lt;/p&gt;

&lt;p&gt;To pull the Ubuntu image, open your terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command downloads the latest Ubuntu container image from Docker Hub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding the output of &lt;code&gt;docker pull ubuntu&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once you run the command, you’ll see several lines printed in the terminal. Let’s break down what each part means so you know exactly what’s happening.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdOurMUrzS13alggqTu6HawS4wkgVaXv2WK5qBgYMgmNzvSAAfCI00d9iiL-aD7iRsz5R1YzfWueCHf2S7uBeP7nSoBHPaDVJJfu3tXhVOcyyI3-txegd_waFH2uFmkZPcdmaEDww?key=KxDsKDvlvXPUR4B5l6LOkadx" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;docker pull ubuntu command&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Using default tag: latest&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You didn’t specify which version of Ubuntu you want, so Docker used the &lt;strong&gt;default tag&lt;/strong&gt;, which is &lt;code&gt;latest&lt;/code&gt;. That means it will pull the most up-to-date version available.&lt;/p&gt;

&lt;p&gt;If you wanted a specific version, you could run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull ubuntu:22.04
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;code&gt;latest: Pulling from library/ubuntu&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This shows where the image is coming from.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;library/ubuntu&lt;/strong&gt; is the official Ubuntu image maintained by Docker.&lt;/li&gt;
&lt;li&gt;It lives on Docker Hub, which is Docker’s public registry of container images.&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;2f074dc76c5d: Pull complete&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Docker images are built in layers. Each one adds something on top of the previous layer.&lt;/p&gt;

&lt;p&gt;This message means a specific layer of the Ubuntu image has been successfully downloaded.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Digest: sha256:...&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the unique ID (checksum) of the image you pulled. Think of it like a fingerprint for this exact version. It helps Docker verify the integrity and version of the image.&lt;/p&gt;
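&lt;p&gt;If checksums are new to you: a digest is a SHA-256 hash computed over the image’s content. You can compute the same kind of fingerprint over any file yourself to see what it looks like:&lt;/p&gt;

```shell
# Compute a SHA-256 fingerprint over some content, the same kind of
# checksum Docker records as an image digest
printf 'hello' > /tmp/digest-demo.txt
sha256sum /tmp/digest-demo.txt | cut -d' ' -f1
```

&lt;p&gt;The key property: change even one byte of the content and the digest changes completely, which is how Docker can verify it pulled exactly the image it expected.&lt;/p&gt;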

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Status: Downloaded newer image for ubuntu:latest&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Docker checked your system to see if you already had the image.&lt;/p&gt;

&lt;p&gt;In this case, it didn’t find it or found an older version, so it downloaded the newer one.&lt;/p&gt;

&lt;p&gt;If the image was already up to date, you would see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Status: Image is up to &lt;span class="nb"&gt;date &lt;/span&gt;&lt;span class="k"&gt;for &lt;/span&gt;ubuntu:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;code&gt;docker.io/library/ubuntu:latest&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This confirms the full path of the image you now have locally:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It came from &lt;code&gt;docker.io&lt;/code&gt; (Docker Hub)&lt;/li&gt;
&lt;li&gt;It’s the official &lt;code&gt;library/ubuntu&lt;/code&gt; image&lt;/li&gt;
&lt;li&gt;The tag is &lt;code&gt;latest&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that the Ubuntu image is on your system, you're ready to use it to run your first container, which we’ll do next. You don’t need to download it again; Docker will use this image from your local machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Run the Ubuntu container
&lt;/h3&gt;

&lt;p&gt;Once the image is downloaded, you can start an Ubuntu container.&lt;/p&gt;

&lt;p&gt;Run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What does this command do?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker run&lt;/code&gt; tells Docker to start a new container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-it&lt;/code&gt; makes it interactive: &lt;code&gt;-i&lt;/code&gt; keeps input open and &lt;code&gt;-t&lt;/code&gt; gives you access to a terminal inside the container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ubuntu&lt;/code&gt; tells Docker to use the Ubuntu image (if it wasn’t already downloaded, Docker pulled it from Docker Hub automatically).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After running this command, your terminal will change. You are now inside the Ubuntu container, running a Linux shell.&lt;/p&gt;

&lt;p&gt;Your terminal will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzfm6ts2pp34yt5glvty1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzfm6ts2pp34yt5glvty1.png" alt="docker run ubuntu" width="800" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What you see in your terminal &lt;code&gt;root@6f63eabbab0e:/#&lt;/code&gt; is the Ubuntu container's shell prompt. You are now inside the container, running Ubuntu as the root user.&lt;/p&gt;

&lt;p&gt;Let’s break it down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;root&lt;/code&gt; → You are logged in as the &lt;strong&gt;root user&lt;/strong&gt; (the default in most containers).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@6f63eabbab0e&lt;/code&gt; → This is the &lt;strong&gt;short container ID&lt;/strong&gt; assigned by Docker. It uniquely identifies your running container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;:/#&lt;/code&gt; → You are currently at the &lt;strong&gt;root directory&lt;/strong&gt; (/) inside the Ubuntu file system. The # symbol confirms you're logged in as root.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From this point, you can run Linux commands inside the container as if you were using a real Ubuntu server. You’re not in a simulation; you’re in a real Ubuntu environment, isolated from your host machine.&lt;/p&gt;
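&lt;p&gt;A few commands you might try first (they work in any Linux shell, so you can also run them outside the container and compare the results):&lt;/p&gt;

```shell
whoami      # the current user; inside the container this is root
pwd         # the current directory; the prompt showed you're at /
uname -s    # the kernel name; containers share the host's Linux kernel
ls /        # the top of the container's own file system
```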

&lt;h3&gt;
  
  
  Step 3: Check the OS version inside the container
&lt;/h3&gt;

&lt;p&gt;You are now inside a working Ubuntu container, but keep in mind that this is a &lt;strong&gt;minimal version&lt;/strong&gt; of Ubuntu. It’s stripped down to keep the container lightweight, so some common commands are not included by default.&lt;/p&gt;

&lt;p&gt;To check the Ubuntu version running inside the container, you can use this built-in command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; /etc/os-release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command reads a system file that contains the OS version details. You should see output similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gdy3vht4fi51bwcor6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gdy3vht4fi51bwcor6k.png" alt="os release ubuntu command" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s what each part means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;PRETTY_NAME="Ubuntu 24.04.2 LTS"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is the full name of the OS version, written in a human-readable way. In this case, it's Ubuntu version 24.04.2, Long-Term Support (LTS).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;NAME="Ubuntu"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This confirms the base distribution is Ubuntu.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;VERSION_ID="24.04"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is the base version number of the operating system. It's commonly used by scripts or automation tools.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;VERSION="24.04.2 LTS (Noble Numbat)"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This gives both the version number and the codename ("Noble Numbat") assigned to this Ubuntu release.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;VERSION_CODENAME=noble&lt;/code&gt; and &lt;code&gt;UBUNTU_CODENAME=noble&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;These provide the codename in a more machine-friendly format.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;ID=ubuntu&lt;/code&gt; and &lt;code&gt;ID_LIKE=debian&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;These are used by tools to identify what kind of system they're running on. &lt;code&gt;ID_LIKE=debian&lt;/code&gt; means that although this is Ubuntu, it behaves similarly to Debian.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;HOME_URL&lt;/code&gt;, &lt;code&gt;SUPPORT_URL&lt;/code&gt;, and &lt;code&gt;BUG_REPORT_URL&lt;/code&gt; lines point you to official Ubuntu resources for learning more or reporting issues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;LOGO=ubuntu-logo&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is mostly used in graphical environments or tools that display branding.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This output confirms that your Ubuntu container is based on &lt;strong&gt;Ubuntu 24.04.2 LTS&lt;/strong&gt;, and you're working inside a clean, isolated Linux environment (even if your main system is running something else like Windows or macOS).&lt;/p&gt;
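&lt;p&gt;If you ever need these values in a script, &lt;code&gt;/etc/os-release&lt;/code&gt; is designed to be sourced directly, since it's just shell variable assignments. Here's a minimal sketch; it parses a sample copy of the file so it runs anywhere, but on a real Ubuntu system you would source &lt;code&gt;/etc/os-release&lt;/code&gt; itself:&lt;/p&gt;

```shell
# /etc/os-release is plain KEY=value pairs, so a shell can source it directly.
# We write a sample copy first so this sketch also runs outside a container.
cat > /tmp/os-release.sample <<'EOF'
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION_CODENAME=noble
ID=ubuntu
EOF

# On a real Ubuntu system: . /etc/os-release
. /tmp/os-release.sample

echo "Distro: $ID, version: $VERSION_ID, codename: $VERSION_CODENAME"
```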

&lt;p&gt;If you prefer using the &lt;code&gt;lsb_release&lt;/code&gt; command (which gives similar OS version details in a cleaner format), you'll need to install it manually, because it isn’t included in the minimal Ubuntu image.&lt;/p&gt;

&lt;p&gt;So, run this in your container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; lsb-release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break this down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;apt update&lt;/code&gt; tells Ubuntu to &lt;strong&gt;refresh its list of available packages&lt;/strong&gt;. Think of it as checking for the latest versions and availability of software.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;apt install -y lsb-release&lt;/code&gt; installs the &lt;code&gt;lsb-release&lt;/code&gt; utility. The &lt;code&gt;-y&lt;/code&gt; flag tells Ubuntu to &lt;strong&gt;automatically confirm&lt;/strong&gt; that you want to proceed, so it won’t stop and ask you to type “yes.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After running the command, you’ll see an output like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru1lgae5kon94zufylob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fru1lgae5kon94zufylob.png" alt="lsb-release-output-command-ubuntu" width="800" height="641"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’ll see a long list of messages in your terminal. That’s completely normal; let’s quickly walk through what’s happening so you know what to expect.&lt;/p&gt;

&lt;p&gt;First, &lt;code&gt;apt&lt;/code&gt; connects to Ubuntu’s package servers and pulls the most up-to-date list of available software. You’ll see lines like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Get:1 http://ports.ubuntu.com/ubuntu-ports noble InRelease

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are the repositories being contacted. You don’t need to interact with any of this; just let it run.&lt;/p&gt;

&lt;p&gt;Once the package list is refreshed, Ubuntu starts installing the &lt;code&gt;lsb-release&lt;/code&gt; tool. You’ll see a few confirmations like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The following NEW packages will be installed: lsb-release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That just tells you this package wasn’t already on the system and is being added now.&lt;/p&gt;

&lt;p&gt;It then downloads the package and unpacks it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Unpacking lsb-release...
Setting up lsb-release (12.0-2)...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This part completes in a few seconds. Once you see the “Setting up” line, you’re ready to use the command.&lt;/p&gt;

&lt;p&gt;Now you can type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lsb_release -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and you’ll get a clean, structured summary of the Ubuntu version your container is running.&lt;/p&gt;

&lt;p&gt;Here’s what that command does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;lsb_release&lt;/code&gt; is a small tool that prints version info about the current Linux distribution.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;-a&lt;/code&gt; flag means "all," so you’ll see a full breakdown: the distribution name, description, version, and codename.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This output will be very similar to what you saw earlier with &lt;code&gt;cat /etc/os-release&lt;/code&gt;, but a bit more focused and formatted for readability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qq14j3d48hv2f1xwlzj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qq14j3d48hv2f1xwlzj.png" alt="result of lsb_release -a command" width="800" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're writing shell scripts or doing system automation and only need the codename or release number, &lt;code&gt;lsb_release&lt;/code&gt; is a quick way to get exactly that.&lt;/p&gt;

&lt;p&gt;Either method is valid: stick with &lt;code&gt;cat /etc/os-release&lt;/code&gt; if you don’t want to install anything extra, or use &lt;code&gt;lsb_release -a&lt;/code&gt; if you prefer its format.&lt;/p&gt;
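&lt;p&gt;For scripting, &lt;code&gt;lsb_release&lt;/code&gt; also has short flags that print a single field, such as &lt;code&gt;-sc&lt;/code&gt; for the codename alone. Here's a hedged sketch that falls back to &lt;code&gt;/etc/os-release&lt;/code&gt; on minimal images where the tool isn't installed yet:&lt;/p&gt;

```shell
# Grab just the codename for use in a script. -s ("short") strips the
# "Codename:" label from lsb_release's output; the fallback reads the same
# value from /etc/os-release on systems without lsb_release.
if command -v lsb_release >/dev/null 2>&1; then
  CODENAME=$(lsb_release -sc)
else
  CODENAME=$(. /etc/os-release 2>/dev/null && echo "$VERSION_CODENAME")
fi
CODENAME=${CODENAME:-unknown}   # default if neither source was available
echo "Detected codename: $CODENAME"
```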

&lt;h3&gt;
  
  
  Step 4: Install software inside the container
&lt;/h3&gt;

&lt;p&gt;You can install software inside the Ubuntu container just like you would on a normal Ubuntu system.&lt;/p&gt;

&lt;p&gt;For example, to install &lt;code&gt;curl&lt;/code&gt;, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt update &amp;amp;&amp;amp; apt install -y curl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What’s happening here?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;apt update&lt;/code&gt; updates the list of available packages.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;apt install -y curl&lt;/code&gt; installs &lt;code&gt;curl&lt;/code&gt; without asking for confirmation (&lt;code&gt;-y&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once installed, you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This verifies that &lt;code&gt;curl&lt;/code&gt; is now available inside the container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5m8i740leye8ca1bvtb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5m8i740leye8ca1bvtb.png" alt="curl-installed" width="800" height="91"&gt;&lt;/a&gt;&lt;/p&gt;
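&lt;p&gt;If you want to go one step further than checking the version, you can make a real request from inside the container. This optional sketch prints the version line and then just the HTTP status code of a page; &lt;code&gt;example.com&lt;/code&gt; here is only an illustration URL:&lt;/p&gt;

```shell
# The version line confirms the curl binary is on PATH.
curl --version | head -n 1

# -s silences progress output, -o /dev/null discards the page body, and
# -w "%{http_code}" prints only the HTTP status code (200 on success,
# 000 if there is no network connection).
STATUS=$(curl -s -o /dev/null -w "%{http_code}" https://example.com || true)
echo "HTTP status: $STATUS"
```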

&lt;h3&gt;
  
  
  Step 5: Exit the container &lt;strong&gt;and restart it later&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;After you’re done working inside the container, you can exit it by typing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;exit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will stop the running container and return you to your regular terminal prompt. Exiting doesn’t delete the container; it just stops it temporarily.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check for stopped containers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To see a list of all containers (including the ones that have stopped), use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps &lt;span class="nt"&gt;-a&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command shows both running and exited containers. The output will look similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0o9yf5ot5ts8c9o7yvp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0o9yf5ot5ts8c9o7yvp.png" alt="docker-ps-a-command" width="800" height="68"&gt;&lt;/a&gt;&lt;br&gt;
Let’s break this down line by line:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CONTAINER ID&lt;/strong&gt;: This is the unique ID Docker assigns to each container. You can use this ID to start, stop, inspect, or remove a container. In the screenshot above, &lt;code&gt;6f63eabbab0e&lt;/code&gt; refers to your Ubuntu container.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IMAGE&lt;/strong&gt;: This shows which Docker image was used to create the container. In this case, it shows you used the &lt;code&gt;ubuntu&lt;/code&gt; image and also ran &lt;code&gt;hello-world&lt;/code&gt; earlier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;COMMAND&lt;/strong&gt;: This is the default command that runs when the container starts. For Ubuntu, it’s &lt;code&gt;"/bin/bash"&lt;/code&gt;, which opens an interactive shell. For &lt;code&gt;hello-world&lt;/code&gt;, it’s &lt;code&gt;"/hello"&lt;/code&gt;, which just prints a message and exits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CREATED&lt;/strong&gt;: This tells you how long ago the container was created. It helps you keep track of how old a container is, especially if you’re managing several. For example, &lt;code&gt;26 hours ago&lt;/code&gt; shows your Ubuntu container was created a day ago.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;STATUS&lt;/strong&gt;: This shows the current state of the container. If it says &lt;code&gt;Exited&lt;/code&gt;, the container is stopped. If it says &lt;code&gt;Up&lt;/code&gt;, then the container is running. You can also see how recently it exited (like &lt;code&gt;About a minute ago&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PORTS&lt;/strong&gt;: This column lists any port mappings between your host machine and the container. For example, if a web server container exposes port 80, this column would show which host port it's connected to. In your case, the Ubuntu container has no ports exposed, so this is blank.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NAMES&lt;/strong&gt;: Docker assigns a random, readable name to each container if you don’t give it one yourself. In the screenshot, the Ubuntu container was named &lt;code&gt;keen_meninsky&lt;/code&gt;. You can rename containers or assign a custom name when creating one using the &lt;code&gt;--name&lt;/code&gt; flag.&lt;/li&gt;
&lt;/ul&gt;
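&lt;p&gt;Once you're juggling more than a couple of containers, &lt;code&gt;docker ps&lt;/code&gt; can be narrowed with &lt;code&gt;--filter&lt;/code&gt; and reshaped with &lt;code&gt;--format&lt;/code&gt;. A small sketch, guarded so it degrades gracefully where the Docker daemon isn't available:&lt;/p&gt;

```shell
# The format string is a Go template that picks just the columns you care
# about from the docker ps output.
FORMAT='table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}'

# List only stopped containers, with the custom columns. The guard skips the
# command on machines where Docker isn't installed.
if command -v docker >/dev/null 2>&1; then
  docker ps -a --filter "status=exited" --format "$FORMAT"
fi
```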

&lt;p&gt;&lt;strong&gt;Restart and attach to a stopped container&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you want to &lt;strong&gt;restart the Ubuntu container and attach to it&lt;/strong&gt;, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker start &lt;span class="nt"&gt;-ai&lt;/span&gt; &amp;lt;container_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;container_id&amp;gt;&lt;/code&gt; with your container ID, which in this example is &lt;code&gt;6f63eabbab0e&lt;/code&gt;, so you would run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker start &lt;span class="nt"&gt;-ai&lt;/span&gt; 6f63eabbab0e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;start&lt;/code&gt; brings the container back to life.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-a&lt;/code&gt; means “attach”, so your terminal will connect to the container’s input and output.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-i&lt;/code&gt; means “interactive”, so you can type commands and see results like before.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once this command runs, you’re &lt;strong&gt;back inside the same container&lt;/strong&gt;, with everything still intact. This is helpful if you've installed tools or created files in your container and want to pick up where you left off.&lt;/p&gt;

&lt;p&gt;The screenshot below confirms this worked. You're back at the container prompt:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh449dlze9lnnuee0b9dk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh449dlze9lnnuee0b9dk.png" alt="container-restart" width="800" height="69"&gt;&lt;/a&gt;&lt;/p&gt;
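&lt;p&gt;The same exit-and-restart cycle is easier with a container you name yourself using the &lt;code&gt;--name&lt;/code&gt; flag mentioned earlier; &lt;code&gt;my-ubuntu&lt;/code&gt; below is just an example name. The sketch is guarded so the interactive commands only run in a terminal with Docker installed:&lt;/p&gt;

```shell
# Naming the container up front means you never have to look up a random ID.
NAME=my-ubuntu

# Guard: only run when Docker is installed and stdin is a real terminal,
# since both commands below are interactive.
if command -v docker >/dev/null 2>&1 && [ -t 0 ]; then
  docker run -it --name "$NAME" ubuntu   # first session; type `exit` to stop
  docker start -ai "$NAME"               # reattach later by name, not by ID
fi
```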

&lt;p&gt;You’ve covered a lot already, and it’s the kind of foundation that sets you up for everything else we’ll do with Docker.&lt;/p&gt;

&lt;p&gt;Here’s what you’ve done:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You learned how applications used to run, and why virtualization became necessary.&lt;/li&gt;
&lt;li&gt;You understood what virtual machines are, where they’re useful, and where they fall short.&lt;/li&gt;
&lt;li&gt;You broke down why containers were introduced, and how they solve those limitations.&lt;/li&gt;
&lt;li&gt;You saw the difference between VMs and containers using clear examples.&lt;/li&gt;
&lt;li&gt;You got a proper introduction to Docker (what it is and how it fits into the container world).&lt;/li&gt;
&lt;li&gt;You ran your first hands-on container, checked the OS inside it, installed software, exited it, and brought it back.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you followed along with the mini-project, you now know how to pull images, run containers interactively, install software inside them, and restart them after they’ve exited. That’s a major first step.&lt;/p&gt;

</description>
      <category>containers</category>
      <category>virtualization</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Build a Discord command bot in minutes with Appwrite Cloud Functions</title>
      <dc:creator>Deborah Emeni</dc:creator>
      <pubDate>Wed, 29 Nov 2023 10:42:20 +0000</pubDate>
      <link>https://forem.com/hackmamba/build-a-discord-command-bot-in-minutes-with-appwrite-cloud-functions-17bi</link>
      <guid>https://forem.com/hackmamba/build-a-discord-command-bot-in-minutes-with-appwrite-cloud-functions-17bi</guid>
      <description>&lt;p&gt;If you’re a developer looking for a secure and efficient solution to build your Discord bots rapidly, then you’ve come to the right place. In this article, we’ll discuss an excellent tool that you can use to suit your needs called &lt;a href="https://appwrite.io/docs/products/functions?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;Appwrite Functions&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Appwrite Functions is a feature provided by Appwrite that enables developers to create and automatically run custom backend code in response to events triggered by &lt;a href="https://appwrite.io/?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;Appwrite&lt;/a&gt; or according to a predefined schedule. With Appwrite Functions, you can access several benefits, including scalability and enhanced security.&lt;/p&gt;

&lt;p&gt;Why use a template for your Discord bots? Templates save time and effort by providing pre-built code for custom commands and features. Appwrite Functions offers a Discord command bot template, simplifying the development process. You can customize the template to meet your needs and seamlessly integrate unique functionalities.&lt;/p&gt;

&lt;p&gt;This tutorial will walk you through the process of building your Discord command bot in minutes using Appwrite Functions.&lt;/p&gt;

&lt;p&gt;So, without further ado, let's dive in!&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Ensure that you have the following to follow along with the tutorial:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;a href="https://cloud.appwrite.io/?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;Appwrite Cloud account&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://github.com/login" rel="noopener noreferrer"&gt;GitHub account&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://support.discord.com/hc/en-us/articles/204849977-How-do-I-create-a-server-?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;Server on Discord&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://play.google.com/store/apps/details?id=com.twofasapp&amp;amp;hl=en&amp;amp;gl=US" rel="noopener noreferrer"&gt;2FA App&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Creating your Appwrite project
&lt;/h2&gt;

&lt;p&gt;Log into your Appwrite Cloud account and create an Appwrite project to gain access to Appwrite's services, including your project ID and API key, which are required for using the Appwrite SDK. Set a name for your project, such as 'Discord Command Bot,' and then click the &lt;strong&gt;Create Project&lt;/strong&gt; button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FBWgJdeFcoOJJ3SZ3b59_lHE0It57xHsi1GpM9IDIm35Mrem4CbTACBj4hVv_lEC5auGj7ixHpuJcrbTUzsNglTayGRMBqSccrBMmNupBCAKZFJiL-vIBPWgbgl2H8gvBLMX50GdanzuD3dnyK241RTY" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FBWgJdeFcoOJJ3SZ3b59_lHE0It57xHsi1GpM9IDIm35Mrem4CbTACBj4hVv_lEC5auGj7ixHpuJcrbTUzsNglTayGRMBqSccrBMmNupBCAKZFJiL-vIBPWgbgl2H8gvBLMX50GdanzuD3dnyK241RTY" alt="Creating your Appwrite Project" width="1600" height="775"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating an Appwrite Function with the Discord command bot template
&lt;/h2&gt;

&lt;p&gt;Here, we’ll easily create an Appwrite Function to automate your backend tasks and extend Appwrite with custom code.&lt;/p&gt;

&lt;p&gt;Navigate to the side menu on your project dashboard and click on &lt;strong&gt;Functions&lt;/strong&gt; to access the template:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FYgZspJEAIA55dXwRYAmZFubrBqWRMZXpmM8npoEtkT5CSzKtI-1IWWWOAOyTmA8UXJopP20GuH_8YzkwRr6p4_7jwY5D5xzw9e6CYKnkOKGoRRVjsQDjkH-VKmbFZrHVHHbG3eaHQueOq4GAflEE0yc" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FYgZspJEAIA55dXwRYAmZFubrBqWRMZXpmM8npoEtkT5CSzKtI-1IWWWOAOyTmA8UXJopP20GuH_8YzkwRr6p4_7jwY5D5xzw9e6CYKnkOKGoRRVjsQDjkH-VKmbFZrHVHHbG3eaHQueOq4GAflEE0yc" alt="Opening Functions in Appwrite" width="1600" height="755"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the &lt;strong&gt;Templates&lt;/strong&gt; tab on the Functions page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FfgnBEfy7J0q5XWh83uMX5yJbmBXvfs2Ixry4eGPR5XpfnqLSz7Pypw2avIVxI-Sjz5_ZovzErpqaVE17u-2ReQYjIUxqiHDzb7zlgzHIi1YGh2u3XWVwN6feq8xD1VxCm2yFEIK4DgBD9tpnl3CDw04" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FfgnBEfy7J0q5XWh83uMX5yJbmBXvfs2Ixry4eGPR5XpfnqLSz7Pypw2avIVxI-Sjz5_ZovzErpqaVE17u-2ReQYjIUxqiHDzb7zlgzHIi1YGh2u3XWVwN6feq8xD1VxCm2yFEIK4DgBD9tpnl3CDw04" alt="Opening Templates" width="1600" height="744"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Search for the &lt;strong&gt;Discord Command Bot&lt;/strong&gt; template in the &lt;strong&gt;Search templates&lt;/strong&gt; box as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F3rq6Tn52_9zfHHhw-tJJeGoXzXJ3FU5W1dZAOZT82UcBD5idGsToRnpZ2vf2XjY2DUeWpPb5Yba1Zk4zPXwiFsRBkM8CCXRcPj2eyuJcm1oAO0pkKnOne4I1JLlcYSKVKpBfAG2Ea5pgPNREIUZcOqE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F3rq6Tn52_9zfHHhw-tJJeGoXzXJ3FU5W1dZAOZT82UcBD5idGsToRnpZ2vf2XjY2DUeWpPb5Yba1Zk4zPXwiFsRBkM8CCXRcPj2eyuJcm1oAO0pkKnOne4I1JLlcYSKVKpBfAG2Ea5pgPNREIUZcOqE" width="1600" height="766"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the &lt;strong&gt;Create function&lt;/strong&gt; button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FCwXhRchbY-VVs6ephia5VEiT5cafaaFRSe9vwH9uPjTDOFLF0kGNUGodPuJbAjpbQDvGxbD3HvPl5XAMHoL4bb-ss94bEXsSgyFBjyLJMFkjKYwZjpcwzYVvVBQLApzvpfcXsgY5HleBi05P36D1NEc" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FCwXhRchbY-VVs6ephia5VEiT5cafaaFRSe9vwH9uPjTDOFLF0kGNUGodPuJbAjpbQDvGxbD3HvPl5XAMHoL4bb-ss94bEXsSgyFBjyLJMFkjKYwZjpcwzYVvVBQLApzvpfcXsgY5HleBi05P36D1NEc" width="1600" height="766"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;Node.js-18.0&lt;/strong&gt; as the runtime for your Discord command bot and click &lt;strong&gt;Next&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fawrjbrd6gwwd4Zo4ezWOPtHOgZkwL6Gwt5fPr0o71dYKBPEcgqJAiTGSROgpSEZIAEvlyv4ikgId6uje_it0Yjaq4KQqwWS2zBTkCCrG6vN7W7vPrhdh9EUj3VnacPZVF6CCu-UbHcR9i9zRVd28pQs" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fawrjbrd6gwwd4Zo4ezWOPtHOgZkwL6Gwt5fPr0o71dYKBPEcgqJAiTGSROgpSEZIAEvlyv4ikgId6uje_it0Yjaq4KQqwWS2zBTkCCrG6vN7W7vPrhdh9EUj3VnacPZVF6CCu-UbHcR9i9zRVd28pQs" width="1600" height="730"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before you proceed, you’ll need to retrieve several values from your &lt;a href="https://discord.com/developers/applications?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;Discord Developer Portal&lt;/a&gt;: your application’s &lt;code&gt;public key&lt;/code&gt;, &lt;code&gt;ID&lt;/code&gt;, and &lt;code&gt;Bot token&lt;/code&gt;. They will serve as the values for the environment variables passed to your function at runtime. &lt;/p&gt;

&lt;p&gt;On your Discord Developer Portal, click on &lt;strong&gt;New Application&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fvigr0ooggvngwrxajDUFbOUoElE4lbXoBxB9LgKNN8r_HrppPFDSpn011Zqy3mBFLVC350fVAVOzJ8NCvuj7gZXKeGkivZVuKqA1XBSF_P6RXQ_grx30nFvJkn8uCL1j9SzTyDqbR-LUcsvqYKyOrBM" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fvigr0ooggvngwrxajDUFbOUoElE4lbXoBxB9LgKNN8r_HrppPFDSpn011Zqy3mBFLVC350fVAVOzJ8NCvuj7gZXKeGkivZVuKqA1XBSF_P6RXQ_grx30nFvJkn8uCL1j9SzTyDqbR-LUcsvqYKyOrBM" width="1600" height="761"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the name of your application and click on the &lt;strong&gt;Create&lt;/strong&gt; button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FHQBX7kW8zF4MqfxXTwcUVGtLSsbazEKwOz1L-N5zokzn6FUJQK8A49hvBhnnnmFoNdjBoiPNT_Nxhok79q9i6Tq3Q8hOI4BAJJcCgSUbq18IxdihY3yhLaOC60KkO0Me1P-GpZ5WM6Ij9KedPuhPEvk" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FHQBX7kW8zF4MqfxXTwcUVGtLSsbazEKwOz1L-N5zokzn6FUJQK8A49hvBhnnnmFoNdjBoiPNT_Nxhok79q9i6Tq3Q8hOI4BAJJcCgSUbq18IxdihY3yhLaOC60KkO0Me1P-GpZ5WM6Ij9KedPuhPEvk" width="1010" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you do that, your application ID and public key will be generated. Copy both of them:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F7TQ4Ejoh-JiuVSBOU8YZZ4RnXqn_oJ7Fo-xYC1T8Nx0FPABBzFjUNV8c_4Ub9en8w-eEkKOt4YC0K1kJSZ-XpmtskpX94rwWQG29zVCH6EdctZ-M0CaJk6Nydctbdx9dmZaD__r4ka_69ruEMHjIzd0" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F7TQ4Ejoh-JiuVSBOU8YZZ4RnXqn_oJ7Fo-xYC1T8Nx0FPABBzFjUNV8c_4Ub9en8w-eEkKOt4YC0K1kJSZ-XpmtskpX94rwWQG29zVCH6EdctZ-M0CaJk6Nydctbdx9dmZaD__r4ka_69ruEMHjIzd0" width="1600" height="780"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, go back to your Functions in Appwrite and paste your copied &lt;code&gt;Application ID&lt;/code&gt; and &lt;code&gt;Public key&lt;/code&gt; as the values of your environment variables that will be passed to your function at runtime:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F5aIb-ssxPojozsgOFhDpBdzGJHsRmRTImnIblts4d6_tF1zYV1iNwECnBtFY8Rh8WzfS-d8CD7iZkOcTYBLzwGcMUIqOxO67OFIlWTv2sr1EUtlwOSc8BQr700szGSJ6l2x6zmcP_-rmhP7aZHZ_BPY" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F5aIb-ssxPojozsgOFhDpBdzGJHsRmRTImnIblts4d6_tF1zYV1iNwECnBtFY8Rh8WzfS-d8CD7iZkOcTYBLzwGcMUIqOxO67OFIlWTv2sr1EUtlwOSc8BQr700szGSJ6l2x6zmcP_-rmhP7aZHZ_BPY" width="1600" height="760"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go back to your Discord Developer Portal to get the value for your &lt;strong&gt;DISCORD_TOKEN&lt;/strong&gt; variable. On the side menu, click on &lt;strong&gt;Bot&lt;/strong&gt;, then click on the &lt;strong&gt;Reset Token&lt;/strong&gt; button: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FN8j7sZy2sVDfA7kFwNiqXnrgZxQSzqqopt3lwKDzvgqt1QqjoZ9ollChBZb_2yhG2tGTk_Qes-JiUZgK7iRjrDMwvkFUQKCrxWWEye_UcQrKDmYxJ8ugNLt2MjdZ2YYwZHC5Db5c8gviY51bSjrwIHI" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FN8j7sZy2sVDfA7kFwNiqXnrgZxQSzqqopt3lwKDzvgqt1QqjoZ9ollChBZb_2yhG2tGTk_Qes-JiUZgK7iRjrDMwvkFUQKCrxWWEye_UcQrKDmYxJ8ugNLt2MjdZ2YYwZHC5Db5c8gviY51bSjrwIHI" width="1600" height="739"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you do that, you’ll see the following confirmation prompt. Click &lt;strong&gt;Yes, do it!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FpAWo_8DgKFhbGqJObemhiSWgog_3acuqD-1iCTYsaUlvbf-42nuas2ecHftGtU0pWWz9LQAIB-nx8HrhbZUFQ80LvOa8eJopwH-F5BvSGDYPZphAqL1BS66VCIxdg4lPV9-Rz_pF4lUS9ya-BOpDT1k" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FpAWo_8DgKFhbGqJObemhiSWgog_3acuqD-1iCTYsaUlvbf-42nuas2ecHftGtU0pWWz9LQAIB-nx8HrhbZUFQ80LvOa8eJopwH-F5BvSGDYPZphAqL1BS66VCIxdg4lPV9-Rz_pF4lUS9ya-BOpDT1k" width="988" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you do that, you’ll get a 6-digit authentication code in your 2FA authentication app or whichever verification method you use. After entering the 2FA code, click on &lt;strong&gt;Submit&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FitKBB76ONDdP9Fp-EEdRyx5Qi6QcNj30e6gixkShpQvv0rz8DBoQ2-uKe8iG4IZEVIpUh8gi7csF0UXZsvKU3FdiWePc4PVgHHbb37rLSYwqVX_4t4QVJntDM9LgrwFqInSSELwIM-5WfCPFIQhDFC0" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FitKBB76ONDdP9Fp-EEdRyx5Qi6QcNj30e6gixkShpQvv0rz8DBoQ2-uKe8iG4IZEVIpUh8gi7csF0UXZsvKU3FdiWePc4PVgHHbb37rLSYwqVX_4t4QVJntDM9LgrwFqInSSELwIM-5WfCPFIQhDFC0" width="984" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you do that, your token will be generated. Click the &lt;strong&gt;Copy&lt;/strong&gt; button to copy your token:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FLKC9bfq7Ihu2LQsyZOdVVKV-ijWWdgMmlwrbmk9ZeK4g1CuHlumV6UacwkHFEhaQ-XAuwzhOyZAQx9AdLOeB_YVssvBhAuM0n3Z7qtlBHPwSwuULIcMBCmDvYp0zOBvkA-JVe8QxtFe1mKo2nOyFRvg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FLKC9bfq7Ihu2LQsyZOdVVKV-ijWWdgMmlwrbmk9ZeK4g1CuHlumV6UacwkHFEhaQ-XAuwzhOyZAQx9AdLOeB_YVssvBhAuM0n3Z7qtlBHPwSwuULIcMBCmDvYp0zOBvkA-JVe8QxtFe1mKo2nOyFRvg" width="1600" height="727"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go back to your Appwrite Functions, paste your copied token as the value for your &lt;strong&gt;DISCORD_TOKEN&lt;/strong&gt; variable, and click the &lt;strong&gt;Next&lt;/strong&gt; button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FrkbYng6yCWJ9qc_6uJ88bMIIb45pH_VQjzu2cmbWc8mGm7xwqOQm0aR2mruco_GJF3WHl2O-WQSDynEh1sVF0pRG6X6UDvwdrywFCJFqwuwl-pC2TkDM9S1IxW78SbiRBGFAsDu7Mffeg6T10xcN2Ts" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FrkbYng6yCWJ9qc_6uJ88bMIIb45pH_VQjzu2cmbWc8mGm7xwqOQm0aR2mruco_GJF3WHl2O-WQSDynEh1sVF0pRG6X6UDvwdrywFCJFqwuwl-pC2TkDM9S1IxW78SbiRBGFAsDu7Mffeg6T10xcN2Ts" width="1600" height="775"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will take you to the next step of connecting your function to a new repository or an existing one within a selected Git organization. Select &lt;strong&gt;Create a new repository&lt;/strong&gt; and click &lt;strong&gt;Next&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F5Ldo-urY1xgRxZEs2fvbct7imwnuBwD-EHk8ipFHxmAYWyrYki9vgqYyHrWOXyrZKT8RlTWEWLwnSbOrMaxj_kGQK--wxALjctVSNLYMhE5LNrhgYvgKWP7JfUDAcB3b6nHxRv81P30W5mFAGIJM0Zo" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F5Ldo-urY1xgRxZEs2fvbct7imwnuBwD-EHk8ipFHxmAYWyrYki9vgqYyHrWOXyrZKT8RlTWEWLwnSbOrMaxj_kGQK--wxALjctVSNLYMhE5LNrhgYvgKWP7JfUDAcB3b6nHxRv81P30W5mFAGIJM0Zo" width="1600" height="779"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You need to select a Git repository that will trigger your function deployments when updated. Choose GitHub from the options and click &lt;strong&gt;Next&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FvKYXDhUXdP9Re5aqHVMGinJbvukkDF2g8s6TVRDiyfCeA1KYBv0qyY61ag4X0pLMP6s2g0pFgWYmwyj-rnZLLEJe7hW2Dmzjg1EIq9NNbgOAGkMtUUCL9BjoC2tf_BVpy4somE7WYOEk11-bsbrRxQM" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FvKYXDhUXdP9Re5aqHVMGinJbvukkDF2g8s6TVRDiyfCeA1KYBv0qyY61ag4X0pLMP6s2g0pFgWYmwyj-rnZLLEJe7hW2Dmzjg1EIq9NNbgOAGkMtUUCL9BjoC2tf_BVpy4somE7WYOEk11-bsbrRxQM" width="1600" height="750"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Grant Appwrite permission to your GitHub account by clicking the &lt;strong&gt;Install &amp;amp; Authorize&lt;/strong&gt; button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FPgZ5VhMANCk1ymXgcmKMk-a55HrBdawT4aZVcOB_ibOXnQHnPeAm1gEw1UFowonGuyb6H8ZVPGIajiD-5CTtxN5uFVoud3T74TLKrZd_FPlb4muI03wZWd5Ipd4Wmh8gTycA3-0IXzsYLmODxQHRk24" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FPgZ5VhMANCk1ymXgcmKMk-a55HrBdawT4aZVcOB_ibOXnQHnPeAm1gEw1UFowonGuyb6H8ZVPGIajiD-5CTtxN5uFVoud3T74TLKrZd_FPlb4muI03wZWd5Ipd4Wmh8gTycA3-0IXzsYLmODxQHRk24" width="1600" height="967"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once that is done, you will be redirected to Appwrite to continue your installation. Now that Appwrite has access to your GitHub, you can go back to choose the &lt;strong&gt;Create a new repository&lt;/strong&gt; option, and this time, you will see the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FPIwy-SSMSPeRfVZ5s6Nd9TWLiFpGOK9JexkK1cDOWj_Z7X9hgxaXCkon31f4gkAiXyrbzAAiN4Vqynagcg_PGbztsXYs1INDE3QzSfeqmFH436j7cirT4lQwMALYHqo8I8ncdmaKr1JJ9p6wmQIE06I" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FPIwy-SSMSPeRfVZ5s6Nd9TWLiFpGOK9JexkK1cDOWj_Z7X9hgxaXCkon31f4gkAiXyrbzAAiN4Vqynagcg_PGbztsXYs1INDE3QzSfeqmFH436j7cirT4lQwMALYHqo8I8ncdmaKr1JJ9p6wmQIE06I" width="1600" height="754"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, a discord-command-bot repository has been automatically created for you. You can rename it if you like, or keep the repository private as-is. Click &lt;strong&gt;Next&lt;/strong&gt;, and your repository will be created. &lt;/p&gt;

&lt;p&gt;Next, name your production branch or leave it as &lt;strong&gt;main&lt;/strong&gt;, then click the &lt;strong&gt;Create&lt;/strong&gt; button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FjVkHtJxuJAZWgPUJhAR_IacISD1srsSpv8CR7v4NCFwDbAdZCjlX4nEuX509I582WzCxBm-Pl9iE8Wf9ESBf_WYH-k2dDIjpIEKJbWURu5W_k5_vifHAHRBBUu98lzPMINi0mFdtkj9lW8lV3JaS_gE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FjVkHtJxuJAZWgPUJhAR_IacISD1srsSpv8CR7v4NCFwDbAdZCjlX4nEuX509I582WzCxBm-Pl9iE8Wf9ESBf_WYH-k2dDIjpIEKJbWURu5W_k5_vifHAHRBBUu98lzPMINi0mFdtkj9lW8lV3JaS_gE" width="1600" height="754"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you do that, you will see an alert saying that your Discord Command Bot has been created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FZivkpWUkVNPlecQ7v08boes2w1UuDdXf8y0vhPvVeZyNttU9YKz2-WAFJ3wih-3jCEiqDBqQVZZD84INg3wGzt6aIcg581F81MhOF2uehwpKwc8k_pbN_GasInprVN6ygRYF3wP65eSN4ycxMZWiKAo" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FZivkpWUkVNPlecQ7v08boes2w1UuDdXf8y0vhPvVeZyNttU9YKz2-WAFJ3wih-3jCEiqDBqQVZZD84INg3wGzt6aIcg581F81MhOF2uehwpKwc8k_pbN_GasInprVN6ygRYF3wP65eSN4ycxMZWiKAo" width="1600" height="736"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your deployment will automatically display the deployment ID, build time, size, source, and link to your domains. Copy the link to your domains:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FFtJ7hXwdygrFTiX-mKHfp0sQI1iF6uSr_D-dSVRkrWlWn6M1jJjvNIDXXqgffIs1NJrU2AY8UBQj91MuF4k-UtdOysWPXZ9LyUg5oJI3Phw2wGJHF9j0l7cvZ8LasTsUgIz9F94RR9c0dUJMyL37fQQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FFtJ7hXwdygrFTiX-mKHfp0sQI1iF6uSr_D-dSVRkrWlWn6M1jJjvNIDXXqgffIs1NJrU2AY8UBQj91MuF4k-UtdOysWPXZ9LyUg5oJI3Phw2wGJHF9j0l7cvZ8LasTsUgIz9F94RR9c0dUJMyL37fQQ" width="1600" height="776"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring an interactions endpoint for your app
&lt;/h2&gt;

&lt;p&gt;Go back to your Discord developer portal, click &lt;strong&gt;General Information&lt;/strong&gt; in the side menu, and scroll down to the &lt;strong&gt;Interactions Endpoint URL&lt;/strong&gt; field. Configuring an interactions endpoint lets your app receive interactions via HTTP POST requests rather than over the Gateway with a bot user:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FOS99XMTv23Ubw2nyGTNyExUhjAr25DkEF-YhZrgoj8ds25ZdbHFLm2TWQ538FlX51RUnPreTquQGKWDomBiBqeggzZE8iIA30esNGRD8_gs2cU8GrpcEFepFuR4sfOvDIf6sgzwRpvbqF0v9HadjTOQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FOS99XMTv23Ubw2nyGTNyExUhjAr25DkEF-YhZrgoj8ds25ZdbHFLm2TWQ538FlX51RUnPreTquQGKWDomBiBqeggzZE8iIA30esNGRD8_gs2cU8GrpcEFepFuR4sfOvDIf6sgzwRpvbqF0v9HadjTOQ" width="1600" height="663"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Paste the domain link you copied into the &lt;strong&gt;Interactions Endpoint URL&lt;/strong&gt; field, adding &lt;code&gt;https://&lt;/code&gt; in front of it and the &lt;code&gt;/interactions&lt;/code&gt; path at the end. Your link should look like this: &lt;code&gt;https://[your domains]/interactions&lt;/code&gt;. Afterward, ensure that you save your changes by clicking the &lt;strong&gt;Save Changes&lt;/strong&gt; button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FPkHtVxWURz3PO2D5YXnqkneOfzzMG6S_-B4D6Z18mmW6CjpNe62VQRCF1JtDHTM3pP3uNxoJHarf9H2_DZUmBefLq0otxnwzClc6FTuSqF-JbDlnSOPgJa6kho2To9ALMxP34bdcKB8jYyIIP505GsY" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FPkHtVxWURz3PO2D5YXnqkneOfzzMG6S_-B4D6Z18mmW6CjpNe62VQRCF1JtDHTM3pP3uNxoJHarf9H2_DZUmBefLq0otxnwzClc6FTuSqF-JbDlnSOPgJa6kho2To9ALMxP34bdcKB8jYyIIP505GsY" width="1600" height="763"&gt;&lt;/a&gt;&lt;/p&gt;
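&lt;p&gt;When you save, Discord verifies the endpoint by sending a signed &lt;code&gt;PING&lt;/code&gt; interaction and expects a &lt;code&gt;PONG&lt;/code&gt; response, which is why the function must validate the request signature before replying. Here’s a minimal sketch of that decision logic; the &lt;code&gt;isValidSignature&lt;/code&gt; flag is a hypothetical stand-in for the result of &lt;code&gt;verifyKey()&lt;/code&gt; from the &lt;code&gt;discord-interactions&lt;/code&gt; package:&lt;/p&gt;

```javascript
// Sketch of the handshake the /interactions endpoint must implement.
// InteractionType 1 = PING, which must be answered with
// InteractionResponseType 1 = PONG.
const InteractionType = { PING: 1, APPLICATION_COMMAND: 2 };
const InteractionResponseType = { PONG: 1, CHANNEL_MESSAGE_WITH_SOURCE: 4 };

// `isValidSignature` stands in for verifyKey(), which checks the
// Ed25519 signature headers against DISCORD_PUBLIC_KEY.
function handleInteraction(interaction, isValidSignature) {
  if (!isValidSignature) {
    // Discord rejects the endpoint if bad signatures are accepted.
    return { status: 401, body: { error: 'Invalid request signature' } };
  }
  if (interaction.type === InteractionType.PING) {
    // Endpoint verification: reply PONG to the signed PING.
    return { status: 200, body: { type: InteractionResponseType.PONG } };
  }
  // Slash commands arrive as APPLICATION_COMMAND interactions.
  return {
    status: 200,
    body: {
      type: InteractionResponseType.CHANNEL_MESSAGE_WITH_SOURCE,
      data: { content: `Handled /${interaction.data?.name}` },
    },
  };
}
```

&lt;p&gt;Note that Discord rejects the endpoint outright if your function fails to answer the PING or accepts requests with invalid signatures.&lt;/p&gt;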

&lt;h2&gt;
  
  
  Generating an invite link for your app
&lt;/h2&gt;

&lt;p&gt;Next, generate an invite link for your application by picking the scopes and permissions it needs to function. To do this, go to &lt;strong&gt;OAuth2&lt;/strong&gt; on the side menu, and in the dropdown menu, click on &lt;strong&gt;URL Generator&lt;/strong&gt;. From the &lt;strong&gt;SCOPES&lt;/strong&gt; section, select &lt;strong&gt;bot&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F3IWHSuw0Ru7p50hqknnZpvZ4F2QtiB_Dd6MZJ63n-OSkUcI9omuuSjAc-dA_TSX_EL192BPhRdrhArkJyec89AHrFbsBllUqtbJF0GXLiU5uwyyFWnrOULd8w0_LNPF1NIlBXpAeX2aGGBLikOEtSGA" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F3IWHSuw0Ru7p50hqknnZpvZ4F2QtiB_Dd6MZJ63n-OSkUcI9omuuSjAc-dA_TSX_EL192BPhRdrhArkJyec89AHrFbsBllUqtbJF0GXLiU5uwyyFWnrOULd8w0_LNPF1NIlBXpAeX2aGGBLikOEtSGA" width="1600" height="769"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under the &lt;strong&gt;Bot Permissions&lt;/strong&gt; section, select &lt;strong&gt;Administrator&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FoMA_J80mgt6qbwWi5oT8YZYL12YhuLJN6m8Gc2bTeqKMCTS_JuH6BRqxUnNR6ZA06kKxPmU2XH-n-3fl-GmBN0Km20mySqV5UMcjc_rRrhdVMKHZGsAjupHCvrI3_mXNWKZ8SbslKqhTJ6zIGkb3KUg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FoMA_J80mgt6qbwWi5oT8YZYL12YhuLJN6m8Gc2bTeqKMCTS_JuH6BRqxUnNR6ZA06kKxPmU2XH-n-3fl-GmBN0Km20mySqV5UMcjc_rRrhdVMKHZGsAjupHCvrI3_mXNWKZ8SbslKqhTJ6zIGkb3KUg" width="1600" height="766"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you select your scope and bot permissions, a URL will be generated. Scroll down to the bottom of the page and click the &lt;strong&gt;Copy&lt;/strong&gt; button to copy your generated URL:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F87j5DbiHosJup__aZ2Qu8FOln-jIPeFFucBKUO_XsN-6P07jBi7PQrmYBG6pC4G_XUKJ7krDYX6DBTV8YoFIHhb6LPSzUDbg6Gz9RSMph_lDIG_IMGvWAerB70dL9R8-vtZrACTz1DnyjsxcgJmakJI" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F87j5DbiHosJup__aZ2Qu8FOln-jIPeFFucBKUO_XsN-6P07jBi7PQrmYBG6pC4G_XUKJ7krDYX6DBTV8YoFIHhb6LPSzUDbg6Gz9RSMph_lDIG_IMGvWAerB70dL9R8-vtZrACTz1DnyjsxcgJmakJI" width="1600" height="779"&gt;&lt;/a&gt;&lt;/p&gt;
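&lt;p&gt;Under the hood, the generated URL is just Discord’s OAuth2 authorize endpoint with your application ID, the &lt;code&gt;bot&lt;/code&gt; scope, and a permissions bitmask (&lt;code&gt;8&lt;/code&gt; is the Administrator bit). A sketch of how it is assembled, where &lt;code&gt;YOUR_APPLICATION_ID&lt;/code&gt; is a placeholder:&lt;/p&gt;

```javascript
// Builds a bot invite URL like the one the URL Generator produces.
// permissions=8 corresponds to the Administrator permission bit.
const applicationId = 'YOUR_APPLICATION_ID'; // placeholder
const params = new URLSearchParams({
  client_id: applicationId,
  scope: 'bot',
  permissions: '8',
});
const inviteUrl = `https://discord.com/oauth2/authorize?${params}`;
console.log(inviteUrl);
```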

&lt;h2&gt;
  
  
  Adding your application to your Discord server
&lt;/h2&gt;

&lt;p&gt;Open your favorite browser and paste the URL into the address bar. This will open Discord with a message telling you that an external application wants to access your Discord account, along with the actions it will be allowed to perform, which include adding a bot to a server and creating commands. Remember to have a server created and ready to go.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fo3dIuf5-_YP3N1TYvYsEEAM_GQtn73AhtoEseBU9ONg8lCPkmqKeUzjuckC9P0BL3S0_6-6ZgbE-SuiRQY-cse_7S8GIxeuuLHIFFadZtdgtXm6JkJvCA896Pj-9NjpiPgueTGlG5RwC76TxzePHNSI" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fo3dIuf5-_YP3N1TYvYsEEAM_GQtn73AhtoEseBU9ONg8lCPkmqKeUzjuckC9P0BL3S0_6-6ZgbE-SuiRQY-cse_7S8GIxeuuLHIFFadZtdgtXm6JkJvCA896Pj-9NjpiPgueTGlG5RwC76TxzePHNSI" width="1600" height="823"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the server that you would like to add from your list of servers and click the &lt;strong&gt;Continue&lt;/strong&gt; button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FEgrl4rdDk5xf4h1LyambDao3WtbAkjrMN4slJXaSM3b8ZVFcJVgfOSvoe7sAy6RqsbDaEBX1OSB9RkvvhkE53U8Aby4sdTPGGKNKC6yiaAfy-IwmnTaCWDkladptW0cR4xNUEQWrhADSKqOQjvxcgPo" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FEgrl4rdDk5xf4h1LyambDao3WtbAkjrMN4slJXaSM3b8ZVFcJVgfOSvoe7sAy6RqsbDaEBX1OSB9RkvvhkE53U8Aby4sdTPGGKNKC6yiaAfy-IwmnTaCWDkladptW0cR4xNUEQWrhADSKqOQjvxcgPo" width="1600" height="758"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Grant administrator permission to your server and click the &lt;strong&gt;Authorize&lt;/strong&gt; button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FJV1lu1lDG6Dd4v3FePm2V57FrxBtqGYV0243yV62xvrSAKErepvcftCp7v-h9107Ik68f-6sIdPICp5lPvEzoZiDJqOVvo4sbWQeHMW7uV-XtgXB_m3aupK97GVFOfgdF9SkSRGBfAyXNm4PLuH_CQY" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FJV1lu1lDG6Dd4v3FePm2V57FrxBtqGYV0243yV62xvrSAKErepvcftCp7v-h9107Ik68f-6sIdPICp5lPvEzoZiDJqOVvo4sbWQeHMW7uV-XtgXB_m3aupK97GVFOfgdF9SkSRGBfAyXNm4PLuH_CQY" width="1600" height="764"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you do that, you’ll be required to confirm that you are not a robot:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FAynldEuQ9jYqf2TRZbn8S3ugYUTsSIWbJXW-hqxkzrfjiIR47y-OzKBYavbxPyt05aPCnG638UVio6nPA1JbHYt821wytJEQyMuvjPpCvb6q4dF--rv1zmFP98b8tGkIA_10p8o912oUOknUsZxn1C8" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FAynldEuQ9jYqf2TRZbn8S3ugYUTsSIWbJXW-hqxkzrfjiIR47y-OzKBYavbxPyt05aPCnG638UVio6nPA1JbHYt821wytJEQyMuvjPpCvb6q4dF--rv1zmFP98b8tGkIA_10p8o912oUOknUsZxn1C8" width="1600" height="706"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you’ve been verified, your application will be successfully authorized and added to the server you created on Discord. Click on the &lt;strong&gt;Go to Appwrite Bot Server&lt;/strong&gt; button to go to your server:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FHPdaCteh1kd8MLB1pZP8NLJtBiNmvGaR-xBmTqqLt9Sgc0yAKvizI6nV42cCsGNk77z5F1LLPYXndHosrgzEZkyTV47HhP-01H5_f-j51tb5FkSXx02an4RNkrNDOhF5qJMSO8Ry3WIAOTjzoDqlVw8" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FHPdaCteh1kd8MLB1pZP8NLJtBiNmvGaR-xBmTqqLt9Sgc0yAKvizI6nV42cCsGNk77z5F1LLPYXndHosrgzEZkyTV47HhP-01H5_f-j51tb5FkSXx02an4RNkrNDOhF5qJMSO8Ry3WIAOTjzoDqlVw8" width="1600" height="756"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will take you to your server on Discord:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FhNkg3RJBHl8Db6fCVUjKoLQfgVX2pPzK-D6tm8b88pytf-AI3LZIclkRajQO2S6P43AnBcJzKhRLrPJPL1WuGyH2gfo-j13Uag6v8F8L27acJkHXiruYxdHQB_XT5EV0x4vjO-7kBUbXoPOmEaohggY" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FhNkg3RJBHl8Db6fCVUjKoLQfgVX2pPzK-D6tm8b88pytf-AI3LZIclkRajQO2S6P43AnBcJzKhRLrPJPL1WuGyH2gfo-j13Uag6v8F8L27acJkHXiruYxdHQB_XT5EV0x4vjO-7kBUbXoPOmEaohggY" width="1600" height="834"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, you can add commands to your Discord command bot using Appwrite functions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Customizing the template by adding Commands to your Discord command bot
&lt;/h2&gt;

&lt;p&gt;With the Discord command template provided by Appwrite functions, you have access to a variety of customization options. You can do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add, remove, or modify commands.&lt;/li&gt;
&lt;li&gt;Customize the responses of the Discord bot to commands.&lt;/li&gt;
&lt;li&gt;Set permissions for who can use the commands.&lt;/li&gt;
&lt;li&gt;Register the Discord bot to listen for and respond to Discord events, such as a member leaving the server or sending a message.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not only do you have access to several customization options, but you also benefit from the speed the Discord command bot template provides. Instead of spending time developing a Discord bot from scratch, the template offers a pre-built foundation with code you can use to get started quickly. If you want to add more features to your Discord bot, such as new commands, you can easily customize the template by creating a new function and registering it with the Discord bot.&lt;/p&gt;

&lt;p&gt;Let’s see this in action by customizing the template by adding a command to your Discord server that tells jokes.&lt;/p&gt;

&lt;p&gt;In your Discord command bot GitHub repository, you have a &lt;strong&gt;setup.js&lt;/strong&gt; file that registers the command on Discord by defining a name and description for the command. By default, the template already registers a “Hello World Command”. &lt;/p&gt;

&lt;p&gt;Here’s the link to the complete &lt;a href="https://github.com/debemenitammy/discord-command-bot" rel="noopener noreferrer"&gt;source code&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s create another command by registering it in the &lt;strong&gt;setup.js&lt;/strong&gt; file. Modify the code in that file with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;    &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;fetch&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;undici&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;throwIfMissing&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./utils.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;setup&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;throwIfMissing&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;DISCORD_PUBLIC_KEY&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;DISCORD_APPLICATION_ID&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;DISCORD_TOKEN&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;]);&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;registerApi&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`https://discord.com/api/v9/applications/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DISCORD_APPLICATION_ID&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/commands`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;commands&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hello&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Hello World Command&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;jokes&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Tell a random joke&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;]&lt;/span&gt;

      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;responses&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;commands&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;registerApi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;Authorization&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Bot &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DISCORD_TOKEN&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="p"&gt;}))&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;responses&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;some&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Failed to register command&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Command registered successfully&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nf"&gt;setup&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s a detailed explanation of what this code does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;throwIfMissing()&lt;/code&gt; function checks if the environment variables that you set in your Appwrite function exist — here, this will include your &lt;strong&gt;DISCORD_PUBLIC_KEY,&lt;/strong&gt; &lt;strong&gt;DISCORD_APPLICATION_ID,&lt;/strong&gt; and &lt;strong&gt;DISCORD_TOKEN&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;registerApi&lt;/code&gt; variable stores the Discord API URL used to register commands for your application.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;commands&lt;/code&gt; variable is an array of the commands you want to add to Discord.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;responses&lt;/code&gt; variable stores the responses from Discord after a request is made to register each command on your Discord server. If any request fails, the code throws an error; if all succeed, it logs “Command registered successfully” to the console.&lt;/li&gt;
&lt;/ul&gt;
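&lt;p&gt;The &lt;code&gt;throwIfMissing()&lt;/code&gt; helper itself lives in the template’s &lt;strong&gt;utils.js&lt;/strong&gt; file. Its exact implementation may differ, but it behaves roughly like this sketch, which throws a single combined error if any of the listed keys is absent or empty on the object it receives:&lt;/p&gt;

```javascript
// Sketch of the throwIfMissing() helper referenced from utils.js:
// collects every key that is missing or empty on `obj` and throws
// one combined error so all misconfigured variables surface at once.
function throwIfMissing(obj, keys) {
  const missing = [];
  for (const key of keys) {
    if (!(key in obj) || !obj[key]) {
      missing.push(key);
    }
  }
  if (missing.length > 0) {
    throw new Error(`Missing required fields: ${missing.join(', ')}`);
  }
}

// Example: fails fast when any Discord variable has not been set.
// throwIfMissing(process.env, ['DISCORD_PUBLIC_KEY', 'DISCORD_APPLICATION_ID', 'DISCORD_TOKEN']);
```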

&lt;p&gt;Next, go to the &lt;strong&gt;main.js&lt;/strong&gt; file, which contains the code that handles incoming interaction requests from Discord, and modify it with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
    &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;InteractionResponseType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;InteractionType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;verifyKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;discord-interactions&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;throwIfMissing&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./utils.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;log&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;throwIfMissing&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;DISCORD_PUBLIC_KEY&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;DISCORD_APPLICATION_ID&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;DISCORD_TOKEN&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;]);&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nf"&gt;verifyKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
          &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bodyRaw&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;x-signature-ed25519&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
          &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;x-signature-timestamp&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
          &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DISCORD_PUBLIC_KEY&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Invalid request.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Invalid request signature&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;401&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Valid request&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;InteractionType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;APPLICATION_COMMAND&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
        &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hello&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Matched hello command - returning message&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InteractionResponseType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CHANNEL_MESSAGE_WITH_SOURCE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Hello, World!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="mi"&gt;200&lt;/span&gt;
        &lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
       &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nx"&gt;InteractionType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;APPLICATION_COMMAND&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
        &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;jokes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Matched jokes command - returning message&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://v2.jokeapi.dev/joke/Any?type=twopart&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InteractionResponseType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CHANNEL_MESSAGE_WITH_SOURCE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;setup&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; ---------- &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;delivery&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="mi"&gt;200&lt;/span&gt;
        &lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Didn't match command - returning PONG&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InteractionResponseType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PONG&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It verifies that each request genuinely comes from Discord by checking the request signature.&lt;/li&gt;
&lt;li&gt;It matches the name of the invoked command and responds with the appropriate message.&lt;/li&gt;
&lt;/ul&gt;
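&lt;p&gt;If you want to grow the function beyond the two commands above, one option is to factor the command handling into a lookup map. The sketch below is illustrative and not part of the template; the &lt;code&gt;ping&lt;/code&gt; handler and the hard-coded response type value are assumptions made for the example:&lt;/p&gt;

```javascript
// Illustrative sketch, not from the template: map each slash command name
// to a handler so that adding a new command is a one-line change.
// 4 is the numeric value of InteractionResponseType.CHANNEL_MESSAGE_WITH_SOURCE.
const CHANNEL_MESSAGE_WITH_SOURCE = 4;

const handlers = {
  hello: () => 'Hello, World!',
  ping: () => 'Pong!', // hypothetical extra command, for illustration only
};

// Build the body passed to res.json(), or return null to fall through to PONG.
function buildResponse(commandName) {
  const handler = handlers[commandName];
  if (!handler) return null;
  return {
    type: CHANNEL_MESSAGE_WITH_SOURCE,
    data: { content: handler() },
  };
}
```

With this in place, the chain of if-statements in the function body reduces to a single call like buildResponse(interaction.data.name).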

&lt;p&gt;After customizing the template, commit your changes and push them to GitHub.&lt;/p&gt;

&lt;p&gt;Once you push your changes, the build process starts and your code is deployed on Appwrite. Go to your &lt;strong&gt;Appwrite Functions&lt;/strong&gt; page and click the &lt;strong&gt;Deployments&lt;/strong&gt; tab to see your build time and other information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FwVuNmaZk3VV9pLxQJxQF3-NvlgjRxLcWXvuXzYXirs_8yV_m4jnvM0I1KA3K_Y0KqEjjBRcsMlqbMrZRE6P5w8PHZW51UOXwGL9iklj66nDu_i45p_X5K06apQ9HVoVRSuIPW3mMUo_apoi6-B2LvTY" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FwVuNmaZk3VV9pLxQJxQF3-NvlgjRxLcWXvuXzYXirs_8yV_m4jnvM0I1KA3K_Y0KqEjjBRcsMlqbMrZRE6P5w8PHZW51UOXwGL9iklj66nDu_i45p_X5K06apQ9HVoVRSuIPW3mMUo_apoi6-B2LvTY" width="1600" height="685"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down on your &lt;strong&gt;Deployments&lt;/strong&gt; tab to see the status of your deployment:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F7w8zKniHpigHqd_8XCX2kjhGY9iA_5YmGyuSqq6xrQtxo4e-h38c7t6s8IG2Vzdt_Rk7FPkTLwj6f5CzXGR1K7MXROpUahVq0u8OS_ci6ezSR-GzqCdFgO8TAExzBwC6HKGLmgJFCa_az0pRJ-WWzpk" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F7w8zKniHpigHqd_8XCX2kjhGY9iA_5YmGyuSqq6xrQtxo4e-h38c7t6s8IG2Vzdt_Rk7FPkTLwj6f5CzXGR1K7MXROpUahVq0u8OS_ci6ezSR-GzqCdFgO8TAExzBwC6HKGLmgJFCa_az0pRJ-WWzpk" width="1600" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, you can test whether your command was successfully added by going to your Discord server and typing “/jokes”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FxDRkPBLoE7dDJYYrQR9pEhPbGZMFRrOaOM9a65j4szeAvngYNcKp7pUrprQXYKKcliCnYzwLrsBl1Q-56vGxnmui6feeKOBxAUYWGlpynt1GQIyBHZYgDeV_DYme2GWJumERstZRK3KX7eYHjcWIHII" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FxDRkPBLoE7dDJYYrQR9pEhPbGZMFRrOaOM9a65j4szeAvngYNcKp7pUrprQXYKKcliCnYzwLrsBl1Q-56vGxnmui6feeKOBxAUYWGlpynt1GQIyBHZYgDeV_DYme2GWJumERstZRK3KX7eYHjcWIHII" width="1600" height="948"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The description you set for your command appears below the command name:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FyCU6eexn6CcnGfGDwPdhQ4iyH8slyAlxjS_FI4-Bjo8oj3zhrH9jV6ayJyJwjRsrRbSVafLZ-tPzwhUIGyglS5YSuFLxBi3-Q5xSdvxk-fcIzbJpweob-yhdqJkbzhE8vVgCaNcT6XzUqSuFuRIgrr8" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2FyCU6eexn6CcnGfGDwPdhQ4iyH8slyAlxjS_FI4-Bjo8oj3zhrH9jV6ayJyJwjRsrRbSVafLZ-tPzwhUIGyglS5YSuFLxBi3-Q5xSdvxk-fcIzbJpweob-yhdqJkbzhE8vVgCaNcT6XzUqSuFuRIgrr8" width="1600" height="952"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you hit Enter, Discord sends your command to the Appwrite Function for execution and displays the response:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F40dR3DNZVTOncZThrw95_fsVE0QtwoHmDE4PGJx1HlXr-OYHFbq6I2qVdzf5EaCGYIB-rEk7qRCcfTxawhdwOQTT3JABZYLXfm6CcJCmnNjrzWXMZrNRvEHMF1yPvA1BT7bFRchtihAV1xcYKHNJwvU" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2F40dR3DNZVTOncZThrw95_fsVE0QtwoHmDE4PGJx1HlXr-OYHFbq6I2qVdzf5EaCGYIB-rEk7qRCcfTxawhdwOQTT3JABZYLXfm6CcJCmnNjrzWXMZrNRvEHMF1yPvA1BT7bFRchtihAV1xcYKHNJwvU" width="1600" height="948"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This tutorial demonstrated how to build a Discord command bot with Appwrite Cloud Functions, starting from the Discord command bot template that Appwrite provides. You learned how to customize the template and add commands to your Discord server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;You may find the following resources useful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/hackmamba/how-to-use-cloud-functions-to-automate-github-moderation-3g5p"&gt;How to Use Cloud Functions to Automate GitHub Moderation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/hackmamba/keep-your-online-community-safe-build-automated-message-moderation-with-appwrite-cloud-functions-23eg"&gt;Keep Your Online Community Safe: Build Automated Message Moderation with Cloud Functions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>appwrite</category>
      <category>discordcommandbot</category>
      <category>appwritecloudfunctions</category>
    </item>
    <item>
      <title>Demystifying AWS VPC Network Firewall using Terraform</title>
      <dc:creator>Deborah Emeni</dc:creator>
      <pubDate>Tue, 24 Oct 2023 21:10:32 +0000</pubDate>
      <link>https://forem.com/hackmamba/demystifying-aws-vpc-network-firewall-using-terraform-3n3o</link>
      <guid>https://forem.com/hackmamba/demystifying-aws-vpc-network-firewall-using-terraform-3n3o</guid>
      <description>&lt;p&gt;This post was originally published on &lt;a href="https://hackmamba.io/blog/2023/10/demystifying-aws-vpc-network-firewall-using-terraform/" rel="noopener noreferrer"&gt;Hackmamba&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Many organizations that deal with sensitive data, such as financial information, are exposed to cyber attacks like malware infections, data breaches, and distributed denial-of-service (DDoS) attacks, which can result in financial loss. Organizations can mitigate these risks by creating a Virtual Private Cloud (VPC) on a platform like Amazon Web Services (AWS), allowing them to monitor their network for security threats and isolate it from the public internet.&lt;/p&gt;

&lt;p&gt;Organizations must keep their infrastructure secure by implementing measures such as firewalls, data encryption, and access control. Terraform, a powerful and reliable infrastructure-as-code tool, can automate the deployment of security groups, helping control access to an organization’s resources while keeping those security groups compliant with its security policies.&lt;/p&gt;

&lt;p&gt;Building infrastructure manually also invites human error. Combining AWS and Terraform reduces that risk and enables organizations to build secure, scalable infrastructure.&lt;/p&gt;

&lt;p&gt;This article will help you understand the AWS VPC Network Firewall and how it improves cloud security. It will also include a hands-on demonstration of using Terraform to manage and automate your project’s AWS VPC Network Firewall configurations.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS VPC Network Firewall?
&lt;/h2&gt;

&lt;p&gt;To understand the AWS VPC Network Firewall, it's essential to grasp the concept of a firewall. A firewall is a network security device that filters incoming and outgoing network traffic based on security rules, acting as a barrier between a private network and the public internet to block bad or malicious traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FZsAshblU9D2aoaRlO32cP85f5rQAXuwqeL6xYdJ4ystkuRMn2KLemif2568B1SIKM6IFsAOFckAcMn8JfPoMxnahXjFrpc2Vn7O_i4c93LOFPyA5vVMDU67JBepoANrzkv51R0a2m2nND2EfggLBmdc" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FZsAshblU9D2aoaRlO32cP85f5rQAXuwqeL6xYdJ4ystkuRMn2KLemif2568B1SIKM6IFsAOFckAcMn8JfPoMxnahXjFrpc2Vn7O_i4c93LOFPyA5vVMDU67JBepoANrzkv51R0a2m2nND2EfggLBmdc" alt="Illustration of a network firewall" width="1020" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Depending on an organization's or individual's needs, a private network or public internet can be chosen. If there are concerns about system security, cost, control, or performance, a private network becomes the preferred option due to its significant benefits over the public internet. A private network is more secure than the public internet, using private IP addresses and enabling control through security devices like firewalls to manage network traffic.&lt;/p&gt;

&lt;p&gt;Unauthorized access to a private network can severely impact an organization or individual, leading to financial losses from cyber attacks such as data breaches and disruptions, rendering the network unavailable. Implementing security measures such as firewalls and intrusion detection systems is crucial to protect private networks.&lt;/p&gt;

&lt;p&gt;AWS VPC Network Firewall secures and protects your AWS VPC from unauthorized access. As an AWS service, it's managed by Amazon, relieving you of infrastructure management. This service watches incoming and outgoing traffic, identifying and blocking malicious activity to maintain VPC security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of AWS VPC Network Firewall&lt;/strong&gt;&lt;br&gt;
AWS VPC Network Firewall is an excellent choice for organizations due to its numerous benefits.&lt;br&gt;
Let’s explore a few of them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Applications can experience surges or drops in traffic depending on the number of active users. If your app can't handle more users, it might slow down or crash. To prevent this, monitor for issues and consider adding extra resources, like more servers. It's hard to do this yourself, so the AWS VPC Network Firewall can help. If users suddenly increase, it adjusts automatically, saving time and money.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance:&lt;/strong&gt; If your app can't handle many users, it becomes slow and might not work well when many people use it simultaneously. This can make your users dissatisfied, make your app less safe, and even cost you money. To avoid this, you can use AWS VPC Network Firewall. It helps your app handle lots of traffic without slowing down using rule-based filtering and stateful inspection techniques.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost:&lt;/strong&gt; Using AWS VPC Network Firewall is a cost-effective choice, as it allows you to pay only for the resources you use. This means that you are charged based on the amount of your application’s traffic and the number of firewalls deployed. Additionally, since AWS VPC Network Firewall is a managed firewall service, it saves you the cost of managing and maintaining a firewall yourself.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt; AWS VPC Network Firewall provides features such as &lt;a href="https://aws.amazon.com/network-firewall/features/#Intrusion_prevention" rel="noopener noreferrer"&gt;intrusion prevention&lt;/a&gt; (which helps detect and block malicious traffic), &lt;a href="https://aws.amazon.com/network-firewall/features/#Web_filtering" rel="noopener noreferrer"&gt;web filtering&lt;/a&gt; (which can help block traffic), and more for protecting your VPCs from malware and brute force attacks.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Features of the AWS VPC Network Firewall&lt;/strong&gt;&lt;br&gt;
What makes AWS VPC Network Firewall stand out are its extensive features. Let’s take a look at them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stateful firewall&lt;/strong&gt;: This &lt;a href="https://aws.amazon.com/network-firewall/features/#Stateful_firewall" rel="noopener noreferrer"&gt;feature&lt;/a&gt; of AWS VPC Network Firewall keeps track of the connections between your network and other networks to allow or block traffic based on the type of traffic and the direction of the connection. A few examples of what the firewall tracks include source and destination IP addresses, ports, and protocol type. &lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated scaling&lt;/strong&gt;: AWS VPC Network Firewall offers &lt;a href="https://aws.amazon.com/network-firewall/features/#High_availability_and_automated_scaling" rel="noopener noreferrer"&gt;automatic scaling&lt;/a&gt; for the firewall capacity of your network to scale up or down based on the traffic load. &lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Intrusion prevention&lt;/strong&gt;: This feature utilizes &lt;a href="https://docs.aws.amazon.com/network-firewall/latest/developerguide/aws-managed-rule-groups-threat-signature.html" rel="noopener noreferrer"&gt;signature-based detection&lt;/a&gt; to inspect network traffic patterns for matches against known threat signatures to protect your VPC from unauthorized access or malicious activities.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Web filtering&lt;/strong&gt;: This feature blocks unencrypted web traffic to known malicious websites and monitors &lt;a href="https://g.co/kgs/cY4C4a" rel="noopener noreferrer"&gt;fully qualified domain names&lt;/a&gt; (FQDNs) for encrypted web traffic using the &lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/use-network-firewall-to-capture-the-dns-domain-names-from-the-server-name-indication-sni-for-outbound-traffic.html" rel="noopener noreferrer"&gt;Server Name Indication (SNI&lt;/a&gt;) extension that blocks access to specific sites. &lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Central management&lt;/strong&gt;: &lt;a href="https://aws.amazon.com/network-firewall/features/#Central_management_and_visibility" rel="noopener noreferrer"&gt;This feature&lt;/a&gt; centrally manages and enforces firewall policies across multiple VPCs to ensure the same security policies protect all the resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How does AWS Network Firewall work?&lt;/strong&gt;&lt;br&gt;
AWS Network Firewall is deployed inside an &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html" rel="noopener noreferrer"&gt;Amazon VPC&lt;/a&gt;. When you create an AWS account, it comes with a default VPC in each AWS Region, and you can create more VPCs if you choose. &lt;/p&gt;

&lt;p&gt;After creating your VPC, you can add &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html" rel="noopener noreferrer"&gt;subnets&lt;/a&gt; (each subnet must reside within one Availability Zone, which lets you launch AWS resources in separate Availability Zones) and then deploy AWS resources like &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html" rel="noopener noreferrer"&gt;Elastic Compute Cloud (EC2) instances&lt;/a&gt; in your VPC. In addition, you’ll need to configure the &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html" rel="noopener noreferrer"&gt;route tables&lt;/a&gt; for your VPC to send network traffic through the Network Firewall endpoints before AWS Network Firewall can take effect.&lt;/p&gt;

&lt;p&gt;So, the AWS Network Firewall protects the subnets you’ve added to your VPC by filtering the traffic between the subnets and locations outside your VPC. &lt;/p&gt;

&lt;p&gt;The following is an illustration from &lt;a href="https://docs.aws.amazon.com/network-firewall/latest/developerguide/how-it-works.html" rel="noopener noreferrer"&gt;AWS Documentation&lt;/a&gt; of how AWS Network Firewall works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FFQrf-7JzHmqMoPDeVmeH6_bprYyNm5vkY2wNJEGvokqaCCMXUPdwCZxEA4oFmkijBFtbWZwZWc7vX0LNHlPxnu6MhHQFStUn_sUmNGnu63C8yKbqN1EOXfVaREtNwUX4-3zXnLVSaeFO5tPEHXrsQ98" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FFQrf-7JzHmqMoPDeVmeH6_bprYyNm5vkY2wNJEGvokqaCCMXUPdwCZxEA4oFmkijBFtbWZwZWc7vX0LNHlPxnu6MhHQFStUn_sUmNGnu63C8yKbqN1EOXfVaREtNwUX4-3zXnLVSaeFO5tPEHXrsQ98" alt="Illustration showing an AWS Network Firewall" width="401" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The AWS Network Firewall filters traffic using &lt;a href="https://docs.aws.amazon.com/network-firewall/latest/developerguide/rule-groups.html" rel="noopener noreferrer"&gt;rule groups&lt;/a&gt;, either &lt;a href="https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-rule-groups-ips.html" rel="noopener noreferrer"&gt;stateless&lt;/a&gt; (evaluating packets in isolation) or &lt;a href="https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-rule-groups-ips.html" rel="noopener noreferrer"&gt;stateful&lt;/a&gt; (evaluating packets in the context of their traffic flow). You configure these rules inside a &lt;a href="https://docs.aws.amazon.com/network-firewall/latest/developerguide/firewall-policies.html" rel="noopener noreferrer"&gt;firewall policy&lt;/a&gt;. The configuration involves several &lt;a href="https://docs.aws.amazon.com/network-firewall/latest/developerguide/firewall-settings.html" rel="noopener noreferrer"&gt;settings&lt;/a&gt;, including specifying the subnets that host the firewall endpoints in each Availability Zone. &lt;/p&gt;

&lt;p&gt;You can write your stateful rules in Suricata-compatible format, and the Network Firewall will process the rules using a Suricata rules engine.&lt;/p&gt;

&lt;p&gt;Finally, the firewall resource attaches the firewall policy, with its configured inspection rules, to your VPC.&lt;/p&gt;
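&lt;p&gt;Since this article automates the firewall with Terraform, that flow can be sketched using the Terraform AWS provider resources. The names, CIDR ranges, and the sample Suricata rule below are illustrative placeholders, not values from the demonstration later in the article:&lt;/p&gt;

```hcl
# Hedged sketch of the resource chain: rule group -> firewall policy -> firewall.
# All names, CIDRs, and the Suricata rule are placeholders.

resource "aws_networkfirewall_rule_group" "stateful" {
  capacity = 100
  name     = "example-stateful"
  type     = "STATEFUL"

  rule_group {
    rules_source {
      # A Suricata-compatible stateful rule: drop outbound telnet from the VPC.
      rules_string = "drop tcp 10.0.0.0/16 any -> any 23 (msg:\"Block outbound telnet\"; sid:1000001; rev:1;)"
    }
  }
}

resource "aws_networkfirewall_firewall_policy" "example" {
  name = "example-policy"

  firewall_policy {
    stateless_default_actions          = ["aws:forward_to_sfe"]
    stateless_fragment_default_actions = ["aws:forward_to_sfe"]

    stateful_rule_group_reference {
      resource_arn = aws_networkfirewall_rule_group.stateful.arn
    }
  }
}

resource "aws_networkfirewall_firewall" "example" {
  name                = "example-firewall"
  firewall_policy_arn = aws_networkfirewall_firewall_policy.example.arn
  vpc_id              = aws_vpc.main.id      # assumes a VPC defined elsewhere
  subnet_mapping {
    subnet_id = aws_subnet.firewall.id       # one firewall endpoint per AZ
  }
}
```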
&lt;h2&gt;
  
  
  Overview of Amazon Virtual Private Cloud (VPC)
&lt;/h2&gt;

&lt;p&gt;You’ve seen the term Amazon Virtual Private Cloud (VPC) mentioned a few times, so you likely already have some idea of what it is. For clarity, here is a brief overview. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Amazon VPC?&lt;/strong&gt;&lt;br&gt;
Amazon Virtual Private Cloud (Amazon VPC) is an AWS service that lets you establish a secure and isolated virtual network for deploying AWS resources. This network resembles a traditional on-premises network but leverages AWS's scalable infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core features of Amazon VPC&lt;/strong&gt;&lt;br&gt;
Amazon VPC consists of primary features, as shown in the illustration below from the Amazon VPC &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zv12ltyjragr2rwfozb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zv12ltyjragr2rwfozb.png" alt="An Illustration of a VPC (Source: Amazon VPC documentation)" width="521" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The core features include the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;VPCs&lt;/strong&gt;: VPCs (Virtual Private Clouds) are isolated sections of the AWS cloud that you can use to launch AWS resources. Each VPC has its own IP address range, subnets, and route tables.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Subnets&lt;/strong&gt;: Subnets are smaller divisions of a VPC. You can launch AWS resources, such as Amazon &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html" rel="noopener noreferrer"&gt;EC2 instances&lt;/a&gt;, into subnets. Subnets must be located within a single Availability Zone.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Route tables&lt;/strong&gt;: Route tables control how traffic is routed within a VPC. Each route table contains a list of routes, specifying the destination IP address range and the next hop router.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Internet gateways&lt;/strong&gt;: Internet gateways allow traffic to flow between a VPC and the internet. To connect your VPC to other VPCs or to your on-premises network, you would instead use options such as VPC peering or a VPN connection.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;VPC Peering&lt;/strong&gt;: VPC Peering allows you to connect two VPCs together without using an internet gateway or VPN. This can be useful for connecting VPCs that are in the same region or different regions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Types of Amazon VPCs&lt;/strong&gt;&lt;br&gt;
There are two types of Amazon VPCs: default VPCs and non-default VPCs. Let’s explore them below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Default VPCs&lt;/strong&gt;: AWS creates these &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html" rel="noopener noreferrer"&gt;VPCs&lt;/a&gt; automatically, one per Region, when you create an AWS account. Each comes pre-configured with a subnet in every Availability Zone in the Region. You can use a default VPC to launch your Amazon EC2 instances, Amazon Relational Database Service (RDS) instances, and other AWS resources. To create (or recreate) a default VPC, check the &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html" rel="noopener noreferrer"&gt;user guide&lt;/a&gt; provided by Amazon VPC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The illustration below shows the components of a default VPC from the &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html" rel="noopener noreferrer"&gt;Amazon VPC documentation&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvnuhsvgrsye4cmb1gpp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvnuhsvgrsye4cmb1gpp.png" alt="Illustration of a default VPC (Source: Amazon VPC Documentation)" width="503" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Non-default VPCs&lt;/strong&gt;: These are VPCs that you create yourself. You have more control over the configuration of a non-default VPC, such as the IP address range, the number of subnets, and the routing configuration. You can also create non-default VPCs in multiple Regions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Advantages of using Amazon VPC&lt;/strong&gt;&lt;br&gt;
Amazon VPC offers several advantages over a traditional data center, including the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost-effectiveness&lt;/strong&gt;: Amazon VPC is a cost-effective way to host your applications and data. You only pay for the resources that you use.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Amazon VPC is scalable, so you can easily add or remove resources. You can also create multiple VPCs to meet the needs of different applications or workloads.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;: Amazon VPC provides high security for your data and applications. Your VPC is isolated from other VPCs and the public internet, and you can control who can access your resources.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reliability&lt;/strong&gt;: Amazon VPC is a reliable platform that is backed by Amazon's infrastructure. Your data is stored in multiple Availability Zones, protecting it from unplanned outages.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Things to consider when setting up an Amazon VPC&lt;/strong&gt;&lt;br&gt;
Here are some things to consider when setting up your Amazon VPC.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Determining IP address ranges&lt;/strong&gt;: When setting up your VPC, you need to consider how the resources within your VPC will communicate with each other and how they will communicate with resources over the internet. 

&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, depending on the size of your VPC, which in turn depends on the number of resources you plan to deploy, you'll need to select a range of IP addresses (such as 172.16.0.0/16 or 192.168.0.0/16) for this purpose. The range is expressed in &lt;a href="https://aws.amazon.com/what-is/cidr/#:~:text=Classless%20Inter%2DDomain%20Routing%20(CIDR)%20allows%20network%20routers%20to,specified%20by%20the%20CIDR%20suffix." rel="noopener noreferrer"&gt;Classless Inter-Domain Routing&lt;/a&gt; (CIDR) notation, where the suffix indicates how many leading bits are fixed as the network prefix.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
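
&lt;p&gt;As a quick illustration, the CIDR suffix determines how many addresses the block contains (2 raised to the number of remaining bits):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;10.0.0.0/16  -&gt;  2^(32-16) = 65,536 addresses
10.0.0.0/24  -&gt;  2^(32-24) = 256 addresses
10.0.0.0/28  -&gt;  2^(32-28) = 16 addresses
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that AWS reserves five addresses in every subnet CIDR block (the first four and the last), so the usable count per subnet is slightly lower.&lt;/p&gt;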

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Selecting Availability Zones (AZs)&lt;/strong&gt;: You must consider your application's availability and fault tolerance when setting up your VPC. When you create your VPC, you can deploy your resources in multiple Availability Zones: isolated locations within a Region, each with its own networking infrastructure, power, and cooling. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Doing so would ensure that your application does not suffer a single point of failure. For instance, if one Availability Zone goes down, the other Availability Zones will still have your application and data.&lt;/p&gt;
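
&lt;p&gt;In Terraform (used later in this article), spreading subnets across AZs is a matter of setting each subnet's availability_zone argument. A minimal sketch, assuming a VPC resource named aws_vpc.example and the us-east-1 region:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_subnet" "subnet_a" {
  vpc_id            = aws_vpc.example.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"   # first AZ
}

resource "aws_subnet" "subnet_b" {
  vpc_id            = aws_vpc.example.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"   # second AZ for fault tolerance
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;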

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Determining Internet Connection for Resources&lt;/strong&gt;: You need to decide how your resources in the VPC will connect to the internet. You can place them in a public subnet with a direct route to an internet gateway, or in a private subnet that reaches the internet through a NAT gateway; to reach your on-premises network instead, you can use a VPN or AWS Direct Connect.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Creating the VPC&lt;/strong&gt;: Once you have considered the above factors, you can create your VPC. When creating your VPC, you need to specify the IP address range, the number of AZs, and the internet connectivity options.&lt;br&gt;
&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consideration of the Applications’ Architectural Design&lt;/strong&gt;: When designing your VPC, you need to consider the architectural design of your applications. For example, if you have an application that needs to be highly available, you must place its components in multiple AZs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  How to utilize Terraform for AWS VPC Network Firewall management
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; is an Infrastructure as Code (IaC) tool used to automate the creation and management of AWS VPC Network Firewall rules. It streamlines the process by providing automated provisioning, ensuring consistent and repeatable deployments, enabling version control for tracking changes and easy rollbacks, and aiding in audibility to maintain compliance with security policies.&lt;/p&gt;

&lt;p&gt;Let’s do a quick demo showing how to use Terraform to create and manage AWS VPC Network Firewall rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
To follow along, you’ll need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Knowledge of Terraform&lt;/li&gt;
&lt;li&gt;Terraform installed (check the &lt;a href="https://developer.hashicorp.com/terraform/downloads" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; to install Terraform for your operating system)&lt;/li&gt;
&lt;li&gt;AWS Command Line Interface (CLI) installed (install &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Terminal&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Creating project directory&lt;/strong&gt;&lt;br&gt;
Now, let’s create a directory called &lt;strong&gt;terraform_project&lt;/strong&gt; that will house a configuration file called &lt;strong&gt;conf.tf&lt;/strong&gt;. In your terminal, run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir terraform_project
cd terraform_project
touch conf.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we want to utilize Terraform for AWS VPC Network Firewall management, we’ll need to define and configure AWS as our Cloud Provider for the project. In your &lt;strong&gt;conf.tf&lt;/strong&gt; file, add the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-east-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The configuration above sets the AWS provider to operate in the “us-east-1” region via the ‘region’ parameter, which means that any resources you define within this configuration will be created in that region. AWS has its own set of &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html" rel="noopener noreferrer"&gt;regions&lt;/a&gt; available for deployment, so you can use any region of your choice.&lt;/p&gt;

&lt;p&gt;Before proceeding, ensure that you have configured the access key ID and secret access key from your AWS account so that Terraform can authenticate with AWS. &lt;/p&gt;
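
&lt;p&gt;One common way to do this, assuming you have the AWS CLI installed, is the &lt;strong&gt;aws configure&lt;/strong&gt; command; alternatively, you can export the credentials as environment variables. The key values below are placeholders for your own:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure
# Prompts for: AWS Access Key ID, AWS Secret Access Key,
# Default region name, and Default output format

# Alternatively, export the credentials as environment variables:
export AWS_ACCESS_KEY_ID="&lt;your-access-key-id&gt;"
export AWS_SECRET_ACCESS_KEY="&lt;your-secret-access-key&gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;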

&lt;p&gt;&lt;strong&gt;Defining the Resources&lt;/strong&gt;&lt;br&gt;
Now that you have defined the cloud provider, you can define the resources you want to create with it. As mentioned earlier in the article, the AWS provider offers a wide range of resource types that you can create using Terraform, such as S3 buckets ("aws_s3_bucket"), EC2 instances ("aws_instance"), VPCs ("aws_vpc"), security groups ("aws_security_group"), subnets ("aws_subnet"), and &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs" rel="noopener noreferrer"&gt;more&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s create an AWS EC2 instance, which involves setting up networking components like a VPC, a subnet, and a security group. Using Terraform’s configuration language, define the following in your &lt;strong&gt;conf.tf&lt;/strong&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "sample_vpc" {
   cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "sample_subnet" {
   vpc_id     = aws_vpc.sample_vpc.id
   cidr_block = "10.0.0.0/24"
}
resource "aws_security_group" "sample_security_group" {
   name_prefix = "sample-security-group"
   ingress {
     from_port = 22
     to_port   = 22
     protocol  = "tcp"
     cidr_blocks = ["0.0.0.0/0"]
   }
 }
 resource "aws_instance" "sample_instance" {
   ami           = "ami-0c55b159cbfafe1f0"
   instance_type = "t2.micro"
   subnet_id     = aws_subnet.sample_subnet.id
   security_groups = [aws_security_group.sample_security_group.name]
      tags = {
        Name = "SampleInstance"
      }
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The configuration above does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The aws_vpc resource creates a VPC with the specified CIDR block.&lt;/li&gt;
&lt;li&gt;The aws_subnet resource creates a subnet within the VPC.&lt;/li&gt;
&lt;li&gt;The aws_security_group resource defines a security group that allows incoming secure shell (SSH) traffic (port 22) from anywhere (0.0.0.0/0).&lt;/li&gt;
&lt;li&gt;The aws_instance resource uses the previously created VPC, subnet, and security group to launch the EC2 instance. Note that AWS provides several EC2 instance types, differing in terms of memory, CPU, storage, and more. Here, you’re using the "t2.micro" instance that is designed for small to medium workloads.&lt;/li&gt;
&lt;/ul&gt;
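
&lt;p&gt;The same workflow extends to the AWS Network Firewall resources this article is about. As a minimal sketch (the resource types come from the Terraform AWS provider; the denied domain and rule capacity here are illustrative assumptions), a stateful rule group, a firewall policy, and a firewall attached to the VPC could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_networkfirewall_rule_group" "sample_rules" {
  capacity = 100
  name     = "sample-stateful-rules"
  type     = "STATEFUL"

  rule_group {
    rules_source {
      rules_source_list {
        generated_rules_type = "DENYLIST"
        target_types         = ["HTTP_HOST"]
        targets              = ["test.example.com"]   # illustrative blocked domain
      }
    }
  }
}

resource "aws_networkfirewall_firewall_policy" "sample_policy" {
  name = "sample-firewall-policy"

  firewall_policy {
    stateless_default_actions          = ["aws:forward_to_sfe"]
    stateless_fragment_default_actions = ["aws:forward_to_sfe"]

    stateful_rule_group_reference {
      resource_arn = aws_networkfirewall_rule_group.sample_rules.arn
    }
  }
}

resource "aws_networkfirewall_firewall" "sample_firewall" {
  name                = "sample-firewall"
  firewall_policy_arn = aws_networkfirewall_firewall_policy.sample_policy.arn
  vpc_id              = aws_vpc.sample_vpc.id

  subnet_mapping {
    subnet_id = aws_subnet.sample_subnet.id
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In practice, the firewall is usually placed in a dedicated subnet so that route tables can steer traffic through its endpoints.&lt;/p&gt;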

&lt;p&gt;&lt;strong&gt;Initializing your Terraform configuration&lt;/strong&gt;&lt;br&gt;
Now that you have defined the resources, you need to initialize your Terraform configuration to ensure that the necessary components, such as plugins and state dependencies, are in place.&lt;/p&gt;

&lt;p&gt;Within your project directory, run the following command to initialize Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running the command, Terraform will be initialized:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F6pWd7c11Z_NmI3KhOw59ErLsqHH9rjjJHNue41ftgQJKFoFFfs-wm-JBm3JVmi3WSS4rcKwOhrDGFbtkWea6B775Mm1UFy6mCRP1HS_2Xo0jtkoq7seOsmeTz9v4Kqhk1wBhDPJhW_0eogUN9WTvnc4" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F6pWd7c11Z_NmI3KhOw59ErLsqHH9rjjJHNue41ftgQJKFoFFfs-wm-JBm3JVmi3WSS4rcKwOhrDGFbtkWea6B775Mm1UFy6mCRP1HS_2Xo0jtkoq7seOsmeTz9v4Kqhk1wBhDPJhW_0eogUN9WTvnc4" width="1600" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Previewing changes&lt;/strong&gt;&lt;br&gt;
This precautionary step catches errors in your configuration early, reducing the chances of failed deployments, and shows you what changes Terraform will make to your infrastructure before it makes them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the command runs successfully, Terraform will use the selected providers to generate the execution plan, as shown in the screenshots below:&lt;/p&gt;

&lt;p&gt;1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FbDR-il-h-ks7oxgdjNNapGsLQjtyyGY_c_L84ANWpB8C937AdcMjV0nUScxfVD_DWGfosb9Jn6G0Cge8aans6iu8P83oUbn9ShGP0FldirCdsgzB7WvhVxDBYIleMb8wHo6yfeJRWEyw_CtfoW6_QLI" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FbDR-il-h-ks7oxgdjNNapGsLQjtyyGY_c_L84ANWpB8C937AdcMjV0nUScxfVD_DWGfosb9Jn6G0Cge8aans6iu8P83oUbn9ShGP0FldirCdsgzB7WvhVxDBYIleMb8wHo6yfeJRWEyw_CtfoW6_QLI" width="1600" height="948"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FalQezpVI5k5ha-pmWWE4t4R_5QS4Hb9SCkP-1eWBUAbIso09Mmj2HWkObs3JRSdScyC6-AbBVgOkk9EGnXOJlo7grhZqqci06AnfrknK0uMZIRXufhPoSmLKjl1GyckcldcFHlWbWaS8FHVm642ALsc" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FalQezpVI5k5ha-pmWWE4t4R_5QS4Hb9SCkP-1eWBUAbIso09Mmj2HWkObs3JRSdScyC6-AbBVgOkk9EGnXOJlo7grhZqqci06AnfrknK0uMZIRXufhPoSmLKjl1GyckcldcFHlWbWaS8FHVm642ALsc" width="1600" height="949"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FTNTsTJoJHRVBC-vIQickDfkprY9Vm2vx2s86t3jkMMjLIrJczeinpt7PE_NyhxOdS6Kbff6XQOOwygnpPzC6FbeS_mkK1Kl3vpcO4OIeDpw5oACIYXI3QXNgIZoJZP71VipcPz7CkLjK7YXb4zn2nDc" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FTNTsTJoJHRVBC-vIQickDfkprY9Vm2vx2s86t3jkMMjLIrJczeinpt7PE_NyhxOdS6Kbff6XQOOwygnpPzC6FbeS_mkK1Kl3vpcO4OIeDpw5oACIYXI3QXNgIZoJZP71VipcPz7CkLjK7YXb4zn2nDc" width="1600" height="948"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F0fBCBEhj3P6RlLw6T9C3P1ogStx1nj_nJVL2Vbb81Hc062U4K9dzz1nL75c1OWgtpbb62vplkDoSQTtfRjPg1kxD1BonbjwEFQoWB1ENWJEre5U44EQu0yQGBJUo7ZiHUYqnHSx8GaLoBGH6wABXXqw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2F0fBCBEhj3P6RlLw6T9C3P1ogStx1nj_nJVL2Vbb81Hc062U4K9dzz1nL75c1OWgtpbb62vplkDoSQTtfRjPg1kxD1BonbjwEFQoWB1ENWJEre5U44EQu0yQGBJUo7ZiHUYqnHSx8GaLoBGH6wABXXqw" width="1600" height="941"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Applying changes&lt;/strong&gt;&lt;br&gt;
Finally, execute the planned modifications so that Terraform creates or updates the resources defined in your configuration. Run the following command in your project’s directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2FKBU9AD13nqS0WpTedBeV7ildbX1n5IBAh6HP80oSrnfnmPRKsQEzbOQtxpDhHfZrclW9ha11YJ4rmi0vvox3g71zZgHvH-mcyWYfOYJqAEI_m1kMrZ4u3NeQgPJYggOrWzsmnXBVsFNYDTqRcPVAK9g" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh6.googleusercontent.com%2FKBU9AD13nqS0WpTedBeV7ildbX1n5IBAh6HP80oSrnfnmPRKsQEzbOQtxpDhHfZrclW9ha11YJ4rmi0vvox3g71zZgHvH-mcyWYfOYJqAEI_m1kMrZ4u3NeQgPJYggOrWzsmnXBVsFNYDTqRcPVAK9g" width="1600" height="952"&gt;&lt;/a&gt;&lt;/p&gt;
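
&lt;p&gt;Once the apply completes, the VPC, subnet, security group, and EC2 instance exist in your AWS account. When you’re done experimenting, you can tear everything down with a single command to avoid ongoing charges:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Terraform will list the resources it plans to delete and ask for confirmation before removing them.&lt;/p&gt;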

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You’ve come to the end of this article, where you learned about AWS VPC Network Firewall, including what it is, its benefits, features, and how it works. You also learned about Amazon VPC, including what it is, its core features, types, and advantages. The article also included a demo showing how to set up and use Amazon VPC and Terraform for AWS VPC Network Firewall Management. &lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;You may find the following resources helpful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.hashicorp.com/terraform/docs" rel="noopener noreferrer"&gt;Terraform Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/network-firewall/latest/developerguide/what-is-aws-network-firewall.html" rel="noopener noreferrer"&gt;AWS Network Firewall Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>How Appwrite Cloud frees up time for freelancers to drive creative solutions</title>
      <dc:creator>Deborah Emeni</dc:creator>
      <pubDate>Thu, 03 Aug 2023 13:37:19 +0000</pubDate>
      <link>https://forem.com/hackmamba/how-appwrite-cloud-frees-up-time-for-freelancers-to-drive-creative-solutions-279k</link>
      <guid>https://forem.com/hackmamba/how-appwrite-cloud-frees-up-time-for-freelancers-to-drive-creative-solutions-279k</guid>
<description>&lt;p&gt;Juggling multiple projects, freelancers worldwide often struggle to meet tight deadlines and maintain high-quality work. This struggle not only strains them but also hampers their productivity: time spent on mundane tasks for one project is time taken away from essential work on another.&lt;/p&gt;

&lt;p&gt;Fortunately, tools like &lt;a href="https://appwrite.io/cloud/?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;Appwrite Cloud&lt;/a&gt; exist to help freelancers save time and automate tedious tasks, thereby improving their productivity. Appwrite Cloud offers numerous benefits, allowing freelancers to focus on delivering high-quality work to their clients.&lt;/p&gt;

&lt;p&gt;This article discusses the time-saving features in Appwrite Cloud and how you can leverage them as a freelancer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is a time-saving tool essential for freelancers?
&lt;/h2&gt;

&lt;p&gt;Most freelancers strive for maximum productivity to deliver high-quality work and satisfy their clients. They often use time-saving tools to handle repetitive and common tasks such as project management, organization, communication, file sharing, collaboration, task tracking, etc. &lt;/p&gt;

&lt;p&gt;Let's explore the importance of a time-saving tool in the life of a freelancer through this illustration below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FHv49VTAD63CcwuMZr8ESTV7ftFt9FZ1ndCxz6Nb3bKWVizfRccaHvR_KpNyYenUQEJI2xnrSwZhf6t4UV6_peBdKOhSgHc5bF2v0ZB0dQAqQ16eabR48MgCg6ePC5uoUvxXXtnqdQ_QcXBE-cgx8GSk" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FHv49VTAD63CcwuMZr8ESTV7ftFt9FZ1ndCxz6Nb3bKWVizfRccaHvR_KpNyYenUQEJI2xnrSwZhf6t4UV6_peBdKOhSgHc5bF2v0ZB0dQAqQ16eabR48MgCg6ePC5uoUvxXXtnqdQ_QcXBE-cgx8GSk" alt="A simple illustration of a freelancer managing their work" width="1600" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;John is a freelance programmer who handles multiple projects for different clients simultaneously. These projects require his attention and skill. On a regular day, John faces challenges in manually tracking tasks, managing deadlines, and organizing his workflow.&lt;/p&gt;

&lt;p&gt;Luckily for John, he discovered a project management tool that could help him streamline processes, centralize project information, automate repetitive tasks, track his time, set reminders, collaborate with clients, and prioritize work effectively.&lt;/p&gt;

&lt;p&gt;As a result, John can now spend more time on coding and problem-solving, allowing him to deliver high-quality work promptly and maximize his productivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview of Appwrite Cloud
&lt;/h2&gt;

&lt;p&gt;Appwrite Cloud is the hosted version of &lt;a href="https://appwrite.io/?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;Appwrite&lt;/a&gt;, an open-source backend as a service (BaaS) platform that offers a range of features and tools, including time-saving capabilities for freelancers. As a BaaS platform, Appwrite handles the entire backend infrastructure, such as REST API management and server provisioning, relieving freelancers from these responsibilities.&lt;/p&gt;

&lt;p&gt;Freelancers can use the Appwrite Cloud platform to easily incorporate features like user management, file storage, and push notifications into their applications without developing them from scratch. Moreover, Appwrite Cloud offers a collaborative environment for freelancers to chat, work together on their apps, and conduct code reviews or issue tracking.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Appwrite Cloud benefits freelancers
&lt;/h2&gt;

&lt;p&gt;Let’s explore some of the time-saving features Appwrite Cloud offers and how they significantly enhance the productivity of freelancers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated deployment&lt;/strong&gt;&lt;br&gt;
Freelancers working on multiple projects simultaneously may experience delays in manually deploying their code changes. The manual deployment process can hinder progress when new features must be deployed to production and when obtaining user feedback is crucial. As a result, freelancers might deploy features to production that could adversely affect the user experience, leading to a loss of user trust. This, in turn, would require additional time to fix and update, potentially leaving the client dissatisfied with the freelancer's work quality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FlUxM2N67QfKk6jNnHQ1YFK9Gu-fJnVWZY7dBdwF1dCmuz_6oVhq-f9qFi5W3zionznNdl867CQi3JdHS0K7yF1HF2g-AkpH_SXkIvNUTM3-bwWSkUgEPKki60YlyWxsdszLkBzPnr4zdbQhu1Z2Rg74" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FlUxM2N67QfKk6jNnHQ1YFK9Gu-fJnVWZY7dBdwF1dCmuz_6oVhq-f9qFi5W3zionznNdl867CQi3JdHS0K7yF1HF2g-AkpH_SXkIvNUTM3-bwWSkUgEPKki60YlyWxsdszLkBzPnr4zdbQhu1Z2Rg74" alt="Illustration from freepik" width="1600" height="900"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To avoid such situations, freelancers can utilize &lt;a href="https://appwrite.io/docs/command-line-deployment" rel="noopener noreferrer"&gt;Appwrite Cloud's automated deployment feature&lt;/a&gt;, which automatically deploys their code changes to production. This enhances freelancers' productivity by saving time on manual deployment, allowing them to focus on tasks like debugging or building new features.&lt;/p&gt;

&lt;p&gt;The automated deployment feature also prevents deploying bad code changes to production by automatically running a series of tests before deploying the changes. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;br&gt;
Several issues may arise when a freelancer fails to handle a traffic spike in an application on time. The application's performance can be affected, leading to downtimes that impact the user experience. Additionally, the business and the client may suffer financial losses, which would reflect poorly on the freelancer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2Fp1zLrKd5KAcASx_kO_EzQxu9zUAiCykS87d3tKuCGLD7cCLDuF025UOqCNqoIYE6Qi5LozJAq0eZkLpRxrFggHMyelblz4Jq9-wAYK-bhJyOqIDPGm_wwVlLIXFCm6hRX0cAv2qi5TusW7_upXvm8lw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2Fp1zLrKd5KAcASx_kO_EzQxu9zUAiCykS87d3tKuCGLD7cCLDuF025UOqCNqoIYE6Qi5LozJAq0eZkLpRxrFggHMyelblz4Jq9-wAYK-bhJyOqIDPGm_wwVlLIXFCm6hRX0cAv2qi5TusW7_upXvm8lw" alt="Illustration from freepik" width="1600" height="1600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, freelancers can leverage Appwrite Cloud’s &lt;a href="https://appwrite.io/docs/production#scaling" rel="noopener noreferrer"&gt;auto scaling&lt;/a&gt; feature, which enables simple scaling of applications up or down based on traffic, without worrying about infrastructure or manual adjustments.&lt;/p&gt;

&lt;p&gt;Freelancers have several options for utilizing the auto scaling feature, including: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Manual scaling&lt;/strong&gt;: Freelancers can manually adjust the allocation of resources to their applications, allowing them to handle sudden traffic spikes by increasing or decreasing the resources as needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic scaling&lt;/strong&gt;: Freelancers can let Appwrite Cloud automatically scale their applications up or down dynamically, based on the workload.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Load balancing&lt;/strong&gt;: Appwrite Cloud offers load-balancing capabilities to enhance application availability and performance, enabling the distribution of applications across multiple servers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Real-time monitoring&lt;/strong&gt;&lt;br&gt;
Real-time updates play a vital role in ensuring high-quality work delivery and optimal application performance. When developers don't receive immediate notifications about errors in their application, they may unknowingly continue working with incorrect information, leading to potential issues in other parts of the application and causing delays in resolving the problem. Consequently, this could waste time and effort and hinder development. Fortunately, with Appwrite Cloud, freelancers can swiftly identify and address issues by promptly receiving error notifications — improving efficiency, client satisfaction, and meeting project deadlines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FqgNKgm8XHUoXPCZpXEd6_IYMLBLhGnrhnmRwqLBnilvER49z1WZd31rzTt2WbiTCkS-4SIggy3ywc08Hiyyb7TlT1oMoI9NzDmscobM0Q02yCqoqSRFfR53l4ZcM7UFylheJ8SB0uoP8eVKMtOSLV_s" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh3.googleusercontent.com%2FqgNKgm8XHUoXPCZpXEd6_IYMLBLhGnrhnmRwqLBnilvER49z1WZd31rzTt2WbiTCkS-4SIggy3ywc08Hiyyb7TlT1oMoI9NzDmscobM0Q02yCqoqSRFfR53l4ZcM7UFylheJ8SB0uoP8eVKMtOSLV_s" alt="Illustration from freepik" width="1600" height="1600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Appwrite Cloud offers a &lt;a href="https://appwrite.io/docs/realtime?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;real-time&lt;/a&gt; feature that allows developers to seamlessly integrate real-time updates into their applications. By utilizing Appwrite's real-time feature, freelancers can create their own dashboard to receive timely updates and notifications whenever changes occur, facilitating efficient workflow, troubleshooting, and collaboration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated workflows&lt;/strong&gt;&lt;br&gt;
Freelancers regularly perform repetitive tasks such as writing code for similar functionalities across various projects, updating and troubleshooting code, conducting software testing, deploying applications, and communicating with clients or team members. The manual execution of these tasks can be challenging, prone to errors, time-consuming, counterproductive, and inefficient.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2FyjFB3hz_Eb7Y5H3wJBtgdKKbjBKrJfbKLiSDsGLNOIhqOn2MfSVKbVdlx8bZGgtJQaskF3aLpKM4bJPJpGt76YYU1i1OHwUDni9JHelmSIrTYd5-N3AZXiMGiJlJJdDyLku2PHblUi3HTO70vekGNdw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh4.googleusercontent.com%2FyjFB3hz_Eb7Y5H3wJBtgdKKbjBKrJfbKLiSDsGLNOIhqOn2MfSVKbVdlx8bZGgtJQaskF3aLpKM4bJPJpGt76YYU1i1OHwUDni9JHelmSIrTYd5-N3AZXiMGiJlJJdDyLku2PHblUi3HTO70vekGNdw" alt="Illustration from freepik" width="1600" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Automating these repetitive tasks is highly productive, resulting in better accuracy, improved quality of work, and time savings. Freelancers can easily automate these tasks using the Automated Workflow feature provided by Appwrite Cloud, which seamlessly integrates with popular CI/CD tools like &lt;a href="https://about.gitlab.com/" rel="noopener noreferrer"&gt;GitLab&lt;/a&gt; and &lt;a href="https://github.com/features/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Freelancers can utilize Appwrite Cloud's serverless functions to configure workflows that perform actions, such as emailing the client whenever a project update is made.&lt;/p&gt;
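
&lt;p&gt;A workflow like that boils down to an event handler: receive an event, decide whether it matters, and build the notification. The sketch below shows that shape; the event fields and helper name are assumptions for demonstration, and in Appwrite Cloud this logic would live inside a serverless function triggered by a database event:&lt;/p&gt;

```javascript
// Illustrative event-driven workflow: when a project is updated,
// compose the email to send to the client. Field names are assumed.
function buildClientEmail(event) {
  if (event.type !== 'project.updated') return null; // ignore other events
  return {
    to: event.clientEmail,
    subject: 'Update on your project: ' + event.projectName,
    body:
      'Hi, your project "' + event.projectName +
      '" was just updated: ' + event.summary,
  };
}

const email = buildClientEmail({
  type: 'project.updated',
  clientEmail: 'client@example.com',
  projectName: 'Landing page',
  summary: 'New hero section deployed to staging.',
});
console.log(email.subject); // 'Update on your project: Landing page'
```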

&lt;p&gt;&lt;strong&gt;Database management&lt;/strong&gt;&lt;br&gt;
Freelancers may encounter challenges in creating, managing, and accessing databases for their projects, which can hinder their productivity. These challenges may arise from inefficient database systems or a lack of seamless access to databases, resulting in time-consuming processes for organizing and storing data. Consequently, the quality of work delivered may also be affected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FNNUvk4XbwvnM_EA0BdJvz4W3CTdld5M6sfLNUhuk0Zwl9oZGH-p2a8wCriHLWxXqy3mMsAputgndAFeCeBY3Wky6F5syWrhloo-kJ9tdce4dGk3T6qiU8rFGScYH0YVJQFF6GcjLzKSJ6s-XLd9RBZE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh5.googleusercontent.com%2FNNUvk4XbwvnM_EA0BdJvz4W3CTdld5M6sfLNUhuk0Zwl9oZGH-p2a8wCriHLWxXqy3mMsAputgndAFeCeBY3Wky6F5syWrhloo-kJ9tdce4dGk3T6qiU8rFGScYH0YVJQFF6GcjLzKSJ6s-XLd9RBZE" alt="Illustration from freepik" width="1600" height="1600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nevertheless, freelancers can effectively and securely address these challenges by utilizing the &lt;a href="https://appwrite.io/docs/databases?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;Database Management&lt;/a&gt; feature offered by Appwrite Cloud. This feature facilitates the creation, management, and access of databases, enabling freelancers to handle their projects in a timely manner and efficiently.&lt;/p&gt;

&lt;p&gt;Check out this &lt;a href="https://dev.to/hackmamba/getting-started-with-appwrite-cloud-and-flutter-ao1"&gt;article&lt;/a&gt; to create an account and explore Appwrite.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;This article explored the time-saving features of Appwrite Cloud and how they enhance freelancers' productivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;You may find the following resources useful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://dev.to/hackmamba/breaking-through-growth-barriers-how-appwrites-cloud-enables-scalability-1f57"&gt;How Appwrite's Cloud&lt;/a&gt; &lt;a href="https://dev.to/hackmamba/breaking-through-growth-barriers-how-appwrites-cloud-enables-scalability-1f57"&gt;e&lt;/a&gt;&lt;a href="https://dev.to/hackmamba/breaking-through-growth-barriers-how-appwrites-cloud-enables-scalability-1f57"&gt;nables&lt;/a&gt; &lt;a href="https://dev.to/hackmamba/breaking-through-growth-barriers-how-appwrites-cloud-enables-scalability-1f57"&gt;s&lt;/a&gt;&lt;a href="https://dev.to/hackmamba/breaking-through-growth-barriers-how-appwrites-cloud-enables-scalability-1f57"&gt;calability&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/hackmamba/how-appwrite-cloud-can-help-you-build-scalable-web-and-mobile-apps-3fe1"&gt;How&lt;/a&gt; &lt;a href="https://dev.to/hackmamba/how-appwrite-cloud-can-help-you-build-scalable-web-and-mobile-apps-3fe1"&gt;A&lt;/a&gt;&lt;a href="https://dev.to/hackmamba/how-appwrite-cloud-can-help-you-build-scalable-web-and-mobile-apps-3fe1"&gt;ppwrite&lt;/a&gt; &lt;a href="https://dev.to/hackmamba/how-appwrite-cloud-can-help-you-build-scalable-web-and-mobile-apps-3fe1"&gt;C&lt;/a&gt;&lt;a href="https://dev.to/hackmamba/how-appwrite-cloud-can-help-you-build-scalable-web-and-mobile-apps-3fe1"&gt;loud can help you build scalable web and mobile apps&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://appwrite.io/docs?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;Appwrite&lt;/a&gt; &lt;a href="https://appwrite.io/docs?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;d&lt;/a&gt;&lt;a href="https://appwrite.io/docs?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;ocumentation&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/hackmamba/how-to-use-appwrite-cloud-database-in-your-nuxtjs-app-3e99"&gt;How to use Appwrite&lt;/a&gt; &lt;a href="https://dev.to/hackmamba/how-to-use-appwrite-cloud-database-in-your-nuxtjs-app-3e99"&gt;C&lt;/a&gt;&lt;a href="https://dev.to/hackmamba/how-to-use-appwrite-cloud-database-in-your-nuxtjs-app-3e99"&gt;loud database in your Nuxt.js app&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/hackmamba/getting-started-with-appwrite-cloud-and-flutter-ao1"&gt;Getting started with Appwrite Cloud and Flutter&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>appwrite</category>
      <category>freelancers</category>
    </item>
    <item>
      <title>Building Trust in BaaS Platforms: The Essential Guide to Better Data Management</title>
      <dc:creator>Deborah Emeni</dc:creator>
      <pubDate>Mon, 27 Mar 2023 10:05:26 +0000</pubDate>
      <link>https://forem.com/hackmamba/building-trust-in-baas-platforms-the-essential-guide-to-better-data-management-ba5</link>
      <guid>https://forem.com/hackmamba/building-trust-in-baas-platforms-the-essential-guide-to-better-data-management-ba5</guid>
      <description>&lt;p&gt;In recent years, there has been a spike in the number of Backend-as-a-Service (BaaS) platforms like &lt;a href="https://appwrite.io/?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;Appwrite&lt;/a&gt;, Firebase, AWS Amplify, and so on that render many essential services, such as database management, push notifications, cloud storage, user authentication, hosting, and more. Businesses have taken advantage of BaaS platforms to handle server functionalities and backend tasks so that their applications can continue to run smoothly, saving development costs and time.&lt;/p&gt;

&lt;p&gt;As the number of BaaS platforms increases, businesses have more providers to select from. Security is a major concern for businesses that handle user data (that includes personal information like addresses, credit card information, and so on). Businesses require specific details on the critical factors to consider when selecting a BaaS provider.&lt;/p&gt;

&lt;p&gt;This article covers the essential factors businesses should consider when choosing a BaaS provider or platform, along with best practices for secure data management with BaaS providers.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview of BaaS Platforms
&lt;/h1&gt;

&lt;p&gt;BaaS platforms are cloud architectures that handle server functionalities and repetitive backend tasks for businesses. They provide Application Programming Interfaces (APIs) and Software Development Kits (SDKs) through which applications connect to and gain access to third-party cloud services. As a result, developers can focus on the client side of the applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases of BaaS Platforms in Businesses
&lt;/h2&gt;

&lt;p&gt;There are numerous BaaS use cases in various business sectors, ranging from financial tech (fintech) to gaming and healthcare. The server side of applications requires significant effort, including database management, user authentication, push notifications, and API management.&lt;/p&gt;

&lt;p&gt;Let's take a look at some of these use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Database Management&lt;/strong&gt;&lt;br&gt;
BaaS platforms offer services that store and manage business data in the cloud and allow the application to scale while synchronizing and maintaining a unified user profile. Good BaaS platforms typically provide document operations such as creating, listing, getting, updating, and deleting documents, among other things. &lt;/p&gt;
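
&lt;p&gt;To make those document operations concrete, here is a small in-memory sketch that mirrors their shape. It is illustrative only: a real BaaS database (Appwrite's Databases API, for instance) persists documents server-side, and the method names below are simplified assumptions:&lt;/p&gt;

```javascript
// In-memory sketch of the create/get/list/update/delete document
// operations a BaaS database service typically exposes.
function createCollection() {
  const docs = new Map();
  let nextId = 1;

  return {
    createDocument(data) {
      const id = String(nextId);
      nextId += 1;
      docs.set(id, { id: id, ...data });
      return docs.get(id);
    },
    getDocument(id) {
      return docs.get(id) || null;
    },
    listDocuments() {
      return Array.from(docs.values());
    },
    updateDocument(id, changes) {
      const doc = docs.get(id);
      if (!doc) return null;
      const updated = { ...doc, ...changes };
      docs.set(id, updated);
      return updated;
    },
    deleteDocument(id) {
      return docs.delete(id);
    },
  };
}

const invoices = createCollection();
const draft = invoices.createDocument({ client: 'Acme', status: 'draft' });
invoices.updateDocument(draft.id, { status: 'sent' });
console.log(invoices.getDocument(draft.id).status); // 'sent'
```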

&lt;p&gt;&lt;strong&gt;2. User Authentication&lt;/strong&gt;&lt;br&gt;
Businesses can use BaaS platforms to authenticate and manage user accounts, such as creating, registering, updating, verifying, and deleting user accounts. BaaS platforms provide services such as email/phone verification, two-factor authentication, password recovery, session creation, and more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Notifications&lt;/strong&gt;&lt;br&gt;
BaaS platforms provide cost-effective cloud infrastructures with an easy-to-use API for sending notifications (push, email, or SMS) to users. As a result, businesses can avoid the stress and time required to build and manage their own notification infrastructure. Businesses concerned about user engagement and sending real-time notification updates to their users can use BaaS platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. API Management&lt;/strong&gt;&lt;br&gt;
Most applications rely on multiple APIs to provide third-party services. Businesses can use BaaS platforms to securely create and manage their APIs, including controlling access, checking and tracking usage, setting usage quotas (rate limits), and reducing the effort and time required to build and manage the infrastructure. BaaS platforms offer various API management services, such as rate limits and a dashboard for tracking API usage and downtime.&lt;/p&gt;
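
&lt;p&gt;Rate limiting is the easiest of those services to sketch. Below is a self-contained fixed-window limiter of the kind a BaaS platform enforces per API key; the window size and limit values are illustrative assumptions, and real platforms implement this at the gateway layer:&lt;/p&gt;

```javascript
// Simple fixed-window rate limiter: each key gets `limit` requests
// per `windowMs` milliseconds. Illustrative sketch only.
function createRateLimiter(limit, windowMs) {
  const windows = new Map(); // key -> { start, count }

  return function allow(key, now) {
    const win = windows.get(key);
    if (!win || now - win.start >= windowMs) {
      windows.set(key, { start: now, count: 1 }); // start a fresh window
      return true;
    }
    if (win.count >= limit) return false; // quota exhausted for this window
    win.count += 1;
    return true;
  };
}

const allow = createRateLimiter(2, 1000); // 2 requests per second per key
console.log(allow('key-A', 0));    // true
console.log(allow('key-A', 100));  // true
console.log(allow('key-A', 200));  // false: limit reached in this window
console.log(allow('key-A', 1500)); // true: a new window has started
```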

&lt;h1&gt;
  
  
  How BaaS Platforms are Critical for Better Data Management
&lt;/h1&gt;

&lt;p&gt;BaaS platforms have improved data management for businesses due to the vital benefits they offer. Businesses have built more cost-effective, scalable, accessible, and real-time data solutions using BaaS platforms, and they rely on BaaS security measures for data backups, disaster recovery, and protection of their data.&lt;/p&gt;

&lt;p&gt;Furthermore, BaaS providers offer cross-platform support, which means they support multiple Operating Systems/tools/platforms. Businesses can integrate with multiple tools and platforms and easily migrate data from one platform to another (web to mobile) without affecting application performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Essential factors to consider when selecting BaaS platforms for businesses
&lt;/h2&gt;

&lt;p&gt;When selecting a BaaS platform, businesses must consider several important factors. Understanding these factors will enable businesses to develop trust in BaaS platforms. &lt;/p&gt;

&lt;p&gt;Let's take a closer look at some of these factors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Security measures and updates&lt;/strong&gt;&lt;br&gt;
Before choosing a BaaS platform for your business, you must consider the security procedures it employs. These procedures include the authentication, data encryption, access control, and so on that the BaaS platform employs.&lt;/p&gt;

&lt;p&gt;In the event of application downtime or disaster, you should understand how the BaaS platform handles disaster recovery to avoid losing your data. Appwrite, for example, uses &lt;a href="https://appwrite.io/docs/authentication#jwt" rel="noopener noreferrer"&gt;JSON Web Tokens&lt;/a&gt; (JWT), &lt;a href="https://appwrite.io/docs/authentication#login" rel="noopener noreferrer"&gt;social logins&lt;/a&gt;, &lt;a href="https://appwrite.io/docs/keys" rel="noopener noreferrer"&gt;API keys&lt;/a&gt;, and &lt;a href="https://appwrite.io/docs/authentication#oauth" rel="noopener noreferrer"&gt;OAuth2&lt;/a&gt; for authentication and authorization.&lt;/p&gt;
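
&lt;p&gt;To see what a JWT actually carries, the sketch below builds a demo token and decodes its payload. The claims and the fake signature are assumptions for demonstration; note that decoding is not verification, and the platform must always verify the signature server-side:&lt;/p&gt;

```javascript
// A JWT is three base64url-encoded segments: header.payload.signature.
// This decodes the payload of a demo token; it does NOT verify it.
function decodeJwtPayload(token) {
  const segments = token.split('.');
  if (segments.length !== 3) throw new Error('not a JWT');
  const payloadJson = Buffer.from(segments[1], 'base64url').toString('utf8');
  return JSON.parse(payloadJson);
}

// Build a demo token (header and payload only; the signature is fake).
const header = Buffer.from(JSON.stringify({ alg: 'HS256', typ: 'JWT' }))
  .toString('base64url');
const payload = Buffer.from(
  JSON.stringify({ sub: 'user-123', exp: 1700000000 })
).toString('base64url');
const token = header + '.' + payload + '.fake-signature';

console.log(decodeJwtPayload(token).sub); // 'user-123'
```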

&lt;p&gt;&lt;strong&gt;2. Data Transparency&lt;/strong&gt;&lt;br&gt;
Data is intricate and should be handled securely and with care when developing and managing applications. Before selecting a BaaS platform, consider its policies to handle data in terms of transparency, validation, tracking, updates, formatting, security, accessibility, ownership, and usage.&lt;/p&gt;

&lt;p&gt;You can use Appwrite to access various data access methods such as &lt;a href="https://appwrite.io/docs/databases#querying-documents" rel="noopener noreferrer"&gt;queries&lt;/a&gt;, formatting (JSON), &lt;a href="https://appwrite.io/docs/realtime" rel="noopener noreferrer"&gt;real-time&lt;/a&gt; data updates, and CRUD operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Monitoring performance&lt;/strong&gt;&lt;br&gt;
Monitoring the performance of running applications is critical, and doing so is best practice to avoid situations such as downtime. Equally important is the ability to spot any issues or errors the application encounters and to test its performance in real time. &lt;/p&gt;

&lt;p&gt;You should ensure that the BaaS platform you select can monitor application performance in real time. Appwrite provides a &lt;a href="https://appwrite.io/docs/server/health" rel="noopener noreferrer"&gt;Health API&lt;/a&gt; service for monitoring the application's performance and an &lt;a href="https://appwrite.io/docs/production#errorReporting" rel="noopener noreferrer"&gt;Error Reporting&lt;/a&gt; service for reporting performance errors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Data/Industry regulations compliance&lt;/strong&gt;&lt;br&gt;
When choosing a BaaS platform, it is critical to ensure that the BaaS complies with data regulations governing security, storage, and data collection. Ensure that the BaaS platforms have security standards certifications such as ISO/IEC 27001 and that they comply with the industry regulations where the business operates.&lt;/p&gt;

&lt;p&gt;Appwrite processes data under the General Data Protection Regulation (GDPR) and has a detailed &lt;a href="https://appwrite.io/policy/privacy" rel="noopener noreferrer"&gt;Privacy Policy&lt;/a&gt; for data retention, transfer, disclosure, and security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Scalability&lt;/strong&gt;&lt;br&gt;
As the business expands and the traffic volume spikes as the number of users increases, it is essential to verify the BaaS platform's ability to handle the increased load without affecting the application's performance. It is also critical to ensure that the BaaS platform can efficiently distribute multiple requests across multiple instances.&lt;/p&gt;

&lt;p&gt;Appwrite handles business traffic by automatically &lt;a href="https://appwrite.io/docs/production#scaling" rel="noopener noreferrer"&gt;scaling&lt;/a&gt; applications with Kubernetes and distributing incoming requests across multiple instances in a way that does not affect application performance.&lt;/p&gt;
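
&lt;p&gt;The core idea behind distributing requests across instances can be sketched with a simple round-robin picker. This is illustrative only: real platforms such as Kubernetes do this at the network layer with Services and load balancers, and the instance names below are assumptions:&lt;/p&gt;

```javascript
// Round-robin distribution: each incoming request is assigned to the
// next instance in the pool, wrapping around at the end.
function createRoundRobin(instances) {
  let next = 0;
  return function pick() {
    const instance = instances[next % instances.length];
    next += 1;
    return instance;
  };
}

const pick = createRoundRobin(['app-1', 'app-2', 'app-3']);
const assigned = ['req-a', 'req-b', 'req-c', 'req-d'].map(() => pick());
console.log(assigned); // app-1, app-2, app-3, then wraps back to app-1
```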

&lt;h2&gt;
  
  
  Security Best Practices for Businesses using BaaS platforms
&lt;/h2&gt;

&lt;p&gt;To secure data management for businesses, it's essential to know these best practices described below. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Regular security audits&lt;/strong&gt;&lt;br&gt;
It is best practice to analyze the BaaS platform's security policies and methods to confirm that they comply with security regulations (e.g., GDPR) that protect your business's data privacy and security. By conducting regular analyses, you can identify potential security risks and vulnerabilities before they are exploited.&lt;/p&gt;

&lt;p&gt;Appwrite performs regular &lt;a href="https://appwrite.io/policy/security" rel="noopener noreferrer"&gt;security audits&lt;/a&gt; using various mechanisms such as penetration testing, log analysis, and vulnerability scanners. Then, it provides a report on the results of each security audit, including the vulnerabilities discovered and ways to address them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Executing proper access control procedures&lt;/strong&gt;&lt;br&gt;
Businesses need to consider the type of access control procedures used by the BaaS platform. Some of the access control procedures include software updates, data encryption, and the type of user authentication and authorization used. &lt;/p&gt;

&lt;p&gt;Appwrite secures data for businesses by utilizing access control procedures such as &lt;a href="https://appwrite.io/docs/authentication#jwt" rel="noopener noreferrer"&gt;JSON Web Tokens&lt;/a&gt; (JWT) for authentication and Hypertext Transfer Protocol Secure (HTTPS) for encryption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Using secure APIs&lt;/strong&gt;&lt;br&gt;
To avoid a breach of sensitive business data, it is best practice to ensure that the application's underlying APIs are secure. The BaaS platform should use standard authentication methods; Appwrite, for example, uses &lt;a href="https://appwrite.io/docs/client/account#accountCreateOAuth2Session" rel="noopener noreferrer"&gt;OAuth&lt;/a&gt; and multifactor authentication, together with monitoring logs and validation, to secure its APIs and report security issues.&lt;/p&gt;

&lt;p&gt;Appwrite secures APIs with various tools, including regular software updates, &lt;a href="https://appwrite.io/docs/certificates" rel="noopener noreferrer"&gt;SSL/TLS encryption&lt;/a&gt;, &lt;a href="https://appwrite.io/docs/server/health" rel="noopener noreferrer"&gt;validation&lt;/a&gt;, &lt;a href="https://appwrite.io/docs/models/log" rel="noopener noreferrer"&gt;monitoring logs&lt;/a&gt;, &lt;a href="https://appwrite.io/docs/rate-limits" rel="noopener noreferrer"&gt;rate limiting&lt;/a&gt;, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Regular security updates&lt;/strong&gt;&lt;br&gt;
Typically, BaaS platforms send notification alerts to users when a security update is required. Pay attention to any BaaS platform alerts to ensure that your software is up to date. This decreases the risk of security vulnerabilities in your application. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Evaluating third-party security&lt;/strong&gt;&lt;br&gt;
Third-party services may introduce security vulnerabilities into a business application. Before integrating a third-party service into your application, ensure that it meets security standards. Ensure that its source is trusted, and perform a proper evaluation, which may include going through the documentation in search of any known vulnerabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we explored an overview of BaaS platforms such as Appwrite, their use cases in businesses, why they are critical for better data management, the essential factors to consider when selecting one, and security best practices for businesses using them. Appwrite is a solid BaaS platform, and you can get started with it quickly. &lt;/p&gt;

&lt;h2&gt;
  
  
  Resource
&lt;/h2&gt;

&lt;p&gt;Check out &lt;a href="https://appwrite.io/docs/?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;Appwrite Documentation&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A Beginner's Introduction to DevOps</title>
      <dc:creator>Deborah Emeni</dc:creator>
      <pubDate>Mon, 27 Mar 2023 09:57:01 +0000</pubDate>
      <link>https://forem.com/deborahemeni1/a-beginners-introduction-to-devops-34p5</link>
      <guid>https://forem.com/deborahemeni1/a-beginners-introduction-to-devops-34p5</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;We live in a society where customer satisfaction and application performance are critical to the success of any software product designed for human consumption. Since applications are built by people, they are prone to errors that may affect code quality. As a result, necessary updates and changes may need to be integrated into the software in real time without interfering with the application's performance.&lt;/p&gt;

&lt;p&gt;Many organizations have realized in recent years that using the set of practices involved in DevOps is the key to a faster release cycle of quality software, including collaboration between Developers and Operations teams.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This article will teach you about DevOps, the Software Development Life Cycle (SDLC), its limitations, and the advantages of DevOps over the traditional system. You'll also learn DevOps practices, terminology, and case studies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;What is DevOps?&lt;/li&gt;
&lt;li&gt;Understanding the SDLC&lt;/li&gt;
&lt;li&gt;Limitations of the Traditional SDLC&lt;/li&gt;
&lt;li&gt;Benefits of DevOps over the traditional system&lt;/li&gt;
&lt;li&gt;Set of Practices and Terms used in DevOps&lt;/li&gt;
&lt;li&gt;Case Studies: Examples of Companies implementing DevOps practices (Google, Microsoft, Netflix, Amazon)&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;li&gt;Resources&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is DevOps?
&lt;/h2&gt;

&lt;p&gt;DevOps is a set of practices that fosters collaboration between IT teams and software developers to automate the operations involved in both development and IT to achieve faster software deployment and code quality.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;I know, that's a lot of buzzwords, but don't worry, you'll understand better as you read on&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You must first understand the Software Development Life Cycle process before diving into DevOps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the SDLC
&lt;/h2&gt;

&lt;p&gt;The Software Development Life Cycle comprises stages that follow a sequential pattern. These stages are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Planning Stage — This is where the project's scope (features and functions) and everything needed to develop the project are defined.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Analysis Stage — This is where we analyze the requirements that have been gathered, which could be the components (i.e., the building blocks that make up the system): the hardware, software, data exchange mechanisms, the data itself, the interfaces that promote interaction between components and sub-systems, and the network infrastructure for running and deploying the application. We then define their use cases and apply our findings to conceptualize the system architecture (for example, the system architecture for an e-learning platform), which defines how the system components will collaborate to achieve the project's end goal.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Design Stage — The next stage is to create the detailed design from the software architecture, which includes the user interface and database schema.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implementation Stage — This is where the integration of the designed components happens: the writing of the code and unit tests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Testing Stage — This is the stage where the software is tested to find errors or bugs before production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deployment Stage — This is the stage at which the application is made available for users to install.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Maintenance Stage — This is where the software is monitored to detect any problems, fix them, or make necessary updates.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The stages listed above can be carried out using different methodologies, such as the Waterfall model (which employs a sequential approach) and the Agile methodology (which uses an iterative approach). &lt;/p&gt;

&lt;p&gt;The Waterfall model requires completion of each stage before moving on to the next, whereas Agile focuses on continuous feedback and communication between a team of developers and customers to ensure that the software developed meets the needs and requirements of the customers.&lt;/p&gt;

&lt;p&gt;I'll emphasize the Agile methodology because it is the method DevOps uses to carry out the SDLC. What kind of practices happen in the Agile methodology?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continuous planning and prioritization of work&lt;/li&gt;
&lt;li&gt;Continuous integration and testing of code&lt;/li&gt;
&lt;li&gt;Continuous delivery of working software&lt;/li&gt;
&lt;li&gt;Regular retrospectives to evaluate and improve the process&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Limitations of the Traditional SDLC
&lt;/h2&gt;

&lt;p&gt;The Waterfall model was the traditional SDLC methodology before DevOps, and it has several limitations, including the following:&lt;/p&gt;

&lt;p&gt;Lack of flexibility: When changes are made to the software later on, there is no going back to make any updates because the model uses a linear approach. It would mean restarting the entire SDLC from scratch.&lt;/p&gt;

&lt;p&gt;Lack of customer involvement: When developing a product for human consumption, including customers in the SDLC process is critical to ensure that the product meets the customer's needs. &lt;strong&gt;I mean, what's the point of a product that customers don't find useful, right?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Slow development: The Waterfall model's sequential structure makes it much slower to develop software and make it available to users.&lt;/p&gt;

&lt;p&gt;High Cost: With the sequential approach to SDLC, a significant amount of testing, which is costly, will be required because there is no going back if an error occurs in the future.&lt;/p&gt;

&lt;p&gt;Because of these limitations, many organizations have shifted to an iterative approach to SDLC management. DevOps, on the other hand, employs a flexible and iterative approach — the Agile methodology.&lt;/p&gt;

&lt;p&gt;Let's look at some of the advantages DevOps provides to compensate for the limitations of traditional SDLC.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of DevOps over the traditional system
&lt;/h2&gt;

&lt;p&gt;DevOps has several benefits compared to the traditional SDLC. They include the following:&lt;/p&gt;

&lt;p&gt;Flexibility: DevOps is designed to be flexible, compensating for the lack of flexibility caused by traditional SDLC methodologies. The speed with which DevOps responds to and updates software demonstrates flexibility.&lt;/p&gt;

&lt;p&gt;Collaboration: DevOps encourages teamwork and collaboration between development and operations teams.&lt;/p&gt;

&lt;p&gt;Quality: DevOps enhances code quality through practices such as automated testing and infrastructure as code.&lt;/p&gt;

&lt;p&gt;Customer satisfaction: DevOps allows customers to participate in the SDLC process, resulting in software that meets the customer's needs.&lt;/p&gt;

&lt;p&gt;Cost: DevOps significantly reduces costs by employing an iterative rather than a sequential approach, which typically necessitates rework.&lt;/p&gt;

&lt;p&gt;Faster release cycles: DevOps reduces the time it takes to release software by automating collaboration between development and operations teams.&lt;/p&gt;

&lt;p&gt;Next, let's look at some of the DevOps practices and terminologies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set of Practices and Terms used in DevOps
&lt;/h2&gt;

&lt;p&gt;As you learn DevOps, you will come across these terms, and it is good practice to understand what they mean and how they work. Let's look at some of them:&lt;/p&gt;

&lt;p&gt;Continuous Integration &amp;amp; Continuous Delivery (CI/CD): DevOps ensures that every piece of code tested by developers is integrated as soon as possible, and that software changes are automatically built, tested, and deployed to production.&lt;/p&gt;
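
&lt;p&gt;A CI/CD pipeline is essentially a sequence of stages that stops at the first failure, so a broken build or failing test never reaches deployment. The sketch below models that gating behavior; the stage names and results are illustrative assumptions, and real pipelines are defined in tools such as Jenkins or GitHub Actions rather than application code:&lt;/p&gt;

```javascript
// Minimal sketch of a CI/CD pipeline: run stages in order and stop at
// the first failure, so deployment is gated on build and tests.
function runPipeline(stages) {
  const results = [];
  for (const stage of stages) {
    const ok = stage.run();
    results.push({ name: stage.name, ok: ok });
    if (!ok) break; // fail fast: later stages never run
  }
  return results;
}

const results = runPipeline([
  { name: 'build', run: () => true },
  { name: 'test', run: () => false }, // a failing unit test
  { name: 'deploy', run: () => true },
]);

console.log(results.map((r) => r.name)); // only 'build' and 'test' ran
```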

&lt;p&gt;Infrastructure as Code (IAC): To configure and manage infrastructures, DevOps employs tools such as Ansible, Puppet, and Chef.&lt;/p&gt;

&lt;p&gt;Automated Testing: DevOps automates testing such as unit testing, integration testing, and acceptance testing.&lt;/p&gt;

&lt;p&gt;Monitoring and Logging: DevOps also monitors the system's performance to identify any problems or errors affecting the system's overall performance.&lt;/p&gt;

&lt;p&gt;Collaboration and Communication: DevOps promotes teamwork and communication between development and operations teams.&lt;/p&gt;

&lt;p&gt;Agile Methodologies: DevOps employs Agile methodologies such as Scrum to respond to software changes and provide continuous customer feedback.&lt;/p&gt;

&lt;p&gt;Automation Tools: I've mentioned several times that DevOps uses automation. Several tools, including Jenkins, Docker, and Kubernetes, automate the process.&lt;/p&gt;

&lt;p&gt;Version Control: A code change management system that allows developers to track changes and collaborate with others.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before learning a new technology, it is critical to understand real-world applications of the technology. It provides a better idea of what you can create with that technology!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, we'll look at some real-world DevOps use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Case Studies: Examples of Companies implementing DevOps practices
&lt;/h2&gt;

&lt;p&gt;These organizations use DevOps practices such as Continuous Integration and Delivery, automated testing, and containerization to easily and quickly deliver new services and features, as well as high-quality software:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Google&lt;/li&gt;
&lt;li&gt;Microsoft&lt;/li&gt;
&lt;li&gt;Netflix&lt;/li&gt;
&lt;li&gt;Amazon&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we explored what DevOps is, the limitations of the traditional SDLC, and the benefits DevOps offers. We also covered the set of practices used in DevOps and showed some real-world case studies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;You may find the following resources useful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continuous Integration &amp;amp; Continuous Delivery (CI/CD)&lt;/li&gt;
&lt;li&gt;Infrastructure as Code (IAC)&lt;/li&gt;
&lt;li&gt;Automated testing&lt;/li&gt;
&lt;li&gt;Monitoring and Logging&lt;/li&gt;
&lt;li&gt;Collaboration and Communication&lt;/li&gt;
&lt;li&gt;Agile Methodologies&lt;/li&gt;
&lt;li&gt;Automation Tools&lt;/li&gt;
&lt;li&gt;Version Control&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Picking the Right Backend for your AI Project</title>
      <dc:creator>Deborah Emeni</dc:creator>
      <pubDate>Mon, 30 Jan 2023 19:19:25 +0000</pubDate>
      <link>https://forem.com/hackmamba/picking-the-right-backend-for-your-ai-project-51kp</link>
      <guid>https://forem.com/hackmamba/picking-the-right-backend-for-your-ai-project-51kp</guid>
      <description>&lt;p&gt;This article was originally posted on &lt;a href="https://hackmamba.io/blog/2023/01/picking-the-right-backend-for-your-ai-project/" rel="noopener noreferrer"&gt;Hackmamba&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One of the primary considerations when developing an application is choosing the right infrastructure. A standard application that provides services to users would require a proper backend infrastructure to handle user data, security, servers, and APIs.&lt;/p&gt;

&lt;p&gt;Backend developers must also find the right tools for each project, including programming languages, servers, databases, frameworks, etc. There are numerous advantages to selecting the right backend infrastructure for a project, and it will have a significant impact on the user experience and the project's success and growth.&lt;/p&gt;

&lt;p&gt;This article will teach you about the various backend infrastructure types suitable for an AI project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview of Backend Infrastructures
&lt;/h2&gt;

&lt;p&gt;Backend infrastructures comprise all the components and tools required to build the backend architecture of your AI application. They handle backend services such as storage, authentication, databases, and security, allowing you to focus on your application's frontend functionality.&lt;/p&gt;

&lt;p&gt;Your AI application's foundation is based on your backend infrastructure, which can vary in type depending on your AI application's requirements while considering factors such as cost-effectiveness, budget, bandwidth, volume of work, project type, project timeline, or scalability.&lt;/p&gt;

&lt;p&gt;To choose the best backend infrastructure for your project, you must weigh the pros and cons of each backend infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases of Backend Infrastructures
&lt;/h2&gt;

&lt;p&gt;There are several use cases of backend infrastructures for your project, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Maintenance&lt;/strong&gt;: Backend infrastructures handle the backend maintenance of your application, from updates to bug fixes, saving you the stress of doing it yourself and helping prevent downtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;: The safety of your users' data should be a top priority. Backend infrastructures can protect your servers and databases from cyber-attacks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Speed and Scalability&lt;/strong&gt;: Backend infrastructures provide the speed that improves user experience, and they can scale your application as its traffic volume grows.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Infrastructure Requirements for an AI Project
&lt;/h1&gt;

&lt;p&gt;The performance of your AI project depends heavily on your infrastructure, which handles much of the workload, such as deep learning and other algorithms. Choosing an infrastructure for an AI project depends on the following requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Storage Capacity&lt;/strong&gt;: The amount of data generated, or real-time data required, varies with the scope of your AI project, and your database will grow as the project grows. The volume of data your project generates should therefore influence your infrastructure selection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;: Security is a vital requirement when choosing an infrastructure. AI project inferences depend heavily on user data, so consider risks like data breaches and bad data, which could result in incorrect inferences. Your infrastructure should provide a high level of security for your AI project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Computing Capability&lt;/strong&gt;: AI projects require high computing power and speed to process large amounts of data across algorithms, machine learning, and neural networks. The infrastructure should provide that power, such as CPUs for general workloads and cloud-based GPUs (Graphics Processing Units) for deep learning.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost-effective solutions&lt;/strong&gt;: As the complexity of AI projects grows, so does the cost of managing them and the demand for infrastructure services such as storage, servers, and networks. Consider the long-term cost-effectiveness of your infrastructure before selecting it for your AI project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Networking Infrastructure&lt;/strong&gt;: As your AI project's deep learning workloads expand, they will be spread across multiple containers that depend heavily on communication. Your infrastructure should provide a viable, scalable networking service, taking into account network speed, high bandwidth, reliability, and low latency.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Types of Backend Infrastructure Suitable for AI Projects
&lt;/h1&gt;

&lt;p&gt;Cloud infrastructures, including hybrid clouds, are the typical foundation of AI projects because they offer the flexibility and adaptability these projects require. As a project scales, the workload and the volume of data increase, and cloud infrastructures can meet those demands at a reasonable cost without degrading performance. These infrastructures include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Infrastructure-as-a-Service (IaaS)&lt;/li&gt;
&lt;li&gt;Platform-as-a-Service (PaaS)&lt;/li&gt;
&lt;li&gt;Backend-as-a-Service (BaaS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s look at each of these infrastructures in detail.&lt;/p&gt;

&lt;h2&gt;
  
  
  IaaS
&lt;/h2&gt;

&lt;p&gt;IaaS is a cloud infrastructure service that provides resources such as networking, computing, and storage. These resources help you cut costs by letting you pay for only what your AI project needs. IaaS providers like &lt;a href="https://azure.microsoft.com/" rel="noopener noreferrer"&gt;Azure&lt;/a&gt; and &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS&lt;/a&gt; support AI projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros and Cons of IaaS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;It allows you to migrate your project to the cloud and eliminate the need to manage your backend on premises.&lt;/li&gt;
&lt;li&gt;Provides services that can scale and handle a large workload depending on your project's requirements without affecting application performance.&lt;/li&gt;
&lt;li&gt;The infrastructure configuration is less transparent and visible, making it harder to monitor.&lt;/li&gt;
&lt;li&gt;Downtime at an IaaS provider can impact your project's workload.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  PaaS
&lt;/h2&gt;

&lt;p&gt;PaaS is an infrastructure composed of storage, servers, development tools, and networks that supports the entire application life cycle, from development and testing to deployment and updating. As with IaaS, you pay for only the resources you use as you continue on the platform. &lt;a href="https://cloud.google.com/appengine" rel="noopener noreferrer"&gt;Google App Engine&lt;/a&gt; is a PaaS suitable for AI projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros and Cons of PaaS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Provides free access to various project testing options, such as operating systems, languages, databases, and development tools.&lt;/li&gt;
&lt;li&gt;Provides services that enable provisioning, application development, and easy collaboration among developers.&lt;/li&gt;
&lt;li&gt;Migrating from one PaaS provider to another is complex because the project depends heavily on the provider's platform.&lt;/li&gt;
&lt;li&gt;Data security can be at risk because the entire project lives in the cloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  BaaS
&lt;/h2&gt;

&lt;p&gt;BaaS is an infrastructure that handles an application's backend, or server side, providing pre-built services such as authentication, databases, hosting, and cloud storage, so developers can focus on building the application's front end.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros and Cons of BaaS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;BaaS can seamlessly handle the backend of your application.&lt;/li&gt;
&lt;li&gt;It saves the time and money that would otherwise be spent on backend developers.&lt;/li&gt;
&lt;li&gt;As data and project scope grow, infrastructure costs can skyrocket.&lt;/li&gt;
&lt;li&gt;Migrating from one BaaS provider to another is difficult because the backend is built on the provider's architecture.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Appwrite for AI Projects
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://appwrite.io/?utm_source=hackmamba&amp;amp;utm_medium=hackmamba-blog" rel="noopener noreferrer"&gt;Appwrite&lt;/a&gt; is an open-source BAAS that offers a set of easy-to-use APIs for outstanding backend infrastructure services, making it suitable for AI projects. Let's take a look at some of these services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Authentication
&lt;/h2&gt;

&lt;p&gt;Appwrite offers an &lt;a href="https://appwrite.io/docs/authentication" rel="noopener noreferrer"&gt;authentication service&lt;/a&gt; for creating, updating, retrieving, and authenticating user accounts, as well as built-in integration with multiple OAuth providers, including GitHub and Google. This service lets you easily manage user registration and login for your applications.&lt;/p&gt;
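&lt;p&gt;As a rough illustration, the account-creation call this service exposes can be sketched against Appwrite's REST API. The &lt;code&gt;/v1/account&lt;/code&gt; path and &lt;code&gt;X-Appwrite-Project&lt;/code&gt; header follow Appwrite's documented conventions, but the helper names and exact parameters below are assumptions for this sketch; verify them against the linked documentation.&lt;/p&gt;

```javascript
// Hypothetical sketch of registering a user against Appwrite's REST API.
// buildCreateAccountRequest is an illustrative helper, not part of any SDK:
// it only builds the request, so the shape of the call is easy to inspect.
function buildCreateAccountRequest(endpoint, projectId, { userId, email, password }) {
  return {
    url: `${endpoint}/v1/account`,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Appwrite-Project': projectId, // identifies your Appwrite project
      },
      body: JSON.stringify({ userId, email, password }),
    },
  };
}

// Sending it is then a single fetch (built into Node 18+).
async function createAccount(endpoint, projectId, user) {
  const { url, options } = buildCreateAccountRequest(endpoint, projectId, user);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`Appwrite error: ${res.status}`);
  return res.json();
}
```

&lt;p&gt;In practice, Appwrite's own Web or server SDKs wrap calls like this for you; the sketch just shows how little backend code the service leaves you to write.&lt;/p&gt;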

&lt;h2&gt;
  
  
  Database
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://appwrite.io/docs/client/databases" rel="noopener noreferrer"&gt;database service&lt;/a&gt; enables you to manage your data collection in a structured or flexible manner. Each document can have multiple child components with functionality that allows you to set permissions for users and teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;Appwrite supports several encryption methods that deliver &lt;a href="https://appwrite.io/policy/security" rel="noopener noreferrer"&gt;security services&lt;/a&gt; for your project, providing end-to-end protection and helping prevent cyber-attacks on your application's data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Storage
&lt;/h2&gt;

&lt;p&gt;The Appwrite &lt;a href="https://appwrite.io/docs/storage" rel="noopener noreferrer"&gt;Storage service&lt;/a&gt; lets you manage file uploads and downloads, with the ability to assign read or write permissions through access control.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Choosing the right backend infrastructure is critical to the growth and performance of an AI project. In this article, you learned what backend infrastructures are, their use cases, the infrastructure requirements of an AI project, the types suitable for AI projects, and the Appwrite services useful for building them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;Check &lt;a href="https://appwrite.io/docs?utm_source=hackmamba&amp;amp;utm_medium=blog&amp;amp;utm_campaign=hackmamba" rel="noopener noreferrer"&gt;Appwrite Documentation&lt;/a&gt; for more information on getting started with its services.&lt;/p&gt;

</description>
      <category>cryptocurrency</category>
      <category>crypto</category>
      <category>blockchain</category>
      <category>web3</category>
    </item>
  </channel>
</rss>
